The Foundation of "How to MAKE MUSIC so your player's ears don't bleed" Ultimate 101 Tutorial/Class

PREAMBLE
Hello there, thanks for clicking on this thread.
I have been considering making this thread for a long time now. My biggest fear isn't that people will see me as a gatekeeping, pompous music jerk, but that I'll spend who knows how many hours writing this tutorial and in the end no one will care.
(author's note: come back to this section and insert hours spent making this tutorial here: [____])

So why should you listen to me? Who am I?
In short: a washed-up musician who never found fame, nor fortune, nor success - if you measure success in follower counts. Wake up call: that's the 99%. Being part of the 99% doesn't disparage the work I've accomplished through my 12+ years (and counting) of producing music, plus an additional 6 years as a music student.
As is the case with many in the 99% (and many more in the 1%), my music production skills were largely self-taught through the fire and flames of KVRaudio, which will be an important resource for your music journey ahead. My music theory knowledge is fairly shallow, though what I have came from my years as a music student as well as from my father, who was raised as a piano prodigy from a young age. If I come off as a gatekeeper on what good music is, I blame KVR for instilling the behavior during my formative years as a musician.
While gatekeeping is primarily just poor etiquette, it does help keep the bar for quality high. That's not to excuse gatekeeping, but in an industry where anyone can just pirate their favorite DAW and become a musician overnight, making sure we keep the bar high is kind of important. (I just wish we could learn to stop being a$$holes about it.)
Who is this for?
This tutorial is for anyone who is just getting started with computer music - whether you have never touched anything musical before, or you've always played an instrument but have never explored the world of computer music - as well as folks who only have a couple years of largely self-taught experience. If you are primarily self-taught, then the resources provided in this thread will still likely be valuable to you nearly in full. If you have gone to university for music production or have already spent 5+ years doing computer music, most of this information is likely not going to be relevant to you. But I always encourage people to read anyway, because at the end of the day, no matter how much experience you have, you still don't know everything.
What do you cover in this course?
This course is going to be primarily focused on the software side of computer music. That said, just because it's called computer music doesn't mean it is entirely software driven - even some of the most basic setups still require some hardware, such as monitors, monitor headphones, and an audio interface. But since this course is for those who are still in their early years of computer music, I am only going to be talking about software, with maybe only light references to hardware.

  • In the course I will introduce common jargon used in the industry, such as DAW, clipping, and frequency spectrum so that you can feel prepared to watch tutorials after reading this course, and can use this course as reference material in the event you forget some of the jargon (happens to the best of us - even while writing this a couple of terms slipped my mind)
  • I will also go over some of the major DAWs - the software you'll be using for computer music - so that you can decide what DAW is best for you, or if you already have a DAW consider the different options if you wish to change or expand the tools you are using.
  • I will cover the types of virtual instruments commonly used for computer music, including a brief discussion on the many types of synthesis, as well as give you a list of a few options, commercial and free, for you to use.
  • I will cover the MIDI Roll and the many aspects of MIDI to improve your music performance.
The on-going spoiler tag bug:
For some reason, the spoiler tag button in the post editor is causing problems - probably because I use and abuse it a lot throughout this thread and the thread itself is incredibly long. The first bug was the post editor inserting random invisible spoiler tags (yes, even toggling the "BB code" view did not show these invisible spoiler tags). I was able to resolve this by posting this thread (before it was finished) and refreshing the webpage. After refreshing and clicking edit, suddenly all of the hidden spoiler tags were visible, and there were a lot of them! The whole post had been infested with
Code:
[SPOILER][/SPOILER]
and nothing in between those tags. Sometimes the random placement of these invisible spoiler tags would actually insert at the start and end of some text, thus putting random portions of this thread into spoiler tags.
For the most part, posting-refreshing-and-editing has fixed that. However, there is a new problem, and that is the invincible spoiler tag.
[Image: SpoilerTags.png]
Some of the invisible spoiler tags that infested this post seem to have metastasized into an invincible variant. No matter how many times I click edit, delete the spurious spoiler tag, then save the post, the spoiler tag comes back. I hope that with some time and some refreshes I will eventually be able to remove these strange spoiler tags for good. In the meantime, I just wanted to directly acknowledge this problem and also apologize for it. I hope site admins see this and can get to the root of the cause :rswt Going forward with this post, I will manually type out the spoiler tags rather than relying on the "insert spoiler" button in the editor. Hopefully this will reduce the strange infestation of spoiler tags.

Before we begin: throughout this post I will be sampling, editing, and altering a song I just finished in order to demonstrate different effects. Give it a listen so you are familiar with what the song sounds like unaltered.
The Terms YOU NEED to Know
This section is not optional. There is going to be a lot of terminology thrown around in this thread. I would advise reading it now to get the basic gist of things, and as terms appear later in the post, refer back to this section as needed.

DAW - DAW stands for "Digital Audio Workstation". Pronunciation is sort of like the gif/gif discussion: is it pronounced DAW? Or is it pronounced DAW? Heated debates have raged since the earliest introduction of DAWs as "portable" all-in-one pieces of hardware, such as the Korg Triton. One thing is for certain: debating pronunciation through text is about as pointless as debating whether a file format is pronounced with a j sound or a g sound. Below is a list of popular DAWs. We'll discuss these options more later in this thread.
  • Ableton Live
  • Cakewalk
  • Cubase
  • Fruity Loops Studio (all the FL users are now angry with me)
  • GarageBand (Mac only)
  • Logic Pro (Mac only)
  • Mixcraft
  • Pro Tools
  • Reaper
  • Studio One
***Since DAW is likely the most important piece of jargon to know, I have placed it at the top of the list. Below DAW all other terms have been sorted alphabetically - this is to make it easier for you to find a specific word later when you need to refer back to this section.

Automation - Automation is a DAW feature in which you can assign a parameter to be "automated" - in other words, changed over a specified span of time. For example, you may want an instrument to fade in over time. You can set the volume to -∞dB at the start of the sound (no volume) and then set its expected volume (such as -3dB) at the point you want the fade-in to reach its maximum. The DAW will draw a line from the first point to the second point and automate the volume change over that span of time.
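If you like thinking in code, here's a rough sketch of what the DAW does under the hood when it draws that fade line. This is Python with NumPy, and note one assumption: since -∞dB can't be interpolated directly, I'm standing in with -60dB as "silent enough".
Code:
import numpy as np

SR = 44100                      # sample rate in Hz
fade_seconds = 2.0
n = int(SR * fade_seconds)

# A 440Hz test tone to fade in.
t = np.arange(n) / SR
tone = np.sin(2 * np.pi * 440.0 * t)

# Ramp the volume from "silent enough" (-60dB stands in for -infinity dB)
# up to -3dB, exactly like a two-point volume automation line.
db_ramp = np.linspace(-60.0, -3.0, n)
gain = 10 ** (db_ramp / 20)     # convert dB to linear amplitude
faded_tone = tone * gain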

Clipping - Clipping is when the audio exceeds -0dB (negative 0 decibels). Why negative? That's complicated to explain, but in short: in the recording and digital space, 0 is the maximum possible volume, so we work subtractively with audio, taking volume AWAY from sound rather than adding volume to it. Above -0dB, clipping occurs, creating unwanted distortion in the audio. We also call this -0dB maximum "the ceiling", though to be clear: mastering plugins typically allow you to adjust the ceiling - something you'll want to do. Clipping is illustrated below using Venn Audio's Utility.
[Image: Clipping.png]
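For the code-minded: digital clipping is nothing more than clamping samples at the ceiling. A minimal sketch in Python/NumPy - not any particular plugin's algorithm, just the bare concept:
Code:
import numpy as np

def hard_clip(signal, ceiling_db=-0.1):
    """Clamp any sample that exceeds the ceiling (given in dB below max)."""
    ceiling = 10 ** (ceiling_db / 20)    # dB -> linear amplitude
    return np.clip(signal, -ceiling, ceiling)

# A sine wave boosted 6dB above full scale gets its peaks sheared off,
# turning it into a squarish, distorted wave.
t = np.arange(44100) / 44100
too_hot = 2.0 * np.sin(2 * np.pi * 220.0 * t)   # peaks well above the ceiling
clipped = hard_clip(too_hot)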



Dynamics - Dynamics in music generally refers to the difference in volume between the loudest parts and the quietest parts. The term dynamics can either refer to momentary dynamics (such as the peak and tail of a drum hit), or it can also refer to the dynamics across the whole song (such as the quietest part when compared to the loudest part of the song).

Effects - See "Plugin" below. Effects are software or hardware additions to existing sounds (in this tutorial, we'll only discuss software). They do not create sound, but they can enhance it. There are all sorts of audio-mutilating plugins out there, but here is a list of common effects used nearly universally on all music tracks:
  • Equalizer - Generally shortened to EQ. You've likely played with an EQ in your computer or phone's music player, or your car or home stereo system. The EQ raises or lowers specified frequency bands across the spectrum. By far, this is the #1 most important effect to use, and if you are not using multiple instances of an EQ across a project then you are doing something wrong. You can preview the EQ effect on the demonstration song in the video below. The effect has been exaggerated to make it more obvious; it is not often used to this extreme extent.
  • Compressor - This is not referring to file compression. Compressors affect the dynamics of music by lowering - or "compressing" - high dynamic peaks, either to prevent clipping or to adjust the transients of the music. In the modern age of digital music production, the compressor rivals the equalizer in necessity. In this post, nine times out of ten, if I say "compress" or "compressing" I am not referring to file compression; I am referring to lowering the transients of the music. If I am referring to file compression, I will say so. You can preview the compressor effect on the demonstration song in the video below. The effect has been exaggerated to make it more obvious; it is not often used to this extreme extent. (volume warning)
  • Clipper & Distortion - Distortion plugins are sometimes called "exciters". Distortion effects typically process the incoming signal in various ways: adding new harmonics, clipping the audio, overly compressing the audio, among various other ways of adding unique character to the tone. A guitar amp is an example of a distortion unit. Clippers likewise fall somewhat close to a distortion unit. Distortion units tend to do many things to mangle the audio and "excite" it; clippers only do one thing: clip the incoming audio. Clipping means to cut. With a clipper, you can lower a threshold, and any audio that exceeds that threshold will be clipped. This will introduce distortion to the audio, but may be useful for clearing spurious transients or gaining some valuable headroom. You can preview the clipper effect on the demonstration song in the video below. The effect has been exaggerated to make it more obvious; it is not often used to this extreme extent.
  • Delay - Delay is what most people would call an "echo effect". In a sense, a delay and a reverb plugin are exactly the same, other than the difference in time between echoes. Delay effects typically have long gaps of time between each echo - think of the stereotypical idea of shouting from a mountain top: "Hello!" [hello hello hello]. Some delay plugins allow you to create reverb effects by setting their diffusion and time to very low values (there's also a bare-bones delay sketched in code just after this list). You can preview the delay effect on the demonstration song in the video below. The effect has been exaggerated to make it more obvious. Sometimes it may be used like this.
  • Modulation Effects - Modulation comes in 3 common forms: chorus, phaser, and flanger. (This isn't a gif/gif debate; it's flanger, with a j sound.) Modulation effects modulate the audio in various ways. Chorus effects duplicate the audio and oscillate the pitch of the copies over a given time. The number of duplicates can typically be increased in powers of 2 up to 16, but the actual number depends on the plugin. The result sounds like a "chorus" of singers/guitars/whatever instrument it has been applied to. Phasers oscillate a peak in frequencies over a given time. The number of peaks can be increased based on what the plugin allows, from 1 peak to thousands (though thousands is pretty rare). As these peaks sweep across the spectrum, it creates what can only be described as "phasing" audio. At high speeds it can almost sound like a sci-fi blaster (a "phaser", maybe? :LZSskeptic:). Flangers take multiple copies of the audio and put them out of sync with each other by mere milliseconds, and the time between the duplicates oscillates over time. The number of duplicate tracks depends on the plugin and can be changed from 1 to hundreds. It's difficult to describe what a flanger sounds like, so the audio demo for modulation effects uses the flanger. The effect has been exaggerated to make it more obvious.
  • Reverb - Reverb (never called reverberation) is how sound diffuses out into long tails when you're standing in a large concert hall or an empty room with lots of hard, reflective surfaces. Reverb is typically used to create a sense of space in the music and is usually a vital tool, just behind EQs and compressors. You can preview the reverb effect on the demonstration song in the video below. The effect has been exaggerated to make it more obvious. Sometimes it may be used like this.
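To make one of these concrete, here is the delay boiled down to a bare-bones feedback loop in Python/NumPy: each echo is just the signal from delay_ms ago, fed back at reduced volume. Real delay plugins add filtering, stereo spread, and diffusion on top of this, but the core idea really is this simple.
Code:
import numpy as np

def feedback_delay(dry, sr=44100, delay_ms=350.0, feedback=0.4, mix=0.5):
    """Echo effect: each repeat is the signal from delay_ms ago, quieter."""
    d = int(sr * delay_ms / 1000)               # delay time in samples
    buf = np.zeros(len(dry) + d)                # delayed + fed-back signal
    out = np.zeros(len(dry))
    for i in range(len(dry)):
        echo = buf[i]                           # what happened delay_ms ago
        buf[i + d] = dry[i] + echo * feedback   # feed back, quieter each pass
        out[i] = (1 - mix) * dry[i] + mix * echo
    return out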

Frequency & Frequency Spectrum - The frequency spectrum in music refers to the audible range humans can hear, which we usually round to 20 hertz to 20 kilohertz (or 20Hz to 20kHz). To be clear, these ranges 1) vary person to person - it's worth keeping in mind that everyone's ears were designed while the creator was on a drinking bender - and 2) tend to narrow as you get older. We'll discuss this more later in the post. Frequency refers to a specific hertz value on the spectrum.
[Image: Spectrum.png]
To the right is the Frequency Spectrum as seen through M EQ.



Headroom - Headroom refers to the space between the loudest peak of your music and the ceiling of your music (by default, the ceiling will be at -0dB but can be lowered). Ideally, before mastering, you want to keep a reasonable amount of headroom, and even after mastering, it's important to keep some headroom between your master limiter and the -0dB maximum volume for post-processing.
Below is my master limiter with the ceiling lowered to -1dB, as seen from Ozone Maximizer.

[Image: Ceiling.png]
Instrument Part/Take - Instrument parts (sometimes called takes) are the chunks of audio or MIDI data that populate your audio track.

[Image: InstrumentPart.png]
LFO - LFO stands for Low Frequency Oscillator (but to make all the music gatekeepers mad, I typically call it a Long Frequency Oscillator). LFOs are more related to synthesis sound design, and are therefore mostly outside the scope of this tutorial, but in short: an LFO is a separate oscillator used to modulate synthesizer parameters.

Mastering - Mastering refers to processing the audio in full, including all present effects, instruments, and tracks/stems. Editing occurs on the "Master" track, where effects are applied globally.

MIDI - What MIDI stands for is really not important, but for the sake of posterity, it stands for "Musical Instrument Digital Interface". Some may think of MIDI as the culprit behind some heinous ringtones from before modern smartphones, but the truth is that MIDI has nothing to do with that. MIDI is just data. MIDI alone does not produce any sound. Most music today is written in MIDI, which allows for very versatile portability of music data across projects. MIDI contains hundreds of different functions, but the most commonly used are: note data, velocity data, pitch data, modulation data, and pedal data. Continuous controls like modulation and pedal data are assigned to a MIDI "Control Change", or CC - for example, mod wheel data is always assigned to CC01. Knowing the CC is important when designing sounds in synthesizers, so the synthesizer knows what to do with the data being passed to it from the MIDI roll.
[Image: MIDIroll.png]
This is the MIDI roll as seen in the Reaper DAW with the velocity lane active.
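Since MIDI is just data, it's easy to see in code. Here's what those messages look like using Python's third-party mido library (pip install mido) - any MIDI library will show you roughly the same fields:
Code:
import mido

# Mod wheel data is always CC01: a control_change message on controller 1.
mod_wheel = mido.Message('control_change', control=1, value=96)

# Note data carries pitch and velocity: middle C (note 60) at a firm velocity.
note_on  = mido.Message('note_on',  note=60, velocity=100)
note_off = mido.Message('note_off', note=60, velocity=0)

# No sound is produced here - these are just instructions for a synth.
print(mod_wheel)
print(note_on)
print(note_off)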

Mixing - Mixing is the process of getting all instruments and sounds to sit well together so that all sounds can be heard as the composition is intended, using effects and adjusting volume to achieve the intended listening experience.

Modulate - Modulation refers to the process of changing a parameter over a span of time. For example, you may wish to modulate the volume of a synth so that the volume rapidly goes up and down over time. You can modulate the volume using an oscillator or using automation.

[Image: waveforms.png]
Oscillator - An oscillator generates a repeating wave, which can be used to produce or modulate sound. Your common wave types are sine, saw, square, and triangle. These waves are used to create sound in synthesizers, make changes to synth parameters via an LFO, or modulate effect parameters. Please see the image to the right.
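Here's a little Python/NumPy sketch that generates all four common wave shapes and then uses the sine as an LFO to wobble a tone's volume - tying together the Modulate, LFO, and Oscillator entries above:
Code:
import numpy as np

SR = 44100
t = np.arange(SR) / SR          # one second of time values
f = 2.0                         # 2Hz: slow enough to use as an LFO

sine     = np.sin(2 * np.pi * f * t)
square   = np.sign(sine)                          # hard-switched sine
saw      = 2 * (f * t - np.floor(f * t + 0.5))    # ramps -1..1, then jumps
triangle = 2 * np.abs(saw) - 1                    # folded saw

# Used as an LFO: modulate a 440Hz tone's volume with the slow sine wave.
tone = np.sin(2 * np.pi * 440.0 * t)
tremolo = tone * (0.5 + 0.5 * sine)   # volume wobbles between 0 and 1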

Post Processing - This is a bit of "unofficial" terminology I will use to refer to any additional processing that takes place after mastering. This is typically advanced file processing, such as dithering (mainly just dithering). Dithering is a complicated subject that I'm not equipped to demonstrate well, so we won't go into it here, but in short: dithering is the process of adding subtle noise to your track AT RENDER to reduce artifacts that may be caused by compression or the rendering process, in order to maintain high fidelity.

Track & Stem - Track refers to an individual instrument layer or "track" inside your DAW, and includes the instrument - whether recorded or virtual - plus the automation lanes and effects associated with that specific track. Most DAWs can support an unlimited number of tracks, but back in the hardware days you'd have 8 tracks for home recording, 16 for small studios, and 32 for large studios. A stem, on the other hand, is an individual track rendered down to just that instrument. Say, for example, you have a guitar track you recorded, with some effects and automation applied. You can render JUST the guitar down to a WAV file. The resulting WAV file is not the whole song, just the guitar from that song; this WAV file is known as a stem. Stems are useful for reducing CPU load on the existing song or for transporting the song into future projects for remastering/remixing purposes.
[Image: Tracks.png]
Here you can see the various tracks that make up the demonstration song inside the Reaper DAW. (5 Koto tracks :LZSlol:)

Transients - Transients, like dynamics, refer to the loudest and quietest parts of a sound, though transients only refer to momentary dynamics. Transients are most often mentioned in reference to drums, but can apply to any momentary peak of audio. Below you see an example of a kick drum transient.
[Image: KickTransients.png]

WAV file - (Here, we are talking about file formats, so "compression" refers to file compression.) Yes, I know it seems insane to define what a WAV file is, as I am sure we all know, but considering RPG Maker is intent on using compressed file formats, some users may not be aware that 1) yes, .ogg and .m4a are compressed audio formats, and 2) what an uncompressed format is and why someone would use one. While .ogg is actually a great form of compression (and open source!), at the end of the day WAV retains all samples and all frequencies, which means the file keeps perfect integrity between projects. Compressing an already-compressed format introduces new artifacts into the audio. These artifacts depend on the type of compression, but can make music sound tinny, like it's coming through a phone speaker, or add digital "scratchy" noise, among other things. It may also reduce the frequency response. When bouncing audio files between projects, whenever possible, ALWAYS. USE. WAV. (It also just looks unprofessional to use anything else - I'm looking at you, Fiverr session musician I hired who thought a high quality ogg was "good enough".)
For this tutorial, I won't go into too much detail about file formats and compression. It's just important that I bring up WAV.

Plugin - This has nothing to do with enhancing your RPG Maker project; however, the idea is similar: plugins add new tools, such as virtual instruments and effects, to your DAW. It's worth noting that plugins for music software are typically self-contained. While RPG Maker plugins may add completely new features to the engine, music plugins do not add new features to your DAW; instead they enhance what is already present inside it. More on that later.
There are different file formats for plugins. We'll go over these in a bit.

Virtual Instrument - A virtual instrument is a software instrument added through a plugin. These can be anything from a sampler/sample library to a synthesizer or soundfont instrument. In most DAWs, virtual instruments are a separate entity from effects.

VST - VST/VSTi/VST3 is a plugin format created by Steinberg, the developers behind the Cubase DAW. VST is the primary plugin format most DAWs use, with Logic, Pro Tools, and some outliers being the exceptions. VST is not an open format, and though it has had a lot of staying power over the past few decades as the main plugin format, there has been some resistance to it in recent years as people have pushed for a more open format. You can read more about the VST format here.
That was a lot, right? Well, as with any technical field, a lot of jargon comes with it, so it's important that you learn it. Memorization, however, should come naturally over time. The best thing to do is read all of it once just to get the general idea, and if you forget what something is, refer back to this section at any time. Let's move on to the next part.
Choosing the right DAW:
As you could quite clearly see from the section above (that you definitely read, right?) there are a lot of choices when it comes to DAWs. The biggest, most obvious limiting factor is going to be price, as DAWs can skyrocket in price, with tools like Studio One costing $400 and Cubase costing nearly $600. So where to begin with all of this? Well, that's not really for me to tell you, but I will give a brief overview of each of the major DAWs I listed in the glossary section.
To begin, I would like to clarify that I have not touched most of these DAWs, or have only played with them during their demo periods. This information may not be the most accurate, as it is primarily based on anecdotes from other musicians, as well as information gathered from each DAW's webpage/Wikipedia page - both of which will be linked at the end of each DAW's section.

Before you decide to purchase a DAW, well... first make sure you read about all of them. Second, don't fall victim to the entry-level option. Nearly all DAWs offer different tiers of the software: an entry version that is cheap but typically very heavily crippled compared to the full version; a second tier that usually offers the full version of the DAW but without many plugins (I think this is the best option); and normally a third tier that includes the full software plus a handful of bundled software. Other than with Ableton, Fruity Loops Studio, and Studio One, the included plugins are usually not worth the extra money.

Paradigms: How you'll interact with your DAW
We also need to discuss the 2 workflow paradigms commonly found across these DAWs.
The longest-running paradigm is the recording-based workflow. This workflow is really intended to aid musicians who are going to perform and record themselves - and that's not just recording live instruments, but also recording a MIDI performance on a MIDI controller (we'll cover MIDI controllers later in this post). As you might imagine from its name, the design philosophy of this paradigm is to prioritize recording performances and editing out human mistakes in those recordings. In a sense, you could think of the DAW as a tool to aid the musician.

The "newer" (though still very old) paradigm is the loop-based workflow. Again, as you might imagine based on its name, the design philosophy around the worflow paradigm is to prioritize building and editing loops. In a sense, you could think of the DAW as being your musical instrument to become a musician.

Both paradigms can do what the other one does - a loop-based workflow can of course record audio - but the design philosophy of the software will prioritize one kind of musician over the other.

In my opinion, the loop based paradigm is the better method when interacting with a DAW. It tends to put more emphasis on using MIDI, and nearly all aspects of audio production and computer music rely on MIDI now.
All Of The Big Names in Digital Music Creation:

Ableton Live - Ableton was built on a different paradigm of music production than the at-the-time status quo. It was built with live performance in mind, which helped it find a lot of success with EDM producers and DJs thanks to its loop-focused approach to music writing. It has remained popular, though in my opinion it falls behind FL Studio, a competitor with a similar paradigm in its music production design.
Website: https://www.ableton.com/en/shop/live/
Wikipedia: https://en.wikipedia.org/wiki/Ableton_Live
Ableton Live's workflow paradigm is a loop based workflow.

Cakewalk - Cakewalk has always been the "little DAW that could". With 1.0 originally launching on DOS back in 1987, Cakewalk has had many ups and downs over the years. The core user base has been relatively small compared to the other giants on this list; however, that small group is very dedicated (though most folks seem way too dedicated to whatever piece of software they use, so I feel like that dedication is actually not that special for Cakewalk). It was later purchased by Gibson during its SONAR era and then discontinued, only to be purchased by BandLab, where it now remains as a freeware DAW. I never used Cakewalk during its pre-SONAR days, nor during its SONAR days. As a product of BandLab, I found the interface a bit intrusive to use, and I had many questions about its privacy & data collection policy, seeing as the software is free but not open source and requires registration in order to use it. Ultimately, due to these concerns, I did not spend a lot of time with it. But if you're on a thin budget, then there is nothing better than free. However, there is another option that isn't exactly free but has an incredibly generous demo period which may be better than this one - we'll get there. For now, here are the links.
Cakewalk by BandLab: https://www.bandlab.com/ (Considering their rocky past, it may be discontinued by the time you click on it.)
Wikipedia for Cakewalk: https://en.wikipedia.org/wiki/Cakewalk_(sequencer)
For SONAR Cakewalk: https://en.wikipedia.org/wiki/Cakewalk_Sonar
for Bandlab Cakewalk: https://en.wikipedia.org/wiki/Cakewalk_by_BandLab
Cakewalk's workflow paradigm is (maybe) a recording-based workflow. (I never confirmed this.)

Cubase - Over the years, Cubase has replaced Pro Tools, which once used to be the "industry standard"; now Cubase is the industry standard. It is one of the best tools for composers, with an advanced MIDI roll that doesn't make you want to kill yourself every time you have to look at it - composing a symphony is much easier thanks to its robust composition tools. The problem? Well, it's Steinberg, who aren't exactly helping the gatekeeper-y perception around the music industry, with incredibly expensive software, an iron grip on the VST license, and hardware USB license keys required for their software - and those devices are not exactly cheap either. My experience with Cubase was of an incredibly robust piece of software that was unfortunately very unstable. To be fair, that was likely 6 or 7 years ago at this point, so there may have been improvements to its stability, but there have not been any improvements to their business model.
Steinberg Cubase: https://www.steinberg.net/cubase/
Steinberg Cubase Wikipedia: https://en.wikipedia.org/wiki/Steinberg_Cubase
Cubase's workflow paradigm is a recording based workflow.

Fruity Loops Studio - At its release, this DAW went by the name Fruity Loops, but over the years, in an effort to rebrand after nearly going out of business, they renamed it FL Studio. However, in my effort to piss off all of the rabid fanboys of this software, I still call it Fruity Loops Studio, and I will likely continue to refer to it as such. :LZSrasp: Fruity Loops Studio has a robust piece of software that follows the same production paradigm as Ableton. However, as the company neared bankruptcy and piracy ran rampant with their software, they took matters into their own hands. Were they Steinberg, they would have required an iLok for the iLok key - and possibly a blood sacrifice to the MIDI roll gods or something - but instead, realizing why the piracy existed, rather than applying more restrictions to prevent piracy, they lightened the restrictions. The software can easily be tested with little question: just feed it an email and you're good to go. They reduced the price of a license, making it far more fair - unlike Steinberg's requirement of at least 1 healthy kidney - and with the purchase of that license came a lifetime guarantee of free updates. So unlike every other DAW (except one - we'll get to it), you get all subsequent versions free: from Fruity Loops Studio 1 to 2, to 3, to 4, to 10, to 20, it's all free once you buy a license. We like this; they are customer-minded because they saw the havoc that ensues when companies take up anti-consumer practices. Overall, in my opinion, Fruity Loops would be my #2 choice in DAW.
Fruity Loops Studio by Image Line: https://www.image-line.com/
Fruity Loops Studio Wikipedia: https://en.wikipedia.org/wiki/FL_Studio
Fruity Loops Studio's workflow paradigm is a loop based workflow.

GarageBand (Mac only) - See, I regret saying that I would go over all of the DAWs in the glossary, because I don't own a Mac and cannot give much of an opinion on GarageBand or Logic. I will say: if you have a Mac, GarageBand is a great tool for dipping your toes into the water of computer music. It's like the RPG Maker of DAWs. It's not going to do everything the big boys can do, but it's definitely a great starting place for learning. I know this because back in the day, when I was a band nerd attending band camps and doing band things, I had no idea what computer music was, but my friend had a MacBook with this super cool software that I wound up staying up all night playing with. That was many years ago now, but I am sure the software hasn't changed much since then.
GarageBand Wikipedia: https://en.wikipedia.org/wiki/GarageBand

Logic Pro (Mac only) - Uh...just go to the links.
Logic Pro by Apple: https://www.apple.com/logic-pro/
Logic Pro Wikipedia: https://en.wikipedia.org/wiki/Logic_Pro

Mixcraft - OH BOY, where do I begin with Mixcraft? For the majority of my time as a computer musician, I used Mixcraft as my go-to DAW of choice. But ask me how I feel about it today and I'd say it's at the bottom of this list. Its biggest problem: bugs. It is so incredibly unstable. Large projects will often and very easily crash, get corrupted, or save plugin parameters incorrectly. Support for the DAW is fairly terrible in my opinion: after each version release they offer some patches for a few months, but usually swiftly move on to the next version, leaving behind many bugs in the old version which may or may not get patched in the new one - and even if they do get patched, the new version introduces new features which introduce new bugs, and so the cycle repeats. The positives? Well, since GarageBand is not available on PC, Mixcraft is a great (though more unstable) alternative to it. It does a great job of easing the user in with a very user-friendly interface for learning. The price of the normal version is also way more affordable than most DAWs on this list (except 1 - we'll get there...), and it comes with a huge library of soundfont instruments, loops, and a couple of middling virtual instruments. However, I have a hard time recommending it. The user-friendly interface, cheap price, and tons of soundfont & virtual instruments are great, but the ongoing bugs and lack of support post-launch create more headaches and frustration than I think it's worth, especially when there is a cheaper option that comes with a lifetime of free upgrades... which, I promise, we'll get to soon (it's not Fruity Loops).
Acoustica Mixcraft: https://acoustica.com/mixcraft/
Mixcraft Wikipedia page: https://en.wikipedia.org/wiki/Mixcraft
Mixcraft's workflow paradigm is a loop based workflow.

Pro Tools - Pro Tools once stood as the industry standard for in-studio DAWs. Any studio that wanted to be taken seriously by musicians HAD TO HAVE Pro Tools, otherwise they were nothing but a waste of time. Nowadays, Pro Tools is that software your dad probably talks about with his 65-year-old retired musician friends as they sip beers down at the local beer & pizza joint. What I am saying is: it's washed up. Pro Tools continues to stick to its archaic ways, with an anti-consumer price tag and DRM in place, and - yes, I don't like Steinberg's iron grip on the VST license, but - its refusal to use the current standard for plugins makes this software all but worthless in the modern age of digital music production. (All but worthless... wait, is that saying it has everything good and nothing bad? Whatever, you know what I mean: it's a useless software compared to the many other options available, especially since it uses a terrible subscription-based model for its pricing.)
Avid Pro Tools: https://www.avid.com/pro-tools
Pro Tools Wikipedia: https://en.wikipedia.org/wiki/Pro_Tools
Pro Tools workflow paradigm is a recording based workflow.

Reaper - This is the #1 DAW... (in my opinion... but it should be your opinion too...) Reaper is by far the most consumer-friendly software on this list. Licenses, in their own words, cost "not so much" - or in other words: less than Mixcraft. The trial period for the DAW? Unlimited. You can demo the software for as long as you would like and purchase a license when you feel ready. The full software is available to use unhindered during demo mode; you only have to deal with a nag screen that asks you to buy a license every time you open the software. So what's the catch? First, the amount of "free stuff" that comes with it is pretty limited. The ReaPlugs are great, transparent workhorses but offer little flavor (plus they are free for everyone regardless of DAW - we'll come back to that later in this post). Most DAWs offer at least some basic synths and soundfont instruments, but you'll have none of that with Reaper. Second, and most importantly, it's not the most user-friendly software. Out of the box it's ugly and utilitarian, and if you have any past experience with literally any other DAW, I can assure you it's not going to operate like you expect it to at first. But even that downside can be fixed, because the final win for Reaper is that literally - and I mean literally, not that zoomer use of the word, the actual definition of literally - every aspect of this DAW can be customized to your liking. Spending a few hours (which I realize is a big ask) setting up the DAW to operate the way you expect means anyone's workflow can work with this DAW. And for the RPG Maker devs out there: yes, Reaper can use VST plugins... or you can try its JS plugins, Reaper's own built-in scripting format for effects - if you're comfortable with a bit of programming, you can write your own JS plugins. Free from Steinberg's iron grasp! (Oh, and updates are always free.)
Reaper is also a complete tank. There are truly some gifted programmers behind this software, because I've thrown some very complex projects at Reaper and it barely even flinches. The only bug I have seen is that some plugin GUIs may begin to run at a low frame rate during long sessions, but this can quickly be fixed by simply saving, closing out, and reloading Reaper. And this may have been fixed already, as I haven't updated Reaper (deliberately) in probably a year or so. In summary: all of my DAW complaints are fixed by Reaper, but the downsides are a lack of bundled plugins and a brutally tough interface to make sense of.
Reaper: https://www.reaper.fm/
Reaper Wikipedia: https://en.wikipedia.org/wiki/REAPER
Reaper's workflow paradigm is a hybrid of the recording paradigm and the loop based paradigm. In this regard, no matter which paradigm in workflow you prefer, you can interact with it more or less the same way.

*I want to add a quick note about customization: my current layout, hotkeys, and mouse controls took about 4 hours to set up and continue to be an ongoing process of fine adjustments. So yes, setting up Reaper unfortunately has quite a time barrier if you are particular about your workflow. Fortunately, if you are starting with 0 experience, you likely don't have a preferred workflow yet, so the time spent customizing may be cut in half. Finally, there are Reaper skins, which will re-skin the software to look and behave like other DAWs. Take a look at their skin repository (that sounds horrific out of context).

Studio One - Finally, we come to Studio One. Studio One has become one of the biggest names among producers in recent years, as it's a fairly new entry into the DAW space (unless you want to count Cakewalk's 100th rebranding, in which case I suppose Cakewalk's current resurrection is the newest). Studio One is a robust piece of software: incredibly stable, featuring a unique drag-and-drop workflow, a beautiful interface, and some great bundled software. The real problems with Studio One are the price and a development cycle that seems more focused on adding new features than improving existing ones. Many features seem like cool concepts but lack the last bit of polish needed to integrate them easily into a workflow. However, if you are completely new to DAWs, then growing up on Studio One may help you create a unique workflow using some of these features. I own Studio One, but I do not use it a lot, because I just don't like the workflow very much - it's focused more on the recording paradigm of production rather than the loop-based paradigm found in Fruity Loops, Ableton, Mixcraft, and Reaper. And Studio One's lack of customizability means you're stuck with what you get out of the box.
Presonus Studio One: https://www.presonus.com/products/Studio-One
Studio One Wikipedia Page: https://en.wikipedia.org/wiki/Studio_One_(software)
Studio One's workflow paradigm is a recording based workflow.

What Should You Choose?
Here is my list of DAWs that I think you, the RPG Maker dev, should consider: Reaper - as if I wasn't clear enough about why in its description; FL Studio - because its consumer-minded business practices and extensive, supportive community should make learning easier than the other big-boy DAWs; and finally Cakewalk - for no other reason than its price tag of $0.00. ACT NOW, BECAUSE THIS OFFER WON'T LAST LONG (judging by their history...). But at the end of the day, it's not about the DAW you use, it's about the music you make. The DAW is just the tool that helps you get that done. It's very important, and yet also so incredibly unimportant. I hope the list above helps you decide which DAW you wish to use, and if it still doesn't satisfy, there are a number of other DAWs out there that I have either never heard of or that are just smaller in size (such as Reason - honorable mention, I guess).

Once you have chosen your DAW, I would advise you to read the manual it provides. Nearly all of the DAWs on this list come with incredibly extensive manuals, and they can be a valuable resource for learning to navigate the user interface. Alternatively, you could always search YouTube for a "Getting started with [DAW]" tutorial. Due to the number of DAW options, it is really impossible for me to walk you through the interface, so going forward I am going to operate under the assumption that you can do basic things within your DAW, such as creating a new instrument track, creating a new audio track, opening the MIDI roll, setting a virtual instrument, applying FX to a track, and locating the master FX bus.
(But I'll give a brief hint about Reaper, since its tracks confused me when I first started using it: a track is just a track in Reaper. Unlike other DAWs, there is no such thing as an "audio track" or a "MIDI track"; tracks can contain both MIDI data and recorded audio data. Likewise, virtual instruments and effects are treated the same: you don't assign a virtual instrument to a track, you add a virtual instrument as if it were an effect.)
Your Music Creation Tools: Virtual Instruments.
Going forward, we are going to begin talking about the tools you will use to create music. To begin, we need an instrument, right? So now we are going to talk about some common virtual instruments, including samplers & sample libraries, synthesizers, and soundfonts. I'll include links to both free and commercial instruments; however, it's worth noting that the number of plugins out there is vast, so using tools like KVR Audio to find new plugins is ideal, because my list will most certainly not be in-the-least-bit comprehensive.
There are a number of different types of "instruments" that can be used to create your music. Obviously, you could record live instruments if you have any skill at playing them, but we aren't discussing hardware here, so let's just look at software instruments.

Synthesizers - Synthesizers come in a variety of types, and there is an entire field of sound design around them that is almost entirely separate from music production itself. Still, a basic understanding of the types of synthesizers may be valuable when deciding which synthesizer to get. Let's go over those types.

Additive/Subtractive Synthesizers - For a relatively pedestrian type of synthesizer in the software synthesizer space, the concepts behind these are incredibly complicated. In short: additive synthesis adds a number of sine waves together to create new sound, while subtractive synthesis attenuates a harmonically rich tone through a filter.
[Image: Synth1.png]
A great example of a popular free subtractive/FM hybrid (we'll get to FM soon) synthesizer is Synth1 by Ichiro Toda. My main complaint with Synth1 is that its tone feels quite dated now and I've grown tired of its sound - but to be fair, it has been a mainstay of my library (and many other musicians' libraries) for many years, and for some of those musicians, nearly 2 decades.
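If it helps, here's additive synthesis boiled down to a few lines of Python/NumPy: stack sines at whole-number multiples of a fundamental and you get a new timbre. (The 1/k volume rolloff used here happens to approximate a saw wave - different harmonic recipes give different tones.)
Code:
import numpy as np

SR = 44100
t = np.arange(SR) / SR

def additive_tone(f0, n_harmonics=8):
    """Sum sine waves at integer multiples of f0, each quieter than the last."""
    out = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        out += np.sin(2 * np.pi * f0 * k * t) / k   # 1/k rolloff ~ saw-ish tone
    return out / np.max(np.abs(out))                # normalize to full scale

tone = additive_tone(220.0)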

FM Synthesis - FM stands for frequency modulation. FM synthesis is accomplished by rapidly modulating the frequency of a wave. FM synthesis became popular with the well-known Yamaha OPL2 and OPL3 chips, which were featured in many synthesizers, sound cards, and games of the era. FM synthesis, though very powerful - including the ability to create harsh digital sounds found in modern EDM genres - is more renowned for defining the synth sound of the 80s and its resurgence in popular media.
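The whole trick fits in one line of math. Here's a minimal two-operator FM sketch in Python/NumPy - not how any particular Yamaha chip does it internally, just the core idea of one wave rapidly wobbling another's frequency:
Code:
import numpy as np

SR = 44100
t = np.arange(SR) / SR

carrier_hz = 220.0    # the pitch you actually hear
mod_hz     = 440.0    # the modulator rapidly wobbles the carrier
index      = 3.0      # modulation depth: higher = brighter, more metallic

# Two-operator FM: the modulator's output is added into the carrier's phase.
fm_tone = np.sin(2 * np.pi * carrier_hz * t
                 + index * np.sin(2 * np.pi * mod_hz * t))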
For example, Super8 is a great commercial synth by Native Instruments based on dual FM operators.

[Image: Super8.png]


AM Synthesis - If a sound wave is made up of 2 components, frequency and amplitude, and we can modulate the frequency to create new sounds... what would be the brother of FM synthesis? Why yes, of course: amplitude modulation! AM synthesis rapidly modulates the amplitude of the wave to create new sounds. AM synthesis is far less common in the world of synthesizers, and as such I don't have an example synth to share with you. However, one type of complex hybrid AM synthesis is the concept of "snare rushing" or "retriggering". This is an electronic music production technique in which a sample is triggered at such rapid speeds that it generates a tone. Snare rushing was made famous by Venetian Snares (where the name "snare rush" comes from), but it is more commonly referred to now as retriggering. (Many years ago, I also used snare rushing to do a basic cover of "In The End" by Linkin Park - which I just spent 10 minutes looking for and can't find on my YouTube or hard drive...)
AH-HA! After another 15 minutes of searching, I found it uploaded to my Facebook page. I've re-uploaded it to my youtube page so you can see it.
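For completeness, here is AM's core idea sketched in Python/NumPy. The only real decision is whether the gain stays positive (classic tremolo-style AM) or swings negative (ring modulation, which sounds far stranger):
Code:
import numpy as np

SR = 44100
t = np.arange(SR) / SR

carrier = np.sin(2 * np.pi * 440.0 * t)
mod     = np.sin(2 * np.pi * 110.0 * t)   # audio-rate modulator

am_tone   = carrier * (0.5 + 0.5 * mod)   # gain stays 0..1: tremolo-style AM
ring_tone = carrier * mod                 # gain goes negative: ring modulation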


[Image: Massive.png]
Wavetable Synthesis - Wavetable synthesis is a really complicated form of synthesis that is difficult to explain, but in summary: the synthesizer references a stored wave. This wave could be sampled from a live instrument like a cello, or from a generated tone such as a saw wave. The synthesizer looks up this wave and modulates it based on however the synthesizer itself was designed. That's a rough overview - the details vary instrument to instrument - but generally speaking, a wavetable synthesizer is defined by having a table of samples or waves which the synthesizer reads through to generate sound. Do you remember the era of the early-to-mid 2010s when dubstep took the world by storm? We have a wavetable synthesizer called Massive by Native Instruments to thank for that. I love Massive; it's powerful and versatile. Too bad it got a bad rap due to its overuse in an overused genre of music. From this era also spawned another popular wavetable synthesizer used for dubstep, called Serum by Xfer (picture not included in this post). Native Instruments later created a sequel called Massive X, which apparently folks really like, but I tried it and absolutely hated it, so I'm not sure what I am missing.
One of the more popular wavetable synths released in recent years is Vital by Vital Audio, which is semi-free. Effectively, the synth engine itself is entirely free, but the wavetable selection for free users is limited to only 5. Purchasing higher tiers increases the wavetable selection.
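Stripped to its bones, a wavetable oscillator is just a lookup loop. A rough Python/NumPy sketch follows - real wavetable synths interpolate between samples and morph between multiple stored tables, which is where the magic actually lives:
Code:
import numpy as np

SR = 44100
TABLE_SIZE = 2048

# The "table": one single cycle of a wave. Here a plain saw ramp; a real
# wavetable synth would let you swap in a cycle sampled from a cello, a
# voice, anything.
table = np.linspace(-1.0, 1.0, TABLE_SIZE, endpoint=False)

def wavetable_osc(freq, seconds=1.0, sr=SR):
    """Generate a tone by repeatedly reading through the table at freq Hz."""
    n = int(sr * seconds)
    phase = (np.arange(n) * freq * TABLE_SIZE / sr) % TABLE_SIZE
    return table[phase.astype(int)]     # nearest-neighbor table lookup

tone = wavetable_osc(220.0)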

[Image: Generate.png]
Chaotic Synthesis or Dual Pendulum Synthesis - I'm too dumb to wrap my head around this one. Take a look at this double pendulum simulation. Notice the strange patterns and seemingly random paths the 2 blue dots of the pendulum take when you move both pendulums at the same time? Now imagine that... but moving fast enough to generate sound. This is the foundation of your chaotic synthesis engine. I only bring this up because at this time there is only 1 synth in existence that uses this as its tone generator, but holy damn is it a cool synthesizer. I would like to introduce you to Generate by Newfangled Audio.
With it being such a new form of synthesis, there aren't a lot of concrete examples of what it sounds like. Fortunately, in my signature link about hiring me to compose for you, I've linked a song where most of the synthesizer sounds were built from Generate. I'll save you the effort of finding that song and just let you click right here to listen to it. I've also run the demo song in this thread through Generate. It sounds terrible, because the song wasn't designed around Generate, but it should give you a rough idea.

All About Soundfonts

Next I would like to discuss soundfont instruments. At this point you have read me say "soundfont" several times without my ever defining what exactly a soundfont is. Well, the title tells you everything. Just like in text, where you have various fonts to pick from that change the character of the text, in music you have soundfonts, which change the character of the MIDI. Remember those heinous ringtones I was talking about in the glossary section that you definitely read? It was never entirely the MIDI at fault for the heinous sounds; it was the soundfont library that was used. Soundfonts are very basic sampled instruments, typically a single sample that is triggered and pitched according to the key you press. They are very basic in functionality and don't allow for a lot of expression. Any expression that does occur may come from the soundfont player you are using, rather than from the sample base itself - such as adjusting volume with velocity, attack and decay with expression, and the cutoff frequency with the mod wheel. You can hear that when I change my demonstration song to a soundfont-based koto provided by Roland Zenology, instead of the sample-library-based koto I was using, a lot of the life in the song dies out.

First, I would like to apologize for the deep, ugly note you hear occasionally throughout the song. This is an unintended early introduction to MIDI keyswitches, which we'll discuss a bit in the sample library section. Had I been more mindful, I would have muted these. Secondly, I muted many of the effects and just let the soundfont play dry. You may say that gives an unfair representation of soundfonts, and I would say: absolutely, you're right. However, this is how many people have heard or even used soundfonts themselves (prompting me to make this long tutorial), which goes to show how lifeless they are compared to powerful sample libraries. That said, soundfonts can still be valuable tools for creating great music. You may not have even realized that in the unaltered version of my demonstration song I am already using soundfonts in the mix, but it is less obvious due to layering - a technique we may cover in this tutorial (undecided at the time of writing). Some great soundtracks were originally built using soundfonts too, such as the original (non-HD) versions of The Legend of Zelda: Ocarina of Time, Majora's Mask, Wind Waker, and Twilight Princess.
As a final demonstration for soundfonts, I have changed all of the Koto soundfont instruments in the demonstration songs to random soundfonts to create a heinous mess reminiscent of those godawful ringtones.
(I should mention that I forgot to change the text in the video, sorry about that.)

So what are some great soundfont libraries?
[Image: RolandZen.png]
Well, by far one of my favorite synthesizers/plugin workstations is the aforementioned Roland Zenology. Unfortunately, it uses a subscription-based model for monetization - though there is a way to buy a "lifetime key", but I'll be damned if I can figure out how to navigate their godforsaken website to find out how to purchase it. That likely puts this rather powerful instrument out of the hands of most, which is unfortunate. (They should probably take notes from Image-Line and FL Studio, considering anti-consumer practices and the rampant piracy they fueled nearly bankrupted that company before they shifted models and became profitable... Roland.)
Sforzando by Plogue has continued to be a popular free soundfont player. Before I acquired Kontakt and Zenology, this was a tool I often used myself, though I haven't used it in a long time. If I remember correctly, it is only a player and doesn't include any soundfont libraries. Libraries would need to be downloaded separately, but savvy googling will likely turn up some good results. To aid you: you are primarily looking for .sfz files, and on occasion .sf2.

Stuffing a Skilled Session Musician Uncomfortably into Your PC: Sample Libraries
Technically speaking, since soundfonts are sample-based libraries, they would also fall under this category. But there is a big difference between soundfonts and what I am talking about here.
Have you wondered how I managed to make it sound like I have a live recording of a musician playing the koto for my song? Is it because I can play the koto? No - though I would like to (I've been told it's a pretty easy instrument to learn, too!). What you are hearing is a sample library built for Native Instruments Kontakt, which I will unofficially call a sample engine. Effectively, that's all Kontakt is: an engine. It's a shell that needs "fuel" in order to do anything. Does this make it a huge rip-off? Sort of. At its MSRP, you effectively get: nothing.
But there is nothing else that really compares to its capabilities, so unfortunately all of us musicians just suck it up, gripe about the price, and accept that's the way things are. Also, because it's owned by Native Instruments, it's a closed-source software, requiring those who wish to produce a sample library for it to license Kontakt, which directly affects how expensive sample libraries can be - and let me assure you, they can be mindblowingly expensive.
[Image: BerlinStrings.png]

So what sets these sample libraries apart from a soundfont library?
As I mentioned with soundfonts, they are mostly based around a single sample per instrument. Kontakt libraries have incredibly extensive sampling. Typically every note is recorded, and each note is recorded multiple times to create round robins. Round robins exist to eliminate the "machine gun" effect when triggering a sample multiple times. You can try this now: try rapidly striking your desk. You'll notice that each hit sounds ALMOST identical - there are still very minor variations that make each hit distinct. Obviously, in the digital realm, where we are triggering pre-recorded samples, constantly calling the same sample is not going to sound realistic. Kontakt libraries have fake round robins (made by slightly altering the sample on each playback), but in addition to the artificial round robins, most libraries also have real sampled round robins.
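The logic behind round robin selection is dead simple - the only real rule is "don't play the same take twice in a row". A toy sketch in Python (the file names here are made up purely for illustration):
Code:
import random

# Hypothetical library: four recorded takes ("round robins") of one snare hit.
snare_takes = ["snare_rr1.wav", "snare_rr2.wav",
               "snare_rr3.wav", "snare_rr4.wav"]

last_take = None

def next_round_robin():
    """Pick a take at random, but never the same take twice in a row."""
    global last_take
    choice = random.choice([s for s in snare_takes if s != last_take])
    last_take = choice
    return choice

# Eight rapid hits cycle through different recordings: no machine-gun effect.
print([next_round_robin() for _ in range(8)])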
You also typically get multiple velocity layers for each note. We'll discuss velocity more when we get to the MIDI section of this post. For now, let's focus on what that means for a Kontakt sample library: as you play an instrument louder or softer, the sound doesn't simply change in volume; the timbre itself evolves ever so slightly. For example, a clarinet gets more breathy raspiness when played softly, and when played loudly, some harmonics are emphasized more than others. Velocity layers account for those changes.
Finally, instruments can be played in more than one way. We call these "articulations". Think of all the different ways a guitarist might play their guitar: they can strum the strings to make chords; they can pluck the strings - and even pluck them 2 different ways, with a pick or with a finger; they might rest their palm on the strings to create a muted "chug"; and countless other ways of manipulating the instrument. A comprehensive sample library often allows for multiple articulations to create a believable performance by means of MIDI keyswitches. Typically these keyswitches are placed in an unused, very low octave on the keyboard. By pressing the specified MIDI key, you can change the articulation of the current library on the fly. Articulation can also be changed by other means, such as velocity, the mod wheel, expression pedals, or even when two MIDI notes overlap. All of these MIDI parameters are handled by the sample engine - in this case, Kontakt.
So hopefully now you can see just how powerful Kontakt libraries can be. With all of that new information in mind, give my demonstration song another listen, see if you can figure out how I used Kontakt for the song.

"But $300 for Kontakt and possibly another $50 - $200 for a library is way out of my price range. Are there any alternatives?"
Yes.
Remember how I mentioned that most of us musicians just swallowed the bitter pill, though none of us were really happy about it? Well, what happens when you force people to swallow that bitter pill too many times? They either start looking for alternatives or they make their own.
While I was looking up Berlin Strings by Orchestral Tools for the screenshot I posted in the above section, I noticed they have decided to stop supporting Kontakt and have instead created their own sampler called SINE. From my very brief poking around, it seems they wish to get as many adopters on board as possible in an effort to make it more recognized and ultimately replace Kontakt in the long run. As such, not only is SINE free, but Orchestral Tools is also giving away some free sample libraries to go with it.
Now, to be clear, I have never used SINE, as I only learned about it while making this post.
There is another alternative available.

Decent Sampler by Decent Samples is a free sampler alternative using the Decent Sampler format. The Pianobook community has been creating free sample libraries for the Decent Sampler, giving you lots of tools to begin building your library of sounds for music creation.

How to Become the Pretty Lights: Samplers

Are you sick of hearing my little koto demo song? I know I am. I'm living in koto hell at the moment, as I have had it on loop for about 90% of the time I've been writing this thread. Let's take a break from the koto song and give this one a listen:

What's going on in this song? You may have recognized the vocal track from Avicii's song "Levels". The vocals are from Etta James's song "Something's Got a Hold on Me". And what about that guitar and e. piano riff? That comes from Sonny Stitt's "Private Number".
We call this sampling. Sampling has actually been around for a long time, but it has become increasingly common in modern EDM and even modern pop music. I can't confirm whether Pretty Lights uses a sampler, but samplers certainly make the job a lot easier. In the demonstration below (not by me :LZSlol: ) you can see someone I've never seen before using a hardware sampler by Novation called the Launchpad.

It's not entirely accurate to call them samplers, and the name can be a bit confusing next to sample libraries. They are sometimes referred to as drum samplers - more accurately you might call them one-shot samplers - but generally speaking we just call them "samplers".
The way samplers work is that you load samples from a song into the sampler. In the case of the Novation, you assign these samples to specific pads, but in the realm of software we are not bound by physical pads. When the corresponding key is pressed, it triggers that sample. Typically a sampler also lets you set parameters telling it how to handle a specific sample. For example, perhaps you sampled a D note from a song. When you load that sample into the sampler, it has no idea what note that is, so you can specify that it is a D. Once it knows this, it's smart enough to transpose the sound up or down the scale, so you can trigger the sound all along your keyboard and it will stay in tune. (I had a good demonstration of this in action recommended to me on YouTube the other day, and now for the life of me I can't find it...)
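The math behind that transposition is worth knowing: shifting by n semitones means playing the recording back 2^(n/12) times faster. Here's a minimal sketch of the idea (using NumPy; real samplers use fancier resampling than this simple linear interpolation):

```python
import numpy as np

def repitch(samples: np.ndarray, semitones: float) -> np.ndarray:
    """Transpose a one-shot by resampling: +n semitones plays it
    2**(n/12) times faster (and shorter), -n slower (and longer)."""
    rate = 2 ** (semitones / 12)
    positions = np.arange(0, len(samples) - 1, rate)
    return np.interp(positions, np.arange(len(samples)), samples)

# A sample tagged as a D (293.66 Hz), triggered two keys higher, becomes an E:
d_note = np.sin(2 * np.pi * 293.66 * np.arange(44100) / 44100)
e_note = repitch(d_note, 2)
```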
I'll be honest here: this is a realm I don't know much about, so I won't go into much more detail. Sampling to this extent has never been a huge part of my music, though admittedly my previous album sampled many different sources. (Listener discretion is advised - not just because it references a lot of dark, adult material, but also because it seems to be rather hated by listeners; it's certainly a tough one to appreciate.) (Oh, I totally forgot that the first track on the album is a great demonstration of snare rushing! The lead synth that comes in midway through the song is actually a very, incredibly fast, rapid-fire snare and not a synth at all!) Doing a quick Google search on this topic, the TX16Wx Software Sampler (great name for marketing...) seems to provide a lot of control and possibilities. A tool like this effectively gives you as many possible sounds as you have music/audio files on your hard drive, though it's always going to sound digital, since it's merely retriggering the sound rather than using anything as sophisticated as round robins or velocity layers.

And with that, you have now learned about all of the different methods of creating a sound in computer music.
Synthesizers are great, powerful tools for creating textures and atmosphere across all sorts of genres of music, from harsh and digital, to classic and warm, to strange and abstract, to some (not previewed in this thread) that even sound organic.
Soundfonts are useful tools due to their versatility, and soundfont libraries are usually far more complete in their number of instruments compared to sample libraries, though soundfonts lack the life of sample libraries. They are still useful for filling out mixes, emphasizing certain instruments, or sketching out song ideas quickly.
Sample libraries are powerful tools that put the skill of a real session musician at your fingertips and are absolute necessities for soundtrack composers.
Samplers give you access to an unlimited number of audio possibilities, though the results will always seem somewhat digital in nature.

Writing Music in the MIDI Roll
Assuming that you didn't sacrifice a healthy kidney to the MIDI Roll god, you are likely using a pretty pedestrian MIDI roll. The MIDI roll is a pretty joyless space, but it is where you will spend the most time. So let's learn how to use this tool so it's not frustrating AND joyless.
The Different MIDI Control Changes (CC):

Since you definitely read through the glossary, you already know that MIDI is just data. This data is separated into different control change parameters, and each parameter serves a different function. Here I will go over those functions in order of how likely you are to use them.
*A quick note before diving in: I have often referred to CC as a "channel", so if I use the word "channel" when referring to MIDI below, I am talking about MIDI CC. The reason this distinction matters is that MIDI channels are something else entirely - we won't be going into MIDI channels in this tutorial. I'll try not to call CC a "channel" from here on, to reduce confusion.
Velocity - This section is important! Don't skip it! Velocity is the most important MIDI parameter you need to use. (Strictly speaking, velocity travels inside the note-on message rather than as a CC, but every MIDI editor presents it alongside the CC lanes, so I'll cover it here.) We previously discussed velocity layers in the section about sample libraries, which I know you also definitely read. In that section I discussed how samples account for playing an instrument at different volumes and how the timbre changes as the musician plays more or less intensely. It's important to remember that phrase: the intensity of play. While volume and intensity are typically proportional to each other, they are entirely separate things. In the realm of digital music and recording, you can play something REALLY FRIGGIN LOUDLY but then turn its volume down, and that's going to sound a lot different than if you play an instrument weeeeeaaallly qwietly and turn the volume WAY THE FUG UP. So that's the philosophy you should take toward what velocity actually means: it is the intensity the instrument is played at. Yes, this is typically reflected in the volume, but you should consider volume a side effect of velocity, not its definition. So with that philosophy instilled in your head, what do we do with velocity, and why is it so gosh darn important?
There are multiple cases for why velocity is important.
Let's go back to our hitting-the-desk example I used when talking about round robins.
If you just smash your fists into the desk at equal intensity in a rhythm, the rhythm...well...doesn't sound like much of a rhythm, just banging. But by slightly varying the intensity with which you smash your fists into the desk, emphasizing some notes and de-emphasizing others, the rhythm begins to come to life. This is velocity at work: velocity helps emphasize rhythm.
Velocity.png
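As a toy illustration of that idea, here's what an accented 16-step hi-hat lane might look like as plain velocity values (the specific numbers are arbitrary; it's the variation that matters):

```python
# Accent the downbeats, keep the in-between hits lighter.
velocities = []
for step in range(16):
    if step % 4 == 0:
        velocities.append(110)  # downbeat: hit hard
    elif step % 2 == 0:
        velocities.append(85)   # upbeat: medium
    else:
        velocities.append(60)   # sixteenth in between: soft
print(velocities)  # [110, 60, 85, 60, 110, 60, 85, 60, ...]
```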

When a pianist plays the piano, they aren't just constantly ramming their fingers into the ivories. That would be a rather unpleasant experience. You'll hear them modulate the intensity to emphasize some phrases and give an ebb and flow to the music. Allow me to solo the lead koto of my demo - give it a listen.
You should be able to hear the ebb and flow of the music as the picking modulates between hard and moderate strumming.
If it's still too subtle for you, then let me let you in on a secret: this koto song is actually based on a piano composition I did. So let's listen to a short snippet of the piano version and listen for velocity at work:
So velocity helps bring life to music by letting the energy (intensity) of the song flow, rather than keeping a static amount of energy.

You should hopefully now understand the importance of velocity. I often hear amateur songs (including my own early works) that keep the velocity pinned at 127 (the maximum value). If you want any hope of sounding professional, you need to start taking advantage of velocity and stop pinning it to a single value. There is a way to create minor variation in velocity across your whole song without adjusting velocities one at a time, which we'll get to later in the MIDI features.

As one final note on velocity, most sample libraries and some synths allow you to set a "highest possible velocity". In the case of Kontakt libraries, perhaps your song is more of a mezzo-forte (moderate) song at its loudest, and pianissimo (quiet) throughout the rest. By limiting the highest possible velocity, the intensity of the instrument will never exceed mezzo-forte. In the piano example above, you may have noticed that the piano remains fairly gentle in its intensity, even though the picture of the MIDI velocity lane shows peaks at the maximum value of 127. The reason the piano never sounds like the pianist slammed their face into the keys is that, within my piano library, I reduced the highest possible velocity.
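In engines without that setting, you can get a similar result by rescaling the MIDI data yourself. A tiny sketch of the arithmetic (the ceiling value of 96 is just an example):

```python
def cap_velocity(vel: int, ceiling: int = 96) -> int:
    """Rescale a 0-127 velocity so the loudest possible hit lands at
    `ceiling`. Scaling (rather than hard-clipping) keeps the relative
    dynamics of the performance intact."""
    return max(1, round(vel * ceiling / 127))

print(cap_velocity(127))  # 96 - full-force hits never reach the top layers
print(cap_velocity(64))   # 48 - quieter hits scale down proportionally
```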

Let's move on now.
All other MIDI CCs are much less important and more situational, but they should still be understood.

Keylab.jpg
Modulation Wheel - You can think of the modulation wheel as a free "do what you want to the sound" controller. Different instruments use the mod wheel in different ways. Synthesizers often use it as a filter cutoff, though it can be assigned to nearly any parameter on a synth. Sample libraries may use it to transition between different articulations. In the case of my guitar library, it controls the strum intensity, separating intensity from velocity. If you are unfamiliar with why I call it a "mod wheel", it's because it refers to an actual wheel found on many keyboards.

Pitch Wheel - The pitch wheel does what it says on the tin: it adjusts the pitch. This beautiful picture of a gorgeous Arturia Keylab 88 (with weighted keys!) is not mine, sadly. However, it makes a great demonstration piece for both the pitch wheel and the mod wheel. You'll notice there is a notch in the middle of the pitch wheel. That's to place your finger in - a nice ergonomic design by Arturia. But why the middle? That's because the middle is the 0 position. Moving the wheel down shifts the pitch of the currently played note down; conversely, moving up shifts the pitch up.

Foot Pedal/Sustain Pedal - Once again, the name comes from the hardware this CC is typically associated with. Most MIDI controllers (another name for a keyboard like the Arturia Keylab 88 with weighted keys) have an input for a foot pedal, similar to the foot pedals found on a grand piano. Just like one of the pedals on a grand piano, this pedal typically controls sustain. When the foot pedal is held down, any notes you press will continue to ring out whether or not you are still pressing the key - in other words, it sustains the note.

Expression/Expression Pedal - Expression is like the mod wheel but for your foot. What it does depends on the instrument. I rarely see synths take advantage of the expression pedal, but sample libraries often use expression to change articulations. For example, high expression in a string library may make the violinist attack the strings hard and fast with the bow, while low expression may make the violinist attack more gently and slowly.


Volume - Does what it says. This is a volume control embedded in the MIDI data, rather than part of your DAW's project file. Keeping in mind our philosophy around what "velocity" actually means, the volume CC lets us change the volume of the sound independently of the velocity, so we can create that REALLY LOUD whisper or really soft SCREAM.
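If you're curious what this data actually looks like on the wire, here's a minimal sketch using the free Python library mido (nothing this tutorial depends on - any MIDI library would do). The CC numbers shown are the standard MIDI assignments: 1 for the mod wheel, 7 for volume, 11 for expression, 64 for the sustain pedal; pitch bend travels as its own message type rather than a CC.

```python
import mido

# Note-on: velocity rides inside the note message itself, not a CC.
note = mido.Message('note_on', note=60, velocity=96)

# The CCs discussed above, with their standard MIDI numbers:
mod_wheel  = mido.Message('control_change', control=1,  value=80)
volume     = mido.Message('control_change', control=7,  value=100)
expression = mido.Message('control_change', control=11, value=64)
sustain_on = mido.Message('control_change', control=64, value=127)

# Pitch bend is its own message type, centered at 0 (range -8192..8191).
bend_up = mido.Message('pitchwheel', pitch=4096)

for msg in (note, mod_wheel, volume, expression, sustain_on, bend_up):
    print(msg)
```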

Bringing Some Humanity To The MIDI Roll:

Step 1: How to Interface with the MIDI Roll
There are 2 different options when it comes to interfacing with the MIDI roll. Option 1 is to use a MIDI controller. As I mentioned above, a MIDI controller is typically a hardware keyboard. What makes a MIDI controller a MIDI controller and not a keyboard is that a MIDI controller can only do one thing: send MIDI. Most keyboards, by contrast, have sound hardware built in, such as a synthesis engine or a piano library. MIDI controllers come in a range of shapes and sizes. As I have mentioned, this tutorial is more about the software side of things than hardware, so I won't discuss why 88 weighted keys is the mistress that invades my dreams every night.

Option 2 is to use your mouse. This is why the MIDI roll is so joyless. Every DAW operates similarly, though with some slight variation, but for the most part clicking and dragging is how you can add notes to the MIDI roll.

And if you are going with option 1, don't think you are free from having to use your mouse - no-no-no. You'll still likely need the mouse to refine your performance; unless you are a piano prodigy, you are unlikely to perform perfectly. That is the nice thing about MIDI, though: you can give a sloppy performance and then open the MIDI roll to fix all of your mistakes, and watch as the life gets sapped out of you because you weren't willing to drop $600 on Cubase plus an additional $30 on an iLok USB key (that you are definitely going to lose in a month's time) so you could access the MIDI Roll Fun Zone.

Step 2: Humanize it
Humanize.gif
What you are seeing is not the MIDI roll's "it's not a joyless place after all" dance. This is me rapidly clicking the "New Random Seed" button in Reaper. Humanize is available in all DAWs, but each DAW handles it a little differently. In Reaper, you can set percentage offsets for each parameter from its starting position; clicking the new-seed button then randomly shifts the note timing and velocity within the specified percentage. Normally I would vary this by only a small amount, such as 3%, but for this demonstration I wanted to exaggerate the effect so it is clearly visible.
Naturally, when you use your mouse to click in every note, everything snaps absolutely perfectly to the timing grid - but humans don't play like that. Our internal clocks are imperfect, and sometimes we also want to play music slightly off the beat.
Using the humanize functionality built into your DAW is a quick way to adjust all MIDI notes and velocities so that there is a more human side to things, stepping away from the perfectly clocked machine.
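Under the hood, humanize is nothing exotic. Here's a minimal sketch of the idea (plain Python; the note-list shape is invented for illustration, not any DAW's actual format):

```python
import random

def humanize(notes, timing_amt=0.03, vel_amt=0.05, seed=None):
    """Nudge note starts (in beats) and velocities by a small random
    amount. Re-running with a new `seed` is the equivalent of mashing
    Reaper's "New Random Seed" button."""
    rng = random.Random(seed)
    out = []
    for start, velocity in notes:  # notes: list of (start_in_beats, velocity)
        start += rng.uniform(-timing_amt, timing_amt)
        velocity = round(velocity * (1 + rng.uniform(-vel_amt, vel_amt)))
        out.append((start, min(127, max(1, velocity))))
    return out

straight = [(0.0, 100), (0.5, 100), (1.0, 100), (1.5, 100)]
print(humanize(straight, seed=42))  # slightly loosened timing and dynamics
```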
I'd like to let you in on my secret technique, a secondary option to humanizing.
By default, DAWs lock your mouse clicks to the specified grid; however, you can easily turn snapping off. The grid lines are still drawn, so it's up to your imperfect, pixel-imprecise hand to draw the notes onto the grid. Disabling snapping can help create minor differences in timing, such as a chorus of drummers all hitting their sticks not quite, but very close to, perfectly in sync. What a specific example that is. I bring this up because you should listen to my demonstration song again - but now let's focus on the drums. I've soloed them for this demonstration.
Just keep in mind, you don't always have to snap notes to the grid.

Step 3: Quantize it
I mean, I only call it step 3 to stick with the cutesy theming for this unit - in practice you should never quantize right after humanizing; that would defeat the purpose. So what is quantize? Well, as mentioned: humans play sloppily, our clocks are imperfect, and unless you are a music prodigy, you are likely going to have some pretty severe timing mistakes in your performance. Quantize snaps your notes to the nearest grid line, effectively turning your very human performance into a very computer performance. Ideally, you'd quantize first (so this should actually be step 2) to clean up your most egregious timing mistakes, and then apply humanizing after quantizing to bring back the human element of the performance.
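Quantizing is even simpler than humanizing: snap each note's start to the nearest grid division. A sketch, using the same beats-based timing as the humanize example above:

```python
def quantize(start_in_beats: float, grid: float = 0.25) -> float:
    """Snap a note start to the nearest grid line.
    A grid of 0.25 beats is a sixteenth-note grid in 4/4."""
    return round(start_in_beats / grid) * grid

print(quantize(1.07))  # 1.0  - a late hit pulled back onto the beat
print(quantize(1.86))  # 1.75 - snapped to the nearest sixteenth
```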

And with that, we have covered all the important parts of interacting with the MIDI roll (I think - it's 2:30 AM and I'm going cross-eyed from staring at the "Post Thread | RPG Maker Forums" page...).
The key takeaway is to vary your velocity. Don't keep things pinned to a single value; create variations in velocity to emphasize rhythms and give the music an ebb and flow, adding a more natural, human element.
Speaking of human, the next key takeaway is to humanize your MIDI performance using the built-in tools, and by disabling grid snapping. Don't always keep things locked to the clock, and don't be afraid to offset things by a few milliseconds to give them a bit more life.


Music Theory Fundamentals: The Rules You Need to Learn so You Can Break Them.

At this point we have covered instruments, and we have covered how to use those instruments in the DAW by utilizing MIDI. Before we continue into our discussion about effects, we should learn some basic music theory. In this section I am only going over some very basic aspects of music theory, so if you are already familiar with these parts, you can skip this chapter.

Let's begin with a question I am sure some folks will want to ask...
"Do I have to learn music theory?"
The short answer to that question is: no, of course not. Many of the musical genres we have today came from under-privileged individuals who just wanted to make some sounds, and as their sounds grew popular, a new chapter was written in the book of music theory. Jazz, rock, and techno all have similar backstories.
That said, without knowing the rules, your possibilities are infinite, but your likelihood of writing something good is small. Think of it this way: there is a project - the image version of the Library of Babel, I believe - that generates random arrays of pixels. It will never generate the same array twice, so each generation is a completely unique array of randomly colored pixels. Just following the rules of the universe, even though the odds are 1 in a billion to the billionth (or possibly worse), there is still a non-zero chance that this random selection of colored pixels will eventually produce the most beautiful image ever created...theoretically. But because there are no rules for how and where to place those pixels, it will likely take longer than the lifespan of humanity before it generates anything that even remotely looks like a piece of art, rather than random color noise.
The same idea could be applied to you, the blank slate musician who knows nothing about music. The more rules you know about music, the less "random color noise" you will make.
Music theory is deep and rich with history, and I'll be the first to admit I've had daddy issues for as long as I have been doing music, living under the shadow of my father - a man who knows the concepts of music deeply. My own understanding of music theory is fairly shallow. Let's just start with scales, time signatures & tempo, some basic rhythms, and what a chord is. This should be more than enough to start you down a path filled with less random color noise.

Do-Re-Mi, the Notes of the Scale
A scale is built on 7 unique notes and follows a pattern of whole steps and half steps. For a major scale, that pattern is: whole, whole, half, whole, whole, whole, half. The 8th note of the scale lands back on the starting note, 1 octave higher or 1 octave lower depending on whether you were going up or down the scale.

So what does any of this mean?
keys.jpg
If you look at a piano, you'll notice the repeating pattern of keys seen in the image here. The most common scale (and typically the first scale you learn when playing the piano) is the C major scale. C is the starting key of the scale; you can see in the image that it begins at C. The C major scale (and its relative minor, A minor - we'll get there) uses only the white keys on the piano, and so the scale goes: C, D, E, F, G, A, B, C.
Of course there are the black keys too. You'll notice that they are in between C & D, D & E, F & G, G & A, and A & B.
Each of these is a half step, or semitone.
Explaining what makes a semitone a semitone and a whole tone a whole tone is complicated. Here's what you really need to know:
A whole step is going 2 keys forward on the piano (counting the black keys). A half step is going 1 key forward. Remember the major scale pattern from the start of this section: whole, whole, half, whole, whole, whole, half.
Let's follow that pattern on the piano, knowing that a whole step is 2 keys forward and a half step is 1 key forward.
We'll start on C. C to D goes from white key to white key, skipping the black key in between - that's a whole step. D to E is the same white-to-white move, another whole step. Next in the pattern is a half step: there is no black key between E and F, so moving up to F is indeed only a half step. F to G is a whole step (skipping the black key in between), G to A is a whole step, and A to B is also a whole step. That's the run of three whole steps our pattern calls for, so the final step should be a half step, and that half step takes us from B back to C, completing the scale.
This whole-whole-half-whole-whole-whole-half pattern is used for all major scales.
There is an audio preview of the C major scale on the C major Wikipedia page - please give it a listen so you can hear what we just discussed!

So what is a minor scale?
Well, to keep it simple, it's basically the major scale made to sound darker. In the C major scale, our root note is C, and going up the scale sounds rather bright and cheerful. By moving the root note down a step and a half (3 semitones - a minor third), we find the relative minor of the scale: A. The relative minor sounds more melancholy than the major scale. You can preview what an A minor scale sounds like on its Wikipedia page - click the A natural minor scale preview. Minor scales follow a different pattern than the major pattern above, because the same 7 notes are now rotated to start on A. The resulting pattern is: whole, half, whole, whole, half, whole, whole.
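If it helps to see the patterns as data, here's a small sketch (in Python, just for illustration) that builds scales from those interval patterns, counted in semitones (2 = whole step, 1 = half step):

```python
NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

MAJOR = [2, 2, 1, 2, 2, 2, 1]          # whole-whole-half-whole-whole-whole-half
NATURAL_MINOR = [2, 1, 2, 2, 1, 2, 2]  # the same notes, rotated to the relative minor

def build_scale(root: str, pattern: list[int]) -> list[str]:
    """Walk the interval pattern up from the root, in semitones."""
    i = NOTES.index(root)
    scale = [root]
    for step in pattern:
        i = (i + step) % 12
        scale.append(NOTES[i])
    return scale

print(build_scale('C', MAJOR))          # ['C', 'D', 'E', 'F', 'G', 'A', 'B', 'C']
print(build_scale('A', NATURAL_MINOR))  # ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'A']
```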

Give this a try on your own - following the patterns, you can discover different scales yourself. If you don't have a keyboard, MIDI controller, or piano, you can use this website to play the piano with your computer keyboard (or your mouse).

What about the black notes?
You will notice in the picture I posted that each black note is labeled with 2 letters, each followed by a symbol. First, let me explain the symbols.
There is a symbol that looks like a lowercase b. This is called a "flat". You may hear musicians say they (or other musicians) are "playing flat". As a note name, flat means the note is a half step below the natural note.
The other symbol looks like a hashtag, #. This is called a "sharp". As with flats, you may hear musicians say they are playing "sharp". As a note name, sharp means the note is a half step above the natural note.
So when it comes to tuning an instrument, you can be flat - meaning your tuning is under the correct note, or you could be sharp, meaning your tuning is above the correct note.

So there are 2 ways to name a black note, and which one you use depends on the type of music you are reading and the scale you are playing in. We're not going to get into the pedantry of this. For this tutorial, let's keep it simple and say that each black note can go by 2 different names, and whichever name you choose is valid. For example, the black note between C and D is both above C (so it's sharp) and below D (so it's flat). You could call this note C sharp, or you could call it D flat.

Octaves - what's that?
Before wrapping up this section on scales, you may be wondering what an "octave" is - I dropped that word a couple of times throughout this section. Since a tone (for example, C) repeats every 8 notes of the scale (counting inclusively, which is where the name comes from), going from one C to the next C is changing the octave. We also number each octave, typically starting on C, with the lowest octave being C0; normally C9 is the top (though in the digital realm it could theoretically go on forever). If you use your piano or the online piano I linked previously and play, say, C4 and C5, you may hear that their tone is similar, but one sounds deeper. Since their fundamental tone is so similar, we consider them the same note - one is simply the same fundamental but higher, the other lower. Thus one is called C4 (the lower) and the other C5 (the higher). Since there are only 7 fundamental tones (plus the respective half steps), we repeat these 7 fundamentals across the keyboard, and each time they repeat, we call it a new octave.
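As a side note for the curious, note name plus octave number maps neatly onto MIDI note numbers and frequencies. A tiny sketch, using the common convention where C4 is MIDI note 60 and A4 is 440 Hz (conventions vary between manufacturers, so treat this as one reasonable choice):

```python
NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def midi_number(name: str, octave: int) -> int:
    """C4 -> 60 under the common 'C4 = middle C' convention."""
    return 12 * (octave + 1) + NOTES.index(name)

def frequency(midi_note: int) -> float:
    """Each octave doubles the frequency; A4 (note 69) is 440 Hz."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

print(midi_number('C', 4), frequency(midi_number('C', 4)))  # 60, ~261.6 Hz
print(midi_number('C', 5), frequency(midi_number('C', 5)))  # 72, ~523.3 Hz (one octave up: doubled)
```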

At this point you have learned 1 of the 2 basic constructors of music. Scales build the tones that create musicality in music; we have seen how repeating interval patterns, such as the whole-whole-half-whole-whole-whole-half pattern of a major scale, let you create and explore different scales; and hopefully we know that the 7 fundamental tones are repeated across the piano to create octaves.
So if we use tones to make sound, we need another component to actually make music, and that component is time.

Rhythm - You Have It or You Don't: That's a fallacy.

Well, this section is a bit more dry and boring than the previous chapters, huh? How about some tunes while we continue?

Music is all about patterns. We have patterns in our fundamental tones in order to create scales. We could consider those tone patterns a very vertical method of generating "music" (and that may be the philosophy behind making a chord :LZSskeptic: but we'll talk about that when we discuss chords). But we move through life not along a single axis but through a second one: time. So the second important constructor of music is time - where you put notes and where you put silence. This idea of when there is a note and when there isn't is called "rhythm", and it typically flows in - you guessed it - a pattern!

Let's take a look at a blank MIDI roll. I've highlighted some of the grid lines to help emphasize them and make them easier to see.
RhythmGrid.png

You'll notice there is a pattern going on here: 1 red line, followed by 3 blue lines - 4 lines in total, marking the 4 beats of each bar.
Welcome to 4/4 time, sometimes called "common time".

Let's start looking at a different demo I have prepared. To keep things simple, the demonstration song is ALSO a video so that you can see rhythm in action inside the MIDI roll:
This is a 1 bar (you can also say 1 measure) drum phrase. In the picture of the blank MIDI roll, this 1 bar would be the space between 2 red lines.
The kick drum plays 4 times during this bar, so these are quarter notes, also written as 1/4 notes. Since the kick always lands on one of the blue lines seen in the blank MIDI roll picture above, we call this the "down beat". We always count a bar by the number of down beats it has.
The snare plays twice per bar, so each hit is spaced the equivalent of a half note (1/2 note) apart. As the name implies, a half note is half the length of a bar, or the equivalent of two 1/4 notes. Since this snare strikes on down beats 2 and 4, we call this the "back beat". The back beat is the beat that separates the down beats evenly - easy to see in 4/4 time, where beats 2 and 4 evenly split the 4 down beats.
Finally, there is the hi hat, which strikes on all of the beats "in between" the kicks. Since the hi hat plays 8 notes in 1 bar, it is playing eighth notes, which we can also write as 1/8 notes. The beat that happens between the down beats is called the up beat.
We can halve the length of notes further. For example, the next note length after 1/8 notes is sixteenth notes, and I have changed our drum pattern to use 1/16 notes on the hi hat.
You can continue to create new note lengths by halving the previous one. From 1/16 notes, the next is 1/32 notes, then 1/64, 1/128, 1/256, and so on. However, it is rare for a song to use even 1/64 notes, let alone 1/128 notes.

Now the beat we have been listening to is a typical dance beat often called the "four on the floor" back beat. It's great for dance music of course, but it is also a little bit static. So here is another drum beat - still using the 4/4 time signature, but this time I have used a break beat/drum n bass style rhythm.
For drum n bass it sure does sound pathetic :rswt2:
That's because another important factor in music is tempo. Tempo, more often called BPM (beats per minute), is the speed at which your song plays. We know the down beat is the beat we count, so BPM is simply how many of those down beats occur in 1 minute. The drum examples I am showing here are at 120 BPM.
Here is a list of some stereotypical BPM ranges for different types of music:
Ballads and waltzes: <100 BPM
Electro & dance: 100 - 130 BPM
Dubstep: 130 - 150 BPM
Rock: 140 - 170 BPM
Drum n bass: 160 - 190 BPM
Weird Hatsune Miku nightcore stuff: 200+ BPM
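If you want to connect tempo and note lengths to actual clock time, the arithmetic is straightforward: at a given BPM, one quarter note lasts 60/BPM seconds, and every other note length scales from there. A quick sketch (assuming the quarter note gets the beat, as in 4/4):

```python
def note_seconds(bpm: float, denominator: int) -> float:
    """Length of a 1/denominator note in seconds, assuming the
    quarter note gets the beat (as in 4/4)."""
    quarter = 60.0 / bpm          # one beat at this tempo
    return quarter * 4 / denominator

print(note_seconds(120, 4))   # quarter note at 120 BPM: 0.5 s
print(note_seconds(120, 8))   # eighth note: 0.25 s
print(note_seconds(120, 16))  # sixteenth note: 0.125 s
```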

Tempo is one factor in the feel of a song. For the most part I have been talking about the 4/4 time signature - so what is that, exactly?
To read a time signature, look at the top number first. Just to be 100% clear, let's switch to 3/4 time temporarily (since both numbers in 4/4 are the same). The top number is 3 and the bottom is 4: 3 represents the number of beats we count per bar, and 4 represents the note length we are counting. So 3/4 time counts 3 beats of 1/4 notes, and 4/4 counts 4 beats of 1/4 notes. We can also have whacky time signatures like 9/8, where we count 9 beats of 1/8 notes.
Here I have taken our drum rhythm and changed it to be a 3/4 drum rhythm:
And just to show how whacky 9/8 is, here is a 9/8 drum rhythm:
(To be honest, that last hanging 1/8 note sounds awkward for this rhythm - I think it might work better as a 4/4 rhythm - but that's what you get with whacky drum rhythms.)
There are more fun things that can be done with time signatures, too. You don't have to keep your song locked to a single time signature for its whole length; midway through, you can change the time signature.
In my Arena song (the one I mentioned in my post about the Generate synth), in the latter half of the song I begin modulating the time signature, with 7 bars of 4/4 then 1 bar of 5/4, until just before the end it completely switches to a 6/4 time signature.
And in my album, "BELIEVE" and the title track "Believe", midway through the song I switch to a 9/8 time signature for the bridge.
Additionally, no one is stopping you from using 2 different time signatures at the same time. This is a technique common in jazz known as polyrhythm (strictly speaking, two meters running at once is "polymeter", but it usually gets lumped under polyrhythm), and it is a complex thing to execute well, especially if you want to use 2 very, very different time signatures. In my album "BELIEVE", the song "Spirit" uses a fairly simple polyrhythm, in which the drums play in 4/4 while the rest of the band plays in 6/4. (I also believe there is a brief 3/4 drum fill later in the song, if I remember correctly.)
However, it should be made very clear: if you are just getting started with music, weird time signatures, changing time signatures, and definitely polyrhythms are advanced techniques, and you should spend a little time in the 4/4 pool before jumping into the weird jazzy pool filled with blackjack and hookers. It's like around here: whenever a fresh patron picks up RPG Maker, we often advise them to just make a game using the RTP stuff and touch plugins and assets later. The same principle applies to music - before you start breaking the rules to do complex jazzy stuff, stick with the well-established stuff.


(So, I spent pretty much all day writing the section below about Rhythm, but after taking a short dinner break and thinking about it, I think it's not very useful. I've bogged you down with way too much text and have likely made a mess out of a simple concept. I decided to completely re-write this section which is what you have read above. However, since it took me a few hours to write, I don't just want to delete it...So I am stuffing it into this Spoiler tag for now. At a later point I will move this cut chapter to a separate post in this thread.)
Let's talk about what this means.
First, how to pronounce "4/4". We don't say "4 over 4" nor "4 slash 4". It's just "Four-four".
We'll come back to how to read this and what it means later. For now, all you need to know is that there are going to be 4 beats (the space between the blue lines) between each bar (the space between each red line).
And yes, in music we call this space between the red lines a "bar" or "measure" - I'll likely continue to call it a bar. You can see in my screenshot of the MIDI roll there are 8 bars present (though I only highlighted 7 bars with a red line...er...I guess technically 6, since I didn't highlight the first bar).

Each note that you place inside your MIDI roll represents a length of time that the note is played for. There are names for each of the possible note lengths that fit within a single bar. Since we are dealing with 4/4 time, and music really likes the number 4, notes are divided by multiples of the numbers 2 and 4.
notes.jpeg
So if we place 1 note across the entire length of 1 bar, we call it a whole note. From here, we use powers of 2 to divide the value of the whole note.
Our first divisor is 2 - in other words, half the length of a whole note, or half the length of 1 bar. This note is called a half note.
The next division is half the length of the half note. This is a quarter note. In 4/4 time, a quarter note takes up 25% of the length of 1 bar - in other words, one quarter of the bar.
Next, we divide the quarter note in half to get a note that is one eighth the length of a bar. As you probably already guessed, we call this an eighth note.
We can divide further to get a sixteenth note, further still for a thirty-second note, and one last division for a sixty-fourth note.
Technically speaking, you can keep dividing infinitely to create more and more note lengths; however, most music rarely uses more than a 1/32nd note. A 1/64th note is somewhat uncommon, a 1/128th note is almost never used, and a 1/256th note is practically unheard of.

Let's give that a listen. I've prepared a simple back beat drum beat for us in 4/4 time. This drum beat will be 4 bars long.
To start let's just listen for the whole note.
It's a little bit boring huh?
Next, we'll add 2 half notes into the mix.
Still pretty boring.
Now, let's add in 4 quarter notes.
Well it's certainly more lively, but it needs a bit more.
Let's add 8 eighth notes.
Certainly more exciting.
Now let's add 16 sixteenth notes.
Well, it's a little more exciting, but we're starting to see diminishing returns in the "excitement" value if we just keep adding more and more notes to this drum part. At 1/32nd notes it might even start sounding messy and noisy, so let's stop there and talk about why it never got particularly exciting.

You need to remember that rhythm is not just about where you put notes - it is also about where you put silence.

So we already talked about a bar. A bar was the space between those red lines, with 4 beats in between - remember?
We typically count music by the number of beats, and 1 beat in 4/4 is going to be a quarter note. The number we count is the top number of our time signature fraction. In 4/4 the top number would be 4...well, that's no good, both numbers are 4. So let's switch to 3/4 time. The number of beats we are going to count is the top number, which is 3. So what is the bottom number?
The bottom number is the unit at which we divide the bar. So in 4/4 time, we are using 4 counts of a quarter note.
So if we were to do something whacky like 5/4, then our time signature is going to be 5 counts of quarter notes.
We could also get more whacky with 6/8, then our time signature is going to be six counts of eighth notes.
Let's hear how some of those sound with a 3/4 example:
Oh dear, that's not going to work with such a static rhythm.
We definitely need to add a snare drum in there to give it a bit more life, but we can't just put it on every quarter or eighth or half or whatever. It's just going to sound noisy and bad.
Let's take a brief detour to counting.
Every time you hear the kick drum in the above example is going to be what we call the down beat. The down beat increments us by 1 until we reach the end of the bar.
So 1 bar is going to be *BOOM*1, *BOOM*2, *BOOM*3, *BOOM*4 | *BOOM*1, etc.
The hi hat has been accented. On the down beat (matching the kick drum) it's just a short "chk"; then we have the eighth note in between the 2 kick drums. This is your up beat. On the up beat the hi hat goes "tshhhh", and when counting we would say "and".
We nearly have the common four-on-the-floor, back beat rhythm. But what's the back beat?
Noticed how the drum and hi hat rhythm goes 1 AND 2 AND 3 AND 4 | 1 AND 2 AND 3 AND 4
So what if we pretend, just for a moment, that the kick drum hitting on 1, 2, 3, 4 is actually playing eighth notes. That would make 2 and 4 fall as up beats - though since they're not really eighth notes, we just count them as 2 and 4. Beats 2 and 4 evenly separate the down beats, so we can use this equal division of the bar as our back beat and accent it with a snare.
As you can see, you can use the equal division of your bar - or even an equal division across multiple bars - to divide and accent important notes in the rhythm.
Now, with that in mind, let's hear some funky time signatures and finally close out this lengthy chapter.
Here is an example of 3/4 time, with the drum rhythm shifted to be more dynamic.
Here is an example of a really whacky time signature, 9/8, again I have shifted and played with the drum rhythm to make it more dynamic.

Before closing out, there is one last piece to this puzzle, and that is length of time. How many seconds or milliseconds is a quarter note?
Well, that is defined by tempo. Tempo, sometimes called BPM or "beats per minute", defines the speed at which the song plays. My koto demonstration song has been playing at 135 BPM, while the drum examples I have been showing you have been playing at 120 BPM.
Changing the speed can change the feel of a song. A fun little experiment: take some songs you already know, bring them into a tool (your DAW or an audio editor like Audacity), slow the fast songs down or speed the slow ones up, and listen to how the feel of the song changes.
Playing with tempo, time signatures, and rhythms is an important part of music. For now, I recommend sticking with the 4/4 time signature until you get more familiar with writing music. Then you can start breaking rules and doing things like polyrhythms, where 2 time signatures play at the same time (say, your drums play in 4/4 while your instruments play in 9/8), or mid-song time signature changes. But for now, stick with a solid 4/4 to get your bearings.

Strike a Chord

Now that we understand a scale, and we understand rhythms, we are able to combine rhythm with notes to create a melody. But how do we decide how that melody should flow?
Well, generally speaking, most music follows a chord structure. A chord is simply 2 or more notes played together to create a complex harmony. The notes that make up the current chord are typically the notes that our melody will follow.
For example, a C major chord is comprised of C, E, and G. A simple major chord follows this pattern: starting from the root note, go up 2 whole steps (a major third) to the next chord tone, then a step and a half (a minor third) to the one after. If we look at our piano, we can see (starting at our root, C) that E is 2 whole steps above the root, and G is a step and a half above E. This 3-note stack is what we call a triad. It's not really important that you remember the name, but keep in mind that a triad is a chord comprised of 3 tones.
Let's see what that looks like:
Tritone.png

So as you can see, stacking notes in specific patterns creates a chord. Chords can contain more than 3 notes, and they can even span more than 1 octave. Rather than describing all the different types of chords, like diminished chords and minors and sevenths - which is where my music theory knowledge begins to get fuzzy - it's better to just play with notes. Play around with different combinations of notes and see what kinds of interesting chords you can create.
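To make the stacking pattern concrete, here's a sketch building triads from semitone offsets (a major triad is root + 4 semitones + 7 semitones; swap the 4 for a 3 and you get a minor triad):

```python
NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

MAJOR_TRIAD = [0, 4, 7]  # root, major third, perfect fifth
MINOR_TRIAD = [0, 3, 7]  # root, minor third, perfect fifth

def triad(root: str, shape: list[int]) -> list[str]:
    """Stack the given semitone offsets on top of the root."""
    i = NOTES.index(root)
    return [NOTES[(i + offset) % 12] for offset in shape]

print(triad('C', MAJOR_TRIAD))  # ['C', 'E', 'G']
print(triad('A', MINOR_TRIAD))  # ['A', 'C', 'E']
```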

Most EDM music uses only a single chord - and sometimes that single chord is comprised of only 2 notes. But beyond EDM, songs begin to be structured around 2 or more chords in order to keep their melody from feeling stagnant. Let's go back to the good ol' koto song and listen to the chords.
It's a bit tough to hear, since the kotos are playing quite chaotically, huh? By the way, this chaotic playing is called tremolo. Tremolo is rapidly varying the volume - or, in this case, rapidly re-striking the instrument. Rapidly varying the pitch while playing is called vibrato.
Anyways, let's use a cleaner tone so it's easier to hear the chords. I'll select a basic triangle wave as my tone.

Now it should be much easier to hear what is going on with the chords. You will hear that each chord is built from a triad - though in actuality there are 5 tones in total making up each chord, because 2 tones from the triad are repeated 1 octave up to give it a full-bodied sound. (A3, C#3, F#3, C#4, F#4)
Now, each chord, which changes every measure, rises up the scale. This is what we call a chord progression. To be clear: not all chord progressions have a rising motion; the rising motion was simply the choice I made for this song, for reasons I will make clear later in this post. What you should take away is that a chord progression is a series of chords that move in sequence. Typically this sequence moves at 1 chord per bar, but that's another rule you will one day learn to break - for now, remember to stay in the 4/4 RTP pool, and visit the jazzy blackjack-and-hookers pool later.
How you decide the way your chords progress is really up to your ears. You can easily find common chord progressions online and use those, though as one pompous British musician put it, these over-used progressions are just "zombie chords". Experimenting may lead you back to zombie chords, or it may create something pretty unique, so give it a try! (And don't be afraid of using zombie chords - in his own pompous words: understandable for beginners, less so for everyone else.)

With that, we have finally completed this chapter. There is a lot left on the table, such as creating bass rhythms and harmonies. I'll leave it to your own internet sleuthing to find out more about those.
As for what we have accomplished here: we've learned what a scale is and the interval patterns that define the common scale types, such as major and minor scales.
We learned about the time divisions that make up rhythm, and with that knowledge we can combine rhythm and scale to create melody.
Finally, we can use chord progressions to help our melody move, by having the melody follow the notes of the chord - and we know that chords are comprised of 2 or more tones, such as the common 3-note triad.

This tutorial has been broken into 2 parts! Continue reading it in the post below, or click here.

Upcoming chapters:
How can I Improve My Music Writing/Production Abilities?
 

Ratatattat

Veteran
Veteran
Joined
Mar 22, 2020
Messages
247
Reaction score
244
First Language
English
Primarily Uses
RMMV
WOW what a coincidence that you just posted this right now and I was here to see it - I've been trying to get back into learning about making good video game music the past few days, saving tutorials to my YouTube "Watch Later" and such (but haven't yet found time to sit and watch them or play around). And then this helpful intro gets dropped at my feet!

Thank you for writing and posting this! I just read up through the jargon section, and plan to come back to read the rest. I'm the type who tends to just mess around with things to figure them out, but I have definitely reached the point with music that I'm like "okay maybe I should go back and actually learn the basics now" :LZSlol: So this will be an extremely helpful launching point! Thanks again, your efforts are seriously appreciated <3
 

gstv87

Veteran
Veteran
Joined
Oct 20, 2015
Messages
2,866
Reaction score
1,925
First Language
Spanish
Primarily Uses
RMVXA
you might want to lead with the definition of DAW and THEN explain the rest.
 

woootbm

Super Sand Legend
Veteran
Joined
Apr 26, 2014
Messages
336
Reaction score
295
First Language
English
Primarily Uses
RMMV
Very interesting. I've always been interested in how folks make music. I do have a music buddy who has a sound studio in his garage with a nice DAW, but he does it the analogue way with recording non-digital instruments.

And when I was a kid I had access to some music notation software (my dad's, he's a music major) so I wrote my janky MIDI's using that (Finale, in my case). So I always wondered how people go about making the digital music out there.
 

TheAM-Dol

Randomly Generated User Name
Veteran
Joined
Feb 26, 2022
Messages
241
Reaction score
242
First Language
English
Primarily Uses
RMMV
Purpose of Breaking off the Tutorial from the Main Post to a Separate Post:
It seems I have discovered another "bug" on the forums. Although "bug" is probably not the best way to describe it: it seems I may have hit a soft cap on characters in my original post - though it's hard to say for sure, since I am only presented with a non-descript error message when making further edits to the post. So far, whenever I delete large chunks of the text I just added, the error disappears, which is why I believe it's related to a character limit.
As such, I've had to move the remainder of the tutorial into this second post. Had I known this would be a problem in advance, I would have reserved a post just below the main post for editing. Oh well - this is giving me motivation to move all of this into a PDF file, which may be best for those who wish to reference it often.

Your Music Creation Tools: Effects.

At this point, hopefully you are able to pluck out a few notes and maybe make a tune or two. However, when it comes to pre-recorded music, a song often lives and dies by how it's mixed. It is surprising how a bad song can suddenly become good with the right mixing behind it, and how a masterfully composed song can become unbearable to listen to with poor mixing. (It's a different story when performing live.)

So in this section I am going to go over the important tools you will use to mix your music. This is the part that saves your player's ears: if you don't get it, your player's ears will start bleeding, and if you do get it, they will find your game a far more enjoyable experience.

As I mentioned in the glossary, there are a large number of effects that achieve all sorts of different sound qualities: sometimes these effects are for signal processing and mixing, sometimes they are for flavor and adding character to a sound. We are primarily going to focus on signal processing in this thread. For adding flavor to your music, I definitely encourage you to take a more explorative and experimental approach - throw a guitar amp on a piano, put a flanger on a violin, turn the knobs to 11 and go nuts. There is no right answer, and the only wrong answer is something that sounds absolutely terrible to you.


Before we begin, there are 2 free VST plugin collections I would like you to download.

The first is going to be the ReaPlugs collection. These are Reaper DAW's default plugins, but don't worry if you decided to use a different DAW than Reaper. These plugins are free for everyone to use.

Next, we should also grab Melda Audio's superb plugin collection. Their plugins are donationware: you are not required to donate if you choose not to, though if you don't, the plugins may show a banner at the bottom requesting that you donate.
(Actually, I think they changed this business model at some point - it might now be a free version plus a paid upgrade...)

Both of these FX bundles are incredibly powerful tools that most musicians use - I started using ReaPlugs long, long before I ever downloaded Reaper DAW. The ReaPlugs are very transparent in their signal processing, which is great for mastering or when you simply want to make changes to a sound without severely changing the original sound.

Melda plugins (which will be called M plugins from here on) are powerful, offer a lot of audio monitoring tools built into the plugins themselves, and are very effective. Their saturation algorithm is gentle but adds some nice warmth. Their plugins are still fairly transparent, though not as transparent as the ReaPlugs.


Installation instructions for Plugins:

(you can skip this if you already know how)

I don't remember if ReaPlugs and M plugins include an installer. In the event they don't, you'll need to place the plugins inside a folder - typically your DAW will create a VST folder under C:/Program Files/VST (also Program Files (x86) for old 32-bit plugins). Occasionally DAWs create a /Program Files/Steinberg/VST folder instead. You are not required to put the plugins into this folder, as you can go into your DAW's options and specify which folders to scan for plugins. Some plugins include an install wizard packaged as an .exe or .msi file, in which case the wizard will install the plugins in the appropriate place for you - though it's still best practice, whenever installing ANYTHING, to read through the installer carefully and make sure things are going where you want them to go.

Most DAWs will either require a restart when you install new plugins, or offer a "check folder" option to re-scan the plugin folder while the DAW is still open (it's usually just easier to close and re-open the DAW, so don't bother with the check-folder button).

Now that we have some tools, let's get started mixing.

The Most Important Tool In Your Arsenal: Equalizers and How the Heck You Use Those.

Let's start with ReaEQ:
ReaEQ.png

So, what exactly is an EQ for?

EQs can be used in many different ways. In short, they are used to change the sound profile of the incoming signal.
You can either change the sound to enhance certain aspects of it (perhaps you have a snare drum that's missing some *umph* in its body, so you use an EQ to add some bass and give the snare a fuller body), or you can use an EQ so that sounds sit better in the mix (for example, you may have a bell-like instrument, like a celesta, with harsh resonant frequencies; you can use an EQ to reduce those harsh frequencies).

There are 2 important workflows for EQ's which I've already given you examples of above:
  • Using an EQ Additively - in other words, to enhance the sound
  • Using an EQ Subtractively - in other words, to reduce a sound.
In many cases (but not always) you will likely need to use 2 EQs.
The parameters in ReaEQ are going to be nearly identical in all EQ plugins, though the layout of the interface may change. Everything you learn here about ReaEQ can be applied to any other EQ - for example, M EQ. Look at the numbers in the picture posted above; these are the figures I describe below.
1) These are your bands. Bands are the individual areas that will be affected by the EQ. You can move a band up and down the frequency spectrum by clicking and dragging it, or (my preferred method) by typing its frequency into the frequency field seen in figure 3.
2) This is the band type. We'll discuss this more soon, but in short, your band can have different shapes, and these shapes affect which frequencies the band touches - the shape is selected through the band type.
3) This is the frequency at which the band will affect the incoming signal. Refer to the glossary if you don't remember what frequency and the frequency spectrum are.
4) Gain is how much this band will affect that frequency. Obviously, at 0 gain (as it currently is in the picture) we won't be affecting anything.
5) The bandwidth is how wide the band is. A band can affect a narrow range of frequencies or a wide range (see the short sketch after this list).
6) In ReaEQ and many other EQs, the number of bands can be changed. By default ReaEQ only has 4 bands; however, ReaEQ supports an unlimited number of bands, so you could do a lot of signal processing inside a single instance of an EQ. (Personally, things start to feel too crowded beyond 7 bands, so in the rare cases I need more, I just use a second instance so it's easier to see what I'm working with. ReaEQ is very light on the CPU, so even a potato computer can likely run hundreds of instances without a problem.)
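One detail that trips people up is what "bandwidth" means in numbers. EQs usually express it in octaves (or as the related "Q" value); a band of a given width reaches from the center frequency divided by 2^(bw/2) up to the center multiplied by the same factor. A quick sketch of that arithmetic (just the math, not any particular plugin's API):

```python
def band_edges(center_hz: float, bandwidth_octaves: float) -> tuple[float, float]:
    """Frequency range covered by an EQ band of a given width in octaves."""
    half = 2 ** (bandwidth_octaves / 2)
    return center_hz / half, center_hz * half

# A 2-octave-wide band centered at 1000 Hz spans 500 Hz to 2000 Hz.
print(band_edges(1000, 2))  # (500.0, 2000.0)
```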

The first EQ is optional:
For mixing purposes, you generally first want to make the instrument sound the way you want it to sound - so work additively first. For example, many singers have a "sweet spot" in their voice along the frequency spectrum. Using an EQ to boost that sweet spot may make their voice sound more exciting and cut through the mix more easily. In this case, working additively means we are adding to - boosting - the gain of the bands in our EQ.

After you work additively, try not to add anything else on top. Any EQ after your "additive" EQ should only be subtractive. When balancing a mix, you should always work subtractively. Every track in your mix is going to have an EQ on it 99% of the time (there are weird outlying cases, but those are the exception, not the rule), and every one of those EQs should be working subtractively - you should be moving the gain of the bands down. Subtractive EQing is not optional.


A short hiatus
This project is cancelled.

After some consideration, I have decided to cancel this project. This is due to a number of reasons:
1) Originally this project was going to be a short overview, but it obviously became far more ambitious and bloated - a project much larger than intended.
2) This project was started under the misguided notion that there were no existing resources like this on the RPG Maker forums. That's verifiably false with only minimal research - many other beginner music tutorials already exist here.
3) This project was started during my summer vacation, when I had more time to work on it. Unfortunately, now that the vacation is over, finding time to work on this is virtually impossible.


One final note to those learning to write music - here is how you can improve as a musician:
Listen to way more music, and don't close yourself off from different types of music. Learn to appreciate all music; even if it's a genre you don't like, learn to understand why that genre exists.
Write more music. Do quick 1:00-1:30 ditties from time to time. Try a One-Synth Challenge. Write lots of music, good or bad - don't worry about the quality, just focus on finishing a song.
On a similar note: Put into practice every new concept you learn. If you learn something cool, something new, or something weird: make a song trying to do that. Even if your execution is poor, at least you tried.
Here is a list of songs I enjoy: some of them weird, some of them interesting, some I should feel ashamed for sharing.
Röyksopp - Here She Comes Again
The Chemical Brothers - Another World
The Humble Brothers - Epicenter (Sim City 4 Soundtrack)
Brasstronaut - Insects
Massive Attack - Psyche
Combichrist - California Uber Alles
Esno - CME
Iron & Wine - Boy With A Coin
Kiltro - Curico
The Cinematic Orchestra - Breathe
Archive - Collapse Collide
Sawa - ijippari Mermaid
Sayonara Ponytail - Dance Floor to Kabochanoubake
The Nephilim Novel - Bless

No, I definitely didn't link one of my own songs :LZSangel:

Thank you to everyone who's read through this, and best of luck on your music-making journey. It's going to be a long one, so make sure you are having fun.



(Original post before adapting this thread for my tutorial)
Noted and fixed. The reason it was ordered the way it was before - alphabetically - is that the section is intended to be used as a glossary. While introducing terms in order of necessity is great for a first read, finding specific terms in that order later for reference is not really ideal.
However, due to the importance of DAW, I have moved it to the top of the list and added a note about why it's at the top, but left the remaining terms in alphabetical order.


Changelog
08/14/2022
  • Improved some formatting in the section about DAWs, including moving the paradigm information so that it makes more sense.
  • In the summary of which DAW you should choose, I added some information about why I chose the DAWs that I did.
  • I also added a final section to the DAW chapter that advises the reader to either read the manual of the DAW they chose or watch a basic "getting started" tutorial.
  • Began writing the chapter on Effects.
  • This chapter so far includes information about some free effects bundles that users should download
  • Likewise, it also includes information on how to install plugins
  • I began writing about how to use an EQ.
08/13/2022
  • Cleaned some more grammar and typos.
  • Added the new chapter all about music theory!
  • Which means I: Added the section about scales
  • And I: Added the section about rhythm.
  • And I: "Deleted" the section about rhythm
  • And I: Rewrote the section about rhythm
  • And I: Wrote the section on chords
08/12/2022
  • Added new parts to the Preamble section, including "Who is this for?" and "What do you cover in this course?". *Note: the "What do you cover in this course?" section is incomplete and will remain so until the entire course is finished being written.
  • Added "Post Processing" and "Instrument Parts/Takes" to the glossary.
  • Added a section on the 2 workflow paradigms found in DAWs, "Recording-based workflow" and "Loop-based workflow", just before the section that describes each DAW
  • On a similar note, at the end of each DAW description I have added a note about which paradigm the DAW uses.
  • Fixed some typos and formatting, and squashed some of those spoiler tag bugs. (Apparently it took refreshing the whole page for some hidden ninja misplaced spoiler tags to show up in the post. How or when they got there is beyond me. It's like the website just vomited spoiler tags all over my post that I know for sure I never placed there.)

 

TheoAllen

Self-proclaimed jack of all trades
Veteran
Joined
Mar 16, 2012
Messages
6,907
Reaction score
9,440
First Language
Indonesian
Primarily Uses
N/A
More music composition tutorials are always welcome :)

From my experience teaching someone to make music (at least digitally), the harder part is getting the rhythm right. Oftentimes they just scramble notes in the piano roll, and they don't utilize the tempo/BPM.

In my opinion, this should be the first thing taught to anyone. Get this right, and everything can follow.
I've linked this tutorial to my own tutorial for cross-reference, if you don't mind.
 

TheAM-Dol

Randomly Generated User Name
Veteran
Joined
Feb 26, 2022
Messages
241
Reaction score
242
First Language
English
Primarily Uses
RMMV
In my opinion, this should be the first thing taught to anyone. Get this right, and everything can follow.
I guess that shows the difference in workflow paradigms at work :LZSwink:
As I pointed out in that new section I added about DAW workflow paradigms, one way is to see the DAW as a tool to help improve the musician, and the other way is to see the DAW as your instrument to become a musician.

With that in mind, if you follow the second paradigm of seeing the DAW as your instrument, then you probably wouldn't teach a new guitarist all about music theory before they've even touched the instrument - it just wouldn't make sense.



So this project is a lot, a lot bigger than I thought...sheesh, to think I originally thought this would just be a four-hour, knock-it-out-in-one-afternoon kind of project. I didn't get as much work done on this as I did yesterday due to some work I needed to get done that is not related to writing a college essay on music production. Here are the updates:

  • (The same items listed under 08/13/2022 in the changelog above.)
Will I finish this project tomorrow? Who knows. I thought this was just going to take 1 day, but after spending 12 hours writing on the first day, I thought, "Well, maybe just 2 days" - and here we are at the end of the second day, and it is still not finished. I didn't think this would be a long-term project, but fingers crossed the whole thing is done tomorrow!
 

TheoAllen

Self-proclaimed jack of all trades
Veteran
Joined
Mar 16, 2012
Messages
6,907
Reaction score
9,440
First Language
Indonesian
Primarily Uses
N/A
With that in mind, if you follow the second paradigm of seeing the DAW as your instrument, then you probably wouldn't teach a new guitarist all about music theory before they've even touched the instrument - it just wouldn't make sense.
Fair point.
But then again, instead of starting with an expensive or complicated DAW, I started with a free MIDI sequencer (one that can't even use VSTi). It's like starting on an electric keyboard with synthesized sounds instead of an expensive grand piano - it'll do to get started.

Good luck finishing the tutorial.
 

Shaz

Global Moderators
Global Mod
Joined
Mar 2, 2012
Messages
44,914
Reaction score
15,980
First Language
English
Primarily Uses
RMMV

I've moved this thread to Tutorials. Thank you.



I wasn't really sure where to put this - it's not specifically an RPG Maker tutorial, but it's very helpful and I think it would get lost if I put it in Useful Development Tools. Art, Literature and Music isn't really the place either.
 

TheAM-Dol

Randomly Generated User Name
Veteran
Joined
Feb 26, 2022
Messages
241
Reaction score
242
First Language
English
Primarily Uses
RMMV
I wasn't really sure where to put this
Thanks, I wasn't really sure where to put it either. Since it was music related, I figured it would just be better to put it there and let the mods figure it out if I was wrong :LZSlol:
 
Joined
Jul 7, 2022
Messages
3
Reaction score
4
First Language
English
Primarily Uses
RMMZ
I have a background as a professional musician and a formal education in music.

Thanks so much for sharing your knowledge and helping out the community (for free!). Don't worry about being thought of as a "gatekeeper" (a term that's overused these days, and often used incorrectly). Music actually does have rules and laws, some of which are based purely on physics!

Anyway, I won't make a long post here, but just a sort of wink and a nod - you and I both know it's a good thing you wrote this. It will help any interested reader raise the production value of their amateur or hobbyist music and raise the standard of quality in their RPG Maker games.

Very kind of you to help others in this way and to help improve the community. I have helped many amateur or self-taught musicians grow by leaps and bounds simply by explaining what a tonal center is, and that the function of the dominant chord is generally to take our ear back to the tonal center. Of course, they'd already written many pieces using this principle; they just hadn't become consciously aware of it yet. It took a 5-to-10-minute explanation and a couple of example songs, and it took their writing - and their enjoyment - to the next level!
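A tiny sketch of that dominant-to-tonic pull, for readers who want to see it spelled out (the example and the helper function below are my own illustration, not from the poster above): in C major, the dominant seventh G7 (G-B-D-F) resolves to the tonic C (C-E-G), and two half-step motions do the dragging.

```python
# A small sketch of why the dominant chord pulls back to the tonal center,
# using C major as the example. The helper below is hypothetical, written
# only for this illustration.
NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def semitone_gap(a: str, b: str) -> int:
    """Smallest signed distance in semitones from note a to note b."""
    d = (NOTES.index(b) - NOTES.index(a)) % 12
    return d - 12 if d > 6 else d

# G7 (G-B-D-F) resolving to C major (C-E-G): the leading tone B rises a
# half step to the tonic C, and the chordal seventh F falls a half step
# to E. Those two half-step resolutions are the "pull" toward home.
print(semitone_gap('B', 'C'))   # +1  (leading tone up to the tonic)
print(semitone_gap('F', 'E'))   # -1  (seventh down to the third)
```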

:) Thanks again, nice to read.
 
