On Saturday, July 21, we welcomed Carly Beath, musician and audio engineer, to Bento Miso for her New Game Makers lecture and workshop, Creating Music: Audio Production for Games.

Here’s a recap of the session, in case you missed it!

Overview

  1. Signal flow (how do I connect things)
  2. Synths, samplers, apps, and other ways to make bleeps and bloops
  3. Setting up a session
  4. Troubleshooting (aka avoiding frustration)
  5. Working with audio tracks and MIDI tracks
  6. Making Sounds
  7. Recording and editing sounds
  8. Effects and mixing
  9. Games

Signal flow

The computer is connected, both in and out, to the sound card (usually through FireWire or USB). The sound card has an external output to speakers, and it can accommodate MIDI devices such as keyboards and other control surfaces, with an output for communication back to the computer. Finally, sound cards have inputs to receive signals from synths, microphones, guitars, and other instruments.

Key point: What goes out, must go in!

Common connectors

  • 1/8” (like your iPod/iPhone headphones)
  • 1/4”
  • XLR (female and male – often used for microphones)
  • RCA (common on powered speakers)
  • MIDI (doesn’t transmit sound, only data!)

Synthesizers

Synthesis is the manipulation of a signal generated by a hardware or software synthesizer. Synths often look like keyboards, but not always.

  • Ribbon controller (such as the Swarmatron, Stylophone, and Monotribe)
  • Touchpads (such as Kaossilator)
  • Synths played using external controllers (like the Mopho)
  • Theremin
  • Light beam

Samplers

Samplers allow you to trigger pre-loaded snippets of recordings. You can sample pretty much anything that makes a sound!

Software synths/MIDI

Software synths live inside DAWs (digital audio workstations); Monster is one example.

Apps, DS games, and toy keyboards

  • Sound Drop (iOS)
  • Bebot (iOS)
  • GyroSynth (iOS)
  • Korg DS10 (Nintendo DS)
  • Electroplankton (Nintendo DS)
  • Speak & Spells and other circuit-bent noisemakers

Setting up a session

To work with all of these ins and outs and get them recorded into your computer, you’ll need a digital audio workstation (DAW). The DAW allows you to record, play back, sequence, arrange, and otherwise process your audio and MIDI signals.

  • Audio tracks (microphones, guitars, synths, samplers, etc.)
  • Instrument tracks (MIDI-based tracks; usually virtual synth sounds inside the computer being controlled by a MIDI controller)
  • Mono vs. Stereo – 99% of the time, you’ll be using mono tracks as you can mix the left and right spatial relationship of your sounds inside the DAW.

Set your Ins and Outs

  • On audio tracks you’ll select both ins and outs.
  • On software instrument tracks, you’ll select an output. Instead of an input, you’ll select an insert plug-in.

Tracks vs Channels

It’s sort of confusing! But here’s a way to think about it:

  • Tracks = space on a reel-to-reel tape; lanes for your information to be recorded.
  • Channels = strips on the mixing board; where you affect the final output of the sound/data you’ve recorded.

Troubleshooting

  • Check that everything is turned on and plugged in.
  • Check your volume knobs and sliders.
  • Check that your track is record armed and that it’s not muted.
  • Check that your ins and outs are set properly.
  • Follow your signal flow! Slow down and take the time to look at how everything is connected and where it’s going; it’s easy to make a small mistake.

Working with Audio vs. Working with MIDI

Audio:

  • The instrument you record is the instrument you hear.
  • You are recording external sounds.
  • Editing is done by slicing waveforms (cutting, duplicating, deleting).

MIDI:

  • Instrument sounds can be changed after recording.
  • You are recording sounds that live inside the computer, using an external controller.
  • Editing is done by moving individual notes.

If you’re using a mix of audio and MIDI tracks:

  • Make sure you use a metronome (helps you keep your tempo!)
  • It’s a good idea to start with a MIDI track, such as percussion, and layer audio over it so you have a good foundation to work with.

Making Sounds

Since all sounds produce waveforms, sounds can be generated or manipulated through a few basic concepts.

Oscillators (waves)

Different waveforms are combinations of fundamental frequencies and harmonics, and oscillators can generate them. These can be used to produce a simple sound or to manipulate other audio waveforms. Basic waveforms include:

  • Sine
  • Square
  • Sawtooth
  • Triangle
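
To make those shapes concrete, here’s a minimal NumPy sketch that generates one second of each basic waveform. The function name and constants are our own illustration, not any particular synth’s API:

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second (CD quality)

def oscillator(shape, freq, duration, sr=SAMPLE_RATE):
    """Generate a basic waveform as a float array in [-1.0, 1.0]."""
    t = np.arange(int(sr * duration)) / sr
    phase = (freq * t) % 1.0              # position within each cycle, 0..1
    if shape == "sine":
        return np.sin(2 * np.pi * phase)
    if shape == "square":
        return np.where(phase < 0.5, 1.0, -1.0)
    if shape == "sawtooth":
        return 2.0 * phase - 1.0          # ramps from -1 to +1 every cycle
    if shape == "triangle":
        return 2.0 * np.abs(2.0 * phase - 1.0) - 1.0
    raise ValueError(f"unknown shape: {shape}")

# One second of a 440 Hz (concert A) tone in each flavour:
waves = {s: oscillator(s, 440.0, 1.0)
         for s in ("sine", "square", "sawtooth", "triangle")}
```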

Envelopes (ADSR)

Shape sounds over time with Attack, Decay, Sustain and Release controls.

  • Attack (time from zero to peak)
  • Decay (time from peak to sustain level)
  • Sustain (level during duration of sound)
  • Release (time from sustain to zero)
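
As a rough illustration of how those four stages combine, here’s a simplified linear ADSR envelope built as an array of gain values you multiply into a waveform. Times are in seconds and sustain is a 0–1 level; real synths shape these curves differently:

```python
import numpy as np

def adsr(attack, decay, sustain, release, note_length, sr=44100):
    """Linear ADSR envelope: gain values (0..1) to multiply into a signal."""
    a = np.linspace(0.0, 1.0, int(sr * attack), endpoint=False)     # zero to peak
    d = np.linspace(1.0, sustain, int(sr * decay), endpoint=False)  # peak to sustain
    hold = max(0, int(sr * note_length) - len(a) - len(d))
    s = np.full(hold, sustain)                                      # held while the note sounds
    r = np.linspace(sustain, 0.0, int(sr * release))                # sustain to zero
    return np.concatenate([a, d, s, r])

# A plucky shape: fast attack, quick decay, moderate sustain, short tail.
# `square_wave` stands in for any waveform array of matching length:
# env = adsr(attack=0.01, decay=0.1, sustain=0.6, release=0.3, note_length=1.0)
# shaped = square_wave[:len(env)] * env
```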

Filters

Control the tone of a sound with the cutoff dial and its “fizziness” with resonance.

  • Low pass filter
  • High pass filter
  • Band pass filter
  • Notch filter
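
A DAW’s filters are more sophisticated, but a one-pole low-pass filter shows the core idea: each output sample blends the new input with the previous output, smoothing away frequencies above the cutoff. A minimal sketch using the standard first-order RC coefficient, not any particular plug-in’s design:

```python
import numpy as np

def low_pass(signal, cutoff_hz, sr=44100):
    """One-pole low-pass filter: attenuates content above cutoff_hz."""
    signal = np.asarray(signal, dtype=float)
    dt = 1.0 / sr
    rc = 1.0 / (2 * np.pi * cutoff_hz)
    alpha = dt / (rc + dt)          # how much of each new sample to let through
    out = np.empty_like(signal)
    out[0] = alpha * signal[0]
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + alpha * (signal[i] - out[i - 1])
    return out

# A crude high-pass is the remainder: highs = signal - low_pass(signal, cutoff)
```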

Amplifier

  • Anything to do with volume is controlled here.

LFO (low frequency oscillator)

LFOs are control oscillators whose wave shapes can be used to modulate filters and amplification.

  • A signal below 20 Hz that modulates a sound
  • Creates a pulsing effect
  • Vibrato, tremolo
  • If you’ve heard dubstep, you’ve heard an LFO at work (wub, wub, wub)
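
Here’s a minimal tremolo sketch showing the idea: a slow sine wave, well below the 20 Hz audible floor, multiplied into a sound’s volume. The function name and defaults are illustrative:

```python
import numpy as np

def tremolo(signal, lfo_freq=5.0, depth=0.5, sr=44100):
    """Amplitude modulation: a sub-20 Hz LFO pulses the volume."""
    t = np.arange(len(signal)) / sr
    lfo = np.sin(2 * np.pi * lfo_freq * t)      # slow sine, -1..1
    gain = 1.0 - depth * (0.5 + 0.5 * lfo)      # sweeps between 1 and 1 - depth
    return signal * gain

# Drop lfo_freq to 1-4 Hz on a bass patch and you approach the dubstep wobble;
# in a real patch the LFO would usually modulate a filter cutoff, not just volume.
```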

Recording and editing sounds

Levels/gain staging

When recording audio into your DAW, you want to ensure that you avoid clipping (going into the red) so that you don’t introduce distortion into your track.

This can be done by controlling the volume of the device you’re recording from and balancing it with both the hardware gain of your sound card and the DAW’s input control.

This is also important to consider for the sounds you’ve assigned to your MIDI notes, except that all of the gain and level controls are in your software, since all of the sounds come from inside the computer.
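
If you’re curious about the math behind the meters, here’s a small sketch that measures a signal’s peak in dBFS (decibels relative to full scale, where 0 dBFS is the clipping point). The headroom figure is a common rule of thumb, not a fixed standard:

```python
import numpy as np

def peak_dbfs(signal):
    """Peak level in dB relative to full scale (0 dBFS = clipping)."""
    peak = np.max(np.abs(signal))
    return -np.inf if peak == 0 else 20 * np.log10(peak)

def leaves_headroom(signal, headroom_db=6.0):
    """True if the signal stays at least `headroom_db` below full scale."""
    return peak_dbfs(signal) <= -headroom_db

# A take peaking around -6 dBFS leaves room for effects and summing later:
# print(peak_dbfs(recorded_take))   # assumes a float array in [-1, 1]
```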

Quantization

Even the best performers need a little bit of help when laying down a track, and quantization is the help that keeps your timing together.

Quantization snaps notes played through MIDI to the note grid based on your preference (whole note, quarter note, triplet, etc.). It can be done as you’re recording MIDI data, or you can highlight individual notes or sections in your track to be quantized selectively.
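
Under the hood, quantizing is just rounding each note’s start time to the nearest grid line. A minimal sketch, assuming note start times are stored in beats (not tied to any real MIDI library):

```python
def quantize(note_times, grid=0.25):
    """Snap note start times (in beats) to the nearest grid line.
    grid=0.25 is a sixteenth note when a beat is a quarter note;
    grid=1/3 would snap to triplets."""
    return [round(t / grid) * grid for t in note_times]

# A slightly rushed and dragged performance, pulled onto the grid:
played = [0.02, 0.98, 2.10, 2.95]
print(quantize(played))  # [0.0, 1.0, 2.0, 3.0]
```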

Recorded audio generally can’t be quantized, but some DAWs, such as Ableton Live, allow you to warp the audio to different tempos by automatically stretching and compressing the waveform without changing its pitch.

Transposing Notes

After MIDI data has been recorded, you can manually transpose the notes in your track up and down the keyboard scale, simply by highlighting the notes and dragging them up and down with your mouse or nudging them with the arrow keys on your computer’s keyboard.

This will allow you to change key, chord values and more to match what you want to hear in relation to the rest of your sounds.
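
In MIDI, each note is just a number (60 is middle C, and each step is a semitone), so transposition is simple addition. A toy sketch:

```python
def transpose(notes, semitones):
    """Shift MIDI note numbers up or down; 12 semitones = one octave."""
    return [max(0, min(127, n + semitones)) for n in notes]  # MIDI range is 0-127

c_major = [60, 64, 67]          # C4, E4, G4
print(transpose(c_major, 7))    # [67, 71, 74] -- now a G major chord
print(transpose(c_major, -12))  # [48, 52, 55] -- same chord, one octave down
```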

Recorded audio can’t be transposed this way, but its pitch can often be shifted to produce higher or lower sounds. The quality of the audio degrades quickly, though; think of it like stretching gum: pull it too far and it begins to tear.

Changing Sounds

MIDI tracks can have their sounds changed with a click of a mouse. Simply find where your DAW keeps its instruments and select one—suddenly the piano part you played using your DAW’s piano sound is now a xylophone.

A DAW’s instruments can be set up in many ways, so a continuously chromatic instrument like a piano may become a percussion instrument where each note is a different part of a drum kit, for example, as opposed to a progression of notes. Experiment and keep track of your favorites.

As mentioned above, audio recordings can’t be changed in this way as their waveforms have been recorded, not the action of the performance.

Moving Sounds

This applies to audio that’s been recorded into your DAW. The waveforms can be chopped up, duplicated and deleted by highlighting the desired portions.

They can also be moved along the length of the track, with the individual slices aligned to the timing grid used for your composition (e.g., 4/4, 3/4).

This makes combining MIDI and audio tracks easier to manage and opens up limitless composition possibilities.
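
Since recorded audio is just an array of samples, chopping and moving amount to slicing and pasting. A rough sketch of placing copies of a clip on a beat grid; it assumes a mono float array and a known tempo, and `snare_clip` is hypothetical:

```python
import numpy as np

def place_on_grid(clip, beats, bpm, sr=44100, total_beats=8):
    """Paste `clip` onto an empty timeline at the given beat positions."""
    samples_per_beat = int(sr * 60 / bpm)
    timeline = np.zeros(total_beats * samples_per_beat)
    for beat in beats:
        start = int(beat * samples_per_beat)
        end = min(start + len(clip), len(timeline))
        timeline[start:end] += clip[: end - start]   # layer the slice in
    return timeline

# Duplicate a snare hit on beats 2 and 4 of two 4/4 bars at 120 BPM
# (beat positions are zero-indexed here):
# loop = place_on_grid(snare_clip, beats=[1, 3, 5, 7], bpm=120)
```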

Mixing Sounds

To avoid having all of the sounds in your tracks come together in a muddy, confusing mess, you’ll need to apply various effects and monitor your levels to create space for your composition to breathe.

Levels. Refers to the playback level of each track, monitored by the meter on each channel (usually measured in decibels) in your DAW.

The level can be adjusted in dB with the controls in your DAW, and often can be automated to change over time in your track.

While the goal is to always avoid clipping, you may want to layer your sounds by adjusting their levels to allow for a more dynamic sense of space in your composition, as well as effects such as fading in and out.
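
For the curious, fader math follows a simple formula: a dB change maps to a linear gain multiplier of 10^(dB/20), so turning a track down 6 dB roughly halves its amplitude. A small sketch, including a toy fade-out like the automation mentioned above:

```python
import numpy as np

def db_to_gain(db):
    """Convert a decibel change into a linear amplitude multiplier."""
    return 10 ** (db / 20.0)

def fade_out(signal, start_db=0.0, end_db=-60.0):
    """Automate level over time: ramp smoothly from start_db down to end_db."""
    db_curve = np.linspace(start_db, end_db, len(signal))
    return signal * db_to_gain(db_curve)

print(db_to_gain(-6.0))  # ~0.501 -- a 6 dB cut halves the amplitude
```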

Reverb. Refers to the real or artificial effect a physical room has on sound waves. For example, reverb in a DAW can simulate what the piano part you played would sound like in a concert hall (open, acoustically tuned) vs. an old concrete factory (cavernous, with lots of hard surfaces for sound waves to bounce around in).

The effect’s intensity and how much of the original sound is mixed in (dry/wet) are controlled with the parameters in your DAW and can often be automated over time, like having every fourth bass drum hit sound like it’s coming from inside a steel drum, for example.

Delay. Refers to the process of buffering or sampling incoming audio information and then delaying the signal before playing it back to create a repeating or decaying echo sound.

There are different types of delay that can be locked to different frequencies, timing divisions and other parameters.

Flange, chorus and slapback echoes are examples of common pre-set delay effects available in most DAWs.
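
The buffering-and-feedback mechanism described above is easy to sketch offline: delay the signal, mix the delayed copy back in, and feed some of the output back so the echoes decay. Parameter names here are illustrative:

```python
import numpy as np

def delay_effect(signal, delay_sec=0.3, feedback=0.4, mix=0.5, sr=44100):
    """Feedback delay: echoes repeat every delay_sec, fading by `feedback`."""
    d = int(sr * delay_sec)                           # delay length in samples
    wet = np.concatenate([np.asarray(signal, dtype=float), np.zeros(d)])
    for i in range(d, len(wet)):
        wet[i] += feedback * wet[i - d]               # feed delayed output back in
    # Blend dry (original) and wet (echoed) signals; the echo tail past the
    # original length is truncated to keep the sketch simple.
    return (1 - mix) * signal + mix * wet[: len(signal)]
```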

Panning. Refers to controlling the distribution of a sound across the stereo field. In other words: focusing a channel’s output more toward the left or right speaker, or distributing it more evenly.

Most DAWs can automate this effect allowing you to pan a sound across the stereo spectrum at different rates.
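
Panning boils down to a pair of gains. The widely used constant-power pan law below uses sine and cosine so the perceived loudness stays even as a sound moves across the field:

```python
import numpy as np

def pan(signal, position):
    """Constant-power pan. position: -1.0 = hard left, 0.0 = center,
    1.0 = hard right. Returns (left, right) channel arrays."""
    angle = (position + 1) * np.pi / 4     # map -1..1 onto 0..pi/2
    return signal * np.cos(angle), signal * np.sin(angle)

# At center, each side gets ~0.707 of the signal, keeping total power constant.
# To automate a left-to-right sweep, pass a per-sample position array instead:
# positions = np.linspace(-1, 1, len(mono_signal))
# left, right = pan(mono_signal, positions)
```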

Compression. Refers to the narrowing of a channel’s dynamic range by amplifying quiet sounds or reducing the volume of louder sounds.

Compression is used to tame sources with a wide dynamic range (such as drum kits) so they sit well in the mix. Effectively, this means losing some nuanced detail in the sound but bringing the levels of the frequencies that make up the sound closer together. The effect of the compression can be tuned via its loudness threshold and its attack and release times.

Compressors can be side-chained in a DAW so that a bass drum has a “ducking” effect on a channel with a more continuous tone, creating a sound that seems to ebb and flow with the bass drum.
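
A bare-bones compressor in code: once the level crosses a threshold, everything above it is scaled down by the ratio. This sketch skips the attack/release smoothing a real compressor applies, to keep the core idea visible:

```python
import numpy as np

def compress(signal, threshold_db=-20.0, ratio=4.0):
    """Narrow dynamic range: level above the threshold is divided by `ratio`."""
    eps = 1e-10                                         # avoid log10(0)
    level_db = 20 * np.log10(np.abs(signal) + eps)      # instantaneous level
    over = np.maximum(level_db - threshold_db, 0.0)     # dB above threshold
    gain_db = -over * (1.0 - 1.0 / ratio)               # how much to turn down
    return signal * 10 ** (gain_db / 20.0)

# With a -20 dB threshold and 4:1 ratio, a peak at -8 dB (12 dB over)
# comes out at -17 dB (3 dB over): the loudest hits are tamed.
```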

How does this relate to games?

Sound in indie games is often overlooked or thrown together as an afterthought.

This is partly due to a lack of familiarity with the concepts covered above, but also because the full range of possibilities for sound is not often considered:

  • Identity: setting the tone and theme for characters and levels.
  • Feedback: providing additional information to the player about the game world and the effect their actions are having.
  • Mechanics: providing alternative ways for the player to interact with the game and solve puzzles or problems; allowing audio to drive the game where text, graphics, or motion normally would.

An exercise

  1. Find a random game clip on YouTube from a game you aren’t familiar with.
  2. Select 30 seconds of footage and play it with the sound off!
  3. Spend an hour designing sounds for the actions you see; create some atmospheric textures, etc.