Sound And Music For The Moving Image

Sound Cue

Is a single instance of sound, a discrete lump of sound.  Also referred to as an asset.

Music

Title Music

Kinetic

Means that the sound evokes motion.

Watch Under the Skin (2013); Grant says that it is a very weird and interesting film.  We were analysing its soundtrack, composed by Mica Levi.

Underscoring

Music “underneath” the dialogue, informing the mood of the scene.  It is normally non-diegetic, i.e. the characters can’t hear it.

 

When making music for my animation, I should try to keep underscoring in mind, because it can make a scene far more powerful, especially when it does not fit the action on screen, like the Jaws theme being played under a scene of a happy beach.

 

Temp Tracks

They are pieces of music that film directors pick as a reference guide for how they want their specially composed music to sound, and then give to the composers.  But this can mean that the music produced ends up very similar to the original piece.

 

Spot Effects

Spot effects have one point of synchronisation: a single point that editors need to line up with the video, because the effect is very short.  An example would be a gunshot; you only need to line up the sound with the moment the trigger is pulled, and Bob’s your uncle.

Are sound effects that have not necessarily been made for the film, so could have come from sound libraries.  These are mostly sounds that can’t easily be made in the real world, like gunshots and laser guns.

 

Foley Effects

Foley effects tend to be much longer than spot effects, with many more synchronisation points to line up in time with the video.

Are sounds made in post-production by Foley artists, who watch the film sequence and make the sounds in real time.  Foley sounds are all the everyday sounds that you would expect to hear.  In some cases, these are more effective than spot effects, because Foley artists make sure that the timing of the sounds is spot on, so an editor only has to line up the first sound with the right place in the film, and all of the other sounds will fall into place.

Atmospheres (Ambience)

The atmosphere is the sound underneath all the other sounds.

When thinking about sound design for an animation or whatnot, if you have a blank bit where nothing much seems to be happening, really try to think creatively about what sounds could exist in that space.

 

Silence

 

Sound Assignment Task 1

Video games use sound in a completely unique fashion compared to other forms of visual entertainment, like film or theatre.  Games can use sound to evoke a strong feeling or emotion in the player; to tell the player that an enemy is nearby; to greatly enhance scenes of action; to create an overwhelmingly powerful soundtrack that gives the game an unforgettable atmosphere; to increase the realism of the game with foley sounds; or simply to show the hunger of your character with a cute little tummy rumble.  The use of sound in games is extensive, unique and, most importantly, can be immensely enjoyable.

A game’s soundtrack plays a big part in its atmosphere, and therefore the player’s experience.  Like, where would Super Mario Bros. (1985) be without its iconic theme song, which gamers young and old can recognise in an instant?

This dangerously catchy 8-bit song uses extremely simple, high-frequency sounds to create a very happy-sounding tune, which gives the listener the sense that this game is just going to be a pleasant, fun little adventure.  But partway through the song, the non-diegetic sound effect of Mario sliding down a pipe is heard, and the music is suddenly made up of much lower-frequency sounds, which instinctively tells the listener that there is danger in this unseen world.  This is achieved solely through the use of sound.  The pipe sound is then heard again, and the music instantly sounds much happier, with higher-frequency sounds that tell the listener they have escaped the danger.

It’s kind of amazing just listening to this song, because the sound design is so masterful that you can almost see the action depicted in the music in your mind’s eye.  In a way, it’s like the music is telling a story.  It’s so damn cool.

That was a very simple, early type of soundtrack, but video game music has evolved just a tad since then…

[Image: Bloodborne (PS4)]

Bloodborne (2015) is a Victorian-gothic horror game set in a fantasy, dystopian version of London, where almost everyone is afflicted with a terrible disease and lethal, terrifying monsters hide around every corner.  The terrifying setting is perfectly reflected in its soundtrack, performed by a large orchestra, which creates an amazingly unnerving, eerie atmosphere and gives the player the sense that a horrific beast could jump out at any time.  The sound designers chose to leave out certain instruments, like woodwind and trumpets, to make the atmosphere “black and withered”.

In the video above, the piece starts with light stringed instruments and soft, low-amplitude, high-frequency female vocals, which sound almost angelic; this evokes a sense of epic mystery in the player, as if whatever foe they are up against at this point in the game is so important and powerful that even the heavens are singing about it.  The amplitude slowly increases and more and more instruments join in as the piece crescendos, going spectacularly from angelic to very threatening and scary.  But it is also in a minor key, so it has a really sad feel to it; it is as if this beast has some underlying, horrific and sad trait or backstory that has just been unveiled to the player.

Although different in pretty much every way, the soundtracks of Super Mario and Bloodborne both masterfully create an enthralling atmosphere and experience, and are made so well that they allow the player (or listener) to almost see the game being played in their mind’s eye.  This just goes to show what beautiful feats can be achieved in games with music alone.

But it’s not just the soundtrack that creates the soundscape of a game; it also needs foley sounds, which were named after the sound designer Jack Foley.  They are the everyday in-game diegetic noises that you would expect to hear, like the rustle of bedsheets when a character gets out of bed, or the squeaky door as your character groggily stumbles out of their bedroom in a sleepy stupor; or even your character’s gun firing as they find that the world has been overrun by zombies.  Although the player probably won’t consciously notice these sounds, they are vitally important for adding realism to a game (if the game designers want a realistic effect, that is); without them, the game would feel empty, like something vitally important was missing.

[Video clip: Batman: Arkham City]

Batman: Arkham City (2011) is set in the Batman universe, where Gotham’s slums have been converted into a city-wide prison in which the inmates are allowed to do as they please.  When Batman is kidnapped and dumped there, he takes it upon himself to seek out The Joker.

The game devs hired some of the best Hollywood foley artists in the world to create the most immersive experience that they possibly could.

I could ramble on and on about all of the meanings of every single foley sound in here, but for your sake, I won’t, and I will try to keep it relatively short.

The clip begins with Batman walking down a grim corridor.  The foley sounds here are Batman’s low-frequency footsteps; they will have been added in post-production, probably by recording someone walking on a hard surface with a reverberation effect added, to make the footsteps sound like they’re echoing through the seemingly empty, uninhabited building.  This might give the player a feeling of isolation, as if they are alone in this empty building, but because the footsteps sound very purposeful, they could also give the player a sense of power and purpose.  As Batman nears his objective, though, the footsteps are almost drowned out by other foley sounds, like steam, which subconsciously tells the player that this scene is important, as its sounds overpower Batman’s; this might also allude to the prospect that Batman is weaker than the enemy he is about to face, as even his foley sounds are killed off by his foe’s.

At about 7:25 into the clip, Batman starts pummelling Mr Freeze’s visor and head.  These sounds were created by the legendary foley artist John Roesch, who has worked on over 540 films and games, including Michael Jackson’s “Thriller”.  Roesch said that for the face punches, Mr Freeze is “stuck within this robotic, confining type outfit”, so he made these sounds by stuffing a rollerblade boot inside a ski helmet and hitting it with a hammer.  These low-frequency sounds reflect the powerful, relentless brutality of Batman’s punches and the mixture of suit and flesh that his fists are connecting with.

All of this meaning can be derived from just a couple of relatively simple sounds, which clearly demonstrates the power and necessity of foley artists’ work in games.

Ambience is basically just background sound: the sounds you would hear if you stopped and listened to the world around you.

[Video clip: Firewatch start menu]

Firewatch (2016) is an open-world exploration game where you traverse dense forests and sparse mountains alike, and your only companion is a stranger on the other end of a radio, so the majority of the game is spent exploring the lush environment alone.  You can imagine just how important ambience is in this game.  The attention to detail in the captivating environment and the relaxing soundscape almost makes the world of this game feel truly alive.

This is a 1-hour-long clip of the start menu, which consists only of the character’s home, a start button and some extremely relaxing ambience.  One of the main jobs of a start screen is to tell the player what kind of experience they’re going to have, and this one does it perfectly: the high-frequency sound of the whistling wind tells the player that they will face environmental challenges, but the wind isn’t extremely powerful, so they won’t be too harsh.  The occasional high-amplitude, high-frequency squawk from a bird tells the player that animals and nature will be a prominent aspect.  The low-amplitude sound of the flag flapping in the wind is always just about audible over the howling wind, which shows that the main element of this game is nature, but beneath it there is still the ever-present human focus of the story.  The whole effect on the player is, at its simplest, relaxation.

It is a game about relaxing and losing yourself in the world, and I think that even the title screen captures the essence of this game with its skilful use of ambient sound.

And finally, last but not least, the member of the sound family that can make or break a game: dialogue.  Even if a game has amazing graphics, a great story and an epic soundtrack, bad voice acting can put the whole game out of joint.  But if the voice acting is done well, it can turn a good game into a great game by bringing the characters and their personalities to life.


Life is Strange (2015) is a game where you play as Max, an introverted college photography student, who finds that she can rewind time.  She and her childhood friend Chloe have a week to try to stop a tornado from destroying their hometown, Arcadia Bay.  But the main focus of the game is the rekindled friendship between Chloe and Max.

Although the dialogue in this game is famous for being cringy and doing a bad job of imitating how teenagers speak, it is also known for having great voice acting, especially for Chloe (the blue-haired one).

Ashly Burch, Chloe’s voice actor, in my opinion perfectly brings out Chloe’s rebellious, sarcastic and really pissed-off personality through her voice.  Because Burch’s personality is similar to Chloe’s, I think she embraced the role, which really helped bring Chloe to life.  Her voice alone was such an iconic part of the game that when the second game was made and someone else voiced Chloe, there was a big, disappointed backlash from the large fanbase.

I think that Max’s voice actor, Hannah Telle, also does a really good job of bringing her character’s shy, introverted and artistic personality to life.  She mostly keeps Max’s voice in a low amplitude range, which reflects her character’s shyness.

The contrasting aspects of the characters’ personalities and voices make for a really interesting and entertaining game, as the differences between them unite into an engaging friendship that struck a chord with players.

This just shows what good voice acting can do for a game, and therefore how critically important it is.


So that, ladies and gentlemen, was my best attempt at showing you all how goddamn amazing sound can be in video games, and the dizzying amount of intricate meaning that the incredible sound designers put into their deliciously wonderful soundscapes… if done well, that is.

 

Research:

https://www.theguardian.com/technology/2014/apr/08/computer-gaming-audio-lucy-prebble

https://blog.us.playstation.com/2015/05/18/the-story-behind-bloodbornes-haunting-soundtrack/

https://www.nerdreport.com/2015/02/05/foley-games/

https://en.wikipedia.org/wiki/Batman:_Arkham_City#Plot

https://www.imdb.com/name/nm0736430/

 

Reverberation and Delay

Reverberation or reverb is “a prolonged version of a sound” – Oxford dictionary.

When we hear a sound in the real world, we hear it through sound waves travelling through the air and into our ears.  But each sound produces multiple sound waves, and if you are in a room, a number of them will bounce off the walls before they reach your ears.  This causes a reverberation effect: multiple sound waves from the same sound hit your ears milliseconds apart, and may even be audible after the source of the sound has stopped.

[Diagram: sound reflecting off walls to create reverb]

Using the way that the sound, well, sounds, we are able to guesstimate how big the room is; for example, in a big room you may hear more reverberation than in a small room.  We can also use it to tell how far away the source of the sound is from us.
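To get a feel for how those reflected copies stack up, here is a tiny sketch in Python.  This is just a single feedback echo, not a real reverb algorithm (which is far more complex), and the delay and decay values are made-up examples:

```python
# A minimal sketch of reverb as repeated, decaying reflections.
# (Assumption: one feedback echo stands in for a room full of reflections.)

def feedback_echo(samples, delay_samples, decay):
    """y[n] = x[n] + decay * y[n - delay]: each reflection is a quieter,
    delayed copy of everything that came before it."""
    out = []
    for n, x in enumerate(samples):
        y = x
        if n >= delay_samples:
            y += decay * out[n - delay_samples]
        out.append(y)
    return out

# A single impulse (a hand clap, say) turns into a train of fading echoes:
clap = [1.0] + [0.0] * 9
print(feedback_echo(clap, delay_samples=3, decay=0.5))
# [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25, 0.0, 0.0, 0.125]
```

Notice how the echoes keep arriving after the original sound has stopped, which is exactly the “audible after the source has stopped” effect described above.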

Because sound waves take time to travel, delay, put simply, is the time between when a sound is created and when it hits your ears.  So if you are in a big room, and at the opposite end some poor sod accidentally falls into a drum kit, you may see the kerfuffle a moment before you actually hear the racket.
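That see-it-before-you-hear-it gap is easy to estimate, since sound travels at roughly 343 metres per second in air at room temperature (the exact figure varies with temperature).  A quick sketch, with a made-up distance:

```python
# Delay is just distance divided by the speed of sound.
SPEED_OF_SOUND = 343.0  # metres per second, in air at room temperature

def sound_delay(distance_m):
    """Seconds between seeing something happen and hearing it."""
    return distance_m / SPEED_OF_SOUND

# The drum-kit kerfuffle happens 34.3 metres away at the far end of the hall:
print(round(sound_delay(34.3), 3))  # 0.1 -> you hear it about 0.1 s late
```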

 

References

https://www.sxsevents.co.uk/about/resource-hub/explanatory-articles/sound-delay-explained

 

 

How To Make a Fuzzy Radio Effect With Logic Pro X


Have you ever had a sound clip of some music, or a dumb little voice memo, and wondered what it would sound like played through a really crappy car radio?  No?  Well, here’s a really tedious explanation to sink your uninspired teeth into.

Ok, so to create a track, go to (Track) > (New Tracks), or press the little plus-sign button above the track list.  (A similar window also appears automatically when you open a new project.)

In the window that comes up, go to Audio, and at the bottom there will be a little place to specify how many tracks you want.  Type the number of tracks you want to create and press Enter.

Each track in the track list has a little (R) button which, when pressed, flashes red to show that the track is ready to record sound through your microphone.

After recording a sound, use the top-and-tailing method to trim the sound clip.  To do this, go to the drop-down menu that looks like a crosshair, just above the track, next to a white arrow button.  Click on it and select the lasso tool to zoom in on the sound clip.  When zoomed in, open the same drop-down menu and click the scissor tool.  Hover your mouse over the sound clip, above the amplitude line, and click and drag, or just click (I don’t remember which), to cut the end of the sound off.

Select the numerical area above the sound clip; it should highlight yellow.  This makes the sound clip loop over and over.  Then click the (EQ) button to the left of the track list, and the EQ menu should come up; bring the green slidey frequency thingy up to increase the amplitude of that frequency range.  Mess around with it until it sounds nice and distorted, like a crappy radio.

Click on the (Channel EQ) button, and when the new window pops up, click on the (Manual) drop-down menu and select the (0.9s Small Combo Spring) option.

Find some sound from the inside of a car being driven, and play it at the same time as your edited sound clip, to see how much it sounds like a crappy radio in a car.
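The whole effect chain above boils down to two moves: band-limit the sound (cut the lows and highs, like a tiny speaker would) and then distort it.  Here is a rough code analogue of that idea, not what Logic actually does internally; the cutoff frequencies and drive amount are arbitrary assumptions:

```python
import math

RATE = 44100  # samples per second

def low_pass(samples, cutoff_hz):
    """One-pole low-pass filter: very crude, but enough for this sketch."""
    alpha = 1.0 / (1.0 + RATE / (2 * math.pi * cutoff_hz))
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def crappy_radio(samples, low_cut=300.0, high_cut=3000.0, drive=4.0):
    band = low_pass(samples, high_cut)                             # cut the highs
    band = [x - l for x, l in zip(band, low_pass(band, low_cut))]  # cut the lows
    return [max(-1.0, min(1.0, drive * x)) for x in band]          # clip = distortion

# Try it on a plain 1 kHz test tone:
tone = [0.8 * math.sin(2 * math.pi * 1000 * n / RATE) for n in range(RATE // 10)]
radio = crappy_radio(tone)
```

The hard clipping at the end is what gives the “blown little speaker” crunch; the band-limiting is why radios sound thin and boxy.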

To export your new kickass sound clip, click (File), then (Export), then ((however many audio tracks you have, for example, 5) tracks as audio files).  A big ol’ window pops up.  Pick a location, then click the (Format) drop-down menu and choose (AIFF).  On the Bit Depth drop-down, select (16-bit); on the pattern bit, select (Region name); and hit (Export).  You should now have the same number of sound files as tracks you used, in the location you chose.

As a treat for reading that heap of boredom, here’s a little prize for you, in the form of a neat little snack of information that momentarily cured some of my stupidity: the 1234 (count-in) and metronome buttons are there to keep you in time whilst playing music.  Now wasn’t that underwhelmingly splendid!  Yay!  Here’s a party popper to make you feel good about yourself and give you the sense that you’ve accomplished something.

 

[GIF: confetti party popper]

 

 

Sound Editing

Top and Tailing

Is a method used in sound editing: deleting the unwanted bits at the start and end of a sound clip, so that you’re left with only the sound you wanted.
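On raw audio samples, top and tailing is just dropping the near-silent bits from both ends.  A minimal sketch (the 0.01 silence threshold is an arbitrary assumption):

```python
# Top-and-tail: strip near-silent samples from the start and end of a clip.

def top_and_tail(samples, threshold=0.01):
    start = 0
    while start < len(samples) and abs(samples[start]) < threshold:
        start += 1
    end = len(samples)
    while end > start and abs(samples[end - 1]) < threshold:
        end -= 1
    return samples[start:end]

# Quiet fuzz at both ends, wanted sound in the middle:
print(top_and_tail([0.0, 0.001, 0.5, -0.3, 0.2, 0.002, 0.0]))
# [0.5, -0.3, 0.2]
```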

 

EQ (Equalisation)

Lets you boost and/or cut the amplitude of various frequency ranges in a sound, and is most commonly used for music.  You can get a very wide range of different tonal balances by increasing and decreasing these amplitudes.

 

High-Pass EQ Filter

Allows the higher frequencies to pass through, but attenuates (cuts) the frequencies below the cut-off frequency.  For example, if you had an audio clip of a heavy metal band playing a song, but only wanted to hear the hi-hats on the drums, you could use a high-pass filter to let only the high frequencies of the hi-hats through, cutting all the lower ones, like the drums and growling vocals.

Low-pass EQ Filter

Lets you do the same, only vice versa: it lets the lower frequencies pass through unharmed, but cuts the higher frequencies and all of their hopes and dreams.  For example, if you had the same audio clip as above but only wanted to hear the bass, you could add a low-pass filter and cut all of the higher frequencies, like the hi-hats and the ear-bleedingly high-pitched screaming vocals, leaving only the bass.
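Both filters can be sketched in a few lines of Python.  This is a toy first-order filter, much gentler than the filters a real EQ plug-in uses, and the 1 kHz cutoff and the test tones are made-up examples:

```python
import math

RATE = 44100  # samples per second

def low_pass(samples, cutoff_hz):
    """Crude first-order low-pass: smooths away the fast, high-frequency wiggles."""
    alpha = 1.0 / (1.0 + RATE / (2 * math.pi * cutoff_hz))
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def high_pass(samples, cutoff_hz):
    """High-pass is the original signal minus its low-passed part."""
    return [x - l for x, l in zip(samples, low_pass(samples, cutoff_hz))]

def rms(samples):  # a rough "how loud is this" measure
    return math.sqrt(sum(x * x for x in samples) / len(samples))

bass = [math.sin(2 * math.pi * 100 * n / RATE) for n in range(RATE)]    # 100 Hz
hihat = [math.sin(2 * math.pi * 8000 * n / RATE) for n in range(RATE)]  # 8 kHz

# A 1 kHz low-pass keeps the bass and squashes the hi-hat; high-pass does the opposite:
print(rms(low_pass(bass, 1000)) > 5 * rms(low_pass(hihat, 1000)))    # True
print(rms(high_pass(hihat, 1000)) > 5 * rms(high_pass(bass, 1000)))  # True
```

Defining high-pass as “signal minus low-passed signal” also shows why the two filters are exact opposites of each other: together they hand you back the original sound.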

 

Peak EQ Filter

Provides gain or cut at a certain centre frequency; in other words, it boosts or lowers a narrow band of frequencies around one specific point.

 

Shelf EQ Filter

Allows you to boost or reduce the signal strength below or above a set frequency.  These ranges are commonly known in music as bass and treble: the “bass” is the lower frequencies, the boomy, heavier sounds in the 20Hz – 400Hz range; the “treble” is the higher frequencies, the squeaky, sharp sounds above about 5kHz.

 

References

https://en.wikipedia.org/wiki/Equalization_(audio)#High-pass_and_low-pass_filters

https://www.quora.com/What-is-the-difference-between-bass-and-treble

https://www.dummies.com/education/science/science-engineering/how-to-characterize-the-peaking-filter-for-an-audio-graphic-equalizer/

 

Analysis of Bonetrousle

Bonetrousle is an extension of the song “Nyeh Heh Heh!” from the massively popular indie game Undertale.  The music in this game is very important because it tells the player about each character’s personality; for example, Bonetrousle plays when the player fights Papyrus, an overly confident, cocky skeleton who thinks he is amazing.  Bonetrousle is basically Papyrus’s theme song.


The Basics of Sound

Waves

We were taught about two different waves in our sound class: the Transverse Wave and the Longitudinal Wave.

[Animation: slinky demonstrating transverse and longitudinal waves]

In a Transverse Wave, the particles oscillate (move back and forth in a regular rhythm) perpendicular to the direction the wave travels, which is what creates the visible crest (the raised bit).  For example, you see a Transverse Wave when you make a ripple in a still lake, or when you jump on a trampoline.

Sound is carried in the form of Longitudinal Waves (also known as Compression Waves), which are all about particle vibrations.  When music comes out of your speaker, it vibrates the particles around it and causes them to bump into each other; then those ones bump into the next ones, and so on (as seen in the slinky example above), until they reach your eardrums, which pick up the vibrations and allow you to hear that sweet yodelling guilty pleasure of yours.

Basically, the main difference between them is that the particles in a Transverse Wave move up and down, in a wave-like motion, while in a Longitudinal Wave the particles just bump into each other, causing a domino-like effect of compressions flying out from the oscillator.

But we use the diagram of a Transverse Wave to draw sound, because it is a lot easier than actually drawing the compression of particles.  Here is an example of what Longitudinal Waves actually look like.

[Diagram: longitudinal wave produced by a loudspeaker]

The more compressed the particles are, the louder the sound.

 

Amplitude

Amplitude is the distance of the crest (highest point of the wave) or trough (lowest point of the wave) from the equilibrium (the horizontal middle line), which represents no movement of particles, and therefore no sound.  When talking about this distance, we say large amplitude (greater distance) and small amplitude (less distance).  The greater the amplitude, the louder the sound; the smaller the amplitude, the quieter the sound.  Here’s a neat little diagram.

[Diagram: amplitude, crest, trough and equilibrium]
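In code, amplitude is just the biggest distance the samples get from the equilibrium (zero), and changing the volume is just scaling every sample by a gain.  A tiny sketch with a made-up wave:

```python
# Amplitude = peak distance of the samples from the equilibrium (zero).
# Scaling every sample by a gain changes the amplitude, and so the loudness.

def amplitude(samples):
    return max(abs(x) for x in samples)

def change_volume(samples, gain):
    return [gain * x for x in samples]

wave = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
quiet = change_volume(wave, 0.5)

print(amplitude(wave))   # 1.0  (large amplitude: louder)
print(amplitude(quiet))  # 0.5  (small amplitude: quieter)
```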

 

 

Frequency

Frequency basically just means how many cycles a sound wave completes per second.  Okay, before you freak out about not knowing what the heck a cycle is: a cycle is one complete oscillation of the wave, and frequency is measured in a unit called the hertz, where one hertz (written as 1Hz) means one cycle per second.  On a wave diagram, a single cycle is the line of the wave going up from the middle line to the crest, down to the trough, and back to the equilibrium line once again.

[Diagram: one cycle of a wave]

When there are a lot of cycles per second, it is known as high frequency, and when there are fewer, it is known as low frequency.  The higher the frequency, the higher the pitch of the sound will be.


Okay, so 1Hz means one cycle a second, 2Hz means two cycles, 3Hz means three, and so on.  One thousand cycles a second is one kilohertz (1kHz).  One million is one megahertz (1MHz), one thousand MHz is a gigahertz (1GHz), and one thousand GHz is a terahertz (1THz).  The list goes on, but neither of us really needs to know about them at this point.
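You can actually count cycles in code: a wave crosses zero twice per cycle, so counting zero crossings gives you the frequency.  A small sketch, with a made-up 5 Hz test wave (the tiny phase offset just stops samples landing exactly on zero):

```python
import math

RATE = 1000  # samples per second

def estimate_frequency(samples):
    """Count zero crossings: each full cycle crosses zero twice,
    so frequency (Hz) = crossings per second / 2."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:])
        if (a < 0 <= b) or (b < 0 <= a)
    )
    seconds = len(samples) / RATE
    return crossings / (2 * seconds)

# One second of a 5 Hz sine wave:
wave = [math.sin(2 * math.pi * 5 * n / RATE + 0.1) for n in range(RATE)]
print(estimate_frequency(wave))  # 5.0
```

Five cycles a second is of course far below anything we could hear, but the same counting idea applies at 440Hz or 20kHz; only the sample rate has to be high enough to catch the wiggles.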

Young people can hear frequencies between 20Hz and 20kHz, and the prime age for hearing is about 19 in males and 21 in females.  But as people get older, they lose some of the cells responsible for picking up sound vibrations, and with them the ability to hear higher-pitched sounds; older people’s hearing ranges from 20Hz to about 15kHz.  There are also some frequencies that humans can’t hear at all: infrasonic frequencies are too low in pitch for us to hear, and ultrasonic ones are too high.

 

 

And now, because that was a whole load of really boring and complicated stuff, here’s a cute little Pikachu balloon 🙂

[Image: Pikachu balloon]

There, ain’t that better?