Written by Jody Toomey
Fans of Zelda: The Ocarina Of Time might remember the instructions of Navi, the fairy that guided the player through the game. Much maligned as possibly one of the most irritating characters ever designed, Navi nevertheless served a practical purpose. In gameplay terms, Navi functions primarily as a guide that points out clues in the environment and helps the player learn the controls and advance in the game. Most of her hints are about how to progress in the story or defeat enemies. She can also be used to lock onto enemies, items and other characters in the game.
The sound of a game is one of its most important, yet underappreciated and underutilised, aspects. You cannot ship a game without sound, even if it's a mobile game that people play with the sound off (and let's face it, who doesn't?). Audio is everything in a game. It can be iconic, the thing people refer to most when discussing a game. Just look at the influence GLaDOS and the turret buddies from Portal have had. Or it can be annoying and irritating, and turn people off an otherwise brilliant game, especially through overexposure. How many of us have grown to loathe a particular sound or piece of music simply because we didn't stop hearing it for three hours straight, or because a character kept interrupting with a constant 'Hey, listen!'?
In a game, music provides emotional and contextual feedback: the mood of the music can provide emotional cues to drive the narrative. But music can also provide contextual clues about the game itself. It can be faded out to indicate a player is going the wrong way in an RPG, or a sudden pick-up of tempo or change of style can warn of an impending boss battle. Leitmotifs can associate a piece of music with a certain character, like playing the Imperial March every time Darth Vader is on screen.
Sound effects and UX feedback give cues and clues about player interactions with the environment, NPCs, objects and so on.
UI feedback tells the player when they've clicked the right button and successfully upgraded their stats or saved their game. Sheikah Slate, anyone? Any one of these, when not sourced, produced or implemented correctly, can ruin the game. One single noise. Which is why it's so important to do it well.
But here's the thing: no one will actually notice.
No one notices good audio because it becomes part of the whole experience and almost fades into the background, even as it makes the game that much more engaging. When it's done well it can be one of the most engaging and exciting aspects of a game. By contrast, when it's done badly, it stands out and people will cringe. It can ruin the whole thing beyond repair. One way developers can get better audio results is to extend the concept of a sonic palette to include game audio. This concept has been used for years in teaching music and composition, where it is defined as "The characteristic range of tonal or instrumental colour in a particular musical piece or genre, or a particular composer's work; (also, and in earliest use) the range of sounds which can be produced by a particular musical instrument." When you transplant the idea to games, it becomes everything you hear in a game: the sum total of all sound within it. Music, UI/UX feedback, atmospheric and ambient sounds. Everything.
At its heart, any game is a series of player interactions in an artificial world, and it's this world, this environment, that helps you define the sonic palette. Defining the palette, in turn, lets you identify your audio requirements when developing your audio asset list.
The designer needs to mentally place herself in that environment, ask "What does it sound like?" and then just listen. Write it down. Draw sketches. Note where the sounds are coming from and what sort of sounds they are.
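The article doesn't prescribe any particular tooling, but the listening exercise above could be captured in something as simple as a structured list. The sketch below is purely hypothetical (the names `PaletteEntry` and `build_asset_list` are invented for illustration): it records each sound you "hear" in the environment, then groups the entries by category into a draft audio asset list.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical sketch -- one way to turn "what does this place
# sound like?" notes into the beginnings of an audio asset list.

@dataclass
class PaletteEntry:
    name: str        # e.g. "wind through pines"
    category: str    # "music", "ambience", "sfx" or "ui"
    source: str      # where in the scene the sound comes from
    notes: str = ""  # texture, mood, loop vs one-shot, etc.

def build_asset_list(entries):
    """Group palette entries by category to form a draft asset list."""
    assets = defaultdict(list)
    for entry in entries:
        assets[entry.category].append(entry.name)
    return dict(assets)

palette = [
    PaletteEntry("wind through pines", "ambience", "treeline, off-screen left"),
    PaletteEntry("footsteps on gravel", "sfx", "player character"),
    PaletteEntry("save chime", "ui", "non-diegetic"),
]

print(build_asset_list(palette))
# {'ambience': ['wind through pines'], 'sfx': ['footsteps on gravel'], 'ui': ['save chime']}
```

The point isn't the code itself but the habit: a palette written down this early can grow alongside the environment art, which is exactly the iteration loop the next paragraph argues for.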
This has most benefit when you develop the palette at the same time as your environment, as it gives you the scope to integrate and iterate audio in conjunction with other assets. It's done with art; why not with audio?
Audio has traditionally had something of a problem in game development. The ugly stepchild that gets left behind. The traditional manner in which audio is implemented is to use placeholders and then, somewhere nearer the end of the process, farm out audio production to third parties. There are some exceptions among AAA studios, but by and large, it’s left until last.
What usually happens is the musician or sound designer gets a brief outlining the gameplay, maybe some artwork, a few notes on "feel" or "emotion" and, if they're lucky, some gameplay video. This continues the trend of a half-hearted approach to one of a game's most important design elements. It stems, in part, from the fact that audio isn't taught in any serious detail on game design courses, leaving designers, programmers and artists with very little idea of the scope and power of audio and no clue about how to utilise it for the betterment of their games. Game design students learn nothing about audio, yet audio literacy is something the industry is demanding of everyone, not just qualified audio producers.
In education, there is no such thing as a dedicated game audio course. You might get qualifications in film sound, composition or music production, but none of them teaches you how game audio works, how to implement it or how to get the best out of it in-engine. Colleges have been known to say that there just aren't the opportunities out there for anyone doing audio, but the truth is, there are. With the rise of VR, audio becomes even more important: audio is the thing that completes the VR experience. During his talk at GCAP this year, industry legend Stephan Schutze said: "In order for VR to be taken seriously, people have to start teaching audio."
Hey, Listen. He’s right.
Jody is an audio producer who specializes in video game audio design.