Avalanche Studios – Magnus Lindberg [Sound Director]

As I’m still in Sweden, my series of game audio interviews with the local studios continues, with Avalanche Studios being my latest interviewee. Magnus Lindberg is responsible for the audio work that goes on there, and took the time to speak with me about what he does.

I’m curious to know how you got your start in the audio and video game world. Can you tell me about that?

As with most audio professionals, I started off making music for fun. Moving from that into sound design was a simple choice, as I enjoyed making weird sounds on my synthesizer more than making actual music. One thing led to another, and I ended up doing my first commission for an animated short film, which I enjoyed a lot. That’s how I got started working in the entertainment industry.

Coming to Avalanche was a process that had been ongoing for a while. I had friends in the game industry who kept telling me to get more involved on the audio side of things, since I was working on some interactive projects at the time. I didn’t want to, though. Video games had next to no audio resources at the time; you’d have one megabyte of memory to work with for audio, among other constraints. To me, that meant I would have spent more time cutting sounds away to fit into memory, rather than building things up to sound immersive. So I didn’t show much interest until one of my friends at Avalanche encouraged me in the mid-2000s, saying that the new console generation had improved on memory capacity. The first game I worked on was “Just Cause”, for the Xbox and PS2.

You gave an interview in 2009 where you were asked about whether audio professionals in the game industry should value education over experience, and you said that experience was more important at the time. Do you still feel that way?

My answer was more valid in 2009 than it is today, at least in Sweden. You can get a proper game education now, as opposed to how it was when I was in school, where they didn’t really teach me what I needed to know about how audio works in games. You need to know how to manage your audio resources when dealing with interactive platforms like games, which is very different from linear media like movies.

Having said that, the gaming industry has really boomed in Sweden, so it would be hard to get a job with no experience. So I think both education and experience are important now.

Despite being an independent studio, Avalanche has had notable success since its founding in 2003. As someone who’s been working with the company for ten years, what are the most noticeable changes you’ve seen in your work as a Sound Director?

It’s all tied to the general improvement of the consoles, and the fact that you didn’t have many resources to work with back in 2006. Things were only just starting to pick up around that time. Memory isn’t an issue today, so you can decorate your games with a lot of sound, and that’s expected of any AAA game. The PS2 and Xbox didn’t really offer that, and game developers didn’t have any reason to spend much money on audio as a result. It’s been the increase in processing power of consoles, more than anything else, that has allowed companies like ours to expand our audio ambitions.

How has Avalanche’s success affected your workload? Has it allowed you to delegate more work to your subordinates, for example?

Avalanche has made a habit of putting proper audio representation on each project. So my work hasn’t become harder; it’s rather the opposite. Our producers make sure we have a good process. My job is mainly to make sure people have high ambitions for getting good audio into the games.

What does the size of Avalanche’s audio team look like?

It depends on which project you look at, but it’s between 5% and 10% of our total staff.

What’s the role of Producers in interacting with the audio team? Do they primarily communicate with the Sound Director, or do they go to your subordinates with their wishes and questions?

If Avalanche starts to develop a game that features the use of vehicles, then we create a team exclusively for vehicle resources and they handle everything to do with that, from artwork and physics to animation and audio. That team will have a Producer attached to it, and the Sound Director will manage the goals in that team, but the rest is for that team to decide. It’s all scrum-based, which means it’s iterative in short bursts.

(Image: Magnus Lindberg)

Do you feel like Avalanche being an independent studio forces you to have certain considerations that wouldn’t have been present if the company was a subsidiary of a major studio?

It depends. If you worked for EA, of course you’d have access to a large pool of studios across the world. We have two offices, one in Stockholm and one in New York, and we try to work with each other as closely as possible, which involves sharing costs for things like recording sessions. So we’re limited in that sense. But most of our sound guys are old-timers with lots of experience and huge networks of contacts in the industry. I saw that you talked to Max Lachmann from Pole. He’s been working with people in the US and Europe for many years, and we’ve tried to hire people like that, who are the best and can pull off their jobs because they’re experienced. Things like that make up for being independent.

Seeing as Avalanche focuses on sandbox/open-world games, how does that affect the audio process?

That’s an interesting topic, because the nature of our games certainly affects the audio. Open-world games have a simulation at their core, and it’s hard to fool the people who play our games if the audio isn’t up to par. A corridor-based shooter clearly indicates where the boundaries are, both visually and audibly, so it’s easier to polish your sounds for a linear game like that. But open-world games have to have audio that sounds great from all angles. So if the player decides to pile up fifteen barrels of gasoline and blow them all up, it has to sound proportionally more compelling than blowing up just one barrel. We’ve always had to battle these kinds of conditions, and we’ve gotten increasingly better at handling all the different contexts that players can be presented with.
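
To make fifteen barrels read as one huge blast rather than fifteen overlapping copies of the same sample, one common approach is to coalesce near-simultaneous explosion events into a single scaled event. Here is a minimal sketch of that idea (purely illustrative, not Avalanche’s system; the time window, variant names, and gain heuristic are all invented):

```python
import math

class ExplosionCoalescer:
    """Collapse near-simultaneous explosion events into one scaled event.

    Hypothetical sketch: rather than playing N identical one-barrel samples,
    pick a bigger sample variant and a gain that grows with the event count.
    """

    WINDOW = 0.1  # seconds within which explosions count as one blast

    def __init__(self, play_fn):
        self.play_fn = play_fn      # callback into the actual audio engine
        self.pending = 0            # explosions accumulated in the window
        self.window_start = None

    def on_explosion(self, now):
        if self.window_start is None or now - self.window_start > self.WINDOW:
            self.flush()
            self.window_start = now
        self.pending += 1

    def flush(self):
        if self.pending == 0:
            return
        # Pick a sample variant sized to the blast: small / medium / huge.
        variant = min(self.pending, 3)
        # Loudness should grow with the count, but sublinearly: doubling
        # the number of incoherent sources adds ~3 dB of acoustic power.
        gain_db = 10.0 * math.log10(self.pending)
        self.play_fn(f"explosion_size_{variant}", gain_db)
        self.pending = 0
        self.window_start = None
```

The sublinear gain is the point: fifteen incoherent sources add roughly 10·log10(15) ≈ 12 dB of acoustic power, not fifteen times the perceived loudness, so a single bigger sample at a modest boost reads as more convincing than fifteen stacked copies.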

Have you ever received any critique from players for having failed in this area? Like someone complaining that they heard the same looped environment sound in two different places, for example?

We do try hard to make sure those things don’t happen, but when you develop these kinds of games, you get all sorts of hilarious things happening that we try to iron out. For example, dialogue alone can get complex to handle, since you have conditional dialogue that’s triggered by specific events in the game and mission-based dialogue that’s triggered by clearing objectives. When those two forms of dialogue collide in the middle of a project, weird things happen. But we try to address that before putting a game out.
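
One standard way to keep conditional barks and mission dialogue from talking over each other is a small priority arbiter that decides whether an incoming line interrupts, queues, or gets dropped. A hedged sketch of that general technique (not Avalanche’s actual pipeline; the categories and rules here are assumptions):

```python
from enum import IntEnum

class Priority(IntEnum):
    AMBIENT_BARK = 0   # conditional dialogue triggered by world events
    COMBAT_BARK = 1
    MISSION = 2        # dialogue triggered by clearing mission objectives

class DialogueArbiter:
    """Minimal priority arbiter for colliding dialogue requests."""

    def __init__(self, voice):
        self.voice = voice    # engine handle with play(line) / stop()
        self.current = None   # (priority, line_id) currently playing
        self.queue = []       # mission lines waiting their turn

    def request(self, priority, line_id):
        if self.current is None:
            self._play(priority, line_id)
        elif priority > self.current[0]:
            # A higher-priority line interrupts the bark mid-sentence.
            self.voice.stop()
            self._play(priority, line_id)
        elif priority == Priority.MISSION:
            # Never drop mission dialogue; queue it behind the current line.
            self.queue.append((priority, line_id))
        # Otherwise the low-priority bark is simply dropped.

    def _play(self, priority, line_id):
        self.current = (priority, line_id)
        self.voice.play(line_id)

    def on_line_finished(self):
        self.current = None
        if self.queue:
            self._play(*self.queue.pop(0))
```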

There seems to be a recurring theme in Avalanche games: explosions, guns and vehicles. Has working with those elements given you a specific mindset on how audio should be developed for games?

Of course. Looking at a game like “Just Cause”, one of the main challenges is the fact that it’s a utopian sandbox game. This means that the game’s voice count can go through the roof at any given time, with the game trying to play 250+ voices simultaneously, which you definitely don’t want, seeing as the practical maximum tends to be 48–64. Creating a system that can handle that eventuality was one of our biggest audio challenges on “Just Cause”. A game like “Mad Max” wasn’t like that. There we had the opposite problem of too much silence, since the game is set in a wasteland. So we had to fill it with weird sounds that stretched for kilometers across the world, things that would have been redundant in “Just Cause”.
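
The usual fix for that kind of voice explosion is priority-based culling: score every requested voice each frame and grant only the top N, keeping the rest silent but tracked. A minimal sketch of the general technique (not the actual Just Cause system; the scoring heuristic is invented):

```python
import heapq
import math

MAX_VOICES = 64  # rough console voice budget mentioned above

def cull_voices(requests, max_voices=MAX_VOICES):
    """Grant only the most important voice requests.

    Each request is (base_priority, loudness_db, distance_m, sound_id);
    the scoring heuristic below is made up for illustration.
    """
    def score(request):
        base_priority, loudness_db, distance_m, _ = request
        # Estimate audibility at the listener via the inverse-distance law.
        audibility = loudness_db - 20.0 * math.log10(max(distance_m, 1.0))
        return base_priority * 10.0 + audibility

    # Losers become "virtual": they keep advancing their playback position
    # silently so they can be resurrected without a pop if they win later.
    return heapq.nlargest(max_voices, requests, key=score)
```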

What kinds of advantages does the Avalanche engine offer the audio team when you guys are working?

A lot of it is tied to the fact that it’s terrain-based. We control the engine completely ourselves and use FMOD as the playback system, but the system that governs the audio is driven by the surrounding vegetation and terrain, and by what acoustics should be used in those spaces.
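
In spirit, that terrain-driven control might look something like the following hypothetical sketch, which samples the surroundings and maps them to reverb settings (every name and coefficient is invented; this is not the actual Avalanche/FMOD integration):

```python
from dataclasses import dataclass

@dataclass
class ReverbSettings:
    decay_s: float      # reverb tail length in seconds
    wet_level: float    # 0..1 mix of reverberated signal
    hf_damping: float   # 0..1 high-frequency absorption

def acoustics_for_location(terrain, x, y):
    """Map the surroundings to reverb settings (invented heuristic).

    `terrain` is assumed to expose vegetation density (0..1) and
    enclosure (0 = open field, 1 = canyon/cave) at a world position.
    """
    vegetation = terrain.vegetation_density(x, y)
    enclosure = terrain.enclosure(x, y)

    return ReverbSettings(
        # Enclosed rocky spaces ring longer; open ground barely reverberates.
        decay_s=0.3 + 4.0 * enclosure,
        wet_level=0.1 + 0.5 * enclosure,
        # Dense foliage absorbs highs, so damping rises with vegetation.
        hf_damping=min(1.0, 0.2 + 0.8 * vegetation),
    )
```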

I’ve heard that EA DICE were offered the original Star Wars soundtrack files when developing “Battlefront”. When Avalanche works on licensed titles like “Mad Max”, do the owners of the license also provide you with resources from the film?

Well, the movie was shot in parallel with the making of the game. The studio participated early on by giving us a sample of what was being done with the film, so that we stayed in sync, but their vehicles and audio recordings had different specifications than what we needed. We needed to record a vehicle from every possible angle so that when it’s being used in the game, it provides a believable experience. Movies don’t do that, since their medium is linear. So we couldn’t really use their resources. We would have had more use for their music than their sound effects or vehicle recordings, to be honest.

Speaking of music, is it typical for Avalanche to outsource composition of soundtracks to composers outside the company?

Yes. I’ve been insisting since we started that we should have a stable of composers for our titles. The dialogue between the studio and the composer tends to be that we’re looking for the right tone for the game while pushing the envelope, so as not to fall back on the usual suspects. Since the music in open-world games has to be adaptive, it gives you a lot of room to try new things. The composers have to be a part of that and enjoy the challenge, because it’s not just about composing music, but rather a system of music that works with the interactive nature of the game.

When you guys make sequels to “Just Cause”, do you tend to recycle sounds?

Not across franchises, but there’s definitely inspiration to be taken from one game to its sequel. We’ve established a methodology with guns and vehicles where it’s easy to improve on the quality of our assets when moving from one game to the next, but we seldom re-use the exact same recordings, seeing as a sequel usually offers access to new computing power on a new generation of consoles.

Is there any home-field advantage in Stockholm when it comes to recording audio assets? Are there sounds that you’ve found are suitable to be recorded here?

I don’t know. It’s just a city, really. Things like weapons regulations are pretty strict, so we’ve recorded more weapons outside of Sweden than in Sweden (laughs). Our New York office is a great help with weapon and vehicle recordings, especially the custom vehicles, which we recorded a lot of for “Mad Max”.

Things like parachuting and grappling are a big part of “Just Cause”. What was it like recording those kinds of sounds?

I remember one time during the making of Just Cause 2 when we acquired a bunch of foley equipment and went berserk trying to get the desired sounds; wire twangs and things like that. The grappling sounds are just a combination of different recordings. I doubt a grappling hook like that actually exists for anyone to record (laughs). But we made our foley gear work nonetheless.

Can you talk about the gear in your own studio? As a Sound Director, what kind of equipment do you gravitate towards?

Sure. We have a lot of microphones of different sizes. Our latest purchase was a Sanken CO-100K mic, which records at the highest possible sample rate and captures frequencies far above the audible range, so you can pitch recordings way down afterwards and still keep detail; that makes for good zombie sounds. Aside from that, we have a lot of contact mics that are small and sturdy, which makes them good for recording vehicles by putting them into engine bays and such. In terms of studio gear, there are mastering units like the Thermionic Culture Vulture, other valve units, hardware EQs and such. We’ve had a lot of analog gear over the years, so it’s hard for us to go completely digital. You lose something when going from analog to digital.
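
That ultrasonic trick is simple to reproduce: record at a very high rate, then write (or resample) the file at a much lower one so content above 20 kHz slides down into the audible range. A minimal sketch using the soundfile library (the file names are placeholders):

```python
import soundfile as sf

# Load a recording made at a very high sample rate (e.g. 192 kHz,
# capturing ultrasonic content from a mic like the Sanken CO-100K).
data, original_rate = sf.read("metal_scrape_192k.wav")

# Reinterpret the samples at a quarter of the original rate: playback
# slows 4x and every frequency drops two octaves, so content that was
# at 80 kHz lands at 20 kHz, inside the audible range.
sf.write("metal_scrape_pitched_down.wav", data, original_rate // 4)
```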

I’ve met a lot of people in the game audio world who don’t seem to care about the sound of analog gear and prefer the convenience of digital workflows. They say audio assets are hard to process through physical equipment, and that you have to run the sound back and forth through too many chains.

There are batch tools that can process a large number of signals through a hardware chain. Sure, it can be tedious to set up, but it sounds great as a result. Most of the sounds in “Mad Max” went through a chain that included the Culture Vulture and a Clariphonic EQ, just to ensure that everything shared a certain sheen that glued it all together. You can’t do that kind of additive blending on the console; it has to be done beforehand.
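
Such a batch pass can be as simple as playing each file out through the analog chain while recording the return. A rough sketch with the sounddevice and soundfile libraries (the device routing and folder layout are assumptions):

```python
import glob
import os

import sounddevice as sd
import soundfile as sf

os.makedirs("processed", exist_ok=True)

# Play each asset out through the analog chain while recording the return.
# Assumes the interface's outputs feed the hardware inserts and the chain
# comes back on the interface's first two inputs.
for path in glob.glob("assets/*.wav"):
    data, rate = sf.read(path)
    processed = sd.playrec(data, samplerate=rate, channels=2)
    sd.wait()  # block until playback and capture both finish
    sf.write(os.path.join("processed", os.path.basename(path)), processed, rate)
```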

What kind of speakers do you use?

Currently we use industry-standard Genelec 5.1 and 7.1 systems, since we know how they sound, and it allows all of us to move between each other’s studios.

Are you a plugin user?

You have to be. We love the stuff from Synapse and FabFilter. Softube is great too. We have the UAD systems as well.

Do you prefer Universal Audio plugins to Waves ones?

I do. Waves plugins have such small user-interface windows. They were one of the first plugin companies, and they’ve stuck to their small-window aesthetic since the ’90s, when computer screens themselves were smaller. With FabFilter plugins, for example, you can expand the plugin window to fill the entire screen; you can’t do that with any Waves plugin, which is inconvenient.

Is there a sound design chain that you couldn’t do without?

I don’t do that much sound design anymore, but when I do, I do it in a less predictable way, without a pre-made chain of plugins whose effect I can foresee. That’s probably because I’ve been doing it for so long that it’s no fun to use things that always give the same result. I know people who use more or less the same setup to get a digital version of an analog sound, like analog summing plugins or EQ emulations. I’d rather not do that.

When you do use them, do you find analog emulation plugins to be effective?

We have SSL X-Racks that we can use if we want that sound, but the plugins are catching up. I think Brainworx have made a fantastic summing plugin, even if it doesn’t beat the hardware units yet.

The rise of computer technology brings automation into various industries and disrupts the job market in those sectors, as in the case of Uber. Is there any extent to which you’ve seen that happen in video games? Is it possible that we could reach a point where software displaces the job of Sound Directors?

Not really. Open-world games are a relatively young genre, and they’re a simulation of the real world. You had flight simulators as an early version of that, but our games are more expansive than those, and there’s still a lot of ground to cover. You can’t even compare the audio resources of open-world games to the quality of their graphics! So it’s going to be a long time before computer automation can replace a human being in constructing that. The process of making things sound right requires a human ear, since you have to make approximations to get things to run on a console. You can’t just have an algorithm try to simulate reality as accurately as possible; the console would just die on you. You have to create audio that can fool the ear while working within the limited capacity of the console’s CPU, and you can’t do that without a human being distinguishing between what’s believable and what isn’t.

You just said that open-world games have much better graphics than audio. So how can audio professionals close that gap? What’s left for the audio department to do for the two to line up?

If you look at a propagation model for audio, most of today’s games don’t have relational audio sources. Rather, each audio source is only relational to the listening point in the game, or at best to some acoustic phenomenon on its path to the listening point. But audio sources in a game don’t know about each other the way they do in the real world. They don’t phase-shift each other or have acoustics that are relational to other sound sources. In reality, if you have two sound sources playing simultaneously in a room, the acoustic phenomena in relation to the listening point shift.

All sound sources in games are really stupid, whereas in graphics, this isn’t the case. There’s more time spent on making the visuals more relational to one another, since graphics cards allow for cross-calculations between different surfaces and light sources. You don’t have anything like that in audio yet. If the audio engine tried to do anything that intricate with today’s processing power, the console would just die on you.
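
Concretely, the status quo he describes looks something like this toy model, where every source is reduced to an independent listener-relative gain and nothing one source does affects another (an illustrative sketch, not any specific engine):

```python
import math

def render_sources(sources, listener):
    """Listener-centric mixing: each source is computed in isolation.

    `sources` is a list of (position, loudness_db) pairs; `listener`
    is a position. Note that no source's result depends on any other
    source: this is the "stupid sources" model described above.
    """
    mixed = []
    for position, loudness_db in sources:
        distance = max(math.dist(position, listener), 1.0)
        # Inverse-distance law relative to the listener only: no
        # inter-source occlusion, phase interaction, or shared acoustics.
        gain_db = loudness_db - 20.0 * math.log10(distance)
        mixed.append(gain_db)
    return mixed
```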

Doesn’t that mean that we have to wait for the processing power of the next console generation to offer more audio possibilities? It sounds like a waiting game, rather than a matter of effort.

It’s an approximation game as well. You have to try to emulate real-world acoustic phenomena in a cheap way, which is what we’ve been doing since the beginning of open-world games, though we’ve made progress with our models in “Mad Max” and “Just Cause 3”. Your subconscious places you even more in the world of a game when the audio starts sounding more relational. With the advent of virtual reality, I know that a lot of developers are trying to get that effect across over headphones, in virtual surround sound and such. If they can get their audio into a relational propagation model, it’ll have a huge influence. The impressiveness of gaming will change significantly at that point.