Frédéric Devanlay [Foley Artist/Sound Designer]

Frédéric Devanlay is a sound designer and foley artist whose work in the video game world dates back to the early 90s; prior to that, he released music on major labels and worked as a touring musician in the 80s. I recently paid his Big Wheels Studios a visit to talk about his sound design and foley work on games like "Far Cry 2", "Splinter Cell", "Watch Dogs" and "Life is Strange".

Hi Frédéric. I’ve heard that you got your start in the 80s, doing three years of music school. As someone who currently teaches classes at ISART, what do you think is the biggest difference between music schools in the 80s and now?

Well, it's hard to say, because I teach sound design at ISART, which is something that didn't exist when I was in music school 30 years ago. Also, I spent three years studying music theory and composition before learning anything about playing instruments, which I found quite disappointing. I even remember how my guitar teacher didn't like that I played left-handed, because I had to reverse the guitar. Things are different in music education today; many of my students are multi-instrumentalists who already have a good sense of music before they even enroll in the school.

Is it true that you worked as a session musician and played in a band in the 80s?

Yes, during the New Wave movement. I was in a band that made music similar to acts like Depeche Mode. We weren’t great musicians, even if some of our musical ideas were good (laughs). But we played a lot of concerts and had fun doing it. My band was called Bleu Nuit, and we had a few releases on Phonogram, which is now a part of Universal Music. They were only EPs though. We didn’t sell a lot, but it was fun, and I went on to do other things after that.

As a session musician, I played with artists like Christophe and Indochine in the 80s. Later in the 90s, I worked with a lot of DJs like Claude Monnet and Martin Solveig, doing drum and synth programming. I’ve even done sessions with them at my current studio, and Claude is still one of my best friends to this day.

What was the first notable game project you worked on as a sound designer?

My first major project was for the children's game, "Adibou". I had my first studio in Vincennes, near Montreuil, and during a session in 1993, a guy came by and asked me if I was interested in doing sound design. I hadn't worked with that before, but he gave me the phone number of a girl who was working on "Adibou", and I gave her a call. She was willing to hire me, even though we weren't sure if it would work, due to my lack of experience. But I ended up working on multiple volumes of the game, which was cool.

I would later reach out to Ubisoft and send them my CV, hoping to get some more work within the game audio industry, but I didn’t hear back from them for a long time. Three years later, the phone unexpectedly rang, and it was Ubisoft! They offered me a job, which is how I got my start in working for major studios.

Did “Adibou” do well commercially?

Yes, it did. A lot of kids who grew up in the 90s played it in France and it’s still well-known here.

Another 90s game you worked on was “Atlantis II“. You’ve said in past interviews that your studio setup at the time was mainly an Akai S900 sampler and an Atari computer, running Cubase. What was it like to do sound design with that kind of gear?

I did start on an Atari ST running Cubase, but by the time I was asked to work on "Atlantis II", I had moved to a PC with Windows 3.11. I had the Akai S900 and the S1000, and would record my sound effects to analog tape, though I later switched to DAT. I would be given floppy disks containing game animations that I had to dub with sound effects, so I'd record my sounds into the Akai S1000, sequence them over MIDI from either the Atari ST or the PC, and finally record to DAT. Then I'd send the DAT by mail to the developer.

Aside from "Atlantis II", what were you doing throughout the 90s before you started working with Ubisoft?

I just kept working on "Adibou" titles. That was an ongoing project for around seven years, with several titles like "Adibou 1", "2" and "3", as well as "Adiboud'chou" (his little brother) and the "Adi" series (his older brother).

And what were your first projects at Ubisoft?

My first real project with Ubisoft was "Donald Duck: Quack Attack" in 2000, and the second one was "Donald Duck PK" in 2002. Whilst working on those games, I had mentioned to my boss that I really liked "Tom Clancy's Splinter Cell", which had come out in 2002, and he later phoned me about that. When I answered, he asked, "Are you sitting down, Fred?", and I was like, "No, I'm standing", and he said, "I think you should take a seat". So I did, and he then said, "You're going to be working on the next Splinter Cell". It was a pretty exciting revelation, and the best thing he could have told me. The first title we did was "Ghost Recon 2", before moving on to "Pandora Tomorrow".

Didn’t you get a BAFTA Nomination for your work on the Tom Clancy games?

Yes, for “Pandora Tomorrow”, “Double Agent“, “Ghost Recon Advanced Warfighter” and “Ghost Recon Advanced Warfighter 2“. I never won anything, but I’m happy that I was nominated.

And then came “Far Cry 2”, which sold a few million copies.

Yes, but actually, I worked on voice recordings for the first “Far Cry” as well, and then did all the vehicle recordings for “Far Cry 2“.

It’s true that “Far Cry 2” sold well, but personally, I think it’s the worst entry of the series. The first one was groundbreaking, but the sequel was a bit boring and didn’t have a lot of action in it. But the later entries were much better, I think.

To what extent do you think sound design affects the success of a game?

A video game is a collective creation, so if it turns out bad, it’s not because of just one thing. Obviously, good sound design is better than bad sound design, but I’ve never seen a situation where excellent sound design changed people’s opinions about a poorly-received game. The Metacritic ratings and reviews usually don’t mention people in the audio team anyway.

"Life is Strange" is a game that was well-received for defying conventional gameplay. How did you end up working on that, and how does the unconventional nature of the game affect your work process?

I know some of the guys in the audio department at the developer, Dontnod Entertainment, since they had previously worked at Ubisoft. The video game industry is like a merry-go-round where people move between companies quite often, so the community stays pretty small. We'd worked together on past games, and they called me in to do foley and sound design work on "Life Is Strange".

The game required me to make a lot of foley assets that set the right atmosphere. When two people are talking in a room, they're usually standing close to each other, and their movements have to reflect that. So the audio assets have to have the proper intensity of footsteps, body movements and such. The sound team and I would do a lot of work for two months on an episode, take a one-month break, and then start another episode. That was a major difference from my past games. With "Splinter Cell", I'd work for four to six months at a time, and for "Dishonored 2" it was a two-year process. But with "Life is Strange", we took our time and went episode by episode in order to understand the story and get the details of our recordings right.

Did your work process change with “Life Is Strange 2“?

No, it’s largely remained the same. We just had to rerecord foley, textures and atmospheres because the story and settings are different in this game. Also, since the plot is set in Seattle, instead of Oregon, we went looking for new nature recordings of things like the wind and the ocean, as well as animal noises that better reflect that setting.

What’s the typical size of the sound teams you’ve worked on in the past?

For “Life Is Strange”, we had five people on the audio team at Dontnod, in addition to the programmers who integrate the assets into Wwise; I think that’s a typical size for a sound team.

By the way, the term “sound designer” is something I get called less and less nowadays. The label “Audio Artist” gets used for me instead, since I make the audio assets. The programmers who are responsible for integrating those assets into Wwise or FMOD are now being called “sound designers”. It’s an interesting shift.

How do people in France generally view the work of sound designers for games?

We love music and audio in France, but I don't think the French media give sound designers the kind of credit they get in the UK and US. In France, the game reviews might say something like, "The audio team did such and such"; all the audio professionals get lumped together in one group, whether it's a sound designer, audio director or foley artist. The critics here mainly review the music and have little interest in talking about other aspects of audio, and we don't have any awards for sound design or foley like the BAFTAs do. Not that I have any problem with that though. I'm happy that I have a job and can be a part of the interesting projects that I work on.

Tell me about the prominent game companies that exist in France.

In Paris, the big one is Ubisoft, but we also have Dontnod ("Life Is Strange", "Vampyr"), Quantic Dream ("Detroit: Become Human"), Spiders ("GreedFall") and Amplitude Studios ("Endless Space 2"). There are also some game studios in Lyon, like Arkane Studios ("Dishonored 2", "Prey") and Ivory Tower ("The Crew"), although Ivory Tower was bought up by Ubisoft in 2015.

Given their size and reputation, what was it like working with a major studio like Ubisoft? Does the work of the Paris branch differ from the ones in other countries?

When I did most of my work with Ubisoft in the 2000s, the Paris offices were being asked to handle audio tasks for other branches; we were so well-staffed that we could reinforce the foreign branches on their games when it came to audio. But as the company grew, the different branches each created their own sound department, with an in-house Audio Director and their own sound designers. So now they only use the Paris offices for support during crunch time.

When you have a major French studio like Ubisoft that can impact the video game industry by buying smaller companies or hiring a lot of talent, how does that affect the game audio scene in Paris?

Y'know, people don't even consider Ubisoft to be a French company anymore, since most of their work is done in Quebec City and Montreal now, even though they still have studios in Paris. But as far as their effect on game audio goes, I don't really know, because I'm not an employee there anymore. At this point in my career, I prefer to work as a freelancer rather than be a full-time employee at a company like Ubisoft, even if that means less work during the year. But I have my independence, and if I get contracted to work with a developer like Dontnod, I may only spend a few days at their studio, and it's mainly for the fun of meeting my colleagues there. I don't have to liaise with the company's production pipeline, like the artists or animators. I take my orders from the Audio Director and deliver my assets to him.

How would you rate the experience of working with major studios versus indie ones?

For me, there’s not much difference between working with independent or major studios. I’m currently working with an independent studio in Taiwan on a small game, and I do my best on that like always. But working with medium-sized studios like Dontnod or Arkane is the most comfortable, because you can get to know everybody on the team.

Let's talk about your sample library company, Red Libraries, which you created in 2014. What inspired you to do that?

I had already been creating sound effect packs for other companies like Zero G, SoundMorph and Big Fish Audio, and one of my friends was like, "You should make your own company". But I was reluctant to deal with all the day-to-day work of creating metadata for files, setting up a website, managing the sales, etc. Eventually though, I listened to what people were saying and set up Red Libraries, but to be honest, I should have started sooner. Today's technology allows anyone to buy a recorder and make their own effects, so the industry doesn't need sample libraries the way it used to ten years ago. Also, there are tons of websites that sell sound design tools now, from the very best to the worst. But my sample sales still earn me some money that I can reinvest into my work, so I don't mind doing it. Also, when I make packs for other companies, I get a fee when they initially contract me, and then 18-20% in royalties. It's like signing a record deal (laughs).

What has been the thought process for making and releasing sound packs? Did you have any ideas for how you wanted to build your product range?

My initial desire was to release packs that focused on sound textures, like wood, water and metal. I then moved on to record things like windmills and medieval atmospheres in castles. I later worked on a movie that was set in Paris, and it required us to record many different city sounds. So I would rig a lot of microphones onto my motorcycle in order to record the sounds of the city as I drove around. For the future, I may focus on sci-fi packs and stuff that’s heavy on sound design, rather than foley and field recordings.

How have your sound packs been received by the industry? Have you gotten any positive feedback?

I did win an award from MusicTech for the CyberStorm pack with Zero G, which felt good. That pack has done well, I think.

You’ve said in past interviews that you were inspired by movies like “Terminator” and “Transformers” when making the “CyberStorm” pack, because you couldn’t find those kinds of sounds online. But when I look online, I find lots of those kinds of packs, and they’re very popular nowadays. What did you mean by that?

But it wasn't like that before, especially in the 2000s. I think it's easier to make those kinds of futuristic sounds nowadays because they don't require a recorder or microphones; granular synths and samplers like Kontakt are enough. Back in the day, futuristic sounds were made mainly by recording acoustic sounds and then running them through hardware samplers for processing, in addition to layering in synth sounds. It required a lot more work than today, which is why the number of sci-fi themed packs online has increased.

How did you make your sound library for SoundMorph, “Future Weapons“? 

I started off by mentally creating two separate factions. I might have called one of them "The Resistance" and the other one "The Order". But the point was just to have two camps, with one being more technologically advanced and the other more rustic, with Mad Max-type weapons. Making that kind of division in my mind allowed me to create a good contrast of styles that I then used to fill out the soundscape for that pack.

In your opinion, what’s the best store to get sound libraries from?

I really like the stuff Frank Bry releases with The Recordist, as well as HISSandaROAR from Tim Prebble, who is based out of New Zealand. Those are the companies I check first, and afterwards I might visit other websites like SoundMorph or A Sound Effect.

In the film world, especially for superhero movies, it seems like soundtracks and sound design have become a bit standardized. We hardly see today's composers having the impact of a John Williams by creating memorable themes with crossover appeal; an exception might be Hans Zimmer, perhaps. Has that kind of thing happened in the video game world as well, where music and sound design start to sound the same in every game?

The thing is, when you get hired to work on a project these days, unless it's something major like "Pirates of the Caribbean", you aren't given much time to complete your work. That's why, if I asked you to sing the theme song from any recent Marvel movie, you wouldn't be able to; so much of their music sounds repetitive, and a lot of that has to do with the short deadlines, which force composers to write music that sounds like what has already been successful in the past. In the game world, we saw that kind of change take place after the "Transformers" and "Tron" movies, which made robotic construct/deconstruct sounds very popular, but I think that phase is over now.

As someone who has recorded sound design and foley for a lot of war games, can you tell me what that process is like?

I can tell you about “Ghost Recon”, since I did all the sound design and weapons recording for that one.

Recording guns is quite difficult in France, and so is getting permission to record them on military property. It's not like in the US, where you can drive out into the desert and record there. France is quite small, so if you do want to record weapons, it's easier to do it indoors. I spent a lot of time watching war movies and documentaries, and bought whatever weapons I could so I could learn which frequencies and velocities were important to focus on when it comes to gunfire. But in the end, I never recorded a real M16 rifle for "Ghost Recon". The gun sounds in the game were all approximations of real weapons that I created myself, but it worked well because I made the sounds as sharp as I could, so as to imitate the sound of real gunfire. To be honest, I think the sounds you hear in movies and other games are quite tame. Having spent three years in the military, I know how loud gunfire can be, and I can imagine what the EQ curve looks like on other developers' weapon sounds. There seems to be a cut in the mid-range of their gunfire frequencies, which smooths out the harshness.

One of the difficult things to do when recording sounds for a war game is to create gunfire that sounds different between the player’s camp and the enemy camp. They can’t both sound the same, otherwise you can’t tell who’s doing the shooting. I think companies like EA DICE do a good job in their “Battlefield” games of sonically separating gunfire from different camps.

As someone who has played a lot of the games you’ve worked on, I’d like you to tell me an interesting fact about the following games:

Rayman: That was a compilation of different Rayman minigames, called “Rayman Raving Rabbids” and was a collaboration between a bunch of companies. I just worked on a small part of that.

Watch Dogs: The audio work for that game was shared between different companies, so again, I only worked on a part of it. Most of my work was for the audio related to camera footage shown on your phone during the hacking process.

Remember Me: That was the first game from Dontnod, and it was quite futuristic, which required me to use samples from CyberStorm (laughs).

Dishonored 2: That game had a steampunk setting, which required a lot of attention to detail, since it contained a lot of objects that were very intricate or delicate, like small luxury cigarette boxes or 18th-century typewriters. So doing foley for that kind of thing was meticulous; we had to create a lot of environment and atmosphere sounds, and many of them required additional processing in post-production to sound right.

Let’s talk sound design. Since you mentioned CyberStorm, can you tell me how you make robot sounds for a game?

The first thing I do is analyze the animation or images of the robot itself to see what it's made of. I want to know what kind of parts it has, how many joints there are, and what kind of metal is involved. Then I search for sounds that match what I see. Only then do I load them into my DAW and start layering.

For robotic construct/deconstruct sounds, I might turn to iZotope Stutter Edit or a synth with a good LFO. But the most important thing is to layer sounds; the main sound is always a composite of layered materials, and the stutter sounds are just playing a supporting role. For “Remember Me”, I split one of the robots into five parts: head, body, arms, legs and footsteps. Each split might have been around ten tracks, for a total of about 50.
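
To make the layering idea concrete, here is a minimal Python sketch of summing a few stems into one composite per robot part; the file names, part split and normalization are hypothetical illustrations, not his actual "Remember Me" session.

```python
# Minimal sketch of the layering idea above: sum a few stems into one
# composite per robot part. File names and the exact split are hypothetical.
import numpy as np
import soundfile as sf

ROBOT_LAYERS = {
    "head":      ["servo_whine.wav", "metal_tick.wav"],
    "body":      ["hydraulic_hiss.wav", "plate_rattle.wav"],
    "arms":      ["joint_creak.wav", "motor_hum.wav"],
    "legs":      ["piston_thud.wav", "gear_grind.wav"],
    "footsteps": ["impact_low.wav", "debris_scatter.wav"],
}

def mix_part(stem_files, sample_rate=48000):
    """Sum a list of stems (folded to mono) into one composite layer."""
    stems = []
    for path in stem_files:
        audio, sr = sf.read(path)
        assert sr == sample_rate, f"unexpected sample rate in {path}"
        if audio.ndim > 1:                 # fold stereo stems to mono
            audio = audio.mean(axis=1)
        stems.append(audio)
    mix = np.zeros(max(len(s) for s in stems))
    for s in stems:
        mix[:len(s)] += s
    return mix / max(1.0, np.max(np.abs(mix)))   # simple peak safety

for part, files in ROBOT_LAYERS.items():
    sf.write(f"robot_{part}_composite.wav", mix_part(files), 48000)
```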

And when do you use a synth, as opposed to a sampler for your sound design work?

For games like “Vampyr” or “Life is Strange”, I don’t need any synths, unless I’m creating a low bass rumble for something ominous. But for a game like “Remember Me”, I often used synths to create sci-fi sounds that fit the game; I remember using some Reaktor ensembles to make mangled synth sounds.
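
As an illustration of the kind of low, ominous rumble he mentions, here is a minimal sketch (my own, not from the interview) that layers a sub-bass sine with a slow swell over low-passed noise; all frequencies and levels are arbitrary choices.

```python
# Rough sketch of an ominous low rumble: a 35 Hz sub tone with a slow swell,
# layered with low-passed noise. All values are illustrative.
import numpy as np
import soundfile as sf
from scipy.signal import butter, lfilter

SR, DUR = 48000, 8.0
t = np.linspace(0.0, DUR, int(SR * DUR), endpoint=False)

sub = 0.5 * np.sin(2 * np.pi * 35.0 * t)            # sub-bass sine
sub *= 0.6 + 0.4 * np.sin(2 * np.pi * 0.2 * t)      # slow LFO-style swell

b, a = butter(4, 120.0 / (SR / 2), btype="low")     # keep only the low end
rumble = lfilter(b, a, np.random.randn(len(t))) * 0.8

mix = sub + rumble
mix /= np.max(np.abs(mix))                          # peak normalize
sf.write("rumble_sketch.wav", mix, SR)
```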

For samplers, I use Kontakt now. I recorded all my footsteps for “Life Is Strange” into Kontakt and made different kits for each character. Then I’d trigger them using MIDI notes, line the notes up with the animation, and export them as audio.
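
As a rough sketch of that trigger step, here is how footstep events could be written out as a MIDI file with Python's mido library; the timings, note mapping and file names are hypothetical, not his actual session.

```python
# Sketch: write footstep events as MIDI notes that a sampler kit
# (e.g. one zone per foot or surface) can play back.
import mido

TEMPO = mido.bpm2tempo(120)        # microseconds per beat at 120 BPM
TICKS_PER_BEAT = 480

# (time in seconds from the animation, MIDI note) -- e.g. 36 = left, 37 = right
footsteps = [(0.50, 36), (1.10, 37), (1.72, 36), (2.35, 37)]

mid = mido.MidiFile(ticks_per_beat=TICKS_PER_BEAT)
track = mido.MidiTrack()
mid.tracks.append(track)
track.append(mido.MetaMessage("set_tempo", tempo=TEMPO))

last_tick = 0
for seconds, note in footsteps:
    tick = int(mido.second2tick(seconds, TICKS_PER_BEAT, TEMPO))
    track.append(mido.Message("note_on", note=note, velocity=100,
                              time=tick - last_tick))
    track.append(mido.Message("note_off", note=note, velocity=0, time=60))
    last_tick = tick + 60

mid.save("footsteps_character_a.mid")  # import into the DAW and aim at the kit
```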

How equipped is Big Wheels Studios to record foley? Are you able to do most of your work there?

I can do a lot in my studio, but I prefer to record things like footsteps outside. When I listen to different sample packs that contain footsteps, I can hear the acoustics of the booth they were recorded in, and I don't like that, so I try to avoid doing the same thing. It feels fake to hear the reflections of a recording booth on a sound that's used for an outdoor scene. That kind of thing always becomes an issue when you record sounds with a loud transient; each room has its own sound, so it's hard to avoid hearing the reflections. But if I have to record footsteps in my studio, I'll first record the sound of the room before each take, and then I import the room sample into iZotope RX during post-production. That way, I can use it as a reference to de-noise the recording, so that as much of the room sound disappears as possible. But I don't overdo it, because too much processing can create undesired artifacts. RX is very handy. You can even use it to remove bird sounds that were accidentally caught in your outdoor recording.
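
RX does this far more intelligently, but as a simplified sketch of the principle, here is a crude spectral gate that uses a room-tone recording as a noise profile; the file names, threshold and floor values are arbitrary illustrations.

```python
# Crude stand-in for RX-style de-noising: measure the room tone per frequency
# bin and attenuate bins in the take that sit close to that level.
import numpy as np
import soundfile as sf
from scipy.signal import stft, istft

def to_mono(x):
    return x.mean(axis=1) if x.ndim > 1 else x

take, sr = sf.read("footstep_take.wav")    # the noisy performance
room, _ = sf.read("room_tone.wav")         # silence recorded before the take
take, room = to_mono(take), to_mono(room)

NPERSEG = 2048
_, _, R = stft(room, fs=sr, nperseg=NPERSEG)
noise_profile = np.mean(np.abs(R), axis=1, keepdims=True)   # per-bin room level

_, _, T = stft(take, fs=sr, nperseg=NPERSEG)
mag, phase = np.abs(T), np.angle(T)

# Gate bins near the room-tone level; the 0.1 floor avoids hard artifacts.
gain = np.clip((mag - 2.0 * noise_profile) / np.maximum(mag, 1e-9), 0.1, 1.0)
_, cleaned = istft(gain * mag * np.exp(1j * phase), fs=sr, nperseg=NPERSEG)

sf.write("footstep_take_denoised.wav", cleaned, sr)
```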

What are some of the challenges of recording underground versus overground? And what about daytime versus nighttime recording?

The main difference is the acoustics, of course, but there's also the signal-to-noise ratio you have to deal with overground. When I do outdoor recordings, I try to have the same volume level in my headphones as what I hear in real life, because turning up the headphone gain becomes problematic when there's a lot of noise in the environment.

I remember a time when I recorded some footsteps in the grass. There wasn’t much noise going on around me, but for some reason I heard a lot of noise in the recording when I listened back to it. But thankfully, I was able to fix it in post-production. When you process footstep recordings through a program like RX, you do it one footstep at a time, and each one is only about a second long. So after slicing the recording into individual footsteps, de-noising them and adding fades, you end up with very little noise in the sample, which is great.
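
Here is a small sketch of that slice-and-fade step (the de-noising itself is left to a dedicated tool like RX); the recording name, clip length and fade length are illustrative assumptions.

```python
# Sketch: detect each footstep, cut roughly a second around it, fade the edges.
import numpy as np
import librosa
import soundfile as sf

audio, sr = librosa.load("grass_footsteps.wav", sr=None, mono=True)
onsets = librosa.onset.onset_detect(y=audio, sr=sr, units="samples")

CLIP_LEN = sr                   # about one second per footstep
FADE_LEN = int(0.02 * sr)       # 20 ms fade in/out
fade_in = np.linspace(0.0, 1.0, FADE_LEN)

for i, start in enumerate(onsets):
    clip = audio[start:start + CLIP_LEN].copy()
    if len(clip) <= 2 * FADE_LEN:
        continue
    clip[:FADE_LEN] *= fade_in
    clip[-FADE_LEN:] *= fade_in[::-1]     # mirrored fade-out
    sf.write(f"footstep_{i:02d}.wav", clip, sr)
```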

I love to record at night. That's when I prefer to record footsteps, because it's quieter. Environment sounds, however, like animals, soundscapes and human interactions, are determined by the scene in the game; atmosphere and nature sounds are different from day to night, so you have to be mindful of that when recording. If you need wind or storm sounds, then the time of day doesn't really matter, but animals and the presence of humans aren't like that.

Cool. Can you tell me about the equipment you have in your studio at the moment? What gear are you using?

Things have changed at my studio in recent times. I got rid of my console because I don't record musicians here anymore. I also can't imagine having to make recalls on the console every time the development team makes a change to the game and I have to re-edit a session. So I don't really need that much analog gear anymore. I have an Avalon pre-amp and a Tube Tech compressor for recording vocals and single instruments. For soundcards, I have an Apogee and an Apollo from Universal Audio, which have low latency. I sold my Barefoot monitors and bought some Focals and Genelec 5.1 speakers. The rest of my tools are software that I use on my two Macs.

Do you still use Nuendo and Wavelab?

Yes I do. I’ve used Steinberg products since the beginning, starting with Cubase, and now Nuendo. Personally, I’m not a fan of Pro Tools. It’s the industry standard, but not because it’s the best; it’s rather because Avid was so early on the scene and became popular quickly. Also, I use a lot of MIDI tools, which Pro Tools isn’t that good for. Regardless, I have Pro Tools in case clients want me to use it. But in the game world, I see people using a lot of Reaper, which is very cheap, as well as Cubase and Nuendo.

Nuendo is a good compromise between audio and MIDI functionality. It also has some new features like batch-creation, a randomizer for sound creation, and it allows you to create a direct link between your session and Wwise, so you can send your audio assets directly into it. I’ve been talking with the guys who work at Steinberg, and I think they’re deciding to focus on penetrating the game industry with Cubase, since Pro Tools is so well-established in film and TV already.

What kind of microphones do you use in your line of work? Are there any industry standards for that?

I don't think we have standard mics for foley recordings. It's all about your budget. The most important thing is not the brand of microphone, but rather the mic placement. If you have the "best" mic in the world and you put it under a rock, you won't get a good recording of much. The best foley artists know where to place their mics. Also, foley artists who work with game audio don't have a lot of time to capture their recordings like studio engineers do. I remember when I used to work in studios, the drummer would play his kit and the engineer would take his time to get the mic placement right, moving back and forth between the drums and the console until he got the sound he wanted. We don't have that luxury in foley. If you want to record the sound of an approaching train, you can't wait until you feel comfortable. You're either ready when it passes by or not.

So what kind of gear do you recommend for aspiring foley artists?

A recorder from Sound Devices is a good piece of gear to have. Their mics, pre-amps and converters are usually good, but they can be expensive, in which case you might prefer a small Sony PCM-D100. I also like mics from DPA, the Sennheiser MKH 8040, 8050 and 8060, and the Sanken CO-100K, which has a great extended range. But your mic preference should depend on what you want to do. If you want to be a field recordist, the Sanken is a good choice because its range goes up to 100 kHz. That gives you a huge amount of headroom to time-stretch or pitch the audio later if you need to.
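
A quick illustration of that headroom: if a 192 kHz capture is simply played back four times slower (varispeed), it drops two octaves, so detail that sat around 80 kHz lands near 20 kHz, still inside the audible band. A minimal sketch, with a hypothetical file name:

```python
# Varispeed pitch-down: keep the same samples but tag them at a quarter of the
# original rate, so playback is four times slower and two octaves lower.
# (A dedicated pitch/stretch tool would also preserve duration; this does not.)
import soundfile as sf

audio, sr = sf.read("sanken_192k_recording.wav")          # e.g. sr == 192000
sf.write("pitched_down_two_octaves.wav", audio, sr // 4)  # played back at 48 kHz
```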

Using a contact mic will allow you to get close to the sound and capture the details of it, so for that I might use my Barcus Berry Planar System, which is normally used to record pianos or cellos, but you can use it for other things too, like footsteps, alongside a Neumann KMR 82 shotgun mic. So you place the Barcus Berry on the floor, aim the Neumann at your feet, record the footsteps, and later adjust the balance between the two recordings. When you record footsteps in a big indoor space, like an auditorium, I would recommend having a close mic to record your feet, but also a room mic that’s five to ten meters away. That way, if the game character is ever approaching or leaving a point, you have two recordings that you can adjust to create the impression of distance. But you need a very quiet room for that, because the room mic can easily pick up the sound of ventilation or other sounds that the close mic won’t, and that can interfere with using both of them together.
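
As a sketch of that close/room balance, here is one way to crossfade the two aligned recordings by distance using an equal-power curve; the file names, distances and 10 m limit are hypothetical, and in practice the game engine would do this blending at runtime.

```python
# Sketch: blend a close-mic and a room-mic recording of the same footsteps
# according to how far away the character is supposed to be.
import numpy as np
import soundfile as sf

close_mic, sr = sf.read("steps_close.wav")
room_mic, _ = sf.read("steps_room.wav")
n = min(len(close_mic), len(room_mic))
close_mic, room_mic = close_mic[:n], room_mic[:n]

def mix_at_distance(distance_m, max_distance_m=10.0):
    """0 m = all close mic, max_distance_m = all room mic (equal-power fade)."""
    x = np.clip(distance_m / max_distance_m, 0.0, 1.0)
    return np.cos(x * np.pi / 2) * close_mic + np.sin(x * np.pi / 2) * room_mic

for d in (1, 5, 9):
    sf.write(f"steps_at_{d}m.wav", mix_at_distance(d), sr)
```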

Tell me about the different categories of mics you need as a foley artist.

You need cardioid, hypercardioid and omnidirectional mics. When I record footsteps outdoors, I use a cardioid or hypercardioid. A contact mic can be useful as well for added texture. For ambience, I'll use two or three cardioid mics in spaced Left-Right pairs or Left-Center-Right triplets, or an omnidirectional mic, like one from DPA.

In studio recordings, you occasionally hear debates about tube mics versus solid state mics, or vintage versus modern mics. What’s the most important aspect of a microphone that’s used for foley recording? 

The ideal is to have a versatile mic that offers both width and precision, but normally you use separate microphones that each specialize in one of those aspects and combine them to record a source. But like I said, it comes down to your budget. It's like choosing a speaker; you have to listen to what the microphone sounds like and pick the one you like, based on what you can afford.

If you could only choose one mic to use for a foley recording, what would it be?

I would probably take the Neumann U87, because it offers cardioid, omni and figure-eight patterns. It doesn't have a huge range, but 20 Hz – 20 kHz is pretty good, and if you have a good pre-amp you should be fine. The DPA 4060 is also great for recording the sound of the clothes people wear as they move around.

What if you could use only two mics for a recording?

Then I would take one hypercardioid, like the Sennheiser 8070 or Neumann KMR 82, along with the U87 or an AKG C414.

Thanks a lot Frédéric for this interview. It was very educational. What does the future hold for you as a sound designer and foley artist?

I'll continue to work on "Life is Strange 2", and I may have a new game project coming in the summer as well. I never take summers off, because I know new projects start then. So I'm looking forward to that.