In the wake of the success of Remedy Entertainment’s “Quantum Break” video game, which was released last month to glowing reviews, I was able to chat with the game’s BAFTA-nominated Finnish composer, Petri Alanko. The 46-year-old had already tasted mainstream game music success after scoring the “Alan Wake” series a few years ago, and is receiving even greater recognition this time around. Check out what he had to say about his history, studio gear, and career as not only a composer but also a trance music artist below.
Hi Petri. Thanks for being willing to answer a few questions about your work. Can you tell me about how you became involved with scoring video games? Was it a deliberate pursuit of yours from early on?
Composing for picture had always been my career of choice, ever since I was a kindergarten boy. Back then, however, there was no gaming industry whatsoever, so my dream was just a castle in the clouds. It took decades before even the slightest semblance of a market for that kind of music-making emerged. Finland, you see, wasn’t exactly the most obvious country for the gaming industry in the early 80’s or even the 90’s.
I started out by getting quite heavily involved with pop music after my high school years. For a long time I survived thanks to pop productions and songwriting, but eventually my heart wasn’t in it anymore and I decided to move on to what I would actually enjoy. When the word got around, somehow I received offers for only those projects I really wanted, and the less attractive projects disappeared. Little by little I got involved with the IT business, content providers, etc, and when those people left their original companies and formed their own, they kept calling me, thankfully.
A friend of mine, after a few career moves, ended up at Futuremark, which was partially owned by Remedy Entertainment, and during one of their mutual summer parties, my friend was asked whether he knew anyone capable of writing modern orchestral music. He gave them my number, and my phone rang the next week. I had dreamt about making music for games, but couldn’t think of any Finnish company I’d love to work with back then – except Remedy. We met soon after, I was shown some material, and we decided I’d write a short demo; I composed music for a little cinematic, in which “Alan Wake” appears for the first time.
So, solid dreams, hard work, some luck, and a lot of contacts led me to my current career.
So scoring “Alan Wake” and your success with that is what led to you being offered “Quantum Break”?
Yes. I’d composed the OST for “Alan Wake” and all the DLC packages, as well as “Alan Wake’s American Nightmare”, and each project was incredibly cool and well-steered; it seemed I was a good fit for what Remedy wanted. Our communication worked really well during the development of those titles, and I felt we had a fruitful dialogue going on. They know I don’t take this nonchalantly. I put in my full time and effort, and I never count hours.
After “Alan Wake’s American Nightmare” was done, and the game itself was waiting for release, Saku Lehtinen (Art Director at Remedy) contacted me with some music requirements for a new presentation. The new idea looked cool, and pretty soon afterwards I received the first draft of the pitching cinematic they had prepared for the concept…and it looked really good, even as a rough prototype. I was sold immediately.
Most companies are a bit scared about when to bring the composer in. In my opinion, the longer the gig, the better the music you’ll get by bringing in a composer early on, unless you just want fillers and pointless war-elephant drum patterns.
Since “Alan Wake” is considered a sort of spiritual prequel to “Quantum Break”, were you in any way prepared for scoring your latest game by working on the first one? If so, in what ways?
Well, both are in some way dealing with matters happening inside the characters’ heads, but in the case of “Alan Wake”, he pretty much was all alone against everything, as opposed to “Quantum Break”, where events are causing ripples in everyone’s lives. Further differences: Alan Wake saved his wife, while Jack Joyce saves everyone; Alan Wake sacrificed himself knowingly, while Jack Joyce acts against his own will. Nevertheless, both stories were perfect dramas. They had a very deep narrative to draw from, and all the characters (including the antagonists) offered a humongous amount of material to write for.
What I loved in “Alan Wake” were the delicate, quiet notes. At some point, someone at Remedy said, “It would be nice to put something like that behind some of the busier scenes”. I’m not sure if it was the Audio Lead, Richard Lapington, but I had quite a few discussions with him too, about the score’s development, and used that feedback a lot. Also, since some of the raw material for my instrument sounds came from Remedy’s audio team, I must praise their work here: what I got from them was all top-notch stuff.
There’s too much library music around nowadays; far too many developers and even TV producers are relying on pre-composed music to accompany their characters from beginning to end, and somehow that has almost destroyed some of the magic of games and TV – and even the movies. Why rely on meaningless pieces of looped music underneath a scene? Because it fits there? Well, in that case, why not just create the whole TV show/movie/game with ready-made clip art instead of actors or graphic artists? The music has to have meaning, and you can achieve that only through custom compositions written for the occasion. I have to add that I have nothing against music libraries, but they don’t really raise any quality thresholds in daily TV/games/movies. In ads and commercials they’re okay, maybe in some meaningless little apps as well, but if the developer has any self-esteem at all, I’d adamantly maintain that they should use bespoke music.
It’s interesting that you say that. Many of today’s video game scores for AAA games like “Halo” or “Call Of Duty” aim to describe themselves as “epic” or “grand”. Were there any stereotypes you wanted to avoid in creating the Quantum Break score, so as to not just make another collection of orchestral soundscapes and pulsing synth sounds?
Well, ‘epic’ should be defined by the nature of the music, and if the scene’s been directed and written properly, it doesn’t need 120 drummers to back it up, or an ogre choir, or 1/128th-note guitar solos. Forcing something “epic” usually makes a scene uncomfortable and bleak. I very much tried to avoid the clichés of sci-fi soundtrack music, like the typical “action drumming” sounds everywhere. Instead, since Jack had his determination and clear goals set in his mind, the movement could be subtle, minimally underlining and emphasizing the game’s events. Of course, there had to be fast-paced music or otherwise it would have been a bit distracting, but there’s a surprising amount of what appears to be low-key score music that caters to the motions, motivations, and urges behind the action itself. In most cases there was a huge amount of ambience present in the background as sound effects (by the audio team). Initially, the directors felt that those scenes might have survived with sound effects only, but it was later decided to add subtle music to those scenes as well.
In “Quantum Break’s” case, employing an orchestra would have been distracting and made the perspective more grandiose, and we decided early on to try and survive without any orchestras, to keep Jack closer to the player holding the controller. We wanted to keep in mind that he was just a guy in a cab in the beginning. Since his change wasn’t voluntary, I decided to use much longer and subtle musical arcs than most so-called superhero games would typically use.
Much of the workflow used in making electronic music has become standard across other genres, such as using digital synths and plugins, MIDI controllers, etc. Has your past work in the trance genre served as an advantage for your work as a game composer?
I would say it works both ways. There are still a surprising number of people who make the assumption, “you’re doing club stuff, so you’re a DJ and cannot play any instruments”, and I actually don’t mind that at all – the truth about what I can do is already out there. However, since the build-ups in trance music are lengthy, some over ten minutes, there’s a lot of tension and release involved, which has been a good exercise for me.
Considering the tools, I’ve had pretty much the same setup for ages. Only the computer processing speeds and the number of plugins have changed; the core of my workflow has remained almost the same since 1986. For example, after “Quantum Break”, I’ve been using a Roli Seaboard as my main controller – it forces me to think differently. And now I’m seriously considering switching over to Cubase.
I’ve heard about your love of modular synths. In terms of comparing them to subtractive synths, would you say that there’s a sonic difference that can be attained using one over the other? Or is it merely a question of workflow and customization that modular synths offer?
It’s a combination of reasons. For me, music is a set of symbols and colors in my head, and the analogy applies to modules in my frame, too. A basic oscillator is colored as a pale grey-blue circle, whereas a QMMG filter is an orange-brown pentagon with bright red stripes…don’t call the shrinks, haha.
Although I tend to construct mostly subtractive signal paths with my modular stuff, the nature of each patch makes them slightly more alive than a subtractive synth, as the signal path isn’t that clean. Usually, with simple subtractive synths like a Roland SH-101, you can learn to predict its sounds simply by looking at the settings on the knobs. The same eventually applies to any modular system, if you use it daily and know the pieces thoroughly.
Modular synths appeal to me as puzzle pieces that can be used to construct something new; a bit like a collage. But I’m not criticizing the ready-built synths either, nor the digital or the hybrid machines, as I own some from each camp.
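The subtractive path Alanko describes – a harmonically rich oscillator feeding a filter that strips away overtones, as on an SH-101 – can be sketched in a few lines of code. This is purely an editorial illustration, not anything from his actual rig; the naive sawtooth and one-pole filter below are deliberate simplifications:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def saw_osc(freq, dur):
    """Naive sawtooth oscillator: a bright, harmonically rich source."""
    t = np.arange(int(SR * dur)) / SR
    return 2.0 * (t * freq % 1.0) - 1.0

def one_pole_lowpass(signal, cutoff):
    """One-pole low-pass filter: the 'subtractive' stage that
    removes upper harmonics from the raw oscillator."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / SR)
    out = np.zeros_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)  # smooth toward the input sample
        out[i] = y
    return out

# Oscillator -> filter: the classic subtractive signal path.
raw = saw_osc(110.0, 0.5)            # half a second of bright sawtooth
dark = one_pole_lowpass(raw, 800.0)  # darker, filtered version
```

On a hardware monosynth the cutoff knob is doing the job of the `cutoff` parameter here, which is why the sound is so predictable from the front panel alone.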
Something that really interests me is grain clouds, granularity, and resynthesis. I was drawn to the Synclavier very early on, but have never owned one. Now that there’s a company specializing in refurbishing synths like that, I’ve been considering adding some hardware resynthesis into my arsenal. But I’ve got no space in my studio, as most of my analog gear is already all over the place…in a rental space, at my parents’ house, in the garage, etc.
Concerning your work on “Quantum Break”, I’ve read on your website that you said “…a huge amount of custom sound libraries had to be created even before the main composing could commence”. Can you talk a bit about this process of creating your library? What are the steps you take to create something that doesn’t sound like a preset or sample pack content?
It’s partly psychological and partly about the craft. There were no suitable sounds for my work in 2011/2012, and I decided to start recording a lot of metal and glass to try to get something tonal from there. Typical glass recordings tend to merely be the sound of glass breaking or the rubbing of one’s finger round the glass rim, but the object itself can provide much more than that. The same applies to metal, piano frames, string instruments and other resonating real-world objects. Sound libraries are usually made according to certain usability aspects, and back when I started to create my own library, none of them offered anything new. Also, libraries of guitar feedback sounds were merely heavy metal pastiches and comedy takes, or just too short and clichéd.
Once I decided to begin making my own stuff, a lot of raw sounds ended up being used as convolutions, like cellophane paper, tinfoil, plastic and glass shards. Sometimes I adjusted the start and end points of the recordings, and sometimes I just let the echo reflections come and go whenever they wanted. Usually these recordings take 2-3 months, processing them can take 3-4 months, and creating instruments from raw samples with Kontakt, Reaktor, Iris2 or Kyma takes another 1-2 months – that was my initial cycle during Quantum Break. However, since the game needed a lot of material, the preparation phase took even longer.
Also, since some commercial library providers can be a bit pricey when it comes to sample usage rights in commercial products, I’ve always tried to make something myself instead of relying on something 2000+ other people are using daily. Again, I’m not criticizing the rights payments. It’s very much okay to be paid for your hard work, but some developers might encounter a few surprises when certain applications are released in the US App Store. I’ve heard of Spectrasonics charging $10,000 for a sound that was used as a warning signal in a piece of media. Whether that’s true or not, I don’t know.
What were some of your most-used tools for your latest score, in terms of both hardware and software? Any particular reason for these choices?
Well, a few things come to mind: Roland V-Synth XT, Prophet-6 and the Arp Odyssey, but mentioning only those is an understatement, as the range of my arsenal was much larger, depending on what I needed or wanted. For instance, quite a few tracks were done with the Roland System-1m and my analog modular synth, interfaced via Expert Sleepers’ Lightpipe and S/PDIF interfaces. I also used their Silent Way plugin.
Sometimes a track started out analog, but was transformed into a totally digital production towards the end, but I’d say about 80% of the music was analog, created using Oberheims, old Rolands, Korg, Arp, Sequential Circuits, Moogs, etc. I used to have a white-face Arp Odyssey, but like my dear old Prophet-5, they were both sold and replaced with remakes (Korg Arp Odyssey and Prophet-6) and boy do they sound great!
I’m very keen on using Universal Audio’s Apollo interfaces, and own three of those and a hefty load of their plugins.
I’ve had an Oberheim Matrix-12 and Xpander for ages, but I turned to Arturia’s Matrix V plugin when it came out. The sound was pretty much identical to my Matrix-12, but quite different from my Xpander. The former had had all its caps replaced, while the latter was in its original state. I had too much trouble with them, so I decided a more predictable solution had to be implemented.
I used a lot of Zynaptiq stuff, DMGAudio plugins, ValhallaDSP…but if I had to name a few key plugins crucial for the production, they would be Kontakt 5, Reaktor 5 (I hate Reaktor 6; it’s SLOW), Iris2, Alchemy, PitchMap, and Morph.
I usually choose a plugin based on its abusability, haha. Some plugins output interesting stuff when used against the recommendations. For instance, Celemony Melodyne was used heavily for the end of “Alan Wake”, and I even transposed an entire orchestral cue with it. Towards the end of “Quantum Break”, I used Melodyne’s spectral editing capabilities, and they ended up all over the soundtrack.
What equipment can one expect to find in your studio? Anything out of the ordinary that an average producer might never have seen?
Macbeth M5. I’ve had two – not at the same time, though. Also, a while ago I still had a summing mixer made of Neve 8086 input modules, but they kept on breaking down (they were scavenged from an old 70’s mixing desk) and I sold it. Then I acquired a Neve proprietary summing mixer and series 500 preamps and EQs. Also, my usage of the Neuron VS and Nuke controller has gotten a bit out of hand. I’m using it through Plogue Bidule, and it’s interesting: you put just about anything in, and out comes this monstrous sound, full of detail. Then there are the Haken Audio Continuum and Roli Seaboard controllers.
We’re living in the age of the digital revolution, where most of today’s music is made on laptops, using samples packs and preset sounds, which leads to a lot of same-sounding music being made. You’ve talked about how tiresome some of this can be. How has the availability of digital tools helped your work, and how have you balanced that out with the presence of analog gear?
It’s not about the tools, but rather the experience, knowledge, and the ideas. Put ten people with the same gear into the same room and let them make their own albums, and out comes (most likely) ten different albums.
Recall is a clear plus of digital. It’s appreciated greatly in my profession. Tools need to facilitate, and not burden or slow you down. In my case, the whole rig of digital gear is interfaced permanently to all the inputs and outputs of my analog instruments, and I know them all by heart, so when the production cycle kicks in, I can react quickly. Making everything accessible leaves more time to hone my ideas, and to concentrate on the core harmony and melody of the music.
However, I do think the so-called “producers” are losing some of their edge nowadays. What separates a regular idea from a great idea is the “campfire test”: whatever you create should still work around the campfire with a guitar or an acoustic piano. You can “produce” a mediocre idea into something clever, but the production flavors cannot mask the foul taste in your mouth if your core musicality is poor. This is why I’ve always kept each melody away from my DAWs and notebooks for at least two weeks before writing it down. It feels crucial to let things mature first: will they survive the test of time? Do they have enough longevity? I tend to route a lot of signals through my modular, and sometimes I need to process some recorded signals with iZotope RX, but most of the time I just let it be.
How are you preparing for your concert with the Helsinki Philharmonic Orchestra and Choir? Is translating trance music into an orchestral performance going well?
Yes, it is! We sold out the first concert in just under 15 minutes, and as I’m talking to you here, I just got a text from a friend on my phone stating “The 2nd one sold out”. We quickly arranged yet another concert for the same night, and luckily that was okay with the solo guests, orchestra and choir. It’s amazing, and it tells you so much about subcultures, their fan bases, and the eagerness of the performers. We’re all very, very thrilled and excited, and the versions of the tracks are going to be so cool, as it’s going to be very different from your average classical concert. There’s a possibility we might need an occasional 808 kick here or there, maybe something else too, but I’m not sure yet. It’ll be interesting and nerve-wracking, and I must get my hands back in grand piano condition, so I’m honing my Hanons and Czerny-Germers until August the 26th like a maniac, haha.