Yoad Nevo has been an in-demand mixer and producer as well as a digital audio guru for over twenty years. With numerous production and mix credits, as well as a major role at Waves Audio, he’s played an active role on both the musical and technological sides of the industry. I met up with him at his studios in London to talk about audio, plugins and his custom-made Neve desk.
Hi Yoad. Thanks for having me over to chat. Can you tell me how you got started?
I started at seventeen by doing a year at audio engineering school in Tel Aviv. During that time, one of the biggest studios in the city contacted my school about needing an assistant, and I was recommended for the job. A few months later, I started assisting on recording sessions.
During one of the sessions with a big artist, the engineer in charge had to leave, and the producer asked me if I could take over the session to record guitars. “No problem,” I said. In reality, I was nervous as hell, but I played it cool. I didn’t have a lot of experience with recording at that time, but since it was a guitar recording session, I knew I could get a sound I liked, as I’m a guitarist myself. I ended up engineering and co-producing the whole album after that session. It was a big album, which took a year to make. When it was over, the studio manager wanted me to go back to working as an assistant, but I considered myself an engineer at that point, even though I was only nineteen. So I left the studio to become a freelance engineer – and sat by the phone for a good few months (laughs). But then things started to pick up and I got work in different studios. Israel is a small place, so you get to work on all kinds of projects: pop, rock, classical, world music, electronica, etc. It was a good experience for me.
What was characteristic about Tel Aviv’s music scene when you started out, as opposed to what you later encountered in London?
In London and other big music scenes, it’s more natural to become an engineer that works mainly in one genre, like rock or hip-hop. But when I was younger, you had to be able to do everything. So I would record brass one day and then guitars on another, and then I started programming and producing. One of my first breakthroughs was with a band I produced, which became very successful. In addition to producing, I was a member of the band and co-wrote the songs. But I didn’t want to go on the road with them, preferring to stay in the studio, so I left before the album even came out.
You got royalty cheques from that though, right?
To be honest, I wanted to leave so bad that I signed whatever their lawyers gave me (laughs). I don’t regret it anymore, but back then I ended up looking for work whilst they were having success off the album I produced and co-wrote. That’s just how you learn, and it’s fine. But I’m glad that I chose to stay in the studio. I did end up getting a lot of work with big artists, and by the time I was twenty-five, I’d done almost everything you could do as a mixer. In addition to that, I had assistants in different studios that would assist me with engineering. So I might engineer the rhythm section of a track, leave the studio, and they would do the overdubs. I could then come back later to mix everything. This way, I could work on four or five albums at the same time.
Did you get ear fatigue doing that all day?
Not really. I never worked loud, and still monitor quietly, so it was never a problem.
You’ve said in past interviews that you had to teach yourself how to be an engineer as you went along. But wouldn’t you have to make mistakes at your client’s expense?
I sure did. Not too many, fortunately. At the end of the day, everything worked out and the labels and clients were happy, apparently, as they kept hiring me. I knew the desk and gear, so it was more about experimenting with things like microphone brands and placement. Those things stay with you for life. I don’t work long hours anymore, but if I had to work two days straight, I could do that, as I’ve done it so many times in the past.
Do you feel that working with Waves at such an early point helped your transition from analog to digital?
Absolutely. Back then, digital audio was in its early days, and you couldn’t do much with it. You couldn’t run many real-time plugins simultaneously; you had to process the audio and then bounce it. But even prior to working with Waves, I had already mixed an album in-the-box, in 1993. I had a Triple DAT system by Creamware, which could run one real-time plugin on the master bus. So I would have to solo a sound, insert the compressor on the master bus, bounce it, bring it back into the session and replace the original file. I would then process the sound with an EQ on the master bus, and did that for each instrument. When I wanted to use a reverb, I would bring all the faders on the mixer down, insert a reverb on the master bus, and start pushing faders up. I wouldn’t get any dry signal, and had to imagine how the instruments would be placed in the wet reverb. Then I’d bounce the all-wet reverb file and import it back in as an audio track. If there was too much reverb on one sound, I’d do it again until I got it right.
When I moved to London in 1998, I was still working on analog desks, but also had seven computers as a part of my setup, since you could only run a few plugins in real-time back then. I had four Pro Tools systems, with two of them being Native, a Creamware system, a TASCAM GigaStudio system and a SampleCell system. All of these would run together, and each computer handled a different thing. One would run all the plugins I used for guitars, another one would be for drums, another for reverbs, etc.
Having worked in both analog and digital realms, do you prefer one sound over the other?
I prefer the sound of analog, though I appreciate the benefit of using plugins, which is great for processing individual words in a vocal, rendering reverbs, editing, etc. The functionality can sometimes outweigh the depth you get from analog gear. The way I work now is the best of both worlds, since I have my Neve console for summing, but still work mainly in-the-box, with the exception of Q-Clone, which allows me to use the desk EQ.
Working in-the-box also allows me to recall my mixes easily. All I have to do is line up my faders to unity gain. I always hated doing recalls on analog consoles in the past. It takes two to three hours and never sounds quite the same as the original mix. So my workflow has changed, and this is how the benefits of digital come into play. In the old days, I’d have two days to mix a song, whereas now I can work on a mix for a few hours, and then listen to it on different systems at home, in the car or on headphones, which is where the real work comes in. I’d make comments to myself whilst listening, load up the session in just five minutes and tweak the mix. So I’m able to span my work over a period of a few days, but I end up spending roughly the same number of hours as before.
I see that you have a big synth collection here. Where did you get all that?
I just collected stuff over the years. I like my analog synths, but I’m a fan of digital synths too, which is why I very much enjoyed developing Element and Codex for Waves. I like how the analog and digital domains interact. For example, our new wavetable synth, Codex, was created using my analog synths, which I used to create the wavetables. It’s not a sampler, since the technology lets you use wavetables, allowing for more diverse manipulation, but you still have access to sounds that come from a MiniMoog, an SH-101 or a Korg MS-10.
Would it be fair to compare Waves’ Codex to Native Instruments’ Massive, since they’re both wavetable synths?
I love Massive. It’s a great tool, but there’s nothing to compare. The waveforms in Massive are still mostly classic waveforms, such as sine, square, etc. Also, Massive makes heavy use of artifacts in its wavetables, much like the classic wavetable synths of the 80s and 90s. In Codex, we strove to eliminate these artifacts. I’m not saying that artifacts are bad, but with Codex we took wavetable synthesis to the next level.
Another Waves product you’ve worked on is the NLS. What was the process like for making that?
It was a very lengthy process. We had to figure out what we wanted to capture from the desks, which involved running different test signals through them and taking different measurements of the electronic components for modeling. We did a lot of the experimentation and R&D on Spike Stent’s desk, and had it shipped to Tel Aviv for six months. Following that, when we had the process figured out, I sampled the 32 channels of my desk here in the studio. We did the same with Mike Hedges’ desk.
Regarding your own custom Neve desk, why haven’t you contacted Neve about making a new custom one, instead of using a desk from decades ago?
Because it’s a one-of-a-kind desk. The biggest size Neve used to make of this specific model was 48 channels, and mine has 60 channels, since I had two desks merged. The center section was taken from a desk in a film studio in LA, so the Master channel is eight-way, 7.1, which is great for me since I do a lot of surround mixing. It’s an old-school, Class-A, all-transformer desk from 1981, and it’s the first Neve desk to have channel dynamics. So it’s unique, and sounds amazing.
What are your thoughts on the sound of SSL consoles versus Neves?
If I were to go back to mixing solely on an analog desk, I’d go for an SSL. My Neve desk doesn’t have recall built into it, which would make it unusable for that. Also, I prefer the SSL for sound sculpting, which is why I use the SSL plugins so much, but even then I’ll run them through the Neve desk to get the extra width and headroom. I also use the Neve for recording, and you can’t really compare the SSL pre-amps to the Neve ones, as the Neve mic pres have so much more headroom.
GTR is another Waves plugin that you helped develop. I use it a lot, and noticed that it has a sharp, present sound that cuts through the mix around 3 kHz.
This is something I had a lot to do with. I’m aware that some people may not like it but I wanted GTR to have the sound I would personally use when recording guitars, which has a lot of presence in the mids and highs. On GTR3, we went back to a more natural sound for the PRS models, so we ended up having both. When I record guitars, I do a lot of processing to the sound in a way that sits well in the mix.
What are some of the most creative ways you’ve used GTR?
I use it on vocals as well as on room mics for drums. The amp distortion in GTR is something you can’t get from a pedal or another plugin. We modeled the amps to be very responsive to different levels. The overdrive and pre-amp distortion are very sophisticated and responsive, and lend themselves really well to drums, unlike the distortion you get from pedals, which is sonically close to digital clipping. So if you want to use GTR on drums or vocals, try taking the cabinets off and you’ll get an interesting result.
How do you feel about the fact that Waves has created a legacy for helping to bring about the digital revolution in music, whilst simultaneously equipping people with tools like the L1 to destroy dynamic range in a mix?
I keep saying to Meir Shaashua, Waves’ co-founder who designed the L1 and L2, that we’re sort of responsible for the loudness war (laughs). But it’s always like that with technology and art. When the technology is available, artists will abuse it, and then the art evolves as a result. In the early 80s, when the Yamaha NS-10s came out, it changed music. The punch you hear in 80s music comes from the fact that a lot of music from that period was mixed on those speakers. Same with the DX7 and RX11. Not to mention the TR-808. The 808 was meant to be a virtual drummer, yet what it turned into was something completely different. So this always happens. The L1 and L2 changed music, for better or worse. When cameras became available, it changed art. Prior to that, painters would sit for hours creating realistic portraits, but then the camera made that obsolete. Now in the modern age, no one wants realistic pictures; people would rather edit them with filters, etc.
What Waves plugins would you recommend to new producers/mixers?
You’d recommend getting both the SSL bundle and the TG12345? But they’re both desk emulations.
The TG12345 may be a channel plugin, but it does something entirely different, sound-wise. With the SSL bundle, you can mix and make things sound modern. With the TG, you can be creative and get more character out of it because of its unique dynamics, drive and EQ sections.
What are your thoughts on the TG12345 versus the REDD?
I prefer the TG, as it’s more versatile. I like the harmonics and the sound you get by running things through the REDD. It has a nice bass and treble boost too. But you can’t compare it to the TG12345 in terms of functionality. The TG has a compressor, three-band EQ, parallel compression, etc.
When mixing, I’ve heard that you don’t use a lot of reverb, and that you turn to other things to create a sense of depth for your sounds. What kinds of “things” are those?
Won’t delays clutter up the mix?
It depends on the style of music that you’re mixing. I may have said that around five years ago, but things have changed since. Back then, things were a lot more in your face, and if you wanted depth, you would use a slap-delay, which creates depth without smearing transients, and avoids taking up too much headroom in the mix. Because delays aren’t diffused, you can run them a lot lower in the mix than reverb. It’s also about headroom. Even if you’re using a reverb which isn’t very present, it’s still going to take up a lot of headroom, since its frequency content is pretty wide. If you use delays on vocals, you can make them 20 dB quieter than reverb, and they still work for added depth. Also, delays give you extra control over the groove of a song.
You mentioned in a MusicTech interview that when you were working with analog tape, you’d sometimes hit tape hard to smear transients. Why do that?
Because it sounds good. Maybe not for EDM, but again, genres are evolving. It’s just easier to mix when you don’t have too many transients. Smeared transients also add to the depth that people talk about in analog gear. Having said that, life is too short to be recording to tape these days (laughs). I’d rather get on with other things, so I use tape emulation to achieve this effect.
You’ve also mentioned in past interviews how each new plugin added to a chain degrades the sound it’s affecting. Can you talk about that more?
I would tend to think that using more than two EQs in series isn’t beneficial for anything, unless you’re automating filters. And why use more than two compressors? I wouldn’t use more than two compressors on a vocal; each one does something specific, and I’d control their relationship. Over-using plugins makes things sound too digital, and the result is that the mix stays in the speakers instead of being in the room. This is why I use analog gear as a reference. With analog, you don’t hear the speaker, but rather the presence of the sound that engulfs you. That’s a very important point of reference for mixing and recording. It’s like sitting in a room with a guitar amp; you don’t hear the amp or the cabinet, you hear the sound of the instrument, and that’s what I’m looking for when I’m mixing. When I don’t get it, and only hear the speakers, then I know something is wrong.
You’ve done a lot of webinars for Waves. Have there been any other ways that you’ve been sharing knowledge of audio and music?
Yes, my book, “Hit Record”, which is also going to see a digital release soon. I did some tours with Avid when Pro Tools 9 and 10 were released, and I occasionally give masterclasses. My webinars have the most exposure though, and the comments on them are very positive, so it appears people have found them beneficial, which is great.
Wrapping up, can you tell me about what the future holds for you?
I’m working on a lot of different things, spanning from mixing to developing software, to writing and producing songs, mastering, making sample libraries, etc. I love doing it all. It keeps things interesting.