Having made his way into my interview section, Trifonic continues his relationship with Speakhertz by talking to me about his music-making and production. As an artist whose brand of electronic music falls outside the mainstream festival favorites, he offers a sea of unconventional production advice that you'll be able to apply to your own musical endeavors.
Guitar sounds seem to be one of your trademark sonic signatures. What are some of your main ways of recording your acoustic/electric guitars into your DAW? Any tips on creative ways to capture certain sounds?
I usually record acoustic guitar with a condenser mic, placed several inches away from the 12th fret. This allows me to pick up some of the string and finger noise (which I like), but get rid of the boomy low end that can occur when you stick a mic directly in front of the sound hole. This is how I recorded the acoustic guitars in “Ninth Wave” and more recently the guitars in The M Machine “Black” remix.
For electric guitars, I mainly record direct into a DI and use amp simulator plugins like Native Instruments Guitar Rig 5. Recently, I bought a Fractal Audio AXE FX II guitar processor, and I now record all of my electric guitar sounds through it. I occasionally use some real amps – I have a Fender Hot Rod Deluxe and a Mesa Boogie Dual Rectifier. They sound great, but the flexibility of Guitar Rig 5 or the AXE FX II makes them so much easier to deal with.
Can you tell me some of your guitar processing techniques, once they’re in audio format? Do you run them through any FX chains?
A large percentage of the sounds in Trifonic tracks are guitars! Guitars are fun to process because they always maintain some level of “guitar-ness” no matter how much you mangle them.
One thing I like to do is record an acoustic guitar part, chop up each individual note, and reverse them. This keeps the sequence of notes in the order in which they were played, but each individual note is backwards. Probably the best example of this is in “Parks On Fire”. The reverse acoustic guitars come in about two thirds of the way through the track. They fade in out of nowhere and start to transition the song towards the peak. It’s a special moment and the reverse guitars really help the transition.
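The note-by-note reversal described here can be sketched in a few lines of numpy. This is a hypothetical illustration rather than Trifonic's actual workflow (he presumably does this by hand in the DAW), and it assumes the note onset positions are already known, e.g. from slicing at transients:

```python
import numpy as np

def reverse_each_note(audio: np.ndarray, onsets: list) -> np.ndarray:
    """Reverse every note segment while keeping the notes in the
    order they were played. `onsets` holds the sample index where
    each note begins."""
    out = audio.copy()
    bounds = list(onsets) + [len(audio)]
    for start, end in zip(bounds[:-1], bounds[1:]):
        out[start:end] = audio[start:end][::-1]  # flip this note only
    return out

# Toy example: three 4-sample "notes" in one buffer.
signal = np.arange(12, dtype=float)
flipped = reverse_each_note(signal, onsets=[0, 4, 8])
# note order preserved, each note reversed:
# [3, 2, 1, 0,  7, 6, 5, 4,  11, 10, 9, 8]
```

Because only the samples inside each segment are flipped, the melody still reads forwards while every attack becomes a swell.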
I also like to spectral process guitars with the Michael Norris SoundMagic Spectral plugins. They’re great for creating lush guitar textures.
Other than that, I do simple things like chopping up and rearranging guitar slices.
For those who can’t play guitar, but want to incorporate such sounds into their music, how much progress do you think they could make with guitar VSTs or synths? Would you recommend any?
I don’t know. I think most guitar VSTs or synths sound terribly fake. It’s all about context, though. As a layer, I’m sure they can work fine, but as a solo or featured instrument – no way.
A lot of non-guitarists seem to be satisfied with fake guitars and I’m sure most listeners don’t care and can’t tell the difference. That being said, guitarists can hear the difference and it makes us cringe!
Given the ambient nature of much of your music, what kind of effort goes into creating spaces with reverb and delays? Do you have any tips/tricks for that, or is it as simple as busing sounds to a hall or plate?
I love reverbs and delays! I abuse them and overuse them, and it’s a problem. I can’t really take my own advice that I’m about to give – but I should. Less is more. Use less reverb on all of your sounds in general and strategically apply reverbs to create space and depth. Too much reverb on all of your tracks makes everything sound washy and two-dimensional.
On a more practical level, EQ’ing reverb returns is usually quite helpful and important. Sending a bunch of tracks to a reverb can get pretty muddy. Usually I do a bit of EQ’ing on the reverb return to get rid of bad resonances or mud.
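As a rough sketch of what cleaning up a return can look like, here is a one-pole high-pass filter rolling low-end build-up off a wet signal. The 250 Hz cutoff is an invented illustrative value; in a real session this would just be the DAW's EQ on the return channel:

```python
import numpy as np

def highpass_return(wet: np.ndarray, sr: int, cutoff_hz: float = 250.0) -> np.ndarray:
    """One-pole high-pass on a reverb return: rolls off the low-end
    mud that piles up when many tracks feed one reverb."""
    rc = 1.0 / (2 * np.pi * cutoff_hz)
    dt = 1.0 / sr
    alpha = rc / (rc + dt)
    y = np.zeros_like(wet)
    for i in range(1, len(wet)):
        y[i] = alpha * (y[i - 1] + wet[i] - wet[i - 1])
    return y

sr = 8000
rumble = np.ones(500)                  # DC stands in for low-end mud
cleaned = highpass_return(rumble, sr)  # the sustained low energy is removed
```

The same idea applies to notching out a ringing resonance; only the filter shape changes.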
I also love to create pads with reverbs. On the remix of “Black”, I created the pads in the intro and throughout by inserting a long 100% wet reverb on a guitar part. I bounced the part and loaded it into a sampler so that I could control the pitch and release time. I love making reverb pads.
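A minimal sketch of the 100%-wet reverb-pad idea: convolve the source with a synthetic, exponentially decaying noise impulse response and keep only the wet signal. The IR length and decay are made-up values; the actual remix used a plugin reverb, and the bounce/sampler steps happen in the DAW:

```python
import numpy as np

def wet_reverb_pad(dry: np.ndarray, sr: int, decay_s: float = 4.0,
                   seed: int = 0) -> np.ndarray:
    """100%-wet convolution reverb: exponentially decaying noise IR.
    Returns only the reverb (no dry signal) -- the raw pad material
    you would bounce and load into a sampler."""
    rng = np.random.default_rng(seed)
    n = int(sr * decay_s)
    t = np.arange(n) / sr
    ir = rng.standard_normal(n) * np.exp(-3.0 * t / decay_s)
    ir /= np.sqrt(np.sum(ir ** 2))      # normalize IR energy
    return np.convolve(dry, ir)         # fully wet output

sr = 8000
note = np.sin(2 * np.pi * 220 * np.arange(sr // 4) / sr)  # short plucked tone
pad = wet_reverb_pad(note, sr)          # rings out ~4 s past the note
```

Loading the bounce into a sampler is what adds the pitch and release-time control he mentions; the convolution only produces the sustained texture.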
Last, but not least, short delays and unsynced delays are the best ways to create a sense of space and width. Reverb often takes up too much space; I can often get away with more wetness with delays.
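The short-unsynced-delay trick is essentially the Haas effect: one channel gets a copy delayed by a few tens of milliseconds, which the ear hears as width rather than an echo. A hypothetical sketch (the 15 ms delay and 50% wet level are illustrative, not his settings):

```python
import numpy as np

def haas_width(mono: np.ndarray, sr: int, delay_ms: float = 15.0,
               wet: float = 0.5) -> np.ndarray:
    """Widen a mono part with one short, unsynced delay: the left
    channel stays dry, the right gets a delayed copy mixed in.
    Delays under ~30 ms read as width rather than as an echo."""
    d = int(sr * delay_ms / 1000.0)
    left = np.concatenate([mono, np.zeros(d)])
    delayed = np.concatenate([np.zeros(d), mono])
    right = left + wet * delayed
    return np.stack([left, right])       # shape (2, len(mono) + d)

sr = 8000
stereo = haas_width(np.ones(100), sr)
```

Because the delay is short and not tempo-synced, it never registers as a rhythmic repeat, so it adds space without eating arrangement room the way a long reverb tail does.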
On tracks like “Ninth Wave” there’s a lot of hall reverbs, auto-pan movement, and ambient textures going on. How do you avoid having things get muddy in the mix, with all this space and panning?
A lot of tedious automation and work! If left unchecked, that track would be a train wreck. I solved the space/clutter issues two measures at a time. Once I had the track arranged and laid out, I looped two measures and listened very carefully for any sonic clashes or sounds masking other sounds. If there was a problem, I would try muting unnecessary parts or automating the levels to make everything sit properly. I combed through the entire piece several times, listening to every detail.
I make music that has a lot of sonic density. To make it all listenable, I have to comb through the details many times. Sometimes I listen with my LCD monitor off (looking at the session can be distracting for critical listening), and take notes while listening to the mix. I address each note and listen to it again and take another set of notes. I do this process several times.
The bottom line is that making tracks with a lot of density and clutter is always going to be a nightmare to mix. I wish I were a minimalist – it would be so much easier to mix! I like to torture myself, though, and that’s why I make densely textured/layered music.
You seem to have no problem with both stacking sounds on top of each other, and creating very glitchy audio snippets that sit very close together in the arrangement view of your DAW. How do you avoid having the different melodic ideas become too confusing and overcrowded?
I trust my ears. If something sounds confusing, it is confusing and I need to fix it. With a track as dense as “Ninth Wave” or “Baalbek” or even an older piece like “Parks On Fire,” there are tons of layers of sound. In order to make it all coherent and not confusing to the listener, I try to create a clear focal point at every moment.
With a traditional song, the focal point is easy to figure out. The singer is usually the focal point. With instrumental or electronic music the focal point can be any number of sounds and it can change from moment to moment. As a producer you have to be clear on what the focal point is at every moment. Once you decide on a focal point, you have to push everything else out of the way by:
1. Modifying your arrangement (don’t have too many overlapping parts that draw attention away from each other)
2. EQ – carve out space + get rid of bad resonances.
3. Volume/pan automation – bring down the levels of competing parts or pan them so that they don’t live in the same stereo space.
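Point 3 above, moving competing parts out of the same stereo space, usually rides on a constant-power pan law so a part keeps its perceived loudness as it moves. A generic sketch, not tied to any particular DAW:

```python
import numpy as np

def equal_power_pan(mono: np.ndarray, pan: float) -> np.ndarray:
    """Equal-power pan law: pan in [-1 (hard left), +1 (hard right)].
    cos^2 + sin^2 = 1, so total power is constant at every position
    and parts can be separated without changing their level."""
    theta = (pan + 1.0) * np.pi / 4.0    # map pan to 0 .. pi/2
    return np.stack([np.cos(theta) * mono, np.sin(theta) * mono])

sig = np.ones(4)
center = equal_power_pan(sig, 0.0)      # ~0.707 in each channel
hard_left = equal_power_pan(sig, -1.0)  # all energy in the left channel
```

Automating `pan` over time is the code-level analogue of drawing pan automation to keep two competing parts from living in the same stereo spot.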
In “Baalbek”, I have melodic phrases that start on one instrument or sound, and end on another. As a listener your brain seamlessly makes the connections and ties together the phrase even though it is occurring across many instruments. In that particular instance, the focal point is changing, but I try to make sure it is clear and easy to follow for the listener.
You’ve talked a lot about resampling your audio. What are some examples of creative ways that you’ve reprocessed audio sounds?
I really enjoy iteratively processing audio. I think my favorite reprocessing technique is to add a small amount of distortion and automate interesting filter movement with an EQ. I render the sound and then do another round of processing. I listen for harsh frequencies and the unique characteristics of the sound. I try to eliminate harsh frequencies and accentuate the unique aspects. Each iteration of processing transforms the sound slightly – it’s usually not radical changes. The deeper I go with it, the more the processed sound strays from the original. I also love to reprocess sounds with extreme time-stretching. Logic’s flex-time is great for doing that.
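That iterative loop can be sketched as a touch of tanh saturation followed by a crude one-pole filter standing in for the automated EQ, with the result "rendered" and fed back in. The drive and filter values here are invented for illustration, not his settings:

```python
import numpy as np

def soft_clip(x: np.ndarray, drive: float = 1.5) -> np.ndarray:
    """A small amount of saturation; output stays within [-1, 1]."""
    return np.tanh(drive * x) / np.tanh(drive)

def one_pole_lowpass(x: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Crude filter standing in for an automated EQ sweep."""
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc += alpha * (s - acc)
        y[i] = acc
    return y

def iterate_process(x: np.ndarray, rounds: int = 4) -> np.ndarray:
    """Each pass distorts, filters, and 'renders'; every iteration
    nudges the sound a little further from the original."""
    for _ in range(rounds):
        x = one_pole_lowpass(soft_clip(x))
    return x

tone = np.sin(2 * np.pi * np.arange(256) / 16.0)
mangled = iterate_process(tone)
```

The key property is that each round is mild on its own; the character comes from the accumulation across renders, which is exactly why he listens and re-decides between passes.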
Given the ambient nature of your music, can you tell us what role mid/side EQing plays in your mixes? Do you find yourself needing it?
I don’t do too much mid/side processing. Generally I’ll make sure the sides don’t have too much low frequency energy, but more often than not I don’t do any mid/side processing. People talk about mid/side processing like it’s magic. It’s certainly a powerful way to correct problems and you can do some creative things with it, but it isn’t magic and it’s often not necessary. You don’t need to do mid/side processing to get a huge stereo image and sense of ambience.
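For readers unfamiliar with it, mid/side is just a sum/difference matrix: an M/S EQ decodes the stereo signal, filters the mid and side channels separately, and re-encodes. A minimal sketch of the matrix and the one move mentioned above, cutting energy from the sides:

```python
import numpy as np

def ms_encode(left: np.ndarray, right: np.ndarray):
    """mid = what the channels share, side = how they differ."""
    return (left + right) / 2.0, (left - right) / 2.0

def ms_decode(mid: np.ndarray, side: np.ndarray):
    """Inverse matrix; encode -> decode is lossless."""
    return mid + side, mid - side

left = np.array([1.0, 0.5, -0.25])
right = np.array([0.0, 0.5, 0.75])
mid, side = ms_encode(left, right)
l2, r2 = ms_decode(mid, side)            # perfect round trip
lm, rm = ms_decode(mid, side * 0.0)      # sides removed: fully mono
```

In practice you would high-pass only the side channel rather than zero it, so the lows collapse to mono while the highs keep their width; scaling `side` by zero is just the limiting case.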
That being said, I do like it when an EQ lets you choose between stereo and mid/side processing. It gives you more corrective power.
Do you have any buss compression techniques that you’ve found useful?
I don’t typically use buss compression. If you’re going to use it, you want a low compression ratio (less than 2:1) – less is more unless you want to hear the compression artifacts.
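To illustrate what a sub-2:1 ratio actually does to levels, here is the static gain computer at the heart of a compressor. Real bus compressors add attack/release smoothing; this sketch applies gain reduction sample by sample, and the threshold and ratio are made-up example values:

```python
import numpy as np

def bus_compress(x: np.ndarray, threshold_db: float = -12.0,
                 ratio: float = 1.5) -> np.ndarray:
    """Gentle static compression: any level over the threshold is
    scaled back so the overshoot is divided by the ratio. At 1.5:1,
    a signal 12 dB over the threshold is pulled down by only 4 dB."""
    eps = 1e-12                           # avoid log(0)
    level_db = 20.0 * np.log10(np.abs(x) + eps)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return x * 10.0 ** (gain_db / 20.0)

peaks = np.array([1.0, 0.1])              # 0 dBFS and -20 dBFS samples
squeezed = bus_compress(peaks)            # loud sample tamed, quiet one untouched
```

At low ratios the gain curve stays close to unity, which is why the glue is audible before the artifacts are; pushing the ratio up makes the reduction, and the pumping, much more obvious.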
Do you have any favorite sample CDs that you can tell us about? Why these in particular, and what do you use them for?
I love the old Shawn Lee: Planet Of The Breaks – it sounds like vinyl hip hop drums, but it’s all recorded and processed drum sets. I treat them like I would the classic breaks, chop them up and reprocess them – but unlike the classic soul/funk break beats, you don’t have to worry about getting sued!
What are your thoughts on master buss processing? Do you work with any plugin on your master?
I don’t do much buss processing. I try to get everything the way I want it on each channel and group. I have a lot of friends and peers who mix into their mastering chain, which in some ways makes a lot of sense. Having 10 dB of gain reduction is really going to change your mix and how your drums sound, and heavy limiting can easily distort the low frequencies. If you mix into the mastering chain, you adjust your mix naturally to the limiting. That makes sense, especially in the loudness-war world that we live in.
I choose not to work that way, and I don’t really care that much about loudness. Bottom line, I just want the music to have a vibe and emotion. Vibe trumps perfection every time. I sometimes do several different versions of a mix and choose the mix with the best vibe – not necessarily the version that is cleanest or most hi-fi.
People remember the emotion and the feeling of music, they don’t care about boring music that has a perfect mix.
True! Well, thanks again Trifonic for being featured on The Frontliner. I can’t wait to see what people think of your in-depth production tips. I’ll also encourage people to follow you on Facebook, Twitter, and SoundCloud.