Our new highlight for your studio
Digital recording has made the possibilities for sounds and recording techniques almost endless. All of this opens up new sources of error. In the past you had cables and tapes; today you have files, plug-ins and interfaces, and usually not just a handful of them. When the mix suddenly sounds strange, it's not always obvious where the problem lies. We've summarised five classic "invisible" sources of error for you, along with tips on how to do better.
Let's start with a classic: reverberation, also called reverb. A bit of reverb on the guitar track and the voice just sounds great. And that is exactly the problem. Reverb effects are seductive, and they're applied with a single mouse click. Before you know it, reverb ends up on several tracks, until the whole mix finally drowns in a sea of reverb. It's particularly tricky because many musicians only ever listen to individual tracks. Each one sounds good with reverb, so you bring it into the mix and mentally tick it off. In the end, all that lovely reverb smears the mix. So:
Reverb is an effect, not a sound concept!
And if you do like to use reverb: put an equalizer on your reverb effects and mix each reverb separately. The mix will sound tidier. By the way, this applies to all effects and effect plug-ins: remember that all of your individual tracks should eventually become one coherent whole.
Listen to several or all tracks in your mix together from time to time.
More reverb? From time to time we also hear from musicians who want the opposite: to remove unwanted reverb from a recording, whether room ambience, the spring reverb in an amplifier or artificial reverb added afterwards. That works with plug-ins. Two of the best-known are iZotope RX 6 De-Reverb and Zynaptiq Unveil.
iZotope RX 6 De-Reverb is one of the modules in iZotope's RX and RX Advanced software. You can run it as a separate plug-in in real time in your DAW. One benefit of this route is that iZotope's whole suite of tools is quite powerful: you don't just buy the plug-in, you get many other options with it. De-Reverb reads the profile of the reverberation on your audio track and subtracts those characteristics during playback. You find an open reverb tail on the audio track, select a section of it, and tell the plug-in to learn the reverb's profile. De-Reverb then adjusts accordingly, removing the reverberation from the track without affecting the rest of the audio. Note: the plug-in offers detailed settings similar to a compressor's. If the track starts to sound too compressed, dial the settings back a little.
Zynaptiq Unveil uses smart technology to isolate and minimise unwanted reverberation on your tracks. A small bonus: the plug-in can also increase the reverberation. Unveil does not work by phase cancellation but by separating the foreground components of the audio from the background components, using pattern recognition and modelling. What's ingenious about this is that you can also use it to isolate the reverb completely from your audio track and route it to different outputs via panning. Unveil comes with useful presets, so you don't have to be an audio engineer to use it.
Put the microphone as close to the sound source as possible: sounds right, doesn't it? The idea is usually that the closer the mic is to the source, the less noise creeps in. Unfortunately, this often triggers the proximity effect. The result: the track booms, and there is no clean way to remove that boom afterwards.
What is the proximity effect?
The proximity effect is sometimes referred to as the near-field effect. It occurs when the microphone is in the so-called near field of a sound source and picks up the sound. This is how the effect comes about:
The near field of a sound source extends roughly one wavelength around the source. If a microphone sits within the near field, it overemphasises those frequencies. Low sounds have longer wavelengths than high ones, which means a low sound's near field is larger than a high sound's. A microphone is therefore much more likely to sit in the near field of the bass frequencies. In other words, the bass gets louder, mostly in the 100 Hz to 150 Hz range.
Knowing this, you can of course play with the proximity effect deliberately. Often, however, it is unwanted; then you can tame it with a parametric EQ by cutting the boosted low frequencies.
Experienced readers may remember the expression "printing hot": in the days of analogue recording, musicians liked to record ("print") very loudly ("hot"), among other things to stay clearly above the noise floor of the equipment. The habit carried over into digital recording because musicians wanted to compensate for the supposed loss of quality of digital systems. Today this is not only unnecessary, it can be a real problem, because a digital recording tolerates only a limited maximum level. If the signal exceeds that limit, it clips very abruptly, and you will not be able to remove this "clipping" later in the DAW. It only takes a few loud tracks, effects (unlevelled reverb effects!) and plug-ins before the overall level exceeds what the format can handle. The clipping is then clearly audible, especially on compressed formats such as mp3.
Nowadays, the volume can be raised afterwards, and digital recording gear has hardly any background noise, so recording quietly costs you nothing. The fix only works in that direction, though: a clipped recording cannot be repaired. Lowering the level later does not bring back the flattened waveform; the distortion just gets quieter.
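Why turning a clipped track down doesn't help can be sketched in a few lines of plain Python. This is a toy illustration of hard clipping at digital full scale, not a DAW algorithm:

```python
import math

def record(samples, gain):
    """Digital hard clipping: anything beyond full scale (+/-1.0) is flattened."""
    return [max(-1.0, min(1.0, s * gain)) for s in samples]

# A clean sine wave, "printed hot" with twice as much gain as it can take:
sine = [math.sin(2 * math.pi * t / 64) for t in range(64)]
clipped = record(sine, gain=2.0)

# Turning the clipped track down afterwards does NOT restore the waveform;
# the flat-topped peaks (the distortion) just get quieter.
turned_down = [s * 0.5 for s in clipped]
print(max(turned_down))  # 0.5, but the peaks are still flat, not sine-shaped
```

Once the peaks have been flattened to full scale, the original waveform above that limit is gone; no amount of gain reduction brings it back.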
Record with generous headroom and slowly "feel" your way up to higher gain levels.
As the handling of volume and effects shows: more is not always better. The same rule of thumb applies to miking:
More mics, more problems.
If several microphones sit at different distances from a sound source, the sound does not arrive at all of them at exactly the same time. Sound propagates at "only" about 343 m/s, and that difference can be audible.
Multiple microphones also often cause phase problems. Without going into the physics: when the same sound reaches different microphones at slightly different times, certain frequencies cancel each other out when the signals are summed.
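How audible this is can be estimated with simple arithmetic. A short Python sketch (again assuming ~343 m/s for the speed of sound) computes the arrival-time difference between two mics and the first frequency that cancels when their signals are summed, i.e. where the delay equals half a period:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def first_null_hz(distance_difference_m: float) -> float:
    """First frequency (Hz) that cancels when two mics at different
    distances from the source are summed: the delay is half a period."""
    delay_s = distance_difference_m / SPEED_OF_SOUND
    return 1.0 / (2.0 * delay_s)

# Two mics whose distances to the source differ by 34 cm:
delay_ms = 0.34 / SPEED_OF_SOUND * 1000
print(f"delay: {delay_ms:.2f} ms")
print(f"first cancellation around {first_null_hz(0.34):.0f} Hz")
```

A mere 34 cm of path difference already gives about 1 ms of delay and a first cancellation around 500 Hz, right in the middle of the musically important range, with further notches at odd multiples above it. That is the comb-filter sound of badly placed multi-mic setups.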
For home studios, healthy minimalism often yields better results.
Less is more?
Attentive readers may notice that in our article about first recordings in the home studio we say something different, at least for drums: there it pays off to work with several microphones. When recording drums, you should use several mics and experiment a little with the distances. This is simply due to the fundamentally different frequency ranges of cymbals, snare and kick drum.
Inexperienced singers tend to mumble in the studio because they don't open their mouths wide enough when singing. That often goes unnoticed in rehearsals and live settings, but the unusual studio situation intensifies the tendency: in the studio every musician is "naked", i.e. without accompanying instruments, while the sound engineer listens and watches.
Singing the lyrics excessively clearly helps: deliberately articulate every letter and open your mouth wider than usual. It feels odd, but it often improves the vocals noticeably. Especially in the studio, singers should also pick the microphone that suits their voice, because different microphones pick up articulation differently. Condenser microphones, for example, are very sensitive. Many musicians swear by condensers in their home studio because they have heard that they are "the" studio mics, but that is a gross oversimplification.
Photos © The Bland, Christoph Eisenmenger