Understanding Reverb

RA’s Jono Buchanan takes a detailed look at the science and applications of the staple effect.

Reverb (short for reverberation) resides firmly within the A-list of effects, as it’s a go-to treatment for almost every track type within a mix. Whether you’re looking to enhance dry vocals with a specific sense of space, bring drama to searing lead lines, or simply glue your mix together with a treatment shared by a number of sounds, we all know that tracks mixed without reverb can sound dull and lifeless. But how much do you know about reverb and the parameters which tend to crop up within most reverb plug-ins? And to what extent can reverb be enhanced by other plug-ins—EQ, filtering, gating, phasing—to help you create treatments which are completely suited to your tracks, rather than relying on a preset in the hope you’ll get what you need?

This tutorial will focus on all things reverb—helping you better understand the types of reverb plug-in available, the key parameters within reverb plug-ins and some extra steps you can take to help squeeze a great result from the tools you have.

Broadly speaking, two types of reverb plug-in are available—convolution reverbs and artificial ones. Convolution reverbs use “samples” of real spaces to apply reverb to a sound. This is done by recording an impulse response (or IR): a known sound is triggered within a space to excite its natural reverb and the result is recorded. The neat trick which makes this usable is deconvolution: because the trigger signal is known, it can be removed from the recording, leaving only the response of the space itself, which a convolution reverb can then apply to any sound in your mix.
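
To make this concrete, here’s a minimal sketch of the convolution step itself in Python, assuming the soundfile and scipy packages are available; the file names and the 30% wet mix are purely hypothetical.

```python
# Minimal convolution reverb sketch: convolve a dry recording with an impulse
# response, then blend the result back with the dry signal.
import numpy as np
import soundfile as sf                     # assumption: soundfile is installed
from scipy.signal import fftconvolve

dry, sr = sf.read("dry_vocal.wav")         # hypothetical dry recording
ir, ir_sr = sf.read("drum_room_ir.wav")    # hypothetical impulse response
assert sr == ir_sr, "resample so both files share one sample rate"

# Work in mono for simplicity; a real plug-in convolves each channel separately.
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

wet = fftconvolve(dry, ir)                 # the convolution itself
wet /= np.max(np.abs(wet)) + 1e-12         # normalise the tail to avoid clipping

mix = 0.3                                  # 30% wet / 70% dry blend
dry_padded = np.pad(dry, (0, len(wet) - len(dry)))
sf.write("vocal_with_reverb.wav", (1 - mix) * dry_padded + mix * wet, sr)
```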

If you work in a studio with a great-sounding drum room, or you like the sound of your local church hall, for instance, you can make a recording in that space and use the resulting file as an impulse response, bringing those particular natural acoustics to your mix. The sound you choose as your trigger is crucial, as reverb is tone-sensitive. In other words, if you use a hi-hat as your trigger, you’ll only capture the treble aspects of a room’s reverb, while a sub-bass will, conversely, only capture the bottom end. For those taking IR recording seriously, it’s common to find engineers using sine-wave sweeps which glide from low to high pitch over several seconds, ensuring that every part of the frequency spectrum is captured.
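
As an illustration, here’s a hedged sketch of generating such a sweep with SciPy; the sample rate, ten-second length, frequency range and file name are all arbitrary choices.

```python
# Generate a logarithmic sine sweep (20 Hz to 20 kHz) suitable for IR capture.
import numpy as np
import soundfile as sf                     # assumption: soundfile is installed
from scipy.signal import chirp

sr = 48000
duration = 10.0                            # longer sweeps improve signal-to-noise
t = np.linspace(0, duration, int(sr * duration), endpoint=False)
sweep = chirp(t, f0=20, t1=duration, f1=20000, method="logarithmic")

fade = int(0.05 * sr)                      # 50 ms fades to avoid clicks
sweep[:fade] *= np.linspace(0, 1, fade)
sweep[-fade:] *= np.linspace(1, 0, fade)

sf.write("sweep_20Hz_20kHz.wav", 0.5 * sweep, sr)
# Play this through a speaker in the room, record the result, then deconvolve
# the recording against the sweep to recover the room's impulse response.
```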

On paper, convolution reverb sounds perfect; after all, what could be better than the organic character of a real space applied to your mix? It’s certainly true that if the IR stage is taken seriously, and if the room you’re recording sounds right for the projects you’ll apply it to, convolution reverb has a lot going for it. However, those are two big “ifs.” Any convolution reverb is as impaired by its IR recording as it is enhanced by it. Suppose the natural decay of the reverb you’ve recorded is three seconds and you decide your mix needs something nearer to five: you can’t simply stretch the impulse response to increase the decay time without artifacts. Similarly, capturing a space properly requires good microphones, a mobile recording rig and a quiet enough environment to make a clean recording, so, somewhat ironically, the majority of producers using convolution reverbs tend to stick to the preset IRs which ship with the plug-in. However, as we’ll see later, there are some great “abuses” of convolution reverb which can bring real creativity to your mixes.

The alternative, and more common, reverb type is artificial (or algorithmic) reverb. Like sound synthesis, this involves a plug-in constructing a man-made reverb from key parameters which model the variable elements of a space, letting you adjust them to build a treatment which works for your track. Spaces vary in lots of ways—their shapes, their sizes, what they’re made of, how reflective their surfaces are—and all of these things and more combine to produce the tone of a reverb tail. Most plug-ins provide parameters corresponding to these natural characteristics, so if flexibility is what you’re after, artificial reverb can usually provide what you need. Both reverb types can achieve great results, so let’s look at specific parameters to understand how they can be configured to give you the best results within your mixes.
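
As a rough illustration of the principle rather than any particular plug-in’s algorithm, here’s a sketch of a classic Schroeder-style design: parallel feedback comb filters build the tail, and all-pass filters diffuse it. The delay times and gains are arbitrary example values.

```python
# Schroeder-style algorithmic reverb sketch: four feedback combs in parallel,
# followed by two all-pass diffusers.
import numpy as np

def comb(x, delay, feedback):
    """Feedback comb filter: y[n] = x[n - delay] + feedback * y[n - delay]."""
    y, buf = np.zeros(len(x)), np.zeros(delay)
    for n in range(len(x)):
        out = buf[n % delay]               # value written 'delay' samples ago
        y[n] = out
        buf[n % delay] = x[n] + feedback * out
    return y

def allpass(x, delay, gain):
    """Schroeder all-pass: flat frequency response, scrambled phase."""
    y, buf = np.zeros(len(x)), np.zeros(delay)
    for n in range(len(x)):
        delayed = buf[n % delay]
        w = x[n] + gain * delayed
        y[n] = delayed - gain * w
        buf[n % delay] = w
    return y

def schroeder_reverb(x, sr, feedback=0.85):
    comb_delays = [int(sr * t) for t in (0.0297, 0.0371, 0.0411, 0.0437)]
    wet = sum(comb(x, d, feedback) for d in comb_delays)
    for d, g in ((int(sr * 0.005), 0.7), (int(sr * 0.0017), 0.7)):
        wet = allpass(wet, d, g)
    return wet / (np.max(np.abs(wet)) + 1e-12)

# usage (hypothetical mono array): out = 0.7 * dry + 0.3 * schroeder_reverb(dry, 48000)
```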

Reverberation is split into two distinct phases: early reflections and the reverb tail. Early reflections are the reflected signals which reach the listener just milliseconds after the dry signal, perhaps having bounced off only a single surface before arriving at the listening position. As you’d expect, these signals haven’t had time to be significantly altered by the environment they’re playing back in, so early reflections sound much like the dry signal which triggered the reverb in the first place. After a few more milliseconds, though, the early reflections are replaced by true reverb: the reflections of signals that have bounced off multiple surfaces. These are often very different in character, particularly if the overall reverb time is long, and how they change depends on the space being modeled. For instance, if you were constructing a cathedral reverb, the overall time would be long, as cathedrals are vast spaces through which sound can travel up and down, bouncing off walls, pillars, the ceiling and the floor, constantly being reflected back from hard, unabsorbent materials.
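
A quick back-of-the-envelope calculation shows why early reflections arrive within tens of milliseconds of the dry sound; the distances below are invented purely for illustration.

```python
# Arrival times for a direct sound and a single early reflection.
SPEED_OF_SOUND = 343.0                     # metres per second at roughly 20 °C

direct_path = 3.0                          # listener 3 m from the source (hypothetical)
reflected_path = 10.0                      # bounce via a side wall (hypothetical)

direct_ms = 1000 * direct_path / SPEED_OF_SOUND        # ~8.7 ms
reflection_ms = 1000 * reflected_path / SPEED_OF_SOUND # ~29.2 ms
print(f"early reflection arrives {reflection_ms - direct_ms:.1f} ms after the dry sound")
```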

In fact, cathedrals produce such rich reverb tails partly because of the brickwork from which they’re made: these rough, irregular surfaces reflect sound in wild, unpredictable directions. To understand this better, imagine hitting a tennis ball against a smooth wall. The angle at which the ball bounces away mirrors the angle at which you hit it: if you hit the ball at 45 degrees to your left, it bounces away at 45 degrees on the other side. Now imagine the wall you’re hitting against features bricks which randomly stick out from the smooth surface. As you hit the ball, it would be impossible to predict the angle at which it bounces back. It might catch the edge of a brick and shoot off sideways, or it might catch an opposing edge and come straight back to you. Either way, the result is effectively random.

Sound waves travelling through the air behave the same way when they hit a wall; if the surface is uneven, they’ll respond like the tennis ball in the second example, making their reflections unpredictable. You might think that such a degree of randomness would be a disadvantage, but in sonic terms the opposite is often true. To understand why, we need to know a little about natural phasing. Sound waves are so called because sounds are vibrations in the air which can be measured as waves of rising and falling pressure. As you’ll also know, duplicating a track in your DAW and nudging the copy even a few milliseconds out of alignment rarely yields a great result: rather than simply reinforcing the original, you’ll often find that nasty phasing occurs, with some elements of the signal jumping out in volume and others appearing to cancel out altogether.
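
If you want to see those peaks and cancellations in numbers, here’s a small sketch of the comb filter created by mixing a signal with a copy of itself delayed by 1 ms; the delay time and test frequencies are arbitrary.

```python
# Comb filtering from a 1 ms delayed duplicate: nulls at odd multiples of 500 Hz.
import numpy as np
from scipy.signal import freqz

sr = 48000
delay = int(0.001 * sr)                    # 1 ms offset between the two "tracks"
b = np.zeros(delay + 1)
b[0], b[-1] = 1.0, 1.0                     # y[n] = x[n] + x[n - delay]

freqs, response = freqz(b, worN=8192, fs=sr)
for target in (500, 1000, 1500):
    idx = np.argmin(np.abs(freqs - target))
    print(f"{target:>5} Hz: {20 * np.log10(np.abs(response[idx]) + 1e-12):6.1f} dB")
# 500 Hz and 1500 Hz fall into deep notches, while 1000 Hz is boosted by about 6 dB.
```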

This is also true for reverb. If you work in a perfectly square room with equally reflective materials on every side, a sound will bounce off the surfaces on either side at the same time and return to the listening position from left and right simultaneously. This causes the sonic equivalent of a doubled file, with some frequency content seeming to get louder while other parts cancel out altogether. As you’d expect, this produces a thoroughly undesirable reverb, and studios built to this shape go to great lengths to treat their walls, floors and ceilings to keep such reflections to a minimum.

Compare a space like this to one built like the uneven wall in our tennis-ball example and you can imagine the benefits of the latter: those problem reflections are minimized by the unpredictable way sound bounces off its surfaces. This is why, in larger music venues, you tend to find the walls and/or ceiling treated with physical objects which purposely break up sound waves to ensure a more random result. Just look at the ceiling of London’s famous Royal Albert Hall, for instance. Those flying saucers aren’t just decoration; they disperse sound all over the hall to ensure a higher quality of listening.

Artificial reverbs provide a range of parameters which can be adjusted to tailor settings to your track, with sliders or dials to move between the extremes of each setting. As you’d expect, different plug-ins provide different controls, but some parameters are common to most reverb plug-ins, as they control the most typical characteristics of a real space.

Taking a plug-in like Logic’s GoldVerb as an example, you can see the parameters for Early Reflections and Reverb across the left-hand side. Firstly, there’s a “Pre-delay” slider which sets the time between the dry sound arriving at the plug-in and the first reflections being heard. Set this too low and the dry source and the early reflections can merge and conflict; set it too high and there’ll be an obvious gap between the source signal and the reverb starting. The room shape slider below allows you to model the shape of the space you’re working in, varying the number of reflective surfaces in particular. Finally, you can set the size of the virtual space, so you can tailor a treatment broad enough for a whole mix or a wide stereo source such as a piano, or restrict it for smaller mono sources.

As the sound moves into the full reverb phase, you can control how long it takes before the reverb tail begins via the Initial Delay slider, then set the stereo width of this portion of the reverb via the Spread slider. Remember, sound waves lose energy over time, and more is absorbed each time the signal bounces into a wall. As bass waves have longer wavelengths, they’re more resistant to this absorption, so as reverbs tail away it tends to be high-frequency content which gets sucked away first, leaving proportionally more bass in the tail the longer it goes on.

The High Cut dial allows you to mimic this, filtering out high frequency content, or indeed, unnaturally preserving it if you so desire. The Density dial, as its name suggests, controls the density or diffusion of the reverb. This spreads the reverb tails out, creating more natural results the higher the dial is set, though if you’re after grainier, more unusual reverb tails, lower settings here might appeal.
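
As an illustration, a high-cut control of this kind can be thought of as a gentle low-pass filter applied to the wet signal only; the cutoff frequency and filter order below are arbitrary example values, not what GoldVerb actually uses.

```python
# First-order low-pass on the wet signal, mimicking air and wall absorption.
from scipy.signal import butter, lfilter

def high_cut(wet, sr, cutoff_hz=4000):
    b, a = butter(1, cutoff_hz, btype="low", fs=sr)
    return lfilter(b, a, wet)

# usage (hypothetical array): damped_tail = high_cut(reverb_tail, 48000)
```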

Lastly, the all-important Reverb Time dial allows you to control how long it takes for the reverb to decay altogether. If you want a small “drum room” type setting, a second or less will be desirable here. If you’re modeling a church hall, you can experiment with much longer settings. At the top, a slider lets you set the balance between Early Reflections and Reverb, while the mix slider on the right sets the balance between input (dry) and output (wet) signals. If you’re using a reverb like this in-channel, this dial will prove crucial. If it’s set up on an auxiliary, the mix should be set to 100% wet, with the auxiliary send level controlling the dry and wet balance.
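
The two routing options boil down to where the blending happens, as the sketch below illustrates with hypothetical dry and wet arrays of equal length.

```python
# Insert vs auxiliary routing for a reverb, assuming dry and wet are
# equal-length NumPy arrays.

def insert_mix(dry, wet, mix=0.25):
    """Channel insert: the plug-in's own mix dial blends dry and wet."""
    return (1 - mix) * dry + mix * wet

def aux_return(dry, wet, send_level=0.25):
    """Aux send: the reverb runs 100% wet; the send level sets the blend."""
    return dry + send_level * wet
```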

GoldVerb is a typical native DAW plug-in, allowing for some configuration without offering every possible reverb parameter. If you want more in-depth results, a plug-in like Sonnox’s Oxford Reverb lets you go much further, tweaking a wider selection of parameters relating to the same two stages. The Early Reflections section lets you choose a room shape, then place the sound source either at one end of that virtual space or somewhere between front and back. After that, as well as being able to change the room size and the stereo width of the source you’re treating, you can control Taper and Feed Along. Taper balances the volume of reflections relative to how far they’ve travelled, so you can, for instance, unnaturally boost far-travelling signals which would otherwise arrive at a lower level, while Feed Along allows you to inject a “re-amplification” process into the early reflection stage, redistributing echoes to increase their density.
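
As a generic illustration of the idea behind a taper control (this is not Sonnox’s actual algorithm), you can think of it as scaling each early reflection according to how far it has travelled.

```python
# Hypothetical taper: gain per reflection as a function of path length.
import numpy as np

def reflection_gains(path_lengths_m, taper=1.0):
    """taper=1.0 approximates natural 1/distance decay; taper=0.0 makes all
    reflections equally loud; values above 1.0 exaggerate the fall-off."""
    distances = np.asarray(path_lengths_m, dtype=float)
    return (distances.min() / distances) ** taper

print(reflection_gains([3.0, 7.5, 12.0], taper=1.0))   # natural-ish decay
print(reflection_gains([3.0, 7.5, 12.0], taper=0.0))   # flattened: all equal
```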

Feedback controls let you decide to what extent the early reflections are re-circulated around the room but, as discussed earlier, this can lead to unpleasant phasing issues, depending on the shape of the virtual space you’re creating. Accordingly, a Phase Selector allows you to offset any problems if they build up. The Reverb Tail section is also awash with options, including Reverb Time and the Overall Size of the reverb. Again, the Dispersion of reflected signals is variable via its own slider, and phasing issues can be addressed here too if necessary. The Absorption slider effectively lets you change the materials from which your virtual space is built, by controlling how much signal is swallowed by the environment, while Diversity varies the reverb’s character, creating either a narrow, focused result or a more varied one across a wider stereo picture.

As you’d expect, all of this means a wider range of reverb treatments can be constructed from a plug-in like this, but in terms of finding a solution which fits your track, where do you start? As with synthesis, the best results come from experimenting with parameters and a keen pair of ears, though there are some rules which will help you get a better result more quickly. Firstly, remember that reverbs work best when they enhance a mix rather than becoming the focal point of its whole sound. If you’re listening to music live, you know the sound at the very back of the venue isn’t going to be great; that’s because you’re receiving the greatest proportion of wet, reverberant signal relative to the dry sound coming from the stage. In other words, the further away you are from the source, the less focused and dynamic it sounds.

As adding reverb adds space and distance, be careful that you don’t add too much. Secondly, think about whether you want the shorter, truer reverb offered by early reflections or a longer, more detailed one using the reverb tail. Even in electronic dance music, it’s possible to simulate the energy of a drum room with ambience effects which rely heavily on early reflections, while kick drums and bass sounds rarely respond well to long, swamped reverb treatments, as they quickly become overwhelming in the mix. Think too about density levels: do you want a rich reverb where reflected waves are spread randomly through a virtual space, or a tighter, grittier one where signals reflect more evenly?

Of course, the beauty of the DAW environment means that the reverb you set up can be further enhanced by other effects too. Perhaps the most famous example is gated reverb, where the natural decay of a reverb tail is snapped shut as it drops below a gate’s threshold level. Rather than letting a rich, full tail ring out, the gate imposes a sudden, unnatural volume cut-off. This works particularly well on sharp percussive sounds like drums or synth stabs, though of course you’re free to try it on any sound you like. Similarly, phased, flanged or filtered reverb treatments can also work wonderfully well. Perhaps most usefully, though, lots of reverb plug-ins feature their own EQ sections, allowing you to control the tone of the reverb from bass to treble.
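
A gated reverb of the kind described above can be sketched as an envelope follower driving a hard cut on the wet signal; the threshold and timing values below are arbitrary starting points.

```python
# Gated reverb sketch: mute the wet signal once its envelope falls below a threshold.
import numpy as np

def gate_reverb(wet, sr, threshold=0.1, release_ms=5.0):
    envelope = np.abs(wet)
    # Smooth the envelope with a short moving average so the gate doesn't chatter.
    win = max(1, int(sr * 0.01))
    envelope = np.convolve(envelope, np.ones(win) / win, mode="same")
    gate = (envelope > threshold).astype(float)
    # A very short fade keeps the cut-off abrupt but click-free.
    fade = max(1, int(sr * release_ms / 1000))
    gate = np.convolve(gate, np.ones(fade) / fade, mode="same")
    return wet * gate

# usage (hypothetical array): gated = gate_reverb(snare_reverb_tail, 48000, threshold=0.05)
```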

Suppose you’ve created a reverb for a vocal part which sounds great but is causing sibilant spikes, creating clusters of over-bright reverb every time an “ess” sound is sung. By switching on the reverb’s EQ, you can tame these, tailoring the tone of that frequency area to the demands of your track. Similarly, if you’re applying the same reverb to lots of sounds within your mix and the bass end suddenly sounds too dry and disconnected, you might be tempted to send a little signal from your bass channel into the reverb, only to discover the space becomes overwhelmed and muddy. Using EQ to back down the level of bass in the reverb can help you strike a balance between dry and wet which works more successfully. For special effects, why not try stacking multiple reverbs in a chain? As always, a keen sense of sonic adventure and experimentation will help you achieve the best results.
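
As a final illustration, one simple way to apply the bass-reduction idea above is to low-cut the signal feeding the reverb send rather than the source channel itself; the 200 Hz cutoff is an arbitrary example value.

```python
# High-pass the reverb send so kick and bass stay dry and defined.
from scipy.signal import butter, sosfilt

def low_cut_send(send_signal, sr, cutoff_hz=200):
    sos = butter(2, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, send_signal)

# usage (hypothetical array): reverb_input = low_cut_send(bass_channel_send, 48000)
```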

Published / Thursday, 23 February 2012
