ReaMix: Breaking the Barriers with REAPER


The best approach is that which works for you. Some sound engineers insist that you should always mix only with your ears, and ignore all other sensory input. Some people like to sketch out a virtual sound stage before they begin their mixing. It gives them a starting point when it comes to positioning the different instruments in the mix. Others see no point in it, and prefer to simply play it by ear. Try sketching out an overhead view of the stage layout that you are aiming to create with your mix.

Consider the two illustrations below. The first is for a song that features a lead vocalist, two backing vocalists, a rhythm guitar, banjo, acoustic bass and drum kit. We might have it in mind to create a sound stage like that shown below (as viewed from above), where the front of the stage is at the bottom of the diagram.

Suppose this song includes a break in which we wish to feature the banjo. This leaves us with what might be a rather thin mix. We might decide that, whilst bringing the banjo right up front and centre stage, we wish not only to push the rhythm guitar further back, but also somehow to spread it out so that it appears to fill most of the space behind the featured instrument.

In this chapter of the book, you will be shown how to create effects like this. The basic idea is that by splitting a track into several channels and applying different FX to each channel before joining them up again, we can make some pretty impressive sounds. Shown on the right is one of the mixing plug-ins that we use quite extensively in some of our examples. The beauty of mixing tools like this one is that you have at your fingertips a very easy method of putting your track together.

Each of the channels (in the example shown, there are eight channels) has its own completely independent volume and pan controls. As you work through these examples, you will be very pleasantly surprised, if not astounded, the first time that you discover just how much creative control this puts at your fingertips. In some examples, we will be using special Channel Splitter plug-ins to do this. These are relatively straightforward. However, in many cases these channel splitter plug-ins are not capable of giving us the results that we want.

This is when we have to use a different, more complicated method, splitting our tracks into channels in a way that may not be immediately obvious or intuitive. Suppose we were to split a Vocal Track into two pairs of channels. We could then, for example, apply separate EQ to each pair of channels (perhaps making one warmer and the other more present) and then use the Channel Mixer to pan them differently before joining them up to create a more interesting and varied vocal effect.

The diagram on the right illustrates how this might be done. The same original vocal track is passed into two separate instances of ReaEQ. Then, as the diagram shows, the output of each EQ instance is fed through a different, separate pair of channels before being blended together again by a channel mixer. How then is this done? The answer comes in two parts. First, we define the number of channels for the track: the default is two, but this can be increased as required. In the hypothetical example that we are considering here, a total of four channels is needed. In the example shown here, 4 Track Channels have been defined. In most cases, the default settings for both input and output are Channels 1 and 2.

However, you can change this as you wish. So, to return to the example in question: the default input and output settings are just right for the first of our ReaEQ instances — Channels 1 and 2 in, Channels 1 and 2 out. However, in the second instance we will still need to bring in the signal through Channels 1 and 2, but we want to send it out through Channels 3 and 4 — and only Channels 3 and 4.

We therefore require in this example a second instance of ReaEQ. For this second instance, you would change the settings on the Plug-in pin connector interface as shown here on the right. If you find this confusing at first, don't worry: most probably, the whole channel splitting concept simply did not exist with your previous DAW software (at least, not for audio). Be prepared to persevere. In time you will get used to it, and you will be surprised at how easy it becomes.
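If you prefer to set this kind of routing up programmatically, the same steps can be scripted. The sketch below is a minimal ReaScript (Python) illustration of the hypothetical four-channel example above; the track index is a placeholder, and the calls reflect REAPER's API as best understood here, so treat it as a starting point rather than a recipe.

```python
# Run inside REAPER as a ReaScript (Python); the RPR_ functions are
# provided by REAPER itself. Track index 0 is a placeholder.

tr = RPR_GetTrack(0, 0)                        # first track in the project
RPR_SetMediaTrackInfo_Value(tr, "I_NCHAN", 4)  # give the track four channels

# Two ReaEQ instances; instantiate = -1 always adds a new instance.
fx1 = RPR_TrackFX_AddByName(tr, "ReaEQ", False, -1)
fx2 = RPR_TrackFX_AddByName(tr, "ReaEQ", False, -1)

# The first instance keeps its defaults: Channels 1/2 in, 1/2 out.
# For the second, route the output pins to Channels 3 and 4 only.
# Pin mappings are bitmasks: bit 0 = channel 1, bit 1 = channel 2, etc.
RPR_TrackFX_SetPinMappings(tr, fx2, 1, 0, 1 << 2, 0)  # left out  -> channel 3
RPR_TrackFX_SetPinMappings(tr, fx2, 1, 1, 1 << 3, 0)  # right out -> channel 4

# Note: an FX chain runs in series, and channels that an instance's pins do
# not write to pass through it unchanged. If each EQ should see the dry
# signal, place the instance that outputs to 3/4 first in the chain.
```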

Bookend panning is especially useful when you want to create a spatial relationship between two instruments, but there arises a problem of one always tending to drown the other one out. This might be the case with our banjo and our mandolin. We may need, for various reasons, to place these two instruments close together in the panning spectrum. A problem may arise, however, because of these two instruments the mandolin is by far the more present. By this we mean that it resonates at those frequencies up above 1,000 Hz or so, where the banjo just does not go. Put quite simply, we position our instruments in such a way that the weaker instrument is able to wrap itself around (or bookend) the stronger instrument, thus preventing the stronger one from dominating the mix. The illustration on the right demonstrates this concept.

In this case, we have decided to pan our guitar to the left, our banjo more or less towards the centre, and our mandolin to the right. This might be the case, for example, if the banjo was the main rhythm instrument for this particular tune. Notice how by the use of bookend panning we have been able to contain the otherwise over-dominant strains of the mandolin.

Exercise: In this next example, we will use an instrumental recording which includes a banjo, a mandolin, a bass guitar, a rhythm acoustic guitar and a lead acoustic guitar. This might make it easier to understand. You can change these settings later if you wish. Click and drag the tracks to change their order, so that left to right the tracks line up as shown right. Solo the Guitar Track (now Track 4) and play the tune. Insert an instance of ReaEQ into this track. You should find that by taking off about 4 dB at the lower end and adding about the same at around 7,000 Hz you make the sound a little brighter.

The next few steps can appear strange if you have never done this before. Set the number of Channels to 6. Add the two sends shown on the right. Solo the track and play. Adjust the levels of the three volume faders to suit. Unsolo the track. You should notice that the rhythm guitar sound is full and bright, yet allows the other instruments to cut through very clearly.
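For the curious, the channel and send setup in this exercise can also be expressed as a short ReaScript (Python) sketch. The track indices and channel pairs below are assumptions made for illustration; match them to your own project layout.

```python
# Run inside REAPER as a ReaScript (Python). Indices are zero-based, so
# Track 4 is index 3; the destination track here is assumed to be Track 5.

guitar = RPR_GetTrack(0, 3)
dest = RPR_GetTrack(0, 4)

RPR_SetMediaTrackInfo_Value(guitar, "I_NCHAN", 6)  # six channels, as above
RPR_SetMediaTrackInfo_Value(dest, "I_NCHAN", 6)

# Two sends whose source channels match their destination channels, so that
# the sends on Track 4 correspond exactly to the receives on Track 5.
for chan in (2, 4):                   # channel-pair offsets: 3/4 and 5/6
    idx = RPR_CreateTrackSend(guitar, dest)
    RPR_SetTrackSendInfo_Value(guitar, 0, idx, "I_SRCCHAN", chan)
    RPR_SetTrackSendInfo_Value(guitar, 0, idx, "I_DSTCHAN", chan)
```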

If you wish to hear the rhythm guitar track by itself, hold down the Alt key while you Solo this track. Save this file. In this example, the Sends on the first track (Track 4) correspond exactly to the Receives on the second track (Track 5). This means that the volume of each receive that is finally used in the track mix can also be controlled using the 3 Band Joiner. Notice how the second illustration makes greater use of the available space. Please note that this is an example designed to illustrate the concept and implementation of bookending.

It is not intended to serve as an example of a complete mix. Have you ever seen a live performance where every single member of the band remains motionless throughout the entire gig? Then why mix as if they do? Rather than keeping each instrument locked into one place for the entire mix, be prepared to use envelopes to make changes to your panning at different parts of the song. This is a less commonly used technique. You know how you react emotionally and in other ways to various sounds; the ear also perceives different frequencies at different levels of loudness. These differences are illustrated in the following diagram, the Fletcher-Munson curves.

This diagram quite clearly illustrates the actual levels required at different frequencies for the perceived volume to appear equal. You can see from the dip around the 3,000 Hz to 4,000 Hz area that these are the frequencies that we hear the loudest. Similarly, our ability to hear sounds drops off quite rapidly at the lowest and highest extremes of the frequency range. Notice in particular that as the overall volume is raised, the lower frequencies become more prominent. You can see this, for example, by comparing the shape of one of the louder phon curves with the shape of, say, the 40 phon curve.

Depending on the mix of frequencies which make up one song compared to another, both may actually be at the same level, but to the listener one will appear louder than the other. You will ultimately want both to appear to be at approximately the same volume. For example, if a track seems boomy, you may need to lower the bottom end by what appears to be quite a substantial amount to fix the problem.

On the other hand, if a track seems to be too present, just the tiniest cut in the presence area might be enough to fix it. Drums and percussion will be considered a little later. For guidance only: this chart shows only fundamentals, not harmonics. Notice the lightly shaded area that we have described as The War Zone. Take a careful look at the chart on the previous page.

Notice how so many instruments are always competing with each other for the same piece of acoustic space. That, incidentally, is before we even begin to talk about harmonics. This can happen at any frequency. For example, the viola and the clarinet occupy almost an identical range of acoustic space just about all the way from their lowest notes to their highest. However, the area to which you may need to give this issue the most constant attention is likely to be that area labelled The War Zone, which extends up to about 1,000 Hz.

Just about every instrument you are ever likely to need to mix will want to lay claim to some space within this zone. Try an experiment. Put on a CD which contains a full range of instruments and sounds. Well produced classical music is ideal for this. Now sit down and listen. Listen carefully for the different frequencies, starting with the highs and the lows, then, after you have identified them, gradually converging towards the mids. Close your eyes and pay especial attention to where the music seems to be coming from.

Do the lower notes seem to be coming up at you from below somewhere, while the higher sounds are drifting down from a plane higher up? Congratulations, you have just discovered the importance of the dimension of height to a good mix. When you are listening to music, you never just hear one frequency on its own. You hear a complex pattern or patterns of many different frequencies in different combinations. If individual frequencies are capable of affecting us in various ways, how much greater is likely to be the effect of different combinations of frequencies?

The sound of any musical instrument is made up of not just a single clean note at a time, but of a whole series of notes that are buried within that sound. These are the harmonics, the elements that shape the sound. As much as anything else, it is the way one musical instrument produces its harmonics that gives the sound its timbre and distinguishes the sound of that particular instrument from any other.

This point matters because certain combinations of odd numbered harmonics will tend to produce a more edgy sound, whereas the even harmonics will create a more soothing sound. Notice that every harmonic is arithmetically an exact multiple of the root.
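Since every harmonic is an exact multiple of the root, the whole series can be written down with nothing more than multiplication. A quick illustration in Python, using a root of 110 Hz (the A string of a guitar) purely as an example:

```python
root = 110.0  # Hz; any fundamental will do
for n in range(1, 9):
    kind = "odd" if n % 2 else "even"
    print(f"harmonic {n} ({kind}): {n * root:.0f} Hz")

# harmonic 1 (odd): 110 Hz, harmonic 2 (even): 220 Hz ... harmonic 8: 880 Hz.
# Boost the even multiples and you favour the smoother side of the sound;
# boost the odd ones and you favour the edgier side.
```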

By boosting the most interesting of these frequencies we also add an extra dimension to our mix, a dimension that will make the recording immediately appear more vibrant and alive. This is the dimension of height. We are going to sweep each track one by one to identify which frequencies appear to be the most interesting. Then, by boosting those frequencies a little (and sometimes reducing the same frequency on those tracks panned close by), we make that instrument more distinctive in our mix. Solo the Mandolin track and play it. Open the FX Window for this track. Make sure that the ReaEQ plug-in is enabled. Select Band 2 and change the band type to Bandpass with a bandwidth of about 1 octave.

As the tune plays, slowly move the frequency slider from left to right. You should find that at one particular point the sound has a pleasing, distinct brightness and clarity. This, then, is a key frequency for this instrument. Change the band type back to Band and create an EQ curve similar to that shown below right. If you wish, add a similar gain around 4,000 Hz.
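If you ever want to script this kind of EQ setup, note that parameter indices differ from plug-in to plug-in, so the safe first step is to list a plug-in's parameters by name. A small ReaScript (Python) sketch; the tuple layout follows REAPER's generated Python bindings as understood here:

```python
# Run inside REAPER as a ReaScript (Python).
tr = RPR_GetTrack(0, 0)
fx = RPR_TrackFX_AddByName(tr, "ReaEQ", False, 1)  # 1 = add only if missing

for i in range(RPR_TrackFX_GetNumParams(tr, fx)):
    ret = RPR_TrackFX_GetParamName(tr, fx, i, "", 64)
    print(i, ret[4])   # index and name, e.g. the band 2 frequency control
```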

You should now repeat this procedure for each of the remaining four instruments, but not at the same frequencies of course. In each case, sweep to find the optimum frequencies. If you are using the same panning as in our example, start with the Lead Guitar. Because this instrument is closest to the Mandolin, as well as adding some gain to its own key frequencies, you might also like to make a reduction around the mandolin's key frequency.

As you play the tune, you can switch global FX Bypass on and off to evaluate the effect of your changes. Either hold down the Control key and click on any individual track FX Bypass button, or better still, assign a keyboard shortcut to this function. A possible suggested solution to this exercise is shown over the page. In fact, our suggestions are, if anything, somewhat conservative. What matters is that your mix should sound right!

This primer spends a fair amount of time discussing EQ because it is such a powerful, useful and versatile tool. The purpose of these last few sections has been to help you to understand the theory, and then see how to put that theory into practice. It is much more important that you understand the technique and the theory, so that you can apply them to your own mixes in the future. Familiarise yourself with the technique of listening to a track, scanning with bandpass EQ, and then flipping to band EQ when you have identified the frequency to be cut or boosted.

This is a very useful technique which we will use throughout this primer and which will serve you well in your own experiments with EQ-ing. Be aware that it is only a guide, and use it as such. Remember that no two instruments are exactly alike. That which works a treat on one acoustic guitar, for example, might not have the same effect on another. Notice that for those instruments where it is especially appropriate, the formants are shown in this chart.

The formants are those frequencies at which the instrument is most distinctive; they can often be considered as being the frequencies which contribute most to giving the instrument its distinctive sound. Brightness on both sides at 2,000 Hz to 4,000 Hz; shrill on the right side above 5,000 Hz.

Acoustic Guitar: Fullness and body at the low end of its range. May be dull around 1,000 Hz to 3,000 Hz. Presence and clarity around 4,000 Hz to 6,000 Hz. Sparkle above 10,000 Hz.

Bass Guitar: Feeling around 40 Hz to 60 Hz. Presence around 1,000 Hz to 2,000 Hz. High harmonics around 5,000 Hz.

Dobro: Fullness at the low end of its range. Bite around 1,000 Hz to 2,000 Hz. Bright around 2,000 Hz to 5,000 Hz.


Electric Guitar: Fullness at the low end of its range. Bite around 2,000 Hz to 3,000 Hz. Presence around 5,000 Hz to 7,000 Hz. Sparkle above 8,000 Hz.

Harmonica: Fat at the bottom of its range. Highs around 2,000 Hz. Bright harmonics around 5,000 Hz.

Mandolin: Fullness at the low end of its range. Clarity around 2,000 Hz to 5,000 Hz. Sizzle around 10,000 Hz.

Piano: Resonance around 50 Hz. Bass from around 80 Hz. Bite around 3,000 Hz to 5,000 Hz. Presence and harmonics at 5,000 Hz to 15,000 Hz.

Trumpet: Fundamentals up to around 1,000 Hz. Formants between 1,000 Hz and 2,000 Hz, and between 2,000 Hz and 3,000 Hz.

Violin: Fullness in the lower range. Formants in the midrange and between 1,000 Hz and 2,000 Hz. Scratchy at 7,000 Hz to 10,000 Hz. Overtones above 16,000 Hz.

Woodwinds: Full at the low end of the range. Crisp around 2,000 Hz. Clarity around 4,000 Hz to 6,000 Hz.

You might at first think that this dimension, depth, would at least be an easy one to fix. After all, the closer a sound is, the louder it sounds, right? Well, no, actually. Before you can begin to understand how to manipulate the dimension of depth, you need to understand two important facts. First, different frequencies decay through space at different rates. This means that as you move closer to or further from a sound, the relative levels of different frequencies will change.

As a rule, higher frequencies decay at a faster rate than lower frequencies. That is why, when a distant sound first approaches you, depending on the physical environment you may find that you hear only the bass frequencies at first, or in other circumstances the higher frequencies first, with the total picture filling out as the sound gets closer and closer. Second, the clarity of sound deteriorates with distance. Put another way, the more distant a sound, the more it appears muffled. This happens not by any absolute factor, but according to the environment in which the sound is being produced and heard.

From this we can draw a number of inferences, but in particular this: making an instrument seem closer to us or further away in a mix is an illusion that can seldom be created satisfactorily by adjusting the volume of that instrument alone. EQ, for example, can be used to change the perceived distance of an instrument in the mix, making it appear closer (forward) or further away (back).

Unlike, say, EQ or delay, the effect that compression has on a signal is often quite subtle. This makes it a more difficult tool to understand and to master. The Threshold is the level at which the compressor will kick in. The Ratio determines the degree of compression that is applied. For example, a ratio of 4 to 1 means that if the signal coming in to the compressor is 4 dB above the threshold level, then the signal going out will be 1 dB above the threshold level.
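The ratio arithmetic is easy to check for yourself. This small Python sketch is not the internals of any particular plug-in, just the textbook calculation; the threshold value is an arbitrary example.

```python
def compress(level_db, threshold_db=-18.0, ratio=4.0):
    """Output level in dB for a simple compressor above its threshold."""
    if level_db <= threshold_db:
        return level_db                    # below threshold: left untouched
    return threshold_db + (level_db - threshold_db) / ratio

# 4 dB over an -18 dB threshold at 4:1 comes out 1 dB over the threshold:
print(compress(-14.0))   # -17.0
```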

The available range is from 1:1 all the way up to infinity. More explanation follows, with step by step exercises and examples.

ReaVerbate is simple but surprisingly powerful. ReaVerb is even more powerful, but quite complex in its setup. For this reason, in this section we will be using another excellent freeware reverb program, Kjaerhus Audio Classic Reverb, in the various examples.

Different reverb plug-ins vary slightly from each other. Some use different terms to describe what is essentially the same parameter. The key parameters, however, are as follows.

Pre Delay: Imagine that you are sitting in a room listening to a musician who is playing a violin. As she starts to play, the sound of her music travels out in all directions. If you had been alone in a vast open field, all of the music that you would hear would have travelled directly from the violin to your ears. In an enclosed environment, however, what you will hear will be an incredibly complex pattern of sound.

Some will reach your ears directly from the instrument. These will be immediately followed by those sound waves that have bounced off a surface such as a wall. The very short time that passes between these two events is known as the Pre Delay. Increasing this variable creates the illusion of a larger space.
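The arithmetic behind pre delay is just path difference divided by the speed of sound. The distances in this Python sketch are invented for illustration:

```python
SPEED_OF_SOUND = 343.0   # metres per second in air at room temperature

direct_path = 4.0        # metres: violin straight to the listener
reflected_path = 11.0    # metres: violin to wall to listener

pre_delay_ms = (reflected_path - direct_path) / SPEED_OF_SOUND * 1000.0
print(f"pre delay: {pre_delay_ms:.1f} ms")   # about 20.4 ms

# Longer reflected paths (a bigger room) mean a longer gap between the
# direct sound and the first reflections, hence the larger-space illusion.
```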

Early Reflections: Early reflections are those sounds that reach our ears directly and distinctly after bouncing off a surface such as a wall or a ceiling, or a piece of furniture. It is worth understanding that these sound waves do not come to an abrupt halt after bouncing off one surface. They instead travel further, to bounce off another surface, then another, and another. Gradually, as they fade through time, these reflections become less distinct and merge into each other.

Damping: As our sound waves bounce around all over the place, their timbre or nature will be affected by the kind of material that they encounter. For example, reverberation off a hard surface like a concrete wall will be clearer and more distinct than reverberation where curtains are hung in front of the same wall.

In the latter case, the higher frequencies will decay more quickly, resulting in a warmer sound.

High and Low Frequency Filters (or Attenuation): These settings can be used to restrict the reverb to only the range of frequencies that we specify. We have already encountered this concept when we were looking at Delay.

The first thing you need to understand about presets is that there is no magic or genius about them.

A preset is simply a collection of settings that you might logically expect to belong together. For example, if you have a small room size, you would expect to find along with it a short pre-delay time. So the good news about presets is that for the most part they represent fairly safe and sensible combinations of settings. If you feel that you want to recreate the atmosphere of a basement nightclub or a medium sized concert hall, then it is possible that a suitable preset might be able to help you. You do need, however, to be much more wary of presets with names like Male Vocal or Female Vocal.

Another issue with presets is that their strength might also at times be regarded as their weakness. Quite possibly, one of the methods that you may wish to use to create an edge or tension in your mix might be to create a reverb effect that models an environment that simply cannot exist in the real world. If you want to be creative, if you want to really make the best use of reverb, you really do need to be prepared to travel outside the comfort zone of presets. In short, presets may often be safe … but they are not the be all and end all, and they are often also boring. How many times have you heard a mix in which thin, wimpish vocals trickle out at you from dead centre between the speakers, often too loud because raising the volume has been the main or only technique used to lift them above the rest of the mix?

Remember the things that we have learnt so far, and think about the nature of singing. There are two particular characteristics of the human voice just crying out with the mixing opportunities that they create. Both are blindingly obvious, which might be why they are so often overlooked: 1. The frequency of the vocal will change constantly and continuously throughout the song.

2. The volume of the vocal will change constantly and continuously throughout the song. Yet so many people mix the vocal as if it stayed fixed in one place. Within your mixing arena you have a huge amount of space available in terms of various permutations of height, depth and width. The very last thing you should be doing is what so many people in fact choose to do — just confining the vocal to a worn out, scraggy little path straight down the centre, mid-way between your speakers.

The example that follows uses female vocals. The same principles, of course, can be applied to male vocals, just with different parameter settings. The illustration on the right shows the FX chain that we will be using in this example. It is only an example, and once you have got to grips with the technique you will be able to create far more interesting instances for yourself.

Some Ideas for Spacey Vocals: There are a number of factors which determine how best you can use the technique described in this section to create spacey vocals. These include the timbre of the voice, the style of singing, the variations in volume, the microphone used and the microphone placement. Keeping this in mind, some of your per channel options include the following: EQ, to emphasise specific qualities of the voice; and compression, with different settings to make different frequency bands more or less present.

Also be prepared to experiment for the best effect with the frequencies at which you split your bands. We will first split the vocal tracks into four pairs of channels, in this case by frequency range — low, medium, high and ultra high. Each range will be panned differently, with different subtle FX applied to (in this case) two of the four ranges. The four channel pairs will then be joined up and gently compressed. This will not only create a richer, fuller and more interesting vocal.

It will also mean that as the song is played, various of these effects will be discernible from time to time (depending on other factors, such as the levels of different instruments), ensuring that subtle changes to the vocal will occur as we move through that fourth dimension, time. For this example, you can use the supplied project RosesBloom. Of far greater importance, however, is that you should learn to apply these techniques to your own mixes. When setting up a compressor as a de-esser, use fast attack and release settings: this ensures that the compressor, when required, will kick in quickly and just as quickly kick out again.

Remember that once you have experimented with your settings and created those that suit your particular needs, you can save them as a preset. You will then have a de-esser that you can apply to other projects as well as the current one. When you zoom in closely on a recorded audio item, it is usually quite easy to visually identify a plosive from its jagged waveform pattern. Observe the example shown on the right.

It is part of a vocal recording. The first item shown is fine, but you can observe a sharp jagged pattern at the start of the second item. There are a number of techniques available for fixing plosives, some simpler than others. Only if all else fails need you try the more complicated remedies. Volume Envelope: The normal volume envelope is applied to your track after the FX chain. Therefore, if you use a volume envelope to fade down the plosive, any other FX that you may have (such as EQ or delay) will be applied to that plosive before it receives its corrective treatment.

This can actually make it harder to fix the problem. Mute Envelope: Chances are that this might do the trick, but it can still be problematic. The biggest issue with using a Mute envelope for this purpose is that the Mute envelope cuts in and out severely and suddenly. It can sometimes appear to create a hole in your song. Noise Gate: In theory, it may well be possible to use a Noise Gate to correct plosives, but it really is a bit like taking a sledgehammer to crack a nut.

Unless the Noise Gate parameters are set correctly, the gate can open and close too sharply, in a way that can actually make unwanted sounds appear worse even than if they had just been left alone. This same issue arises — indeed more so — when you try to use a noise gate to eliminate unwanted breath sounds.

Moreover, if a vocalist is creating plosives, the chances are that there will be several throughout the song — and you can bet your last dollar that each one will require separate noise gate settings. The Volume Pre FX Envelope: Especially if the mix is a fairly busy one, you might get away with using the Pre FX Volume envelope (as shown below right) to simply fade down your track at the offending point. On the other hand, if there are only one or two plosives, you may wish to consider whether you want the clutter of an entire envelope to address an issue that occurs only in a couple of places.

Splitting the Item: Often the simple split tool alone may be all that is required to fix the problem. Observe the example below right. Simply by splitting the media item at the point of the plosive, and then adding crossfades, we lower the volume going into the plosive and raise it coming out of it. For example, you can use the split tool to isolate the plosive as a separate item, then mute that item, or lower its volume and fade in and out of it.

This can help to make your edits appear more seamless. Two variations on this concept are illustrated below. In the first example, the plosive has been isolated into a separate media item (by splitting), and the volume of that item (using the Item Properties settings) has been lowered to around -10 dB. Crossfades into and out of the item have been added. The second example differs in that, instead of adjusting the volume of the isolated item, it has been muted.
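The split-and-lower fix also scripts neatly, which can save time if a vocalist produces plosives throughout a take. A ReaScript (Python) sketch; the item and the plosive's start and end times are placeholders:

```python
# Run inside REAPER as a ReaScript (Python).
item = RPR_GetSelectedMediaItem(0, 0)          # first selected item
plosive_start, plosive_end = 12.30, 12.42      # seconds (hypothetical)

middle = RPR_SplitMediaItem(item, plosive_start)  # returns the right-hand item
RPR_SplitMediaItem(middle, plosive_end)           # isolates the plosive

# Item volume is a linear factor, so -10 dB is 10 ** (-10/20), about 0.316.
RPR_SetMediaItemInfo_Value(middle, "D_VOL", 10 ** (-10.0 / 20.0))
RPR_SetMediaItemInfo_Value(middle, "D_FADEINLEN", 0.03)   # short fade in
RPR_SetMediaItemInfo_Value(middle, "D_FADEOUTLEN", 0.03)  # and fade out
# To mute it instead: RPR_SetMediaItemInfo_Value(middle, "B_MUTE", 1)
```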

The big advantage of using this method is that it enables you to address each individual plosive precisely, according to its unique characteristics, without cluttering your tracks with excess envelopes. Using EQ: The idea of using EQ is that it effectively enables you to momentarily lower the volume at the exact frequency at which the unwanted sound is most prominent, rather than lowering the overall volume of the track as a whole. This lessens the possibility of appearing to create a hole in your mix. One of the Spectral Analysis tools mentioned earlier in this primer can be used to help you identify the rogue frequency.

As the track plays, look for any obvious change at the point where the plosive sound occurs. Remember that you can use the Playback Rate slider on the Transport Bar to slow down playback while you are searching for this. Alternatively, you can use the sweeping technique described in detail in the section headed Sibilance. If you are having difficulty, use both methods and try to identify where both sets of findings converge.

Once you have identified the frequency, make the appropriate EQ adjustment (a possible example is shown below), then add an automation envelope to ensure that your change is only applied at that point. Using a Multiband Compressor: The other main method of taming plosives that you will wish to consider is to use a multiband compressor such as ReaXComp. This is a more sophisticated version of using EQ, but works on a similar principle.

Instead of just lowering the volume at that frequency, we squash it. An example is shown above. Of course, as with the EQ example, it is only an example. You will need to determine for yourself in each case which actual settings and values are required. If in doubt, start with a low crossover frequency. If you wish, you can use a bypass automation envelope to ensure that the compression is only applied where and when it is required, not for the whole track.

You will have to decide to what extent these are a problem. You may wish to remove them altogether, or you may prefer simply to lower them, especially if you feel that the presence of some breathing sounds adds atmosphere or reality to a recording. That said, often the solution of Splitting the Item already discussed above can also be used for reducing or removing unwanted breath sounds.

Using a Noise Gate: It is quite likely that you will find that ReaGate can be used to eliminate breathing sounds. The trick is to get the settings right, so that the gate closes when the volume drops below the audible level of the vocal recording, staying closed to eliminate sounds such as breathing, but also reopening in time not to miss any of the vocal when it comes in.

The two illustrations below illustrate this concept, using ReaFir. In the first illustration (left), if you look carefully you can see some low level noise below the horizontal line. This is shut out and does not get heard. The second illustration (right) shows a vocal passage that is powerful enough to break through the gate. A closer look at the second picture, however, reveals a problem that can occur with noise gates. Notice that in this example the last fade of the vocal passage is shut out, because it falls below the threshold of the gate.

If you are not sure about these settings, start with something similar to those shown (right) and make your adjustments accordingly. The Threshold determines the decibel level at which the gate will close and open. The Pre-open setting ensures that ReaGate will look ahead and anticipate when a change is coming. The Attack setting determines how quickly the gate opens when the signal rises above the given threshold; Release determines how quickly it closes when the signal again falls below that threshold.

In addition, the Hold setting determines how long to wait after the volume falls below the threshold before beginning to close the gate. Think about this for a moment. Breath sounds usually occur after a pause (perhaps between verses) and after a significant period of silence.

When the vocal starts immediately after the breath, you want the gate to open quickly, with a short attack time. This is not always the case with noise gates. As you will see later, noise gate settings for percussive instruments are likely to be very different from those required for dealing with vocals.
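To make the interaction of these settings concrete, here is a toy gate in Python. Real gates such as ReaGate work on audio blocks with look-ahead; this sketch simply walks an array of per-sample levels and tracks a gain envelope, with all times counted in samples for simplicity:

```python
def gate(levels_db, threshold=-40.0, attack=2, hold=10, release=20):
    """Return a 0..1 gain envelope for a sequence of levels in dB."""
    gain, held, envelope = 0.0, 0, []
    for level in levels_db:
        if level > threshold:
            gain = min(1.0, gain + 1.0 / attack)    # open quickly
            held = hold
        elif held > 0:
            held -= 1                               # wait before closing
        else:
            gain = max(0.0, gain - 1.0 / release)   # close gently
        envelope.append(gain)
    return envelope

# A long release keeps the tail of a vocal phrase from being chopped off;
# a long hold stops the gate fluttering during short gaps.
```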

The big advantage of this method is that, unlike a noise gate, it is not applied in real time, and therefore places no burden on the CPU when the track is played. The settings shown above right are fairly conservative and make a reasonable starting point. An example of applying these settings to one vocal clip is shown right. Noise gates are more difficult to use on tracks that contain bleed, because the presence of the bleed makes it difficult to identify appropriate levels for your settings, especially for the threshold.

You will see shortly how a noise gate can sometimes be used to reduce bleed, but this is not always appropriate. Consider the situation where you have recorded a band or perhaps just a duo or trio live with just two or three microphones. This might not be an ideal way of making a recording, but circumstances can dictate that this happens. You play your tracks back and notice from time to time on one particular track there is a breathing sound, as one of the singers draws breath before each line.

This could be a case for Spectro, with its real time spectral editing capabilities. Check out the manual that comes with Spectro for full information about how to use this wonderful plug-in. Meanwhile, an example is shown on the right. In this case, we have identified the unwanted breath sound, isolated it by drawing a rectangle around it, then muted that area. Notice that the other sounds, below and above the muted frequency range, are still heard. It is possible when you use Spectro in this way that when the track is played on its own there may appear to be a noticeable hole between these frequencies.

You might find, however, that when you play all of the tracks together this is not discernible.

If it is, you have at least two remedies at your fingertips. You may also need to adjust the panning at this point; again, you can use envelopes to do this. For pitch problems, the best way is to use the ReaTune plug-in to fix individual pitching errors, then to apply the FX to the media item as a new take. The procedure for doing this is outlined below. This is generally regarded as the preferred algorithm for fixing vocal pitching issues. Keeping the original take is usually a better option than deleting it.

If you later find you have missed any pitch errors, you can restore your original take as the active take, make your further changes with ReaTune and apply as a new take again. For stray noises such as clicks, sometimes an accelerated fadeout will help, sometimes splitting and trimming can help, but very often the best solution here will be to use Spectro. In the example shown on the right, you can see a sudden spike in the spectral pattern where a musician has accidentally created a clicking sound immediately after the end of the tune.

This could be caused, for example, by accidentally catching a finger on the instrument as the hand is removed from the strings. In this example, Spectro has been used to isolate and mute the offending noise. Example: This first example will illustrate the use of Spectro to eliminate unwanted background sounds. Select the Vox (or Vocal) track. Set the Volume fader to zero and the Pan fader to centre.

Solo the Vox track. Set the option Follow host cursor to Y. Position the play cursor at the required time (you may need to zoom in to get to this exact position). Play the song. You will hear the singer drawing breath; this breath will be visible on the Spectro graph. Select the M (for Mute) button for that region.

Play the song again. The breath can no longer be heard. Save the file. This second example will use a different approach to a similar problem. Select and solo the Vox track, and position your play cursor shortly before the point where the vocal comes back in. Play it. You will notice a sound just before the vocal comes back in. Make sure that Snapping is disabled. Select the media item, then click and drag your mouse to select that part of the track which contains the sound (see right).

Right click over the area and choose Split item at time selection. Right click over the selected item and choose Item Settings, Mute. If you wish, use your mouse to draw a fade in and out from this section, as shown in the illustration below right. This will bind them together as one. Bleed can happen under any number of circumstances, varying, for example, from a solitary musician playing a guitar and singing at the same time to a whole band, complete with drum kit and all. In the first case, you might record with two microphones on the guitar and one vocal mic.

In the second case, you might use a dozen or so microphones, all recording simultaneously. In both cases, the method of recording used means that you will not have discrete tracks for each voice or instrument. Conventional wisdom says that recording with discrete tracks, layering one over the other, will be more likely to yield the best results.

Live recording can have certain advantages. However, we cannot ignore the fact that bleed can bring its own problems. In each case, you will have to assess the situation and make a judgement. Sometimes the best course of action may be to simply live with the spillage between instruments and do the best you can with it.

Other times, you may wish to consider using any of a number of techniques to ameliorate it. This is especially likely to be the case if there are just two or three items that have been recorded at the same time — for example, a guitar and a vocal. The trick is to identify the different frequencies at which the main part being recorded and the intruding part are at their strongest and weakest.

Section 3 of this primer includes a chart which you might find helpful, but the real trick is to sweep the EQ to find a frequency band where by lowering the gain you lower the effect of the bleed. Something like the settings shown on the right might be worth trying where you have a female vocal bleeding on to a guitar mic.

Remember that in shaping the sound in this way you will also be changing the timbre of the sound that you are aiming to improve. Depending on the circumstances, this might or might not matter. For example, the settings shown above would have the effect of making the guitar sound a little less bright. In some circumstances this might not be acceptable. In others it might be. This could be the case, for example, if, because of the particular arrangement, you are relying on the guitar to add some bottom end to your mix, or if you have also recorded the guitar in-line on another track.

This technique might end up being felicitous, in that by locating and reducing the vocal on the guitar track, you will at the same time create space in the overall mix for the vocal to sing through. ReaFir is a multipurpose dynamics plug-in that almost defies categorization or description. The example on the right shows the opposite of the previous example. Here we have recorded a female vocal track which includes a lot of bleed from an acoustic guitar that was being played (and separately recorded) at the same time. By using ReaFir to gate out some of the lower frequencies on the female vocal track, we substantially reduce the impact of the bleed.

Of course, by doing this we also run the risk of removing some of the warmth from the vocal itself, and may also make the vocal appear a little thin. Possible techniques that you can use for fattening and warming up vocals include the careful use of channel splitting, reverb and delay. These are all discussed later in this primer. Split and Mute, Lower Volume or Delete: This technique is most appropriate when you want to effectively eliminate a whole passage from a track. This might be the case, for example, when you have an instrumental break on a vocal track.

You can simply split the media item and mute the unwanted passage. This may help to bring more clarity to the instrument or instruments that are to be featured during the break. In the first of the examples shown below, the passage has been isolated and lowered in volume. In the second, the same passage has been muted.

In this example, unlike the previous examples using Spectro, the noise is present throughout the entire track. You will see how ReaFir can be used for noise reduction. The need for a noise reduction plug-in can arise when an otherwise good track has some unwanted background noise on it.

This might, for example, be hiss or rumble, or the sound of an air conditioner. ReaFir can be used to remove such sounds from your tracks in real time. In order to do this, you must first identify a passage on the track (perhaps a second or two) where you have recorded the unwanted noise by itself. This is likely to be at the very beginning of the track. This will be marked with a red line (see below right).
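Conceptually, what happens next is spectral subtraction: the captured passage becomes a frequency-by-frequency noise profile, which is then subtracted from every block of the track. This NumPy sketch shows the idea only; it ignores the windowing and overlap that a real implementation such as ReaFir would use:

```python
import numpy as np

def denoise(signal, noise_sample, frame=1024):
    signal = np.asarray(signal, dtype=float)
    profile = np.abs(np.fft.rfft(noise_sample[:frame]))   # the noise profile
    out = signal.copy()
    for start in range(0, len(signal) - frame + 1, frame):
        spec = np.fft.rfft(signal[start:start + frame])
        mags = np.maximum(np.abs(spec) - profile, 0.0)    # subtract profile
        out[start:start + frame] = np.fft.irfft(
            mags * np.exp(1j * np.angle(spec)), frame)    # keep the phase
    return out
```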

Open the project file RosesBloomAll. Select this track and solo it. Select Gate mode and lower the red line sufficiently for the track to be heard normally. Observe the wave pattern. You should see that, somewhere low in the spectrum, the density of the waves thins out quite noticeably. This is the frequency below which only the guitar is present, not the vocal. By manipulating the ReaFir settings (see above), you should be able to significantly remove this range from your track whilst still allowing the vocal to be clearly heard.

The settings shown are only an example. Be prepared to experiment to get it right. You may find that this has the effect of making the vocal now sound a little thin or tinny. Play back the whole mix and listen carefully. It may be that the bleed of the vocal on to the guitar track will be sufficient to compensate for this. Notice that you should feed into the track only a small amount of the wet signal. Use just enough to make the vocal a little warmer and fuller, not so much that the reverb can actually be heard. If you like, open the supplied example file and examine it. This file is not meant to represent anything like a final mix for this song, only to illustrate some of the techniques that you can use to reduce the impact of bleed if you need to.

Play the song both with the FX chain engaged and with it set to bypass. You should notice that with the FX chain engaged the vocal can be heard more distinctly. The flow chart shown below represents the flow of the audio signal on the Vocal track which contains the guitar bleed. You can use any freeware screen capture program (such as EasyCapture) to capture images of the spectral analysis at different times. For example, you might have one which captures the image for guitar only, another for guitar and voice together, and a third for voice only.

It should give you a reasonably accurate picture. By comparing the different images, you should be able to get a reasonably accurate idea of where you should at least start to make your modifications. The example below shows Multi Inspector (freeware version) being used to compare the output from two tracks. Again, this is not an area in which we can lay down hard and fast rules as to exactly which method you should use to go about this next step. However, it is important that you develop a methodology and stick with it.

The suggestion here is one that works well for me. The following summary table outlines this method. The main objective is to make our project settings as neutral as possible before we begin the real job of mixing. As with many of the other tables in this book, you may wish to consider photocopying the page for easy reference.

Action: Unsolo any soloed tracks. Reason: You will need to hear all of the tracks that are intended to be part of your production in order to prepare your project for mixing.

Action: Unmute any muted tracks (unless these are muted because you think you are unlikely to need them in your mix).

Action: Pan all of your tracks dead centre and set the Master Output to mono. Reason: In the course of recording, seeking out and correcting glitches, etc., it is possible that you may have panned certain tracks in ways that may be different from those that are appropriate for your mix. You will be wanting to start your mixing from as neutral a situation as possible.

Action: Play your project. Reason: In this next stage you will be aiming to get the sound levels for all tracks approximately right.

You will need to take a flexible and common sense approach here. For example, it is likely that in the final mix some instruments will need to be faded up and down at certain points. Right now, you are aiming to get an approximate balance.

Action: Adjust the sound levels for individual tracks up or down until the overall balance sounds about right. Reason: Do not use the track volume faders or envelopes for this purpose. There are two reasons for this:

1. The track volume fader and default volume envelope are both applied post FX. You want to get your audio signal set to the approximate required level before your FX chain, not after it. 2. We are aiming to get to the position where everything is as neutral as possible before we start. You will no doubt find plenty of uses for track faders and volume envelopes later. Instead, use the Item Properties window (seen below) for your various media items, adjusting the Normalize fader to suit. If you prefer to use a Pre FX volume envelope instead, the best idea is probably to display it, make the adjustment to the whole envelope, then hide it.

Action: Save the project file to a new name. Reason: This ensures that you will keep an accurate copy of your project in its premix state, should you need to go back to it.
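When judging how far to move the Normalize fader, it helps to remember the relationship between decibels and the underlying linear gain: a rise of about 6 dB doubles the amplitude. A short Python reminder:

```python
import math

def db_to_linear(db):
    return 10 ** (db / 20.0)

def linear_to_db(gain):
    return 20.0 * math.log10(gain)

print(db_to_linear(6.0))    # about 1.995: +6 dB roughly doubles amplitude
print(linear_to_db(0.5))    # about -6.02: halving takes off roughly 6 dB
```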

The reasons for this are all about space. Mixing requires that every part in the production — every part — has to be given its own space. Arguably, more amateur (and, alas, some professional) mixes are spoilt by a failure to give adequate attention to the issues of space than by anything else.

The first mistake that some (indeed, too many) make is to think of space in a mix as consisting of width alone, and that this is simply determined by panning. But width alone is not enough to create a true sense of space in your mix. There are in all not one but four dimensions, all of them important. Width: most commonly, the left-right placement issues are controlled by panning.

Height (frequency): the frequency range of instruments and their harmonics spans approximately 20 Hz to 16,000 Hz and above. This determines another spatial dimension of our mix, which we can think of as being height. The role and importance this dimension can play in adding colour to your mix is too often overlooked. This is the aspect of mixing that people are often talking about when they refer to acoustic space.

Depth: as your music plays back through the speakers, some voices and instruments will appear to be closer to you than others. If you like, you can think of this illusion of depth as being conceptually similar to the illusion of depth created by a landscape painter. Time: like height, time is a dimension to which sufficient attention is often not given.

Your mix should not resemble a static snapshot, but should behave dynamically through time. As you begin to understand that sound operates across these four dimensions, you will grow to appreciate why each of these is important, and why the manner of combination of these different dimensions matters. Sometimes equally gifted people will approach the same task with completely different systems and methodology.

When it comes to artistic mixing, the trick is to be able to use the right tools for the right job, together and in combination, not to regard each on its own or in isolation. The chart below shows the primary functions of some of the tools at your disposal.

Notice that you can create a feeling of width in your mix by the way in which you pan your various instruments. But if, as well as width, you wish to add more depth to a particular instrument, you may also need to use some EQ on that instrument to bring it more forward or push it further back in your mix. And if you also wish to control the way the instrument responds to time, you might also wish to consider adding a touch of delay.

This leads us to an important observation. There is no such thing as a standard or universal setting. The simple answer to questions like these is that there is no simple answer. Even if we are talking about the same guitar, or the same voice, the number of factors which go into determining the optimum settings is such that the question almost has no meaning. The same vocalist crooning on a ballad will require different treatment from that which would be needed if they were belting out a rock and roll number. For example, if your musical arrangement consists of acoustic guitar, violin and mandolin, then you would expect to put more bottom end on the guitar than if the arrangement was, say, guitar, double bass and cello.

What sort of message or feeling is the song intended to convey? A song intended to create a feeling of joy and happiness will require different treatment from a sad or mellow song. An instrument might be playing quietly in the background for much of the song, but perhaps have twenty seconds or so of fame somewhere in the middle. It will likely require different EQ, volume, panning and compression settings at different times throughout the song.

All of this leads us to one inescapable conclusion. You will never learn how to use FX properly as long as you depend on presets. Let us suppose that you have decided to open a restaurant. In that case, which approach would be more likely to yield the best results: a chef who can only follow other people's recipes, or one who truly understands ingredients and flavours? I rest my case! One more point. The chart shows those aspects of mixing for which the various mixing tools are most commonly used.

This information should be taken as a starting point, not a limitation. As you will later see, the different spatial dimensions of sound do interact with each other. For example, it is perfectly feasible to envisage circumstances when panning can help to create a feeling of more or less depth, or where the application of a touch of compression can affect the perceived width of an instrument.

The Spatial Dimensions of Sound

Many of the examples that follow will involve taking an audio signal (such as a recorded vocal track), splitting it into multiple channels or several tracks, treating the different tracks or channels in different ways, then at some stage joining them up again. For the most part, in these examples, you could achieve similar results using whichever of these two methods you prefer (or sometimes a combination of both).

However, splitting the signal across several tracks does use up a fair amount of screen real estate.

Throughout this section and those that follow you will find a number of illustrative diagrams.

In most cases, their role is to help you to visually understand concepts which are often themselves quite complex. Please therefore understand that these diagrams are there as illustrations only. Their content should not be taken too literally. In many cases, the visual metaphors have been exaggerated in order to illustrate an otherwise difficult point to depict.

Before you commence panning, each track is pointed dead centre. This ensures that its output will occupy the entire space (both left and right speakers). The trouble is, so does every other track. Everything is literally being piled on top of everything else in an unholy scramble for domination of the same space. The result is a sound that you might be tempted to describe as foggy. Take a look at the illustration on the right. Look at it carefully and you can see a mix made up of three instruments — a guitar, a mandolin and a banjo.

Because nothing is panned, at most frequencies the three instruments are just fighting against each other. The mandolin gets a bit of a break at the higher frequencies (because neither of the other two instruments goes up that high), and similarly the guitar benefits from a little bit of space of its own at the lower end of the scale.

Straight away we can see a difference. This concept is illustrated by the second diagram (below right). Of course the instruments will still overlap, but each now has a definite area of space, somewhere between the left and the right speaker, that it can call its own. The most important advantage of this is that each individual instrument will now be heard more clearly. Which of these two examples do you think represents the better panning? The answer is whichever one sounds better to you. How can this be so? Well, here is an important dictum: there are no rules to panning, only laws.

In other words, panning laws dictate that if you pan in a certain way, you will get certain predictable results. However, there are no rules which govern which results are desirable. That depends on a number of factors, including the style of music. Currently everything is panned dead centre. At this stage we are not concerned with all the elements of mixing, such as which instruments to feature at which point, or whether to use compression, EQ or reverb on any tracks; our only tasks are to pan the tracks and to bring each one up to a suitable level. To adjust the levels, one track at a time, open the Item Properties dialog box, adjust the level of the Normalize fader, then click on Apply.

There is no single solution to this exercise. It is perfectly possible that you might come up with a different panning strategy that you like better; if so, please use it. By toggling between mono and stereo playback you can evaluate the effectiveness of your panning. Possible solution: we have used the Item Properties dialog box for each track to raise its level by about 6 dB.
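
If you wanted to make the same level adjustment by script rather than through the dialog box, a sketch along these lines would do it. This is our own illustration under stated assumptions (first track, all items, a 6 dB boost), not the book's workflow; note that +6 dB corresponds to a gain factor of roughly 2.0.

```
# Raise every media item on the first track by about 6 dB.
# Runs inside REAPER's ReaScript (Python) environment (RPR_* predefined).

GAIN_DB = 6.0
gain = 10 ** (GAIN_DB / 20.0)        # +6 dB is a gain factor of about 2.0

tr = RPR_GetTrack(0, 0)              # assumed: the track to adjust is track 1
for i in range(RPR_CountTrackMediaItems(tr)):
    item = RPR_GetTrackMediaItem(tr, i)
    vol = RPR_GetMediaItemInfo_Value(item, "D_VOL")
    RPR_SetMediaItemInfo_Value(item, "D_VOL", vol * gain)

RPR_UpdateArrange()                  # refresh the arrange view display
```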

The panning is shown in the illustration (left). Notice that the bass is placed at or near the centre; this is a fairly standard mixing technique. Be careful not to pan too aggressively, as the instruments still need to blend together as an integrated mix. One technique worth knowing here is bookend panning. It is especially useful when you want to create a spatial relationship between two instruments, but a problem arises of one always tending to drown the other out. This might be the case with our banjo and our mandolin. We may need, for various reasons, to place these two instruments close together in the panning spectrum.

A problem may arise, however, because of these two instruments the mandolin is by far the more present. By this we mean that it resonates at frequencies up above 1,000 Hz or so, where the banjo just does not go. Put quite simply, we position our instruments in such a way that the weaker instrument is able to wrap itself around, or bookend, the stronger instrument, thus preventing the stronger one from dominating the mix too much. The illustration on the right demonstrates this concept. In this case, we have decided to pan our guitar to the left, our banjo more or less towards the centre, and our mandolin to the right.
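
Expressed as a script, the placement just described might look like the sketch below. The pan positions are our own invented values for illustration; D_PAN runs from -1.0 (hard left) to +1.0 (hard right).

```
# Bookend panning sketch: guitar left, banjo near centre, mandolin right.
# Assumes tracks 1-3 of the project are guitar, banjo and mandolin.
# Runs inside REAPER's ReaScript (Python) environment (RPR_* predefined).

pans = {
    0: -0.6,   # guitar: panned left
    1: -0.1,   # banjo: just off centre
    2:  0.6,   # mandolin: panned right
}
for track_index, pan in pans.items():
    tr = RPR_GetTrack(0, track_index)
    RPR_SetMediaTrackInfo_Value(tr, "D_PAN", pan)
```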

Placing the banjo at or near the centre might be appropriate if, for example, the banjo was the main rhythm instrument for this particular tune. Notice how, by the use of bookend panning, we have been able to contain the otherwise over-dominant strains of the mandolin. Exercise: in this next example, we will use an instrumental recording which includes a banjo, a mandolin, a bass guitar, a rhythm acoustic guitar and a lead acoustic guitar. Working with a full line-up like this might make the technique easier to understand. You can change these settings later if you wish.

Click and drag the tracks to change their order so that, from left to right, they line up as shown (right). Solo the Guitar track (now Track 4) and play the tune. Insert an instance of ReaEQ into this track. The next few steps can appear strange if you have never done this before. Set the number of track channels to 6, then add the two sends shown on the right (a scripted sketch of this routing follows below).
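
For reference, here is how that channel count and those two sends might be set up in ReaScript (Python). The source and destination tracks, and the use of channel pairs 3/4 and 5/6, are assumptions based on this exercise's layout.

```
# Give Track 4 six channels and send channel pairs 3/4 and 5/6 to Track 5.
# Runs inside REAPER's ReaScript (Python) environment (RPR_* predefined).

src = RPR_GetTrack(0, 3)             # Track 4 (track indices are 0-based)
dst = RPR_GetTrack(0, 4)             # Track 5

RPR_SetMediaTrackInfo_Value(src, "I_NCHAN", 6)

for chan_offset in (2, 4):           # 2 = channels 3/4, 4 = channels 5/6
    send = RPR_CreateTrackSend(src, dst)
    RPR_SetTrackSendInfo_Value(src, 0, send, "I_SRCCHAN", chan_offset)
    RPR_SetTrackSendInfo_Value(src, 0, send, "I_DSTCHAN", chan_offset)
```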


Solo the track and play. Adjust the levels of the three volume faders to suit, then unsolo the track. You should notice that the rhythm guitar sound is full and bright, yet still allows the other instruments to cut through very clearly. If you wish to hear the rhythm guitar track by itself, hold down the Alt key while you solo this track. Save this file. In this example, the sends on the first track (Track 4) correspond exactly to the receives on the second track (Track 5). If you wish to make any changes (for example, to their levels), you can do so at either end. This means that the volume of each receive that is finally used in the track mix can also be controlled using the 3 Band Joiner.

Notice how the second illustration makes greater use of the available space. Please note that this is an example designed to illustrate the concept and implementation of bookending; it is not intended to serve as an example of a complete mix. Have you ever seen a live performance where every single member of the band remains motionless throughout the entire gig? Then why mix as if they do? Rather than keeping each instrument locked into one place for the entire mix, be prepared to use envelopes to make changes to your panning at different parts of the song.

This is a less commonly used technique. These changes are independent of which instrument is carrying which part of the song. The combined effect of these changes and variations is to produce a more live and spontaneous feel. Note: this file is not meant to represent a final mix for this song. You can see that, apart from a limiter in the Master to prevent clipping, no FX have been applied anywhere. The purpose of this project file is purely to illustrate how easy it is to improve your mix with just a little use of dynamic panning.
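
By way of illustration (our own sketch, with invented times and positions), pan envelope points can also be written by script. This assumes the track's pan envelope has already been made visible, since otherwise GetTrackEnvelopeByName will return nothing.

```
# Drift a track's pan over the first 30 seconds using two envelope points.
# Runs inside REAPER's ReaScript (Python) environment (RPR_* predefined).

tr = RPR_GetTrack(0, 0)
env = RPR_GetTrackEnvelopeByName(tr, "Pan")
if env:
    # args: envelope, time (s), value (pan position in the -1 .. +1 range;
    # check which sign maps to left on your system), shape (0 = linear),
    # tension, selected, noSort
    RPR_InsertEnvelopePoint(env, 0.0, -0.3, 0, 0.0, False, True)
    RPR_InsertEnvelopePoint(env, 30.0, 0.3, 0, 0.0, False, True)
    RPR_Envelope_SortPoints(env)     # sort once after adding all points
```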

Now try it for yourself, and see how you can make your mix more interesting with the use of dynamic panning. This primer will give you examples and suggestions as to when you might wish to use envelopes, but it is assumed throughout that you already understand what envelopes are, how to create them, and how to make adjustments to them.

In most cases it is likely that you will want to use the Post FX envelopes, but the most important thing is that you understand just what you are doing! Put at its simplest, whichever pan law is selected will help determine the rate at which the volume of a track appears to decay in the mix as the track is panned further away from the centre. The pan law is set in the Project Settings and is by default applied to all tracks in a project file. If in doubt, a modest default setting is a sensible starting point.

The project default can, however, be overridden for any individual track. To do this, simply right click over the pan control fader in either the Track Control Panel or the Mixer and select the required setting for that track from the drop down list. The information in the chart that follows should form a solid reference point when it comes to looking at the techniques and methods that we use to create a wonderful, magical illusion with our mixes: using just two speakers to generate a sound that is rich and spatial in all its dimensions. Treat this chart, therefore, only as a general guide, not as something that you must learn by heart and recite!
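
As a side note, the per-track override can also be set by script. In the sketch below, D_PANLAW is REAPER's track pan law expressed as a gain factor (a negative value means "use the project default"); the -3.0 dB value is just an example.

```
# Override one track's pan law. D_PANLAW is a gain factor: for example,
# 1.0 is a 0 dB law and about 0.707 is a -3 dB law; negative = project default.
# Runs inside REAPER's ReaScript (Python) environment (RPR_* predefined).

def pan_law_factor(db):
    """Convert a pan law in dB (e.g. -3.0) to REAPER's gain factor."""
    return 10 ** (db / 20.0)

tr = RPR_GetTrack(0, 0)
RPR_SetMediaTrackInfo_Value(tr, "D_PANLAW", pan_law_factor(-3.0))
```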

Bass: the demise of vinyl and the rise of music in digital formats has seen a tendency to pump up this range to an extent that was previously not possible. It is the range usually affected when you adjust the bass setting on your car or home stereo. Too much of this and your music may sound boomy.

Mid Range: this is the range that perhaps needs the most constant attention. Too much here will make an instrument or mix sound muddy, and can even cause irritation and annoyance.

High Mid Range: this is the range to which our ears are most sensitive, so much so that boosting a frequency by around 1 dB in this range has the same perceived effect as a 3 dB boost in any other frequency range. Thus, this is the area in which we need to be most careful when making adjustments to EQ.

High: this range pretty much reflects the range boosted or cut on your car stereo when you adjust the treble control. A little gain in the overall mix around here can make a production sound brighter.

Ultra Highs: this is where the late harmonics occur. Be very careful about boosting here.

You know how you react, emotionally and in other ways, to various sounds. These differences are illustrated in the following diagram, the Fletcher-Munson curves. The diagram quite clearly illustrates the actual levels required at different frequencies for the perceived volume to appear equal. You can see from the dip in the curves through the upper midrange (roughly the 2,000 Hz to 5,000 Hz area) that these are the frequencies that we hear the loudest.

Similarly, our ability to hear sounds drops off quite rapidly at the lowest and the highest frequencies. Notice in particular that as the overall volume is raised, the lower frequencies become more prominent. You can see this, for example, by comparing the shape of one of the louder phon curves with the shape of, say, the 40 phon curve. Depending on the mix of frequencies which make up one song compared to another, both may appear to be at the same volume, but to the listener one will appear louder than the other.

You will ultimately want both to appear to be at approximately the same volume. For example, if a track seems boomy, you may need to lower the bottom end by what appears to be quite a substantial amount to fix the problem. On the other hand, if a track seems to be too present, just the tiniest cut somewhere in the high mids might be enough to fix it. Drums and percussion will be considered a little later.

For guidance only: this chart shows only fundamentals, not harmonics. Notice the lightly shaded area that we have described as The War Zone. Take a careful look at the chart on the previous page, and notice how many instruments are always competing with each other for the same piece of acoustic space. That, incidentally, is before we even begin to talk about harmonics.

This can happen at any frequency. For example, the viola and the clarinet occupy an almost identical range of acoustic space just about all the way from their lowest notes to their highest. However, the area to which you may need to give the most constant attention is likely to be the area labelled The War Zone, the crowded stretch of the midrange reaching up to about 1,000 Hz. Just about every instrument you are ever likely to need to mix will want to lay claim to some space within this zone. Try an experiment: put on a CD which contains a full range of instruments and sounds. Well produced classical music is ideal for this.

Now sit down and listen. Listen carefully for the different frequencies, starting with the highs and the lows; then, after you have identified them, gradually converge towards the mids. Close your eyes and pay special attention to where the music seems to be coming from. Do the lower notes seem to be coming up at you from somewhere below, while the higher sounds drift down from a plane higher up? Congratulations: you have just discovered the importance of the dimension of height to a good mix. When you are listening to music, you never just hear one frequency on its own. You hear a complex pattern, or patterns, of many different frequencies in different combinations.

If individual frequencies are capable of affecting us in various ways, how much greater is the likely effect of different combinations of frequencies? The sound of any musical instrument is made up not just of a single clean note at a time, but of a whole series of notes buried within that sound. These are the harmonics, the elements that shape the sound. As much as anything else, it is the way a musical instrument produces its harmonics that gives its sound its timbre and distinguishes that particular instrument from any other.

Note that when you use EQ to raise or lower the volume of any particular frequency, you are actually raising or lowering the volume of a particular harmonic. This point matters because certain combinations of odd numbered harmonics will tend to produce a more edgy sound, whereas the even harmonics will create a more soothing sound. Notice, too, that every harmonic is arithmetically an exact multiple of the root.
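
As a quick worked illustration of that arithmetic (our own example, using a hypothetical root of 220 Hz), the series of harmonics and their odd or even character can be listed in a few lines:

```
# Every harmonic is an exact integer multiple of the root (the fundamental).
ROOT_HZ = 220.0                      # hypothetical root note

for n in range(1, 9):
    kind = "odd" if n % 2 else "even"
    print(f"harmonic {n}: {n * ROOT_HZ:7.1f} Hz ({kind})")

# Output begins: harmonic 1 is the 220.0 Hz fundamental itself, harmonic 2
# is 440.0 Hz (one octave up), harmonic 3 is 660.0 Hz, and so on.
```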

Earlier, we explored how panning can be used to add width to a mix. In much the same way, EQ can be used to work on its height. Consider again our banjo, guitar and mandolin example, shown on the right. The second illustration (right) demonstrates this concept: it shows the effect of adding some gain to make the banjo sound a little brighter in its upper range, whilst at the same time dulling down the mandolin over the same range to make room for it.

The visual pattern that represents our sound stage is now more varied and interesting, and less made up of homogeneous shades of grey. Right now, it is only important that you get your head around the concept. Notice that what we have done here has not so much been to add height to our mix as to make the existing height more colourful and interesting.

This is done by identifying those ranges within the harmonics where adjusting the EQ settings might add a little sparkle to our mix. In the third illustration (right) we have done this with the mandolin. To summarise: in this section we have learnt that there are two ways in which you can use EQ to modify and improve the way your mix fills out the frequency range and shares the available space between the different instruments. You will be better equipped to make the best use of EQ if you are able to understand the relationship between different frequencies and how the listener perceives sound.

The more familiar you are with these, the better you should be able to respond to the challenge of using EQ to its best advantage. In doing so, we also add an extra dimension to our mix, a dimension that will make the recording immediately appear more vibrant and alive: the dimension of height. We are going to sweep each track one by one to identify which frequencies appear to be the most interesting. Then, by boosting those frequencies a little (and sometimes reducing the same frequency on those tracks panned close by), we can make each instrument more distinctive in our mix.

Solo the Mandolin track and play it. Open the FX window for this track and make sure that the ReaEQ plug-in is enabled. Select Band 2 and change the band type to Bandpass, with a bandwidth of about 1 octave. As the tune plays, slowly move the frequency slider from left to right. You should find that at one point on the spectrum the sound takes on a pleasingly distinct brightness and clarity.

This, then, is a key frequency for this instrument. Change the band type back to Band and create an EQ curve similar to that shown (below right). If you wish, add a similar gain in the region of 4,000 Hz. You should now repeat this procedure for each of the remaining four instruments, though not, of course, at the same frequencies.
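
Incidentally, ReaEQ's band parameters can also be driven from a script, which can be handy for repeatable sweeps. In this sketch the parameter label "Freq-Band 2" is an assumption about how ReaEQ names its parameters; the loop discovers the real index by name rather than hard-coding it, so adjust the label to whatever your installation reports.

```
# Nudge ReaEQ's band 2 frequency from a script while auditioning a sweep.
# Runs inside REAPER's ReaScript (Python) environment (RPR_* predefined).

tr = RPR_GetTrack(0, 0)                          # the soloed mandolin track
fx = RPR_TrackFX_GetByName(tr, "ReaEQ", False)   # index of ReaEQ on the track

for p in range(RPR_TrackFX_GetNumParams(tr, fx)):
    # The Python binding returns a tuple; element 4 is the parameter name.
    name = RPR_TrackFX_GetParamName(tr, fx, p, "", 64)[4]
    if name == "Freq-Band 2":                    # assumed parameter label
        # 0.0 = bottom of the frequency range, 1.0 = top; 0.65 is arbitrary
        RPR_TrackFX_SetParamNormalized(tr, fx, p, 0.65)
        break
```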

In each case, sweep to find the optimum frequencies. If you are using the same panning as in our example, start with the Lead Guitar. Because this instrument is closest to the Mandolin, as well as adding some gain to its own key frequencies, you might also like to make a small reduction around the mandolin's key frequency. As you play the tune, you can switch global FX Bypass on and off to evaluate the effect of your changes.

Either hold down the Control key and click on any individual track's FX Bypass button or, better still, assign a keyboard shortcut to this function. A possible solution to this exercise is shown over the page. In fact, our suggestions are, if anything, somewhat conservative. What matters is that your mix should sound right! This primer spends a fair amount of time discussing EQ because it is such a powerful, useful and versatile tool. The purpose of these last few sections has been to help you to understand a theory, and then to see how to put that theory into practice.

It is much more important that you understand the technique and the theory, so that you can apply them to your own mixes in the future. Familiarise yourself with the technique of listening to a track while scanning with bandpass EQ, then flipping to band EQ once you have identified the frequency to be cut or boosted. This is a very useful technique which we will use throughout this primer and which will serve you well in your own experiments with EQ-ing. The trick is first to identify the key frequencies for an instrument, and then to double (or even treble or quadruple) the track, making sure to EQ and pan each copy differently.

Here is a very simple example. Example 1: play the file, and notice that it consists of a single track, a mandolin. There is almost no other instrument that on its own sounds as naked as a mandolin. We can do something about this. Using a combination of ReaEQ (with a bandpass EQ filter) to sweep the track and an analyser plug-in such as VST MultiInspector Free, identify the three frequencies (approximately) which are most interesting in shaping the sound of this instrument. There is no single correct answer to this question; different combinations will produce different, but equally interesting, outcomes.

For the purpose of this exercise, let us suppose that we have identified three such frequencies, the upper two lying at around 1,000 Hz and 5,000 Hz, as the frequencies that we wish to emphasise. Add a new track and place it above the existing Mandolin track; label this track Mandolin Mix. Add two more new tracks after the original Mandolin track and label these Mandolin Copy 1 and Mandolin Copy 2 respectively. These will be Tracks 3 and 4. Create a send from the Mandolin track to each of them; it is important that both of these sends should be Pre FX, as shown on the right (a scripted sketch of these sends follows below).
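
Were you to script these sends instead, the following sketch shows one way. The track layout (Mandolin Mix, Mandolin, then the two copies) is the exercise's assumed order, and I_SENDMODE = 1 selects a Pre-FX send.

```
# Create Pre-FX sends from the Mandolin track (Track 2) to its two copies
# (Tracks 3 and 4). Runs inside REAPER's ReaScript (Python) environment.

mandolin = RPR_GetTrack(0, 1)        # Track 2 (track indices are 0-based)

for copy_index in (2, 3):            # Tracks 3 and 4
    dest = RPR_GetTrack(0, copy_index)
    send = RPR_CreateTrackSend(mandolin, dest)
    # I_SENDMODE: 0 = post-fader, 1 = pre-FX, 3 = post-FX
    RPR_SetTrackSendInfo_Value(mandolin, 0, send, "I_SENDMODE", 1)
```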

Now make Track 1 a folder, with Track 4 the last track in the folder. Your project should now appear similar to that shown below. As you work through the remaining steps of this exercise you will probably want to leave the project running. We are now going to work on Track 2: hold the Alt key while you click on the Solo button for this track. Using ReaEQ, adjust the settings for this track so as to give a significant boost around the lowest of our key frequencies, and a significant reduction around the two higher ones.

A possible outcome is shown in the first of the screen shots on the right.
