I blog all things for unsigned songwriters and artists!
If you’re producing your own or others’ music and you’re not using Soundcloud, you might check to make sure that you aren’t, in fact, living under a rock.
Soundcloud is the world’s leading social media site for music (YouTube is definitely bigger, but isn’t focused on music, and MySpace doesn’t offer the same degree of interaction). It allows you to share your own tunes, see what your favorite artists are doing, and discover new music from across the globe, among other things.
The sad truth about Soundcloud, like most social media, is that musicians don’t always comprehend the best way to use it. There’s really no excuse for this ignorance. Using Soundcloud to your benefit is quite easy.
Generating a following on Soundcloud is quite easy, though the audience is a little skewed. Most Soundcloud users are fellow musicians, who are just as interested in getting their music out to the world. They often follow a quid pro quo model of participation, which guarantees that if you follow them and comment on their work, they’ll do the same. In that sense, you’re preaching to the choir. But any exposure is good exposure.
There is a piece of advice in the STEFM guide that I’m not completely on board with. They advise you to never put works in progress on your Soundcloud page. I’ve seen plenty of artists with a substantial following posting unfinished tracks, and it doesn’t hurt them. Some people use demo tracks and fragments as a way to stimulate audience involvement, which might work for you, too.
The bottom line, though, is to get on Soundcloud and get involved, since it is an unparalleled medium for growing an audience, even for a home producer.
As a songwriter, the ending is the least of your worries. Odds are that the ending you choose for your song will be changed or replaced by the artist or producer if the song gets cut. That said, you still have to figure out how to end your song on a demo and during live performance. Otherwise, you’ll be stuck for all eternity playing a song that never ends.
There are countless ways to end a song. Ultimately, you have to decide which one works best for each of your songs. Try several endings and see if one fits the song better than the others. Here are some examples of the different kinds of endings used in contemporary songs. Feel free to experiment with these, change them, combine them, and come up with your own unique ways of wrapping things up.
Custom tag: Musical section or signature lick written to be an ending.
Intro tag: A reprise of the intro used for ending the song.
Riff tag: An ending that incorporates a signature lick or riff from the song.
Motive reprise: A final, instrumental repetition of the main musical theme.
Chorus/verse reprise: Like a motive reprise, but with verse or chorus melody.
Hook fade: Several repetitions of the hook over a fade out.
Double chorus fade: A double chorus that fades out in the second half.
Hook dead stop: A final repetition of the hook, ending in a sudden stop.
Any of the endings that draw from another part of the song may require a bit of tailoring. Tweak and adjust them as necessary. A few tricks to try include slowing down at the very end, modulating the key of a final chorus, or adding a brief rest or bit of silence before a final chord.
To begin with, there are many things that you cannot do in a home studio. A competent recording of a live band - still the mainstay of the recording industry - is usually impossible in your bedroom. Fitting an orchestra in there is also challenging.
And even though the topic of this piece is whether award-winning (i.e., Grammy) music can be conceived, recorded and mastered in a home studio, there’s nothing to indicate that any such recordings actually have been made.
What is possible theoretically is not always possible in practice. That being said, however, amazing things can be done in home studios, and it’s an interesting topic for recording enthusiasts to ponder.
Can all of the semi-pro equipment that promises great results actually deliver a recording that rivals the “majors”?
A Bit Of History
Making a record used to be a complex task, requiring engineering by highly trained technicians using specialized equipment in a large, dedicated facility. There were also many other people involved, each with a specific contribution made to the process. Musicians, vocalists, songwriters, arrangers, producers, publishers and engineers all had their own areas of expertise; each did a job and that job only.
As the business matured, these jobs started to blend, and recording technology also advanced. The equipment became smaller, cheaper and less complicated to operate. By around the mid-1970s, it was possible and practical for recording artists to purchase home versions of professional recording equipment in order to produce and record themselves as well.
In the last 20 or so years, the wide availability of low-cost, high-quality digital recording technology has greatly narrowed the price and performance gap between pro gear and semi-pro demo making tools.
There has also been a rapid, massive evolution of the digital audio workstation as a primary recording medium. Further, it’s now possible for anyone with a computer, whether used mainly for word processing or surfing the Internet or whatever, to access the same technology that professional recording engineers, producers and artists currently use.
But back to the intriguing question: Is it possible for someone to actually record an award-winning product with a desktop, laptop, or any other currently available type of dedicated home recording device?
What’s The Difference?
I believe that most people would rather hear a great song, with a killer performance—even if it had a sound quality that was less than state of the art—instead of a badly performed, horrible piece of drivel that is “slick” sounding and well recorded.
If that’s the case, will there be any difference in the sound quality if a very talented artist records his or her new album at home rather than at a big studio, especially if the material is just as good as it was on the last record, and the same producer and same engineer are employed?
The software the artist owns is the exact same program and version that the big studio has. Does the hard drive on the studio’s computer sound better than the hard drive on the artist’s computer?
In the digital world, data is data, after all. Does a document on one computer read any differently than that same document on any other, assuming everything is working correctly?
So, is there any difference between the sound quality the talented artist will get making the record at home as opposed to recording it in a very expensive studio? Theoretically, the answer is no, but in practice, things are not always so simple—especially in the recording world.
Remember, in my little example all the personnel are skilled, talented and very good at what they do, and the arrangements, players, production, material, etc., are equally strong.
The Input Signal
Things start to differ a bit when we get to the actual sounds that are being recorded. First and foremost on my list is the quality of the input signal.
I like to explain this concept by using a lowly little audio cassette (remember those?) for my example. If you carefully transfer what you feel is the best sounding recording you have ever heard to that cassette, then play back the cassette copy, a semi-faithful reproduction of the original recording will be heard. There may be some added noise, distortion or signal coloration, but there will be a definite aural signature of the quality of the original signal.
In other words, if you record something you think is the best sounding, most professional, sonically perfect example of high-quality sound you’ve ever heard on to a cassette, it will sound very much like the original on playback, except for any noise and/or distortion that the cassette may impart on the signal.
However, if you then stick a little telephone answering machine microphone into that same cassette deck, and record yourself saying “Testing 1, 2, 3; How now, brown cow?” into that mic, anyone and everyone who hears it will be able to tell that the recording was made using a cheap mic plugged into the cassette machine.
That lowly little cassette still has the ability to allow the best sounding recording ever made to be distinguished from the homemade, cheap mic recording.
So, in a digital recording environment - which is supposed to be an uncolored, neutral medium - people making recordings must make sure they have the highest quality input signal, even though it can be altered after the fact, so that the sound they want gets printed with the best quality, or at least with the quality that is desired.
What I’m saying is that while your recording gear will do a professional job of recording what you put into it, you must do a professional job on the front end in getting that sound. And that’s where the big studios can’t be beat.
The Big Studio Advantage
Where the big studio has another advantage is in the quality of the mics, preamps, equalizers, compressors etc., that they own.
To help get that input signal to sound its best, the big studio is usually second to none.
Accordingly, there are many more choices than the artist would have at home. Plus, there is a difference of literally hundreds of thousands of dollars in the total worth of that equipment, compared to what the artist might own.
Still, if the artist chooses one very good front end for the home system, with a sound that he or she loves, then there is a good possibility that the sound of the big studio can be approximated for that artist.
But there will not be as many choices, variations, or options available at home. One factor that can help home studio owners is to use samples.
To the artist’s advantage, these samples are usually made in big studios using high-quality equipment. This further levels the playing field, because if these samples are used, the quality is identical to that achieved when the samples were made in the larger facility.
Next on my list of things that bring the home studio closer to the big studio is accurate, trustworthy monitoring, which is crucially important.
The trend that most engineers and producers follow for monitoring is to shy away from the large monitor speakers in a studio, and instead use smaller, near field systems that are found in many facilities and that can be easily carried around. These speakers are much smaller, and very affordable, which is another element in the artist’s favor when planning his or her project/home studio.
In the last several years, self-powered near field monitors have gotten very popular, and because the power amp is built into the speaker, once again it will be the same one, whether used in the studio or at home.
However, no one will try to argue that the sound you get out of a near field system in your bedroom can equal an $80,000 system in a well designed acoustical space, which brings us to…
As far as having a nice large recording room with high ceilings that is acoustically tuned, with isolation booths, and controlled reflections and reverberation, the big studio will always have the advantage. That is, unless the artist lives in an old former church or similar structure, or has spent a fortune treating the room.
However, because many instruments are recorded direct, and a space for recording vocals or a single instrument can be designed in a spare bedroom, it is less of a problem than it would be if trying to record an orchestra at home.
Finding a good place at home for your control room is a sticky challenge. Standing waves, bass build-up, high-frequency absorption, reflections of sound off the mixer or computer all have to be dealt with.
What you’re trying to do is to get an accurate sound in the room, which will let the mixes translate well on other systems. This is not always possible in a room at your house.
Fortunately, there is a lot of literature and information on the subject, and quite a few companies have come out with products that allow one to quite reasonably treat the room with baffles, diffusers and bass traps.
When the room is “tuned” for the best response, recording, mixing and monitoring there becomes a trustworthy process, and the artist can trust that the sounds being heard and the mixes being made are accurate and true.
The Final Consideration
Finally, there is mastering. Most big studios send their product out to dedicated mastering houses for the final finishing touches. No one will argue about the sterling quality of the world’s best mastering houses. If you want your project done right 100 percent of the time, that’s where it should go.
IK Multimedia T-RackS, which includes the vintage tube equalizer shown here, is one of many mastering software suites well suited for home/project studios.
However, more and more project studios do desktop mastering to go with their desktop recording. The software tools can do a surprisingly decent job, and many home studio owners find that they can master projects to their own satisfaction.
Yet the fact remains that home-mastered projects can suffer from a lack of good monitoring, and more often suffer from a lack of good sense on the part of the person doing the mastering.
There’s a reason that pro mastering engineers do a professional job, and it’s because they know what they’re doing, in addition to having the best tools. In the end, the project studio is best for people who have a good foundation in recording and know what they want.
It is a technological marvel that so much can be done in a bedroom studio, but the major studios serve a crucial purpose and always will.
We’ve made a huge hubbub about creating the optimal recording environment in your home studio. We’d love to see some kind of soundproofing or acoustic treatment in your studio. The sky is the limit with these things and if you can make a professional environment, then go for it.
But consider and embrace the truth: you have a home studio. It has limitations, but it is also a very unique place on this planet. There is really no other studio like it.
For many, that means that you have to deal with some intruding noise. But it also means that you have the opportunity to record in some very special places. And I’m not talking about your bedroom. If you want to get some variety, move out of your studio from time to time and check out a few other places in the home:
Bathrooms are chock full of reflective surfaces and can create some very interesting reverbs. Can you make them with convolution reverbs? Sure can. Or you could just throw up a mic in the shower and get some crazy sounds right away.
Garages are typically sonic nightmares. Sounds seep in. And the sound of you rocking certainly seeps out. But if you can find the right time of day or do some quick treatment, you might be able to grab some cool, weirdly echo-ey tracks in there.
Halls and Stairs
If you have hardwood floors, then +2 points for using your hallway for recording. Or if you live in an apartment complex with a stairwell, you might find some incredible reverbs there, too. Ignore the chaos of such a situation. Maybe you can even get your neighbors to help.
Utilizing any space like this is going to help you make very colorful tracks. You’ll find that your color tracks really stand out against a background of neutral tracks that are recorded with very little room sound. And that’s totally in your power at home.
In a pro studio, you can get great sounds with little interference at greater distances and that can be very helpful for mic’ing some instruments. If you want the “best” tracks you can get in an imperfect environment, you don’t have that luxury. So you need to tailor your efforts accordingly.
This generally means that you’ll get the best results at very close range to your sound source. That doesn’t mean you can just thrust an SM57 into the cone of an amp and get the best. You still need to test the position for the best results. But you’ll get your best recordings this way.
If you want to approximate the effect of distant mic’ing, get some convolution reverbs involved. Short delay times mixed at low volumes in with the original signal can give you that roomier sound you’re looking for and you can dial it in at your leisure. Don’t be afraid to make a commitment to a sound though.
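To make the short-delay idea concrete, here is a minimal sketch in plain Python. It is not a real convolution reverb (that would convolve the signal with a measured impulse response); it just mixes one quiet, delayed copy under the dry signal, and the function name and numbers are purely illustrative.

```python
def add_room(dry, sample_rate=44100, delay_ms=20.0, wet_level=0.2):
    """Mix one short, quiet delayed copy under the dry signal.

    A real convolution reverb convolves the signal with a room's
    impulse response; this single "tap" only hints at the effect.
    """
    delay = int(sample_rate * delay_ms / 1000.0)
    out = [0.0] * (len(dry) + delay)
    for i, sample in enumerate(dry):
        out[i] += sample                       # the dry signal, untouched
        out[i + delay] += wet_level * sample   # the quiet "room" copy
    return out

# A single impulse shows the dry hit followed by the quiet echo.
print(add_room([1.0, 0.0, 0.0, 0.0], sample_rate=1000, delay_ms=2.0, wet_level=0.25))
# [1.0, 0.0, 0.25, 0.0, 0.0, 0.0]
```

Raising `wet_level` or `delay_ms` pushes the sound "deeper" into the imaginary room; keeping both small preserves the close-mic'd clarity while adding a touch of space.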
Nullify Your Enemies
In this case, your enemies are the errant sounds from the outside environment. Use the directional pattern of your microphones to cut them out. A cardioid pattern is most sensitive to what it’s pointed at and rejects sound from the rear, so point the back of the mic at pesky outside noises.
1. The Better The Source, The Better The Recording
The first rule to live by is quite simple: the better the source, the better the recording.
Think of your microphone as your ear. If something sounds bad to your ear, chances are it won’t sound great in front of a mic. Making sure your source is the best it can be is the first thing to remember whenever starting a new project. That could mean a new set of strings, fresh drum heads, or having your vocalist do warm-ups before tracking. And, no matter what, everybody should tune before a take.
Remember that there’s a lot you can edit out later on down the line, but there’s a lot that you can’t add if it’s not naturally there — that includes tone, body, and definition, all things you’ll lose if your instruments aren’t in good shape.
2. Save Your Work Often
Back in the mid-’90s, I remember getting an 850-megabyte hard drive; I never thought I’d fill it up, and I felt very proud of myself for being so far ahead of the times.
Now, storage space is virtually free. A terabyte of hard disk storage can be bought for less than $100, and there’s no excuse not to back up your sensitive data.
Nothing is worse than losing something you worked on for hours, especially when you’re running your studio as a business and you have a paying client. Always save your work between takes. It also doesn’t hurt to have an external hard drive that you back up your sessions to nightly; if something happens to your main hard drive, you’ll at least have a copy to start over from.
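The nightly copy can be as simple as a short script. Here is one possible sketch in Python; the folder paths are hypothetical placeholders, so point them at your own session folder and backup drive.

```python
import shutil
from datetime import date
from pathlib import Path

# Hypothetical locations -- change these to match your setup.
SESSIONS = Path("~/Studio/Sessions").expanduser()
BACKUP = Path("/Volumes/BackupDrive/Sessions")

def nightly_backup(src=SESSIONS, dest=BACKUP):
    """Copy the whole session folder into a dated folder on the backup drive."""
    target = dest / date.today().isoformat()
    shutil.copytree(src, target, dirs_exist_ok=True)
    return target
```

Scheduled with cron or Task Scheduler, this gives you one dated snapshot per night. It is not a substitute for off-site backup, but it covers the dead-drive scenario this tip describes.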
3. Always Keep Spare Parts
You may not think of this at the time you’re buying your equipment, but stuff does break, even if brand new, and sometimes, instruments may need last minute repair. Believe it or not, I’ve had a few sessions fail due to brand new equipment!
Keeping basic items at your studio will always help keep things going smoothly when the inevitable happens. Stock a set of guitar strings (both electric and acoustic), some drum sticks, and always keep spare instrument and microphone cables on hand. You never know when your session will be saved because you came to the rescue! It also helps to be able to kindly suggest a new set of strings to the stubborn guitarist who showed up with old, dead strings on his axe.
4. Nothing Leaves Until The Check Clears
This tip applies only to home studios that record for profit, not your simple project studio, but it deserves a mention of its own. Don’t ever, ever let any mixes leave your studio until you’re paid in full. This includes mp3 copies you send out via email and CD-Rs with rough mixes that you let leave your studio. At any point during the recording process, a financial dispute of some sort may arise, and the client will still have your rough mix. This is rare, but it happens.
Remember, anything you let leave your studio, you can never get back. Just ask any of the number of engineers who’ve gotten burned by non-paying clients!
5. Keep It Simple
I can’t stress this last tip enough: keep it simple. One of the biggest and most common mistakes a new recording engineer can make is being too fancy. You’ll waste a lot of time — and your client’s money, if working for profit — by overdoing it in the studio. Examples of this include recording an instrument in stereo when a mono (single) track will do, doing too many vocal overdubs, or laying down too many guitar layers. Let the band’s music speak for itself.
Part of your job as a lyricist is to get inside the minds of the characters you create in your songs. You need to express what is real for them. You also need to remember that the words that come from your heart and your pen will hopefully be coming out of a performer’s mouth someday. The exercise that follows will help you craft lyrics consistent with the images projected by the artists you intend them for:
Decide which artist, in your wildest fantasy, you would most want to record your song. Be sure to choose an artist who records outside material. Write down that artist’s name.
Close your eyes and visualize a stage in front of an enormous crowd of cheering fans. Hear the announcer introduce your “dream artist” performing his or her Number One smash hit—your song! Pay attention to the details. Notice what the artist is wearing and what kind of accompaniment there is. Then, listen to the artist singing your song.
Let these images “write” your lyrics. When the performance is over, soak in the praise and adulation of the fans. Then sit down backstage and have a discussion with the singer. Ask what the artist would want to convey in this song and what words and images he or she would use to say it. Now write down everything you saw and heard. Include specific details—especially any lyric ideas the “artist” suggested.
It can also be effective to imagine hearing your song coming out of the radio. In addition to helping you write a “radio-friendly” lyric, this exercise may also help you create the melody and arrangement.
Your fantasy artist may not be the one who ultimately records your song, but if that artist has a proven track record of hits, and you write a song he or she could potentially record, it’s likely that song would also be suitable for a variety of other artists in a similar style.
"Strip away all the filler. It may take three, four, five, ten rewrites. You don’t need five more songs in your catalog. You don’t need one more. You need one great one. It’s too competitive to let yourself off the hook with lines that are just okay. Dig deep to find that part of you that makes it special and get rid of the things that you would discount if it was another person’s song. It’s just too competitive."
The hook is generally included in the chorus section, and each line of the chorus directly supports the hook’s message. Sometimes the chorus simply restates the message presented in the hook. At other times, the chorus may be a list of ideas that culminates in the hook.
The chorus usually appears two or three times during the song and contains the most important information, so it needs to be even tighter and catchier than the other parts. The most wonderful verse you’ve ever written would probably get boring if you used it three times in the same song. The chorus needs to be absolutely bulletproof to make a listener want to hear it again and again.
Lyrically, chorus information is sometimes more abstract than verse information and is often a philosophical statement or moral of the story supported by the verses. It’s usually presented in a leaner and more compact style than that of the verses. A simpler meter with longer notes and fewer syllables can help keep this density from bogging down the chorus or making it too heavy. If the other parts of the song have done their jobs, the transition to a different type of information and style of presentation should be logical and easy.
From a musical standpoint, the chorus should sound big and memorable. Ideally, the chorus should be easy for listeners to sing along with. This makes your song interactive by letting the listener participate and also makes it easier for a listener to remember the song; it’s much easier for a person to remember something they did than something they only heard.
What’s worth more, the song or the recording?
The right answer is, “it’s a tie.” But that’s not currently how these siblings are treated.
The debate over music licensing royalties for digital mediums has shined a light on the fact that, when it comes to royalty rates, songs and recordings are treated very differently, seemingly without rhyme or reason. Depending on the type of transmission, we have cases where the song (and songwriter) is paid something and the recording (and performing artist) nothing (i.e. terrestrial radio), and other cases where the recording is paid much more than the song (i.e. digital streaming services like Pandora).
When I was in DC this April for GRAMMY on the Hill, concepts like “one music” and “equity” were thrown around as rallying cries for a more unified system of licensing. As an industry, we now need to put our money where our mouths are. If indeed we believe in these catch phrases, the answer is very simple.
Publishers and record labels have a chance to work together towards increasing the overall top line number and then share the results equally, regardless of distribution medium. This is how we’ve always operated for synchronization revenue, and this logic needs to be applied across the board to all music rights.
If we as “one industry” can negotiate 55% – 60% of Pandora’s revenue for rights holders, then the song and recording should split that evenly. Similarly, if we can get 10% – 15% from terrestrial radio, this too should be split evenly between recording and song.
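The arithmetic behind the proposal is simple. As a sketch (the function and figures are illustrative, not an industry formula), an even split of whatever pool the industry negotiates looks like this:

```python
def split_royalties(service_revenue, rights_share, song_share=0.5):
    """Split a service's rights-holder pool between song and recording.

    rights_share is the fraction of the service's revenue paid to rights
    holders (e.g. 0.55-0.60 in the streaming example, 0.10-0.15 for
    terrestrial radio); song_share=0.5 is the even split argued for above.
    """
    pool = service_revenue * rights_share
    song = pool * song_share                 # publisher / songwriter side
    recording = pool * (1.0 - song_share)    # label / artist side
    return song, recording

# Hypothetical service with $1,000,000 in revenue paying 55% to rights holders:
song, recording = split_royalties(1_000_000, 0.55)
```

Under today’s system the same pool is divided very unevenly depending on the medium; the point of the sketch is that the split is a single parameter, not a structural necessity.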
Additionally, the money should flow from the service to the label and to the publisher in parallel, not through the label and then a portion to the publisher [the current process for paying mechanical royalties to publishers]. The song and the recording should be treated equally.
Who’s to say whether the writer or artist added “the most” value to the resulting work? It’s impossible. It is a symbiotic relationship. Was it Frank or Sammy, Billy Mann or Pink, Dr. Luke or Katy Perry? It was both. One without the other would remove the magic.
As is always the case in life, most complex issues can be boiled down to playground rules. If there are two kids and one swing, they share it. Most kids do this on their own. The government rarely has to step in. I watch my 7-year-old “share and be fair.”
Can’t we in the music business remember these simple rules? We will all be better off if we do.
1. Make your song memorable and easy to learn.
There are several ways to create a memorable song. First and foremost, is there something about it that sticks in the listener’s mind and sets it apart? That’s a great place to start. Almost as important, though, is whether the song is easy to learn. If it is, then lots of things can happen. Not only can music fans pick up on it and sing along, but an artist is much more likely to connect with it and learn it as well. Lyrically, making sure your rhyme scheme is consistent in the verses and that your choruses are simple and the same from chorus to chorus is a great start. Regarding your melody, while it should be unique and memorable, it also helps to keep away from something so complicated that it’s tough to learn. Remember, in order for an artist to record your song, they have to learn it. The easier you can make that job for them, the better.
2. Your song should be easy to sing.
Just as important as making your song easy to learn is making it easy to sing. Not only will that help the artist in the studio as they’re recording it, but if the song is easy to sing then performing it night after night becomes less of a chore if - fingers crossed - it becomes a hit. A few things you can do to make your songs easy to sing are to keep your lyric more relaxed and conversational and make sure your melody’s range isn’t out of reach for most artists. An early indication that you might be missing the mark is if your demo singer has trouble either with the range or remembering the melody. Demo singers are specifically trained to work in the studio, and if they’re having trouble, then how can you expect an artist - who isn’t necessarily a studio pro - to be comfortable with it?
3. Portray the artist in a favorable light.
Another thing to keep in mind is how the artist will “look” singing your song. As songwriters we sometimes forget that when someone else sings our song, most listeners will just assume that the words the singer is singing come directly from them. Given that you’re hoping an artist will attach their name - and reputation - to your song, it’s that much more important to make sure your song portrays the artist in a favorable light. Does it make them seem like a good person? Do they appear insightful? There are very few artists who want to appear like a lost cause or someone who’s not put together, so keep that in mind when you write. This may sound obvious, but often we write to process feelings of sadness or frustration, and while that may be good for us, it might not be all that interesting for another artist to record.
4. Craft a universal message.
This is a tricky one. While you want to write from a meaningful, personal place, it’s important to keep in mind that it still has to be a universal message that people can relate to. Here’s the good news. Sometimes our most personal stories are the most universal. In other words, by telling the truth in your writing and staying sincere, it’s highly likely that others will relate to what you’re saying.
5. Take a unique angle.
Given that pretty much ALL songs are about love and people, what have you done in yours to make it somewhat unique? Have you taken a fresh look at an old story? These are the kinds of things that you should consider when reviewing any song you’re hoping to pitch to an artist. Also, it’s worth noting that writing about current events is problematic because, often, by the time a song gets cut, those events are old news.
6. Form industry connections.

Let’s assume you’ve taken all of the above suggestions to heart and you’ve got a song that’s ideal for pitching to an artist. Now comes the hard part. You have to do your homework and figure out how to get your song to the people connected to the artist or artists you have in mind. Making sure that you’ve slowly and organically grown your network of industry connections should always be in the back of your mind. By having relationships with managers, producers, publishers and record labels, you’ll be in a much better position to get your songs into the right hands. I know this is much easier said than done, but it doesn’t have to happen overnight. Start attending music conferences, taking trips to music cities like New York, Nashville and Los Angeles and join a local songwriting organization to get the ball rolling.
7. Consider collaborating with an up-and-coming artist.
A widely known “secret” in the industry is that you’ve got a much better chance of getting your song cut if you write it with the artist. What you might not have considered is that before Garth Brooks was “Garth Brooks,” he was just another artist-hopeful, singing demos around town. Given that the likelihood of Garth Brooks writing with an unknown songwriter is next to zero, why not find the next Garth Brooks and write with them early on in their career? I can’t tell you how many songwriting success stories have started that way.
There are three scenarios in which your digital mixer can lead to laziness.
Scenario 1

This one is tempting when you have the same people in the band every week. You create one scene and label it “music” and use it for every song, every week, every month; no EQ adjustments, no effects changes, maybe a volume tweak here or there.
You are mixing just as lazily as when you had an analog mixer and rarely touched the EQ knobs. Congratulations, all your songs have the same generic sound. You might say I’m hyper-sensitive to this form of live mixing. You’d be right.
Scenario 2

You create a good baseline mix for the first service with the mindset you will improve your mixes (saving the scenes) through your multiple services so the last service will sound the best. After all, you get the most people at the last service.
You are doing a huge disservice to the congregation and missing the point of your job. The first service should sound the best it can sound; the people attending it are no less important than those attending the last service. Eventually, if you keep this up, you’ll start hearing comments like “the first service never sounds as good as the last service.” Is that what you want to hear?
Scenario 3: During the worship practice / sound check, you spend your time creating great song mixes. You save each song as a digital scene so that come service time you only have to recall the scene for the song.
Your service-time mix suffers because the acoustic properties of the room have changed now that it’s full of people. What sounded great in the empty sanctuary now sounds only so-so. It’s better than being in scenario 1 or 2, but it’s not where you should be.
The good news is you know the importance of distinct song mixes, but you’ve allowed yourself to be lazy and missed out on sculpting those mixes into even better mixes for each service. Not only do the room’s acoustic properties change when it’s full of people but, as I mentioned in another article, you are mixing for the moment, and you can’t completely pre-mix for that moment. Your mixing needs to be somewhat reactive to the congregation as the mood fits. But, I digress.
Fight the Lazy!
Let’s break this down into steps:
1. Create different song mixes.
Does the worship music on your iPhone all sound the same? No. Don’t use the same scene for all of your songs. It’s ok to have a baseline mix, but consider it a starting point.
If your musicians change from week to week, then the baseline might not be possible. It depends on how the bands are grouped and the functionality of your mixer. Some mixers can save channel settings separately while others save all the channels together as one scene.
Bottom line, songs are mixed differently and you need to work with the same mindset.
2. Plan out how you will use scenes.
You can use them per song, per element, or per group of elements. For example, I use around five scenes per service. Each scene covers one song plus any elements before or after it where a logical break occurs. For instance, if the last song of the worship set is concluding with its final notes ringing out, it’s ok if the speaker starts talking while the same scene is up, so the music settings stay as needed. There are all sorts of ways of arranging scenes. Take your schedule and break it out into logical groups.
3. Plan your first service as if it were your last.
Even if you only have one service each weekend, put your energy into creating the best song scene mixes possible during your worship practice and your sound check. By the time the first service rolls around, you should know you have done your best. You should expect to make some minor changes to your mixes but those are simply part of live mixing.
Treat the first service (every service, really) as if it were the last service you will ever mix. You want it to be the absolute best.
The Take Away
The ability to recall scenes takes a large burden off your shoulders. You can get better individual song mixes and, in the case of multiple services, you can create a consistent sound from one service to the next. This is all good but it doesn’t mean that you can stop mixing.
The mix that worked during practice might need tweaking when you hear it with a room full of worshippers. In the case of multiple services, after reflecting on the first service, you might discover you could improve the next one by modifying a vocal mix. And let’s not forget mixing for the moment. Recalling scenes is great, but don’t let those saved settings define your mix.
Mixing is the second stage of the recording process, coming after tracking is complete. In a basic sense, mixing isn’t that hard to understand: it involves blending all of your separate tracks into one stereo pair suitable for listening to on any radio, Walkman, iPod, or car stereo. Mixing also involves adding effects to polish up the sound.
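At its core, that “blend into one pair” idea is just addition. Here is a minimal sketch (a hypothetical illustration, not any particular DAW’s internals) that sums mono tracks, applies a per-track gain in dB, and normalizes so the result doesn’t clip:

```python
import numpy as np

def mixdown(tracks, gains_db):
    """Sum equal-length mono tracks into one track, with per-track gain in dB."""
    out = np.zeros(len(tracks[0]))
    for track, gain_db in zip(tracks, gains_db):
        out += track * 10 ** (gain_db / 20)  # convert dB to linear amplitude
    peak = np.max(np.abs(out))
    if peak > 1.0:                           # keep the summed bus from clipping
        out /= peak
    return out
```

Real mixing adds panning, EQ, and effects per track on top of this, but every mixer, analog or digital, is doing this summing job at its heart.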
The art of blending disparate sounds is very difficult. When you hear an acoustic band, the blend is taken care of for you; the reverberation is natural from the room. As soon as you start close-miking instruments, reproducing the sound in a realistic fashion becomes a challenge. While you might not know how to make a good mix yet, you certainly know a bad one when you hear it.
Mixing is all about perception. Can you perceive that this group of instruments really sounded this way? The best mixes sound natural, and they try to replicate how those instruments should blend together. If the mixing engineer has done his or her job, nothing out of the ordinary should be noticeable. That is, nothing catches your ear as “unnatural” or out of place. As you know, it’s easy to spot a bad mix; there’s just something “not right.”
Many engineers talk about hearing in multiple dimensions. Understanding those dimensions can help you figure out what’s going on in a good mix. Here are the basic dimensions you’ll encounter in mixing and what it means to work with them:
Foreground/background: Bringing sound forward and backward in a track using volume
Depth: Using effects to create the feeling of closeness or distance
Up and down: Using EQ to help tracks sit in their own distinct part of the frequency spectrum
Side to side: Placing sounds from left to right using the pan controls
Without oversimplifying the process too much, these four dimensions give you an idea of what goes into a mix. Now let’s look at what goes into working with these dimensions so that you can start mixing like a pro.
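To make one of those dimensions concrete: the side-to-side placement done by a pan knob is simple math under the hood. This sketch assumes a constant-power pan law, one of several conventions mixers use:

```python
import numpy as np

def pan(mono, position):
    """Constant-power pan. position: -1.0 = hard left, 0.0 = center, 1.0 = hard right."""
    theta = (position + 1.0) * np.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    left = np.cos(theta) * mono             # left gain shrinks as we pan right
    right = np.sin(theta) * mono            # right gain grows as we pan right
    return left, right
```

At center, each side gets about 0.707 of the signal, so the perceived loudness stays roughly constant as you sweep across the stereo field.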
Guitars can energize a mix or absolutely destroy it. I’ve watched rookies stare dumbstruck at the mixing console because they didn’t know how to handle mixing two guitars. Mixing two guitars is a simple process in which you do the same thing to each guitar channel EXCEPT with one added step.
First of all, you MUST identify the role of each guitar in the song. A guitar is either going to play rhythm or lead. Take the two guitars in the song and identify the role of each.
Let’s say, in this example, there is a rhythm guitar and another guitar that will play rhythm with the occasional lead elements in the song. Let’s make them both electrics.
Electric Guitar Number 1
Start with the rhythm-only guitar and go through these three steps:
1) Roll off the low end. Drums and bass should be working the low end so let’s clean up this first guitar by using a high pass filter. There is no perfect frequency cutoff for a HPF so I can’t say, “enable it at exactly 104 Hz.” Start at around the 100 Hz point and slowly sweep the HPF frequency up until you hear a better sounding low end from the overall mix. I’ve used it as high as 280 Hz. Don’t worry about the number, listen for the right spot. Much of it depends on what the electric has going on; mucho distortion, overdrive, pick a flavor. If you only have a fixed-point HPF then enable it.
2) Remove the bad. The old rule of cut-first comes into play here. Take a sweep-able EQ point, cut it around 6 dB, then sweep from 250 Hz up through 4 kHz. At some point, the guitar’s place in the mix will sound better because you’ve notched out the offending frequencies. Hey, I don’t know why there are usually offending frequencies in everything that produces sound; there just are!
3) Give it presence. Take another sweep-able point and boost 4 dB with a moderately wide frequency range (adjust the Q value). Sweep it from around 1 kHz to around 3.5 kHz. Likely, you’ll find a point in the middle of that area where the guitar comes to life.
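For readers who like to experiment offline, the three steps above can be sketched with textbook biquad filters (coefficients from the widely used Audio EQ Cookbook formulas). The frequencies, gains, and Q values below are just the starting points suggested above, meant to be swept by ear, not magic numbers:

```python
import numpy as np

def biquad(x, b, a):
    """Run signal x through one normalized biquad section (direct form I)."""
    y = np.zeros(len(x))
    x1 = x2 = y1 = y2 = 0.0
    for n, xn in enumerate(x):
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y[n] = yn
    return y

def highpass_coeffs(cutoff_hz, fs, q=0.707):
    """Step 1: high-pass filter to roll off the low end."""
    w0 = 2 * np.pi * cutoff_hz / fs
    alpha, cosw = np.sin(w0) / (2 * q), np.cos(w0)
    b = np.array([(1 + cosw) / 2, -(1 + cosw), (1 + cosw) / 2])
    a = np.array([1 + alpha, -2 * cosw, 1 - alpha])
    return b / a[0], a / a[0]

def peaking_coeffs(center_hz, gain_db, q, fs):
    """Steps 2 and 3: a cut (negative gain_db) or a presence boost (positive)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * center_hz / fs
    alpha, cosw = np.sin(w0) / (2 * q), np.cos(w0)
    b = np.array([1 + alpha * A, -2 * cosw, 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * cosw, 1 - alpha / A])
    return b / a[0], a / a[0]
```

Sweep `center_hz` just as you would sweep the knob on the console: a negative `gain_db` gives the step-2 cut, a positive one gives the step-3 presence boost.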
Guitar Number 2
Perform the above steps on this guitar as well.
Now for the tricky part! The result of this work is two guitars that sound great on their own. But together, they just aren’t ready for the dance.
Remember the guitar roles; one is rhythm and the other is rhythm / lead. Which should stand out between the two? The answer: the second guitar.
Look over the EQ settings for the second guitar. You want this guitar to be heard more clearly during the lead portions of the song. You DON’T want it fighting for space.
Look at the area where you boosted the presence for the second guitar. Let’s say it’s at 2.3 kHz. Jump back to the first guitar and apply a cut into that same area. If you are running an analog board, try this with a sweep-able mid-range control. If you only have one mid-range sweep then use it. The goal should be to allow the lead guitar to shine through.
Listen to the result. For added separation, boost a bit more presence for the second (lead) guitar. In the end, trust what your ears tell you is right – unless you’re tone deaf in which case you are in the wrong line of work.
This process will also work with acoustic guitars. Electrics seem more daunting but don’t fret it. [rim shot]
The Take Away
There’s a dance between like instruments. The one who leads can change from one song to the next. The key is allowing that guitar to lead the dance. So in summary, roll off the low end, get rid of the nastiness, add presence, and then provide separation by giving dominance to the leading instrument.
For the professional songwriter, the hook is perhaps the single most important part of the song. Usually the title, or contained within the title, the hook is the essence and embodiment of the song’s central theme or message. Sometimes the hook is a common phrase; other times a phrase is twisted into a play on words, like “Not on Your Love” or “Lifestyles of the Not so Rich and Famous.”
Hooks cannot be copyrighted. Any songwriter can take any hook and use it. Still, using a recent hit for your hook is usually a bad idea: It’s unoriginal and may confuse listeners. A hook from thirty years ago might be a great start for a new song, especially in a different genre. Some hit songs that share the same hook are “I’m Sorry” (Brenda Lee, 1960; John Denver, 1975), “My Love” (Petula Clark, 1966; Paul McCartney, 1973), and “Venus” (Frankie Avalon, 1959; The Shocking Blue, 1970; and Bananarama, 1986).
A scan through the charts will reveal that there are dozens of different kinds of hooks. However, some kinds seem to have better luck than others. Here’s a list of some of the types of hooks that reappear in hit songs decade after decade. Included are a few examples of each kind and the years they charted:
Common phrases: It’s Now or Never (1960), Tossin’ and Turnin’ (1961), I Heard It Through the Grapevine (1968), That’s What Friends Are For (1986), Miss You Much (1989).
Names: Tammy (1957), Big Bad John (1962), Hey Jude (1968), My Sharona (1979), Jack and Diane (1982).
One-word hooks: Don’t (1958), Yesterday (1965), Escape (1979), Jump (1984), Faith (1987).
Dances: The Twist (1960), The Hustle (1975), The Safety Dance (1982), The Electric Slide (1991), Watermelon Crawl (1995).
Mini-trends like “the” songs (“The Letter,” “The Chair,” and “The Ride”) and “un” songs (“Unforgettable,” “Unbreakable,” and “Unbelievable”) pop up often enough to make them potential hook-hunting ground.
A quick look at the fifty all-time top singles in Billboard magazine shows that well over half of all hit hooks revolve around a central theme of love (first love, passion, love lost, and other variations), but a hook can be about anything. Don’t believe it? Also in the top-fifty singles are songs about dancing in jail, racial harmony, cold-blooded killers, poverty, a famous battle, the plight of the working man, and astrology, just to name a few.
As a recording engineer in training, you’ll have to know a little bit about sound waves and electricity, because they are pivotal to understanding recording. In this chapter, you’ll see why it’s impossible to separate music from science — the terminology is everywhere, impossible to escape. Have no fear!
Sound is emitted by a source and travels in waves that vibrate back and forth pushing air molecules around them. The sound waves create sound pressure (volume) as they push through the air molecules, which make our eardrums vibrate and pick up sound. Without a medium for sound waves to travel through, there is no sound.
The speed at which a sound source (a monitor speaker, for example) vibrates tells you the frequency of the sound that comes out. If a speaker is playing a perfect A (440Hz) tuning note, such as the one found on metronomes and tuners, it is vibrating back and forth 440 times a second. The faster the source vibrates, the higher the sound, or pitch, you hear; the slower it vibrates, the lower. Sounds are rarely made up of just one frequency; in fact, many frequencies are present in any one sound. The science behind this is beyond the scope of this book, but just understand that when you play or sing one note, there’s more than one frequency present.
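You can watch the “vibrations per second” idea happen in a few lines of code. A minimal sketch (the 48 kHz sample rate is an arbitrary choice):

```python
import numpy as np

fs = 48_000                          # samples per second
t = np.arange(fs) / fs               # one second of time points
tone = np.sin(2 * np.pi * 440 * t)   # concert A: 440 back-and-forth cycles

# A 440 Hz sine passes through zero twice per cycle: 880 times per second.
crossings = np.count_nonzero(np.diff(np.sign(tone)))

# With one second of audio, each FFT bin is 1 Hz wide, so the peak lands at 440.
peak_hz = np.argmax(np.abs(np.fft.rfft(tone)))
```

A real guitar or voice playing the same A adds harmonics at 880 Hz, 1320 Hz, and so on, which is why a real instrument’s spectrum shows more than one peak.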
You might be saying to yourself, why do I have to know this? That’s a legitimate question, and here’s the short answer: Understanding frequency and how sound works is essential to mixing and to nearly all effects. We don’t just talk about “low sounds”; you’ll see on your EQ knob that “low” might have “80Hz” next to it. Your microphone might have a “100Hz roll off” on it. You might read an article about boosting the 10kHz band to improve presence and clarity. Wouldn’t you like to know what all that means? Simply put, the audio community, of which you are now a full-fledged member, deals in terms like hertz and kilohertz, so you should simply learn what they mean to avoid confusion!
Ranges of Sound
Let’s talk a bit about the ranges of sound you might be used to. Your stereo might have a bass and treble knob. These knobs are used to boost or cut a certain range of frequencies. The specific range of frequencies involved will differ from system to system, but this is generally known as equalization (or EQ). EQ is simply the boosting or cutting of certain frequencies of a sound. The most basic EQ you will encounter is a three-band EQ on a mixer (either outboard or virtual).
There are a few different types of effects that are utilized in studios. The first, equalization (EQ), isn’t really an effect per se, but for our purposes, we’ll lump it in with the rest. EQ comes in many flavors, from a simple three-band EQ found on many 4-tracks and mixers to elaborate parametric equalizers that give a great deal of control over individual frequencies. Dynamic processing involves effects that control the volume or dynamics of sounds. Effects like compression, limiting, gating, and expanders all control the volume of tracks.
Special effects usually encompass delay and its many incarnations, such as tape delay and multitap delay. Modulation effects like chorus, phasers, and flangers change the sound by mixing a delayed signal in with the original; the delay time, or the character of the delayed signal, is varied over time, and the blend of the two signals is the characteristic sound of modulation effects. Reverb is the most important effect to learn to use well. Every sound we hear has some reverb. Reverb, short for reverberation, occurs naturally when sound waves reflect and bounce off surfaces. The larger the room, the longer it takes the sound to come back to your ears, giving you the feeling of space and distance. Reverb is such an important part of acoustic sounds that when we record without it, the result sounds quite strange. Reverb processing can emulate that sound, giving you the feeling of space.
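To make the delay family concrete, here is a toy feedback delay in code: a delayed copy is mixed back with the dry signal, and a feedback control recirculates the echoes. (A sketch for illustration only; the parameter values are arbitrary, not recommendations.)

```python
import numpy as np

def feedback_delay(x, delay_samples, feedback=0.4, mix=0.3):
    """Mix a delayed copy of x back in; feedback recirculates the echoes."""
    buf = np.zeros(delay_samples)  # circular buffer holding the delayed signal
    y = np.zeros(len(x))
    idx = 0
    for n, xn in enumerate(x):
        delayed = buf[idx]                  # the signal from delay_samples ago
        buf[idx] = xn + delayed * feedback  # write input plus recirculated echo
        idx = (idx + 1) % delay_samples
        y[n] = (1 - mix) * xn + mix * delayed
    return y
```

Chorus, flanger, and phaser effects work on the same principle with a very short delay that is modulated over time, while reverb is effectively thousands of such echoes arriving from every surface in the room.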
Hardware vs. Software
Years ago, effects were done exclusively by outboard effects units that were rack mounted. Certain effects processors were multifunction units and could produce reverb, delay, and other effects all within one unit. Other gear was more specialized to one job, like a compressor for instance. The great part about outboard gear is that it sounds really good. Many studios still use them instead of computer plug-ins because the engineers feel the sound is better.
Early outboard gear used analog technology to produce the effects. As technology improved, manufacturers turned to digital signal processing (DSP) chips to improve the quality of the sound. The digital-effects processor was born. It was only a matter of time before a computer was able to do the job of DSP. Indeed, that day has come. Now, instead of needing floor-to-ceiling racks of gear, you can re-create all the effects you want through software. This is where the home studio became powerful. No longer do musicians need all the space and expensive gear! Now, through software, a computer (or a studio-in-a-box) can do it all.
THE ERA OF HI-RES DIGITAL AUDIO IS HERE
I’ve been watching the rise of high-end digital audio recently, and as I’ve put the individual pieces together over the past year or so, a picture has emerged that inspires me to claim that we are entering a new era of hi-res digital music. We are, I believe, at the edge of a shift where compressed audio (MP3s, AACs, etc.) will be as outmoded as 8-track tapes, and new hi-fi consumer playback systems playing uncompressed digital files will soon be the norm. Evidence that we’re entering this era comes from a number of different sources, most of them efforts to get hi-res digital into the market.
HI-RES PLAYBACK DEVICES
On the device side Neil Young’s PONO system has led the way and has been a great advocacy platform, but Sony has just announced that they’re building hi-res digital playback devices and offering their catalog in hi-res, too. Other commercial devices have recently emerged: Astell & Kern’s hi-res portable player, a $200 hi-res portable digital player from FiiO, Teac’s affordable hi-res DAC with integrated amplifier, the ubiquity of Peachtree Audio’s integrated amps with hi-res DACs, various DSD players, compact DACs meant to work with portable computers, and the list just goes on. Almost all of these devices are brand new to the market, aimed not just at audiophiles but at listeners of all ages in all areas of musical interest.
To be clear, these players will, of course, be of varying quality, but they are all designed to vastly outperform an iPod, laptop or phone: better clocks, much better D-A converters, better analog stages and headphone amps, better power supplies to run it all. Better playback systems will also play lower-res files much better, making backward compatibility with existing digital music libraries not just possible, but an improvement.
EXPENSIVE HEADPHONES, THE GATEWAY DRUG
As we know, on-ear headphones have also become ubiquitously available and - remarkable in a market where theft was the dominant transaction - people are willing to shell out a lot of money for them. While we can groan about the fact that many headphones have become more a fashion statement than a hi-fi investment, the very notion that there is better sound to be had, and that it might cost some money, sends a strong message into a market that was previously dominated by the steal-it-and-screw-it mentality. That’s a big deal. The very availability of expensive headphones is like saying, “Slow down and really focus on the sound for a moment. Just listen.” Anyone who walks into an Apple store today is confronted with a selection of headphones that include stunningly revealing offerings, like Bang & Olufsen’s new B6 headphones (their first headphone offering in over two decades!). Such headphones at Apple Stores make exposure to hi-fidelity experiences a mainstream offering for the first time in a long while.
On the file side, we’re seeing more and more albums being converted to hi-res digital formats, and here’s where the discussion gets really interesting. Neil Young’s PONO format is going to offer a provenance guaranteeing that their catalog of hi-res files derives from direct transfer from studio masters - often analog master tapes. Sony will be offering a similar guarantee, and many smaller labels are now doing the same thing. Hi-res digital sales sites such as HDTracks gather together the hi-res offerings from around the world into an easy-to-use store and offer information like the following:
"THE GRATEFUL DEAD STUDIO ALBUMS WERE MASTERED FROM THE ORIGINAL MASTER TAPES IN AIRSHOW STUDIO C, BOULDER, CO. TRANSFERS WERE DONE AT 192KHZ / 24 BIT FROM AN AMPEX ATR WITH PLANGENT REPLAY ELECTRONICS TO A PRISM ADA-8XR A/D CONVERTER INTO A SOUNDBLADE WORKSTATION."
Not everyone is going to care that much about the individual pieces of gear used - and Grateful Dead fans have been notoriously prone to widespread audiophilia - but this kind of information will likely become part of the credits on hi-res digital releases, because people will soon want to know that they’re buying the best possible file. No one wants a DSD transfer of an MP3; when possible, a direct transfer from the original masters will be the thing people want.
But why? The appeal of that direct transfer of a studio master is fascinating, when you think about it in terms of fan appeal rather than audio quality. Music fans have always wanted to be as close to the artists as possible - front row center seats, an autograph, candid interviews, etc… Fans love to be close. And with hi-res digital there is the sense that the listener is in on something intimate, something usually reserved for those lucky enough to enter the sacred sanctuary of the recording studio where the records were made. We can argue about fidelity and hi-res all we like (I’m bracing myself for the comments to follow), but - regardless of whether the digital code has any actual relationship to what happens in the studio - from a marketing perspective music fans want access to their favorite artists and they’ll pay for it! Hi-res digital files might be as close as some of today’s fans can get to feeling like they’ve snipped a lock of hair off of a Beatle.
DELIVERING HI-RES DIGITAL MASTERS
For those of us making records, the game is changing as concerns about forward compatibility force us to think about hi-res digital releases in real terms for all genres. Hi-res releases were previously dominated by classical and jazz recordings, but I predict that people working on records in all genres will be asked about hi-res compatibility. Audio workers will have much to learn, and there are many angles to examine. Questions like: What will we do when our source files are lo-res? What are best practices for converting between formats? What’s the best way to archive our work to future-proof it? What are the different formats clients will be demanding? Do I invest in new gear to be ahead of the curve, or wait?
Truth be told, as much as I am a big, big supporter of the hi-res digital movement, I have a lot to learn about how best to deliver it in the studio. This is one of the reasons I’ve returned to archiving final mixes onto analog tape that can be converted later. Analog seems endlessly future-proof (Steve Albini famously agrees).
HI-RES FORMAT WAR - DSD VS. PCM
And, as with all moments of format change (e.g., the move to stereo, the introduction of portable tapes, the battle over CD standards, etc.), there are competing formats. In this case it seems to be down to DSD (Direct Stream Digital) and PCM (Pulse-Code Modulation). Both formats have their critics and champions, but for the time being the reality seems to be that we’ll need both, because of the need for backward compatibility with existing digital music libraries, which are PCM-based. And there’s much talk of how DSD and PCM play together: the fact that nearly all DAWs operate on PCM platforms, whether one must convert to PCM to edit DSD, and how certain playback devices manage (and even convert between) the two formats. It’s a bit of a mess, but I think the market will sort it all out soon enough, with one format likely to win out as VHS did over Beta.
But I’m not nearly as interested in the details of the format battle as I am in the fact that it’s happening at all. The format battle is another strong indicator that we’re entering into The Hi-Res Digital Era.
Streaming is a likely replacement for file ownership. Hi-res files are big and probably not going to stream effectively any time too soon, but probably sooner than we imagine. The idea that streaming could become a hi-fi experience is very promising for many reasons, and might help bring us around to a place where streaming services are valued as highly as the works that stream through them deserve to be valued. Also, it’s important to remember that even a streaming audio file (of whatever quality and resolution) will sound better if handled by a hi-res, hi-quality playback system.
There are a number of people who would contend that hi-res digital is a needless ruse - that 16bit/44.1kHz CD technology was the pinnacle of what’s needed in digital to deliver music in the best format possible. There are also a slew of double-blind A-B tests that show that people can’t hear a difference between data-compressed audio, CD quality and hi-res anyway. I really don’t want to engage that discussion, but I also know that I must, at least preemptively. So, at the risk of derailing the discussion, let me quickly outline my thoughts on this:
Most importantly: hi-res digital file formats are only one part of the equation. We need great playback systems, too - and, YES!, CD-quality files can sound amazing on a great player. Much of what will make the Era of Hi-Res Digital wonderful will be the availability of great playback systems, beautifully converted masters and increased awareness of fidelity.
I am highly suspicious of blind A-B tests as the be-all and end-all investigation (see this for more). I believe that even when we can’t hear something consciously, small differences can affect us on a subconscious level, and significantly, over long periods of time. This is a wholly different take on how aesthetic experiences and our senses work than is supported by the blind A-B paradigm. It hasn’t been properly tested except once, and not fully (read this for a description of the study). Such tests are expensive and time consuming, unlike blind A-B tests, whose prevalence might be merely due to convenience and low cost (the parallel to the MP3 is not lost on me!).
The equipment used in these A-B tests (often a “subject’s” personal computer) needs to be revealed, especially the playback systems used because playback systems can mask differences.
There are many really well respected people out there who believe that there are differences in the time-domain (how sound travels) not the frequency domain (which sounds travel) and that we need further investigation into this area of interest to understand better why more than a few people (and many with expert ears) prefer hi-res digital formats.
Suffice it to say, there is much to investigate going forward and I am broadly asking that people catch their breath, slow down, and begin with a humbler sense that there is still much to learn about digital audio. I also ask that more people openly question the scientific paradigms being employed to test our perception of hi-res digital audio.
I believe that records should sound amazing and cost a lot of money. I believe the cost of records should have risen with the rate of inflation, not fallen. I know that the Big Bad Labels are going to try to sell their back catalog over again in a new format (yes, this has happened before). I also understand that many of the playback products and audio files will come out under the marketing-friendly moniker of Hi-Res Digital and will not meet quality standards (my cheap plastic record player as a kid said “hi-fidelity” on it, as did many crappily pressed records I owned). Yet I believe that Hi-Res Digital offers the record industry the opportunity to raise the fidelity of listening experiences on a mass scale (just look at HDTV), and I believe that hi-res digital can put a more appropriate price tag back onto records.
Powerful aesthetic experiences move people and can cause people to value what they’re experiencing far more. Three-dimensional, holographic experiences of well-made soundscapes can transport people, absorb them, inspire them, relax them, energize them, fascinate them, raise new questions and ideas - all the good things art is meant to do. Again and again I’ve watched people who’ve never heard great sound enter my mix room and say, “Holy Crap! That’s amazing!” So many close their eyes and sink into the sound, almost instinctively.