5 Royalty Streams Every Indie Artist Should Know

March 26, 2018
This post originally appeared on Repost Network's blog.
Today, music fans can easily access music from their favorite artists or discover new artists to fall in love with, which puts major established artists in direct competition with their up-and-coming indie counterparts. And the music industry is changing for the better as a result: the Recording Academy now considers music released on free services for GRAMMY Awards, and Billboard now counts YouTube and SoundCloud streams toward its charts.
Innovation in technology has made it possible for any indie artist with decent enough production tools and access to the Internet to record and release new music at any time. And with thousands of artists pumping out new music, it is no wonder that the industry has grown to over one million new tracks entering the global music market every month.
Each of these tracks begins earning royalties from its first play on any of the 400+ digital music services and 3,000+ webcasters operating around the world. All of these royalties, billions of dollars' worth, flow through a complex network of pipelines into various collection buckets, with the ultimate goal of trickling down to the appropriate music creators and rightsholders. While this sounds straightforward, for a number of reasons it is far from a smooth process, and millions of dollars in royalties never make their way to the music creators and rightsholders to whom they are due.
Part of the reason starts with you, the music creator. It is especially important for independent artists to understand the various income streams that your releases generate and the ways in which you must be set up to collect your royalties.
Here is an awesome infographic created by the Future of Music Coalition that visually breaks down how creators are compensated. Below it, I highlight five royalty streams that every indie artist should be set up to collect.
Future of Music Coalition “How The Money Flows Back…” Infographic.
If you plan to release music digitally, you should be aware of and set up to collect all of the royalty streams that your music earns. Your music earns royalties for the use of two different copyrights. The first is the copyright for the composition (the song). The second is the copyright for the sound recording (the master). These two copyrights earn royalty streams that are collected and paid out by different sources to different income participants, as explained below.

Royalty Stream 1: Performance Royalties for Compositions

With few exceptions, virtually all uses of your composition earn performance royalties. Performance royalties are earned when your composition is played on digital radio-like services (e.g. Pandora), when your composition is accessed and played through on-demand streaming services (e.g. Spotify), and when your composition is performed in venues, bars, and restaurants. All of these companies have performance licenses from one or more performing rights organizations (PROs). In the United States, ASCAP, BMI, SESAC, and Global Music Rights are the PROs that issue blanket licenses for the performance rights in compositions to digital music services. In return, these services pay royalties to the PROs. The PROs then pay 50% to the songwriter(s) of the composition and 50% to the publisher(s), in accordance with the publishing splits reported to the PRO by the copyright owners. In order to collect performance royalties, you must join a PRO and register your compositions (your songs) and the associated ownership splits (for example, 4 writers might have equal ownership (25% each) or varied ownership (Writer 1 - 25%, Writer 2 - 50%, Writer 3 - 12.5%, Writer 4 - 12.5%)) with the PRO in a timely manner. One of the reasons music creators and rightsholders do not receive the performance royalties that their compositions earn is that they have not joined a PRO or have not registered their songs with their PRO.
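The split arithmetic above is worth seeing laid out. Here is a minimal sketch of how a PRO's 50/50 writer/publisher division combines with the registered ownership splits; the names, dollar amount, and the assumption that one writer self-publishes are all made up for illustration:

```python
# Hypothetical sketch of a PRO payout: 50% of the royalty goes to the
# writer side, 50% to the publisher side, and each side is divided
# according to the splits registered with the PRO.

def pro_payouts(total_royalty, writer_shares, publisher_shares):
    """Split a performance royalty between writers and publishers."""
    writer_pool = total_royalty * 0.50
    publisher_pool = total_royalty * 0.50
    payouts = {}
    for name, share in writer_shares.items():
        payouts[name] = round(writer_pool * share, 2)
    for name, share in publisher_shares.items():
        payouts[name] = payouts.get(name, 0) + round(publisher_pool * share, 2)
    return payouts

# The varied-ownership example from above: four writers, with Writer 2's
# (hypothetical) publishing company collecting the entire publisher side.
writers = {"Writer 1": 0.25, "Writer 2": 0.50, "Writer 3": 0.125, "Writer 4": 0.125}
publishers = {"Writer 2 Publishing": 1.0}
print(pro_payouts(1000.00, writers, publishers))
```

Notice that a writer with no publishing deal who never registers a publishing entity (or a publisher designee) leaves the entire publisher half of the pool uncollectable, which is exactly the "unclaimed royalties" problem described above.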

Royalty Stream 2: Mechanical Royalties for Compositions

Mechanical royalties are earned when your composition is reproduced and distributed in phonorecords (a medium in which a sound recording is stored). This includes compositions embodied in sound recordings stored in physical formats (CDs, vinyl, cassette), MP3 permanent downloads (e.g. iTunes), and interactive streams (e.g. Spotify). In the digital music sector, streaming services secure mechanical licenses either directly from copyright owners or by utilizing the compulsory license provided by copyright law. Regardless of how they secure their mechanical licenses, the major services pay mechanical royalties to the Harry Fox Agency (HFA) and Music Reports Inc. (MRI), who then pay the publishers of the composition. One of the reasons music creators and rightsholders do not receive the mechanical royalties that their compositions earn is that they have not registered their songs with HFA or MRI, who help digital music services secure mechanical licenses. For unsigned indie artists without a publisher, collecting can be much more difficult because HFA only represents eligible publishers who've affiliated with them. MRI is a rights administrator and will issue notices to copyright owners if their digital music service clients intend to utilize the copyright owner's composition in a manner that requires a mechanical license. Spotify pays HFA mechanical royalties for the compositions used on its platform. Amazon Music pays MRI mechanical royalties for the compositions used on its platform. (Note that in the United States, iTunes passes the mechanical royalty to the distributor, who then pays the label. If you're an unsigned artist, then you receive the income since you are your own label. Outside of the United States, iTunes and on-demand services such as Spotify pay mechanical royalties to the mechanical licensing society in each territory. In order to capture these foreign mechanical royalties, a publisher or administrator must affiliate with and register the compositions with the foreign mechanical collection society.)

Royalty Stream 3: Permanent Download Royalties for Masters

A permanent download is generally a sales transaction through a digital retail store (e.g. iTunes). This income is passed along to the distributor, who then pays the label (less any applicable commissions). If you're an unsigned artist, then you receive the income since you are your own label.

Royalty Stream 4: Interactive/On-demand Streaming Royalties for Masters

Just like permanent downloads, interactive/on-demand streams (e.g. Spotify) of sound recordings generate master-use royalties that are passed along to the distributor, who then pays the label (less any applicable commissions). If you're an unsigned artist, then you receive the royalties since you are your own label.

Royalty Stream 5: Non-Interactive Streaming Royalties for Masters

Unlike royalties for permanent downloads or interactive/on-demand streams of sound recordings, non-interactive streaming royalties are not paid to your distributor. Webcasters and digital services that broadcast recordings over the Internet (e.g. Pandora, iHeartRadio), cable (e.g. Music Choice), and satellite (e.g. SiriusXM) in radio-style programming, where the end users/listeners have limited to no control over the selection of music (non-interactive), pay a royalty for the digital performance of sound recordings to SoundExchange. SoundExchange then pays out 45% of the royalties to the featured performers on the recording, 50% to the copyright owner of the master recording, and 5% to the AFM & SAG-AFTRA Intellectual Property Rights Distribution Fund for background vocalists and session musicians. One of the reasons music creators and rightsholders do not receive the non-interactive royalties that their masters earn is that they have not joined SoundExchange or have not registered their tracks with SoundExchange.
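The 45/50/5 split above can be sketched in a few lines; the $200 royalty figure is made up for illustration:

```python
# Sketch of the statutory split SoundExchange applies to non-interactive
# digital performance royalties: 45% to featured artists, 50% to the
# master owner, 5% to the AFM & SAG-AFTRA fund for background vocalists
# and session musicians. The dollar amount is purely illustrative.

def soundexchange_split(total_royalty):
    return {
        "featured_artists": round(total_royalty * 0.45, 2),
        "master_owner": round(total_royalty * 0.50, 2),
        "afm_sag_aftra_fund": round(total_royalty * 0.05, 2),
    }

print(soundexchange_split(200.00))
```

An unsigned artist who owns their own masters is entitled to both the featured-artist and master-owner shares (95% combined), but collects neither without registering with SoundExchange.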
When you release music digitally, you should be aware of the various royalty streams that your music earns, where those royalties are collected, and how to claim your earnings. Your distributor is one source of income for two of the royalty streams mentioned. To unlock the rest of your royalties, you’d need a capable publisher and a record company or you’d need to stay on top of the administration yourself.
This is where TuneRegistry steps in to help.
TuneRegistry is an all-in-one music rights and metadata management platform for the independent music community. Easily organize and store your song details, recording metadata, credits and ownership splits, and release information in your TuneRegistry account. It's your robust music catalog manager that's accessible online, so you don't have to worry about tracking down emails, sorting through documents in various desktop and cloud folders, losing collaborator contact information, or any of the other messy issues that most indie artists face.
TuneRegistry is your one-stop source for keeping your music catalog in check.

Thursday, March 22, 2018

#StudioTrappin - HOW TO MIX RAP VOCALS -

It’s not uncommon for me to get a message from a new client saying, “I don’t know what mastering is but I’ve been told we need to do it”. It’s also not uncommon for a client to tell me they’re not sure what to listen for after I send them my initial master of their project. Some common problems are thinking a master is “too quiet”, thinking the spacing between songs is not correct (especially when songs overlap/crossfade), or metadata/CD-Text issues.
When I find myself sending the same email response to clients on a regular basis, I know it's time to write an article to direct them to, and hopefully it's helpful to others as well. So, without further ado, here is my two (hundred) cents on things you should be listening for when you receive an initial master from your mastering engineer. I'll also get into things such as what not to expect, and what pitfalls to avoid to achieve a better end result.
This article also gets into why you shouldn't fall into the trap of saying, "It sounds great, but can we make it louder?" The reality is that the louder you go, the less great it will likely start to sound. Ian Shepherd and some others have gone into more detail about the reasoning behind this, and there is no reason for me to expand on their thoughts because Ian, in particular, has already covered it very well, as you can read here. As we progress into a loudness-normalized world, quiet and dynamic becomes the new loud. The extremely loud and crushed masters that people have made over the years, and are still making today, are ultimately just getting turned down in more and more listening situations, which leaves them sounding harsh, small, lifeless, boring, and fatiguing to listen to.
First I’ll start with some technical things to be aware of so you can accurately listen to your master and be sure of what you’re hearing.

Check Your Playback Settings

By far, the most common issue when a client is evaluating a master I’ve sent out is an incorrect setting or configuration somewhere in the playback chain. This could be a hardware and/or software issue. A majority of the time, when a client reports a problem, it’s actually something on their end being set incorrectly. I actively try to minimize any software setting issues by delivering initial masters in DDP format, accompanied with a custom and easy to use DDP Player for their project.
DDP (Disc Description Protocol) is a file format most commonly used to electronically transmit a CD production master to the CD manufacturer. DDP is much more reliable than a physical CD-R master, and because it can be transmitted via the Internet, it is widely used in the mastering field.
Even if clients are not planning to make CDs, I still prefer to send the master as a DDP image to get initial approval from them because the DDP with DDP Player is practically foolproof. The user cannot make any modifications (intentional or accidental) to the audio or the spacing between songs. The DDP Player launches as a standalone application (PC, Mac, or iOS device), and the master can be accurately listened to when it comes to the spacing between songs. There are no internal options and settings such as EQ or loudness normalization, so the sound quality and characteristics can be fairly judged. Aside from listening to it on a computer like a virtual CD player, clients can burn their own CD-R if desired, as well as export each CD track as a WAV or mp3 file. However, as soon as the audio is exported from the DDP Player as a WAV or mp3 file, it is susceptible to any settings in their media player, which can easily go wrong. The same risks apply if the mastering engineer simply delivers a folder of WAV or mp3 files to evaluate.
For example, iTunes has a built-in equalizer, and I know from experience that many people have it turned on without realizing it. Over the years, I've had clients report that they are hearing distortion on their master, only to find out they are listening in iTunes with an EQ boost, which can cause distortion. There is also a "Sound Check" setting in iTunes which normalizes the playback level of all songs to be the same. Typically, this ends up turning down louder songs and turning up quieter songs. Sound Check has its uses, but be sure it's not turned on to start with if you are using iTunes to evaluate masters; later in this article I'll explain when it is a good time to use Sound Check or other loudness-normalizing settings. iTunes also has a "Sound Enhancer" setting that should be left off when doing any critical listening, though you may prefer it for casual/personal listening. I actually had my most critical and detail-oriented long-term audiophile client ask me why his DDP master from the mastering engineer (I didn't master this particular project) didn't sound as good as the WAV files when played in his iTunes app. After some digging we realized he had Sound Check enabled and, to our surprise, he actually preferred the Sound Check processing. The other thing iTunes and other media players can mess with is the spacing between songs.

DDP Is Your Friend, Even If You’re Not Making CDs

Before I started using a custom DDP Player to deliver masters for approval, it was very common for a client to tell me that the spacing between songs was not correct, and that two songs that were meant to be overlapping or crossfaded now had some silence between them and no longer made a smooth transition. Usually the cause of this is the media player adding additional time between the songs/tracks, which can happen on playback as well as when burning a CD. Also, mp3/AAC and other lossy encoders can add a few milliseconds of silence to a song, so it's not uncommon to run into perceived issues that were not present with the WAV masters if you are using these compressed file formats to check your masters.
In general, I discourage using consumer apps to audition your master. Too many things can go wrong, and only one thing can go right, if you're lucky! If you must use a consumer media player such as iTunes, I suggest triple-checking that there isn't an EQ enabled, and that Sound Enhancer, Sound Check, and "Crossfade Songs" are all turned off. Also, make sure that the volume control within the app is turned all the way up, and use your main computer output or audio interface output to adjust the playback level. I mainly refer to iTunes in this article as that is what I'm most familiar with, but whatever media player you may be using, be aware of the settings and preferences so the audio is not being altered in any way. These days, any reasonable mastering studio/engineer can deliver a master to you in DDP format with a custom DDP Player. At the very least, they should be able to send you a DDP image, and HOFA makes a DDP Player you can purchase for roughly $10 (USD).
Back to DDP itself: Within the DDP Player, clients can view and approve all the CD-Text which includes things such as song titles, album title, artist name for the album as a whole, as well as for each track in the case of compilations or when a track has a featured or guest artist. Other info that can be viewed for approval is ISRC codes, UPC/EAN, as well as the number of tracks and their lengths. All in all, it’s a great overview of the EP or album.
Another common cause of client panic attacks is that they burned a CD of the master and the CD info is either incorrect or belongs to another CD entirely. This is because iTunes can't read CD-Text and is probably confusing the burned CD with another CD in the Gracenote database. More on this here.
So for all these reasons, I feel strongly about delivering masters for approval as DDP unless it’s just a single song project for digital distribution only because all these details can be verified and everybody is on the same page no matter their geographical location or computer skills. There are no software settings that can be incorrectly set within the DDP Player and it’s a great way to keep everything intact with the master. Then, the moment the DDP is approved, it can be sent off for CD production if CDs are being made and the mastering engineer can move forward and make all the additional master file formats needed for digital distribution, as well as vinyl and cassette pre-masters if needed.

Check Your Hardware Configuration

On the hardware side, I suggest listening to the master in the place you most often listen to music. Your ears and brain are likely well tuned to this environment whether you realize it or not. Don’t overthink this. It doesn’t have to be a perfect environment, just a familiar one. In many cases it’s your automobile, living room, or perhaps home studio. There could be a case where your listening environment is so misleading that a good sounding recording sounds bad, but that’s an entirely separate article.
I don’t see it as often anymore, but many car stereos and home stereo receivers have a “Loudness” button on them. “Loudness” was a terrible name for this button; it should have been called a “Quiet Listening Compensation” button. The intent of this button was to increase the low frequencies when listening at low levels. It basically gives the bass frequencies a boost, which can be needed if you are listening at softer levels. It was also implemented before the brick-wall loudness wars ensued, back when music had both dynamic range and headroom for things to be additionally boosted. With today’s common mastering levels, it’s not hard to overload a playback system by engaging the loudness button or other EQ, because there is no more usable headroom and something has to give. So my point here is that if you have a stereo receiver with a Loudness button, turn it off unless you intend to use it for its original purpose, which is listening at low levels. Even then, be aware that a really hot master can overload the internal elements of the playback system and cause distortion that is not actually in the master itself. Best to just keep it off for master evaluation purposes.
The logic behind the Loudness button was that when listening to music below 85 decibels, our ears hear less low-frequency content, and listening to music above 85 decibels causes us to hear (and feel) more low frequencies than are naturally there. 85 decibels is, in theory, where our ears have the most natural frequency detection. So, if you listen at a moderate or loud level with the Loudness button engaged, you’re likely hearing (and feeling) the low end boosted far beyond what is really there. If you listen at a very low level, your ears may perceive less low end.
That being said, try listening to your master at roughly 85 decibels. Although not extremely accurate, most smartphones offer a free decibel meter app that can get you close to 85dB. Of course, it can be useful to listen to your master loud and soft, but be aware of how it sounds at roughly 85dB, where our ears are reported to hear the most flat and accurate frequency response. It’s also advised to find some test tones and play them through your system to be sure that there are no technical issues. One bad cable or connection can cause a phase issue, and suddenly things are getting canceled out. This happened recently: one member of a band whose record I mastered said that after mastering, the vocals were nearly gone, the drums were weak, and the guitars were very loud. I knew immediately that there must be a bad cable somewhere in his stereo system, as this has happened to me as well in years past. I knew the center channel was being cancelled out to some degree, causing those perceived changes. It didn’t take long to explain this and have him try it on a different system. I’m just amazed he didn’t notice it when listening to other recordings.
If you have the means to run a few test tones to make sure the left and right channels are correctly wired, all the better. Also, if you are listening on a surround sound system but your music was only mixed and mastered for stereo as is often the case, make sure that at least to start with, you’re listening in stereo mode on the receiver and not in surround mode as the surround encoders can sometimes do strange things. After you approve the stereo mastering, if you prefer to listen with some surround settings enabled, that’s your choice.
I suggest setting any EQ options on your system as flat as you can stand it, if not completely flat. If you have to make any drastic settings to make your favorite recordings sound good, or your master in progress, something is likely wrong with something in the playback chain or with your speaker placement in the room. It doesn’t hurt to stress test your master in progress to see how it responds to a moderate to extreme high or low end boost, but ideally your favorite recordings and your master in progress sound great when the EQ on your system is flat or turned off.

Headphones Can Be Revealing

Listening on headphones can be good for certain aspects of your evaluation process. If you don’t have a good-sounding listening room, headphones can easily remove the room from the equation. Also, listening on headphones is probably the best way to detect any unwanted noises, clicks, or pops in your master. Ideally your mixes should be free of any unwanted noises, but in reality there are often things lurking in your mixes that become more apparent after mastering. These can be things such as artifacts from a bad edit, mouth/saliva sounds, a digital clocking error, noise or hiss from a guitar amp or other recording equipment, the bleed of a metronome through headphones and back into a microphone, or anything else really. When I am mastering a project, I do a quality-control listen with the lights dimmed to give my ears complete focus to catch any stray noises, and then address them as needed with an audio repair tool called RX by iZotope. It’s an amazing tool for removing unwanted noises and anomalies; it’s a lot like Photoshop for audio. Your mastering engineer may or may not be as thorough, and I encourage you to do your own quality-control listen on headphones for any issues that may have crept in after the mastering processing. It’s obviously best to find and report these types of issues before the mastering is finalized rather than catch an issue after production and distribution is underway.
The message here is to make sure there are not any software settings that are altering the sound quality, or other aspects of your master such as spacing between tracks. Be sure your hardware is properly wired, configured, and not inducing any issues. Make sure your playback system is trustworthy with some known great recordings. Get familiar with some of your favorite sounding recordings and productions, how they sound in your main listening space and/or your automobile, and then listen to your master in progress.

Loudness Potential

On the artistic side, be realistic about what you want your master to be. I know it’s very tempting to compare the loudness of your master to another mastered album, but it’s important to know that the “loudness potential” of a song is often not determined in the mastering process. Many things factor into how loud your master will come across, and there is a difference between measured loudness on a digital meter and perceived loudness, which is how the loudness of a song registers with your ear and brain. This is a polite and roundabout way of saying that a master of a crappy mix may read the same as (or even louder than) a master of a great mix on a digital meter, but the master of the great mix is still going to sound louder and have more impact than the crappy mix. It also means that at some point, your mastering engineer runs out of headroom to go louder without inducing harsh artifacts or audible distortion.
Many things tie into the loudness potential of a song, starting with the performance, followed by the recording quality and technique, as well as the characteristics and aesthetics of the mix. It’s a cumulative process, but I bring it up because I’ve done many mastering projects where a client requests a louder version than what I’ve initially done, and then an even louder version, until we reach a point where going louder starts to sound terrible and becomes hard to listen to, and we still never hit their ideal goal. This is when I have to explain loudness potential vs. perceived loudness.
It’s easy for our brain and ears to be fooled into thinking a louder master sounds better, but I’m a firm believer that the practices used to get a measurably loud master can sometimes do more harm than good to the material. If you simply loudness-match two songs by manually adjusting them, you may find that the song that originally seemed quieter and not as good suddenly sounds better. This is because the quieter song usually has more transient detail, as opposed to the louder song, which has most of its peak information shaved off by digital limiting and/or clipping of the waveform. Also, if you are comparing the loudness of your master with another song, be sure you are comparing the loudest section of your song. Many songs start out somewhat softer and get louder. If a song starts out super loud, there is no room for the choruses and bridges to get louder and bigger. If you do feel your master needs to be louder, be sure to confirm and clarify whether you want everything louder overall, or perhaps you just want some of the intros to be less quiet and dynamic while the loud parts are already loud enough.
This is where the Sound Check setting in iTunes can be useful. If you are worried that your master in progress is too quiet and doesn’t sound as good as another track, listen to them both in iTunes with Sound Check enabled. Other media players offer a comparable feature called ReplayGain. What these do is normalize songs to the same measured loudness. This levels the playing field and allows you to decide which one sounds better, not which one sounds louder. An important distinction.
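Under the hood, loudness normalization reduces to a simple gain offset: the player measures each track's integrated loudness (commonly expressed in LUFS) and applies the difference between a target level and that measurement. A minimal sketch, assuming a -14 LUFS target (roughly what some streaming services use; the per-track loudness figures here are invented):

```python
# Rough sketch of playback loudness normalization: the player measures a
# track's integrated loudness (in LUFS) and applies a gain offset so that
# every track plays back at the same target loudness. The loudness values
# below are made up; a real player measures them from the audio itself.

def normalization_gain_db(measured_lufs, target_lufs=-14.0):
    """Gain (in dB) a player would apply to hit the target loudness."""
    return target_lufs - measured_lufs

crushed_master = -7.0    # a very loud, heavily limited master
dynamic_master = -16.0   # a quieter, more dynamic master

print(normalization_gain_db(crushed_master))  # negative: turned DOWN
print(normalization_gain_db(dynamic_master))  # positive: turned up
```

This is why chasing loudness is self-defeating in a normalized world: the crushed master is simply turned down, while the dynamic master arrives at the same playback loudness with its transient detail intact.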

The Loudness War is Essentially Over

Even in the days of vinyl there was a loudness war, but the digital loudness war causes actual damage to the audio material. The loudness of vinyl has a few variables to deal with that digital audio isn’t subject to, such as the running time of the longest side, bass frequencies panned too wide (or phase issues), or vocal sibilance. A loud vinyl record could be cut loud when everything leading up to it was well done. A quiet cut was usually due to some or all of the issues mentioned above. Since digital audio doesn’t have those limitations, we are tempted to chase goals that are unattainable, because digital audio doesn’t have a needle that will jump out of the groove on playback, or lacquer cutting heads that will burn up. We commonly see people try to push loud digital masters beyond the point of diminishing returns. It’s sort of like trying to jam a square peg through a round hole. Even if you’re able to make that happen, the peg has been shredded and destroyed.
With digital mastering, another factor to consider that is becoming more significant by the day is that there are a myriad of music streaming services and ways to consume digital audio and media. Nearly all of them have a form of loudness management or loudness normalization either on by default, or an option for users to turn it on. This means that the loudness of a song is analyzed, and then the playback level is altered so that the loudness of different songs isn’t going up and down as you listen. Unfortunately, there isn’t a common loudness target. Spotify, Apple Music, TIDAL, YouTube etc. all normalize playback to their own specific level. Some services also do “album normalization” and adjust the loudness of an album as one unit, so the quiet and loud songs still have the same dynamic, and some just go song by song which can have odd effects such as making a spare or minimal acoustic song appear as loud as a rock song.
What this means is that no matter how loud you try to master your new song or album, it’s very likely that it will be turned down when the end user listens to it. Ian Shepherd and some others have done more extensive articles on this particular topic, but the main point is that in more and more cases, those that do extremely loud digital masters just end up getting their music turned down more, and those that opted for a more natural and dynamic master end up sounding bigger and louder, and in most cases sounding better because both micro and macro dynamics are retained in the music. We are very much in a transitional period, and I would expect more of a unified normalization target to take shape in the next year or two.
All this is a complicated way of saying don’t be tempted to chase the loudness of an existing mastered record, especially if the musicianship and recording quality are of a much higher caliber than yours, because that may be a losing battle. Be willing to accept the loudness potential of your recordings and just know that in most cases, the playback level will be normalized anyway. So, you should have no fear of sounding quieter than something else. If you’ve got the best mixes you can do, and you’ve hired a trusted and competent mastering engineer who knows what you’re after, it’s likely that he or she is dialing things in as well as possible and has found the sweet spot for your material. When I deliver a master, I try to find what I consider the sweet spot based on how the client answers the loudness question on my project submission form. Roughly 2/3 of the time I get approvals on the first try (when it comes to loudness, that is), 1/3 of the time they say it sounds great but “can you go a little louder,” so we try it, and every now and then they ask me to go quieter after hearing the first version, to which I happily oblige. I would say about half the time I get asked to go louder, we end up going back to where it was or splitting the difference, but hearing it louder helps the client decide what they prefer and realize that the initial version was the sweet spot and their material has reached its effective loudness potential.
On the loudness topic, it’s important to think about where you want the music to sound good. A more natural mastering approach on a stripped-down and minimal recording of an acoustic guitar and vocal, or piano and vocal, may sound good in a quiet listening environment. However, if you try to listen to this in an automobile or other noisy environment, some of the quieter passages might get lost and you’ll really notice the natural but extreme changes in loudness within each song, and from song to song. This isn’t necessarily wrong or bad; it’s just an artistic decision. You may not want to use additional compression and limiting to raise the average loudness because adding those things could make it sound too harsh or unnatural. One example is the original CD master of the “Heartbreaker” album by Ryan Adams. I admit I haven’t heard the remastered deluxe version to know if it’s drastically different from the original. After the first track, most of the songs have sparse arrangements, and the vocals and acoustic guitar work fluctuate from very soft to very loud. It can be hard to listen to in a noisy environment, but if you listen to it in your living room late at night when everything else is quiet, it’s a very powerful recording. You may need your volume control set slightly higher than when listening to the latest Green Day album (in an un-normalized situation), but who cares… Set the appropriate listening level and then enjoy. There is a reason that stereo systems and media players have a volume control. Use it!

What is the Point of Mastering?

The main goal of mastering is to make sure the material sounds as good as possible in all listening situations, though “good” is subjective. You should be able to listen from start to finish without the urge to adjust your playback level or EQ. The songs should have the proper spacing and flow between them, and the titles should all be complete and accurate. If you care about ISRC codes, provide them to your mastering engineer as soon as possible; they can’t be added to a CD master retroactively, as some seem to think. Lastly, do a focused quality-control listen on headphones, with no other distractions, to make sure there are no glitches, unwanted noises, or other anomalies before signing off on the mastering. And avoid the temptation to be loudness-competitive with a song or album that is not in the same genre or class as your recording.
Even on what you might consider a “perfect” sounding master, each speaker and headphone system will have its own voicing. Don’t be too alarmed by minor differences from system to system, as this is essentially unavoidable, but don’t be afraid to point out more extreme variations that you notice. And while I used to assume it went without saying: don’t expect laptop, tablet, or smartphone speakers to reproduce the low end accurately. While your mastering engineer is ideally working on a full-range system, it’s not a bad idea to check your master on a system with a subwoofer or the ability to accurately reproduce low frequencies down to 20Hz. If your project was mixed on smaller speakers, there could be low-frequency problems hiding down there. It’s the mastering engineer’s job to address this as well as possible, but you should also double-check it on your end, on a system that can reproduce it, before signing off.

Vinyl Test Pressing and Reference Lacquers

I also get asked to listen to vinyl test pressings a lot by clients, which I’m happy to do. Turntables add an additional layer of variables compared to a digital master or CD, which at the most basic level either plays or does not play. If you have a cheaper turntable and cartridge that was never properly set up and calibrated, don’t be surprised if a test pressing has distortion, sibilance, or other playback issues. 99.9% of the time when a client tells me there are speed/pitch issues with a test pressing, it’s because their turntable is not playing at the correct speed. Lacquer cutting engineers go to great lengths to continuously calibrate their lathes; it would be extremely rare for them to accidentally alter the speed and pitch of your recording. The catch is that a person may not notice the problem on other records they casually listen to, but because they are more familiar with their own music and recordings, the speed/pitch discrepancy caused by their turntable suddenly becomes obvious. If you suspect an issue, try a few different turntables if possible before panicking. If your turntable hasn’t been properly set up and calibrated, and costs less than a hundred bucks, I wouldn’t put too much stock in how your test pressing sounds on it.
There are also plenty of videos on YouTube that go over how to properly set up a turntable, and there are a few test records you can purchase to help calibrate your turntable to its optimal state.
On the subject of vinyl, I already have an article about how to set yourself up for a great sounding vinyl record. Most good lacquer cutting engineers will send you either a reference lacquer to play for yourself, or a digital capture of their lacquer cut so you can hear what it sounds like when played on a properly set up turntable. If you don’t have a great way to listen to vinyl, hearing a digital capture of the lacquer from your cutting engineer is probably the way to go.
Well, this article was meant to be short but it ended up being long as usual. If you’re still reading, I hope it gave you some insight into what to listen for, and what rabbit holes to avoid, when you are listening to your mastered material and deciding how you feel about it.


#StudioTrappin - 12 Tips for Composing Music for TV Shows -

Do you ever find yourself watching TV and thinking that you could produce the music heard on the show? If you have decent production, editing, and mixing skills, then chances are you have what it takes. The following are 12 things to consider if you’re looking to compose for TV.
[Editor’s note: some words are used interchangeably: composing = producing = writing, songs = tracks = cues]

1. Sign up with a PRO

Performing Rights Organizations (PROs) issue licenses for TV networks, streaming services, radio stations and music venues to publicly perform (broadcast) copyrighted music. PROs collect and distribute performance royalties on behalf of their members. If you wish to receive royalties from the performance of your music on TV, you must be registered with a PRO.
In the United States, there are three main PROs: ASCAP, BMI and SESAC. For you international readers, click here to get a list of countries and their respective PROs. Both BMI and ASCAP have open enrollment and SESAC is invite-only. While there are some differences between them, the most important thing is to simply choose one. You will not receive performance royalties without being a member. After signing up, you can register the songs with your PRO.
In my experience, if you’re working with a music library (who is also the publisher), they’ll handle the registration, as they need to update the song registration to include their information in order to receive their share of the performance royalty. The music publisher will receive approximately 50% of the royalty for the performance.
Bonus Tip: After your music has been sent off to the library and your songs are registered with a PRO, I suggest signing up with TuneSat. They offer a free tier that allows you to monitor up to 50 tracks. This is a great way to know when your music has been performed on TV.

2. Listen to the Music

Take time to watch shows for which your music would be suitable. Pay attention to the song structure, instrumentation, and dynamics. A typical two-to-three-minute pop-style structure will work in most cases. Keep the intros short and create interest by adding and subtracting elements throughout the cue.
While it’s not uncommon to hear variety, most shows will stick to a certain genre or vibe. Focus on how the different sounds are used to evoke emotion. Check out the repertoire of popular and established libraries like Extreme Music and Killer Tracks to get an idea of where the bar has been set for TV music. If your production and mixing chops aren’t in the ballpark of what you hear in those libraries — you still have some work to do.
Bonus Tip: TuneFind is a great resource to find information on the music used for a particular show.

3. Know Your Strengths

Focus on producing the genres you enjoy. This may sound obvious, but spend time producing music you really love to make. The people choosing the music for the show (e.g. editor, music supervisor) will hear the difference between an energetic, heartfelt, well-produced track and a bunch of Apple Loops thrown together.
One of my favorite things about making cues is that I get to be a musical chameleon. I come from more of a guitar-based rock background, but probably 80% of my royalties come from Pop and EDM cues. At the end of the day, certain genres of music get used all the time on TV, so you should be adept at producing at least one of those genres to have a chance at getting placements.
Bonus Tip: While you want to create engaging, high-quality music, you also can’t afford to spend endless hours on a cue that may only play for 15 seconds on a reality show. I’ve spent days on some cues, only to find that the cue I produced in 30 minutes, from hitting record to bouncing the master, received the most action.

4. Keep it Simple

The music should never get in the way of the dialogue or the story. Avoid harsh-sounding leads, super busy drum patterns, or other musical elements that draw too much attention. Unless, of course, that’s exactly what the scene calls for!
Leave some space in the arrangement. Remember, the music that gets placed in a show is there to support the story and create an emotional response from the viewer.

5. Sample Libraries

It shouldn’t come as a surprise that almost all the music you hear on TV shows comes from a virtual instrument. Save for the occasional guitar, bass, or shaker — almost every sound I use is a synth or sampled instrument.
As producers, we have a nearly endless amount of sounds from which to choose. While it’s a fairly hefty upfront investment, Native Instruments Komplete has paid for itself many times over. The wide range of sounds and upgradability have cemented Komplete as the foundation of my productions. Plus, third-party developers are always coming out with new instruments using the Kontakt format. Check out sites like Kontakt Hub and VSTBuzz for deep discounts on new instruments and sounds.
Bonus Tip: Get organized. A few years ago, I grew tired of scrolling through endless lists of samples and patches. It took a few days, but I organized my samples into folders (Kicks, Snares, Percussion Loops, etc.) and favorited often-used samples and patches, which sped up my production workflow.

6. Mix and Master

Every cue you submit to your publisher should be mixed and mastered — in other words — broadcast ready. There are some great mixing and mastering tutorials available, so I won’t get too detailed in this article, but learn to mix and master your cues so they’re ready to go for the publisher. Don’t crush the mix buss and leave some headroom.
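To make the headroom idea concrete, here’s a minimal sketch of converting a linear peak value to dBFS and checking it against a headroom margin. The 3 dB default is an arbitrary illustrative assumption, not a broadcast standard; check what your publisher actually wants:

```python
import math

def peak_dbfs(peak_linear: float) -> float:
    """Convert a linear peak sample value (0..1 = full scale) to dBFS."""
    return 20.0 * math.log10(peak_linear)

def has_headroom(peak_linear: float, headroom_db: float = 3.0) -> bool:
    """True if the mix peaks at least `headroom_db` below full scale."""
    return peak_dbfs(peak_linear) <= -headroom_db

print(round(peak_dbfs(0.5), 1))  # -6.0
print(has_headroom(0.5))         # True: roughly 6 dB below 0 dBFS
print(has_headroom(0.99))        # False: essentially no headroom left
```

The point of the check: if your pre-master mix is already brushing 0 dBFS, the mastering stage (yours or the publisher’s) has nowhere left to work.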
Bonus Tip: Ozone and Landr are your friends.

7. Be Consistent

Practice makes perfect. It’s a tired cliche, but relevant to music production. Every time I find myself deep inside a writing phase, things just seem to click. Chord progressions and melodies come more naturally, mixing and editing become effortless, and my workflow is more efficient.
When making cues, try to make a batch of 10-12 cues in that particular genre/vibe. This will give the publisher and ultimately the editor/supervisor plenty to choose from. Don’t be upset if your favorite cue from the batch never gets used (see bonus tip #3).

8. Be Reliable

Have you seen the 2008 movie Yes Man with Jim Carrey? It had a profound influence, for better or worse, on my approach to writing cues for TV and life in general. When I have an opportunity to produce music for a TV show or commercial, I try to clear my schedule and deliver the music on time.
Oftentimes, if I get a request from a publisher, they need the music quickly. Sometimes I get a few days and sometimes only a few hours. TV production schedules are tight and music is one of the last pieces added to a show. Be prepared to reliably deliver high-quality music on time.

9. Be Flexible

There have been numerous occasions where I’m asked to remix a cue or deliver stems of a mix. Usually this only takes a short amount of time, but be careful: a mix rebuilt from stems may not sound quite the same as the original bounce. Save your plugin presets so you can quickly recall a mix and achieve similar results.
By creating stems, you give the editor a great amount of flexibility when syncing music to picture. It could be argued that having a more “flexible” cue makes it more likely they’ll choose yours over another. If you checked out the Extreme Music or Killer Tracks libraries linked earlier in the article, you’ll notice that most of the cues have several variations, including 30-second versions, instrumental only, and more.
Bonus Tip: In addition to creating stems, include the BPM, key, genre, and other metadata to make the cue easier for the publisher to organize.
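As a sketch of what that delivery metadata might look like, here’s a hypothetical sidecar record built in Python and serialized as JSON. The field names are illustrative assumptions only; every publisher has its own required format and naming conventions:

```python
import json

def cue_metadata(title, bpm, key, genre, duration_sec, stems=None):
    """Build a simple sidecar metadata record to deliver alongside a cue.

    Field names here are made up for illustration; match whatever
    template your publisher actually asks for.
    """
    return {
        "title": title,
        "bpm": bpm,
        "key": key,
        "genre": genre,
        "duration_sec": duration_sec,
        "stems": stems or [],
    }

meta = cue_metadata("Heavy Dark Beat", 85, "A minor", "Trap",
                    duration_sec=150,
                    stems=["drums.wav", "bass.wav", "synths.wav"])
print(json.dumps(meta, indent=2))
```

Even a simple text file with this information saves the editor a search through untagged audio, which can be the difference between your cue getting auditioned or skipped.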

10. Quick and Easy Delivery

I use Dropbox, but there are many other backup and file sharing services to choose from. HighTail, Box, WeTransfer and Google Drive are all decent options. The key is that you need a reliable and convenient way to share files.
Bonus Tip: Make sure that the service you use has a mobile app. There have been multiple occasions where I’ve been contacted by a publisher and was away from my studio. Being able to send files while you’re on-the-go could be the difference between getting the gig or not.

11. Be Realistic

The feeling of hearing your music played on TV is exhilarating. It kind of legitimizes all the work you’ve put in over the years to become a music producer, or maybe it will simply impress your friends and family. Either way, you have some money coming your way thanks to US Copyright Law.
As the composer/producer of the cue, you will be paid one of two ways: on the front-end or the back-end. In the case of composing for Film, Games or TV Commercials, getting paid a sum of money to compose the music is considered front-end. In the world of TV, especially reality TV, front-end payment is rare and back-end payment from your PRO is the norm.
One lovely thing about back-end is that you receive small royalties every time your cue is played on the show. All those 15-20 second performances add up over time and lead to a nice royalty payment.
As you can imagine, the process of getting paid for the performance of your cue takes time. Six months to a year is about the average amount of time it takes the TV network to submit cue sheets to the PRO and for you to get paid.
Bonus Tip: Sign up for Direct Deposit payment from your PRO. If your royalty payment is less than $100, they will not send a paper check.

12. Reuse and Recycle

If you’re like me, you probably have a ton of unfinished tracks with names like “heavy dark beat” or “A minor thing”. Those unfinished tracks that don’t have a place on your next album or mixtape could easily become a cue.
There are numerous ways that you can reuse and recycle a previous MIDI performance or audio file. Import a MIDI file into a new track with a new sound, change the tempo, and slide some notes around to create a new variation. This is a quick way to get the creative juices flowing.
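The kind of quick variation described above can be sketched in a few lines. This toy example uses a made-up `(pitch, start_beat, length_beats)` note representation rather than a real MIDI library, just to show the transpose-and-retime idea:

```python
# Toy note representation: (pitch, start_beat, length_beats).
# A real project would round-trip through your DAW or a MIDI library;
# this only illustrates the kind of transform involved.

def transpose(notes, semitones):
    """Shift every pitch up or down by a number of semitones."""
    return [(pitch + semitones, start, length)
            for pitch, start, length in notes]

def half_time(notes):
    """Double every start time and length for a half-time feel."""
    return [(pitch, start * 2, length * 2)
            for pitch, start, length in notes]

riff = [(57, 0.0, 0.5), (60, 0.5, 0.5), (64, 1.0, 1.0)]  # A minor idea
variation = half_time(transpose(riff, 3))  # up a minor third, half-time
print(variation)  # [(60, 0.0, 1.0), (63, 1.0, 1.0), (67, 2.0, 2.0)]
```

Two tiny transforms like these turn one abandoned sketch into several distinct-sounding cues, which fits nicely with the batch approach from tip #7.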
There you go — 12 tips that hopefully gave you some useful and practical information about composing for TV. Most of the music professionals I know have multiple sources of income outside of their primary gig, and composing for TV is a great way to supplement your recording, mixing, and producing gigs.