What's your approach to balancing frequencies?

The immediacy of jamming with just a single box and squeezing as much as possible out of it is something I’m always drawn towards. With the M:C there are obviously limitations, but I still like the idea of producing full tracks or even performing a whole set with it. However, when it comes to mixing and balancing frequencies, especially the low end, it never quite feels like I’m getting anywhere. Basically, it never sounds as deep and convincing on smaller speakers as it does on my headphones.
I know it’s not trivial to get the mix perfect without additional tools (mixer, DAW, …), but apart from that I’d like to ask for advice in the sense of »how far can it be done without anything else?«
Questions I’m constantly asking myself revolve around designing better (more suitable? harmonics?!) sounds, whether I’m balancing levels wrong, … am I overlooking something obvious?

Take this track I’ve done as an example:

Let’s hear what you have to say. Thank you! :slight_smile:

Edit: I’m not after perfection. Of course.

Isn’t that just a matter of physics? Smaller speakers are going to move less air and so your track will definitely sound different than on headphones which are pumping air directly into your eardrums lol.

Personally I think a compressor is key for these boxes because the dynamic range is so wide that it’s easy to tweak yourself out of a balanced mix.
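If it helps, here’s roughly what that compressor is doing, as a tiny Python sketch: just the static gain curve, with threshold and ratio numbers I picked for illustration (real boxes smooth this with attack/release):

```python
# Hard-knee compressor gain computer (static curve only).
# Threshold/ratio values are illustrative, not from any particular unit.
def compress_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Return gain reduction in dB for a given input level in dB."""
    over = level_db - threshold_db
    return 0.0 if over <= 0 else -over * (1.0 - 1.0 / ratio)

print(compress_db(-6.0))  # 12 dB over threshold at 4:1 -> 9 dB of reduction
```

Squashing the loud peaks like this is what keeps one wild knob tweak from wrecking the whole balance.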

What I’ve been doing is putting all my audio through AUM and Mixbox. I find the resulting mix much more balanced and enjoyable.

1 Like

Essentially, your bass elements need more mids (and even high end) than you think. When you hear something that sounds bassy through tiny speakers, it’s not because they are pumping out any bass or subs in the 30–80 Hz region but because there’s enough content in the lower mids (or even the upper frequencies, when it comes to the ‘click’ of a solid kick drum, for example) to fool our brains into thinking it’s bassier than it really is. So you obviously don’t need to mix out the lowest parts of your track, because those will still sound fantastic on your headphones and larger speakers, but by adding a bit of compression and saturation/distortion to your bass elements they will have a little more presence and will cut through that little bit more on smaller speakers.

Edit: Not that I’m any kind of expert on these things. I deliberately mix very bass heavy and couldn’t care less about anyone who wants to listen to my music on tiny speakers or earbuds and literally miss half of the tracks as a result haha.
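To make the saturation point concrete, here’s a minimal numpy sketch (the sample rate, drive amount and 50 Hz test tone are all my own picks): soft-clipping a pure sub tone spreads energy into harmonics that small speakers can actually reproduce.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr                      # one second of time
bass = np.sin(2 * np.pi * 50 * t)           # pure 50 Hz: invisible on tiny speakers

saturated = np.tanh(4.0 * bass)             # soft clip: adds odd harmonics

# The saturated tone now has energy at 150, 250, 350 Hz... where small drivers work.
spectrum = np.abs(np.fft.rfft(saturated))
freqs = np.fft.rfftfreq(len(saturated), 1 / sr)
print(freqs[spectrum > 0.05 * spectrum.max()][:8])  # expect ~[50, 150, 250, ...]
```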

13 Likes

Thanks, Craig, for reminding me of that. This is something I’ve been guessing at recently too. It also feels like maybe I’m overcomplicating things, because I’m stubbornly insisting that I should be able to achieve this with just one particular piece of gear.
But hey, saturation sounds good, and creating some more noise is definitely something I can try.

1 Like

The most valuable piece of advice I’ve heard in recent years is “mixes are made in the mids”. It’s so simple, but once understood (with your ears too) it really helps.

5 Likes

Speaking specifically about the M:C, don’t use the “punch” on any bass or kicks, it sucks all the low end out of them.

Also, I often do the opposite when it comes to using the drive. I will likely drive the shit out of my low end sounds, using the punch to give the higher end stuff a bit more presence.

It all comes down to mixing and gain staging really, and knowing which bits you can push harder and with what tools.
I’ve done a lot of tracks and a longish set with just the M:C and I find it a real pleasure to mix with on its own. The key is to keep it simple: don’t overthink it, just piss about with knobs until it sounds good.

EDIT: Also, big clicky transients on kicks are great.

7 Likes

If you think about 808s and 303s, distortion plays a huge role in making them sound huge on systems that don’t extend into bass and sub frequencies. If you want your low end to be heard on a range of systems, you’ll have to add harmonics to your kicks, basses, etc.

Midrange really defines a good mix. If you can balance elements without crowding and masking, you’ll be in good shape when it comes time to compress and limit things.
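One way to sanity-check that midrange balance, sketched in Python (the band edges here are my own rough split, not a standard): measure how a mix’s energy divides across bands, and watch for two elements piling into the same one.

```python
import numpy as np

def band_energy(signal, sr, bands):
    """Fraction of total spectral energy falling in each named band."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    total = power.sum()
    return {name: power[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in bands.items()}

bands = {"lows": (20, 120), "low mids": (120, 400),
         "hi mids": (400, 2000), "highs": (2000, 20000)}

sr = 48000
t = np.arange(sr) / sr
mix = np.sin(2 * np.pi * 55 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)  # toy bass + lead
print(band_energy(mix, sr, bands))  # lows ~0.8, hi mids ~0.2 for this toy mix
```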

8 Likes

I’ve been musing on this lately, any good tutorials on mixing for the mids?

There’s got to be more than just setting an EQ over your master bus and seeing what sounds nice, then saturation, reverb and some sort of frequency-specific dynamics processing.

I’ve been working on this myself too, for the last year or so, and things are starting to actually get better in my mixes. So one thing is just to keep at it.

Apart from the tips to extend your bass sounds into the mid range via sound design, it’s also an option to simply tune some of your bass sounds up a bit, towards the mid range. For example: if you have a deep low kick, try tuning your bassline a few notes or an octave higher than you’re used to. (Plus add some mid range to your kick too:)
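The arithmetic behind that tuning trick, for what it’s worth (standard A4 = 440 Hz tuning assumed): an octave up doubles the frequency, which can move a bassline out of sub territory into a range small speakers handle.

```python
def midi_to_hz(note):
    """Equal temperament, A4 (MIDI 69) = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

print(round(midi_to_hz(28), 1))  # E1 ~ 41.2 Hz: pure sub, gone on a phone speaker
print(round(midi_to_hz(40), 1))  # E2 ~ 82.4 Hz: an octave up, far easier to hear
```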

2 Likes

It’s like cooking: throw everything in and it will be shit. Be selective, use ingredients that complement or contrast, and make use of space, both in frequency and rhythm. More is rarely better; if something does not need to be there, get rid of it. Most often use EQ only for cutting, but ideally don’t use sounds that need cutting or boosting and your mixing will be much easier and better. Most of the time EQ is great on fx, especially reverb and delay, with cuts to avoid clutter.
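For the cuts-on-fx part, something like this is one way to do it in software (scipy here, and the 250 Hz corner is just my guess at a sensible starting point):

```python
import numpy as np
from scipy.signal import butter, sosfilt

sr = 48000
# 2nd-order high-pass at 250 Hz on the reverb return: keeps the tail
# out of the low end so it doesn't clutter the kick and bass.
sos = butter(2, 250, btype="highpass", fs=sr, output="sos")

reverb_return = np.random.randn(sr)   # stand-in for a reverb tail
cleaned = sosfilt(sos, reverb_return)
```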

I tend to mix during composition phase, then final tweaks - mainly of levels and fx, once something is cooking.

6 Likes

I remember reading in an old acoustics book that orchestral music could be played from a cheap transistor radio with a frequency range starting at 400 Hz, and the contrabassoon, timpani and double basses could still be identified in the texture.

The explanation for this, I think, was: higher overtones in a sound “point” to an implicit fundamental frequency. Our brain does the rest. Psychoacoustics.

I suppose if you create a bass sound, then are able to filter/attenuate the bottom and have it still retain a “bassi-ness”, then you may be heading in the right direction. If, on the other hand, you find yourself trying to boost the low-end of the bass sound to fix the mix, you may already have a problem.
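That missing-fundamental effect is easy to hear for yourself; here is a small numpy sketch (my own choice of fundamental and harmonic count):

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
f0 = 100.0

# Harmonics 2 through 5 only: zero energy at 100 Hz, yet the ear
# still hears a pitch at 100 Hz. The overtones "point" to it.
implied_bass = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(2, 6))
implied_bass /= np.max(np.abs(implied_bass))  # normalize before playback
```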

I think it’s been mentioned by others, but a good mix is going to sound good in your car, on that cheap transistor radio, or on a nice setup.

I frequently get the bass wrong on my tracks. Problems include: too much, too little, not enough pitch clarity, or a zero-sum relationship with other parts of the texture.

2 Likes

…basic rule of thumpbbbbb…

the house of overall frequencies “only” got 6 floors…!

sub end: 25 to 60 Hz
low end: 40 to 120 Hz
low mids: 120 to 400 Hz
hi mids: 400 to 2000 Hz
hi end: 2000 to 10000 Hz
air end: 10000 Hz to out of hearsight

and on each of these floors there can only be ONE element that is really prominent at any given moment in time…
and the more complex ur element structure gets, the more ur single elements must be shaped properly to work together with each other to become that one thing again to rule that same overall frequency floor plan we got…

while the final goal always remains the same…in the end u want a sum that is more than just the result of all ur sonic elements added up on top of each other, all at once…

so, the sad truth is…if u work with hardware only, u must keep it dead simple…
or, if u want more complex arrangements, there’s no way around additional tools like dedicated eq’ing, compression and individual spacing…which can only be done, at least within reasonable real world solutions, via some daw…
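Those six floors, written down as a lookup you can poke at (a sketch using the overlapping band edges quoted above, nothing official):

```python
FLOORS = [
    ("sub end",     25,    60),
    ("low end",     40,   120),
    ("low mids",   120,   400),
    ("hi mids",    400,  2000),
    ("hi end",    2000, 10000),
    ("air end",  10000, 20000),   # "out of hearsight" capped at 20 kHz here
]

def floors_for(freq_hz):
    """Which floors a frequency lands on (the bands overlap, so maybe two)."""
    return [name for name, lo, hi in FLOORS if lo <= freq_hz <= hi]

print(floors_for(55))    # ['sub end', 'low end']: a low A sits on two floors
print(floors_for(1000))  # ['hi mids']
```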

3 Likes

I haven’t produced anything electronic with more than six elements. I can think of a few examples of how, in theory, more voices can be combined within a smaller frequency range.

For example, J.S. Bach wrote 4- and 5-voice fugues within a range of a few octaves. It was the use of counterpoint that allowed the voices to stay independent of one another. Each voice moved in an independent fashion, helping the ear separate the voices.

More voices can be combined, also, if the separate voices do not happen at the same time. For example, put one voice on the beat and another voice on the offbeat. Different elements can share the same “floor”, but happen at different times.

The separation of voices can also be maintained by creating different ADSR envelopes for each voice. Differences in articulation can help us distinguish among multiple parts within a complex musical texture.
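A rough sketch of that envelope idea (times and shapes are my own picks): two voices can share a register yet stay distinct if one is plucky and one is pad-like.

```python
import numpy as np

def adsr(sr, attack, decay, sustain, release, hold=0.5):
    """Piecewise-linear ADSR envelope (times in seconds, sustain 0..1)."""
    a = np.linspace(0, 1, int(sr * attack), endpoint=False)
    d = np.linspace(1, sustain, int(sr * decay), endpoint=False)
    s = np.full(int(sr * hold), sustain)
    r = np.linspace(sustain, 0, int(sr * release))
    return np.concatenate([a, d, s, r])

pluck = adsr(48000, 0.002, 0.15, 0.0, 0.05)  # sharp and percussive
pad   = adsr(48000, 0.800, 0.30, 0.7, 1.00)  # slow and sustained
```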

And there is the way we create individual sounds. Creating the best-sounding patch, when auditioned solo, then combining it with other best-sounding patches, does not necessarily produce the best result. The voices may each need to be thinned out to fit into the texture.

IMO, reverb diminishes the effectiveness of stacking multiple voices. The ambient noise level may swallow up other, quieter sounds in the mix. Muddy.

Not sure I agree there can be only one or two elements in the high end. Orchestration teaches that chords written in the higher ranges can be more tightly spaced, because the ear’s pitch acuity (the ability to distinguish between pitches) becomes finer in the higher frequencies.

Interesting topic!

2 Likes

If you want your low end to have more presence on small speakers, you can copy your track, move the second track an octave up and then lower the volume until it is more or less just a hint. If the low end is a more complex waveform, you might want to simplify the second one a bit to get it closer to a sine wave. Having mids moving with the low end will make you feel like you are hearing more of it. *other people basically said this :sweat_smile:
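In code form, the layering trick looks something like this (numpy, with the roughly -12 dB layer level being my guess at “just a hint”):

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
f = 50.0

low   = np.sin(2 * np.pi * f * t)              # the real low end at 50 Hz
layer = 0.25 * np.sin(2 * np.pi * 2 * f * t)   # octave up (100 Hz) at ~ -12 dB
bass  = low + layer   # small speakers render the 100 Hz layer; big ones get both
```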

I really like how this topic is evolving from a rather unspecific question into a very inspiring exchange of knowledge. I will have to find some time to try out a few things and see what fits my workflow best.

Thank you for the various facets and perspectives you have been sharing so far. :black_heart:

…if i say sonic elements and u think of samples…
well, there’s the problem…

I learned an idea from this book which I like, but find hard to implement in my projects: it’s fine to have a sound that forces you to mix everything else around it. Maybe your middle-8 lead sound is so good you want to highlight all of it. Let it cross the various bands as it demands, and mix everything else around it; submit to the quirks and accept the character this imparts, rather than going for an even distribution of sounds across the spectrum. This technique won’t suit every track, but it’s helpful to remember that sounds can cross the bands.

I am not familiar with the term sonic elements. A Wikipedia search for sonic elements returns the name of a progressive rock band from LA.

So a sonic element means one of the resonant frequencies of a complex sound? Or what we call a “partial”? Your “floors” analogy applies to the creation of a single sound/patch/sample, not to an entire arrangement of sounds?

Sorry for the confusion.

…a lead melody, for example, is a sonic element…

and such a musical/sonical element in writing/producing music in general can be achieved by using just one sample, as u suggested, or a single synth voice, or many synth voices, or various synths AND samples all at once, or a guitar, or any other combination of compositional sonic elements to become that one thing together…THE lead melody motif…

any musical content has such sonic elements…such as bass motifs, various harmony content like chord progressions, rhythm and so on and so on…

oops… :wink:

no need for big wiki crosschecking here…

and before u start wondering again…yes, of course a bassline, for example, again consists of various harmonic partials in the overall house of frequency…but it has to fulfill ONE single mission at any given moment of an arrangement…to be the harmonic foundation of the whole thing…and the more u crosscheck how much it’s fighting with a kik for the same attention at any given moment in that same low end where they BOTH have to complement each other to work for real (that moment where the sound design concept of “ducking” was invented)…while, at the same time, not stealing too much space in the mid end again, to leave room for the other sonic elements u might wanna establish, the more it worx TOGETHER in ur overall sonic storytellin’…

besides the house of frequencies and its limited number of floors of perception, there’s also the side fact that the average music listener can only follow a certain number of sonic elements at any given moment of an arrangement and still really enjoy the compositional flow, before ending up totally stressed or overwhelmed…

and of course, U, as an artist, decide what u want ur listener to experience…
if u wanna make sure he/she can appreciate ur sonic content without any second thoughts and nothing but joy, u provide it that way…or not…always depends on what u wanna achieve… :wink:

2 Likes

Keeping in mind that this is a rule of thumb and not “The Law of Frequency Balancing, As Foretold in Scripture”, it’s interesting to me that two of the most popular instruments in 20th-century popular music, the guitar and bass guitar, overlap across two frequency floors in their most usable ranges:

Bass Guitar:
Sub End: lowest E to low B (on A string)
Low End: low B to high B (on G string), covering only one octave

Guitar:
Low End: lowest E to low B (on A string)
Low Mid: low B to G (on high E string)
Hi Mid: Above G on high E string (shredders and jangle pop)

Similar crossing happens in several vocal ranges, though I’m guessing psychoacoustics helps keep the voice prominent.

Edit: Suddenly worried this came off as antagonistic when it was not meant to be! I guess what I’m stuck on is, given that so many records and performers use instruments that cross these floors, does it all come down to arrangement and The Art of Mixing, or is there some other step I’m missing?
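For what it’s worth, the numbers behind those ranges check out (same octave arithmetic as earlier in the thread, A4 = 440 Hz; the note picks are mine):

```python
def midi_to_hz(note):
    return 440.0 * 2 ** ((note - 69) / 12)

for name, note in [("bass low E (E1)", 28), ("bass low B (B1)", 35),
                   ("guitar low E (E2)", 40), ("guitar open high E (E4)", 64)]:
    print(f"{name}: {midi_to_hz(note):.1f} Hz")
# bass low E ~41 Hz (sub/low end), guitar low E ~82 Hz (low end):
# the two instruments really do share floors.
```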

1 Like