I was about 75% through writing a letter with a foul temperament before I thought better of it. My bad mood abated, I’d prefer to talk about something exciting instead.
I’ve been thinking a lot lately about this quote from the producer SOPHIE, who tragically passed away on January 30th. Here it is in full:
“I’m rather fixated on this idea of a monophonic, elastic, full frequency range morphing composition. The language of electronic music shouldn’t still be referencing obsolete instruments like kick drum or clap. No one’s kicking or clapping. They don’t have to! So it makes more sense in my mind to discard those ideas of polyphony and traditional roles of instrumentation. It seems wacky to me that most DAW software is still designed around having drums/bass/keyboard/vocal presets for production. That’s what I find liberating about the Monomachine. It’s just waveforms that can be pushed into shapes and materials and sequenced. Just like a sculpture machine. Not like a computer pretending to be a band from the 70’s or whatever”
That might be a lot to take in if you’re not a musician, so let me try to break that idea down into some component parts (which is maybe counter to SOPHIE’s logic here, but we’ll get to that) so that we can approach the underlying concept on more solid footing.
First, monophony vs. polyphony: is the sound you’re hearing coming from one source or many? Is it a snare drum or two people singing together? Under this rubric, the vast majority of what people listen to is polyphonic music: multiple instruments and voices working together to create a single piece of music. But you’ll notice in the last sentence I already gave the game away a bit. Regardless of how many instruments are playing, most people experience recorded music monophonically. Whether through one speaker or two earbuds, people hear one song.
Until pretty recently it would still take a multitude of instruments, recorded through a single microphone or several, to create that illusion of monophony. But as electronic instruments have evolved, it’s become easier and easier to compose music with a single Digital Audio Workstation (DAW). However, as SOPHIE described, most of these DAWs are designed to replicate the older model of recording multiple instruments on top of each other.
For example, when I open up Logic, the DAW that I use to write music, I can choose from a set of digital sounds meant to emulate the sounds of various acoustic instruments: piano, guitar, violin, drums, you name it. Despite what it says on the tin, these sounds are not the real deal. They are recreations of what an acoustic instrument would sound like. Not the instrument itself, but the sound that the instrument makes. So when we strip away the surface, the various sounds in a DAW are nothing more than frequencies that, when combined, approximate the instrument they’re named after. What SOPHIE called for in this quote is an abandonment of pretenses. Instead of pretending that these organized frequencies are an acoustic instrument played in meatspace, take these sounds for what they are.
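To see how thin that pretense is, here’s a rough sketch in Python (a convenience on my part, not anything Logic actually does under the hood): a fake “instrument” that is nothing but a few sine waves at multiples of one frequency, each fading at its own rate. Every number in it is an illustrative guess rather than a real preset.

```python
import numpy as np

SAMPLE_RATE = 44100

def fake_instrument(freq_hz, seconds=1.5):
    """Approximate a plucked or struck tone by summing a few harmonics,
    each fading at its own rate. No instrument involved, only frequencies."""
    t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    partials = [(1, 1.00), (2, 0.50), (3, 0.25), (4, 0.12)]  # (harmonic, level)
    tone = np.zeros_like(t)
    for harmonic, level in partials:
        decay = np.exp(-3.0 * harmonic * t)   # higher harmonics die out faster
        tone += level * decay * np.sin(2 * np.pi * freq_hz * harmonic * t)
    return tone / np.max(np.abs(tone))        # keep the result within [-1, 1]

note = fake_instrument(220.0)  # call it "A3", or really just 220 Hz and its multiples
```

Play that array back through any audio library and you’ll hear something vaguely string-like, which is the whole point: the “piano” was never in there, only organized frequencies.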
Once you’ve discarded the idea of a DAW containing multiple instruments, you can think of its sounds as a single electronic voice. The song isn’t a matter of different instruments interlocking, but of a single voice changing its shape. When you remove music from its source and treat it only as it exists in the ears of its listeners, it is free to resemble a sculpture that morphs over time instead of the product of multiple voices.
This is a profound observation about electronic music, one that opens up entire vistas of musical composition, and helps explain why SOPHIE’s approach to production was so revolutionary. Instead of arranging her tracks in terms of kick drums and snares, she thought of her music purely in terms of the frequencies of those instruments. A low percussion sound, a high frequency sound with a short decay. Once removed from their earthly relatives, these digital sounds could take any number of shapes, and would frequently morph over the course of a composition.
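To put that in the same terms, here’s one more small sketch (again Python and numpy, again with numbers invented purely for illustration, and in no way a reconstruction of SOPHIE’s actual process): a single oscillator with no drum names attached, starting as a slow, low thump and gradually morphing into a high, quickly fading tick. One voice, changing its shape over time.

```python
import numpy as np

SAMPLE_RATE = 44100
seconds = 4.0
t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)

# One voice whose shape changes over the phrase: the frequency glides
# exponentially from 60 Hz up to 4000 Hz...
freq = 60.0 * (4000.0 / 60.0) ** (t / seconds)
phase = 2 * np.pi * np.cumsum(freq) / SAMPLE_RATE   # integrate frequency to get phase

# ...while a repeating amplitude envelope tightens as it goes, so the same
# voice reads as a slow thump early on and a short, high tick by the end.
pulse = t % 0.5                                     # envelope restarts every half second
decay_rate = 4.0 + 60.0 * (t / seconds)             # fades get faster and faster
voice = np.exp(-decay_rate * pulse) * np.sin(phase)
```

There’s no kick and no clap anywhere in that description, just frequency and decay being pushed around.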
But this revelation also has relevance to those of us who don’t think of kick drums and claps as obsolete.
When I first read this quote, I was reminded of an interview I did with Brian Chase about the experimental drone music he makes when he’s not playing drums for The Yeah Yeah Yeahs. Chase described how, as he got better at playing the drums, his hearing gradually expanded outward. First he had to learn to listen to his own playing, then to how it interacted with the other musicians he was playing with, and finally to the sound of those musicians in the room they were playing in. Essentially, he was describing a process of hearing the music he was playing as polyphonic and gradually shifting to hearing it as monophonic.
Drums are a particularly good instrument to map that process through. A drum set is both a single instrument and a collection of instruments. When you hear a drum beat, let’s say something simple like the groove to “Billie Jean”, you probably perceive it as a single sonic object. It is a drum beat after all, not a drums beat. But even in that groove, what you are hearing is the interplay of three different instruments: the kick, the snare, and the hi-hat.
Early in the process of learning to play drums, I had to learn to listen to each of the drums separately in order to identify flaws in how I was playing them. Was I hitting the snare hard enough? What part of the hi-hat did I need to hit to make the best sound? Was I “choking” the sound of the bass drum by pushing the beater too far into the head? I needed to move from my monophonic ear to a polyphonic ear so that I could get the most out of each drum. But this approach has its limits. If you only think of the drums as individual sounds, it’s easy to overlook how those sounds interact. How are they balanced? What is the composite sound?
One of the best lessons I ever learned from my teachers was that I needed to think of myself as a mixing board behind the drum kit. Each of my limbs could work as a fader, letting me adjust the volume of each drum individually to create different sonic shapes. To do this you have to lead with what I think of as your “fifth limb.” Some drummers are led by their right hand, which can result in them playing the cymbals too hard and the kick and snare inconsistently. Others lead with their right foot, which grounds their sound in a powerful kick but might make their subdivisions wobbly up top (fwiw this can still sound pretty damn good in the right circumstances). But when you lead with the fifth limb, your primary instrument is the combination of the sounds your other four limbs are producing separately. Even if they don’t think of their playing in these terms, I would imagine all of your favorite drummers hear themselves in this way.
(You can even see the difference in the bodies of right-hand-heavy players and “fifth limb” players. Beginner drummers will often turn their bodies in their chairs to face the right hand playing the hi-hat, unconsciously turning their left ear away from the drums and wasting energy by maintaining a twisted spine. More experienced drummers will instead face forward and maintain a balanced posture even as their limbs move across the kit.)
Once I learned to hear my drum kit this way, I needed to repeat the same process with each of the musicians I was playing with at a given time. How were the drums balanced against the bass guitar? Was I overpowering the vocals? Which parts of my internal mixer did I need to adjust to make the guitars sit right, and when did I need to make those adjustments? This requires me to constantly move between monophonic and polyphonic listening, which takes focus and attention. But the results are well worth the effort. My bandmates have a better time playing with me because they can tell I’m listening to them, and the songs sound better because I’m less likely to trample all over them.
I’m not at Chase’s level, so I can’t hear the room the way he describes, but I think this type of hearing is something to aspire to. It would force me to hear the music not as it is made but as it exists in the world. In less annoying parlance: it would force me to hear the music the way my audience does. The average person doesn’t split the sound they’re hearing apart into its component sources; they simply hear the end result. This is why shows like Song Exploder or mix breakdowns on YouTube are so popular: they let monophonic listeners hear music in the polyphonic way that musicians do. If people naturally heard recorded music polyphonically, there would be no need for Hrishikesh Hirway to tease out the different parts of a mix; it would be redundant to the experience of listening to the song itself. So if we musicians want to better understand, and thus anticipate, how our audiences perceive our work, we need to separate ourselves from the experience of playing polyphonically and learn to hear the sounds we make together as a monophonic whole.