The artistry of AI-assisted music
Does the rise of artificially intelligent tools make music production any less artistic?
The growing popularity of artificially intelligent tools in creative industries should hardly come as a surprise, given how far a dash of algorithmic magic can go towards cutting unnecessary busywork in the pursuit of artistic success. Photographers and graphic designers have long enjoyed using content-aware fill tools and automatic exposure adjustments to rough out their drafts before putting on the final touches. DJs have automatic beat gridding, which syncs tracks to one another and makes for seamless nightclub mixes. Even writers have Grammarly-esque tools that suggest follow-up words in real time. In no other field, however, has the meteoric rise of AI tools been as controversial – and as confusing – as within music production over the past half-decade.
In 2017, music software developer iZotope released a software suite that would forever change the landscape of mastering technology (and give mastering engineers around the world an immediate and overwhelming fear of death): Ozone 8. Previous iterations of the software had served as a bundle of useful tools for balancing the spectral and dynamic content of an in-progress track, but the eighth revision shipped with a ‘Master Assistant’ feature, in which an AI companion would listen to your track and compare it to a database of millions of other songs it had already analysed. It would then average out the data of the closest matches and automatically apply EQ, compression and stereo imaging settings to get your song sounding as close as possible to commercially mastered releases. Almost immediately, boomer producers from across the world – presumably those who had been making Eminem-type beats for years – emerged from the woodwork to decry the innovation as laziness-inducing marketing hype.
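For the curious, the matching idea itself is simple enough to sketch: reduce a track to a coarse spectral profile, find the nearest profiles in a reference library, and nudge the track towards their average. The Python snippet below is a minimal, hypothetical illustration of that logic; Ozone's actual internals are proprietary, and the band layout, reference data and function names here are all stand-ins.

```python
# A minimal sketch of the reference-matching idea behind assistant-style
# mastering tools. Everything here (band layout, reference data, names)
# is illustrative, not how any real product works internally.
import numpy as np

def band_energies(signal, sample_rate, n_bands=8):
    """Average spectral magnitude in logarithmically spaced frequency bands."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    edges = np.logspace(np.log10(40), np.log10(16000), n_bands + 1)
    return np.array([
        spectrum[(freqs >= lo) & (freqs < hi)].mean()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])

def suggest_eq(track_bands, reference_bands, k=5):
    """Find the k closest reference profiles and return the per-band gain
    (in dB) needed to move the track towards their average."""
    dists = np.linalg.norm(reference_bands - track_bands, axis=1)
    nearest = reference_bands[np.argsort(dists)[:k]]
    target = nearest.mean(axis=0)
    return 20 * np.log10(target / np.maximum(track_bands, 1e-12))

# Hypothetical usage: `track` is a mono audio array, `references` is a
# library of pre-computed band profiles from commercially mastered songs.
# gains_db = suggest_eq(band_energies(track, 44100), references)
```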
Since the release of Ozone 8, countless other plugins and pieces of hardware have shipped with machine learning and AI as description-filling ‘exciting new’ features. Mixed In Key’s Captain Chords generates new musical motifs from a given chord progression, writing the melodies to entire verses and choruses if you so desire. Sonible’s Smart Reverb algorithmically calculates the most appropriate amount of reflections your sound would have in a virtual space of your choosing, keeping unnecessary echo under control. Other plugins have taken a more esoteric approach to what happens under the hood: no one can really be sure what Soundtheory’s Gullfoss actually does, except that if you turn a couple of the knobs on the interface, your song might sound a little clearer and brighter – whatever that means.
These further AI ‘innovations’ – if that’s what you can call them – have spawned a self-aggrandising reactionary movement across the internet whose adherents claim that using these tools to produce music is actively anti-creative and anti-artistic. “Your mixes will never sound good,” they declare. Not unless you do them by ear, painstakingly over the course of several hours, while your wife and infant child look on in pity and despair!
It’s not hard to imagine why they might come to this conclusion: authors write words on the page, visual artists pull together materials and put paint to canvas, and architects… do whatever architects do. We think of chords, melodies and textures as the intuitive and paradigmatic product of artistry when it comes to contemporary music production – after all, if a computer generated Phoebe Bridgers’ hooks by algorithm, would all her Twitter stans still be as willing to claim her song-writing genius? The argument then proceeds like this: AI tools lower the skill floor of music production and make mediocre tracks much more achievable. They tempt people into forgoing the difficult process of learning abstract skills like ear-training and relying instead on a pre-determined algorithm. Therefore, as time passes and more of these tools come out, music will become increasingly homogenised, and forward-thinking records will become fewer and further between.
The more I think about this position, the more untenable I find it. Not only does it betray a fundamental misunderstanding of the nature of music production – at least since the invention of the synthesiser – but it is also unjustifiably pessimistic. If anything, the rise of so many useful AI-assisted tools has made music-making far more intimate, and the boundaries for experimentation and new ideas far broader than ever before.
First, music production is a creative field so mired in and deeply interconnected with automation that these new ‘AI’ tools are really nothing more than a slight extension of techniques that have existed forever. Ever since Robert Moog debuted the first commercially available, musically useful voltage-controlled oscillator in 1964, the ‘directness’ of artistic involvement has been mediated by technology with a mind of its own. No longer did musicians have to physically cause sound to come out of their instruments, like plucking a violin string or hammering a piano key: synthesisers are always generating sound, as long as they are supplied with electricity. To ‘play’ a synthesiser is to access a continuous stream of sound that already exists thanks to its circuitry, and to shape the emerging vibrations using envelopes and other modulation. And yet, do we regret the invention of electronic instruments – the single greatest leap forward for music since Bartolomeo Cristofori discovered he could put dynamic control into a keyboard instrument back in 1700? By extension, would we characterise David Bowie, Daft Punk or Aphex Twin as un-artistic? Music these days is nigh-unthinkable without the timbres and textures pioneered by Moog, Oberheim, Dave Smith, Yamaha and Roland throughout the 20th century; AI is merely an extension of the progress that has already been made.
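If that sounds abstract, it's easy to make concrete. The toy Python sketch below (every parameter and name is illustrative, not borrowed from any real instrument) generates a sawtooth that exists for its full duration whether or not anyone ‘plays’ it; playing a note amounts to shaping that ever-present signal with an envelope.

```python
# A toy illustration of the synthesis point: the oscillator runs for the
# whole duration regardless of the player; the envelope is what turns that
# continuous signal into a note. All parameters here are arbitrary.
import numpy as np

SAMPLE_RATE = 44100

def oscillator(freq_hz, duration_s):
    """A sawtooth wave that runs for the whole duration, player or not."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    return 2.0 * (t * freq_hz % 1.0) - 1.0

def adsr(n_samples, attack=0.01, decay=0.1, sustain=0.6, release=0.3):
    """A simple attack-decay-sustain-release envelope (times in seconds)."""
    a = int(attack * SAMPLE_RATE)
    d = int(decay * SAMPLE_RATE)
    r = int(release * SAMPLE_RATE)
    s = max(n_samples - a - d - r, 0)
    return np.concatenate([
        np.linspace(0, 1, a),        # attack: fade in
        np.linspace(1, sustain, d),  # decay: fall to the sustain level
        np.full(s, sustain),         # sustain: hold while the key is down
        np.linspace(sustain, 0, r),  # release: fade out after key-up
    ])[:n_samples]

# 'Playing' a note is just multiplying the ever-present signal by an envelope.
note = oscillator(220.0, 2.0)
note *= adsr(len(note))
```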
Moreover, music production is becoming increasingly about the intention behind the process and not the notes-on-the-ledger-lines final product – it has undergone an almost impressionistic shift since the turn of the century. Infamously, American rap trio Migos took an average of 30 minutes to record each song on their 2018 album Culture II. To judge a song as insanely stupid as ‘Stir Fry’ by the nature of its chordal or melodic content utterly misses the mark of Migos’ innovation: brain-dead, unthinking slappers with absurdly loud 808s and kicks. Cutting-edge trends like the recent wave of – cough – hyperpop blow out traditionally hated production techniques – auto-tune, chipmunked vocals, overly distorted clipping, terrible MIDI instruments – and transform them into an aesthetic of subversion: an irreverent fuck-you to mixing convention rather than a half-assed attempt to emulate tradition. To employ a relatable phrase I hate: it’s the vibe, honey.
However, and perhaps most crucially, it’s simply incorrect to view these new tools as somehow any different from the aforementioned techniques photographers, graphic designers or writers employ in their day-to-day craft: ones that we don’t seem to have the same kneejerk reaction to. Anyone who has spent more than five minutes applying an equaliser curve knows it’s an absolutely mind-numbing chore. The time spent A/B-ing compression settings could just as easily be spent considering extra musical variety, a beat switch-up, or a new creative combination of effects. AI tools lower the skill floor, sure, but in ways that don’t fundamentally matter to the pursuit of an artistic vision. In a sense, they’re far more liberating than restrictive, because anyone can put out a good-sounding mix from their bedroom on no budget.
While places for professionals will always exist in the industry, AI democratises music production by decreasing the amount of time necessary to reach a baseline level of listenability, and allows young creatives to focus on crafting their own sound. I, for one, welcome our new robot overlords.