Music Composition & Artificial Intelligence: Part 2
In Part 1 of this series, we dug into the technical side of AI music composition, including neural network and algorithmic methods. Now, I’d like to step back and focus on a different set of questions:
- Can AI-composed music be good, i.e., will BeyoncAI ever rival the real Beyoncé?
- How might AI change the music industry?
- Who owns the rights to AI-composed music?
Is AI-composed music “good”?
There’s a simple answer to this, although some might call it a cop-out: it depends on what makes music good.
Outside of formal definitions based in music theory, we might define good music as music that produces strong emotional responses — euphoria, nostalgia, sadness, etc. But the level and nature of an emotional response depends on the individual listener. In fact, we could conceivably use AI to manufacture truly personalized music, leveraging user-level preferences to create compositions designed to provoke a specific emotional response in an individual person. Doing so might be enormously rewarding, or at the very least a lot of fun — but it wouldn’t tell us much about whether the music was universally “good”.
Even in the absence of a formal definition of “good”, though, almost everyone agrees that a lot of the music produced by AI systems is pretty bad. For example, when researchers at the University of Toronto used neural networks trained on over 100 popular Christmas carols to generate a new original carol, the results were underwhelming at best.
What’s holding AI composers back? In most cases, it comes down to data. “Good” AI composition — by any definition of “good” — requires very large data sets that fully capture the huge degree of variance across different dimensions of music.
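To make the data-dependence point concrete, here's a toy sketch (my own illustration, not any production system): a first-order Markov melody generator trained on a tiny, hypothetical corpus of pitch sequences. With so few training melodies, most pitch transitions are never observed, so the model can only rehash its source material — a miniature version of the problem real AI composers face with limited data.

```python
import random
from collections import defaultdict

# Toy corpus: a few melodies as sequences of MIDI pitch numbers.
# This tiny data set is the whole point: it can't capture much variance.
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],  # C-major run up and down
    [60, 64, 67, 72, 67, 64, 60],          # C-major arpeggio
    [67, 65, 64, 62, 60],                  # descending phrase
]

# Build a first-order Markov transition table: pitch -> observed next pitches.
transitions = defaultdict(list)
for melody in corpus:
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)

def generate(start=60, length=8, seed=0):
    """Sample a 'new' melody by randomly walking the transition table."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:  # dead end: this pitch was only ever seen at a melody's end
            break
        melody.append(rng.choice(options))
    return melody

print(generate())
```

Every pitch the generator can ever emit already appears in the corpus, and every interval it can ever write was already written by a human — which is roughly why under-trained models sound derivative at best.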
So, if we had the time and source material to sufficiently train up our models, would there be anything to prevent our AI systems from taking over the charts? Rather than offer my own speculation, I collected perspectives from several leaders in the field:
“If your bar is AI composing a pop song without people knowing it’s involved, that’s a much bigger hurdle, I think. That would involve AI composing music that’s as catchy and emotionally resonant, with lyrics as powerful, as non-AI-composed music. And each one of those things is really hard.” — Ed Newton-Rex, Jukedeck
“We’re a long way from Bob Dylan, but we’re not very far from Bum Bum Tam Tam [a Brazilian funk hit].” — Benoit Carré, FlowMachines
“It [the AI] is not always successful — the minute you push it out of its comfort zone, it falls apart — but we’ve had one musician look through them, and he said one in three are good, and one in five are surprisingly good.” — Bob Sturm, Queen Mary University of London
Ultimately, I take all of this to mean that what we’ll see in the near term is AI being woven into music creation in organic ways, rather than taking it over all at once. We could have a chart-topping AI-composed song in the not-too-distant future, but its success might be as much because of its origins (i.e., the novelty factor) as in spite of them. And although AI is getting better at emulating existing styles, it still needs a lot of human guidance, and it’s unlikely to break new stylistic ground anytime soon.
How is AI changing the music industry?
There’s quite a bit of speculation in this area. In researching this post, for example, I found the following questions scattered throughout various popular articles:
- Is an AI composition “real” music?
- Will AI be able to work without a human?
- Will AI steal the jobs of professional composers and musicians?
- Isn’t using AI cheating?
- Won’t AI completely destroy the music industry?!
However, one of these articles also made what I thought was a very good point: in 1982, a branch of the UK’s Musicians’ Union tried to ban the use of synths, on the grounds that they were taking work away from musicians who played stringed instruments.
Every time a new technology significantly shifts the way we create music — or, really, the way we do anything else — there are naysayers. Things like AutoTune and the use of samples and loops were also “disruptors” that eventually became commonplace. Heck, I’m sure even the first brass flute was an abomination to musicians who’d been carving wooden flutes for decades. In this sense, AI is just the latest in a long string of disruptive technologies, in an industry that has always defied narrow definitions.
Ultimately, humans are still at the core of the music creation process, actively collaborating with AI to achieve a final artistic vision — and that aspect, at least, will likely never change. But producing pop hits is a lucrative business, and money will surely be a strong motivator for AI music service start-ups. So some change seems inevitable, and we’ll just have to wait to see how things pan out.
The legal side of AI music composition
Questions of music quality and industry disruption loom large over AI composition, but two other issues might actually be of greater immediate concern: who is the “author” of an AI-composed song, and what happens if an AI infringes another author’s copyright?
Arguments about whether code can be the author of a musical work in the US are over 50 years old, but copyright law is still vague about the authorship of works that weren’t created by humans. For example, courts took about six years to settle a dispute that began in 2011 over whether a monkey could hold the copyright to a photo.
Recently, the developers behind Endel, an app that uses AI to generate personalized “soundscapes,” signed a distribution deal with Warner Music. Warner needed to know how to credit each track in order to register the copyrights, and the company was initially stumped as to what to list for “songwriter,” as it had used AI to generate all of the audio. They finally decided to list all six employees at Endel as the songwriters for all 600 tracks, but the solution seemed silly even to the employees themselves. One engineer said in an interview, “I have songwriting credits even though I don’t know anything about how to write a song.”
So far, we’ve mostly focused on the shortcomings of AI compositions. But what happens if our BeyoncAI gets good enough to create a track that sounds just like Beyoncé?
The law is generally reluctant to protect things “in the style of” other things, as musicians are influenced by other musicians all the time. For Beyoncé to have a real case, then, the AI composition would have to be substantially similar to an existing song — or a human collaborator would have to market the song as sounding like Beyoncé, without her consent.
Even if an AI system did closely mimic an artist’s sound, the artist might have trouble proving the AI was designed for that purpose. Copyright law requires the accuser to prove that the infringing author was “reasonably exposed” to the work they supposedly copied — and neural networks are extremely difficult to reverse-engineer. If a copyright claim were filed against a song written by our BeyoncAI, it could be close to impossible to prove that BeyoncAI’s algorithm was actually trained on Beyoncé’s music.
AI composition is a fascinating, and in many ways exciting, approach to creating music. It’s still far from popular — human-composed songs continue to top the charts, and likely will for the foreseeable future — and it raises tricky questions around quality, business ethics, and artistic ownership. But as tastes and technology continue to evolve, AI will almost certainly begin to play a bigger role in crafting everything from background music to pop radio hits.