To find a single issue that dominated music industry discourse the way AI did in 2023, one would have to look back to the early 2000s, when P2P and digital piracy were the center around which everything else seemed to revolve.
Unlike with P2P, it took the music industry only a matter of months, rather than years, to work through the seven stages of grief over AI. From initial shock at the end of 2022 (when everyone really started to take in what AI, for good and ill, could mean), by the middle of 2023 the industry was accelerating toward the testing stage (though full acceptance is still a little way off).
Inevitably, different divisions of the industry are moving at different speeds here. There are the artists jumping in and seeing AI as a new creative tool that must be experimented with. There are rights companies and platforms that are willing to experiment, but only conditionally. And there are the companies and bodies who wish to put strict rules/commandments/guidelines in place before things race forward and leave them playing a desperate game of catch-up.
We will look, in turn, at what each constituency is doing here and what it believes about AI. Only by understanding what all these different constituencies hope to get out of AI (or how they hope to bring AI to heel) can we truly grasp both the complexities at play and what is at stake for the industry as a whole.
AI and the artist: a whole new canvas and set of paints
It is telling that the first major artist to embrace the creative and financial possibilities of music NFTs in early 2021 is the same major artist now seeing AI as a new form of creative liberation.
In April, Grimes posted that she was happy for anyone to use her voice to generate AI songs as long as they split royalties 50/50 with her. “Feel free to use my voice without penalty,” she tweeted. “I have no label and no legal bindings. I think it’s cool to be fused w a machine and I like the idea of open sourcing all art and killing copyright.”
Soon after, she felt compelled to add in some caveats to her AI openness. She said she would issue “copyright takedowns ONLY for rly rly toxic lyrics w grimes voice”. She stressed “no baby murder songs plz” and no “Nazi anthem unless it’s somehow in jest a la producers I guess” (a reference to the 1967 movie).
She formalized the AI licensing structure later in the year through a deal between her own Elf.Tech platform, CreateSafe and Slip·stream (which operates a music library that is used by over 300,000 online creators). Alive to the possibilities of AI, she is the rare example of an act going far beyond lip service and actually putting systems in place to make the hypothetical real.
Other acts were not necessarily as gung-ho as Grimes, but they were all publicly exploring and debating what new shapes AI could allow them to sculpt.
Holly Herndon, partly in response to what Grimes was doing, sounded a note of caution to other musicians, warning them to not get too caught up in the Utopianism being talked up here by some. “Artists, for the time being it is a good idea not to sign any contract regarding usage of your voice in an AI context,” she tweeted in May. “There is so much room for exploitation between general confusion about AI and FOMO. There is plenty of time. Don’t sign anything until things settle down, or always consult a lawyer.”
“There is so much room for exploitation between general confusion about AI and FOMO. There is plenty of time. Don’t sign anything until things settle down, or always consult a lawyer.”
Holly Herndon
Like Grimes, however, she went from words to actions here. She had already created a “deepfake twin” in 2021 (Holly+), but in November spoke about expanding her existing Spawn tool into Spawning that she said was “building the consent layer for AI”. Within that is the Have I Been Trained? tool that claims to do the forensics to help artists see if their work has been used in AI training datasets.
David Guetta was also looking at the AI options. He declared, “I’m sure the future of music is in AI […] But as a tool.” To show he was fully on board, he teased a clip of a track that, he claimed, used AI to recreate both Eminem’s lyrical style and his voice. He did stress, fully aware of the copyright implications and the possible threat of legal action, that it would not be commercially released.
As the pop world becomes more global and the lingua franca of music becomes less Anglophonic, AI can become an important lyrical tool for musicians. Nowhere was this better illustrated in 2023 than in what Lauv did with AI company Hooky and K-pop singer Kevin Woo, creating a Korean-language version of his track ‘Love U Like That’. It essentially involved Woo translating the lyrics and then using a “Lauv AI effect” to make his singing voice sound like Lauv’s. We can expect more AI-powered pop polyglottism in the coming years.
A number of artists toyed with AI as, in effect, a new type of remix tool. Roberta Flack came out in support of Endel and Warner Music’s Rhino division, which took stems from ‘Killing Me Softly With His Song’ to generate three “soundscape” albums based around different activities (focus/productivity, relaxation, sleep). Meanwhile, The Orb and David Gilmour (via Sony Music) worked with AI company Vermillio to let fans create their own “personalised AI track and artwork” for their Metallic Spheres In Colour album.
It was not just living artists who were part of the great AI experiments of 2023.
The “new” Beatles song ‘Now & Then’ (which went straight to number 1 in the UK on release) was only possible because AI technology (developed by director Peter Jackson and his team for the 2021 Get Back documentary) was able to separate out John Lennon’s voice from his piano on a home recording dating back to 1977. This enabled the surviving Beatles, Paul McCartney and Ringo Starr, to finish a recording that had been technologically impossible to complete in the 1990s, when they (and George Harrison) first worked on it.
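The de-mixing step described above can be sketched in miniature. The actual system built by Jackson’s team is proprietary and learned from data; the toy below instead uses an “oracle” frequency-domain mask on a synthetic two-source mixture, which is the basic signal-processing idea such models approximate. All signals and names here are invented for illustration.

```python
import numpy as np

def separate_with_ratio_mask(mix, ref_a, ref_b):
    """Recover source A from a two-source mixture using a soft
    frequency-domain mask. This is an 'oracle' toy: it is handed the
    reference spectra that a trained de-mixing model would estimate."""
    M = np.fft.rfft(mix)
    A = np.abs(np.fft.rfft(ref_a))
    B = np.abs(np.fft.rfft(ref_b))
    mask = A / (A + B + 1e-12)                # per-bin weight in [0, 1]
    return np.fft.irfft(mask * M, n=len(mix))

# Synthetic stand-ins: a 440 Hz "voice" and a 110 Hz "piano", mixed together.
sr = 8000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 440 * t)
piano = 0.8 * np.sin(2 * np.pi * 110 * t)

voice_est = separate_with_ratio_mask(voice + piano, voice, piano)
```

Because the two synthetic sources sit in different frequency bins, the mask recovers the “voice” almost perfectly; a real voice and piano overlap heavily in frequency, which is exactly why learned models are needed.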
Around the same time as the release of ‘Now & Then’, the estate of Édith Piaf (who died in 1963) and Warner Music Group announced that AI would be used on archive recordings of her voice to enable her to “narrate” a new documentary about her life and music.
The Beatles and Piaf are two strong examples of the mainstreaming of AI with regard to deceased artists, making the impossible possible by creating new products that were inconceivable even a decade ago.
Not all artists felt the same way about AI.
The controversy around what became known as the “Fake Drake” and The Weeknd track ‘Heart On My Sleeve’ by Ghostwriter977 hit critical mass and put Universal’s lawyers into scramble mode, demanding it be pulled from DSPs. Ghostwriter977’s (unnamed) manager was taking the long view on it all, arguing this will become the norm and that they were merely first out of the trenches and so were taking most of the bullets.
“I like to say that everything starts somewhere, like Spotify wouldn’t exist without Napster,” they said. “Nothing is perfect in the beginning. That’s just the reality of things. Hopefully, people will see all the value that lies here.”
Nick Cave, when asked about ChatGPT and its ability to pump out lyrics in the style of any writer, did not hold back. He referred to a ChatGPT-produced set of “Nick Cave” lyrics as “bullshit” and a “grotesque mockery of what it is to be human”.
Maybe this is what AI will be – for now, at least. A form of creative karaoke. An approximation of the real thing, but without the grit and heart that is required to elevate notes, chords, melodies and words to the level of great art (what Cave termed a “blood and guts business”).
AI and tentative steps towards experimentation by the industry
It would be wrong to say the jury is still out with regard to how the wider music industry is thinking about AI. It is simultaneously in and out: a bit like Schrödinger’s Cat (or Schrödinger’s ChatGPT).
As with everything in the music business involving licensing decisions, no one wants to make a total leap into the unknown in case they get it dramatically and painfully wrong, setting a precedent that could haunt them for years.
But there was certainly an opening up to AI by the industry in 2023, even if it fell short of total embrace.
Reservoir Music CEO Golnar Khosrowshahi wrote an op-ed for Variety in June outlining her views and where she is spotting the positives. “AI can create — and already is creating — efficiencies across the industry,” she argued of the varied office-centric applications of it. But her enthusiasm was not confined to the functional benefits of AI. “Used correctly, AI can actually help us preserve and protect copyright — versus the present fear of usurping it,” she proposed.
“Used correctly, AI can actually help us preserve and protect copyright — versus the present fear of usurping it.”
Golnar Khosrowshahi, Reservoir Music
“Through audio fingerprinting, AI tools that verify authorship in real time will help reduce the unnecessary litigation that can be based on subjective interpretations or human error. AI will also equip both owners and distributors of content (i.e. streaming services) with significant changes in how we classify and catalog music (e.g., the micro categories that we can use to further define characteristics and attributes of songs). Not only can we then better understand the music, but we can also be more efficient at micro licensing, delving into why listeners love what they love, both in the moment in the context of a trend, and over time when it comes to standards and classics.”
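Audio fingerprinting, as invoked in the quote above, works by reducing a recording to compact hashes of its most robust spectral features, so that two copies of the same work can be matched automatically. A minimal sketch of the idea (not any specific product’s algorithm, and far cruder than production systems):

```python
import hashlib
import numpy as np

def toy_fingerprint(signal, frame=1024):
    """Hash the dominant frequency bin of each frame of audio.
    Production fingerprinters hash constellations of spectral peaks and
    survive noise, compression and speed changes; this toy only survives
    mild noise."""
    hashes = []
    for start in range(0, len(signal) - frame + 1, frame):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame]))
        peak_bin = int(np.argmax(spectrum))
        hashes.append(hashlib.sha1(str(peak_bin).encode()).hexdigest()[:8])
    return hashes

# A stand-in "recording" and a slightly noisy copy of it.
sr = 8000
t = np.arange(sr) / sr
track = np.sin(2 * np.pi * 440 * t)
noisy_copy = track + 0.02 * np.random.default_rng(0).standard_normal(sr)
```

The noisy copy yields the same hashes as the original, while a different “song” (say, a 660 Hz tone) does not, which is the matching property a rights-verification system would rely on.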
David Israelite, head of the National Music Publishers’ Association, was weighing up the pros and cons of AI in an address at a meeting of the Association of Independent Music Publishers in early 2023. “This threatens the entire music economy,” he said, bluntly. “I think that is pretty clear.” He did, however, say that it was important for the industry to look at where the positives could be, coupled with a sprinkling of pragmatic realism. “I don’t have many answers for you today, other than what I’m hoping is that as an industry, we approach these AI issues with the mindset of this is not necessarily bad. It doesn’t matter anyway, because we’re not going to control it. Instead: what are the opportunities? And how do we engage with it in a productive way, so we don’t look back and say, ‘It took us 20 years to figure out how to deal with AI like we did with digital music?’”
On the label side, a significant appointment was made by Sony Music in the UK when it named Geoff Taylor, previously CEO of the BPI, as its EVP of artificial intelligence (a new role, of course). It is a sign of how seriously the company is taking it all and how important it sees AI in terms of its future.
Universal Music was also making proactive steps here, with chairman and CEO Lucian Grainge saying the major is open to working with AI companies – as long as they are legitimate and respectful of copyright. “My philosophy for the company has always been we should be, and can be, the hostess with the mostess,” he said in April. “We’re open for business with businesses which are legitimate, which are supportive, and [with] which we can create a partnership for growth.”
Making good on that, Universal signed a deal with AI company Endel in May, terming it a “strategic relationship to enable artists and labels to create soundscapes for daily activities like sleep, relaxation, and focus by harnessing the power of AI”. So it is using AI around music but within very tight parameters.
Publishers were not shying away from experimenting here, with Boosey & Hawkes publishing the sheet music for ‘I Stand In The Library’ (a 16-minute choral piece) that was composed by Ed Newton-Rex, VP of audio at Stability AI, who used GPT-3 to generate and refine lyrics for it. “I had to do some curating of the text: that involved asking it to rewrite lines on around 10 occasions,” he told Music Ally. “Apart from that, it was entirely generated by the AI.” He added a fascinating detail about the copyright registration for the composition. “Interestingly, they are registering it as an instrumental, because there’s no mechanism to register a combined work with PRS when part of it was generated by AI.” As this technology progresses, and as it becomes normalized, such issues may dissolve.
While music rightsholders were moving at a particular speed here, technology companies were trying to get to the future much more quickly. In January, Google announced its MusicLM AI model, which could generate “high-fidelity music from text descriptions”, having trained it on 280,000 hours of music drawn from the Free Music Archive dataset. By May, Google had opened it up for people to experiment with.
Within the Google family, YouTube was also champing at the bit. But it acknowledged that copyright owners would want safeguards in place. It published a blog post in November outlining its “approach to responsible AI innovation”. Part of that was saying it had takedown procedures in place if AI-generated content infringed copyright.
“[I]n the coming months, we’ll make it possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice, using our privacy request process,” it said. “We’re also introducing the ability for our music partners to request the removal of AI-generated music content that mimics an artist’s unique singing or rapping voice. In determining whether to grant a removal request, we’ll consider factors such as whether content is the subject of news reporting, analysis or critique of the synthetic vocals.”
YouTube swiftly followed this up with news of its Dream Track in YouTube Shorts experiment where a range of acts (Alec Benjamin, Charlie Puth, Charli XCX, Demi Lovato, John Legend, Papoose, Sia, T-Pain and Troye Sivan) were opening up AI-generated versions of their voices that could be used by “a small group of select US creators” to “create unique soundtracks of up to 30 seconds” in their video posts on YouTube Shorts.
“Imagine in [the] early 2000s, if the file sharing companies came to the music industry and said, ‘Would you like to experiment with this new tool that we built and see how it impacts the industry and how we can work together?’ It would have been incredible.”
Robert Kyncl, Warner Music
Robert Kyncl, head of Warner Music (and, tellingly, a former YouTube executive), praised YouTube for involving the music industry here and working with it, rather than against it. “Imagine in [the] early 2000s, if the file sharing companies came to the music industry and said, ‘Would you like to experiment with this new tool that we built and see how it impacts the industry and how we can work together?’ It would have been incredible,” he said. “Obviously, that didn’t happen. So this is the first time that a large platform at a massive scale, that has new tools at its disposal is proactively reaching out to its partners to test and learn.”
All of these are significant moves, showing that the industry has learned the lessons of the early 2000s and better understands that technology has to be worked with: ignoring it and/or litigating against it are not always the best responses.
It does not, of course, follow that AI companies are free to do whatever they want. The industry has its conditions, a great many conditions, that need to be met here. This, the industry is keen to make clear, cannot be one-way traffic.
Conditional discharges: the industry will embrace AI, but only when the rules of engagement are made clear (and agreed to)
New terrain requires cartographers. With regard to music and AI, multiple stakeholders are stepping forward to map out the paths that must be taken through this new land. There is some crossover in the maps they are drafting, and there is some divergence.
Below are the bodies and companies looking to put guidelines and safeguards in place for when AI meets music copyrights, and to determine what the specifics of those guidelines and safeguards should be.
Grammys: the awards body announced in July that “AI, or music that contains AI-created elements” could qualify for a Grammy nomination. There was, however, a key caveat. “What’s not going to happen is we are not going to give a Grammy or Grammy nomination to the AI portion,” said Recording Academy CEO Harvey Mason Jr. It seems a very grown-up approach to understand that AI will be something that creators will lean on and that genuine art can spring forth from this.
Deezer: the DSP said it would sniff out AI-generated music on its platform and “develop a system for tagging music that has been created by generative AI, starting with songs using synthetic voices of existing artists”. A bit like explicit-content labels on DSPs, this will make it clear to the user if AI created (or largely assisted in the creation of) what they are playing.
Believe: similar to what Deezer is doing, Believe says it has developed software that can identify if music has been created using AI. Its AI Radar claims to be able to spot AI-created recordings with 98% accuracy and pinpoint deep fakes with 93% accuracy.
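Headline figures like these are typically derived from a confusion matrix over a labeled test set. A quick illustration of the arithmetic (the numbers below are invented for illustration, not Believe’s actual evaluation data):

```python
def accuracy(tp, tn, fp, fn):
    """Share of all test examples the detector classified correctly:
    true positives plus true negatives over the total."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical run: 1,000 test tracks, 980 classified correctly -> 98%.
acc = accuracy(tp=490, tn=490, fp=10, fn=10)
```

A detector can score differently on different tasks (here, 98% on AI-created recordings overall but 93% on deepfaked voices) because each figure comes from a separate test set with its own confusion matrix.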
Human Artistry Campaign: made up of over 40 organizations (including the IFPI, the RIAA, the BPI, the NMPA, ASCAP, SESAC and SoundExchange), it drew up its seven core principles for music and AI. These are: 1) technology has long empowered human expression, and AI will be no different; 2) human-created works will continue to play an essential role in our lives; 3) use of copyrighted works, and the use of voices and likenesses of professional performers, requires authorization and free-market licensing from all rightsholders; 4) governments should not create new copyright or other IP exemptions that allow AI developers to exploit creators without permission or compensation; 5) copyright should only protect the unique value of human intellectual creativity; 6) trustworthiness and transparency are essential to the success of AI and protection of creators; and 7) creators’ interests must be represented in policymaking.
The Council Of Music Makers: the new body outlined what it called its “five fundamentals for music and AI”. These are: 1) where licensing deals are negotiated in respect of AI technologies, the explicit consent of individual music makers must be secured before music is used to train AI models; 2) the publicity, personality and personal data rights of music makers must be respected; 3) where permission is granted, music makers must share fairly in the financial rewards of music AI, including from music generated by AI models trained on their work; 4) as AI companies and rightsholders develop licensing models, they must proactively consult music makers and reach agreement on how each stakeholder will share in the revenue from AI products and services; and 5) AI-generated works must be clearly labeled as such and AI companies must be fully transparent about the music that has been used to train their models, keeping and making available complete records of datasets.
ASCAP: The body outlined its six core principles for music and AI. These are: 1) prioritizing rights and compensation for human creativity; 2) transparency in identifying AI versus human-generated works and retaining metadata; 3) protecting the right to decide whether your work is included in an AI training license; 4) making sure creators are paid fairly when their work is used in any way by AI, which is best accomplished in a free market, not with government-mandated licensing that essentially eliminates consent; 5) credit when creators’ works are used in new AI-generated music; and 6) a level playing field that values intellectual property across the global music and data ecosystem.
IMPF: the Independent Music Publishers’ International Forum set out its four key principles for generative AI. These are: 1) seeking express permission for the use of music in the machine-training process; 2) keeping records of the musical works used in the machine-training process; 3) labeling of AI-generated music; and 4) status of purely AI-generated musical works (i.e. labeling music that has no human input).
EU AI Act: it is still very much in motion, but on 11 December the European Parliament and European Council reached a provisional agreement on the shape and the scope of the EU’s Artificial Intelligence Act. It cannot become law until it is formally adopted by the Parliament and the Council (after which, as a regulation, it will apply directly across member states). So there are still some “ifs” to be settled. The IFPI called it a “constructive and encouraging framework” while GEMA termed it “a step in the right direction” with the caveat that it will “need to be sharpened further on a technical level”. It is obviously not purely focused on music and AI, but there is enough here to have an impact on the music sector and how it moves forward with AI. The IFPI added, “While technical details are not yet finalized, this agreement makes clear that essential principles – such as meaningful transparency obligations and respect of EU copyright standards for any GPAI model that operates in the internal market – must be fully reflected in the final legislation and its concrete application if we are to achieve our mutual goals.”
UK Music: Jamie Njoku-Goodwin, then head of the industry body, outlined its position on AI to the UK government in June. He stressed that what it termed “music laundering” through AI needed to be stamped out. He called it, “[A] process where you could steal someone’s work, feed it into an AI, and then generate clean, ‘new’ music, just as a money laundering operation might do with stolen money.”
RIAA: In June, the US trade body demanded that Discord shut down the AI Hub (which had 142,000 members), citing mass copyright infringements. “This server(s) is/are dedicated to infringing our members’ copyrighted sound recordings by offering, selling, linking to, hosting, streaming, and/or distributing files containing our members’ sound recordings without authorization,” it said.
ASCAP (again): In a letter to the US Copyright Office, ASCAP made it clear it did not want the training of an AI model with copyrighted music to be classed as “fair use”.
The AI debate will roll through 2024 and beyond. New uses (both good and bad) will emerge. Applications of AI in certain contexts will be normalized. Groundbreaking deals will happen. Outrage will be caused by “maverick” operators seeking to disrupt the consensus. Lawsuits will be fired out with the goal of shutting down bad actors and nailing down precedents. Fortunes will be made. Fortunes will be lost. Fortunes will be squandered. So too will opportunities. Bullets will be dodged. Bullets will connect with frightening accuracy. Cheers and boos will rise up to meet new developments in roughly equal measure. Great progress will be made. So too will be great mistakes.
All in all, just another year in the music business.