AI music appears to have broken through, with a parody song about immigrants, created entirely by artificial intelligence, charting in Germany’s Top 50 for the first time in recent days.
The track, Verknallt in einen Talahon, went viral on TikTok and in less than a month amassed 3.5 million streams on Spotify, where it reached No 3 on the streaming platform’s global viral chart. On Thursday, it was just outside the top 10.
The song’s creator, Josua Waghubinger, who goes by the name Butterbro, has said he made the track’s chorus by feeding his own lyrics into Udio, a generative AI tool that can create vocals and instrumentation from simple text prompts, The Guardian reported.
AI music tools enable anyone to create songs, as many as ten a second, from a single prompt on a computer or smartphone. This has led to a wave of fake and unlicensed content that may infringe copyright law and has the music industry racing to protect its profits. Record labels are taking AI music companies to court and striking deals with social media partners.
But does it matter whether the music we are listening to was created by a human or a machine? And can AI music and human-made content coexist in harmony?
Following in the same vein as large language models (LLMs) like OpenAI’s ChatGPT and image-based platforms like Midjourney, AI-generated music services are quickly going mainstream.
Music was initially a more challenging medium, with AI-generated tracks lagging far behind both image and language models in quality and consistency. But the sudden, exponential growth of AI music startups such as Udio and Suno has produced far more convincing content, generated from a prompt as simple as “a 90s-style love song”.
The lightning speed of AI’s technological advances has placed the music industry on the back foot, with industry unions and major labels scrambling to protect copyrighted material.
Generative AI models require original data and content in order to imitate a sound: for an AI music tool to produce the sound of 90s grunge or synth pop, it has to be fed music from that genre. Without access to a library of human expression and creative output, machines cannot generate their artificial approximations.
AI is increasingly being used by music producers to generate vocals in the style of well-known singers. In 2023, the Beatles released Now and Then, a track that used the technology to isolate John Lennon’s vocals.
And, in April, a song featuring an AI-generated version of Tupac Shakur’s voice was uploaded to Drake’s Instagram account. It disappeared soon after, when lawyers for the late rapper’s estate reportedly threatened to sue.
Earlier this year, artists including Stevie Wonder, Billie Eilish, Pearl Jam and dozens of others signed an open letter warning that AI-generated music trained on their recordings could “sabotage creativity” and sideline human artists.
The training of AI models on human-created material has raised a host of ethical and copyright questions: are artists aware that their material is being used to train AI models? And are listeners aware that they are listening to a song generated by a machine?
Can the Courts Intervene?
A lawsuit against music generation startups Udio and Suno filed in June by the Recording Industry Association of America (RIAA), which represents UMG, Sony, and Warner among others, marks the music industry’s first attempt to combat the technology through the courts.
Several US court cases related to the use of large language models and other AI generators trained on the output of the creative industries are currently advancing through the American judicial system.
The RIAA lawsuit alleges Udio and Suno trained their AI models using copyrighted music. Whilst the recording association acknowledges in court papers that “artificial intelligence (“AI”) and machine learning are the next frontier of technological development”, it goes on to outline the potential for abuse and argues that, “There is nothing that exempts AI technology from copyright law or that excuses AI companies from playing by the rules.”
Suno has since publicly admitted using copyrighted material to train its AI model, but claims the circumstances fall under “fair use”, a US legal doctrine that permits the unauthorised use of copyright-protected works in certain circumstances.
A US Supreme Court ruling on fair use in 2023 could have an outsized impact on music cases because it focused largely on whether a new use serves the same commercial purpose as the original work. AI-generated songs operate in the same market as the copyrighted material, with a growing number of self-styled AI artists profiting from putting out AI music.
In court papers, Suno and Udio argue that the lawsuits are attempts to stifle smaller competitors, comparing the labels’ protests to past industry concerns about synthesizers, drum machines and other innovations replacing human musicians. Suno has reached an estimated valuation of $500 million in just two years, whilst newcomer Udio secured $10 million in seed funding in April. The court case could have profound implications but is likely to take years.
Whether the RIAA is successful or not, smaller music labels and independent artists are unlikely to see the benefit. Whilst a ruling against Udio and Suno’s use of copyrighted material would apply broadly to all copyrighted songs, smaller labels and independent artists lack the resources needed to lodge a viable infringement case of their own.
The Role of Social Media
Germany’s controversial AI track first gained popularity on TikTok. Earlier this year, Universal became involved in a very public, and at times ugly, dispute with the video-sharing platform over AI music, and removed its catalogue from the site. An agreement was eventually reached, but TikTok remains littered with unlabelled AI music.
Agreements with social media platforms continue to be reached: last week, Meta and Universal Music Group (UMG) announced a renewed partnership that includes specific references to AI music.
In a statement on the expanded multi-year licensing partnership, which began in 2017, UMG chief digital officer and executive vice president Michael Nash said: “We look forward to continuing to work together to address unauthorised AI-generated content that could affect artists and songwriters so that UMG can continue to protect their rights both now and in the future.”
Details remain sparse, but the inclusion of, and focus on, unauthorised AI-generated music points to the growing unease within the music industry over the technology’s use of copyrighted music. The agreements Universal has reached with TikTok and Meta suggest a form of cooperation is possible.
Efforts to Create a Legislative Framework
Outside the courtroom, political efforts continue to create ethical guardrails for the use of AI that protect human-made content.
In the UK, the All-Party Parliamentary Group on Music released a report in May calling for robust legislation. The group recommended an “ambitious UK AI Act”, a creative industry task force, labelling for AI-generated content and a specific “personality right” to protect creatives and artists from deepfakes.
A UK AI Act did make its way into the King’s Speech at the State Opening of Parliament and featured briefly in Labour’s manifesto, but further details of a bill have yet to emerge.
At a European level, the EU AI Act will create a headache for corporations and start-ups pushing generative AI models, in much the same way the EU Digital Services Act is currently doing for social media companies including X and Meta.
The legislation is the first of its kind and places an onus on companies to lift the lid on their AI training systems and to ensure AI content is appropriately labelled.
But the bigger challenge is global cooperation: to create an ethical framework that protects human content, countries need to work at an international level towards shared best practice, codes of ethics and comparable legislation.
In the meantime, AI music will continue its rapid expansion, with many of us unaware whether the song accompanying an Instagram video, or popping up on a Spotify playlist, was created by a human or a machine.