
Streaming sites told not to let AI use music for copycat tracks | Music industry



The music industry is urging streaming platforms not to let artificial intelligence use copyrighted songs for training, in the latest of a run of arguments over intellectual property that threaten to derail the generative AI sector’s explosive growth.

In a letter to streamers including Spotify and Apple Music, the record label Universal Music Group expressed fears that AI labs would scrape millions of tracks to use as training data for their models, and then produce copycat versions of pop stars.

UMG instructed the platforms to block that kind of scraping, saying it would “not hesitate to take steps to protect our rights and those of our artists”.

The letter, first reported by the Financial Times, comes after a similar move from the Recording Industry Association of America, the industry’s trade body, last October. Writing to the US trade representative, the RIAA said that AI-based technology was able “to be very similar to or almost as good as reference tracks by selected, well known sound recording artists”.

The group added: “To the extent these services, or their partners, are training their AI models using our members’ music, that use is unauthorised and infringes our members’ rights by making unauthorised copies of our members’ works.”

Although “large language models” (LLMs) such as ChatGPT and Google’s Bard have been the focus of much of the attention on the AI industry, other types of generative AI have made similar leaps in recent months.

Image generators, such as Midjourney and Stable Diffusion, have become accurate enough to generate plausible fakes that fool huge numbers of viewers into thinking, for example, that the pope stepped out in a custom Balenciaga-style puffer jacket.

Music generators are not yet at the same level of mainstream accessibility, but they are able to create convincing fakes of artists such as Kanye West performing new cover versions of whole songs, including Queen’s Don’t Stop Me Now and Kesha’s TikTok.

Other systems, like one demonstrated in a research paper by Google, are capable of generating entirely new compositions from text prompts such as: “Slow tempo, bass-and-drums-led reggae song. Sustained electric guitar. High-pitched bongos with ringing tones. Vocals are relaxed with a laid-back feel, very expressive.”
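The Google system described in that paper has not been released as a public library, so the sketch below is a hypothetical stand-in: a minimal example of text-to-music generation using Meta’s open-source MusicGen model via the audiocraft library, with the article’s prompt passed in as the text description. The model name, clip duration and output file name are illustrative assumptions.

```python
# Minimal sketch of text-to-music generation, assuming Meta's open-source
# MusicGen model (pip install audiocraft) as a stand-in for the unreleased
# Google system described in the paper.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=15)  # length of the clip in seconds

prompt = (
    "Slow tempo, bass-and-drums-led reggae song. Sustained electric "
    "guitar. High-pitched bongos with ringing tones. Vocals are relaxed "
    "with a laid-back feel, very expressive."
)

# generate() returns a batch of waveforms, one per prompt.
wav = model.generate([prompt])

# Save the first clip as reggae_0.wav, with loudness normalisation.
audio_write("reggae_0", wav[0].cpu(), model.sample_rate, strategy="loudness")
```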


Such systems are trained on hundreds of thousands of hours of recorded material, typically collected without explicit consent from their sources. Instead, AI research labs operate under the expectation that their actions are covered by “fair use” exemptions under American law, because the end product, an AI model, is a “transformative work” that does not compete with the original material.

However, sometimes such systems will spit out almost exact copies of material they were trained on. In January, for instance, researchers at Google managed to prompt the Stable Diffusion system to recreate near-perfectly one of the unlicensed images it had been trained on, a portrait of the US evangelist Anne Graham Lotz.
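The Google researchers used a more involved extraction attack, but a crude version of the underlying check, flagging a generated image as a possible near-copy of a known training image, can be sketched with off-the-shelf embeddings. The example below is an illustrative assumption, not the paper’s method: it compares CLIP image embeddings via the transformers library, and the file names and the 0.95 threshold are chosen purely for demonstration.

```python
# Crude sketch: flag a generated image as a possible near-copy of a
# training image by comparing CLIP image embeddings. This is a simplified
# proxy, not the extraction method used in the Google paper.
# Assumes two local files, "generated.png" and "training.png".
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

images = [Image.open("generated.png"), Image.open("training.png")]
inputs = processor(images=images, return_tensors="pt")

with torch.no_grad():
    embeds = model.get_image_features(**inputs)

embeds = embeds / embeds.norm(dim=-1, keepdim=True)  # unit-normalise
similarity = (embeds[0] @ embeds[1]).item()          # cosine similarity

print(f"cosine similarity: {similarity:.3f}")
if similarity > 0.95:  # illustrative threshold, not from the paper
    print("possible near-duplicate of training data")
```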

In the UK, there are other exceptions that support AI labs training models on materials obtained without consent. A recent update to intellectual property law, for instance, allowed non-commercial use of any legally acquired copyrighted material for AI research. In what has been called “data laundering”, the fruits of that non-commercial research can then be used to train commercial models down the line, while still benefiting from the copyright exceptions.

