
AI in ASIA

Paul McCartney's Concerns: AI Copyright in the Creative Industry

Sir Elton John and Sir Paul McCartney are raising concerns over AI's impact on artists' copyrights.

Intelligence Desk | 4 min read

AI Snapshot

The TL;DR: what matters, fast.

Paul McCartney and other artists are concerned about AI models using their work without permission or payment.

The UK's Data (Use and Access) Bill is being reviewed with amendments to ensure creators are compensated when their work trains AI.

Beyond royalties, artists worry about the authenticity of AI-generated content that mimics their style or voice without consent.

Who should pay attention: Creative industries | Musicians | AI developers | Policy makers

What changes next: Debate is likely to intensify regarding AI copyright and artist compensation.

Sir Elton John and Sir Paul McCartney are calling out AI for ripping off artists' work without paying a dime.

They're backing changes to the Data (Use and Access) Bill to protect copyrights in the age of generative AI.

This is a global wake-up call: AI is amazing, but can creators afford to lose control of their own art?

What’s the Fuss About?

If you’ve been paying attention to the creative world lately, you’ve probably heard a lot about AI "stealing" from artists. Sounds dramatic, right? Well, it’s not just hype. Big names like Sir Elton John and Sir Paul McCartney are making some noise about how AI is being trained on artists’ works—without permission or payment.

Here’s the deal. AI systems, like the ones used to create fake Drake songs or uncanny art, need heaps of data to learn. That data? Often, it’s pulled from publicly available sources, which means your favourite song, artwork, or book might have been used to teach an AI how to mimic its style. And guess what? Nobody’s cutting cheques for the original creators. For more on the impact of AI on creative works, see our article on how AI Artists are Topping the Charts Weekly.

The Legal Battleground: The Data (Use and Access) Bill

This is where the Data (Use and Access) Bill comes in. Right now, it’s under review in the UK, and some suggested amendments could be a game-changer. If approved, they’d make sure creators have a say (and get paid) when their work is used to train AI. Think of it as copyright protections 2.0—designed for the AI era. Taiwan has also been active in this space, with its AI Law Quietly Redefining What “Responsible Innovation” Means.

Sir Elton and Sir Paul argue this is essential. Without such protections, creators might lose control of their own work, leaving the door open for corporations to profit off their creativity without a second thought. And let’s face it: that’s not a future anyone wants.

McCartney's concerns are shared by a coalition of publishers, artists' groups, and media organisations known as the Creative Rights in AI Coalition, which opposes weakening copyright protections.

Why Creators Are Worried

The backlash isn’t just about royalties (although, let’s be honest, that’s a big part of it). It’s also about authenticity. Imagine an AI-generated song using Sir Paul’s voice—but without his input or consent. Is it still "his" music? And if the lines between real and fake keep blurring, what happens to trust in the creative industry? The music industry is already grappling with these issues, as Spotify cuts 75 million tracks as AI music flood forces streaming rethink.

The tension is real:

Creators say AI is exploiting their work without permission.

AI advocates argue it's all "fair use" and promotes innovation.

Fans? They're caught in the middle, wondering if the next viral song is even legit.

What’s Next for AI and Copyright?

The future of copyright and AI is still being written (pun intended). If the amendments to the Data (Use and Access) Bill pass, it could set a global precedent for how we protect creativity in the AI age. But legislation is only part of the solution.

Here’s what needs to happen:

Transparency: Companies need to be upfront about where their training data comes from.

Fair Compensation: If you're using someone's work, pay them for it. Simple.

Collaboration: Artists, lawmakers, and tech firms must find a balance that works for everyone.

Platforms like OpenAI are starting to take small steps, allowing rights holders to opt out of having their work used for training (source: OpenAI Blog, https://openai.com/blog). But let’s not kid ourselves—there’s a long way to go. This shift highlights a broader discussion around data ethics in AI, which is also a key concern for India's AI Future: New Ethics Boards.
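Opt-out mechanisms like the one mentioned above are typically signalled through a site's robots.txt file, which crawlers that gather training data can check before fetching content. As a minimal sketch (the `GPTBot` user agent is OpenAI's documented crawler name; the example site and URLs are hypothetical), a respectful crawler might gate every fetch like this:

```python
from urllib.robotparser import RobotFileParser

# A rights holder opting out of AI training crawls might publish
# rules like these in their robots.txt:
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /
"""

def may_crawl(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True only if robots.txt permits this agent to fetch the URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# The AI crawler is blocked; an ordinary agent is not.
print(may_crawl(ROBOTS_TXT, "GPTBot", "https://example.com/lyrics"))      # False
print(may_crawl(ROBOTS_TXT, "SearchBot", "https://example.com/lyrics"))   # True
```

The catch, of course, is that robots.txt is purely advisory: it only protects creators if crawlers choose to honour it, which is exactly why artists are pushing for legal backing rather than voluntary measures.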

You can watch the interview with Paul McCartney here.

You can read more about the proposed legislation and its potential impact on AP News.

The Big Question

AI is undeniably powerful, but it doesn’t replace human creativity. It’s like giving a robot a paintbrush—it can make something impressive, but does it have soul?

What do you think? Should AI have free rein to use whatever it wants, or is it time for tighter rules to protect creators?



This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.



Latest Comments (3)

Benjamin Ng (@benng) | 18 February 2026

we're looking at using synthetic voices for our tutoring platform to scale up personalized feedback. but this McCartney/Elton John thing about unauthorized training data - totally valid concern. wouldn't want to inadvertently use something copyrighted for our LLM development, gotta make sure we source ethically.

Zhang Yue (@zhangy) | 15 April 2025

The article mentions the Data (Use and Access) Bill and copyright protection 2.0. From a computer vision perspective, I wonder how these legal frameworks will specifically address models like Qwen or DeepSeek, which are trained on vast, often multilingual, datasets. Will the "say and get paid" mechanism be practical to implement for every piece of data in such immense training sets? This seems like a significant technical and logistical challenge.

Wei Ming Tan (@weiming) | 11 February 2025

reading this now, it's a bit ironic since a lot of the 'publicly available' data used for training models often comes from sites with terms of service that explicitly forbid scraping. we've dealt with this in some internal projects where we need to ingest data for analysis. it's not always simple to determine what falls under "fair use" for these kinds of AI training datasets, especially when it's not for a commercial product.
