Creators Are Losing the AI Copyright Battle. We Have to Keep Fighting (Guest Column)

The battle between tech giants and artists over "training data", the creative work AI companies use to build their models, may shape the media landscape for a generation. AI companies want to use creators' work without payment to build models that then compete with those same creators; artists and rights holders are fighting to stop them.

In 2023, I resigned from Stability AI over a disagreement on a fundamental issue: whether it is fair to train generative AI on copyrighted work without permission. Since then, I have been campaigning for a fairer deal for creators from the AI industry. Creators have always been the underdogs in this fight, but for the first time I can see a route to defeat. I don't say this to spread despair; there is still every chance the world settles on a fair balance between the interests of AI companies and creators. But right now the odds are against us, and I think we need to be honest about that.

In many respects, we are winning. Copyright lawsuits against AI companies are multiplying rapidly, with more than 40 underway in the U.S. alone, and more are likely to follow. The first to reach a decision, Thomson Reuters v. Ross Intelligence, went the rights holders' way. The defendant had already gone bankrupt, and the case differs in important respects from the others, but the judge's ruling made clear that the competitive impact of Ross' use of Thomson Reuters' work was central to Ross' defeat. That same competitive impact is at issue in many of the ongoing lawsuits, since AI companies arguably damage the market for the very work they train on, which is why the creative industries took heart from the outcome.

We are also winning in the court of public opinion. In every poll that has asked whether AI companies should be allowed to train on copyrighted work without permission, a majority of the public has sided with rights holders. That is hardly surprising: these companies pay handsomely for the other inputs to their technology, so it seems only fair that they also pay for the creative work that is widely acknowledged to be essential to building it.

The reason I fear we are losing is that governments may simply rewrite copyright law in AI companies' favor. If that happens, the fight may already be over.

The United Kingdom is furthest along in this debate. In December 2024, the government opened a consultation on AI and copyright, but it was hardly neutral: the government announced its preferred option before receiving a single response. That option would give AI companies access to every copyrighted work unless the rights holder explicitly reserved it via a rights-reservation system that does not yet exist. Because most people never opt out under such schemes, this would in practice reverse existing copyright law and hand most of the U.K.'s creative work to AI companies for free.

British creators have been, and remain, furious. Paul McCartney, Elton John, Dua Lipa, Barbara Broccoli and many others have spoken out against the plans. I organized a protest album featuring more than 1,000 British musicians. Newspapers across the country ran identical front pages in protest.

But there is growing concern that the government will ignore these calls for fairness. Peter Kyle, the U.K.'s Secretary of State for Science, Innovation and Technology, recently dismissed the protests as coming from people resistant to change. Despite the united front presented by the country's artists, the government appears determined not to listen.

The U.S. is heading in a similar direction. OpenAI recently told the White House that it is important to preserve AI models' ability to learn from copyrighted material. "Preserve" is their word; rights holders strongly dispute that any such right exists in law. What OpenAI is really seeking is new rules that hand it that right, and it is counting on tech lobbyists close to the president, who have a major stake in AI companies escaping copyright liability, to turn that vision of immunity into reality.

Countries are now eyeing one another nervously. The British government sees itself as competing with California: afraid that the U.S. will ultimately go easy on AI companies, it wants to pre-empt and match that leniency, setting aside the widespread copyright infringement those companies are accused of.

If AI companies do end up with a free license to exploit the life's work of the world's creators, it will be because they successfully played on governments' fear of losing the AI race. Many powerful people in governments around the world believe artificial general intelligence is imminent and want it built at home, and they are being persuaded that the way to get there is to legalize intellectual property theft in the name of progress.

We are losing, but we are not yet defeated, and now is the moment to rally. The threat is that governments will take creators' work and hand it to AI companies without a penny of compensation. Joint letters alone will not stop it. But if we unite, and if we make our voices too loud to dismiss, we can prevent this wholesale plundering of human creativity. Democratic governments must ultimately answer to their citizens, and their citizens are clear that this theft is unacceptable.

Ed Newton-Rex is the founder of Fairly Trained, a non-profit that certifies generative AI companies that take a fairer approach to sourcing their training data. He previously led the audio team at AI company Stability AI.

