Hollywood’s Divide on Artificial Intelligence Is Only Growing

The arrival of artificial intelligence hit Hollywood like an earthquake.

In 2022, the threat of layoffs, cost cutting and a looming writers’ strike hung over the industry, causing widespread uncertainty. The tide turned that fall, when OpenAI unveiled an early version of ChatGPT, marking a significant shift in public perception of the technology. A chain reaction followed, with “Heart on My Sleeve,” a song featuring AI-generated vocals mimicking Drake and The Weeknd, among the first examples of the new era.

SAG-AFTRA and the Recording Industry Association of America dispatched their lobbying teams to Washington, D.C., according to sources close to the matter. Those efforts gained the support of Senators Chris Coons, Marsha Blackburn, Amy Klobuchar and Thom Tillis, who went on to release a preliminary draft of legislation aimed at shielding performers’ likenesses and voices from unauthorized AI-generated replicas. A revised version of the bill was introduced on July 31, a significant step forward in the ongoing discussion about AI regulation.

SAG-AFTRA general counsel Jeffrey Bennett says the push against fake content was already in motion before the strike, beginning immediately after ChatGPT and Fake Drake. Approaching the Senate wasn’t difficult, he explains, because people are now aware of the issues at hand.

The studios, meanwhile, watched from the sidelines, holding back until they had a clearer sense of how the bill would take shape. For them, it wasn’t simply a matter of supporting or opposing the measure; it was about preserving creative freedom, specifically the ability to use digital replicas in parodies and documentaries, according to sources familiar with the studios’ lobbying efforts who spoke to The Hollywood Reporter.

Upon the bill’s introduction, the Motion Picture Association (MPA), the studios’ trade group, praised its built-in protections against stifling free speech, stating, “These safeguards are crucial for any legislation to endure.”

The MPA has a range of priorities when it comes to artificial intelligence. Beyond safeguarding its members’ vast libraries of films and television shows from AI companies that may be using them to train their systems, the studios want to head off machine-made content that could compete with their own productions. And because they produce an enormous amount of content themselves, they are keenly aware of the role AI tools could play in creating it in the future.

The split reflects a growing rift between Hollywood’s unions and studios on issues related to AI.

It’s a divide that executives may need to bridge diplomatically. On Netflix’s July 18 earnings call, co-CEO Ted Sarandos was asked about the potential impact that generative AI would have on content creation. The exec replied that AI “is going to generate a great set of creator tools, a great way for creators to tell better stories” but noted “there’s a better business and a bigger business in making content 10 percent better than it is making it 50 percent cheaper.”

The debate over AI has also been quietly unfolding in an unexpected venue: the U.S. Copyright Office, which has been deliberating on questions at the intersection of copyright and AI. On Wednesday, the office released a report stressing the urgent need for legislation addressing deepfakes. It has been consulting with representatives from the Writers Guild of America, Directors Guild of America, SAG-AFTRA, the MPA and others, and those discussions suggest that Hollywood’s unions and the tech companies pioneering AI, some of which have established a presence in the industry, are headed for a confrontation over the use of AI tools in the filmmaking process.

In those proceedings, the unions found themselves at odds with the MPA, which was joined by Meta, OpenAI and tech lobbyists. The main point of disagreement: whether new laws should be enacted to address the unlicensed use of copyrighted material for training AI systems, as well as the mass creation of works that resemble existing content closely enough to potentially infringe copyrights.

The studios not only defended the adequacy of existing copyright law; they also advocated for more lenient standards for AI-generated works. They took issue with the inflexibility of the Copyright Office’s rule that copyright can be awarded only to human creations, arguing that it fails to acknowledge the creative input humans provide when using AI as a tool.

That question is shaping up as a significant battleground for the use of automated tools, since the lack of copyright eligibility for AI-generated works is one of the primary obstacles to the widespread adoption of AI tools in the production process.

Mark Goffman, whose credits include Bull, Limitless and The West Wing, said at the AI in Entertainment conference in May that writers’ contracts typically require studio approval to use such tools, and that many studios have policies against the practice. He added that legal issues around chain of title, along with the requirement to sign a ‘certificate of originality’ attesting that the writer produced the script independently, often impose further restrictions.

Despite those restrictions, large language models are increasingly being used, even in the writing process. As Momo Wang, a director of animation at Illumination (Minions, Despicable Me, Sing), said at an AI conference, “I employ [large language models] for research.” He added, “Initially, I compose a story in Chinese and then translate it to English using the LLM. This method is more convenient for me compared to any translation software available.”

Among filmmakers, the debate over AI tools has been heated. OpenAI’s Sora has drawn particular attention since its February unveiling as a model capable of generating hyperrealistic clips from short text prompts. OpenAI, which is grappling with internal disputes over the safe deployment of its technology (most recently escalated by a lawsuit that former board member Elon Musk filed on Aug. 5 over the company’s shift toward profit), has been showcasing videos made with Sora around Hollywood, collected from beta testers offering feedback to improve the technology.

Within Hollywood’s creative ranks, excitement about the innovations these tools could bring is tempered by concern for the rights of creators.

Laura Blum-Smith, the WGA’s senior director of research and policy, argues that AI, at its core, doesn’t have the ability to produce anything new, a point she has raised in discussions with the relevant agencies and organizations.

The Directors Guild of America has been treading more cautiously, voicing concerns about potential misuse of the technology while anticipating that its members will embrace it as part of the filmmaking process. At the same time, the DGA is backing the WGA’s push on moral rights, which would recognize writers and directors as the authors of their work and give them greater financial and creative control over their creations even when they don’t hold the copyrights.

DGA national executive director Russell Hollander says the guild isn’t pessimistic about AI; rather, it views the technology as a valuable asset in filmmaking when used responsibly. But like any powerful tool, he says, there need to be clear boundaries, and its role should be to aid filmmaking rather than undermine it.


2024-08-06 20:55