Opinion

Scarlett Johansson attends the 76th annual Cannes Film Festival press conference for the film Asteroid City in Cannes, France, on May 24, 2023. Yara Nardi/Reuters

Gus Carlson is a U.S.-based columnist for The Globe and Mail.

A sure sign that a situation is beyond hope – or nearly so – is when government involvement seems like a good idea.

That’s doubly true for anything having to do with technology, and probably 10 times truer when it comes to artificial intelligence.

Yet that’s where a growing cast of Hollywood entertainers whose voices and likenesses have been replicated by AI without permission finds itself. Among them are the singers Drake, The Weeknd and Lainey Wilson, and the actor Scarlett Johansson, who has sued OpenAI over a voice assistant that sounds like the AI character she voiced in the 2013 film Her.

Many stars are pinning their hopes on a bipartisan proposal the U.S. Congress will consider in June that would stop individuals or companies from using AI to produce unauthorized digital replicas of their likenesses and voices. It’s called the Nurture Originals, Foster Art, and Keep Entertainment Safe – or No Fakes – Act, a clunky handle clearly coined by bureaucrats, not creatives. AI probably could have done better.

For the non-techie, this is not a case of an impersonator mimicking voices or a tribute band covering an artist’s hits. These are computer-generated slices of real people, remanufactured and remixed into cloned performances. Aside from being hugely creepy, the practice basically constitutes identity theft, trademark infringement and dirty business.

Unfortunately, there is probably little lawmakers or the courts can do about it in any substantial way. Here’s why:

The determining commercial factor will be consumer preference, and more and more consumers simply don’t care. Increasingly, those raised in the digital world don’t distinguish between creative content generated by humans and content produced digitally, as long as their need for instant gratification is satisfied. If they can get their Netflix series faster and without long gaps between seasons, it doesn’t matter whether new episodes are generated by AI or by real people.

As long as that insatiable and non-discriminating demand for product drives the market, companies will readily weigh the risk-reward of bootlegging creative content and, more often than not, take the risk.

That’s the subtext of Ms. Johansson’s lawsuit: The profit motive has made the purveyors of AI brazen to the point where they won’t take no for an answer. She alleges OpenAI tried to hire her to voice an AI assistant for its ChatGPT platform and, when she refused, moved ahead with the project anyway using a digitally mastered sound-alike voice.

The lawsuit has broader implications, exposing OpenAI’s co-founder and CEO, Sam Altman, as part of the AI problem the creative world fears, not the solution.

Beyond the dust-up with Ms. Johansson, OpenAI has not distinguished itself as a reliable white knight in managing the rapid evolution of AI – or itself.

Its leadership has been a dog’s breakfast. In a high-profile decision-making whipsaw last November, the OpenAI board suddenly dismissed Mr. Altman, then reinstated him only days later. It was bizarre behaviour for a company that considers itself a beacon of reason and security in the fast-emerging world of AI, and it did not instill confidence in the marketplace that the organization could safeguard itself, let alone human creativity.

And then there’s the government element. If you’ve ever seen coverage of U.S. congressional hearings on tech issues, you have some idea of the gaping hole in lawmakers’ understanding of even the most fundamental elements of the digital world, including the internet and social media.

Creatives hoping the No Fakes Act will be a silver bullet should watch reruns of the committee hearings at which Meta’s Mark Zuckerberg and other tech bigwigs testified. Congress’s level of knowledge is so far below zero that if you believe its members can craft valuable regulation on something as fast-moving and multifaceted as AI, well, I have a slightly used Atanasoff-Berry Computer you can buy on the cheap.

There was a glimpse of the rising influence of AI in entertainment during the Hollywood writers’ strike last year, when the union and studios tussled over including language in the new contract that limited the use of AI in the writing and producing of films and television shows.

The studios eventually agreed to provide some protection, but the agreement probably won’t hold if filling the insatiable content pipeline can be done more cost-effectively with shortcuts such as AI than by employing real humans.

It may not be time for entertainers to pray to St. Jude yet, but regulating AI in any comprehensive way and protecting creators’ intellectual property sit near the top of the list of lost causes.

As long as those who are buying the product don’t care and fakery can feed the masses profitably, any attempts to regulate will fall short. And that, sadly, is showbiz in the digital age.
