This op-ed was first published in the Toronto Star on July 16, 2024.
Amidst the anxiety surrounding emerging technologies, we can take a moment to admire the unpredictable weirdness of it all. For example: When the history of early generative AI is written (perhaps by a bot), a chapter will be devoted to – you guessed it – Scarlett Johansson’s voice.
In May, Johansson went public with accusations that OpenAI deliberately copied her voice for the AI assistant in its updated version of ChatGPT, which now boasts over 100 million weekly users.
The parallel to Ursula in The Little Mermaid, who steals a woman’s voice for her own gain, was hard to miss. It’s worth taking a moment to be outraged at OpenAI’s cavalier actions, and skeptical that this company can be trusted to self-regulate generative AI’s impact on the public interest.
But more helpfully, this episode sparked a much-needed discussion about the legal protections required as AI advances, and about how to implement them without gutting our critical Charter rights.
Our voice, our face, our likeness – these are innate features that make us who we are. New developments in AI are dramatically lowering the cost of creating truly lifelike fake videos, images and voices, commonly known as deepfakes.
Deepfakes are already inflicting real damage, from fraud targeting people and companies to, most ominously, election interference. Robocalls featuring a fake President Biden discouraged voting in a New Hampshire primary earlier this year. One party in the recent Indian election shared a video of its leader chatting in a language he doesn’t speak.
And there’s another important unforeseen impact: Deepfakes are undermining the credibility of real content, allowing leaders to dismiss genuine evidence as doctored. Donald Trump was an early adopter of this tactic, falsely claiming that the infamous Access Hollywood tape was a fake.
Pictures, videos and sound recordings are by no means the only threat: AI-generated text is highly convincing and can be produced at such enormous scale that it risks drowning out real voices, especially online. Europol, the EU’s law enforcement agency, predicts that as much as 90% of online content could be synthetically generated by 2026.
So what can we do to stay ahead of the deepfake challenge?
We need urgent action on three levels – where deepfakes are created, enabled and distributed.
To halt their malicious creation, we need to clarify that deepfakes are subject to existing defamation, identity fraud, revenge porn and child pornography laws. For example, the law against non-consensual sharing of intimate images defines those images as a “visual recording,” which could be read to exclude AI-generated images. Parliament should amend these criminal laws to make explicit that they cover deepfakes.
As for the AI systems that enable deepfakes, there need to be stronger penalties to discourage companies like OpenAI from letting their systems generate fake likenesses of individuals without consent. We also need reliable ways to check whether content was produced by these systems, to facilitate labeling and verification. Canada’s proposed Artificial Intelligence and Data Act does some of this work, but it’s unclear whether the bill will pass.
Where we need more action is on the spread of this fake content, facilitated predominantly by social media platforms. The proposed Online Harms Act lays out new responsibilities for these platforms to mitigate the spread of harmful content in narrowly defined categories: hate speech; incitement to violence or terrorism; intimate image abuse; child cyberbullying; and inducing self-harm.
It also establishes a Digital Safety Ombudsperson to support victims in getting this content removed. Adding identity fraud to the Online Harms Act’s scope would be one way to require more effective action and recourse from online platforms operating in Canada, consistent with similar laws in the EU and UK. Of course, care would always need to be taken that the law doesn’t trample on free expression rights.
The authenticity of audio, video and published text is increasingly dubious. But it doesn’t have to be this way. Targeted governance of generative AI can help us all trust our eyes and ears.
Sam Andrey is the Managing Director of The Dais, Toronto Metropolitan University’s public policy and leadership think tank. Charles Finlay is the Founding Executive Director of Rogers Cybersecure Catalyst, Toronto Metropolitan University’s national centre for training, research and innovation in cybersecurity.