This year there was an unusually convincing alternative to the Queen’s Christmas Broadcast, with many viewers choosing Channel 4’s deepfake of the Queen.
The channel used AI to transform an actress into an on-screen replica of the Queen. The speech was widely watched, sparking discussion on social media and prompting 200 complaints to Ofcom. The creators later revealed how easy the fake video had been to make, saying they wanted to highlight the spread of fake news.
The term deepfake was coined in 2017 on a Reddit forum created to share deepfake pornography. Porn still makes up a huge proportion of the deepfakes online today, but the same video creation techniques have also been used by movie fans to produce their own edits, and in a few cases by political actors to discredit their opponents. As these videos become easier and cheaper to make, more people will be able to create convincing deepfakes, including trolls and spreaders of disinformation.
We raised those concerns in 2019 with our viral deepfake of Boris Johnson and Jeremy Corbyn. Since then, making deepfakes has only got easier, and fake news has become a more urgent challenge than ever due to the spread of Covid-19 misinformation. So what’s happened in the last year to meet that challenge?
At the end of 2020, the UK government released a long-awaited response to a consultation on the Online Harms white paper. They proposed new legislation aimed at protecting people – particularly children – online. Culture Secretary Oliver Dowden said it would be brought before parliament this year, and it could come into force in 2022.
One of the reasons this bill has taken a while to progress is the need to balance online safety with free speech. It’s vital that regulating deepfakes doesn’t stop them being used for good, such as to preserve the anonymity of vulnerable sources in documentaries. The focus on direct harm to children helps limit the remit of the most powerful measures, and only the largest sites (with the biggest audiences, and the greatest capacity to enforce these rules) will be subject to additional rules on harmful misinformation.
But without decent deepfake regulation, free discussion is left vulnerable to those willing to lie and manipulate. If your favourite politician said something inflammatory in a convincing deepfake, would you believe it – just for a split second? If you caught your local MP doing something dodgy on camera, could you still prove it was true if they cried “deepfake”? The proposed new rules don’t offer much to prevent this.
As a start, the expert working group on online harms established by the legislation should include experts on AI and machine learning who can speak to the threat of deepfakes.
The new rules will also require larger companies to publish transparency reports on their work to tackle online harms. We think those reports should include information on the prevalence of deepfakes, so that digital platforms have accountability for their role in the spread of fake news.
Most importantly, the legislation’s definition of harmful content should limit the posting of deepfakes to those clearly labelled as such, helping people to spot fake footage while allowing their continued use for satire and entertainment. The law should also ban political deepfakes in the immediate run-up to an election, to stop them influencing voting before they can be fact-checked.
Although there were worries about deepfakes in the 2020 US elections, trolls seemed more interested in easy content than sophisticated techniques. But it’s becoming easier and easier to create deepfakes, meaning trolls could find them more appealing in future.
The US government is currently debating an act that would make it illegal to produce deepfakes without a watermark. Singapore brought in a law in 2019 with powers to force platforms to put warnings next to disputed posts, though critics warned the law was too broad and could repress free speech. In Europe, Germany and France both have anti-fake news laws, while the EU is at the early stages of introducing rules around the promotion of false content and manipulation of elections.
Regulating deepfakes poses many challenges. Tactics like the US’s proposed watermarks can be thwarted easily by routine image manipulation. Blanket bans on false content threaten freedom of speech, especially as it’s hard to define “fake” or “manipulated” in clear legal terms. Last year Microsoft released software to help identify deepfakes, but as detection software improves, so does the software for creating convincing fakes.
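To see why a simple watermark offers weak protection, here’s a deliberately simplified, hypothetical sketch (not any specific legislative proposal): a watermark hidden in the least-significant bits of pixel values survives an exact copy, but is scrambled by a routine edit as mild as a contrast adjustment.

```python
# Hypothetical illustration: a fragile "watermark" stored in the
# least-significant bit (LSB) of each pixel value, and how one
# routine edit destroys it.

def embed_watermark(pixels, bits):
    """Overwrite the least-significant bit of each pixel with a watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_watermark(pixels, n):
    """Read the first n watermark bits back out of the least-significant bits."""
    return [p & 1 for p in pixels[:n]]

def adjust_contrast(pixels, factor=0.9):
    """A routine edit: scale pixel values, rounding back to integers."""
    return [min(255, round(p * factor)) for p in pixels]

watermark = [1, 0, 1, 1, 0, 1, 0, 0]
image = [120, 53, 200, 87, 14, 240, 99, 161]  # toy 8-pixel greyscale "image"

marked = embed_watermark(image, watermark)
assert extract_watermark(marked, 8) == watermark  # an exact copy keeps the mark

edited = adjust_contrast(marked)
# After one innocuous edit, the recovered bits no longer match the watermark.
assert extract_watermark(edited, 8) != watermark
```

Robust watermarking schemes spread the signal across the whole image to survive edits like this, but each gain in robustness invites new removal techniques – the same arms race seen between detection and creation software.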
Several companies have taken proactive measures to manage the deepfake risks on their platforms.
At the start of 2020, Facebook announced it would remove AI-generated videos that have been edited in ways not obvious to the average person, or that would mislead viewers about what someone said. The policy does not apply to satire. Soon afterwards, Twitter announced it would remove “deceptively shared” media that poses a safety risk, and would otherwise label the tweets as manipulated media. TikTok also banned deepfakes under a policy prohibiting “synthetic or manipulated content that misleads users by distorting the truth of events in a way that could cause harm”.
All of these policies go further than what’s set out in the plans for the Online Harms Bill. It’s good news that tech companies are taking these steps, but we still need other policies to be coordinated from the outside, such as a shared database of identified deepfakes that researchers can train detection software on.
As the technology develops, we’re expecting deepfakes to become an increasingly common feature of our lives.
In 2021, we’ll be keeping an eye on how legislation around the world keeps pace with changing technology. We’ll get a sharper picture of what the Online Harms Bill will mean for deepfakes, and whether it’s enough to address the threat while allowing room for the positive uses to grow.
Channel 4’s deepfaked speech emphasised the importance of trust – something that gets called into question when anything we see might be fake. The issue probably won’t be solved by any single law, but a combination of laws, self-regulation and education can help us to trust what we see online.