
FaceApp, deepfakes, and why we should be worried

By Areeq Chowdhury.

How much would you sell your face for? According to the millions of people who have been using FaceApp in recent weeks (myself included), the price is the ability to stare into the future and visualise how we may look towards the end of our days. Is it, however, just a bit of fun, or is it something more sinister? Do we realise that we are paying a price (our data) when we receive a seemingly free service on the internet? Is it possible that we are, in fact, helping to usher in the next generation of deepfakes? These are questions that journalists, researchers, and even politicians have been asking in recent weeks. US Senate Minority Leader Chuck Schumer has gone so far as to call on the FBI to investigate the Russia-based app. Should we be worried?

One concern that some commentators have raised is the potential for the uploaded images to be repurposed for the development of so-called ‘deepfakes’. Deepfakes are essentially the video equivalent of photoshopped images: fabricated videos which look and sound like their real subject. You may have read about their application in pornography, following a storm in which a series of celebrities’ faces were superimposed onto the bodies of adult actresses. You may also have read about their potential to inflict damage on democracies, with videos being fabricated to spread false messages from politicians.

In a sentence, deepfakes are a gut-wrenching, fact-free cocktail of bots, fake news, and revenge porn, served with an erosion of trust in reality. Imagine a situation in which a video goes viral of a politician compromising themselves. Perhaps it is a clip of them admitting to an affair or uttering a racist comment. The video, a deepfake, is a complete fabrication; however, by the time the denial has been released to the press, it has already been viewed thousands, if not millions, of times. To make it seem even more believable, the logo of a major news outlet is slapped onto it and the video is shared by websites that fail to verify the content.

Now imagine an alternative situation in which a genuine video of a politician compromising themselves goes viral. Everything in the video actually happened; however, due to the perfection and proliferation of deepfake videos, the subject is able to dismiss the real event as a lie. That plausible deniability has been labelled the ‘liar’s dividend’.

This Black Mirror-esque scenario isn’t as far-fetched as it sounds. Just this week, Arvind Limbavali, an Indian politician, broke down in tears during a debate after an alleged deepfake video appearing to show him having sex with another man was shared online. Will we see more of this in the future?

It’s difficult to know what the right approach to this phenomenon should be. The political response to the comparatively mild challenge of online disinformation hasn’t exactly been a roaring success. As part of its efforts to combat it, the UK Government has spent taxpayers’ money on designing and promoting adverts featuring a creature that wouldn’t look out of place in the Monsters, Inc. franchise. This is unlikely to be sufficient when it comes to questioning videos we are watching with our own eyes. Whilst we could make it illegal to create a deepfake of someone without their consent, this would be a purely retrospective measure and one which would be difficult to enforce, especially if the perpetrator is outside the country’s jurisdiction.

I haven’t a clue whether FaceApp, or any other app, is using the pictures we share to develop deepfakes. The fact that the people behind it are Russian isn’t, in my opinion, reason enough to jump to conclusions. However, the controversy has shone a light on a problem we should all be interested in: the power of new technologies to convincingly manipulate our image.

The reason FaceApp went viral recently isn’t simply that it shows us what we could look like when we’re older; it’s that it shows us an incredibly realistic representation of what we could look like. That level of realism is something which should concern us. If this kind of manipulation is used for malicious purposes, the implications could be so serious that we’ll end up reminiscing over simpler times, when we merely lost our minds over low-tech social media bots retweeting each other. Moving forward, we need to start thinking about what the policy responses should be. Failure to do so will allow the lines between fact and fabrication to be blurred beyond recognition.

Areeq Chowdhury is the Head of Think Tank at Future Advocacy.