By Alexander Rand.
There’s the pandemic, and there’s the infodemic. As global panic around the virus spreads, the internet has become awash with dubious virus-related content, from the infamous (yet oddly inspiring) animal-resurgence stories to medical advice from practitioners of varying repute. Tech companies and public health authorities have been intervening aggressively to promote good health practices, pushing reliable information to the top of the heap and burying false claims.
But these efforts have had only a limited effect. Cristina Tardáguila, Associate Director of the International Fact-Checking Network, has called the current situation “the biggest challenge fact-checkers have ever faced”, observing that “it’s amazing what the mixture of panic and the lack of good data can do to our brains and to our capacity to sort fact from fiction”. Bogus claims have gained traction, ranging from the supposed benefits of eating bananas and drinking pure alcohol to conspiracy theories about the origins of the virus. In some cases, misinformation has duped even high-ranking government officials, undermining public health efforts. In others, it has had more immediately devastating consequences.
As alarming as these stories are, existing technology could have made the infodemic far worse. According to Reuters, much of the misinformation around Covid-19 has consisted of ‘cheap fakes’: existing content tweaked with relatively simple technology to distort its meaning and serve a false narrative. Cheap fakes made up just 59% of this misinformation, yet they accounted for almost 90% of its social media engagement, meaning the average cheap fake attracted several times the engagement of lower-tech pieces of misinformation. Notably, Reuters has so far found no deepfakes at all in the present infodemic. It is easy to imagine that as more sophisticated manipulated media such as deepfakes become widespread, they will drive the virality of fake stories even further.
Misinformation is becoming more targeted and sophisticated. Soon it won’t be enough to ask whether the person in the video is blinking, or to look for typos in that email from the Nigerian prince, when trying to spot a scam. In the face of highly realistic synthesized media, even those who consider themselves sharp and skeptical will find themselves lost in a forest of convincing yet contradictory narratives. We can use this moment to inoculate ourselves against future infodemics by reflecting on what we consider to be ‘true’ online, and why.
To avoid the harms of an information environment full of synthesized content, we need to rethink our foundational notions of truth and authenticity. For starters, we need to be more specific about how we define ‘fakeness’. It might feel safe to classify synthesized content such as deepfakes as ‘fake’, insofar as they portray events that never actually happened. But even this creates contradictions. As legal scholars Chris Barnes and Tom Barraclough have pointed out, producing digital media always involves manipulating and synthesizing input from multiple sources, yet we are usually willing to accept digital media as ‘real’: we don’t question the veracity of our smartphone pictures.
We have even been willing to accept drastically manipulated images as ‘authentic’. To illustrate this, Barnes and Barraclough point to the first-ever photograph of a supermassive black hole. This ‘photograph’ is in fact a synthesized image compiled from many different sensors across the globe. Those sensors detected radio waves imperceptible to the human eye, and the resulting data were combined into a digital composite and processed to generate a visual image. Despite this very involved process of synthesis and manipulation, we are not inclined to call the resulting image ‘fake’, which shows that our notions of authenticity with respect to synthesized media are actually quite flexible.
Where, then, does ‘fakeness’ actually lie? For Barnes and Barraclough, the answer depends on the relationship between representation and reality: an image is false if it ‘misrepresents’ the subject matter it appears to have captured. But misrepresentation is largely subjective. It depends, among other factors, on the relationships between the people involved in an exchange of information, the claims implicit in that exchange, and the medium itself. All of these shape the subjective experience of the person perceiving the information, and their sense of whether the subject has been misrepresented.
This means that even if we develop powerful tools to detect synthesized media, detecting synthesis will not be the same as detecting falsity. Indeed, simply banning synthesized media from web platforms could sweep up a great deal of benign or beneficial content. Instead, a policy for combating misinformation needs to be grounded in an understanding of the subjective factors that determine what ‘falsity’ actually is. Beyond detecting deepfakes, we should be thinking about how to verify credible sources, how to regulate the information space so that the boundary between harmful and benign synthesized media is clearer, and how to protect those who fall victim to synthesized-media harms.
Just as the present health crisis can be seen as a dress rehearsal for future pandemics that could be orders of magnitude more dangerous, the present infodemic can be seen as a dress rehearsal for a future in which the tools of misinformation are far more sophisticated and widespread. As a society, we’ll need to refine our relationship with concepts such as falsity and authenticity if we are to effectively counter the misinformation pandemics to come.
Alexander Rand is a Research Intern at Future Advocacy focusing on deepfakes.