The Deepfake Age: AI Misinformation in Democracy and Elections
Misinformation revisited in the era of Generative AI - Where is the epistemic security crisis?
Hello Everyone,
For an audio version of this article, check below. To find AI policy and alignment experts, read until the end.
When ChatGPT came into the world nearly two years ago, one of the big fears was that Generative AI would be used to produce dangerous misinformation. But it turns out the internet might be more resilient than we thought.
(For a limited time, get a yearly subscription for under $5 a month.)
While Generative AI can be used both to create misinformation campaigns and to amplify disinformation campaigns, we've been rather lucky in the 2024 U.S. election cycle. In a manner of speaking, it could have been worse. More powerful models were delayed until after the election, and text-to-video Generative AI will only become more powerful and capable in 2025, likely bringing more misinformation and cybersecurity issues in the not-so-distant future.
Where to read about AI policy?
For AI policy perspectives I recommend a few newsletters here, here and here, and for more philosophical takes go here. But how do academics, AI policy experts, and scholars think about misinformation? Today we turn to Harry Law, who is one of those scholars I find very accessible. We are learning from the history of AI as it is unfolding. Learning from examples, if you will.

Even as Meta recently released its text-to-video tool, Meta Movie Gen, and the usual controversies and ad spending bombard the U.S. presidential race, including the antics of Elon Musk, we have to assume that text-to-video will explode in the 2025 to 2030 period. The misinformation challenges ahead will be an ongoing concern, with potentially unforeseen consequences, as American BigTech embraces synthetic content in its products. And if we are the product, what will that mean for the future of the internet?
Elon Musk at a Donald Trump rally, October 5th, 2024.
Harry Law works on governance and policy at Google DeepMind (GDM). He's also a PhD candidate at the Department of History and Philosophy of Science at the University of Cambridge and a postgraduate fellow at the Leverhulme Centre for the Future of Intelligence.
In an era of the internet where we are normalizing synthetic media, fabricated content such as podcasts that aren't real, things are getting very weird, considering the hallucinations and black-hat use cases that are possible and already occurring. It's not even clear what comes next. While Google frames NotebookLM as a research tool, the reality is that digital avatars and fake video are becoming more common, and the creation and proliferation of hyperrealistic synthetic content meant to deceive could, in theory, become far more widespread. But were our initial fears about the black-hat use cases of Generative AI, and about foreign adversaries exploiting them, exaggerated? It might be too soon to say, but Harry has some interesting observations on this.
Read more by Harry
Harry mostly writes about AI history, policy, and governance at www.learningfromexamples.com.
Listen to the Article: 14:40
Misinformation Revisited: Where is the epistemic security crisis?
Harry Law, September 2024, writes in a personal capacity.

This is the year of the election. In 2024, around 1.5 billion people are going to the polls as elections take place in more than 50 countries that between them hold almost half of the world's population. One of the most consequential of these began in April in India, when 642 million voters participated in an election that saw Prime Minister Narendra Modi win a third consecutive term with the ruling Bharatiya Janata Party (BJP).
The signs looked ominous before the vote. Arvind Kejriwal, the chief minister of Delhi and the head of the Aam Aadmi Party (AAP), was arrested in late March to answer corruption charges. After his arrest, Kejriwal shared an AI-generated voice recording from behind bars with his supporters, which in turn saw BJP voters create AI generations of their own to make light of the situation at Kejriwal's expense.
As the election got underway, you can understand why some observers worried that a deluge of AI-powered misinformation was inevitable. But it didn't happen. An excellent piece in The Atlantic, which explains both Kejriwal's story and the muted impact of AI on the political process, convincingly argues that "deepfakes have not been as destructive in India as many had feared".
According to Indian fact checker Boom Live, of the 258 election-related fact-checks the organisation conducted, only twelve involved AI. That isn't to say there were no instances of AI-generated misinformation (there have been reports of false election-result predictions, simulated phone conversations, and fake celebrity criticisms), but rather that the scale and impact of AI on the political environment was much less significant than some predicted.
Then there was the July UK election, which I experienced in all of its miserable glory from Cambridge, England. In a result that was of no surprise to anyone, the Labour Party returned to power for the first time since 2010 and the incumbent Conservative Party was swept into opposition. In the middle of the election, the BBC ran an article with the headline "TikTok users being fed misleading election news, BBC finds".
Now, there are two different versions of "AI" at play here: there is the recommender system that determines which piece of content to promote, and there is the content itself. Taken together, the concern is that AI threatens to influence both the medium and the message in a way that degrades the strength of the polity.
But what sort of scale are we talking about? As it turns out, nothing too widespread. The report said that these videos have racked up "hundreds of thousands of views", which, for an app with daily UK views in the billions, is probably best described as a microbe inside a drop in the ocean.
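To make that scale comparison concrete, here is a minimal back-of-envelope sketch in Python. The specific figures are my own assumptions standing in for "hundreds of thousands" and "billions" (the BBC report does not give exact numbers), so only the order of magnitude matters.

```python
# Back-of-envelope sketch with assumed (not reported) figures:
# how big a slice of UK TikTok viewing is "hundreds of thousands" of views?
misleading_views = 500_000          # assumed upper end of "hundreds of thousands"
daily_uk_views = 2_000_000_000      # assumed stand-in for "daily UK views in the billions"
campaign_days = 30                  # assumed length of the election period examined

share = misleading_views / (daily_uk_views * campaign_days)
print(f"Share of a month's UK viewing: {share:.6%}")  # roughly 0.0008%
```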
There is a growing belief amongst those actually studying the prevalence of misinformation that claims about the purported harms of algorithmically-powered misinformation are not supported by studies of misleading content. We might say that hosting the story on the BBC homepage dramatically overplays the scale of the problem, which in turn may give readers the wrong impression about the importance of misinformation in shaping the outcome of the UK election.

In other words, if we believe misinformation to include misleading but not outright false content, reporting about the supposed effect of "misinformation" could well be described as, well, misinformation.
The evidence
In January of this year, the World Economic Forum issued a report saying that, over the next two years, AI-powered misinformation and disinformation represent a greater risk than "economic downturn[s]", "extreme weather events", and even "interstate armed conflict". The authors canvassed and amalgamated opinions from "1,490 experts across academia, business, government" in order to create an index of top risks over the next 24 months. On pages 18-21 they explain the reasoning behind some of these views: the emergence of "large-scale artificial intelligence (AI) models" that "have already enabled an explosion in falsified information and so-called 'synthetic' content".
As we saw, though, AI hasn't yet hopelessly degraded our epistemic security, despite the emergence of powerful and widely available tools that have the potential to do so. In one of the better studies looking at misinformation, a Harvard study tackled the three most common arguments about AI's impact on the information environment (increased quantity of misinformation, increased quality of misinformation, and increased personalisation of misinformation) and found that each was overpriced. At the moment, they argue, "existing research suggests at best modest effects of generative AI on the misinformation landscape."
The report is notable in that it deals with the "marginal risk" of generative models. This is the idea that we ought to understand the risk posed by a new technology (in this case AI) in comparison to the existing risk that society already accepts from similar technologies (in this case the internet and associated communications technologies).
There are essentially two elements we need to consider when using this framing. First, as above, we can assess whether AI is enabling increased quantity, quality, and personalisation of misinformation compared to existing methods. Second, we're looking for evidence that AI is driving changes along these axes in the real world, rather than simply asking whether models have the capability to exacerbate the problem in principle.
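As a toy illustration of the framing (not anything taken from the report itself), the marginal-risk question can be written as a simple difference: how much risk does AI add beyond the baseline we already accept from the internet? The scores below are entirely hypothetical.

```python
# Toy formalisation of the "marginal risk" framing; all values are hypothetical.
def marginal_risk(risk_with_ai: float, baseline_risk: float) -> float:
    """Extra risk attributable to AI, over and above existing communications tech."""
    return risk_with_ai - baseline_risk

# Hypothetical scores on an arbitrary 0-100 scale: if the internet-era baseline
# is already 60 and AI nudges it to 62, the marginal contribution of AI is small
# even though 62 sounds alarming on its own.
print(marginal_risk(risk_with_ai=62.0, baseline_risk=60.0))  # -> 2.0
```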
Fortunately, some studies are doing that. A June study published in Nature gathers together evidence about precisely these issues. As the authors explain: "Here we identify three common misperceptions: that average exposure to problematic content is high, that algorithms are largely responsible for this exposure and that social media is a primary cause of broader social problems such as polarization."
The authors cite famous reports from Facebook that content made by Russian bots from the country's Internet Research Agency reached 126 million American citizens on the platform before the 2016 US presidential election. Sounds big, until the authors remind us that this content represented 0.004% of the posts that US citizens saw in the Facebook newsfeed during the period of study.
This dynamic (failing to contextualise the amount of low-quality or misleading sources a person consumes relative to their information diet as a whole) is commonplace in the world of misinformation studies. A well-known 2019 study, for example, categorised 490 websites identified as "untrustworthy", but failed to account for the fact that visits to these sites made up only 5.9% of US citizens' visits to news sites on average. When you take television into account, the figure drops to just 0.1% of US citizens' media diet.
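A quick sketch shows how those two figures can coexist: the same untrustworthy content shrinks from 5.9% to roughly 0.1% once the denominator widens from news-site visits to a person's whole media diet. The news-share-of-media-diet value below is my own assumption, chosen so the two quoted numbers roughly reconcile; it is not taken from the study.

```python
# How a share shrinks when the denominator widens (5.9% of news visits -> ~0.1% of media diet).
untrustworthy_share_of_news_visits = 0.059   # 5.9%, as quoted from the 2019 study
news_share_of_total_media_diet = 0.017       # assumed ~1.7% of overall media time (illustrative)

overall_share = untrustworthy_share_of_news_visits * news_share_of_total_media_diet
print(f"{overall_share:.1%} of the total media diet")  # ~0.1%
```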
That the overall surface area for misleading content is small necessarily means that the impact of AI is limited. But how limited? And what, exactly, do we make of the corrosive effects of algorithms on political life that we hear so much about? The most famous idea is that of the "filter bubble" that traps users online in an echo chamber of distorted information.
The core set of claims here is that recommender systems aim to promote user engagement, that inflammatory content is more engaging than average, and that algorithms subsequently promote it to users to boost view counts. Unfortunately for this story, the existence of large effects of algorithms on information consumption hasn't actually been established. A major 2023 study even showed that algorithmically-curated feeds supply less untrustworthy content than chronological feeds.
A recent study found that AI bots trained on individuals' YouTube viewing histories and following the platform's algorithmic recommendations consumed less partisan content than the actual human users did. This is probably because people who engage with extreme or untrustworthy content tend to seek it out themselves. In other words, people generally go looking for misinformation rather than passively falling victim to it.
After all, a landmark 2021 paper demonstrated that extreme content is viewed on YouTube by a small percentage of the population who tend to consume similar content elsewhere. The upshot is that consumption is driven by demand, not algorithms.
Misinformation for thee, but not for me
So, the question is, why exactly does the ability of AI to degrade the information environment seem overpriced? Well, the advent of social media very obviously coincides with political polarisation, a perceived collapse in the quality of political discourse, and the rise of extremist ideologies. As a result, lots of people tend to suspect that there may have been a causal relationship between them.
For AI, the concern is that it will exacerbate these trends and deliver more of the same. But as the above should make clear, it is far from obvious that social media (where content is created and recommended by AI) really is degrading the health of the polity. Yes, it feels like it is, but there's not really any hard evidence to back it up.
There are two stories we can tell ourselves here. Either the marginal risk of AI is high because the actual impact of communications technologies on political discourse is low, or the marginal risk of AI is low because you believe that social media has already poisoned the well.
But is that right? There is, I suppose, a third option: that, just like the advent of social media, the advent of AI feels riskier than it actually is. This option, which I think probably fits my own world model best, is closely connected to some essential problems with "misinformation" discourse.
The unfortunate truth is that, while we all have our biases, those who seek to isolate misinformation as a discrete area of study have a tendency to believe that misinformation is a thing that happens to someone else. Misinformation for thee, but not for me.
They like to believe that there is such a thing as an objective science of misinformation. But there can't be. Putting aside the fact that most people don't consume much outright false information, popular definitions of misinformation have been broadened to encompass true but misleading content. Based on this definition, you get into the absurd situation whereby "misinformation researchers" end up spreading misinformation (if you wanted to call it that) themselves, because "true but misleading" could mean just about anything.
To be clear, though, that doesn't mean that I think the information environment is in good health.

Dan Williams of the University of Sussex makes the case that, even if we reject the beliefs that A) misinformation is the most worrying form of bad information and B) misinformation is easy to identify, that does not mean that society is enjoying a period of epistemic vitality.
As Williams explains: "First… communication can be - and frequently is - highly misleading without ever involving blatantly false or fabricated content. Second, once you broaden the focus on bad information to include any content that might be misleading even if it does not involve outright falsehoods and fabrications, bad information is not easy to identify."
This is the crux of the issue. For AI, we might say that the technology increases the ability of people to create misleading content. But even so, if we think about the evidence, do we really believe that an avalanche of synthetic media (of memes, and jokes, and clear fabrications) should be called misinformation? I doubt anyone would credibly call Donald Trump posting a video of himself and Elon Musk dancing an example of misinformation.

But why not? After all, the fact checkers would agree that it didn't happen. Some researchers even described an obvious meme as an example of the misinformation monster in action. You might think these are silly examples, but, as the Indian election showed, this messiness is the rule, not the exception.
AI is rapidly increasing the amount of content on the internet, but I don't see any reason to believe it represents a major departure from established methods of communication. Powerful image, video, and audio generation models primarily lower the barrier to entry, letting a party create content that it could previously have produced only with more time and resources.
Of course, it doesn't just do that. There are genuine instances in which AI can be used in a way that represents a change to the status quo. A good example here is the use of voice generation models, which allow someone to convincingly clone the voice of anyone they like (providing they have enough examples of them speaking).
For public figures, that requirement poses no obstacle. It's not hard to get a voice sample from anyone in a position of power or influence, given that such people tend to do a lot of public speaking. Here lies the contradiction: if it's easy to create convincing voice clones, why have they failed to influence political life through impersonations of politicians, celebrities, and others?
I'm not aware of much empirical work on this issue, but if I had to guess, I would put it down to the fact that such impersonations travel primarily via social media. Major outlets check these things before publishing, and sites like X eventually get to the truth through some combination of community notes and a simple "chat, is this real?"
Clearly, there might be some people who get the wrong end of the stick for a moment or two. But when verification fails to materialise (or when additional information quickly counteracts the original claim), the artefact in question doesn't tend to stick.
That's the problem with fears around an AI-fuelled misinformation apocalypse. Sooner or later, it either happens or it doesn't. And right now, much as empirical studies of human-generated misinformation have found, the fear of the thing doesn't bear much resemblance to reality.
Thanks for reading!
Appendix
Does AI policy and alignment research interest you? Here are some folk I recommend following if you use LinkedIn.
Unfortunately, the founder of the Montreal AI Ethics Institute is no longer with us. In Memoriam: Abhishek Gupta (Dec 20, 1992 – Sep 30, 2024).

AI policy people I follow on LinkedIn:
https://www.linkedin.com/in/ravit-dotan/ - Ravit Dotan
https://www.linkedin.com/in/ristouuk/ - Risto Uuk
https://www.linkedin.com/in/msheehan2/ - Matt Sheehan
https://www.linkedin.com/in/jonasschuett/ - Jonas Schuett
https://www.linkedin.com/in/rumman/ - Rumman Chowdhury
https://www.linkedin.com/in/ryan-donnelly-enzai/ - Ryan Donnelly
https://www.linkedin.com/in/shea-brown-26050465/ - Shea Brown
https://www.linkedin.com/in/reid-blackman/ - Reid Blackman
https://www.linkedin.com/in/kay-firth-butterfield/ - Kay Firth-Butterfield
https://www.linkedin.com/in/staceyhking/ - Stacey King
https://www.linkedin.com/in/miles-brundage-49b62a4/ - Miles Brundage
https://www.linkedin.com/in/acomomcilovic/ - Aco Momcilovic
https://www.linkedin.com/in/matthijsmaas/ - Matthijs M. Maas
https://www.linkedin.com/in/katharina-koerner-privacyengineering/ - Katharina Koerner
https://www.linkedin.com/in/buildingtrustedaiholistically/ - Pamela Gupta
https://www.linkedin.com/in/abhishekguptamcgill/ - Abhishek Gupta
https://www.linkedin.com/in/carlosig/ - Carlos Ignacio Gutierrez
https://www.linkedin.com/in/alextamkin/ - Alex Tamkin
https://www.linkedin.com/in/huw-roberts-3539b5b7/ - Huw Roberts
https://www.linkedin.com/in/borhane/ - Borhane Blili-Hamelin
https://www.linkedin.com/in/kzenner/ - Kai Zenner
https://www.linkedin.com/in/carlos-eduardo-torres-giraldez/ - Carlos Eduardo Torres Giraldez
https://www.linkedin.com/in/eugenio-v-garcia-414316157/ - Eugenio V Garcia
https://www.linkedin.com/in/harry-law-934a42b4/ - Harry Law
https://www.linkedin.com/in/kevin-klyman/ - Kevin Klyman
https://www.linkedin.com/in/lewis-ho-a74380178/ - Lewis Ho
https://www.linkedin.com/in/walter-m-pasquarelli/ - Walter Pasquarelli
https://www.linkedin.com/in/henryajder/ - Henry Ajder
https://www.linkedin.com/in/gary-marcus-b6384b4/ - Gary Marcus
https://www.linkedin.com/in/keith-sonderling/ - Keith Sonderling
https://www.linkedin.com/in/margaret-levi-67365544/ - Margaret Levi
https://www.linkedin.com/in/zak-rogoff-3301689/ - Zak Rogoff
https://www.linkedin.com/in/conor-griffin-6902bb7/ - Conor Griffin
https://www.linkedin.com/in/anita-ho-4804835a/ - Anita Ho
https://www.linkedin.com/in/dr-brenda-kubheka-7a1b1917/ - Brenda Kubheka
https://www.linkedin.com/in/zohaibknoorani/ - Zohaib Noorani
https://www.linkedin.com/in/lmiller-ethicist/ - Laura Miller
https://www.linkedin.com/in/lila-shroff/ - Lila Shroff
https://www.linkedin.com/in/jenpan/ - Jennifer Pan
https://www.linkedin.com/in/gillian-k-hadfield-1773987/ - Gillian K. Hadfield
https://www.linkedin.com/in/bryson/ - Joanna Bryson
https://www.linkedin.com/in/benjaminkultgenphd/ - Ben Kultgen