Will AI influence upcoming elections?

If so, the biases it holds will be important to recognise...
18 August 2023

Interview with

Jon Roozenbeek, University of Cambridge & Kate Dommett, University of Sheffield

Many of the world’s democracies - including the United States, the UK, India and South Africa - will head to the polls next year to contest crucial elections. It comes at a pivotal time in geopolitics and, with rapid advancements in artificial intelligence (AI), there are concerns about the role that technology might play, not only in political campaigning, but in the outcome of national ballots themselves. In a commentary penned for the Guardian this week, The Open University’s John Naughton highlights these worries, dubbing the situation “social media on steroids, and without the usual telltale signs of human derangement or any indication that it has emerged from a machine.” To discuss the implications, with me are Jon Roozenbeek, a postdoctoral fellow at the University of Cambridge and author of the forthcoming book The Psychology of Misinformation, and Kate Dommett, professor of digital politics at the University of Sheffield...

Jon - You could call propaganda misinformation with a political slant, but misinformation is often taken to mean something that is false, that contains a falsehood, whereas propaganda doesn't necessarily have to be false. There are many different ways to go about this definition, but that's the effective distinction. In the modern era, though, the two terms are often used interchangeably.

Chris - And are they playing a more prominent role, do you think, in the modern era, Kate?

Kate - Misleading within politics is quite well established. What's new is the technology angle. We've had innovations in technology before, and it seems like each new piece of tech brings a wave of new concerns about how we're being influenced in politics.

Chris - Are you not concerned about it, then?

Kate - I'm concerned, but I think it's important to be really clear about what we're actually concerned about. I think when you bring technology into it, it all becomes quite mystical and we don't really understand the technology that is driving and making the decisions. But I think ultimately it's always humans that are making these different technologies and that are deciding to use them in certain ways. So I think my concern is about how we are holding those who are active in politics to account for their use of technology rather than focusing on the technology itself.

Jon - I think a good analogy here is the rise of the radio in the 1930s, right? And the Nazi regime under Goebbels was of course the first to make effective use of radio to spread propaganda. But we don't blame the radio nowadays for the rise of fascism.

Chris - The role of AI, though, moves the game along, doesn't it? Because we've demonstrably seen social media enter the fray around the transmission and propagation of news stories. We've seen social media play an active, proven role in influencing public behaviour around, say, anti-vax sentiment, and, when the financial crisis occurred in the late noughties, it possibly provoked runs on banks, for example. So are we into a new regime now, then, Jon?

Jon - That's a little unclear. There was a study recently in the journal Science Advances, the top line of which was that AI misinforms better than humans. That's a bold claim to make, right? But the difference in persuasiveness between human generated misinformation and AI generated misinformation from ChatGPT was only 3%. And in both cases humans were really, really good - 92% and 89% accurate respectively - at correctly identifying human and AI generated misinformation. So I'm not so sure there's a demonstrable problem in the sense that it's worse. You also have to bear in mind that, even if the misinformation we see online is AI generated, that doesn't necessarily mean it makes its way around the existing content moderation practices that tech companies have in place, for instance.

Chris - Point taken. But there was a paper that came out from the University of East Anglia this week, and, to quote the authors, "ChatGPT presents a significant and systematic political bias towards the Democrats in the US, Lula in Brazil, and the Labour Party in the UK." What they're saying is that, if you ask this thing questions, you get politically biased answers, loaded towards the left. As an extreme example, someone asked whether it would be better to make a racist remark or to have a nuclear war, and ChatGPT said that a nuclear war is better than a racist remark. It obviously isn't. It seems to put the offence of a few people above the deaths of millions. Do we really want this influencing political decision making?

Kate - Well, do we want it to? I think that's the question. That study is really interesting because it shows that the way you train these AI tools affects the kind of outputs you get. There's been a wide range of studies on the kinds of biases that exist in the online world, and the way that we as humans programme systems results in systematic biases in the way the tools work. So there have been a number of really interesting studies showing that apparently neutral technology does have these political biases. Now, that raises a really interesting question, because a lot of our principles around how democracy and elections work rest on a kind of equal access: everyone has the same chance to be heard, so if you spend the same amount of money you get the same service. And we often don't know how these different technologies are working and whether they contain biases. So for me, that's the real issue of concern: can we, as researchers, but also as members of society, understand how the technology is working and whether or not there are biases within it?

Jon - One thing to add, perhaps, from that study, which I really liked reading, is that OpenAI, which runs ChatGPT, is a company, right? And so they're concerned about their reputation, which means they manually hamstring ChatGPT, if you will, allowing and disallowing it to say certain things. We don't really have a lot of insight into exactly what they do to moderate what ChatGPT puts out, but it would be bad for their reputation if ChatGPT were to promote racism and so on. So it's not only the training data that might be biased in some capacity; these kinds of commercial drivers also need to be taken into account. I'm not necessarily sure whether it's the AI itself we're talking about here, or whether we're having a broader debate about politics.

Chris - Just briefly, Kate, do you think we're at the stage where AI could be used to have a meaningful impact on things like election results?

Kate - A lot of the research that I do looks at what political campaigners - political parties and non-party campaign groups - do in elections. And from the interviews I've been doing with parties around the world, they're not in a place to be able to use AI yet. Parties themselves often don't have a lot of people who work on this stuff; they might have one person covering quite a big range of tasks in terms of producing campaign materials, so they're not really in a position to adopt this yet. But I do think we're going to start to see AI being used. There are probably going to be examples of deepfakes or manipulated images, and that shouldn't be underplayed, because the isolated examples that do break through into discussion really help drive debate. The concerning thing for me is that it makes it hard to work out what to trust, and that's the issue.
