Should social media get a trustworthy button?

Would having an 'I trust this' button on social media help combat misinformation?
16 June 2023

Interview with

Laura Globig, UCL

The UK’s Covid inquiry kicked off this week. We told you last week how the process is going to work. It’s almost certain that one aspect the investigators will dwell on is communication and the spread of misinformation during the pandemic, much of it driven by unreliable sources on social media. And with about 5 billion of the world's 8-plus billion people regularly using some form of social media (Facebook alone has about 3 billion active accounts), the societal impact of these platforms, and specifically of the messages and information that people convey through them, is huge. But as UCL's Laura Globig argues, the problem with many social media platforms is that they’re engineered only to engage users and promote information exchange; they don't reward users for the accuracy or reliability of what they share. So she's come up with a better way to do that...

Laura - The spread of misinformation online has skyrocketed, and this has had quite drastic consequences, such as increasing polarisation and resistance to climate action and vaccines. So far, existing measures to halt the spread of misinformation online, such as flagging or reporting posts, have had only limited impact. So we wanted to know if we could help address this issue of misinformation online.

Chris - And are there any particular groups who are more susceptible to this or is everyone potentially a sucker for it?

Laura - People are actually quite good at distinguishing true from false information, so it's not a lack of ability. In fact, existing research shows that lay people are just as good as professional fact checkers at telling apart true from false information. Instead, one reason for the spread of misinformation online is the lack of incentives on social media platforms to share true information and avoid sharing false information. People tend to choose actions that lead to rewards or positive feedback and avoid those that lead to punishment. On social media platforms, these rewards and punishments come in the form of likes and dislikes. But the issue with likes and dislikes is that they aren't representative of the accuracy of the information you're sharing. For example, you could like an obviously false post because you think it's amusing. So we propose that the key to reducing the spread of misinformation online is not to tell people what's true and what's false, but instead to directly incentivise them to share more true information relative to false information. For that, we need an incentive structure where these social rewards and punishments are directly contingent on the accuracy of the information.

Chris - So what you are saying is instead of there being thumb up or thumb down, like or dislike, I could have, "I trust this, I don't trust this."

Laura - Exactly. In this study we did that by slightly altering the engagement options offered to users. So we're not taking away the like and the dislike button, but instead we added an option to react to posts using, just as you said, trust and distrust buttons.

Chris - You can envisage why people would be incentivised to use that, because it's an additional badge of honour for them saying, "I'm sharing this, but it's a bit iffy." And then if it turns out that it is a bit iffy, they can say, "I told you so." So it does kind of play into the same reward system, but it's for the benefit of clearer communication.

Laura - Exactly. Here, there's no ambiguity in the use of trust and distrust. Trust, by definition, is related to reliability: it's a firm belief in the truth and reliability of something. And what we found in this study is that people would use these buttons to actually differentiate between true and false posts.

Chris - So what data have you got that suggests this will actually work?

Laura - We created simulated social media platforms and, on these platforms, users saw both true and false information. We then added an option to react to posts using a trust and a distrust button, in addition to the usual like and dislike buttons. What we found was that people used these buttons to differentiate true from false information more than they used the like and dislike buttons. And as a result, in order to receive more trust rewards and fewer distrust punishments, other participants were then also more likely to share true information relative to false information. So what we saw was a large reduction in the amount of misinformation being spread.
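To give a rough sense of how an accuracy-contingent incentive of this kind could shift sharing behaviour, here is a minimal toy simulation in Python. It is not the study's code or data: the number of sharers, the rater accuracy, the learning rule and the reward values are all illustrative assumptions.

```python
# Toy sketch only, not the study's actual experiment. Sharers repeatedly
# choose whether to share a true or a false post; raters give "trust" (+1)
# to true posts and "distrust" (-1) to false posts, but misjudge with
# probability 1 - rater_accuracy. Sharers slowly favour whichever choice
# has paid off more, so false sharing should fall over rounds.
import random

def simulate(n_sharers=200, n_rounds=50, rater_accuracy=0.8,
             learning_rate=0.1, seed=0):
    rng = random.Random(seed)
    # Each sharer starts indifferent between sharing a true or a false post.
    value_true = [0.0] * n_sharers
    value_false = [0.0] * n_sharers
    false_share_rate = []

    for _ in range(n_rounds):
        false_count = 0
        for i in range(n_sharers):
            # Greedy choice with 10% random exploration (and random
            # tie-breaking when the two learned values are equal).
            if rng.random() < 0.1 or value_true[i] == value_false[i]:
                shares_true = rng.random() < 0.5
            else:
                shares_true = value_true[i] > value_false[i]
            if not shares_true:
                false_count += 1

            # Feedback contingent on accuracy: trust for true posts,
            # distrust for false posts, with occasional rater mistakes.
            correct_rating = rng.random() < rater_accuracy
            if shares_true:
                reward = 1.0 if correct_rating else -1.0
                value_true[i] += learning_rate * (reward - value_true[i])
            else:
                reward = -1.0 if correct_rating else 1.0
                value_false[i] += learning_rate * (reward - value_false[i])

        false_share_rate.append(false_count / n_sharers)
    return false_share_rate

rates = simulate()
print(f"false posts shared, round 1:  {rates[0]:.0%}")
print(f"false posts shared, round 50: {rates[-1]:.0%}")
```

Under these assumptions the share of false posts starts near 50% and falls to a few percent once trust and distrust feedback has accumulated, which is the qualitative pattern Laura describes, not a reproduction of her results.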

Chris - Does the person effectively score points for trusting something that turns out to be true? Is that how it feeds back and endorses that so that person's building reputation? Is that one of the incentives?

Laura - The incentive is receiving the trust reactions themselves. We ran three experiments. In the first, we gave participants the option to react to posts using a trust, a distrust, a like and a dislike button, and the incentive here is just to engage with the post itself. What we found was that people used the trust and distrust buttons. Then, in the second and third experiments, we looked at how receiving trust and distrust feedback from other participants would impact sharing. There, people are motivated to share true posts so that they receive a large number of trusts and very few distrusts.

Chris - And of course your timing is perfect because in the UK at least the online safety bill is making its way through the government process. This is the idea of trying to make the internet a safer place where misinformation propagates more slowly. So really the whole world, the business world, should be receptive to ways that they can improve not just the engagement but the quality of the engagement.

Laura - Exactly. That's also our hope. What we're doing here doesn't rely on fact checkers or on anyone definitively determining whether something is true or not; instead we're putting the onus on the user, which actually increases user autonomy. That, again, should be very appealing to the platforms and hopefully to social media users themselves.
