Artificial intelligence-based algorithms can influence people to prefer one political candidate – or a would-be partner – over another, according to researchers.
“We are worried that everyone is using recommendation algorithms all the time, but there was no information on how effective those recommendation algorithms are,” says Helena Matute at the University of Deusto in Spain.
Her work with her colleague Ujué Agudo, also at the University of Deusto, was designed to investigate the issue.
The researchers carried out a series of four experiments in which participants were told they were interacting with an algorithm that would judge their personality. The ‘algorithm’ did not actually do this: it was a mock algorithm that responded in the same way regardless of the information participants gave it. After participants had answered the mock algorithm’s questions, it presented them with photos of potential partners they might date or political leaders they might vote for – although the ‘politicians’ were simply photographs of people unfamiliar to the participants.
Sometimes the researchers’ mock algorithm explicitly nudged users to choose one of the photographed individuals. It might state, for instance, that it had found a 90 per cent compatibility match between the user and the potential partner or politician in the photo. In other cases, the cue was implicit: the algorithm might simply show the user one particular photo more often than the others.
Study participants were recruited via Spanish-language Twitter and the online survey platform Prolific. Between 218 and 441 people took part, depending on the experiment.
Participants were then asked which of the photographed people they preferred. They were more likely to prefer political candidates the mock algorithm had explicitly recommended, and more likely to want to date those it had implicitly promoted.
“Maybe we have the idea that algorithms are objective and neutral and efficient, and with numbers and rules,” says Agudo, explaining why explicit algorithmic recommendations may carry more weight for political choices. “It’s a decision where feelings aren’t involved.”

For the same reason, we may be more inclined to question an algorithm’s recommendations when it comes to matters of the heart.
“The authors raise the really important and frightening point that artificial intelligence, big data and broad user bases give unprecedented opportunities to private corporations for refining their understanding and application of the powers of persuasion,” says Ella McPherson at the University of Cambridge.
“This study strengthens calls for platforms like Facebook and Google to be more transparent about their own algorithms,” says Steven Buckley at the University of the West of England, Bristol, in the UK. “If not to the general public, then at least to academics who can research what the algorithms we actually engage with on a daily basis are doing to us.”
Journal reference: PLoS One, DOI: 10.1371/journal.pone.0249454