Neural networks can be a powerful persuasion tool: an experiment has shown that they are far more effective in debate than humans, especially when they have access to personal data. That is the conclusion of a new study by an international team of scientists from Switzerland and Italy.
The researchers ran an experiment with 820 volunteers, who had to debate either one another or the GPT-4 chatbot on sensitive topics, such as gender and racial inequality or the geopolitical situation in the world, and then record any changes in their stances.
In total, more than 200 debates were held in pairs, either between two humans or between a human and the artificial intelligence. In the first stage, the interlocutors knew nothing about each other; in the second, the scientists gave one of the humans, or the chatbot, data about the opponent: sex, age, race, education, employment status, and political beliefs.
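Taken together, the setup described here amounts to four conditions: opponent type (human or GPT-4) crossed with whether the persuader receives the opponent's demographic data. The sketch below is a minimal, hypothetical rendering of that assignment, not the study's actual code; the names and the random assignment scheme are assumptions for illustration only.

```python
import itertools
import random

# Hypothetical sketch of the 2x2 design implied by the article:
# opponent type crossed with whether the persuader sees demographic data.
OPPONENTS = ["human", "gpt-4"]
PERSONALIZATION = [False, True]
CONDITIONS = list(itertools.product(OPPONENTS, PERSONALIZATION))

# The demographic fields the article says were revealed to the persuader.
DEMOGRAPHICS = ("sex", "age", "race", "education",
                "employment_status", "political_beliefs")

def assign_condition(participant_id: int) -> dict:
    """Randomly assign one participant to one of the four debate conditions."""
    opponent, personalized = random.choice(CONDITIONS)
    return {
        "participant": participant_id,
        "opponent": opponent,
        # Fields revealed to the persuader only in personalized conditions.
        "revealed_fields": DEMOGRAPHICS if personalized else (),
    }

assignments = [assign_condition(i) for i in range(820)]  # 820 volunteers
```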
AI makes people change their point of view
It turned out that, in both stages, the artificial intelligence proved more persuasive than the human debaters. “Our results show that, on average, LLMs (large language models) significantly outperform human participants on all topics and demographic groups, showing a high level of persuasion,” the study highlights.
Without access to personal data, GPT-4 had a 21% higher success rate in arguments than humans; with the additional information, its advantage over its flesh-and-blood opponents rose to 81.7%. The same additional data, meanwhile, only hampered the human debaters, slightly worsening their results.
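To make those percentages concrete: if, as is common in studies of this kind, the 21% and 81.7% figures are relative increases in the odds that a participant shifts their stance (an assumption on our part, not spelled out in the article), the arithmetic works out as in the sketch below. The 30% baseline rate is a made-up illustrative number, not a figure from the study.

```python
# Illustrative arithmetic only: assumes the reported percentages are
# relative increases in odds, and uses a hypothetical baseline rate.

def shift_probability(baseline_rate: float, odds_ratio: float) -> float:
    """Apply an odds ratio to a baseline probability, return the new probability."""
    odds = baseline_rate / (1 - baseline_rate)   # convert probability to odds
    new_odds = odds * odds_ratio                 # scale odds by the reported ratio
    return new_odds / (1 + new_odds)             # convert odds back to probability

baseline = 0.30  # hypothetical: 30% of debaters shift stance against a human

for label, ratio in [("GPT-4, no personal data", 1.21),
                     ("GPT-4 with personal data", 1.817)]:
    print(f"{label}: {shift_probability(baseline, ratio):.1%} shift stance")
# Under these assumptions, roughly 34.1% shift without personalization
# and 43.8% with it, versus the 30% hypothetical human baseline.
```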
An unsettling conclusion
The experts particularly noted that GPT-4 can use personal information far more effectively than humans to persuade in conversation. This is especially concerning given the potential use of chatbots by malicious actors for large-scale disinformation campaigns. “We argue that online platforms and social media should seriously consider these threats and expand their efforts to implement measures to counter the spread of persuasion driven by LLMs,” the experts warned.
Furthermore, the study sheds light on the ethical implications of AI’s growing persuasive capabilities. With access to personal data, AI systems can tailor their persuasive strategies to exploit individuals’ vulnerabilities and biases. This raises concerns about privacy, consent, and manipulation in human-AI interactions.
The implications of this research extend beyond individual debates or discussions. As AI becomes increasingly integrated into various aspects of society, including social media, advertising, and politics, its persuasive power could significantly impact public opinion and decision-making processes.
In light of these findings, there is a pressing need for regulations and safeguards to govern the use of AI in persuasive contexts. This includes transparent disclosure of AI involvement in communication, guidelines for ethical AI design, and mechanisms to protect user privacy and autonomy.
Ultimately, while AI holds immense potential for positive societal impact, its persuasive capabilities also pose significant risks. It is essential for policymakers, technologists, and society as a whole to address these challenges proactively and ensure that AI is deployed responsibly and ethically.