AI Tools Used in Misinformation Campaigns to Influence US Politics
OpenAI's AI-powered chatbot, ChatGPT, has been used to spread misinformation to deepen U.S. political polarization, focusing on controversial issues.
OpenAI has discovered that its AI-powered chatbot, ChatGPT, was used to generate misinformation that spread across a range of websites and social media platforms in an effort to deepen U.S. political polarization. As the company reported last week, the content focused on a range of controversial issues, including the Gaza conflict and the Olympic Games, as well as the U.S. presidential election. Despite the capabilities of these AI tools, Microsoft's investigation last week into an Iranian misinformation campaign found that such efforts have so far achieved little success.
The content contained an array of false information and negative opinions about the presidential candidates and was linked to a batch of websites that Microsoft identified last week as part of a campaign to spread fake news. The situation highlights one of the threats posed by new AI technologies in information warfare. It leaves experts and other stakeholders asking whether stronger measures are needed to prevent AI tools from being used to manipulate the opinions of vulnerable audiences.