AI Chatbot GPT-3 Better at Spreading Disinformation than Humans, Study Finds
Disinformation, propaganda, and alternative facts have long been tools of politics and social engineering, but the rise of social media and AI technology has significantly accelerated the spread of biased or false information. A recent study published in Science Advances finds that OpenAI’s AI chatbot, GPT-3, is more effective at spreading disinformation than humans. The study surveyed participants to determine whether they could distinguish disinformation from accurate information, and whether they could tell if a tweet was written by a human or by AI. Participants were unable to distinguish AI-generated tweets from those written by humans.
- OpenAI’s GPT-3 chatbot is better at spreading disinformation than humans
- A study surveyed participants to determine their ability to distinguish between AI-generated and human tweets
- Participants were unable to differentiate AI-generated tweets from those created by humans
- The study highlights the potential impact of advanced AI text generators on the dissemination of information
- Regulations on AI development may be necessary to limit misuse and ensure transparency
This study raises concerns about the consequences of advanced AI technology being used to spread disinformation. As AI text generators grow more powerful, their impact on the dissemination of information will need close monitoring, and regulation may be necessary to prevent misuse and ensure transparency in AI development. The rise of AI-generated misinformation and disinformation has already prompted calls for an international agency to oversee the development of artificial intelligence and mitigate its negative effects on global issues such as public health and democracy.