Urgent Need for AI Regulation Arises
The recent emergence of explicit deepfake photos of Taylor Swift has sparked demands for regulation of artificial intelligence (AI). The disturbing images circulated widely on platforms like X and Telegram, drawing outrage from politicians in the United States. One photo allegedly received up to 47 million views, prompting calls to criminalize such acts. U.S. Representative Joe Morelle described the spread of the pictures as “appalling.”
In response, X pledged immediate action, actively removing the images and taking measures against the accounts responsible for spreading them. X’s team said it is closely monitoring the situation to prevent further violations of celebrities’ privacy and to ensure the offensive content is removed.
A 2023 study found that the creation of doctored images using AI tools surged 550% between 2019 and 2023. This alarming trend has heightened concerns among governments worldwide about the need for AI regulation.
More Deepfakes on the Rampage
In December, a deepfake video featuring Ripple CEO Brad Garlinghouse surfaced on YouTube. The video showed Garlinghouse urging XRP holders to send in their coins with a promise of doubling them — a fraudulent scheme. While X promptly removed the video from its platform, Google initially refused to take it down from YouTube, causing frustration within the community.
Additionally, Singapore Prime Minister Lee Hsien Loong warned the public about a deepfake video falsely showing him endorsing a non-existent crypto investment platform linked to Elon Musk. These incidents have further underscored the urgent need for AI regulation.
Hot Take: Governments Taking Action on AI Regulation
Governments worldwide are recognizing the pressing need for AI regulation as deepfake content grows more prevalent. U.S. President Biden has directed certain federal agencies to develop standards for testing AI systems, covering cybersecurity and other risks.
Similarly, the European Union has prioritized building a robust AI regulatory framework, with multiple agencies collaborating on the effort. The rise of deepfake technology has clearly pushed governments to confront the potential harms of AI, underscoring the importance of effective regulation in safeguarding individuals’ privacy and preventing misinformation.