Microsoft and Meta Crack Down on Misleading AI-Generated Political Ads
Microsoft has announced new policies aimed at curbing the creation of deceptive political ads, particularly those generated using AI, as the 2024 election season approaches. The move follows Meta’s recent announcement of similar measures. Microsoft’s President Brad Smith and VP of Technology for Fundamental Rights Teresa Hutson detailed the company’s approach to AI in political advertising in a recent blog post.
The company’s commitment to election protection includes providing transparent and authoritative information about elections, enabling candidates to confirm the origins of campaign material, safeguarding political campaigns against cyber threats, and offering recourse when AI distorts their likeness or content.
To help campaigns maintain control of their content, Microsoft is launching a new tool called “Content Credentials as a Service,” which uses digital watermarking technology to encode details about a piece of content’s origin. Microsoft is also launching an Election Communications Hub to help secure elections.
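Microsoft has not published implementation details in this announcement, but the general pattern behind content-provenance credentials (such as the C2PA standard that Content Credentials builds on) is to bind origin metadata to a cryptographic fingerprint of the asset and sign the result, so any later tampering can be detected. The Python sketch below is a simplified, hypothetical illustration of that idea, not Microsoft’s service: the manifest fields, the key, and the HMAC-based signature are stand-ins, and real Content Credentials use certificate-backed asymmetric signatures embedded in the file itself.

# Conceptual sketch only: a simplified stand-in for content-provenance credentials.
# Real Content Credentials follow the C2PA specification; here we just hash the
# asset and sign a small JSON "manifest" to illustrate the binding of origin
# details to the content.
import hashlib
import hmac
import json

SECRET_KEY = b"campaign-signing-key"  # hypothetical key; real systems use certificate-backed asymmetric keys

def make_manifest(asset_bytes: bytes, creator: str, tool: str) -> dict:
    """Bind origin details to a hash of the content and sign them."""
    digest = hashlib.sha256(asset_bytes).hexdigest()
    manifest = {"creator": creator, "tool": tool, "sha256": digest}
    manifest["signature"] = hmac.new(
        SECRET_KEY, json.dumps(manifest, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    return manifest

def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check that the asset still matches the signed origin details."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    expected = hmac.new(
        SECRET_KEY, json.dumps(claimed, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and hashlib.sha256(asset_bytes).hexdigest() == claimed["sha256"]
    )

ad_image = b"...campaign image bytes..."
credentials = make_manifest(ad_image, creator="Example Campaign", tool="Example Editor")
print(verify_manifest(ad_image, credentials))              # True: content matches its credentials
print(verify_manifest(ad_image + b"tamper", credentials))  # False: content has been altered

The point of the design is that the credentials travel with the content: a platform or voter can recompute the hash and verify the signature to confirm where an ad came from and whether it has been modified since it was signed.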
Meanwhile, Meta has announced that advertisers must disclose when AI has been used to create or alter ads related to social issues, elections, and politics. Advertisers will also be required to complete an authorization process and include a disclaimer stating who paid for the ad.
The Battle Against Deepfakes Continues
As generative AI continues to advance rapidly, policymakers, corporations, and law enforcement are working to catch up. Cybercriminals have also turned to generative AI models to accelerate phishing attacks. The spread of AI deepfakes online has internet watchdogs sounding the alarm.
In response to this trend, a bipartisan group of U.S. Senators last month proposed a bill called the NO FAKES Act, which would hold individuals and companies liable for using a person’s likeness in digital media without their permission. Violations would carry damages of $5,000 each, and the protections would apply to every individual and extend up to 70 years after death.
Hot Take: Protecting Free and Fair Elections in the Age of AI
As deepfake technology continues to evolve at a rapid pace, it is crucial for tech companies like Microsoft and Meta to implement measures that protect the integrity of electoral processes. By introducing policies aimed at curbing deceptive political ads generated using AI, these companies are taking an important step toward safeguarding free and fair elections in the face of emerging technological threats.