Guidelines for Protecting Elections with AI Set by Microsoft and Meta

Microsoft and Meta Crack Down on Misleading AI-Generated Political Ads

Microsoft has announced new policies aimed at curbing the creation of deceptive political ads, particularly those generated using AI, as the 2024 election season approaches. The move follows Meta’s recent announcement of similar measures. Microsoft’s President Brad Smith and VP of Technology for Fundamental Rights Teresa Hutson detailed the company’s approach to AI in political advertising in a recent blog post.

The company’s commitment to election protection includes providing transparent and authoritative information about elections, enabling candidates to confirm the origins of campaign material, safeguarding political campaigns against cyber threats, and offering recourse when AI distorts their likeness or content.

To help campaigns maintain control of their content, Microsoft is launching a new tool called “Content Credentials as a Service,” which uses digital watermarking technology to encode details about a piece of content’s origin. Microsoft is also launching an Election Communications Hub to help secure elections.
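Microsoft has not published implementation details for the tool, but the general idea behind content credentials can be sketched: bind provenance metadata to a cryptographic hash of the media and sign the combined record, so that any change to either the content or the metadata is detectable. The Python sketch below is a hypothetical illustration only; the key, field names, and the use of HMAC are assumptions for clarity, and the real system builds on the far more elaborate C2PA standard (certificates, manifests, and embedding in the file itself).

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for a real signing key; actual content
# credentials use certificate-backed public-key signatures.
SECRET_KEY = b"campaign-signing-key"

def issue_credential(media_bytes: bytes, metadata: dict) -> dict:
    """Attach a signed provenance record to a piece of media."""
    payload = json.dumps(
        {"sha256": hashlib.sha256(media_bytes).hexdigest(), **metadata},
        sort_keys=True,
    ).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": signature}

def verify_credential(media_bytes: bytes, credential: dict) -> bool:
    """Check the signature, then check the media hash still matches."""
    payload = credential["payload"].encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["signature"]):
        return False  # provenance record was altered
    recorded_hash = json.loads(payload)["sha256"]
    return recorded_hash == hashlib.sha256(media_bytes).hexdigest()

ad = b"official campaign video bytes"
cred = issue_credential(ad, {"creator": "Example Campaign", "ai_generated": False})
print(verify_credential(ad, cred))              # True: content untouched
print(verify_credential(ad + b"edit", cred))    # False: content changed
```

The point of the design is that the signature covers the hash rather than the raw media: a verifier can confirm both who vouched for the content and that the bytes have not been modified since, which is exactly the assurance campaigns need when AI-altered copies circulate.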

Meanwhile, Meta has announced that political campaigns must disclose the use of AI in their ads related to social issues, elections, and politics. Advertisers will be required to complete an authorization process and include a disclaimer stating who paid for the ad.

The Battle Against Deepfakes Continues

As generative AI continues to advance rapidly, policymakers, corporations, and law enforcement are working to catch up. Cybercriminals have also turned to generative AI models to accelerate phishing attacks. The spread of AI deepfakes online has internet watchdogs sounding the alarm.

In response to this trend, a bipartisan group of U.S. Senators last month proposed a bill called the “No Fakes Act,” which would prohibit the unauthorized use of a person’s likeness in media. Violations would carry a $5,000 penalty per violation plus damages, and the protection would apply to any individual, extending up to 70 years after death.

Hot Take: Protecting Free and Fair Elections in the Age of AI


As deepfake technology continues to evolve at a rapid pace, it is crucial for tech companies like Microsoft and Meta to implement measures that protect the integrity of electoral processes. By introducing policies aimed at curbing deceptive political ads generated using AI, these companies are taking an important step toward safeguarding free and fair elections in the face of emerging technological threats.


Demian Crypter is a crypto analyst, researcher, and editor known for translating complex digital-currency topics into clear, accessible writing for readers at every level of expertise.