US Senators Request Information on AI Scams
Four US senators have written a letter to Federal Trade Commission (FTC) Chair Lina Khan, requesting information on the FTC’s efforts to track the use of artificial intelligence (AI) in scams targeting older Americans. The senators emphasized the need to respond effectively to AI-enabled fraud and deception, and asked the FTC to share how it is gathering data on AI scams and ensuring that this data is accurately reflected in its Consumer Sentinel Network (Sentinel) database. They also posed several questions to the FTC regarding its capacity to identify AI-powered scams, tag them appropriately in Sentinel, detect generative AI scams that otherwise go unnoticed, and use AI to process Sentinel’s data.
Global Guidelines for Secure AI Models Released
The US, UK, Australia, and 15 other countries have jointly released global guidelines aimed at protecting artificial intelligence (AI) models from tampering. The guidelines urge companies to make their models “secure by design” and recommend keeping a tight leash on the infrastructure underlying AI models, monitoring for tampering both before and after release, and training staff on cybersecurity risks. However, the guidelines do not address controls around image-generating models, deepfakes, or the data collection methods used to train models.
Hot Take: Strengthening Protections Against AI Scams
US senators have taken action by requesting information on the FTC’s efforts to combat AI scams targeting older Americans, reflecting growing concern about protecting vulnerable individuals from fraud facilitated by artificial intelligence. By seeking data on AI scam practices and urging accurate reporting in the FTC’s database, the senators underscore that understanding the extent of this threat is a prerequisite to countering it effectively. The global guidelines for securing AI models, meanwhile, demonstrate a collaborative effort among countries to safeguard against tampering. While these guidelines provide valuable recommendations, gaps remain: risks such as deepfakes and the data collection methods used in training models are not yet addressed and warrant attention.