Biden Administration Launches U.S. AI Safety Institute Consortium
The Biden Administration has announced the launch of the U.S. AI Safety Institute Consortium (AISIC), following an executive order mandating the safe development and use of artificial intelligence. The consortium includes major AI rivals such as Amazon, Google, Apple, Microsoft, OpenAI, and NVIDIA.
A Collaborative Effort for Safe and Trustworthy AI
The consortium brings together AI developers, researchers, civil society organizations, and users with the goal of developing and deploying safe and trustworthy artificial intelligence. It aims to set safety standards and protect the innovation ecosystem.
Pulling Every Lever for Safety Standards
Commerce Secretary Gina Raimondo said President Biden directed the administration to pull every lever to set AI safety standards while protecting innovation, and described the AISIC as the means of delivering on that directive.
Building on the Executive Order
The consortium stems from the executive order President Biden signed in October. The order called for developing guidelines on evaluating AI models, managing risk, ensuring safety and security, and watermarking AI-generated content.
A Diverse Group of Participants
The consortium includes representatives from various sectors such as healthcare, academia, labor unions, banking, and state and local governments. International partners are also expected to collaborate.
Promoting Interoperable Tools for Safety
The consortium aims to establish a new measurement science for AI safety and will work with organizations in like-minded nations to develop interoperable safety tools for use worldwide.
Notable Absences
While the list of participating firms is extensive, some notable tech companies, including Tesla, Oracle, Broadcom, and TSMC, are absent.
Addressing Misuse of AI
The rise of AI-generated deepfakes and the misuse of generative AI tools have raised concerns. The U.S. Federal Communications Commission recently declared robocalls that use AI-generated voices illegal in the United States.
A Pledge for Responsible AI Development
Last year, several AI and tech companies, including Google, Microsoft, Amazon, and OpenAI, voluntarily pledged to develop AI responsibly. Collaboration and the sharing of best practices remain key to delivering on those commitments.
Hot Take: Promoting Safety and Trust in AI Development
The launch of the U.S. AI Safety Institute Consortium marks an important step toward safe and trustworthy artificial intelligence. By bringing together industry leaders, researchers, civil society, and government, the consortium can set safety standards without stifling the innovation ecosystem, while also addressing misuse of AI such as deepfakes and AI-generated robocalls. With a diverse membership and international collaboration, it is well positioned to develop interoperable safety tools for use worldwide. The initiative reflects a commitment to responsible AI development and underscores that meeting the challenges posed by artificial intelligence requires collaboration.