The United States National Institute of Standards and Technology (NIST) Launches the Artificial Intelligence Safety Institute Consortium
The United States National Institute of Standards and Technology (NIST), under the Department of Commerce, has taken a significant step toward fostering a safe and trustworthy environment for Artificial Intelligence (AI) with the creation of the Artificial Intelligence Safety Institute Consortium (“Consortium”). NIST announced the Consortium’s formation in a notice published on November 2, 2023, describing a collaborative effort to establish a new measurement science that identifies proven, scalable techniques and metrics. These metrics are intended to advance the development and responsible use of AI, particularly advanced AI systems such as the most capable foundation models.
Consortium Objective and Collaboration
The Consortium’s core objective is to address the broad risks posed by AI technologies and to protect the public while encouraging innovation in AI. NIST seeks to draw on the wider community’s interests and capabilities to identify proven, scalable, and interoperable measurements and methodologies for the responsible development and use of trustworthy AI.
Key activities outlined for the Consortium include collaborative Research and Development (R&D), shared projects, and the evaluation of test systems and prototypes. The effort responds to the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, dated October 30, 2023, which set out a broad slate of priorities for AI safety and trust.
Call for Participation and Cooperation
To achieve these objectives, NIST has invited interested organizations to contribute technical expertise, products, data, and/or models through the AI Risk Management Framework (AI RMF). The call for letters of interest is part of NIST’s effort to collaborate with non-profit organizations, universities, government agencies, and technology companies. Collaborative activities within the Consortium are expected to begin no earlier than December 4, 2023, once enough completed and signed letters of interest have been received. Participation is open to any organization that can contribute to the Consortium’s activities; selected participants must enter into a Consortium Cooperative Research and Development Agreement (CRADA) with NIST.
Addressing AI Safety Challenges
The establishment of the Consortium is viewed as a positive step toward catching up with other developed nations in regulating AI development, particularly with respect to user and citizen privacy, security, and unintended consequences. The move marks a milestone in the Biden administration’s adoption of specific policies for managing AI in the United States.
The Consortium will be instrumental in developing new guidelines, tools, methods, and best practices that support the evolution of industry standards for developing and deploying AI in a safe, secure, and trustworthy manner. It arrives at a pivotal time, not only for AI technologists but for society, and is poised to play a critical role in ensuring that AI aligns with societal norms and values while promoting innovation.
Hot Take: NIST Paves the Way for Responsible AI Development
NIST’s launch of the Artificial Intelligence Safety Institute Consortium is a meaningful advance toward a safe and trustworthy environment for AI. By collaborating with a broad range of organizations and experts, NIST aims to identify scalable techniques and measurements that promote responsible AI development, an initiative that aligns with the Biden administration’s focus on managing AI technologies effectively.