The G7 Set to Establish an AI Code of Conduct
The Group of Seven (G7) industrialized nations are expected to reach an agreement on an artificial intelligence (AI) code of conduct for developers on October 30th, according to a report by Reuters. The code aims to promote safe and trustworthy AI worldwide while addressing the technology's risks. It consists of 11 points offering voluntary guidance to organizations developing advanced AI systems, with an emphasis on transparency and robust security controls.
Government Initiatives on AI Regulation
The G7's push for an AI code of conduct aligns with broader global moves toward AI regulation. The European Union advanced its landmark AI Act in June, while the United Nations formed a 39-member advisory body to address AI governance at the global level. China, meanwhile, put its own AI regulations into effect in August.
Industry Response
OpenAI, the developer of the popular AI chatbot ChatGPT, announced plans to create a "preparedness" team to assess a range of AI-related risks. The move reflects the industry's growing recognition that responsible AI development and risk mitigation go hand in hand.
Conclusion: Global Efforts for Responsible and Trustworthy AI
The G7's upcoming agreement on an AI code of conduct demonstrates its commitment to fostering safe and secure AI technologies. By establishing guidelines for developers, promoting transparency, and emphasizing security controls, the G7 aims to harness the benefits of AI while managing its potential risks. The initiative complements other international efforts to regulate AI and ensure its responsible, trustworthy use across sectors.