EU Negotiating Stricter Regulations for Large AI Systems
Representatives of the European Union are currently negotiating additional regulations for the largest artificial intelligence (AI) systems, according to a report from Bloomberg. The discussions, involving the European Commission, the European Parliament, and EU member states, focus on the potential impact of large language models (LLMs) such as Meta’s Llama 2 and OpenAI’s GPT-4. The aim is to impose additional restrictions on these models under the forthcoming AI Act without placing excessive burdens on new startups.
Preliminary Stages of Agreement
Sources close to the matter reveal that negotiators have reached a preliminary agreement on the topic, though the details are still being finalized. The proposed rules for LLMs would follow an approach similar to the EU’s Digital Services Act (DSA), which sets standards for platforms and websites regarding user data protection and illegal content. Under the DSA, the largest platforms, such as those operated by Alphabet Inc. and Meta Platforms Inc., are subject to stricter controls.
First Mandatory AI Rules from a Western Government
The EU’s AI Act aims to be one of the first mandatory sets of AI rules established by a Western government. China has already implemented its own AI regulations, in effect since August 2023. Under the EU’s proposed rules, companies developing and deploying AI systems would be required to conduct risk assessments and label AI-generated content, and would be prohibited from using biometric surveillance. However, the legislation has yet to be enacted, and member states retain the authority to disagree with any proposals put forth by parliament.
China’s Implementation of AI Laws
Since China implemented its AI laws, more than 70 new AI models have reportedly been released in the country, highlighting China’s proactive approach to regulating AI technologies while promoting innovation in the sector.
Hot Take: EU Strives for Responsible AI Regulation
The European Union’s negotiations over stricter regulations for large AI systems demonstrate its commitment to ensuring the responsible and accountable use of artificial intelligence. By addressing the potential risks associated with LLMs and other advanced AI models, the EU aims to strike a balance between fostering innovation and safeguarding against potential harms. If enacted, the AI Act would set a significant precedent as one of the first mandatory AI rulebooks from a Western government, aligning with global efforts to establish comprehensive frameworks governing the development and deployment of AI technologies.