A blog post coauthored by OpenAI CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever warns that the development of artificial intelligence (AI) needs heavy regulation to prevent potentially catastrophic scenarios.
“Now is a good time to begin thinking about the governance of superintelligence,” the authors wrote, acknowledging that future AI systems could significantly surpass even AGI in capability. “Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill levels in most domains, and carry out as much productive activity as one of today’s largest corporations.”
Echoing concerns Altman raised in his recent testimony before Congress, the trio outlined three pillars they deemed critical for strategic future planning.
The “starting point”
First, OpenAI believes there must be a balance between control and innovation, and called for a social agreement “that allows us to both maintain safety and assist smooth integration of these systems with society.”
Second, they championed the idea of an “international authority” tasked with system inspections, audit enforcement, safety standard compliance testing, and deployment and security restrictions. Drawing parallels to the International Atomic Energy Agency, they suggested what a worldwide AI regulatory body might look like.
Finally, they emphasized the need for the “technical capability” to maintain control over a superintelligence and keep it “safe.” What this entails remains nebulous, even to OpenAI, but the post warned against onerous regulatory measures like licenses and audits for technology falling below the bar for superintelligence.
In essence, the idea is to keep the superintelligence aligned with its trainers’ intentions, preventing a “foom scenario”: a rapid, uncontrollable explosion in AI capabilities that outpaces human control.
OpenAI also warns of the potentially catastrophic impact that uncontrolled development of AI models could have on future societies. Other specialists in the field have raised similar concerns, from the so-called godfather of AI to the founders of AI companies like Stability AI, and even former OpenAI employees once involved in training the GPT LLMs. This urgent call for a proactive approach to AI governance and regulation has caught the attention of regulators around the world.
The Challenge of a “Safe” Superintelligence
OpenAI believes that once these points are addressed, the potential of AI can be more freely exploited for good: “This technology can improve our societies, and the creative ability of everyone to use these new tools is certain to astonish us,” they wrote.
The authors also noted that the field is already growing at an accelerated pace, and that this is unlikely to change. “Stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work,” the blog reads.
Despite these challenges, OpenAI’s leadership remains committed to exploring the question, “How can we ensure that the technical capability to keep a superintelligence safe is achieved?” The world doesn’t have an answer yet, but it definitely needs one, and it’s not one that ChatGPT can provide.