A blog post coauthored by OpenAI CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever warns that the development of artificial intelligence needs heavy regulation to prevent potentially catastrophic scenarios.
“Now is a good time to start thinking about the governance of superintelligence,” wrote Altman, acknowledging that future AI systems may significantly surpass even AGI in capability. “Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.”
Echoing concerns Altman raised in his recent testimony before Congress, the trio outlined three pillars they deemed critical for strategic future planning.
The “starting point”
First, OpenAI believes there must be a balance between control and innovation, and pushed for a social agreement “that allows us to both maintain safety and help smooth integration of these systems with society.”
Next, they championed the idea of an “international authority” tasked with inspecting systems, enforcing audits, testing for compliance with safety standards, and placing restrictions on deployment and security. Drawing parallels to the International Atomic Energy Agency, they suggested what a global AI regulatory body might look like.
Last, they emphasized the need for the “technical capability” to maintain control over superintelligence and keep it “safe.” What this entails remains nebulous, even to OpenAI, but the post warned against onerous regulatory measures like licenses and audits for technology falling below the bar of superintelligence.
In essence, the idea is to keep superintelligence aligned with its trainers’ intentions, preventing a “foom scenario,” a rapid, uncontrollable explosion in AI capabilities that outpaces human control.
OpenAI also warns of the potentially catastrophic impact that uncontrolled development of AI models could have on future societies. Other experts in the field have already raised similar concerns, from the godfather of AI to the founders of AI companies like Stability AI, and even former OpenAI employees who worked on training the GPT LLM. This urgent call for a proactive approach to AI governance and regulation has caught the attention of regulators around the world.
The Challenge of a “Safe” Superintelligence
OpenAI believes that once these points are addressed, the potential of AI can be more freely exploited for good: “This technology can improve our societies, and the creative ability of everyone to use these new tools is certain to astonish us,” they wrote.
The authors also noted that the space is already growing at an accelerated pace, and that isn’t going to change. “Stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work,” the blog reads.
Despite these challenges, OpenAI’s leadership remains committed to exploring the question, “How can we ensure that the technical capability to keep a superintelligence safe is achieved?” The world doesn’t have an answer yet, but it definitely needs one, and it’s one that ChatGPT can’t provide.