OpenAI's three top leaders co-author an article calling for an international body to oversee superintelligence

Source: The Paper

Reporter Fang Xiao

"We may end up needing something like the IAEA (International Atomic Energy Agency) for superintelligence efforts; any effort beyond a certain threshold of capabilities (or resources like computing) would need to be checked by an international authority, requiring audits, testing Compliance with security standards, restrictions on the degree of deployment and level of security, and so on."

OpenAI's leadership published "Governance of Superintelligence" on the company's official blog.

Advances in artificial intelligence are coming fast enough, and the potential dangers are clear enough, that the leadership of OpenAI, the developer of ChatGPT, has proactively declared that the world needs an international body to govern superintelligence, similar to the one that governs nuclear energy.

On May 22 local time, OpenAI co-founder Sam Altman, president Greg Brockman, and chief scientist Ilya Sutskever co-authored a post on the company's blog, arguing that the pace of innovation in artificial intelligence is so rapid that we cannot expect existing institutions to adequately control the technology.

The article, titled "Governance of Superintelligence," acknowledges that AI will not manage itself: "We are likely to eventually need something like the IAEA (International Atomic Energy Agency) for superintelligence efforts; any effort above a certain capability (or resources, such as compute) threshold will need to be subject to an international authority that can require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, and so on."

Leading AI researcher and critic Timnit Gebru said something similar in an interview with The Guardian that day: "Unless there is external pressure to do something different, companies are not just going to self-regulate. We need regulation, and we need something better than a pure profit motive." Gebru was fired from Google after speaking out about the dangers of artificial intelligence.

OpenAI's proposal could serve as a conversation starter for the industry, showing that the world's largest AI brands and vendors support regulation and urgently need public oversight, even though they "don't yet know how to design such a mechanism."

Last week, Altman testified before the U.S. Congress that OpenAI was "very concerned" about elections being influenced by AI-generated content, and he worried that the AI industry could "cause significant harm to the world." "I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that," he said. "We want to work with the government to prevent that from happening." Altman suggested that Congress create a new agency to license AI technologies "above a certain scale of capabilities," and that before an AI system is released to the public, models should undergo independent audits by experts who can judge whether they comply with those regulations.

The following is the full text of "Governance of Superintelligence":

Given the picture as we see it now, it's conceivable that within the next ten years, AI systems will exceed expert skill levels in most domains and carry out as much productive activity as one of today's largest corporations.

In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future, but we have to manage risk to get there. Given the possibility of existential risk, we cannot just be reactive. Nuclear energy is a commonly cited historical example of a technology with this property; synthetic biology is another.

We must also mitigate the risks of today's AI technology, but superintelligence will require special treatment and coordination.

A starting point

Many ideas matter for giving us a good chance at successfully navigating this development; here we lay out our initial thinking on three of them.

First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence proceeds in a way that both keeps us safe and helps these systems integrate smoothly with society. There are many ways this could be accomplished: the world's major governments could establish a project that many of the current efforts become part of, or we could collectively agree (with the backing of the new organization proposed below) that the growth of frontier AI capability is limited to a certain rate per year.

Of course, individual companies are also expected to act responsibly and with extremely high standards.

Second, we are likely to eventually need something like the IAEA (International Atomic Energy Agency) for superintelligence efforts; any effort above a certain capability (or resources, such as compute) threshold will need to be subject to an international authority that can require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, and so on. Tracking compute and energy usage could go a long way and gives us some hope that this idea is actually implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require; as a second step, individual countries could implement them. It would be important that such an agency focus on reducing existential risk, not on issues that should be left to individual countries, such as defining what an AI should be allowed to say.

Third, we need the technical capability to make superintelligence safe. This is an open research problem to which we and others are devoting a lot of effort.

What is not in scope

We believe it is important to allow companies and open source projects to develop models below a significant threshold of capability without the kind of regulation we describe here (including onerous mechanisms such as licenses or audits).

Today's systems will create tremendous value in the world, and while they do have risks, the level of those risks feels commensurate with other Internet technologies, and society's likely approaches seem appropriate.

By contrast, the systems we are concerned with will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technologies far below that bar.

Public input and potential

But the governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. We believe people around the world should democratically decide on the bounds and defaults for AI systems. We don't yet know how to design such a mechanism, but we plan to experiment with its development. We continue to believe that, within these wide bounds, individual users should have a great deal of control over how the AI they use behaves.

Given the risks and difficulties, it's worth considering why we're building this technology.

At OpenAI, we have two fundamental reasons. First, we believe it will lead to a much better world than what we can imagine today (we are already seeing early examples of this in areas such as education, creative work, and personal productivity). The world faces many problems, and we will need much more help to solve them; this technology can improve our societies, and the creative ability of everyone using these new tools is certain to amaze us. The economic growth and increase in quality of life will be astonishing.

Second, we believe it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the upsides are so tremendous, the cost of building it falls every year, the number of actors building it is rapidly increasing, and it is inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that is not guaranteed to work. So we have to get it right.
