OpenAI announces it is going all-in on AGI, proposes four principles, and will reduce public disclosure of AI research

OpenAI is known as a non-profit research institute that pursues open research into general AI for the well-being of mankind. Its original purpose was the "safe AI" that Musk repeatedly emphasized. Now its mission has shifted: OpenAI has announced that it is going all-in on AGI (artificial general intelligence), has put forward four principles, and will reduce the public publication of its AI research.

When Musk co-founded OpenAI, its goal was to determine how artificial intelligence could better serve humanity. Recently, OpenAI released a new company charter. According to the charter, OpenAI's mission is to ensure the safe development of "highly autonomous systems that outperform humans at most economically valuable work". In other words, it wants to make machines smarter than humans.

This mission is artificial general intelligence (AGI): depending on what it is used for, it could be a holy grail or a Pandora's box.

OpenAI has announced it is going all-in on AGI. Although Musk recently stepped down from the OpenAI board (officially to avoid potential future conflicts of interest between OpenAI and Tesla), OpenAI's ambitions remain.

Some experts, such as Google's Ray Kurzweil, believe we are only a few decades away from the singularity (the point at which machines become smarter than humans). Others think it will never happen. In discussions about AGI, most people are either still arguing over semantics, or are academics more worried about funding than about any threat to human survival.

Fortunately, OpenAI is a non-profit organization. It has more than $1 billion in funding and the support of some of the brightest talent in the AI field (as well as large companies). It can focus on the technology without having to please shareholders to keep its funding. To achieve its goal, OpenAI is committed to developing AGI carefully and avoiding an AI arms race, which could cause researchers to lose their focus on safety.

OpenAI's new charter states:

We will attempt to directly build safe and beneficial AGI, but we will also consider our mission fulfilled if our work helps others achieve this outcome.

In the document, OpenAI even states that if another organization appears to be on the verge of achieving AGI before it does, OpenAI will stop competing and assist that project.

Most people regard AGI as science fiction, far removed from reality; even AI believers think it is too early to worry about it. But to some extent, we need strong guidelines to help developers avoid unpleasant surprises (such as destroying humanity). Waiting until robots misbehave to come up with a common-sense policy would be too late.

OpenAI co-founder Ilya Sutskever said: "From its founding, OpenAI had a clear template for building a strong technical laboratory. But there is no real precedent for building an organization designed to ensure that the long-term impact of these technologies goes well. Over the past two years, we have built capabilities, safety, and policy teams from scratch, and each team has contributed to these principles. We think these technologies will affect everyone, so we are working with other institutions to make sure these principles serve not only us but the broader community."

OpenAI has not yet proposed a timetable for AGI, but it has already begun "the next phase of OpenAI", including increased investment in people and equipment, with the aim of "making major breakthroughs in artificial intelligence".

The OpenAI Charter

OpenAI's mission is to ensure that artificial general intelligence (AGI), by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but we will also consider our mission fulfilled if our work helps others achieve this outcome. To that end, we commit to the following principles:

Broadly distributed benefits

We commit to using any influence we obtain over AGI's deployment to ensure it is used for the benefit of all, and to avoiding uses of AI or AGI that harm humanity or unduly concentrate power.

Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but we will always work diligently to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

Long-term safety

We are committed to conducting the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.

We are concerned that late-stage AGI development could become a competitive race that leaves insufficient time for safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting that project. We will work out the specifics on a case-by-case basis, but a typical triggering condition might be "a better-than-even chance of success in the next two years".

Technical leadership

To be effective at addressing AGI's impact on society, OpenAI must be on the cutting edge of AI technology; policy and safety advocacy alone would not be enough.

We believe that AI will have broad societal impact before AGI, and we will strive to lead in those areas that are directly aligned with our mission and expertise.

Cooperative orientation

We will actively cooperate with other research and policy institutions; we will strive to create a global community that works together to address AGI's global challenges.

We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety concerns will reduce our public publication of research in the future, while we increase our sharing of safety, policy, and standards research.
