The world’s leading AI scientists are urging world governments to work together to regulate the technology before it’s too late.
Three Turing Award winners (often described as the Nobel Prize of computer science) who helped spearhead the research and development of AI joined a dozen top scientists from around the world in signing an open letter calling for better safeguards on advanced AI.
The scientists claimed that as AI technology rapidly advances, any mistake or misuse could bring grave consequences for the human race.
“Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity,” the scientists wrote in the letter. They also warned that, given the rapid pace of AI development, these “catastrophic outcomes” could come any day.
The scientists outlined the following steps to begin immediately addressing the risk of malicious AI use:
Government AI safety bodies
Governments need to collaborate on AI safety precautions. Some of the scientists’ ideas included encouraging countries to develop specific AI authorities that respond to AI “incidents” and risks within their borders. Those authorities would ideally cooperate with each other, and in the long term, a new international body should be created to prevent the development of AI models that pose risks to the world.
“This body would ensure states adopt and implement a minimal set of effective safety preparedness measures, including model registration, disclosure, and tripwires,” the letter read.
Developer AI safety pledges
Another idea is to require developers to be intentional about guaranteeing the safety of their models, promising that they will not cross red lines. Developers would vow not to create AI “that can autonomously replicate, improve, seek power or deceive their creators, or those that enable building weapons of mass destruction and conducting cyberattacks,” as laid out in a statement by top scientists during a meeting in Beijing last year.
Independent research and tech checks on AI
Another proposal is to create a series of global AI safety and verification funds, bankrolled by governments, philanthropists and corporations that would sponsor independent research to help develop better technological checks on AI.
Among the experts imploring governments to act on AI safety were three Turing Award winners: Andrew Yao, the mentor of some of China’s most successful tech entrepreneurs; Yoshua Bengio, one of the most cited computer scientists in the world; and Geoffrey Hinton, who taught OpenAI cofounder and former chief scientist Ilya Sutskever and who spent a decade working on machine learning at Google.
Cooperation and AI ethics
In the letter, the scientists lauded already existing international cooperation on AI, such as a May meeting between leaders from the U.S. and China in Geneva to discuss AI risks. Yet they said more cooperation is needed.
The development of AI should come with ethical norms for engineers, similar to those that apply to doctors or lawyers, the scientists argued. Governments should think of AI less as an exciting new technology and more as a global public good.
“Collectively, we must prepare to avert the attendant catastrophic risks that could arrive at any time,” the letter read.