The Machine Intelligence Research Institute (MIRI) has issued a statement calling on the world to halt development of AI systems smarter than humans, arguing that their potential risks could lead to human extinction. The call has won support from some technology-industry leaders but has also stirred controversy. MIRI stresses that AI is advancing too quickly for legislation and oversight to keep pace, so more proactive measures are needed, such as the mandatory installation of "off switches," to address AI's potential for malicious behavior and unpredictable risk. The institute holds that development of more advanced AI systems should continue only once AI can be guaranteed safe and controllable.
The non-profit research group Machine Intelligence Research Institute (MIRI) has called on the world to halt research on foundation or "frontier" models, fearing that their safety problems could threaten human survival. A foundation model is an AI system that can be applied across multiple modalities. MIRI believes foundation models will become smarter than humans and could potentially "destroy humanity."

Some leading figures in technology, including Elon Musk and Steve Wozniak, have called for a moratorium on developing foundation models more powerful than OpenAI's GPT-4. MIRI wants to go a step further: its recently unveiled communications strategy calls for a complete halt to attempts to build any system smarter than humans.
The group said: "Policymakers deal with problems primarily through compromise: they give ground in one place to gain something in another. We fear that much of the legislation required to preserve human survival will pass through the usual political process and be worn down into an ineffective compromise. Meanwhile, the clock is ticking as AI labs continue to invest in developing and training more powerful systems."
MIRI wants governments to force companies developing foundation models to install "kill switches" that can shut down an AI system if it develops malicious or "x-risk" tendencies.
The nonprofit said it is not opposed in principle to intelligent systems smarter than humans, but it does not want them built "until we know how to build this type of AI safely."
MIRI was founded in 2000 by Eliezer Yudkowsky, with supporters including Peter Thiel and Vitalik Buterin, co-creator of the Ethereum cryptocurrency. The Future of Life Institute is also one of MIRI's major contributors.
Bradley Shimmin, principal analyst at AI and data-analytics research firm Omdia, said MIRI will have a hard time convincing lawmakers because it lacks supporting research. "The market has considered these issues and concluded that the current and near-future state of the art of transformer-based GenAI models can do little beyond creating useful representations of complex topics," Shimmin said. He added that MIRI nonetheless correctly identifies the knowledge gap between those building artificial intelligence and those regulating it.
Highlights:
- The non-profit research group Machine Intelligence Research Institute (MIRI) has called on the world to halt research on foundation or "frontier" models, fearing that their safety problems could threaten human survival.
- MIRI wants governments to force companies developing foundation models to install "kill switches" that can shut down an AI system if it develops malicious or "x-risk" tendencies.
- Bradley Shimmin, principal analyst at AI and data-analytics research firm Omdia, said MIRI will have a hard time convincing lawmakers due to a lack of supporting research.
MIRI's call has triggered broad discussion about AI safety and the pace of AI development. Although its position is extreme, it highlights concerns about the potential risks of artificial intelligence that deserve serious consideration by industry and government. Going forward, how to balance the development of artificial intelligence against its safety risks will be an important question.