At the China-Singapore Artificial Intelligence Frontier and Governance Seminar, why did academicians from China and Singapore pour cold water on AI?
Author: Eve Cole
Update Time: 2024-11-15 12:24:01

Since the concept of artificial intelligence was first proposed in 1956, AI has developed from computational intelligence and perceptual intelligence toward cognitive intelligence. In recent years, with the emergence of ChatGPT and Sora, AI development has been advancing rapidly. However, at the China-Singapore Artificial Intelligence Frontier and Governance Seminar, jointly sponsored by the Chinese Academy of Engineering and the Singapore Academy of Engineering and organized by Tongji University today (October 28), many academicians poured cold water on the "AI fever".

"Endogenous network security issues are comprehensively challenging the underlying driving paradigm of today's digital ecosystem," said Wu Jiangxing, academician of the Chinese Academy of Engineering and professor at Fudan University. He said bluntly that, regrettably, neither the security system itself nor the network security guards tasked with protecting the premises can currently answer three soul-searching questions: Are there vulnerabilities? Are there backdoors or trapdoors? Are there problems when multiple security measures are layered on top of one another? He pointed out gravely that today's AI application systems suffer from a serious imbalance between security responsibilities and risks: no vendor can guarantee that its products are free of vulnerabilities and backdoors, and no testing agency can guarantee that the products submitted for inspection are free of them either. This has become an inescapable nightmare for every country in the AI era.

At the meeting, many experts argued that seemingly powerful artificial intelligence and large models still have shortcomings in energy consumption, safety, and ethics. The development of artificial intelligence is still under way, with much room left to grow, and scholars from China and Singapore should strengthen cooperation to guide AI toward becoming more energy-efficient, safer, and more virtuous.

Seemingly powerful artificial intelligence is actually full of hidden dangers

Speaking of network security risks, many people were particularly struck by the Microsoft blue-screen incident of July this year. On July 19, users in many countries around the world found their companies' computers showing a blue screen with the message "The device has encountered a problem and needs to be restarted." The blue-screen problem was subsequently confirmed to be related to a software update from the network security company CrowdStrike. In Wu Jiangxing's view, this is a typical case of a network security "bodyguard" stabbing humans in the back. "Artificial intelligence and security issues are closely intertwined; security flaws are genetic defects that modern computer architecture has carried from birth. The security of the environment in which AI operates must be taken seriously."
In this regard, Zheng Qinghua, president of Tongji University and academician of the Chinese Academy of Engineering, holds the same view. "While we fully affirm the major achievements of large models, we must also be deeply aware that they have some inherent flaws," Zheng Qinghua said, giving examples.

The first is excessive consumption of data and computing power. "One day the valuable information humans can mine from open-source data on the Internet will hit a ceiling, just as the rare metals humans mine from mineral resources will one day be exhausted."

The second inherent flaw is catastrophic forgetting and weak ability to generalize across scenarios. Zheng Qinghua explained that large models "favor the new and forget the old" and find it hard to draw inferences from one case to another: often a model adapted to scenario A then struggles with scenario B, and striking a balance between the two is not easy.

The third is weak reasoning ability. Large models are trained with an autoregressive algorithm, which prevents them from forming the kind of causal logical reasoning that humans build. Autoregressive generation also struggles with complex reasoning tasks that require backtracking and trial and error, which often leads large models to rely on spurious cues to solve tasks, a phenomenon known as the "Clever Hans effect" (a toy sketch at the end of this passage illustrates the limitation).

The fourth inherent flaw is that a large model does not know where it went wrong or why, let alone how to correct an error even once it is pointed out. Zheng Qinghua said bluntly that these inherent flaws give rise to problems such as hallucination and poor controllability in large models. "Especially in engineering applications and other scenarios where we need to know both what is happening and why, large models can be said to be powerless."

Wen Yonggang, academician of the Singapore Academy of Engineering and professor at Nanyang Technological University, believes human society is entering an era of dual transformations: digitalization and sustainability. In the digital transformation in particular, a large number of activities have moved from offline to online, consuming enormous computing resources and servers. Projections show that by 2030 the electricity consumption of Singapore's data centers will reach 12% of the country's total electricity consumption. Even more alarming, the extensive use of AI will also increase carbon emissions, with a devastating impact on the environment.
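Zheng Qinghua's third point, the inability of autoregressive generation to backtrack, can be made concrete with a toy example. The sketch below is a minimal illustration, not anything presented at the seminar: the toy_lm model and its probabilities are invented, standing in for a model that has learned a spurious cue. Once the greedy decoder commits to a token, it has no mechanism to revisit the choice, so the early mistake ends up in the final answer.

```python
# A minimal sketch (hypothetical, not from the seminar) of why plain
# greedy autoregressive decoding cannot backtrack: tokens are emitted
# one at a time and never revisited, so an early mistake is locked in.

def toy_lm(prefix):
    """Hypothetical next-token distribution for a toy 'language model'."""
    # The model has learned a spurious cue: after "2 + 2 =" it slightly
    # prefers the wrong answer, mimicking a Clever Hans-style shortcut.
    if prefix == ("2", "+", "2", "="):
        return {"5": 0.6, "4": 0.4}
    return {"<eos>": 1.0}

def greedy_decode(prompt, max_steps=8):
    tokens = list(prompt)
    for _ in range(max_steps):
        dist = toy_lm(tuple(tokens))
        nxt = max(dist, key=dist.get)   # commit to the argmax token...
        if nxt == "<eos>":
            break
        tokens.append(nxt)              # ...with no way to undo the choice
    return tokens

print(greedy_decode(("2", "+", "2", "=")))  # ['2', '+', '2', '=', '5']
```

Decoding strategies such as beam search or sampling with external verification can mitigate this, but they do not give the model the kind of trial-and-error reasoning Zheng Qinghua describes, which is why complex backtracking tasks remain difficult for purely autoregressive models.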
Guiding AI to run in the right direction

When AI races ahead with its eyes closed, how can humans, the developers of the technology, keep hold of the steering wheel? At the meeting, experts offered feasible suggestions grounded in long-term research.

Wu Jiangxing has been developing endogenous security and mimic defense theories since 2013. On that theoretical foundation, his team built an endogenous-security architecture in its Nanjing laboratory to empower intelligent driving systems. The system covers more than 20 application scenarios and more than 100 differentiated application instances, and achieves an overall identification success rate above 90% on common AI security problems such as adversarial attacks and backdoor vulnerabilities (a simplified sketch of the majority-ruling idea behind mimic defense follows at the end of this section).

Zheng Qinghua said that history and experience have proved that every advance humans make in brain science offers reference, inspiration, and guidance for research on artificial neural networks and machine intelligence. "Today's large models draw only the most preliminary and superficial lessons from the human brain. If we can draw deeply on brain science, especially the distinctively human mechanisms of memory representation, activation, retrieval, and encoding-recall, we can hope to resolve the various inherent flaws that today's large models face." He therefore proposed that China must have its own machine intelligence model. Tongji University is currently opening up disciplinary boundaries, promoting the integration of computer science and brain science, studying the correlation between human memory and machine memory, and exploring new ways to use information science to study brain science.
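Mimic defense, as Wu Jiangxing has publicly described it, rests on dynamic heterogeneous redundancy: the same request is processed by several functionally equivalent but independently constructed executors, and a ruling mechanism compares their outputs so that a vulnerability or backdoor in any single executor is outvoted. The sketch below is a drastic simplification under that assumption, with three invented executors; it is not the architecture of the Nanjing system.

```python
# A drastically simplified sketch of the majority-ruling idea behind
# mimic defense / dynamic heterogeneous redundancy. The three executors
# are invented for illustration; real systems use independently built,
# functionally equivalent software or hardware variants.

from collections import Counter

def executor_a(x):
    # Variant 1: straightforward implementation of f(x) = x * x.
    return x * x

def executor_b(x):
    # Variant 2: same function computed along a different code path.
    return sum(x for _ in range(x)) if x >= 0 else x * x

def executor_c(x):
    # Variant 3: carries a hidden "backdoor" triggered by one input.
    return -1 if x == 42 else x * x

def mimic_ruling(x, executors=(executor_a, executor_b, executor_c)):
    """Accept the majority output; flag the case where no majority exists."""
    outputs = [e(x) for e in executors]
    winner, votes = Counter(outputs).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: executors disagree entirely")
    return winner

print(mimic_ruling(7))   # 49: all three executors agree
print(mimic_ruling(42))  # 1764: the backdoored executor is outvoted
```

The design intuition is that an attacker must compromise a majority of heterogeneous executors simultaneously and identically, which is far harder than finding one flaw in one implementation.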
Today's artificial intelligence has broken through the boundaries of traditional disciplines and is extending into almost all of them. Guo Guisheng, academician of the Singapore Academy of Engineering and professor at the Singapore University of Technology and Design, is also a member of the AI-RAN Alliance. AI-RAN stands for "Artificial Intelligence (AI) - Radio Access Network (RAN)", an industry alliance that aims to advance the integration of artificial intelligence and wireless communications and to lead technological innovation. Guo Guisheng said that a large number of AI-related and quantum computing projects are being advanced through interdisciplinary collaboration. In his view, guiding AI to do good requires not only stepping outside academic circles but also actively connecting with wisdom worldwide. He hopes that in the future more laboratories and companies from Chinese universities will join this AI "circle of friends" and establish partnerships.

The reporter learned that Singapore, known worldwide as an "artificial intelligence capital", was among the first countries to launch a national artificial intelligence strategy and has carried out much pioneering work on artificial intelligence governance. At the seminar, Zheng Qinghua also proposed that to realize the vision of "intelligence for every person, intelligence in every machine, each excelling in its own intelligence, and all intelligence shared", experts in the field of artificial intelligence from China and Singapore need to work together and contribute more to the world.