On the eve of this year's National People's Congress and Chinese People's Political Consultative Conference (CPPCC), Zhou Hongyi, a member of the CPPCC National Committee and founder of 360 Group, shared his views on the DeepSeek large model and AI security. He stressed that a correct understanding of AI security is crucial: its risks should be neither exaggerated nor ignored. His remarks offer a fresh perspective on the current security debate in the AI field.
Zhou Hongyi pointed out that discussion of AI security is currently prone to exaggeration. He singled out the leading American AI companies, OpenAI among them, arguing that they defend their monopolies and closed-source strategies by overstating AI's dangers, thereby pushing governments to tighten regulation and blocking latecomers from catching up. Discussing AI security on these terms, he said, amounts to arguing in bad faith, and he stressed that "not developing is the greatest insecurity." In his view, seizing the opportunities of the AI industrial revolution, raising productivity, and making the technology broadly accessible are the most urgent tasks at present.

Zhou Hongyi also offered a distinctive take on the issue of AI "hallucination." In his view, hallucination is not purely a safety hazard but a reflection of a large model's intelligence and creativity: a model that never hallucinates lacks imagination, and hallucination is an important sign that AI exhibits human-like intelligence. This view suggests a new direction for thinking about AI development.
Taking DeepSeek as an example, he noted that its hallucinations are pronounced, and that users can sense human-like creativity when using it. Zhou Hongyi argued that AI security and industry development can advance together: specific problems such as hallucination should be broken down into solvable technical challenges rather than lumped together as security risks. He called for a rational view of AI's characteristics and for targeted solutions that promote technological progress, offering useful guidance for research and practice in the field.