Google has recently made major adjustments to its AI principles, dropping its previous commitment not to develop AI for weapons or surveillance, a move that has caused widespread controversy. The change appeared on Google's updated public AI principles page, which deleted the section titled "applications we will not pursue," prompting the media and the public to question the direction of its AI applications. Coming after Google signed cloud service contracts with the U.S. and Israeli militaries, the move has deepened concerns about the role of technology companies in the military and surveillance fields.
Specifically, Google quietly removed from its official website a pledge not to develop artificial intelligence (AI) for weapons or surveillance. The change was first reported by Bloomberg and has drawn widespread attention and discussion. In updating its public AI principles page, Google deleted the section called "applications we will not pursue," which was still visible as recently as last week.
In response to media inquiries, Google pointed TechCrunch to a newly published blog post on "Responsible AI." In that post, Google said: "We believe that companies, governments and organizations should work together to create AI that protects people, promotes global growth and supports national security."

Google's updated AI principles emphasize that the company will work to "mitigate unintended or harmful outcomes and avoid unfair bias," and to ensure that its development aligns with "widely accepted principles of international law and human rights." This suggests that Google's position on AI applications may be shifting.
In recent years, Google's cloud service contracts with the U.S. and Israeli militaries have sparked protests from its own employees. Although Google has repeatedly insisted that its AI technology is not used to harm people, the AI head of the U.S. Department of Defense recently told TechCrunch that some companies' AI models have in fact accelerated the U.S. military's combat decision-making process. That remark has fueled doubts about how Google's AI will be applied in the future.
Google's change has prompted reflection on the role of technology companies in the military and surveillance fields. Against the backdrop of growing attention to the ethics of artificial intelligence, this new move by Google is especially notable. The public and Google's employees are watching closely to see how the company maintains its reputation for responsibility in this sensitive area, and how it balances commercial interests against moral responsibility.
As AI technology continues to advance and spread, Google's shift in position may have a profound impact on the entire industry, and it will prompt a re-examination of technology ethics and corporate responsibility going forward.
Google's move has sparked broad discussion of technology companies' social responsibility and AI ethics. Its future direction, and the impact on the AI industry, deserve close attention. The need to formulate and enforce ethical norms for the development and application of AI technology has also become more urgent.