According to a recent Gartner research report, the widespread use of generative artificial intelligence (GenAI) will bring significant data security risks: Gartner estimates that by 2027, more than 40% of AI-related data breaches will stem directly from the misuse of GenAI. The forecast underscores the serious challenges enterprises face in adopting GenAI, particularly around data governance and security controls.
As GenAI is rapidly adopted, the challenges organizations face around data localization have become especially prominent. Because these technologies demand centralized computing power, the risk of unintended cross-border data flows has risen sharply. Joerg Fritsch, VP Analyst at Gartner, stressed that many companies integrate GenAI tools without adequate oversight, which can result in sensitive data being inadvertently transmitted to unknown locations and create serious security risks.

The lack of unified data governance standards worldwide is another key challenge. This absence of standardization has fragmented the market, forcing companies to develop region-specific strategies, which not only raises operating costs but also limits their ability to use AI products and services effectively on a global scale. "The complexity of managing data flows, and the quality-maintenance issues created by localized AI policies, can lead to a significant decline in operational efficiency," Fritsch noted.
To address these risks, companies need to make strategic investments in AI governance and security. Gartner predicts that by 2027, AI governance will be broadly required worldwide, particularly within the framework of sovereign AI laws and regulations. Organizations that fail to adopt the necessary governance models in time will face competitive disadvantages and may lose market opportunities.
To reduce the risk of AI data breaches, Gartner recommends that enterprises adopt a multi-pronged strategy: first, strengthen data governance, ensuring compliance with international regulations and monitoring for unintended cross-border data transfers; second, establish a dedicated governance committee to improve transparency and oversight of AI deployment and data processing; and finally, adopt data security technologies such as encryption and anonymization to protect sensitive information.
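As a rough illustration of the anonymization step (not part of Gartner's guidance), the following Python sketch redacts common PII patterns from a prompt before it leaves the organization's boundary. The regex patterns and placeholder labels are illustrative assumptions; a real deployment would rely on a vetted PII-detection or DLP tool rather than ad hoc regexes.

```python
import re

# Illustrative PII patterns (assumption: a production system would use a
# dedicated PII-detection library or DLP service instead of these regexes).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders before the text
    is forwarded to an external GenAI endpoint."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}_REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize the complaint from jane.doe@example.com, phone +1 415 555 0100."
    safe_prompt = redact(prompt)
    print(safe_prompt)
    # Only the redacted prompt would be sent to the external model.
```

The design point is simply that redaction happens inside the organization's own boundary, so sensitive fields never reach the model provider, regardless of where that provider processes the request.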
In addition, companies should invest in AI trust, risk and security management (TRiSM) products. These span AI governance, data security governance, prompt filtering and redaction, and synthetic generation of unstructured data. Gartner predicts that by 2026, enterprises applying AI TRiSM controls will reduce their exposure to inaccurate or illegitimate information by at least 50%, significantly lowering the risk of faulty decisions.
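To make the "prompt filtering" control concrete, here is a minimal sketch of a policy gate that blocks prompts containing disallowed terms before they reach a model. The BLOCKED_TERMS list and the PromptRejected exception are hypothetical examples, not features of any specific TRiSM product; a real filter would combine classifiers, allow/deny lists and the redaction step shown above.

```python
class PromptRejected(Exception):
    """Raised when a prompt violates the organization's usage policy."""

# Hypothetical policy terms; real deployments would maintain these centrally.
BLOCKED_TERMS = {"internal only", "customer ssn", "source code dump"}

def filter_prompt(prompt: str) -> str:
    """Return the prompt unchanged if it passes policy, otherwise fail closed."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            raise PromptRejected(f"Prompt blocked: contains policy term '{term}'")
    return prompt

# Usage: wrap every outbound call to the GenAI provider with the filter,
# so policy violations are stopped before any data leaves the organization.
try:
    safe = filter_prompt("Please summarize this internal only incident report.")
except PromptRejected as err:
    print(err)
```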
Key points include: more than 40% of AI data breaches will be triggered by misuse of generative AI; companies must strengthen data governance to ensure compliance and security; and investing in AI trust, risk and security management products can significantly reduce the generation of misinformation. Together, these recommendations give businesses clear guidance for dealing with the challenges posed by GenAI.