Meta recently sparked controversy when its AI image recognition system mistakenly labeled real photos as AI-generated. Multiple photographers, including former White House photographer Pete Souza, have reported their work being incorrectly tagged, raising questions about the accuracy of Meta's detection technology, exposing possible flaws in the system, and prompting concerns about the rights of creators. The incident spread quickly on social media and attracted widespread attention.
The complaints center on Meta's "Made with AI" label: photographers say the tag has been applied to genuine, camera-captured photos, casting doubt on the reliability of the company's AI recognition system.
The problem was first flagged by former White House photographer Pete Souza, whose photo of a basketball game was mistakenly labeled as AI-generated by Meta. More photographers subsequently reported similar cases, including a photo of the Indian Premier League champions being mistagged. Curiously, the erroneous labels reportedly appear only in the mobile apps, not on the web.
More worrying still, even extremely minor edits can trigger Meta's AI tag. PetaPixel reports that using Adobe Photoshop's Generative Fill tool to remove a small blemish from an image was enough for Meta to flag the photo as AI-generated. This sparked an outcry from photographers, who argue that such slight retouching should not cause a photo to be labeled as AI-generated.

Photographer Noah Kalina weighed in on Threads: "If a 'retouched' photo is 'made with AI,' then the term effectively loses its meaning." He even suggested that if Meta really wanted to protect users, it might as well label every photo as "not a true representation."
In response to the controversy, Meta spokesperson Kate McLaughlin acknowledged that the company is aware of the issue and is evaluating its labeling approach so that it more accurately reflects the extent to which AI was used in an image. Meta said it relies on industry-standard indicators and is working with other companies to improve the process.
The controversy stems from a plan Meta announced in February to apply a "Made with AI" label to photos generated with specific AI tools ahead of the election season. Meta has not disclosed the exact triggering mechanism, but the prevailing industry view is that it is tied to metadata embedded in the image file.
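If that industry guess is right, the relevant signal would live in provenance fields that editing software writes into a file, such as the IPTC "DigitalSourceType" values used to mark AI-involved imagery. The following sketch, which assumes the exiftool command-line utility is installed and reflects no confirmed detail of Meta's actual pipeline, shows how such fields can be inspected for a given photo:

```python
#!/usr/bin/env python3
"""Rough sketch: inspect metadata fields that AI-content labels are widely
believed to key off. Assumes the exiftool CLI is available on PATH; the
fields and values checked are an assumption based on public reporting,
not Meta's confirmed trigger list."""

import json
import subprocess
import sys

# IPTC digital-source-type values associated with AI involvement.
# Whether Meta keys off these exact values is not publicly known.
SUSPECT_SOURCE_TYPES = {
    "trainedAlgorithmicMedia",               # fully AI-generated image
    "compositeWithTrainedAlgorithmicMedia",  # image edited with generative tools
}

def inspect(path: str) -> None:
    # Dump all XMP metadata for the file as JSON via exiftool.
    result = subprocess.run(
        ["exiftool", "-json", "-XMP:all", path],
        capture_output=True, text=True, check=True,
    )
    tags = json.loads(result.stdout)[0]

    source_type = tags.get("DigitalSourceType", "")
    flagged = any(s in source_type for s in SUSPECT_SOURCE_TYPES)

    print(f"{path}:")
    print(f"  DigitalSourceType: {source_type or '(not set)'}")
    print(f"  likely to be treated as AI-touched: {flagged}")

if __name__ == "__main__":
    for p in sys.argv[1:]:
        inspect(p)
```

If an editing feature such as Generative Fill writes a value like "compositeWithTrainedAlgorithmicMedia" even for a tiny cleanup, a purely metadata-driven check would flag the entire photo, which would be consistent with the behavior photographers are reporting.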
As AI technology becomes widespread in image processing, accurately identifying and labeling AI-generated content has become a thorny problem. Meta's controversy not only reflects the limitations of current AI recognition technology but has also prompted deeper reflection on the authenticity of digital content and the rights of creators. As the dispute continues, the industry expects Meta to improve its labeling system promptly and to strike a balance between protecting users and respecting creators.
Meta needs to confront this incident and seriously reconsider the accuracy of its AI recognition system and the fairness of its algorithm. Only by continuing to improve the technology and communicating actively with photographers and other content creators can it resolve the problem and maintain a healthy online ecosystem.