Meta’s “Made with AI” label has sparked controversy: multiple photographers complain that Meta mistakenly labeled real photos they took as AI-generated. The problem appears tied to photo editing software — even simple cropping and compression may trigger Meta’s algorithm and cause real photos to be misjudged as AI-generated content. The incident highlights the limitations of AI image detection in practice, and has raised photographers’ concerns about copyright and algorithmic fairness.
Meta is reportedly aware of the problem and is working to improve its detection so it can more accurately identify AI-generated images. For now, however, the method remains flawed and continues to cause trouble for photographers and their work. Meta says it is working with other companies to improve the process, but has not yet announced specific fixes or a timetable. The episode is another reminder that as AI technology develops, more attention must be paid to its potential ethical and social issues.
According to foreign media reports, multiple photographers complained that Meta mistakenly added the "Made with AI" label to real photos they took. Several photographers have shared examples over the past few months, most recently Meta labeling a basketball game photo taken by former White House photographer Pete Souza as AI-generated.
In another recent example, Meta mistakenly added the tag to an Instagram photo of the Kolkata Knight Riders winning the Indian Premier League cricket title. Interestingly, as with Souza's photo, the tag only appears when viewing the image on a mobile device, not on the web.

Souza said he tried and failed to get the tag removed. He speculated that using Adobe's cropping tool and compressing the image into JPEG format might trigger Meta's algorithm.
However, PetaPixel reported that Meta also incorrectly labeled real photos as "Made with AI" when photographers used generative AI tools like Adobe's Generative Fill to remove even the smallest objects. The publication ran its own test, using Photoshop's Generative Fill to remove a small blob from an image, which Meta then flagged on Instagram as AI-generated. Strangely, however, when PetaPixel re-opened the file in Photoshop, copied and pasted it into a blank document, and saved it, Meta did not add the "Made with AI" tag. Multiple photographers have expressed displeasure that such minor edits are being unfairly labeled as AI-generated.
Photographer Noah Kalina wrote on Threads: "If 'retouched' photos are all labeled 'Made with AI,' then the term is effectively meaningless. If they were serious about protecting people, they would automatically label every photo 'not a true representation.'"
Meta spokesperson Kate McLaughlin said in a statement that the company is aware of the issue and is evaluating its approach "so that our labels reflect the amount of AI used in an image." "We rely on industry-standard indicators that other companies include in their tools, so we are actively working with these companies to improve the process and align our labeling approach with our intent," McLaughlin added.
In February, Meta announced it would add a “Made with AI” tag to photos uploaded to Facebook, Instagram and Threads ahead of this year’s election season. Specifically, the company said it would tag AI photos generated using tools from companies such as Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock.
Meta hasn't disclosed exactly what triggers the "Made with AI" label, but all of these companies have added, or are in the process of adding, metadata to image files to indicate the use of AI tools, and that metadata is how Meta identifies AI-generated photos. Adobe, for example, launched its Content Credentials system last year, which records the provenance of content in metadata.
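The metadata-based approach described above can be illustrated with a minimal sketch. This is an assumption-laden toy, not Meta's actual pipeline: it simply scans a file's raw bytes for markers that provenance systems are known to embed, such as the IPTC `DigitalSourceType` value `trainedAlgorithmicMedia` (which denotes AI-generated media) and the `c2pa` label used by Content Credentials manifests. A real detector would parse the metadata structures properly rather than byte-scan.

```python
# Naive illustration (NOT a real C2PA/XMP parser): scan a file's raw
# bytes for metadata markers that AI-provenance systems embed.
AI_PROVENANCE_MARKERS = (
    b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType for AI-generated media
    b"c2pa",                     # Content Credentials (C2PA) manifest label
)

def has_ai_provenance_marker(data: bytes) -> bool:
    """Return True if any known AI-provenance marker appears in the bytes."""
    return any(marker in data for marker in AI_PROVENANCE_MARKERS)

# A hypothetical JPEG whose XMP metadata records an AI edit would match:
edited = b"\xff\xd8...<xmp>trainedAlgorithmicMedia</xmp>...\xff\xd9"
plain = b"\xff\xd8\xff\xe0 ordinary photo bytes \xff\xd9"
print(has_ai_provenance_marker(edited), has_ai_provenance_marker(plain))
```

This sketch also suggests why PetaPixel's copy-and-paste test made the label disappear: re-saving an image through a workflow that strips metadata leaves nothing for such a scan to find, even though the pixels are unchanged.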
Highlights:
- Multiple photographers complained that Meta incorrectly labeled real photos as “Made with AI.”
- Photos edited with ordinary tools also appear to be affected.
- Meta also incorrectly labeled real photos as “Made with AI” when photographers used generative AI tools for minor edits.
Meta’s response to this issue suggests it is working to improve its algorithms and reduce false positives. But the episode also exposes the limitations of AI image-detection technology and its potential negative effects in real-world use. How to balance technological development with the protection of users’ rights remains a problem that demands continued attention.