Human Rights Watch (HRW) recently released a report revealing that the training data set of a popular AI image generator contains a large number of personal photos of Brazilian children used without authorization. The photos were scraped from personal blogs and videos and include children’s names and location information, which can easily be traced and abused, raising serious concerns about children’s privacy and safety. The report notes that the spread of AI face-swapping (deepfake) technology has increased the risk of children’s likenesses being used to generate pornographic content, exposing children to bullying and phishing and potentially causing lasting psychological trauma. HRW calls on governments and technology companies to take responsibility and develop protective measures, and recommends strengthening children’s data protection at the legal level.
The report found that the LAION-5B data set contains at least 170 photos of children from 10 Brazilian states, spanning ages from infancy to adolescence. Although the LAION organization has taken steps to remove the relevant links, the report argues that this figure likely understates the actual scale of the problem. HRW stresses that it is unfair to shift the responsibility for protecting children’s privacy onto parents; governments and technology companies should take responsibility proactively, enact laws and regulations that clearly prohibit the use of children’s personal data to train AI systems without permission, and provide avenues of redress for child victims. Countries around the world should also strengthen children’s data protection to prevent AI abuse and the generation of inappropriate content. The incident once again highlights the ethical challenges and data security issues in the development of artificial intelligence technology, which will require joint efforts from all parties to resolve.