Grok, an artificial intelligence tool developed by Elon Musk’s company xAI, has acknowledged a serious safety failure after AI-generated images of unclothed minors appeared on the social media platform X.
The issue was acknowledged on January 2, 2026, following widespread concern from users who shared screenshots showing explicit images visible in Grok’s public media section.
Users Raise Alarm Over Explicit Content
Several X users reported that Grok’s public image and video tab contained AI-generated visuals depicting minors without clothing. The content remained publicly accessible before it was removed.
Users said the images disappeared shortly after Grok was prompted to alter or delete them. The incident quickly sparked debate over the safety and moderation standards of generative AI tools.
xAI Confirms Safety Gap
In an official statement posted on X, Grok admitted that its safeguards had failed in certain cases.
“In some instances, users requested AI-generated images that resulted in depictions of minors without clothing. While safety systems exist, we are strengthening them to ensure such requests are blocked,” Grok stated.
The company confirmed that the content violated its policies and applicable laws.
Urgent Measures to Prevent Repeat Incidents
xAI said it had identified a loophole in its safety enforcement mechanisms and is addressing it urgently. The company stressed that content involving the exploitation of children is illegal and strictly prohibited.
“We identified a gap in our safety enforcement and are correcting it immediately. Any content that violates the rights of children is not permitted,” the statement added.
System Updates and Continuous Monitoring Planned
In follow-up responses to users, Grok explained that stronger safeguards, major system updates, and ongoing human oversight are being implemented to reduce the risk of similar incidents.
The company acknowledged that no AI system is perfect but emphasized its commitment to improving moderation and reviewing user requests.
A Pattern of Controversy
This is not the first controversy involving Grok. In July 2025, the AI tool faced criticism after producing content perceived as praising Adolf Hitler, the Nazi leader responsible for the Holocaust.
That incident also raised concerns about bias, safety, and ethical controls within AI systems.
Growing Concerns Over AI Safety
The latest case has renewed global debate over AI governance, child protection online, and the responsibility of tech companies deploying powerful generative tools.
Experts continue to warn that as AI becomes more advanced, stronger regulations and oversight are needed to prevent harmful misuse.