Authorities probe Grok over sexualized deepfakes 

France and Malaysia have launched official investigations into Grok, an artificial intelligence chatbot developed by Elon Musk’s AI company xAI. The action comes after reports revealed that the chatbot was used to create sexualized deepfake images of women and minors. The two countries join India, which had earlier taken formal action against misuse of the AI tool.

Grok is integrated into Musk’s social media platform X (formerly Twitter) and allows users to generate text and images from prompts. Earlier this week, an apology posted from Grok’s official account admitted that on December 28, 2025, the chatbot generated and shared an AI-created image of two young girls, believed to be between 12 and 16 years old, depicted in sexualized clothing.

The apology stated that the incident violated ethical standards and may have broken US laws related to child sexual abuse material. It also admitted that the situation happened due to a failure in the platform’s safety safeguards. The statement said that xAI is reviewing its systems and policies to prevent similar incidents in the future. 

However, the apology quickly faced criticism. Observers questioned who was actually taking responsibility, since Grok itself is not a person. Journalist Albert Burneko noted that Grok cannot be held accountable in any real way and described the apology as empty and lacking substance. According to critics, responsibility should rest with the company and platform that allow such content to be generated and shared.

Further investigations by media outlets revealed that the problem goes beyond a single incident. Reports found that Grok has also been used to generate non-consensual pornographic images, including images depicting women being sexually assaulted and abused. These findings have raised serious concerns about how easily AI tools can be misused to create harmful and illegal content. 

Elon Musk addressed the issue by warning users that anyone who uses Grok to create illegal content will face the same legal consequences as someone who uploads illegal material directly. Despite this statement, regulators and governments across different countries have said stronger preventive measures are needed. 

India was one of the first countries to take formal action. The country’s IT ministry issued an order directing X to immediately restrict Grok from generating content that is obscene, pornographic, vulgar, sexually explicit, pedophilic, or otherwise illegal under Indian law. The order warned that X must respond within 72 hours or risk losing its “safe harbor” protection, a legal safeguard that protects online platforms from liability for user-generated content. 

France has also taken serious steps. The Paris prosecutor’s office confirmed that it is investigating the spread of sexually explicit deepfake images on X. France’s digital affairs office said that three government ministers had officially reported clearly illegal content. These reports were sent both to the prosecutor’s office and to a government online monitoring platform to ensure the content is removed immediately. 

Malaysia has expressed similar concerns. The Malaysian Communications and Multimedia Commission released a statement saying it has received public complaints about the misuse of AI tools on X. The commission said it is deeply concerned about the digital manipulation of images involving women and minors, describing such content as indecent, offensive, and harmful. It added that investigations are currently underway to assess the online harms linked to the platform. 

The growing international response highlights increasing global concern about the misuse of artificial intelligence, especially when it involves deepfake technology and vulnerable groups such as women and children. Governments and regulators are now questioning whether existing laws and platform safeguards are strong enough to deal with the risks posed by powerful AI tools. 

As investigations continue in multiple countries, the Grok controversy has become a major example of the urgent need for stronger AI regulations, clearer accountability, and better safety systems to prevent serious abuse in the digital age. 
