
X acknowledges mistakes on Grok AI, assures compliance with Indian laws
Microblogging platform X has acknowledged lapses in handling obscene content generated through its AI chatbot Grok and assured the Indian government of compliance with domestic laws, following a stern warning from the Ministry of Electronics and Information Technology. Government sources said X has blocked around 3,500 pieces of content and deleted over 600 accounts linked to the misuse of Grok for generating sexually explicit imagery.
The action came after the ministry found X’s initial response to official notices inadequate, particularly on details of content takedowns and preventive safeguards. After a follow-up warning issued on January 2, the platform committed to ensuring that obscene imagery will not be allowed going forward. X’s Safety team said it takes action against illegal content, including Child Sexual Abuse Material (CSAM), through removals, permanent account suspensions, and cooperation with law-enforcement agencies. The platform also warned that any user prompting Grok to generate illegal content would face the same consequences as uploading such material directly.
While the corrective measures are notable, the episode has exposed deeper concerns around AI safety, platform accountability, and enforcement timing. Grok, integrated directly into X’s public feed, enabled users to manipulate images, initially for benign edits but increasingly for non-consensual sexualised content, including deepfakes involving women and minors. Unlike private AI tools, Grok’s outputs appeared publicly and instantaneously, amplifying harm at scale.
X and its owner Elon Musk have maintained that responsibility lies with users misusing the tool. Musk publicly stated that illegal AI-generated content will be treated no differently from illegal uploads, reinforcing a post-facto enforcement approach. Critics argue this underscores a failure of preventive guardrails, allowing harm to occur before moderation intervenes.
Internationally, governments are signalling tougher responses. In France, courts recently convicted individuals for cyber-harassment of First Lady Brigitte Macron, underlining that digitally manufactured abuse carries real-world legal consequences.
India’s intervention highlights a central challenge of the AI era: innovation without enforceable safeguards risks magnifying harm faster than platforms can contain it. X’s compliance marks an important step, but sustained transparency, proactive moderation, and credible deterrence will determine whether such abuses are structurally prevented rather than temporarily suppressed.
