Elon Musk’s AI chatbot Grok saw its U.S. market share jump from 1.9% to 17.8% in twelve months, making it the third most popular AI chatbot behind ChatGPT and Google Gemini. The catalyst? A controversy that was supposed to destroy it.
In late December 2025, users on X discovered they could tag @Grok on any photo and request AI-altered images showing people in revealing clothing. The practice went viral — Reuters tracked over a hundred requests in a single ten-minute window. Real people, including a Brazilian musician named Julie Yukari, found AI-generated near-nude images of themselves circulating without consent. More disturbingly, Reuters documented cases where Grok generated sexualized images of apparent minors.
The global response was swift. Indonesia and Malaysia blocked Grok entirely. France reported X to prosecutors. India demanded answers. xAI reacted by restricting image generation to paying subscribers. Musk himself? He posted laugh-cry emojis under AI-edited celebrity photos.
But beneath the outrage lies a more nuanced debate. xAI generated 6 billion images in a 30-day span, roughly six times the output of Google's image tools. The overwhelming majority were mundane creative content; the controversial images were a fraction of a fraction, yet they dominated the headlines.
The episode draws parallels to previous moral panics: comic books in the 1950s, rock music, Dungeons & Dragons, and violent video games, each predicted to corrupt society and each later undercut by research. The American Psychological Association and the Supreme Court both concluded that the evidence does not support a causal link between violent games and violent behavior.
Where should the line be? Non-consensual images of real, identifiable people are clearly wrong and increasingly illegal (the U.S. passed the Take It Down Act). Content involving minors, real or fictional, is a hard line. But AI-generated images of fictional adults occupy a gray area that society is still grappling with.
The deeper question is about control: who decides what AI can create? A handful of companies are currently making those calls, with wildly different standards. The market is voting with its feet: ChatGPT's share dropped from 80.9% to 52.9% as less restrictive alternatives grew. The tension between safety and freedom of expression in AI generation is one of the defining debates of this era.