British Prime Minister Keir Starmer on Tuesday condemned the creation and spread of fake explicit images on Elon Musk’s social media platform X, calling the practice “disgusting” and “shameful” and warning that the government would not hesitate to take action if the company failed to respond.
Starmer’s comments followed reports that Grok, an artificial intelligence chatbot developed by Musk-owned xAI and integrated into X, had been misused to generate sexually explicit images depicting real individuals without their consent. Victims reportedly include women and minors.
“What is happening here is appalling and shameful,” Starmer said on January 14. “This kind of abuse has no place in our society, and we will not compromise on protecting people from it.”
Grok, launched by xAI as a conversational AI tool embedded within the X platform, has drawn criticism from safety advocates and regulators who say generative AI systems are increasingly being exploited to produce deepfake sexual content, which can be difficult to detect, remove, or trace.
Campaigners say the rapid spread of such material can cause serious harm to victims, including reputational damage, psychological distress, and in some cases long-term social and professional consequences.
Starmer said that if X failed to take sufficient steps to prevent the creation and distribution of such content, the British government would “fully support” action by the UK’s communications regulator, Ofcom.
Under Britain’s Online Safety Act, platforms are required to take active measures to prevent illegal and harmful content — including non-consensual sexual imagery — and can face substantial fines if they fail to comply.
“Companies cannot turn a blind eye to how their technologies are being abused,” Starmer said. “If platforms do not act responsibly, regulators will.”
X and xAI have not publicly commented on the specific allegations referenced by Starmer.
The controversy highlights growing global concerns over the misuse of generative AI tools, particularly in relation to deepfake pornography. Several countries are reviewing whether existing laws are sufficient to address AI-generated abuse, while technology companies face increasing pressure to build safeguards into their systems.
Advocates argue that platforms should implement stronger content moderation, watermarking, and identity verification systems to prevent misuse, while ensuring rapid takedown mechanisms for victims.
The British government has said it is monitoring the impact of generative AI closely and will consider further regulatory measures if necessary.
“We want innovation,” Starmer said, “but it must be safe, responsible and accountable.”