The UK’s online safety regulator Ofcom has opened a formal investigation into X (formerly Twitter) following reports that the platform’s Grok AI chatbot was used to create and share “undressed” images of people, as well as sexualised imagery involving children. Such content may fall under non-consensual intimate image abuse, pornography, or child sexual abuse material (CSAM) under UK law.
Ofcom said it first contacted X on January 5 and set a deadline of January 9 for the company to explain what steps it had taken to meet its UK duties. After receiving a response and conducting an expedited review, Ofcom moved to a formal probe under the Online Safety Act 2023.
The investigation will examine whether X has complied with obligations including: conducting suitable and updated illegal-content and children’s risk assessments, taking proportionate steps to prevent users from encountering “priority” illegal content (including intimate image abuse and CSAM), removing illegal content swiftly when aware of it, and using highly effective age assurance to stop children accessing pornography.
Ofcom noted that it also received a response from xAI and is assessing whether there may be compliance issues linked to xAI’s provision of Grok that would warrant a separate investigation.
UK technology secretary Liz Kendall welcomed the probe, calling the recent Grok-linked content “deeply disturbing” and urging a swift conclusion.
If Ofcom finds breaches, it can require remedial steps and impose fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is higher. In the most serious cases, it can seek court-ordered measures that could restrict access to the service in the UK.