UK steps up action on AI deepfakes as Ofcom probes X over Grok images
Pressure mounts on UK regulators after reports that Grok generated non-consensual sexualised images
Ofcom is investigating X under the Online Safety Act after reports that its Grok AI tool generated non-consensual sexualised images.
The move tests how UK online safety rules apply to generative AI, with Ofcom warning it can impose major fines or seek court-ordered disruption.
Separately, the government has confirmed plans to ban AI "nudification" tools, prompting questions over whether multi-purpose systems like Grok will be covered.

Ofcom has opened a formal enforcement investigation into X under the Online Safety Act to assess whether it has complied with its duty to protect users in the UK from illegal content.
The regulator said it moved "extremely quickly" after reports that Grok, the AI tool developed by xAI and integrated into X, was used to generate non-consensual sexualised images of women and children.
Earlier this month, Ofcom made urgent contact with both X and xAI and set a deadline for the companies to explain what steps they had taken to meet their legal obligations under the Act.
The investigation follows parliamentary pressure after Dame Chi Onwurah, Chair of the Science, Innovation and Technology Committee, wrote on 9 January to Ofcom and the Department for Science, Innovation and Technology (DSIT) seeking clarification on regulatory powers and next steps.
Melanie Dawes, Chief Executive of Ofcom, wrote, "We are conducting this investigation as a priority. In doing so we must follow the processes required under UK law and set out in our Online Safety Enforcement Guidance. Making sure that these processes are followed ensures that our investigations are carried out fairly and that our decisions are legally sound - meaning that any findings against the company are procedurally robust, and that they stick."
Under the Online Safety Act, which became law in October 2023, Ofcom can investigate whether platforms have systems and processes in place to prevent users in the UK from encountering illegal content, including non-consensual intimate images.
The regulator cannot assess or order the removal of individual posts, but where it finds compliance failures it can impose fines of up to £18m or 10% of a company's qualifying global revenue, whichever is higher.
In the most serious cases, it can also seek court orders requiring third parties, such as internet service providers or payment firms, to withdraw services or block access in the UK.
Alongside the investigation, the government has confirmed plans to legislate against AI "nudification" tools, with the Secretary of State for Science, Innovation and Technology, Liz Kendall, saying amendments to the Crime and Policing Bill would be brought forward as a priority.
The proposed ban would apply only to tools with the sole purpose of generating fake nude images. Dame Chi Onwurah said it was "unclear whether this ban - which appears to be limited to apps that have the sole function of generating nude images - will cover multi-purpose tools like Grok".
While Liz Kendall has said the Online Safety Act already provides significant protections against AI-related harms, Dame Chi Onwurah said the case showed "how UK citizens have been left exposed to online harms while social media companies operate with apparent impunity".
The case also raises broader questions about how responsibility for AI-generated harm should be shared between developers, platforms and users.
Developers already make deliberate design choices about what generative AI systems are allowed to do: some platforms block categories such as identity recognition or certain sexualised image transformations, while others apply fewer or looser constraints.
The Grok reports illustrate how those design choices can materially affect the risk of harm when similar underlying AI capabilities are deployed with different safeguards.
At the same time, the government has emphasised personal liability. Liz Kendall said the government would bring into force powers "as a matter of urgency" to criminalise the creation of non-consensual intimate images, adding that people should be "under no illusion" that creating or sharing such material is a criminal offence and that law enforcement will take action.
Between developers and individual users sits the platform layer, where the Online Safety Act places duties on services to reduce the risk of illegal content appearing, rather than banning specific AI capabilities outright.
The Grok case brings these layers into tension. Where harmful outputs depend on how an AI system is prompted, responsibility is divided between user behaviour, developer safeguards and platform enforcement.
Ofcom's investigation will test whether those responsibilities can continue to be managed through platform processes alone, or whether regulation needs to set clearer limits on what AI systems are allowed to generate, and who is accountable when those limits are breached.