Elon Musk and former X CEO summoned to April hearings; separate UK probe examines AI-generated sexualized content
French prosecutors raided the Paris offices of Elon Musk’s social media platform X on Tuesday as part of an expanding criminal investigation into multiple alleged offenses including unlawful data extraction and complicity in the possession of child pornography.
The Paris prosecutor’s cyber-crime unit conducted the search as the investigation broadens beyond its original scope. Both Musk and former X chief executive Linda Yaccarino have been summoned to appear at hearings in April, according to the prosecutor’s office.
In a separate development announced the same day, the UK’s Information Commissioner’s Office (ICO) launched its own investigation into Musk’s AI chatbot Grok over concerns about its “potential to produce harmful sexualised image and video content.”
X has not responded to requests for comment on either investigation. The company has previously characterized the French probe as politically motivated and an attack on free speech.
Investigation Timeline and Scope
The French investigation began in January 2025 when prosecutors started examining content recommended by X’s algorithm. The probe expanded in July 2025 to include Grok, Musk’s AI chatbot that sparked controversy over its content generation capabilities.
At the time of that expansion, X posted a statement calling the action “politically-motivated” and denying allegations it had manipulated its algorithm.
Following Tuesday’s raid, French prosecutors outlined an expanded list of potential criminal violations under investigation:
- Complicity in possession or organized distribution of images of children of a pornographic nature
- Infringement of people’s image rights with sexual deepfakes
- Fraudulent data extraction by an organized group
The prosecutor’s office did not provide specific details about evidence gathered during the raid or the timeline for completing the investigation.
UK Regulatory Action
UK authorities provided an update Tuesday on ongoing investigations into sexual deepfakes created by Grok and shared on X. The images, often made using real photographs of women without their consent, generated significant criticism in January from victims, online safety campaigners, and politicians.
X eventually intervened to restrict the practice after Ofcom and other regulatory bodies launched investigations.
In its Tuesday update, Ofcom said it continues to investigate the platform and is treating the matter as urgent. However, the communications regulator noted it currently lacks sufficient legal authority to investigate the creation of illegal images by chatbots directly.
The ICO's announcement followed shortly after; the data regulator said it would launch its own probe, in conjunction with Ofcom, into how Grok processes personal data.
“The reports about Grok raise deeply troubling questions about how people’s personal data has been used to generate intimate or sexualised images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this,” said William Malcolm, the ICO’s executive director for regulatory risk and innovation.
The European Commission announced its own investigation into xAI, the company that develops Grok, in late January over concerns about the generated images. A Commission spokesperson confirmed Tuesday that EU authorities are coordinating with French officials regarding the Paris office search.
Regulatory Jurisdiction Questions
The dual UK investigations highlight jurisdictional complexity in regulating AI tools and social media platforms. Ofcom has authority over content distributed on social media platforms but indicated its powers do not extend adequately to investigating chatbot functionality directly.
The ICO’s data protection mandate provides a separate legal basis for examining how Grok processes personal information to generate images. The joint investigation represents an attempt to address gaps in existing regulatory frameworks.
Under UK data protection law, processing personal data without consent or another lawful basis can constitute a violation, particularly where the processing causes harm or infringes individual rights. Creating sexualized images from someone's likeness without permission potentially falls under multiple categories of harmful processing.
The ICO's enforcement powers include issuing fines of up to £17.5 million or 4% of global annual turnover, whichever is higher, for serious data protection violations.
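To make the "whichever is higher" mechanic concrete, here is a minimal sketch; the £17.5 million floor and 4% rate come from the statutory maximum cited above, while the turnover figure is hypothetical.

```python
def max_ico_fine(global_annual_turnover_gbp: float) -> float:
    """Ceiling for serious UK data protection violations: the greater
    of a fixed £17.5m floor or 4% of global annual turnover."""
    floor_gbp = 17_500_000       # fixed statutory floor
    turnover_rate = 0.04         # 4% of global annual turnover
    return max(floor_gbp, turnover_rate * global_annual_turnover_gbp)

# Hypothetical firm with £2bn global turnover: 4% is £80m,
# which exceeds the £17.5m floor, so £80m is the ceiling.
print(f"£{max_ico_fine(2_000_000_000):,.0f}")  # £80,000,000
```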
Platform Response and Free Speech Claims
X has consistently framed regulatory investigations as attacks on free expression rather than legitimate oversight of potentially illegal activity.
This framing received support Tuesday from Pavel Durov, founder of messaging app Telegram, who criticized French authorities in a post on X. Durov accused France of being “the only country in the world that is criminally persecuting all social networks that give people some degree of freedom.”
“Don’t be mistaken: this is not a free country,” Durov added.
Durov was arrested and detained in France in August 2024 over what prosecutors alleged were moderation failures that allowed criminal activity to flourish on Telegram. He was permitted to leave the country in March 2025 after Telegram implemented platform changes, including sharing some user data with authorities in response to legal requests.
The parallel between Durov’s case and the X investigation is notable. Both involve prosecutors examining whether platform operators bear legal responsibility for illegal content created or distributed using their services.
Legal Framework for Platform Liability
European legal frameworks increasingly hold platform operators accountable for content and activity occurring on their services, particularly when platforms are alleged to have insufficient safeguards or to actively facilitate illegal activity.
The potential offenses under investigation in the French case suggest prosecutors are examining whether X's design choices, algorithmic recommendations, or insufficient content moderation amount to complicity in illegal activity rather than mere hosting of user-generated content.
The distinction matters legally. Platforms generally receive liability protection for user-generated content under intermediary liability frameworks. That protection diminishes or disappears when prosecutors can demonstrate the platform actively facilitated illegal activity through design choices, algorithmic amplification, or inadequate response to known violations.
The fraudulent data extraction allegation suggests investigators are also examining whether X’s data collection practices comply with European data protection requirements, including obtaining proper consent and implementing required safeguards.
Broader Regulatory Context
The investigations represent part of a broader regulatory shift in how authorities approach large tech platforms, particularly those controlled by individuals with significant political influence and public profiles.
Musk’s ownership of X has been accompanied by significant changes to content moderation policies, verification systems, and platform governance. These changes have attracted regulatory scrutiny across multiple jurisdictions.
The AI-generated content issue extends beyond X and Grok specifically. Multiple AI image generation tools have faced criticism and investigation over their capacity to create non-consensual intimate images, deepfakes impersonating real individuals, and other potentially harmful content.
Regulatory approaches vary by jurisdiction. Some authorities focus on the platforms distributing such content; others examine the AI tools creating it. The UK's split approach, with Ofcom investigating distribution and the ICO investigating data processing, reflects ongoing uncertainty about the optimal regulatory framework.
Public Reaction: Action Over Announcement
The French raid generated significant public commentary, much of it contrasting France's enforcement style with perceived inaction in other jurisdictions. Social media responses characterized France's approach as executing enforcement actions without advance warnings or protracted public debate about potential consequences.
The reaction reflected broader frustration with regulators that emphasize statements of concern, threatened investigations, and prolonged deliberation over swift enforcement. Many commenters compared the French action favorably with what they described as slower, more hesitant approaches in the United States and United Kingdom, where investigations often pass through extended public phases of preliminary announcements and procedural notices before concrete action occurs.
The contrast highlights different regulatory philosophies. French authorities conducted their investigation, executed a search, and summoned witnesses with minimal advance public notice. UK and EU authorities, by comparison, have issued multiple public statements about ongoing investigations while enforcement actions remain pending.
This difference reflects both legal system variations and political culture around enforcement. French criminal procedure allows prosecutors broader investigative authority with less requirement for public process during investigation phases. Common law jurisdictions like the UK typically involve more procedural steps and public notification before similar enforcement actions.
The public response also revealed an appetite for accountability enforcement against large tech platforms and their executives, particularly those who have accumulated significant political influence. Comments referenced France's revolutionary history and willingness to challenge powerful entities, framing the enforcement action as a continuation of that tradition.
Several responses expressed a desire for similar enforcement approaches in their own jurisdictions, suggesting the French action set a precedent other countries should follow. The commentary revealed public sentiment that regulatory hesitation enables continued harm while platforms operate without meaningful accountability.
Whether the raid produces criminal charges remains uncertain. Public reaction, however, demonstrated significant support for enforcement approaches that prioritize action over extended deliberation, particularly when addressing platforms and tools accused of facilitating serious harms including child exploitation and non-consensual intimate imagery.
Next Steps
The April hearings in France will give Musk and Yaccarino an opportunity to respond to prosecutors' questions. French criminal procedure allows such questioning during the investigative phase, before any formal charges are filed.
The UK investigations remain at an early stage. Neither Ofcom nor the ICO has provided a specific timeline for completing its probe or issuing findings.
X faces concurrent investigations across multiple jurisdictions examining different aspects of platform operations and AI tool functionality. The company has not indicated whether it plans substantive changes to Grok’s operation or X’s content policies in response to the regulatory scrutiny.
The cases will likely influence broader policy debates about platform accountability, AI safety requirements, and the boundaries of intermediary liability protection in cases involving algorithmic content generation and recommendation.