On February 3, the UK’s Information Commissioner’s Office (ICO) announced a formal investigation into Elon Musk’s AI company xAI. The probe focuses on whether its AI chatbot Grok improperly processed personal data when generating certain images.
The regulator stated the investigation follows growing public and political concern and will assess potential violations under UK data protection laws.
Sexualized AI Images of Real Individuals Trigger Controversy
The controversy began after users reportedly prompted Grok to generate sexualized images of real individuals without their consent, and the images then circulated widely on social media. The incident sparked intense debate over AI ethics, platform accountability, and personal privacy rights.
Regulators indicated such practices may involve unauthorized use of personal data, potentially breaching existing data protection regulations.
European Enforcement Actions Intensify
Earlier the same day, French authorities conducted a raid on X’s Paris office, signaling broader European efforts to tighten oversight of AI platform compliance risks.
UK regulators emphasized that if violations are confirmed, xAI could face fines of up to £17.5 million or 4% of global annual turnover, whichever is higher.
Expanding Compliance Boundaries Between AI and On-Chain Ecosystems
As AI technologies increasingly integrate with Web3, issues such as data authenticity, identity verification, and traceability of generated content are becoming key regulatory concerns.
In certain on-chain application scenarios, combining risk-monitoring tools such as Trustformer KYT with analysis of transaction and identity behavior can help establish transparent, verifiable trust frameworks across cross-platform ecosystems.
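To make the idea concrete, here is a minimal, hypothetical sketch of the kind of heuristic a KYT-style (know-your-transaction) monitor applies: scoring an address by its exposure to flagged counterparties and to unusually large transfers. All names, thresholds, and the flagged-address list are illustrative assumptions, not Trustformer's actual API or rules.

```python
from dataclasses import dataclass

@dataclass
class Transfer:
    from_addr: str
    to_addr: str
    amount: float  # value in a reference unit, e.g. USD

# Hypothetical flagged-address list (illustration only)
FLAGGED = {"0xflagged"}

def risk_score(transfers: list[Transfer], addr: str) -> int:
    """Toy KYT-style heuristic: score 0-100 based on an address's
    direct exposure to flagged counterparties and large transfers."""
    score = 0
    for t in transfers:
        if addr not in (t.from_addr, t.to_addr):
            continue  # transfer does not involve this address
        counterparty = t.to_addr if t.from_addr == addr else t.from_addr
        if counterparty in FLAGGED:
            score += 50  # direct exposure to a flagged address
        if t.amount > 10_000:
            score += 10  # large-value transfer heuristic
    return min(score, 100)
```

Real systems go much further (multi-hop taint tracing, clustering, behavioral baselines), but the core pattern is the same: rules over transaction graphs feeding a reviewable score.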
Conclusion
The investigation reflects Europe’s tightening stance on AI data compliance and platform governance. As generative AI expands its influence, platforms will be expected to take on greater responsibility for content moderation, privacy protection, and risk management.