Online Safety Act

Event

The UK's Online Safety Act faces scrutiny over AI content, social media bans, and platform compliance.

Total Coverage: 11 articles
Last 7 Days: 0
Velocity: -100.0%
Trending: 100%

Event Overview

The Online Safety Act is a UK law that regulates online content to protect users, particularly children, from harmful material. It is newsworthy because of recent controversies and enforcement actions: Ofcom, the UK's communications regulator, is actively investigating and enforcing the Act, particularly with respect to AI-generated content on platforms such as X (formerly Twitter).

Recent developments include Ofcom opening a formal investigation into X's Grok AI tool over its creation of sexualized images, which could result in fines or a ban, and a finding that a suicide forum breached the Act by failing to block UK users. The Act's relevance has been heightened by debate over social media bans for under-16s and by calls for clearer content-moderation rules, as seen in the case of Palestine Action. Its effectiveness in addressing AI-related harms and protecting vulnerable users is under intense scrutiny.
Last updated: March 21, 2026