Background Discussion: Social Media Restrictions for Minors: What Can Europe Learn From Australia?
Wednesday, 11 February 2026
10:00 - 11:00 (CET)
Across the EU, political pressure is building to restrict access to social media for minors. Denmark, France and Germany are increasingly considering ID-verified bans for minors, with initial legislative proposals currently in the works.
As these debates intensify, Australia stands out as a key international precedent: the country has implemented far-reaching restrictions for under-16s – measures that are now influencing discussions well beyond its borders.
As European policymakers consider next steps, we should ask: Do Australia-style social media restrictions for minors offer a viable model for Europe — or a warning sign? How is the ban so far affecting Australian teenagers, families, and platforms? And what, concretely, can the EU already learn from the Australian test bed?
These questions will be at the centre of Lena-Maria Böswald's 60-minute background discussion with Tom Sulston. Tom is the Head of Public Policy for Digital Rights Watch, an Australian not-for-profit organisation that closely monitors and engages in internet policymaking in Australia.
______________________________________________________________________
Event Readout
11 February 2026
Interface’s Lena-Maria Böswald, Senior Policy Researcher in the Digital Public Sphere Programme, hosted an event on the reality of Australia's social media age restrictions. With European nations like Germany, Denmark, France, and Spain proposing similar bans for minors, and similar pressure growing at the EU level, the discussion sought to clarify what these policies look like in practice.
The discussion guest, Tom Sulston, Head of Public Policy at Digital Rights Watch Australia, provided a critical perspective on the Australian "Online Safety Amendment Act." He characterised the legislation as a "victory of politics over policy", noting that the bill was rushed through parliament in just eight days, with a 48-hour community consultation. Sulston argued that the move was driven more by election-related political pressure and media campaigns focused on cyberbullying, such as the Murdoch-led "Let Them Be Kids", than by established evidence of broader mental health benefits for children.
The Specifics of the Australian Framework
The ban, which came into effect in late 2025, targets users under the age of 16 and applies to ten major platforms, including Facebook, Instagram, TikTok, YouTube, and Reddit. Unlike many of the models proposed in Europe, Australia's law stands out for its strictness:
- No parental exemption: Even with parental consent, minors under 16 are legally barred from creating accounts.
- Account deactivation: Platforms are required to deactivate existing profiles of underage users.
- Platform responsibility: Tech companies face penalties of up to AUD 49.5 million (approx. €29 million) for failing to take "reasonable steps" to enforce the age limit.
The law does not mandate any specific technology, partly because of its expedited timeline, so platforms have adopted a "suite" of self-chosen age assurance methods:
- Age verification via identity documents: Using passports or driver's licenses.
- Age estimation: AI-driven facial analysis or account age used to estimate a user's age.
- Age inference: Analysing online behaviour through account history or text patterns to identify "child-like" writing.
The Technological and Human Cost
Sulston raised critical concerns about how platforms are currently checking users' ages and about the unintended consequences for users. Document-based verification carries serious privacy risks, while estimation and inference methods are often woefully inaccurate at differentiating between 15- and 16-year-olds, with error rates producing thousands of "false positives" (adults locked out of their accounts) and "false negatives" (children gaining access).
He identified three major consequences:
1. Loss of anonymity and privacy: Requiring ID verification creates a "chilling effect" for whistleblowers and vulnerable populations who rely on online anonymity for safety.
2. Driving youth to "dark corners": By banning minors from moderated platforms (which have safety features and reporting tools), the law may drive them to unmoderated spaces like 4chan or Stormfront, as is already the case in Australia.
3. Isolation of vulnerable teens: For LGBTQ+ youth, disabled teenagers, and those in remote communities, social media is often a vital support network. Sulston noted that some mental health helplines have already reported that 10% of teens requesting their services cite the ban as a contributing factor.
Q&A: Looking Toward a European Solution
The session concluded with an audience Q&A on how the EU might avoid these pitfalls. Sulston emphasised that the Australian ban does not address the underlying "surveillance capitalism" business model of social media companies and in fact distracts from conversations about policies that would make social media meaningfully safer for children and adults.
Instead of "age-gating" corners of the internet, which fundamentally disempowers the individual, Sulston argued policymakers should focus on regulating algorithms, user targeting, and platforms’ legal immunity with regard to user-generated content.
"The ban itself is not great... the problem is all of the side effects that were not thought out because it went through a political process, not a policy process."
– Tom Sulston, Digital Rights Watch
Meet the speakers
Lena-Maria Böswald
Senior Policy Researcher, Digital Public Sphere
Tom Sulston
Head of Public Policy, Digital Rights Watch