Two months after Australia’s landmark social media ban for under-16s took effect, new data reveals that one-fifth of Australian teenagers are still actively using platforms like TikTok and Snapchat, exposing a critical gap between legislative intent and technical reality in global efforts to protect minors online.
Australia’s ambitious Online Safety Amendment (Social Media Minimum Age) Act 2024, which commenced on December 10, 2025, was heralded as a world-first, legally binding requirement for platforms to implement robust age assurance systems or face fines of up to A$49.5 million ($35 million). The law targeted major platforms including Meta’s Instagram and Facebook, Google’s YouTube, TikTok, and Snapchat, aiming to create a digital safeguard for a generation raised on social media.
The first concrete evidence of the ban’s impact, from data collected by parental control software maker Qustodio and reported by Reuters, shows a clear but incomplete decline. Among 13-to-15-year-olds, Snapchat usage fell 13.8 percentage points to 20.3% from November to February, while TikTok usage dropped 5.7 percentage points to 21.2%. YouTube, which allows unlogged use for all ages, saw only a one-point dip to 36.9%.
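Because Qustodio reports February levels alongside percentage-point drops, the pre-ban November baselines can be reconstructed by simple addition. A quick back-of-the-envelope calculation (using only the figures quoted above):

```python
# Reconstruct pre-ban (November) usage rates among 13-to-15-year-olds
# from the February levels and percentage-point drops reported by Qustodio.
reported = {
    "Snapchat": {"february": 20.3, "drop_pp": 13.8},
    "TikTok":   {"february": 21.2, "drop_pp": 5.7},
    "YouTube":  {"february": 36.9, "drop_pp": 1.0},
}

for platform, d in reported.items():
    november = d["february"] + d["drop_pp"]
    retained = d["february"] / november * 100  # share of the November audience still active
    print(f"{platform}: {november:.1f}% in November -> {d['february']:.1f}% in February "
          f"(~{retained:.0f}% of pre-ban users retained)")
```

On these figures, Snapchat retained roughly 60% of its pre-ban teen audience, TikTok roughly 79%, and YouTube about 97%, which is what makes the "clear but incomplete decline" framing apt.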
These numbers represent a steeper seasonal decline than the previous year’s December-January summer break, suggesting the ban did have an impact. However, the persistent usage by a significant minority—over one in five teens for the most popular platforms—immediately answers the core question haunting policymakers and parents: can age gates ever be truly effective against motivated minors?
The Loophole in the Law: Users Who “Haven’t Been Blocked”
Qustodio’s report frames the issue bluntly: “Among children whose parents haven’t blocked access, a meaningful number continue to use restricted platforms in the months following the ban.” This points to a two-front enforcement failure. First, platform-native age verification, often reliant on self-declaration or easily circumvented AI checks, is proving porous. Second, device-level controls installed by parents are not universally applied, leaving a direct pathway to these apps.
The technical challenge is immense. Platforms employ a mix of ID verification, AI age estimation, and user reporting. Each method has well-documented weaknesses: fake IDs, photo spoofing to fool AI, and the simple truth that a motivated teen with a parent’s credit card or an older sibling’s login can bypass most systems. The law fines the platforms, but it cannot fine the user, creating an asymmetric enforcement problem.
Regulator and Government Responses: “Cultural Change” Takes Time
The eSafety Commissioner, Australia’s internet regulator, acknowledged the data, saying it was “aware of reports some under-16s remained on social media” and was “actively engaging with platforms and their age assurance providers… while continuing to monitor for any systemic failures that may amount to a breach of the law.” This signals an enforcement posture focused on systemic platform flaws rather than individual user behavior.
A spokesperson for Communications Minister Anika Wells offered the government’s perspective: “We have always been clear that increasing the minimum age to access social media is a cultural change that will take time.” This frames the 20% figure not as a failure, but as an expected phase in a long-term societal shift. The implication is that the goal is to make access harder and less normalized, not to achieve 100% compliance overnight.
No Migration, But No Surrender: The WhatsApp Blip
One fear that has not materialized, according to the Qustodio data, is a mass exodus to completely unregulated or encrypted platforms like Discord or various gaming chats. There was only a “small uptick” in WhatsApp use among 13-15-year-olds. This is crucial. It suggests teens are not abandoning social connectivity but are seeking the same features (messaging, sharing, community) on platforms with weaker or no age checks, or on the same platforms via workarounds like using a parent’s account.
Qustodio’s report also notes that some of the post-ban dip is “slowly beginning to recover.” This indicates user adaptation. As news cycles move on and workarounds spread through peer networks, the initial shock of the ban may be wearing off, potentially stabilizing usage at a new, lower but persistent baseline.
The Global Experiment and Its Unanswered Questions
Australia’s law is now a blueprint. Similar legislation is advancing in the United Kingdom, France, and several U.S. states. The world is watching this real-time experiment. The early data provides two stark lessons.
First, age assurance technology is not yet a magic bullet. Platforms are investing heavily, but the cat-and-mouse game between verification systems and determined users favors the latter. Second, legislation alone is insufficient. The “cultural change” the government references requires parallel investment in digital literacy for parents, transparent reporting from platforms on their failure rates, and potentially, a shift in how society views childhood digital access.
The bans also raise new equity concerns. Families with fewer resources—less time, technical skill, or financial means for monitoring software—may see higher rates of non-compliance, potentially creating a two-tier system of protected and unprotected minors.
What This Means for Users, Developers, and Parents
- For Teens: The ban has made access slightly harder, but not impossible. Expect continued peer-to-peer sharing of workarounds, such as using family accounts or borrowing devices. Platforms may increase secondary verification prompts, leading to more friction.
- For Parents: The onus has shifted further to device-level controls (Apple’s Screen Time, Google Family Link) and third-party monitoring apps like Qustodio. Relying solely on platform-level age gates is a known weak point.
- For Developers & Platform Engineers: The legal stakes are now existential. This data is an urgent call to innovate beyond checkbox compliance. True age assurance will require frictionless, privacy-preserving verification methods that are extremely difficult to spoof—a monumental technical and UX challenge.
- For Policymakers Globally: Australia’s experience shows a trade-off: a significant reduction in casual, logged-in teen usage, but a stubborn floor of determined access. Future laws may need to incorporate penalties for negligent household-level controls or mandate funding for independent age-tech audits.
The 20% figure is not a verdict of failure, but a definitive benchmark. It sets a realistic, evidence-based starting point for a global debate. The goalposts have moved from “can we stop all teens?” to “what is an acceptable level of residual risk, and what are the societal costs of achieving it?” The answers will shape the digital childhoods of the next decade.