Australia’s internet regulator has accused the world’s largest social media companies of failing to adequately implement the country’s ban on under-16s accessing their platforms, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has raised “serious concerns” about compliance by Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting practices such as allowing banned users to repeatedly attempt age verification and failing to put adequate safeguards in place to prevent new underage accounts. In its first compliance assessment since the ban took effect, the regulator found numerous deficiencies and has now shifted from observation to active enforcement, warning that platforms must show they have put in place “appropriate systems and processes” to prevent children under 16 from accessing their services.
Non-compliance Issues Revealed in First Major Review
Australia’s eSafety Commissioner has outlined a concerning pattern of non-compliance amongst the world’s most prominent social media platforms in her first formal review since the ban came into effect on 10 December. The report finds that Meta, Snap, TikTok and YouTube have collectively failed to establish adequate safeguards to prevent minors from accessing their services. Julie Inman Grant expressed particular concern about systemic weaknesses in age verification, noting that some platforms have permitted children who initially declared themselves under 16 to subsequently claim they were older, thereby undermining the law’s intent.
The findings mark a notable escalation in regulatory action, with the eSafety Commissioner moving beyond monitoring to direct enforcement. The regulator has emphasised that the mere presence of under-16 accounts does not settle the compliance question; platforms must instead furnish substantive proof that they have put in place comprehensive systems and procedures designed to prevent under-16s from creating accounts in the first place. This shift reflects the government’s determination to hold tech giants responsible, with potential penalties looming for companies that fail to meet their statutory obligations. Among the deficiencies identified were:
- Allowing previously banned users to re-attempt age verification and regain account access
- Permitting repeated attempts at the same age assurance method without consequence
- Inadequate safeguards to prevent under-16s from establishing new accounts
- Limited notification systems for parents and the general public
- Lack of transparent data about compliance actions and user account terminations
The Extent of the Problem
The sheer scale of social media activity amongst Australian young people highlights the regulatory challenge confronting both the authorities and the platforms themselves. With numerous accounts already removed or restricted since the ban came into force, the figures paint a picture of extensive early non-compliance. The eSafety Commissioner’s conclusions indicate that the operational and technical barriers to implementing age restrictions have proved considerably more complex than expected, with platforms struggling to distinguish genuine age declarations from fraudulent ones. This complexity has left enforcement authorities grappling with the core question of whether existing age verification systems are fit for purpose.
Beyond the operational challenges lies a wider question about companies’ willingness to prioritise compliance over user growth. Social media companies have consistently opposed stringent age verification measures, citing privacy concerns and the genuine difficulty of verifying age digitally. However, the regulatory report suggests that some platforms may not be making sufficient effort to deploy the legally mandated infrastructure. The move to active enforcement represents a critical juncture: either platforms will significantly strengthen their compliance systems, or they stand to incur substantial fines that could reshape their business models in Australia and possibly influence compliance frameworks internationally.
What the Statistics Show
In the first month after the ban’s implementation, Australian regulators reported that 4.7 million accounts had been suspended or removed. Whilst this number initially appeared to demonstrate successful compliance, closer examination reveals a more nuanced picture. The sheer volume of account removals indicates that many under-16s had been able to set up accounts in the first place, demonstrating that preventative measures were inadequate. Furthermore, the data raises questions about whether the removed accounts represent genuine enforcement or simply users deleting their profiles voluntarily in response to the new restrictions.
The limited transparency around these figures has frustrated independent observers trying to determine the ban’s true effectiveness. Platforms have disclosed little data about their compliance procedures, performance indicators, or the nature of removed accounts. This lack of clarity makes it difficult for regulators and the wider public to evaluate whether the ban is working as intended or whether young people are simply finding other ways to access social media. The Commissioner’s push for thorough documentation of compliance systems and processes reflects growing frustration with platforms’ reluctance to share full details.
Industry Response and Pushback
The major tech platforms have responded to the regulator’s enforcement action with a combination of compliance assurances and doubts about the practical feasibility of the ban. Meta, which operates Facebook and Instagram, emphasised its commitment to complying with Australian law whilst contending that accurate age determination remains a significant industry-wide challenge. The company has advocated for an alternative approach, proposing that robust age verification and parental approval mechanisms implemented at the app store level would be more effective than enforcement at the platform level. This position reflects wider concerns across the industry that the current regulatory framework places an unrealistic burden on individual platforms.
Snap, the creator of Snapchat, has adopted a more assertive public position, saying it has suspended 450,000 accounts since the ban took effect and continues to suspend more each day. However, industry observers question whether such figures reflect genuine compliance or simply reactive account management. The fundamental tension between platforms’ business models, which have historically relied on maximising user engagement and growth, and the statutory obligation to actively exclude an entire age demographic remains unresolved. Companies have consistently opposed stringent age verification, citing privacy concerns and technical limitations, creating a standoff between authorities and platforms over who bears responsibility for implementation.
- Meta contends age verification should occur at the app store level rather than on individual platforms
- Snap says it has suspended 450,000 accounts since the ban took effect in December
- Industry groups highlight privacy issues and technical challenges as impediments to effective age verification
- Platforms maintain they are making their best effort whilst questioning the ban’s overall effectiveness
Broader Questions About the Ban’s Effectiveness
As Australia’s under-16 social media ban enters its enforcement stage, key questions remain about whether the law will achieve its stated objectives or merely drive young users towards less regulated platforms. The regulator’s initial compliance assessment reveals that significant loopholes remain: children continue finding ways to circumvent age verification systems, and platforms have struggled to prevent new underage accounts from being established. Critics contend that the ban’s success depends not merely on regulatory vigilance but on whether young people will genuinely leave the major social networks or simply shift towards alternative services, encrypted messaging applications, or VPNs that disguise their location.
The ban’s worldwide implications add further complexity to assessments of its success. Countries such as the United Kingdom, Canada, and several European nations are watching Australia’s experiment closely as they explore similar legislation for their own citizens. If the ban fails to reduce children’s online activity or to protect them from harmful online content, it could damage the case for equivalent legislation elsewhere. Conversely, if implementation proves strict enough to genuinely restrict underage participation, it may embolden other nations to adopt comparable measures. The outcome could set the direction of international regulation for the foreseeable future, ensuring Australia’s implementation efforts are scrutinised far beyond its borders.
Who Benefits and Who Loses Out
Mental health campaigners and child safety organisations have backed the ban as an essential measure against algorithmic manipulation and exposure to harmful content. Parents and educators maintain that taking young Australians off platforms designed to maximise engagement could lower anxiety levels, improve sleep patterns, and reduce exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks associated with adolescent social media use, adding weight to these concerns. However, the ban also removes legitimate uses of social media for young people: maintaining friendships, accessing educational content, and participating in online communities built around shared interests. The regulatory framework assumes harm outweighs benefit, a calculation that some young people and their families dispute.
The ban’s practical implications extend beyond individual users to content creators, small businesses, and community organisations that depend on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that rely on social media marketing lose access to younger audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously used effectively. Meanwhile, the ban unintentionally favours large technology companies with the resources to build age verification infrastructure, potentially reinforcing their market dominance rather than reducing it. These unintended consequences suggest the ban’s impact extends far beyond the simple goal of child protection.
What Comes Next for Enforcement
Australia’s eSafety Commissioner has announced a significant shift from passive monitoring to direct intervention, marking a critical turning point in the enforcement of the under-16 ban. The regulator will now gather evidence to determine whether companies have failed to take “reasonable steps” to block minors from using their services, a legal standard that goes beyond simply documenting that young people remain on these platforms. This approach requires tangible proof that platforms have established appropriate systems and processes designed to exclude minors. The regulator has indicated it will pursue investigations methodically, building cases that could result in significant fines for non-compliance. This transition from oversight to intervention reflects growing dissatisfaction with the platforms’ existing measures and signals that voluntary cooperation alone will not be enough.
The enforcement stage raises important questions about the adequacy of sanctions and the practical mechanisms for holding companies to account. Australia’s regulatory framework provides compliance mechanisms, but their success depends on the eSafety Commissioner’s willingness to use them and on the platforms’ capacity to adapt in substance. Regulators abroad, notably in Britain and Europe, will watch Australia’s enforcement strategy and results closely. A successful enforcement campaign could establish a template for other jurisdictions considering similar bans, whilst failure might undermine the entire legislative framework. The coming period will determine whether Australia’s pioneering regulatory approach produces real safeguards for young people or proves largely performative in its impact.
