By Satyabrat Borah
On December 10, 2025, Australia became the first country to implement a nationwide ban on social media access for children under the age of 16, marking a bold and controversial step in the global effort to protect young people from the potential harms of online platforms. The law, passed in late 2024 as the Online Safety Amendment (Social Media Minimum Age) Act, requires major tech companies to take reasonable steps to prevent Australians under 16 from creating or maintaining accounts on designated platforms. These include Facebook, Instagram, Threads, X (formerly Twitter), YouTube, Snapchat, TikTok, Reddit, Kick, and Twitch. Failure to comply could result in fines of up to 49.5 million Australian dollars per breach. Prime Minister Anthony Albanese hailed the law’s commencement as a “proud day” for the nation, emphasizing that it empowers families to reclaim control from powerful tech giants whose algorithms have been linked to rising mental health issues among youth.
The decision stems from mounting evidence of social media’s detrimental effects on children. Studies cited by the Australian government reveal that excessive exposure contributes to anxiety, depression, body image disturbances, cyberbullying, and even suicidal ideation. One government-commissioned report found that seven out of ten children aged 10 to 15 had encountered harmful content, including promotions of misogyny, eating disorders, and self-harm. Leaked documents from Meta, the parent company of Facebook and Instagram, have previously acknowledged that its platforms exacerbate body image problems and suicidal thoughts in teenagers. Globally, similar concerns have prompted calls for regulation, with experts pointing to addictive design features that keep young users scrolling for hours. Albanese and supporters argue that delaying access until 16 allows children to develop greater resilience during critical formative years, encouraging real-world activities like sports, reading, or learning instruments instead of endless digital engagement.
As the ban took effect, platforms began deactivating accounts en masse. Meta started shutting down hundreds of thousands of underage profiles on Facebook and Instagram days earlier, offering options to download data or pause accounts until users turn 16. TikTok announced it would deactivate all under-16 accounts regardless of registration details, relying on advanced age verification technology. Snapchat planned three-year suspensions, while YouTube focused on Google account signals and other indicators. Reports emerged of teething issues: some children bypassed facial age estimation tools, fooling systems into verifying them as adults, while others, including over-16 users, faced erroneous deactivations. The eSafety Commissioner, Julie Inman Grant, acknowledged that the rollout would not be perfect overnight, promising a graduated approach to enforcement and public updates on compliance before Christmas. Platforms must report underage account numbers, and ongoing monitoring will assess effectiveness through independent academic evaluations.
Public reaction has been deeply divided. Many parents and child advocates welcomed the ban, viewing it as a necessary safeguard in an era where children are bombarded with predatory content. Polling consistently shows around two-thirds of Australians support raising the minimum age to 16, with some parents describing their children as “completely addicted” and praising the law for providing a framework to enforce limits. Stories from families whose children’s suicides were linked in part to online harms underscored the urgency, with advocates hoping Australia’s action inspires other nations. Teenagers, however, expressed a mix of distress and resignation. Some felt isolated, fearing disconnection from friends, especially those in remote areas or from minority groups who rely on social media for community and support. LGBTQ+ organizations highlighted that for many vulnerable youth, platforms offer lifelines to acceptance and resources unavailable offline. Others shrugged it off, predicting quick adaptation or workarounds.
Critics, including tech companies and free speech advocates, argue the ban is flawed and potentially counterproductive. Platforms contend it will drive children to unregulated “darker corners” of the internet, where harms are harder to moderate. Age verification methods, while improving, remain imperfect and prone to circumvention via VPNs, fake profiles, or borrowed credentials. Gaming services like Roblox and Discord, along with messaging apps like WhatsApp, are currently exempt, creating loopholes where similar risks persist. Privacy concerns loom large, as robust verification could require collecting more personal data, clashing with Australia’s strong privacy laws. Amnesty International warned that blanket bans force secretive usage, increasing vulnerability. A pending High Court challenge claims the law infringes on implied constitutional rights to political communication. Even supporters admit enforcement challenges, with savvy tech-native children likely to find ways around restrictions.
The ban’s long-term success remains uncertain. While it addresses direct exposure on major platforms, it does not eliminate online risks entirely. Children can still view public YouTube content without accounts, and new platforms could emerge unchecked. Experts emphasize that technology alone cannot replace parental guidance and education on digital literacy. Responsible monitoring of children’s online activities is crucial, yet in busy modern families, this is often easier said than done.
This leads to an alternative proposal worth considering: charging a nominal fee for social media access. A small monthly charge, perhaps a few dollars in many countries or as little as one rupee in places like India, could serve as an effective gatekeeper. Digital payments typically require adult verification and consent, since children cannot independently register on most payment gateways. Parents would thus be positioned to oversee and approve access, fostering accountability without outright exclusion. Costs this low pose no meaningful financial barrier and avoid socioeconomic divides. No system is foolproof if parents share payment details carelessly, but this approach at least encourages their involvement, balancing protection with practicality and enabling monitored usage rather than prohibition.
Australia’s pioneering move has ignited international interest, with countries like Malaysia planning similar under-16 restrictions and others in Europe and the US exploring age limits or enhanced parental controls. Whether it proves a model or a cautionary tale depends on outcomes: reduced mental health issues versus increased isolation or migration to unsafe spaces. Ultimately, shielding children in a tech-saturated world demands multifaceted efforts. Governments must hold companies accountable for safer designs, parents must engage actively, and society must teach critical online navigation skills. Australia’s experiment is underway, offering valuable lessons for a digital future where children’s well-being takes precedence over unchecked innovation. As the world watches, the true measure of success will be healthier, more balanced young generations equipped to thrive both online and off.