
Australia has passed new legislation banning children under the age of 16 from accessing major social media platforms. With the bill now approved by both the House of Representatives and the Senate, the country is set to enforce some of the strictest age restrictions for online platforms anywhere, and the first blanket social media ban of its kind worldwide.

This development comes amid growing concern about the harmful impact of social media on younger users, particularly regarding mental health, privacy, and exposure to inappropriate content. Platforms such as Instagram, TikTok, Snapchat, and X (formerly Twitter) will be required to ensure that users under 16 cannot set up accounts. The law will come into effect by late 2025, giving platforms roughly a year to implement the necessary age-verification systems.

Public support for the move has been overwhelming. A YouGov survey found that 77% of Australians approve of the new restrictions. Many parents and advocacy groups see the law as a much-needed safeguard in an era when social media has been linked to rising rates of anxiety, depression, and cyberbullying among young people. Still, enforcing the legislation will be a challenge. Social media companies will be responsible for taking “reasonable steps” to verify users’ ages, with potential fines of up to AUD 49.5 million (about USD 32 million) for non-compliance.

The exact methods of implementing these steps are left to the discretion of the companies, which have expressed strong reservations about the feasibility of the law. Meta, for instance, said the legislation had been “rushed,” arguing that it disregards existing measures to create age-appropriate experiences. Elon Musk, owner of X, suggested that the law could pave the way for broader government control over internet access in Australia. “The social media ban legislation has been released and passed within a week and, as a result, no one can confidently explain how it will work in practice – the community and platforms are in the dark about what exactly is required of them,” commented Sunita Bose, Managing Director of DIGI (Digital Industry Group Inc.), a non-profit industry association.

There are, however, several risks associated with age-verification technologies. While the law explicitly prohibits platforms from requiring users to submit sensitive documents such as passports for verification, alternative methods such as biometric analysis carry risks of their own: critics argue that these systems may inadvertently collect and store personal data that could later be misused. To address these concerns, the Australian government has initiated trials involving a diverse group of participants to evaluate various verification methods. These trials, overseen by the Age Check Certification Scheme, aim to ensure that the chosen solutions maintain user privacy and security.