While the internet remains an unsafe place for teen and underage users, and a persistent threat to their digital privacy, social media companies have been working on updates and features to better safeguard the privacy and security of their younger users. Meta, Facebook’s parent company, is now taking it one step further with its new privacy updates for teens on Instagram and Facebook.

In a blog post, the parent company of two of the internet’s largest social media platforms announced that it is rolling out a new set of updates intended to better protect teen users from harmful behaviour on both platforms as the holiday season approaches at the end of the year – a period bound to see teens spending more time and posting more on social media.

These updates will apply to users of Facebook and Instagram under the age of 16 – or 18 in certain countries. New users will be defaulted into private settings when they sign up, while existing users will be prompted by Facebook to manually opt into private settings for a safer experience on its social networking sites.

These private settings affect who can see their Friends list; who can see the people, Pages and lists they follow; who can see posts they are tagged in on their profile; who can comment on their public posts; and whether they can review posts they are tagged in before those posts appear on their profile.
Meta is also taking steps to better protect teens from potential predators on its platforms. Adults are already restricted from messaging teens they are not connected to, and from seeing teen users in “People You May Know” recommendations. Now, the company is testing ways to shield teens from suspicious adults they aren’t connected to. Meta has provided an example of what it considers a “suspicious” account – one belonging to an adult who may have recently been blocked or reported by a young person. It is also testing removing the message button on teen users’ Instagram accounts when those accounts are viewed by suspicious adults.

Additionally, the social media company is encouraging teen users to make the most of its safety tools on the two platforms, such as reporting accounts after blocking them, and is sending safety notices with information on how to navigate inappropriate messages from adults. Meta notes that teen users have made ample use of these tools – over 100 million users were sent safety notices on Messenger in a single month last year. While this highlights Meta’s efforts to protect minor users, it also paints a disturbing picture of the number of potential predators on its platforms.

Last but not least, Meta has joined forces with the National Center for Missing and Exploited Children (NCMEC) to develop a global platform for teens who fear that their intimate images might be shared online without their consent.

This issue has persisted alongside the growth and proliferation of the internet, and the leaking and spread of teens’ intimate pictures on online platforms often has severe real-world consequences – including, in several documented cases, suicide. With the platform, Meta aims to prevent a teen’s intimate images from being posted online, and to ensure the platform responds to teens’ needs so they can regain control of their content in such horrific situations. Meta said it will share more details in the coming weeks.

Meta is also collaborating with Thorn and its NoFiltr brand to develop educational materials that remove the shame and stigma associated with intimate images, and to encourage teens to seek help and reclaim control if they have shared such images or are experiencing sextortion.