In a welcome move, Instagram has introduced a new feature that could help in the fight against cyberbullying, or at least slow it down. The social media giant announced today that it is launching a new tool that lets users automatically filter out direct message (DM) requests containing offensive or inappropriate words, phrases, or emojis. While the tool is mainly designed to help celebrities and public figures protect themselves from malicious DMs, it should prove a useful addition for regular users as well, especially given the rising incidence of cyberbullying.
The feature advances Instagram's broader effort to stamp out hate speech on its platform. It follows a move the company made back in February, when it announced it would disable accounts that sent multiple messages containing harassing or malicious content. At the time, Instagram said first-time senders of such messages would be blocked from messaging for an unspecified period, while repeat offenders would have their accounts disabled. As far back as 2018, the app introduced a comments filter that let users automatically block comments attacking a person's physical appearance.
The new tool works as a toggle and lives in a new section of the app called “Hidden Words”, found under the Privacy section of the Settings menu. This means it has to be turned on proactively by users, as and when needed. Commenting on this, Instagram said, “We want to respect peoples’ privacy and give people control over their experiences in a way that works best for them,” hinting that users should retain control over whether chats containing flagged keywords stay active. With the tool turned on, messages containing offensive content are moved to a separate folder, where they remain concealed until tapped. Tapping a message also lets users delete or report it.
The tool will also be able to detect commonly used misspellings of the offensive keywords, which bullies sometimes use to evade detection. The list of keywords and emojis is being drawn up through Facebook's collaboration with anti-bullying organisations it has not named. The feature will first roll out over the next few weeks in a handful of countries, including France, Ireland, the UK, Germany, Australia, Canada, and New Zealand, and will gradually reach other countries over the coming months.
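Instagram has not published how its filter works, but the behaviour described above can be illustrated with a minimal sketch: normalize common character substitutions (a typical way misspellings evade blocklists) before matching each word against a hypothetical keyword list. The blocklist entries and substitution map here are assumptions for illustration only.

```python
# Illustrative sketch only -- Instagram's actual algorithm is not public.
# Normalizes common "leetspeak" substitutions, then checks words against
# a (hypothetical) blocklist to decide if a DM goes to the hidden folder.

BLOCKLIST = {"loser", "idiot"}  # placeholder keywords, not Instagram's list

# Map frequently substituted characters back to their letters.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "@": "a", "$": "s"})

def is_hidden(message: str, blocklist: set = BLOCKLIST) -> bool:
    """Return True if the message should be routed to the hidden folder."""
    normalized = message.lower().translate(SUBSTITUTIONS)
    # Strip trailing punctuation so "loser!" still matches "loser".
    words = (w.strip(".,!?") for w in normalized.split())
    return any(w in blocklist for w in words)
```

With this sketch, a message like "you l0ser" is caught despite the zero-for-o substitution, while an ordinary message passes through untouched.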
During its announcement, Instagram also introduced another new feature, which will let users completely block certain people from contacting them, even from any new accounts they create. This feature will be made available within the next few weeks.
The features were developed entirely by Facebook and have so far been announced only for Instagram, not for its sister apps WhatsApp and Messenger. Reports filed by users will help the company expand the keyword database over time. The feature should make it safer for people to check both of their inboxes: the primary one, where only contacts can send messages, and the secondary one, which is open to anyone and everyone.