
Image: Twitter (Credits: Wikimedia Commons)

In a bid to filter out harmful comments and give its users a more pleasant experience on the platform, Twitter has announced that it is rolling out an improved Reply Prompts feature to users on Android and iOS who have their default language set to English. The feature does not stop users from tweeting what they want to say; instead, a prompt is displayed before the Tweet is posted, so that users can review and reconsider what they are about to send. Potentially harmful words that might be offensive to some are highlighted in the prompt.

The categories of harmful language Twitter flags so far include hateful remarks, strong language, and insults or slurs.

The feature has been in testing since last year, when Twitter said its purpose was to make users reconsider their Tweets and replies if they were about to post a “potentially harmful or offensive reply”.

During the testing phase, Twitter worked to improve the service’s algorithms so that they could better distinguish between insults and friendly banter. As the company acknowledged last year, “the algorithms powering the prompts struggled to capture the nuance in many conversations and often didn’t differentiate between potentially offensive language, sarcasm, and friendly banter.”

Twitter now appears to have succeeded in making the algorithms smarter at understanding the nuances of a conversation. Before a prompt is triggered for a potentially offensive reply, the system takes into account the relationship between the author and the replier, including how often they interact, so that the prompts don’t become a nuisance.
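Twitter has not published how this gating works, but the idea it describes can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the function names, the toxicity threshold, and the relationship signals are hypothetical, not Twitter’s actual implementation.

```python
# Hypothetical sketch of a prompt-gating heuristic like the one Twitter
# describes: the closer the relationship between two accounts, the more
# likely strong language is friendly banter, so the bar for prompting rises.
# All names, signals, and thresholds here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ReplyContext:
    toxicity_score: float            # 0.0-1.0 output of an offensive-language model
    authors_follow_each_other: bool  # mutual-follow signal
    prior_interactions: int          # how often the two accounts have replied to each other


def should_show_prompt(ctx: ReplyContext, base_threshold: float = 0.8) -> bool:
    """Return True if a 'reconsider your reply' prompt should be shown."""
    threshold = base_threshold
    if ctx.authors_follow_each_other:
        threshold += 0.10   # mutuals banter more, so tolerate more
    if ctx.prior_interactions > 10:
        threshold += 0.05   # frequent contact is another banter signal
    return ctx.toxicity_score >= min(threshold, 0.99)


# The same borderline reply prompts between strangers but not between
# frequent mutuals, which is the behaviour the article describes.
stranger = ReplyContext(toxicity_score=0.85, authors_follow_each_other=False, prior_interactions=0)
mutual = ReplyContext(toxicity_score=0.85, authors_follow_each_other=True, prior_interactions=25)
print(should_show_prompt(stranger))  # True
print(should_show_prompt(mutual))    # False
```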

The system has also become smarter in that it can now identify situations where “language may be reclaimed by underrepresented communities and used in non-harmful ways.” This does not mean, however, that genuine offenders are let off the hook: the algorithms are now more capable of detecting the use of strong language and taking the action needed. The platform is also open to learning from its mistakes, as users can give Twitter feedback; after receiving a prompt, they are asked whether the prompt was wrong.
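One plausible way such per-prompt feedback could feed back into the system is as labelled training data. The sketch below is an assumption, not Twitter’s method; the function, file format, and field names are all hypothetical.

```python
# Hypothetical sketch of the feedback loop described above: when a user
# says a prompt was wrong, that judgement is stored as a labelled example
# a classifier could later be retrained on. All names are assumptions.

import json
import time


def record_prompt_feedback(reply_text: str,
                           user_said_prompt_was_wrong: bool,
                           log_path: str = "prompt_feedback.jsonl") -> None:
    """Append one feedback event as a JSON line for later retraining."""
    event = {
        "timestamp": time.time(),
        "reply_text": reply_text,
        # A "wrong" verdict becomes a false-positive label for the model.
        "label": "false_positive" if user_said_prompt_was_wrong else "confirmed",
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
```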

Until now, the feature had been in a beta-testing phase of sorts, in which Twitter tested how it affected users and looked for ways to improve the experience. The company claims that during these tests some users’ posting habits did change: the prompts led about 34% of prompted users to edit or delete a reply that contained offensive content. The prompts also seem to have made users more aware of what is and isn’t appropriate, with future offensive replies dropping by around 11%. On the receiving end, users were better protected online, as their encounters with offensive or inappropriate replies declined. All in all, this feature looks like a valuable addition to Twitter’s anti-hate-speech agenda, and now that it has become significantly smarter, users might be in for a safer, more welcoming experience on the platform.