Users may have noticed that social media platform X has recently made several changes to its Community Notes feature. Earlier this week, the crowdsourced fact-checking feature expanded to cover videos, allowing users to add fact checks or other important context to posts containing them. And now, the erstwhile Twitter has introduced a subtle (yet impactful) tweak to how notes are rated.
With the latest modification to Community Notes, X users will be able to review all proposed annotations to a post on the micro-blogging site, rather than being limited to a single note. This change could prompt contributors to consider a wider range of perspectives before finalizing their rating. Essentially, the adjustment is meant to arm contributors with as much pertinent information as possible, enabling a more comprehensive evaluation of notes. It also surfaces alternative viewpoints, which may influence contributors' judgments and introduce a degree of complexity into the fact-checking process.
“When rating a note, it’s important that contributors have as much helpful information available as possible. Starting today, when contributors tap “Rate it” on a note, they’ll be taken to a page listing all note proposals, instead of just the note they’re looking at. This way, contributors can consider other notes before submitting their rating, including those claiming the post does not need additional context. We expect this to create more thoughtful and accurate ratings,” read a post from the official Community Notes account on the platform on Wednesday, September 13.
— Community Notes (@CommunityNotes) September 12, 2023
In practice, this adjustment could prove pivotal, and the official account elaborated on this in its post. For instance, consider a scenario where two notes are presented, both of which could be deemed helpful. One rectifies misinformation by pointing out that whales are indeed mammals, while the other contends that a note is unnecessary because the account in question is a parody. Both statements may hold validity, but rating the latter as helpful could inadvertently result in no context being added to the post at all.
This nuance becomes even more critical when dealing with parody accounts, especially political ones. And considering that the number of parody accounts has skyrocketed ever since the verified checkmark became available to anyone willing to pay for it, this adjustment to Community Notes seems a useful one. It remains to be seen whether it can truly improve the accuracy of fact-checking on X. By providing contributors with more context and alternative viewpoints, it could lead to more thorough evaluations and more precise fact-checks, reducing the spread of misinformation on the platform.
This tweak could also help X's business. Ever since Musk's acquisition, the erstwhile Twitter has come under heavy criticism for perceived bias and for giving a looser leash to hate speech and misinformation. Musk himself has been accused of personal bias, whether by throttling links to competitors and news outlets he has publicly criticized, or by limiting the reach of personalities he doesn't approve of. This tweak to Community Notes, if implemented even-handedly, could make users more confident that misinformation is being addressed effectively, potentially leading to a better user experience and perhaps even winning back some of the disgruntled Twitterati who had earlier left for alternatives like Threads.