This article was published 7 years ago


YouTube recently faced intense backlash, followed by the departure of major brands, over the placement of adverts next to inappropriate and misleading content, such as videos promoting hate, abuse, or even anti-Semitism. It wasn't only advertisers who walked away from the platform; creators also began appealing to YouTube to figure something out when several of their videos were demonetized.

Over the past couple of months, the video streaming giant has been introducing changes to make its platform more advertiser-friendly, and today it has updated its guidelines to further restrict creators from misusing their freedom of speech and earning money from such videos.

This update will benefit both creators and advertisers: the former will now know what not to include in their videos to avoid being flagged, while the latter can advertise without worrying about their ads appearing next to banned or restricted content. It will also help creators understand what they can do to remonetize their videos and return to the ad revenue levels they saw before several major brands left the platform.

In the official blog post, YouTube states that it heard feedback from both the advertiser and creator communities before deciding to broaden the categories and types of video content it will take stringent action against. Ariel Bardin, YouTube's vice president of product management, has defined three new categories of content that will be deemed highly inappropriate for advertisers and brand partners. These categories are as follows:

Hateful content: Content that promotes discrimination or disparages or humiliates an individual or group of people on the basis of the individual's or group's race, ethnicity, or ethnic origin, nationality, religion, disability, age, veteran status, sexual orientation, gender identity, or other characteristic associated with systematic discrimination or marginalization.

Inappropriate use of family entertainment characters: Content that depicts family entertainment characters engaged in violent, sexual, vile, or otherwise inappropriate behavior, even if done for comedic or satirical purposes.

Incendiary and demeaning content: Content that is gratuitously incendiary, inflammatory, or demeaning. For example, video content that uses gratuitously disrespectful language that shames or insults an individual or group.

This change in guidelines in no way means that your video content will be removed from the streaming platform if YouTube's algorithms or human review staff deem it unsuitable for advertising. The video will continue to live on the platform, but it won't be eligible for any advertising from the company's brand partners, who no longer want their ads appearing next to offensive or hateful content. A video will only be removed from YouTube if it violates both the Terms of Service and the Community Guidelines.

In addition to taking an even tougher stance on the type of content creators can make available to their audience, YouTube has also decided to help them make their videos appealing to a broad range of advertisers. The company has published a new course in the Creator Academy to give budding creators additional guidance on making content suitable for the wide audience that certain brands are looking to reach.
