YouTube Ups Content Quality for Brands
Brand representatives, sick of seeing their ads play on consumers’ screens before videos such as the one showing the body of a recent suicide victim, are getting their wish for tighter content enforcement from YouTube.
On Monday, a YouTube blog post titled “Expanding Our Work Against Abuse of Our Platform” detailed three ways the video platform will enforce content quality in order to stop angering advertisers.
Back in April 2017, marketers were so upset about finding their ads next to hate speech and other undesirable videos that many pulled or scaled back their advertising, and some even started vetting videos with their own staffers. Publishers including Adweek, Bloomberg, Mashable, Mediabistro, Variety, The Verge and the Wall Street Journal noted on Tuesday that YouTube has fielded a year’s worth of complaints from advertisers fed up with the problems.
The company has shown ads for major brands next to videos depicting hate speech. It has also hosted disturbing cartoons in its YouTube Kids section and allowed Logan Paul’s troubling video featuring the body of a suicide victim to run ads and reach its “Trending” section.
Here’s what YouTube CEO Susan Wojcicki wrote on Monday:
Human Reviewers Will Approve YouTube Content
This year, YouTube will employ more than 10,000 staffers who will work to “address content that might violate our policies,” Wojcicki says.
While not specifically naming the inappropriate content targeting children, she writes that YouTube is working with child safety organizations and other third-party advisers to find objectionable content.
Wojcicki explains:
Since June, our trust and safety teams have manually reviewed nearly 2 million videos for violent extremist content, helping train our machine-learning technology to identify similar videos in the future. We are also taking aggressive action on comments, launching new comment moderation tools and in some cases shutting down comments altogether. In the last few weeks we’ve used machine learning to help human reviewers find and terminate hundreds of accounts and shut down hundreds of thousands of comments.
“YouTube Subjecting All ‘Preferred’ Content to Human Review,” Tuesday’s WSJ headline states.
YouTube’s Machine Learning Will Be Deployed More Widely
The streaming video platform’s machine-learning tools are … learning … faster.
Wojcicki says:
• Since June we have removed over 150,000 videos for violent extremism.
• Machine learning is helping our human reviewers remove nearly five times as many videos as they were previously.
• Today, 98 percent of the videos we remove for violent extremism are flagged by our machine-learning algorithms.
• Our advances in machine learning let us now take down nearly 70 percent of violent extremist content within eight hours of upload and nearly half of it in two hours, and we continue to accelerate that speed.
• Since we started using machine learning to flag violent and extremist content in June, the technology has reviewed and flagged content that would have taken 180,000 people working 40 hours a week to assess.
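For scale, 180,000 people working 40 hours a week comes to 7.2 million review-hours per week.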
Advertisers and Consumers Will See Greater Transparency From YouTube
Wojcicki says that in 2018, YouTube will put out a “regular report” detailing content problems, in which “we will provide more aggregate data about the flags we receive and the actions we take to remove videos and comments that violate our content policies. We are looking into developing additional tools to help bring even more transparency around flagged content.”
YouTube’s Partner Program Gets a Revamp
News of this change comes from Adweek on Tuesday. After the April advertiser uproar, content creators needed 10,000 views on a channel to be approved for the partner program.
Now YouTube channels will need to amass 1,000 subscribers and 4,000 hours of watch time in a one-year period to run ads. Both new and existing channels will have to meet the new requirements, which go into effect on Feb. 20.
In addition to those thresholds, YouTube staff will also monitor spam, community strikes and flags of abuse as qualifiers for whether a channel can make money from clips.
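For readers who want the new rules at a glance, here is a minimal sketch of the eligibility logic in Python, based only on the thresholds described above. The function, class and field names are hypothetical, not YouTube’s actual implementation, and the spam/strike/abuse-flag signals are reduced to a single strike count for illustration.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    subscribers: int              # total channel subscribers
    watch_hours_past_year: float  # watch time over the trailing 12 months
    community_strikes: int        # stand-in for spam, strikes and abuse flags

def eligible_for_partner_program(channel: Channel) -> bool:
    """Apply the thresholds that take effect Feb. 20 (names are illustrative)."""
    meets_thresholds = (
        channel.subscribers >= 1_000
        and channel.watch_hours_past_year >= 4_000
    )
    # Per the article, spam, community strikes and abuse flags also
    # factor in; a clean record is modeled here as zero strikes.
    return meets_thresholds and channel.community_strikes == 0

# A channel just under the watch-time bar would not qualify.
print(eligible_for_partner_program(Channel(1_500, 3_900, 0)))  # False
```

Note that both conditions must hold at once, which is why the nearly met watch-time bar in the example still disqualifies the channel.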
According to Google, 99 percent of the channels affected by the new guidelines make less than $100 a year from advertising; in other words, the vast majority earn very little from YouTube to begin with.
What do you think, marketers? Here's one comment:
“YouTube’s decision to eliminate smaller creators from their partner program will put a strain on its community. Video platforms would do well to remember that creators are the cornerstone of their success — if you start removing opportunities to monetize, they will look elsewhere for places to host their content.” — Chris Pavlovski, CEO and founder of Rumble
Please respond in the comments section below.
Related story: YouTube Fixes Horrible Ad Placements, Grows 50%