Facebook, YouTube Warn Of More Mistakes As Machines Replace Moderators

Mar 31, 2020
Originally published on March 31, 2020, at 9:44 a.m.

Facebook, YouTube and Twitter are relying more heavily on automated systems to flag content that violates their rules, after tech workers were sent home to slow the spread of the coronavirus.

But that shift could mean more mistakes — some posts or videos that should be taken down might stay up, and others might be incorrectly removed. It comes at a time when the volume of content the platforms have to review is skyrocketing, as they clamp down on misinformation about the pandemic.

Tech companies have been saying for years that they want computers to take on more of the work of keeping misinformation, violence and other objectionable content off their platforms. Now the coronavirus outbreak is accelerating their use of algorithms rather than human reviewers.

"We're seeing that play out in real time at a scale that I think a lot of the companies probably didn't expect at all," said Graham Brookie, director and managing editor of the Atlantic Council's Digital Forensic Research Lab.

Facebook CEO Mark Zuckerberg told reporters that automated review of some content means "we may be a little less effective in the near term while we're adjusting to this."

Twitter and YouTube are also sounding notes of caution about the shift to automated moderation.

"While we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes," Twitter said in a blog post. It added that no accounts will be permanently suspended based only on the actions of the automated systems.

YouTube said its automated systems "are not always as accurate or granular in their analysis of content as human reviewers." It warned that more content may be removed, "including some videos that may not violate policies." And, it added, it will take longer to review appeals of removed videos.

Facebook, YouTube and Twitter rely on tens of thousands of content moderators to monitor their sites and apps for material that breaks their rules, from spam and nudity to hate speech and violence. Many moderators are not full-time employees of the companies, but instead are contractors who work for staffing companies.

Now those workers are being sent home. But some content moderation cannot be done outside the office, for privacy and security reasons.

For the most sensitive categories, including suicide, self-injury, child exploitation and terrorism, Facebook says it is shifting work from contractors to full-time employees and ramping up the number of people working in those areas.

There are also increased demands for moderation as a result of the pandemic. Facebook says use of its apps, including WhatsApp and Instagram, is surging. The platforms are under pressure to keep false information, including dangerous fake health claims, from spreading.

The World Health Organization calls the situation an "infodemic," in which an overload of information, both true and false, makes it hard to find trustworthy sources.

The tech companies "are dealing with more information with less staff," Brookie said. "Which is why you've seen these decisions to move to more automated systems. Because frankly, there's not enough people to look at the amount of information that's ongoing."

That makes the platforms' decisions right now even more important, he said. "I think that we should all rely on more moderation rather than less moderation, in order to make sure that the vast majority of people are connecting with objective, science-based facts."

Some Facebook users raised alarm that automated review was already causing problems.

When they tried to post links to mainstream news sources like The Atlantic and BuzzFeed, they got notifications that Facebook thought the posts were spam.

Facebook said the posts were erroneously flagged as spam because of a glitch in its automated spam filter.

Zuckerberg denied that the problem was related to shifting content moderation from humans to computers.

"This is a completely separate system on spam," he said. "This is not about any kind of near-term change, this was just a technical error."


STEVE INSKEEP, HOST:

Almost by definition, many tech workers are among those who can work from home. Many are already in front of screens, working on the Internet. But one tech job at companies like Facebook, YouTube and Twitter is hard to do at home - moderating harmful content. Workers must do that while preserving privacy and security and maybe not in front of their families in the living room. NPR tech correspondent Shannon Bond reports on a solution - artificial intelligence.

SHANNON BOND, BYLINE: The tech companies have been saying for years that they want computers to take on more content moderation. For one thing, they're faster than human reviewers, and they won't be traumatized by graphic violence or disturbing content. The pandemic has accelerated that transition. Graham Brookie is director of the Atlantic Council's Digital Forensic Research Lab, which tracks online disinformation.

GRAHAM BROOKIE: We're seeing that play out in real time at a scale that I think a lot of the companies probably didn't expect at all.

BOND: So what does this mean for what's showing up in your social media feeds? The companies themselves are warning there could be mistakes. Here's Facebook CEO Mark Zuckerberg on a recent call with reporters.

(SOUNDBITE OF ARCHIVED RECORDING)

MARK ZUCKERBERG: We may be a little less effective in the near term while we're adjusting to this.

BOND: That means some posts or videos might be incorrectly removed and others that should come down may be left up. At Facebook, humans are still reviewing the most difficult material, like posts about suicide and self-harm, terrorism and child exploitation. Many moderators are contractors, not full-time employees. But Facebook is shifting that work to employees so contractors can stay home. The platforms are grappling with how to get the critical work of moderation done as the volume of posts they have to review is skyrocketing. Graham Brookie says that's creating pressure.

BROOKIE: They are dealing with more information with less staff, which is why you've seen these decisions to move to more automated systems because, frankly, there's not enough people to look at the amount of information that's ongoing.

BOND: That includes false information about the pandemic, including bogus cures and harmful fake treatments. The World Health Organization calls the situation an infodemic where too much information, both true and false, makes it hard for people to find sources they can trust. Brookie says that makes the platforms' decisions about what people are allowed to say even more important right now.

BROOKIE: I think that we should all rely on more moderation rather than less moderation in order to make sure that the vast majority of people are connecting with objective, science-based facts.

BOND: Some Facebook users are raising alarms that automated review is already causing problems. When they tried to post links to mainstream news sites like The Atlantic and BuzzFeed, they got notifications that the posts were spam. Facebook said that was an error because of a glitch in its automated spam filter. Zuckerberg said it was unrelated to the change in content moderation. Shannon Bond, NPR News, San Francisco. Transcript provided by NPR, Copyright NPR.