Author Topic: Facebook “Explains” Why It Failed To Remove Christchurch Shooter’s Gruesome Livestream

Facebook “Explains” Why It Failed To Remove Christchurch Shooter’s Gruesome Livestream

by Tyler Durden
ZeroHedge.com

Thu, 03/21/2019


Though Facebook’s AI-powered censors managed to “mistakenly” flag Zero Hedge as a repeat violator of the social network’s “community standards”, the company is still working out the kinks in its ability to immediately identify and remove livestreams depicting horrific acts of terror and violence, like the video published Friday by the Christchurch shooter.


In a mea culpa blog post published Thursday, Facebook VP of Integrity Guy Rosen explained why the company failed to immediately remove the horrifying livestream of the attacks. The shooter, a 28-year-old Australian who also published a manifesto laying out his violent, Islamophobic ideology, posted the video to Facebook Live, where it was viewed 4,000 times before being taken down.


According to Rosen, one reason the video lingered for so long on the platform is that it wasn’t prioritized for immediate review by the company’s staff; despite being reported multiple times, the video wasn’t removed until police responding to the incident reached out to the company. As it stands, Facebook only prioritizes reported livestreams tagged as suicide or self-harm for immediate review.


To rectify this, the company is “re-examining its reporting logic” and will likely expand the report categories prioritized for immediate review:


In Friday’s case, the first user report came in 29 minutes after the broadcast began, 12 minutes after the live broadcast ended. In this report, and a number of subsequent reports, the video was reported for reasons other than suicide and as such it was handled according to different procedures. As a learning from this, we are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review.


Or as one Twitter wit summed it up:



In latest statement on the Christchurch terror attack, Facebook says the Live video wasn’t acted on straight away because it wasn’t flagged under a new category “suicide”….


Looks like they’re mulling a new category. Like “murder” or “terrorism”? https://t.co/dMq1QgoCyu pic.twitter.com/KtPfd5cfiL


— Mark Di Stefano 🤙🏻 (@MarkDiStef) March 21, 2019





Source: Facebook “Explains” Why It Failed To Remove Christchurch Shooter’s Gruesome Livestream