Don't blame users when violence on Facebook goes unreported

After two high-profile episodes of violence on its platform, Facebook is facing a crisis with no clear prospect for change.

It started last month in Cleveland, when a 37-year-old man named Steve Stephens used Facebook to post a series of immensely disturbing videos, including one in which he was seen shooting 74-year-old Robert Godwin Sr. in the head. The brutality of the crime, and the video’s spread across Facebook, led to massive media coverage. Then, last week, another gruesome tragedy unfolded on the platform, as a man in Thailand, Wuttisan Wongtalay, murdered his 11-month-old daughter on camera. Two video clips reportedly showed the killing, and they stayed on Wongtalay’s Facebook page for more than 24 hours. By the time they were taken down, one had 112,000 views, and the other, 258,000.

“Why doesn’t someone call the police?”

A Thai viewer, visible in a screenshot of the video, asked a question that was on many minds: Why doesn’t someone call the police? Related questions are also being asked of Facebook: as the view counts climbed, how could the videos have stayed up for so long without the platform taking action?

Facebook relies on users to flag offensive or violent content. Those reports are then passed on to professional moderators, often laboring overseas, who decide whether content violates Facebook’s terms of service and should be taken down. The company says it sometimes provides information to law enforcement in emergencies, and presumably that information is often first seen by moderators. “Currently, thousands of people around the world review the millions of items that are reported to us every week in more than 40 languages,” the company said in its statement on the Cleveland video. According to Facebook’s timeline, Stephens’ account was disabled about 20 minutes after the shooting video was reported, but the video itself went unreported for more than an hour and 45 minutes after it was uploaded.

The implication is that Facebook’s moderators acted fairly quickly given the amount of material they sift through, and that the real lag came from viewers who didn’t report the video. That account jibes with intuitions about passive spectators — think Kitty Genovese and the bystander effect — as well as a sense that the internet is an alienating place where people will watch horrible things. But the popular story of the Genovese murder turned out to be largely wrong, and the psychology of witnessing terrible things on the internet is more complicated than it might seem.

The implication is that Facebook’s moderators acted fairly quickly

The bystander effect, which leads to a diffusion of responsibility in a crisis, is well-documented, although its most common interpretation may be misleading. As Dan Stalder, a University of Wisconsin–Whitewater professor, has documented in reviews of previous research, the bystander effect lessens any individual’s feeling of responsibility in a situation. Across a range of scenarios, including ones where requests for help came through computers and ones where communication between bystanders was difficult, as it is with an online video, each individual became less likely to help as the group grew. At the same time, the overall odds that the person in trouble would be helped by someone in the group increased.

It’s difficult to draw a straight analogy between the research and a single event, but Stalder suggests several compounding factors could have led people not to report a video, or to react more slowly than you might expect. The effect, crucially, turns on whether viewers perceive that they can help. It’s unclear whether users scrolling through a Facebook feed and seeing a video, likely from an unknown location and time frame, would recognize that they had an opportunity to take action. If they don’t know where the violence is occurring, it’s not clear which law enforcement agency to contact, and it’s not clear whether reporting violent content would result in someone being helped, or just in the content being taken down. It’s also plausible that many simply didn’t know a reporting system — two clicks from the News Feed — was available.

“The onus on what is essentially moderating live feeds has been pushed onto the users.”

Sarah T. Roberts, an assistant professor at UCLA’s Department of Information Studies who has studied content moderation, says Facebook’s contention that no one reported the shooting video is “totally plausible, in the sense that the onus on what is essentially moderating live feeds has been pushed onto the users.” The current moderation model, where viewers are placed on the front lines, is a questionable one, she says, and with video, Facebook may not have grappled adequately with its “plan to open up an incredibly powerful tool to anybody and everybody.”

Without more transparency in Facebook’s moderation process, it’s challenging to point to any single cause, or combination of causes — an unintuitive course of action, a lack of education on how to report videos — as the primary problem. Is the company’s failure part of an unresolvable issue with moderating a massive community by relying on users as the front line of enforcement? When it suggests that it does not receive flags quickly enough, Facebook seems to subtly endorse this view. “We are grateful to everyone who reported these videos and other offensive content to us, and to those who are helping us keep Facebook safe every day,” Facebook said in its post after the Cleveland shooting.

One update to Facebook’s statement, posted four days after the original, points to a possible break in the moderation reporting chain, and indicates that some users will report not just violence, but talk of it. The original version of the post had said there were no user reports on the first video uploaded by Stephens, in which he threatened violence. But that wasn’t true, the company said; it had “discovered additional reports from users, including reports on the first video.” A Facebook spokesperson declined to elaborate on how that may have changed the timeline, or when, exactly, those missed reports came in. Regardless, it raises questions about why the reports were missed, and whether flags from lower-profile incidents have been lost in the same way.

Again, the update ended with a qualification, striking a note of inevitability about the limits of moderation: “None of these reports, however, came in before this tragic event took place.”