
How the Buffalo shooting livestream went viral


Years after the Christchurch mosque attacks, what have platforms learned?


Illustration by Alex Castro / The Verge

When a gunman pulled into the parking lot at a grocery store in Buffalo, New York, on Saturday in a racist attack targeting a Black community, his camera was already rolling.

CNN reports that a livestream on Twitch recorded from the suspect’s point of view showed shoppers in the parking lot as the alleged shooter arrived, then followed him inside as he began a rampage that killed 10 people and injured three. Twitch, popular for gaming livestreams, removed the video and suspended the user “less than two minutes after the violence started,” according to Samantha Faught, the company’s head of communications for the Americas. Just 22 people saw the attack unfold in real time online, The Washington Post reports.

But millions saw the livestream footage after the fact. Copies and links to the reposted video proliferated online after the attack, spreading to major platforms like Twitter and Facebook as well as lesser-known sites like Streamable, where the video was viewed more than 3 million times, according to The New York Times.

This isn’t the first time perpetrators of mass shootings have broadcast their violence live online, with footage spreading afterward. In 2019, a gunman attacked mosques in Christchurch, New Zealand, livestreaming his killings on Facebook. The platform said it removed 1.5 million videos of the attack in the following 24 hours. Three years later, with footage from Buffalo reuploaded and reshared days after the deadly attack, platforms are still struggling to stem the tide of violent, racist, and antisemitic content created from the original.

Moderating livestreams is especially difficult because things unfold in real time, says Rasty Turek, CEO of Pex, a company that creates content identification tools. Turek, who spoke to The Verge following the Christchurch shootings, says that if Twitch was indeed able to disrupt the stream and take it down within two minutes of the violence beginning, that response would be “ridiculously fast.”

“That is not only not industry standard, that is an achievement that was unprecedented in comparison to a lot of other platforms like Facebook,” Turek says. Faught says Twitch removed the stream mid-broadcast but did not respond to questions about how long the alleged shooter was broadcasting before the violence began or how Twitch was initially alerted to the stream.

“The challenge is what happens with that video afterwards.”

Because livestreaming has become so widely accessible in recent years, Turek acknowledges that getting moderation response time down to zero is impossible, and perhaps not even the right way to think about the problem. What matters more is how platforms handle copies and reuploads of the harmful content.

“The challenge is not how many people watch the livestream,” he says. “The challenge is what happens with that video afterwards.” In the case of the livestream recording, it spread like a contagion: according to The New York Times, Facebook posts linking to the Streamable clip racked up more than 43,000 interactions as the posts lingered for more than nine hours.

Big tech companies have created a shared content detection system for situations like this. The Global Internet Forum to Counter Terrorism (GIFCT), created in 2017 by Facebook, Microsoft, Twitter, and YouTube, was formed with the goal of preventing the spread of terrorist content online. After the Christchurch attacks, the coalition said it would begin tracking far-right content and groups online, having previously focused mostly on Islamic extremism. Material related to the Buffalo shooting — like hashes of the video and of the manifesto the shooter allegedly posted online — was added to the GIFCT database, in theory allowing platforms to automatically catch and take down reposted content.
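In broad strokes, hash sharing means distributing compact fingerprints of known violating content rather than the files themselves, so each platform can check uploads against the list locally. The sketch below illustrates the general idea using the open-source imagehash library on a single video frame; the hash list, distance threshold, and helper function are illustrative assumptions for this demo, not GIFCT’s actual pipeline, which operates at far larger scale and uses formats such as PDQ for images and TMK+PQF for video.

```python
# Minimal sketch of hash-based reupload detection, in the spirit of a shared
# hash database. Assumptions (not GIFCT's real system): a local list of
# known-bad perceptual hashes and a demo Hamming-distance threshold.
from PIL import Image
import imagehash

# Hypothetical shared database of hashes of known violating frames.
KNOWN_BAD_HASHES = [
    imagehash.hex_to_hash("d1d1b1a38ccc2e47"),  # placeholder entry
]

# Perceptual hashes tolerate re-encoding and light edits; exact hashes
# (e.g., MD5) would miss any reupload that changes a single byte.
MATCH_THRESHOLD = 8  # max Hamming distance to count as a match (demo value)

def frame_matches_database(frame_path: str) -> bool:
    """Return True if an uploaded frame is close to a known-bad hash."""
    upload_hash = imagehash.phash(Image.open(frame_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(upload_hash - known <= MATCH_THRESHOLD
               for known in KNOWN_BAD_HASHES)

if __name__ == "__main__":
    if frame_matches_database("upload_frame.jpg"):
        print("Flag upload for review or automatic removal")
    else:
        print("No match in shared hash database")
```

A match here only flags content; whether a platform then removes it quickly, as Twitch did, or lets it linger for hours, as Streamable reportedly did, is a separate enforcement question.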

But even with GIFCT acting as a central response in moments of crisis, implementation remains a problem, Turek says. Though coordinated efforts are admirable, not every company participates, and its practices aren’t always carried out consistently.

“You have a lot of these smaller companies that essentially don’t have the resources [for content moderation] and don’t care,” Turek says. “They don’t have to.”

Twitch indicates it caught the stream fairly early — the Christchurch shooter was able to broadcast for 17 minutes on Facebook — and says it’s monitoring for restreams. But Streamable’s slow response means that by the time the reposted video was removed, millions had viewed the clip and a link to it was shared hundreds of times across Facebook and Twitter, according to The New York Times. Hopin, the company that owns Streamable, did not respond to The Verge’s request for comment.

Though the Streamable link was taken down, portions and screenshots of the recording remain easily accessible on other platforms like Facebook, TikTok, and Twitter, where it has been reuploaded. Those platforms have had to scramble to remove and suppress the reshared versions of the video.

Days after the shooting, portions of the video that users reuploaded to Twitter and TikTok remain

Content filmed by the Buffalo shooter has been removed from YouTube, says Jack Malon, a company spokesperson. Malon says the platform is also “prominently surfacing videos from authoritative sources in search and recommendations.” Search results on the platform return news segments and official press conferences, making it harder to find any reuploads that do slip through.

Twitter is “removing videos and media related to the incident,” says a company spokesperson who declined to be named due to safety concerns. TikTok did not respond to multiple requests for comment. But days after the shooting, portions of the video that users reuploaded to Twitter and TikTok remain.

Meta spokesperson Erica Sackin says multiple versions of the video and the suspect’s screed are being added to a database to help Facebook detect and remove content. Links to external platforms hosting the content are permanently blocked.

But well into the week, clips appearing to be from the livestream continued to circulate. On Monday afternoon, The Verge viewed a Facebook post with two clips from the alleged livestream, one showing the attacker driving into the parking lot talking to himself and another showing a person pointing a gun at someone inside a store as they screamed in terror. The gunman mutters an apology before moving on, and a caption overlaid on the clip suggests the victim was spared because they were white. Sackin confirmed the content violated Facebook’s policies, and the post was removed shortly after The Verge asked about it.

As it’s made its way across the web, the original clip has been cut and spliced, remixed, partially censored, and otherwise edited, and its widespread reach means it will likely never go away. 

Acknowledging this reality and figuring out how to move forward will be essential, says Maria Y. Rodriguez, an assistant professor at the University at Buffalo School of Social Work. Rodriguez, who studies social media and its effects on communities of color, says moderation and preserving free speech online take discipline, not just around Buffalo content but also in the day-to-day decisions platforms make.

“Platforms need some support in terms of regulation that can offer some parameters,” Rodriguez says. Standards around how platforms detect violent content and what moderation tools they use to surface harmful material are necessary, she says.

Certain practices on the part of platforms could minimize harm to the public, like sensitive content filters that give users the option to view potentially upsetting material or to simply scroll past, Rodriguez says. But hate crimes aren’t new, and similar attacks are likely to happen again. Moderation, if done effectively, could limit how far violent material travels — but what to do with the perpetrator is what has kept Rodriguez awake at night.

“What do we do about him and other people like him?” she says. “What do we do about the content creators?”