The Impossible Task of Predicting a Shooting


By now, mass shooters have a distinct profile in the collective imagination and the eyes of the state: They are usually men, they are often white, nearly half are suicidal. An inordinate number have been the perpetrators of domestic abuse. Many purchase at least one weapon to carry out their plan. They leave behind horrific afterimages: Manifestos on 4chan and Tumblr, direct-message evidence of the days- or years-long preparation for their acts. In the weeks after a shooting, the grim records of an echo chamber almost always leak out. In retrospect, it all looks inevitable, like someone really should have known.


It’s this latter idea—that the telling details left in the aftermath of mass violence amount to a predictive pattern—that probably inspired the Trump administration to direct the Department of Defense “to work in partnership with local state and federal agencies, as well as social-media companies, to develop tools that can detect mass shooters before they strike,” as President Trump said at a press conference on Monday. (It was also a convenient attempt to sidestep the ways a recent shooter had invoked the administration’s language of “invasion” to justify his racist acts.) This premise, that predictive tools can prevent violence, is, of course, absurd, not least because it calls on companies that can’t even consistently ban Nazis to conjure some algorithmic magic that will, theoretically, identify the bad guys and deliver them directly into the hands of the FBI.

The problem this is supposed to address is pressing, and there are few politically viable solutions in sight: Since last Sunday, active shooters have claimed at least 31 lives. On Monday, Uruguay’s foreign minister released a statement warning the country’s citizens “to take precautions against growing indiscriminate violence, mostly for hate crimes, including racism and discrimination” when visiting the U.S. Uruguay is the third country to issue such a warning this month.

But such predictive policing is an easy sleight of hand in a country that has so far failed to kneecap the single simple material transaction—easy access to weapons—that fuels these shootings. In the vacuum where gun-control laws might be, the push for online surveillance has grown instead, winding the clock back from the moment a gun is purchased to the first time someone posts about buying a gun online. With enough cops and informants and infiltrators, this thinking goes, there would be fewer shooters; even better if those cops and informants and infiltrators could be abstracted into machines. None of which addresses the companies refusing to moderate hate speech on their platforms out of profit motive. Or the money that’s kept legislators from effectively regulating the tools used to carry out violent acts.

Already, given the lack of political will to stand up to these companies, a network of stop-gap surveillance experts has grown. In a riveting story on Wednesday, Cosmopolitan profiled a secretive woman, an ex-cop and former Marine, who works for the Anti-Defamation League and says she tracks thousands of men with the potential to commit hate crimes online. The exact ways in which she works aren’t detailed in the story: As the writer makes clear, the woman is rightfully concerned for her safety, and consented to having few specifics recorded in print. Her “savant”-like ability to discern a mass murderer from any other hateful guy online is described as a “primal, hairs-on-end feeling that she’d learned not to ignore.” In the opening of the story, she tips the FBI off to go in for a classic sting based on a series of increasingly dark posts, which, depending on your opinion of the FBI’s aggressive and deceitful tactics when it comes to handling terrorism suspects, could strike you as either righteous or a terrible idea.


Though the woman’s investigative approach in forums like 4chan and Gab isn’t the all-seeing dragnet proposed by the administration to trawl through above-ground networks, there’s something unsettling about a savant who spends all day talking to guys who post incel memes, sorts them according to how likely they are to kill four or more people, and brings them to the feds. Most obviously, it matters enormously that she’s always right. It’s also a weird workaround, relying on the heroics of an NGO investigator to point the state to potentially dangerous people.


Following the shootings in recent weeks, the sluggish will to pass gun control measures temporarily coalesced in familiar ways. Trump, as well as some Republican members of the Senate and House, has explored an expansion of “red flag” laws, which allow friends or family concerned about a person’s capacity for violence to petition to have their weapons removed—though such legislation already exists in at least 17 states. These laws are the lowest common denominator—themselves a form of prediction—requiring the justice system to decide who is careening toward a violent incident.


Yet the state has a poor track record when it comes to identifying the warning signs leading up to an attack. Devin Kelley, who killed 26 in a Texas church in 2017, was able to purchase a firearm because the Air Force failed to forward his disciplinary records to the FBI. In 2013, a military contractor didn’t mention that its employee, Aaron Alexis, had been the subject of numerous complaints, allowing him to maintain the clearance required to enter the Washington Navy Yard and kill 12. All of Boston’s newly installed CCTVs couldn’t find the Boston Bomber. Las Vegas is among the most data-happy cities in the country, with a “real-time crime center” and cameras covering nearly every inch of the Strip. Still, no one noticed the arsenal of weapons Stephen Paddock hoisted into his hotel room at Mandalay Bay. In the Wall Street Journal this week, national security experts pointed out the obvious: Even if you could collect and sift through all those posts, most agencies don’t have the capacity to respond to a digital threat.


Which isn’t to mention that predictive policing and dragnet social media collection, as practices, have consistently failed in serious ways: Data gets “dirty” with the biases of the people who punch it in; machine learning just isn’t that smart; cities from Los Angeles to New Orleans have found that their algorithm-mediated police programs are as racist as—or more racist than—regular cops. An entire agency full of investigators like the one Cosmo interviewed, even with the best of intentions, couldn’t catch every threat. Reforms intended to stop mass shootings in schools—zero-tolerance policies, security guards with guns—have followed a similar logic, criminalizing Black and Latinx kids as they purport to keep “the kids” safe. And in this country at least, the agencies that would be overseeing such surveillance programs are full of white nationalists posting violent memes themselves.

As hate crimes increase and more people go online to spout the vile opinions that fuel them, the lines between a criminal and a fucked-up person have gotten even blurrier. In the Cosmo story, the investigator notes that in the years she’s been doing her job, the men she tracks have gotten younger. The tenor of their postings has changed: She’s spending more time looking at young misogynists who consider themselves incels, or white nationalists who call themselves “alt-right.” Over the last decade, domestic terrorists have been more frequently radicalized by such haphazard and hateful networks. It’s terrifying stuff, and it’s further loosened the borders between who people are online and off.


In the last section of Rachel Monroe’s recent book, Savage Appetites, she recounts the story of Lindsay Souvannarath, who at twenty-two met a boyfriend online through a Columbine Tumblr hashtag. Over the course of two years, Souvannarath and her boyfriend planned a mass shooting over DM, an idea the boyfriend introduced, and which would rely on his access to his dad’s guns. The plan didn’t go as intended, but when the tipped-off cops approached the boyfriend, he shot himself in the head. Using social media posts and her conversations with the boyfriend, a Halifax judge made an example of Souvannarath, sentencing her to life in prison.

There are no comfortable answers in cases like these: Souvannarath was active in neo-Nazi forums as a teenager, and she certainly wasn’t innocent. But Monroe, at least, was somewhat conflicted by the judge’s conclusion: As she wrote, maybe “the judge was validating Lindsay’s virtual self, giving too much credence to the part of her that loudly proclaimed how frightening she was, how unlike other people.”


In Canada, she noted, 96 percent of people serving life sentences had been convicted of murder, and in prison, Souvannarath was retreating further into herself. Parsing situations like these isn’t as easy as flagging a post or running someone’s Tumblr profile through a filter, as tempting as it might be to believe senseless acts of violence can be broken down into their component warning signs. And in Canada, notably, it’s harder to imagine Souvannarath passing a background check, finding a third-party character witness, and getting her own gun. The ability to quickly and unceremoniously secure a weapon is among the most precise predictors of all.



Even if predictive software and data-mining tools were capable of semi-accurately detecting threats, the solution would almost certainly mean taking action before someone commits a crime. Is the solution to the gun violence epidemic really “just recreate Minority Report”?