A Toronto man named Gregory Alan Elliott was arrested and charged two years ago with criminal harassment for threatening messages he allegedly sent to women via Twitter. His case finally began in a Toronto court yesterday. If convicted, he could face jail time.
It's a heartening development for women whose professional and personal lives are heavily taxed by the specter of online abuse. Traditional law enforcement channels (not always on the cutting edge of new social media technology) often don't take internet-based harassment seriously, because it seems to exist only in an intangible playground and because our culture's "boys will be boys"/"don't feed the trolls" apologia is so aggressive. The line where online attacks cross over into real-life danger is muddy and ill-defined for most people—even, quite often, victims themselves. Is this real? Am I being oversensitive? Am I installing an alarm system in my house because some 13-year-old boy in Ohio is bored? How many rape threats is too many? Should I get a dog? Should I tell people why I got a dog?
Elliott was reportedly arrested after a woman claimed he repeatedly contacted her on Twitter in a manner that caused her to feel afraid, and "continued doing so even after she asked him to stop." After she came forward, several other women spoke up and said that they had also been harassed by Elliott.
Canadian legal experts are unsure how this case will unfold, says CTV:
"People have been getting into conflicts with each other since the dawn of time," he says. "People are just using new technology to vent."
Facebook messages are used often in court, but to Zvulony's knowledge, this is the first time someone has been charged for behavior exclusively on Twitter.
The lawyer says our laws are fully equipped to deal with a case like this. The complainant must prove she was genuinely afraid, but the line between trolling and harassment online is blurry, and people post offensive comments on Twitter all the time.
As for the defence, "I think one angle might be that the complainant had no reasonable basis to fear for her safety and that she is over-reacting."
And regardless of the outcome in this case, Zvulony says at some point someone will be convicted and the medium will be Twitter. "Police will then have precedent and feel more comfortable enforcing it in the future."
Elliott's case echoes yesterday's conviction of two UK-based internet trolls who sent threatening messages to journalist and activist Caroline Criado-Perez. The pair pleaded guilty to sending "menacing" tweets "over a public communications network."
Regardless of your stance on prosecuting online hate speech and/or explicit threats, we, as a society, need to start accepting the fact that the internet is real. The internet is not a fantasyland without consequences—it's a real place of real joy and real danger where real flesh-and-blood people exchange real ideas and real threats. Figuring out how to regulate that (especially in America, a nation with freedom of speech built so deeply into its fundament) is going to be a long, messy road.
I wish, for one thing, that private companies would take some aggressive action to shut this shit down. Not because of complaints and bad PR, but because hate speech and anonymous harassment are objectively wrong. I know Twitter has a "report abuse" button in the works, but as of right now we're shit out of luck. And has anyone seen any actual improvement on YouTube after their big Google+ overhaul? I still receive incessant harassment from people whose identities are as opaque as ever. And reporting harassment on YouTube is such a baroque, teeth-grindingly frustrating process that I don't have any teeth left and I'm still being harassed. (Example: In order to report abuse on videos with moderated comments, I have to approve all of the abuse for public viewing so that I can then link back to it in YouTube's reporting form. Otherwise YouTube doesn't recognize that the comments exist and the users can't be blocked or reported. THANKS 4 THE SUPPORT, DICKBAGS. LYLAS.)
I also worry that private systems of reporting abuse could just as easily be used to target and silence the victims they're supposed to protect. As I've written before:
When trolls created a fake Facebook profile for me during the Great Rape Joke Kerfuffle of 2013 (mostly to express how much I hate rape and love donuts, because comedy), and I attempted to have it shut down, my genuine account wound up getting reported and suspended in retaliation. At most a minor inconvenience, but needless and irritating nonetheless. The thought of having my Twitter account potentially suspended by abusers in retaliation for fighting back against my own abuse is profoundly enraging.
This is an immensely complicated, multifaceted, ethically cloudy issue and I'm sure many a Master's thesis will be written on it in the coming years. But, for now, I think we can agree on some things. Direct, violent threats are not "disagreements." Harassment is not a "discussion." "I am going to come to your house and rape you" is not an "opinion." Whether it's through private enterprise or government intervention, that shit has to change.