Can Two Volunteers Really Put a Dent in Twitter Harassment?


The company has granted moderating superpowers to a tiny feminist nonprofit. But shouldn't it be taking more drastic steps to protect its users?

Is the Internet a safe space for women?

It’s a huge question—yet, more and more, the answer seems to be a clear no. Last month, online abusers drove female video game critics and developers out of their homes with violent threats. One critic’s public event had to be canceled because of a threatened mass shooting. And a new Pew study put the harassment in statistically sharper terms: 25 percent of young women have been sexually harassed online, and 26 percent have been stalked.

This kind of online stalking and harassment isn’t new itself, but its recent visibility has accelerated the conversation about what we should be doing to protect women from abuse on the web. Much of this conversation has centered on Twitter, where so much recent abuse has happened. Because of Twitter’s open nature—any user can send a message to any other user, in public—it’s especially vulnerable to mass harassment and abuse.

On Thursday, the nonprofit organization Women, Action, and the Media—abbreviated WAM! or WAM—announced a new initiative with Twitter to try to make the service safer for women. That partnership has been widely greeted as a step forward, a sign that Twitter is finally taking harassment seriously. To my eye, though, it just seems like another stopgap, and further evidence that Twitter isn't yet willing to invest in protecting its most vulnerable users.

WAM has, in effect, been granted superpowers within Twitter’s moderating environment. After submitting an abuse report to Twitter, users can now also submit one to WAM. WAM will make sure the user’s claims are credible, then “escalate” the report in Twitter’s system, flagging it for immediate handling by the company’s moderators.

While WAM hopes to bring all expedited reports to a “speedy resolution” within 24 hours of receiving them, it cautions, “we’re not Twitter, and we can't make decisions for them.” It instead will advocate for users within the moderation system.

WAM will also be keeping track of whose reports get handled and whose don’t. Using its access to Twitter’s moderation system, WAM will be collecting data on how poorly gendered abuse is handled across the site.

WAM won’t have these superpowers forever, nor does it want them. Its executive director, Jaclyn Friedman, told me that she thought the program’s initial test period would run for only about a month.

Even a few weeks, she hopes, will give WAM a sense of how well or poorly abuse reports are handled across the site. It will also let the organization figure out what Twitter’s moderators consider acceptable.

“We’ll be escalating [harassment reports] even if they don’t fit Twitter’s exact abuse guidelines,” Friedman said. WAM intends to “cast a wider net” and see what Twitter’s moderators address.

WAM is a small nonprofit outside of Boston with a staff of two. Those two employees will be doing all the work: Friedman and WAM’s community manager, Mina Farzad, will personally read and vet every harassment report that WAM receives.

In other words, it’s understood to be a major improvement on the current situation that two people will now be devoting serious time and attention to Twitter’s harassment problem, even though they work for a small nonprofit that’s effectively donating that time and attention to Twitter, a for-profit, publicly traded corporation.

Twitter held its initial public offering a year ago. In its third-quarter results, issued in late October, it reported increased revenue but a net loss of $175 million.

Friedman communicated as much in her interview with me. “I’m frustrated,” she said. “For all the money they make off their users, not to be able to spend a little more to make this safer...” To her, it seems wrong. Though she said she was excited and encouraged by the project, she lamented in no uncertain terms that it had to be done at all.

“I don’t think we should have to do this work,” she said. “I think it’s a scandal that a tiny, under-resourced nonprofit with two staff members is having to do free labor for them.”

Twitter hasn't issued a press release on the initiative, but did send a short statement saying, “We’re always trying to improve the way we handle abuse issues, and WAM! is one of many organizations we work with around the world on best practices for user safety.”

But the problem of abuse online, says Friedman, goes well beyond Twitter, even if it’s already within Twitter’s power to stop it.

“The major systematic issue is Twitter—and they’re not alone in this, other social media companies do it too—doesn’t put nearly enough resources into moderation,” she said.

WAM’s initiative emerged from a meeting that Friedman attended in October, convened by a Twitter employee in charge of public policy. The meeting also involved representatives of the National Network to End Domestic Violence and the Cyber Civil Rights Initiative, Friedman said.

The meeting, she said, was encouraging. The Twitter employee she worked with “really gets the issues and she wants to make change.”

Friedman sees the problems facing the platform—and its reluctance to spend money—as endemic to most contemporary social networks.

“I see this as a free speech issue,” Friedman said. She said she knew some would see the work WAM does as “censorship,” but that a completely open and unmoderated platform imposes its own form of censorship. It effectively prevents women, especially queer women and women of color, from getting to speak on the service.

She recalled when “speaking in public” meant putting a literal soapbox down in a public park and advocating some point of view. But online and elsewhere, she said, “our public squares are now privately held.”

“These public spaces—or their equivalents, now—are controlled by private companies. That’s exciting in some ways, because of what they can do, but also it’s dangerous.”

(Image via Gil C/Shutterstock.com)