A trip down memory lane: part 2 of “Social threat modeling” — DRAFT!

Note: as Shireen Mitchell and others are discussing on Twitter, “social threat modeling” isn’t necessarily a great name.  Suggestions welcome!

Just as I was finishing The winds of change are in the air, Twitter helpfully provided an excellent opportunity to illustrate the value of applying threat modeling techniques to social problems.  VP of Trust & Safety Del Harvey’s Serving Healthy Conversation describes their latest attempt to improve the toxic environment on Twitter: use behavioral algorithms to detect the small number of users who “negatively impact the health of the conversation.”  What could possibly go wrong?

Before we get to that, let’s take a stroll down memory lane …

Leigh Honeywell’s Another Six Weeks: Muting vs. Blocking and the Wolf Whistles of the Internet on Model View Culture has a good summary of what went wrong when Twitter quietly changed blocking to work like muting back in December 2013:

In attempting to solve the problem of users being retaliated against for blocking, Twitter missed other ways that harassers operate on their service.  Retweeting, in particular, is often used by harassers to expose the target’s content to the friends of the harasser – potentially subjecting the target to a new wave of harassment. With the blocking functionality changed to work as “mute”, targets lost the ability to stop their harassers from retweeting them.

One reason computer security is so complex is that there are so many different threats that it’s easy for a change to have unexpected consequences: fixing one problem creates another, worse one.  Threat modeling, done well, gives you a structured way to analyze this.  Let’s start with the simplified threat model for harassment that Shireen Mitchell and I sketched out for our March 2017 SXSW talk (although we wound up not presenting it), and that Kelly Ireland and I then refined as part of my talk at TRANSform Tech later that month.

[Diagram: threat model of different ways of harassing people]

One of the ways harassers attack people is by flooding them with messages, and one way to do that is to get other people to help.  There are several ways to recruit that help, of course; retweeting is one of them.  Taking the analysis down to the next level:

[Diagram: get people to help by (1) sending them a link, (2) retweeting so followers see it, (3) ...]

Before Twitter’s changes, the target had an easy way to close off this avenue of attack: block the harasser, so they can’t see (and therefore can’t retweet) the tweet.

[Diagram: the same threat model, with a big circle labeled “Block harasser” covering the box “retweet so followers see it”]

When Twitter changed the blocking functionality, it reopened that avenue of harassment.
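To make the structure concrete, here’s a minimal sketch of this threat model as an attack tree, in Python.  All of the names and strings are invented for illustration; the point is just that a mitigation covers a branch of the tree, and removing the mitigation re-exposes it:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    """A node in the attack tree: an attacker goal plus the sub-attacks that achieve it."""
    goal: str
    children: list["Threat"] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

    def open_avenues(self):
        """Yield the leaf attacks that no mitigation currently covers."""
        if self.mitigations:
            return  # this branch is closed off
        if not self.children:
            yield self.goal
        for child in self.children:
            yield from child.open_avenues()

# The tree from the diagrams above.
retweet = Threat("retweet so followers see it",
                 mitigations=["target blocks the harasser"])
flood = Threat("flood the target with messages", children=[
    Threat("send messages directly"),
    Threat("get people to help", children=[
        Threat("send them a link"),
        retweet,
    ]),
])

print(list(flood.open_avenues()))
# ['send messages directly', 'send them a link']

# Model the 2013 change: blocking no longer stops the harasser from retweeting.
retweet.mitigations.clear()
print(list(flood.open_avenues()))
# ['send messages directly', 'send them a link', 'retweet so followers see it']
```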

Of course, the harasser has other options as well.   But as Leigh Honeywell points out:

When the unannounced change was noticed, users and commentators argued that a determined harasser could have always copied-and-pasted a target’s tweets, set up new accounts, or otherwise worked around the existing blocking functionality, and that the original blocking functionality represented a false sense of security. These arguments ignored the value of that functionality for dealing with unmotivated, low-grade and opportunistic harassers.

If Twitter had done a good job of threat modeling, they would have considered the variety of threats: everything from organized alt-right campaigns to the kind of opportunistic-but-relatively-lazy people who’ll retweet because it’s an easy way of poking somebody while showing off to their buddies, but who won’t devote a lot of effort to it.  Targets of harassment understand these differences, of course.  But as Leigh Honeywell says:

While I do not know what consultation Twitter did in deciding how this feature change would impact their users, the magnitude of the response suggests that it wasn’t enough. Engaging users who are directly impacted by harassment must be central to any platform’s efforts at combating abuse.

Another useful thing about applying threat modeling to harassment is that it encourages you to think from the targets’ points of view.  Still, “find out what the people using the software want” is software engineering 101 whether or not you’re doing threat modeling; Twitter has no excuse for not doing that here.

Why didn’t it happen?  Leigh Honeywell notes that the rapid and intense criticism of the change “emerged in large part from marginalized communities, who are disproportionately affected by online abuse.”  Hold that thought, as we flash forward a few years.

Hey wait a second, I’m noticing a pattern here!

Sarah Perez’s Twitter quickly kills a poorly thought out anti-abuse measure on TechCrunch goes into more detail.  Before Twitter’s changes, people got a notification when somebody added them to a list.  Harassers would create a list with an offensive name and use it to bombard targets with offensive notifications.  Removing notifications when people are added to lists got rid of this avenue of harassment – but overlooked the fact that there were other important reasons for the notifications.
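One thing threat modeling pushes you to write down is a mitigation’s costs as well as its benefits.  Here’s a hypothetical sketch of what that could look like for this change; the specific entries are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Mitigation:
    """A proposed fix, recording both the threats it closes and what it costs."""
    description: str
    threats_closed: list[str]      # harassment avenues this shuts down
    functionality_lost: list[str]  # legitimate uses it takes away

remove_list_notifications = Mitigation(
    description="stop notifying people when they're added to a list",
    threats_closed=[
        "bombard targets with notifications from offensively-named lists",
    ],
    functionality_lost=[
        "targets finding out that a harasser is tracking them via a list",
        "ordinary users learning they've been added to lists they care about",
    ],
)

# The review step that got skipped: if a mitigation takes functionality away,
# weigh that cost with the people who bear it before shipping.
for cost in remove_list_notifications.functionality_lost:
    print("Check with affected users:", cost)
```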

Once again, Twitter ignored the perspective of the targets of harassment.

Indeed.  Listen to black women!

And more generally: marginalized communities are disproportionately affected by online abuse.   Even by the low standards of the tech industry, Twitter’s diversity numbers are pretty bad.*  As former Twitter engineer Leslie Miley said in Charlie Warzel’s “A Honeypot For Assholes”: Inside Twitter’s 10-Year Failure To Stop Harassment:

The decision-makers were not people who got abuse and didn’t understand that it’s not about content, it’s about context. If Twitter had people in the room who’d been abused on the internet — meaning not just straight, white males — when they were creating the company, I can assure you the service would be different.

Just to be crystal clear: software engineering techniques do not substitute for having a diverse team, inclusive culture, and equitable power distribution and compensation.   Sure, “social threat modeling” can be useful even for relatively homogeneous product development teams, as long as they can work with (and listen to) other voices like social scientists and marginalized people in their community.   But the technical aspects aren’t enough by themselves.  As Shireen Mitchell says, “The solution is multifaceted. Those that chose just one path will fail. We need all of it.”

Instead of doing that, though, Twitter’s trying to throw technology at the problem.   Machine learning!  Artificial intelligence!   Behavioral algorithms!  In part 3 of the series, we’ll use “social threat modeling” to explore some of the reasons this won’t work.  A teaser:


* According to their 2017 report, only 14% of the people in technical roles are women, so there probably weren’t a lot of women of color involved on the product development team.  And only 2% of the company’s employees are African American.