“Social threat modeling”: the winds of change are in the air

Risk: Impact, Possibility, and Ease of Exploitation

Threat modeling is a structured approach to looking at security threats — and what can be done in response.  EFF’s Assessing Your Risks describes how people wanting to keep their data safe online can do threat modeling, starting with questions like “what do I want to protect?” and “who do I want to protect it from?”   Threat modeling is also an important software engineering technique, and it’s that aspect I’m going to focus on here.

When a company takes threat modeling seriously as part of an overall security development process, it can have a huge impact.  I saw this first-hand working with the Windows Security team back when I was at Microsoft Research in the early 2000s, and things have come a long way since then.  Today there are books, checklists, tutorials, tools, and even games about how to do it well (although there are still plenty of companies who prefer to ignore the risks).

Even for companies that do practice it, threat modeling today generally has a rather selective focus.  As Amanda Levendowski points out in Conflict Modeling:

In the security and privacy contexts, threat modeling developed as a predictable methodology to recognize and analyze technical shortcomings of software systems. And when compared with security and privacy threat modeling, systems have lagged in developing similarly consistent, robust approaches to online conflict.

Indeed.  OWASP’s Application Threat Modeling page discusses things like decomposing the application into components, identifying the data that needs to be protected, and focusing on trust boundaries between running processes.  It doesn’t have much at all to say about the people who are in the system.  And there’s similarly no mention of important categories of social and user harms like online conflict, harassment, computational propaganda, and influencing elections.
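To make the gap concrete, here’s a minimal sketch of what a threat-model entry might look like if it treated social harms as first-class threats alongside technical ones, scored the same way (risk as a function of impact and ease of exploitation, per the heading above).  The class and category names are my own illustrative assumptions, not part of any OWASP methodology or real library:

```python
# Hypothetical sketch: a threat-model entry that puts social threats
# (harassment, dogpiling, computational propaganda) on the same footing
# as the technical categories OWASP-style modeling focuses on.
# All names here are illustrative assumptions, not a real framework.

from dataclasses import dataclass


@dataclass
class Threat:
    description: str
    category: str              # e.g. "tampering" or "dogpiling"
    target: str                # an asset -- or a person
    impact: int                # 1 (low) .. 5 (severe)
    ease_of_exploitation: int  # 1 (hard) .. 5 (trivial)

    def risk_score(self) -> int:
        # A common simplification: risk grows with both impact
        # and how easy the attack is to carry out.
        return self.impact * self.ease_of_exploitation


threats = [
    Threat("Attacker tampers with a session cookie in transit",
           "tampering", "session data", impact=4, ease_of_exploitation=2),
    Threat("Coordinated dogpiling of a user via quote-posts",
           "dogpiling", "end user", impact=4, ease_of_exploitation=5),
]

# Ranked by risk, the social threat comes out on top (20 vs. 8) --
# even though a purely technical model would never list it at all.
ranked = sorted(threats, key=lambda t: t.risk_score(), reverse=True)
for t in ranked:
    print(t.category, t.risk_score())
```

The point of the sketch isn’t the arithmetic; it’s that once people are modeled as targets, social attacks show up in the same ranked list the engineers are already working from.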

Simplified threat model with different approaches to harassment

Several people are working on extending threat modeling and similar techniques to these social threats.  The work’s still at a relatively early stage, and there isn’t yet a good name for the overall approach — I’m calling it “social threat modeling” for now, although as Shireen Mitchell of Stop Online Violence Against Women points out, that’s only one aspect of it.   Whatever you call it, though, there’s clearly something interesting going on here.  A few examples*:

While this work is very promising, the most striking thing to me is how little attention is getting paid to this issue.  Twitter, Facebook, and Google spend zillions of dollars a year (and publish bunches of research papers) on AI; how much have they invested here?  And the red-hot blockchain world has a golden chance to get things right from early on, but (with the notable exception of Kaliya), very few of the people I talked to at the recent Internet Identity Workshop were even thinking about stuff like this.

Still, the winds of change are in the air.  The UN is discussing Facebook’s role in genocides, Amnesty International is reporting on Toxic Twitter, and Safiya Umoja Noble’s outstanding Algorithms of Oppression is getting excerpted in Time Magazine.   More and more people are seeing computer science as a social science, and coming around to a point that Zeynep Tufekci, Sally Applin, and others have been making for quite a while: software companies need to get sociologists involved in the process.  As Window Snyder (co-author of a 2004 book on threat modeling and now chief security officer at Fastly) said at the recent OurSA conference, “the industry changes when we change it.”

So I expect we’ll be seeing a lot more attention to this area over the next few months.  It’ll be interesting to see which companies get ahead of the curve.


* If there’s other work that should be in this list, please let me know!


Image credits:

Also published on Medium.
