Threat modeling is a structured approach to looking at security threats — and what can be done in response. EFF’s Assessing Your Risks describes how people wanting to keep their data safe online can do threat modeling, starting with questions like “what do I want to protect?” and “who do I want to protect it from?” Threat modeling is also an important software engineering technique, and it’s that aspect I’m going to focus on here.
When a company takes threat modeling seriously as part of an overall security development process, it can have a huge impact. I saw this first-hand working with the Windows Security team back when I was at Microsoft Research in the early 2000s, and things have come a long way since then. Today there are books, checklists, tutorials, tools, and even games about how to do it well (although there are still plenty of companies who prefer to ignore the risks).
Even for companies that do practice it, threat modeling today generally has a rather selective focus. As Amanda Levendowski points out in Conflict Modeling:
In the security and privacy contexts, threat modeling developed as a predictable methodology to recognize and analyze technical shortcomings of software systems. And when compared with security and privacy threat modeling, systems have lagged in developing similarly consistent, robust approaches to online conflict.
Indeed. OWASP’s Application Threat Modeling page discusses things like decomposing the application into components, identifying the data that needs to be protected, and focusing on trust boundaries between running processes. It doesn’t have much at all to say about the people who are in the system. And there’s similarly no mention of important categories of other social and user harms like online conflict, harassment, computational propaganda, and influencing elections.
*Simplified threat model with different approaches to harassment*
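To make the gap concrete: the same machinery OWASP describes could, in principle, treat social threats as first-class citizens alongside technical ones. Here's a minimal sketch of that idea, which ranks threats with DREAD-style scores (Damage, Reproducibility, Exploitability, Affected users, Discoverability). The social category names and the example entries are my own illustrative assumptions, not part of any standard taxonomy:

```python
from dataclasses import dataclass

# STRIDE covers the classic technical threat categories; the "social"
# categories here are illustrative additions, not from OWASP.
TECHNICAL = {"spoofing", "tampering", "repudiation", "info_disclosure",
             "denial_of_service", "elevation_of_privilege"}
SOCIAL = {"harassment", "brigading", "computational_propaganda"}

@dataclass
class Threat:
    description: str
    category: str
    # DREAD-style ratings, each 1-10: (Damage, Reproducibility,
    # Exploitability, Affected users, Discoverability)
    dread: tuple

    def risk(self) -> float:
        # A DREAD rank is the mean of the five ratings
        return sum(self.dread) / len(self.dread)

threats = [
    Threat("SQL injection in login form", "tampering", (8, 9, 7, 9, 8)),
    Threat("Dogpiling via shared block-evasion lists", "harassment",
           (7, 9, 9, 6, 9)),
]

# Rank all threats -- social and technical -- on the same scale,
# highest risk first
for t in sorted(threats, key=Threat.risk, reverse=True):
    print(f"{t.risk():.1f}  [{t.category}] {t.description}")
```

The point of the sketch isn't the scoring details; it's that once harassment sits in the same data structure as SQL injection, it gets triaged, assigned, and mitigated by the same process instead of being someone else's problem.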
Several people are working on extending threat modeling or similar techniques to these social threats. The work’s still at a relatively early stage, and there isn’t yet a good name for this overall approach — I’m calling it “social threat modeling” for now, but as Shireen Mitchell of Stop Online Violence Against Women points out, that’s only one aspect of it. Whatever you call it, though, there’s clearly something interesting going on here. A few examples*:
- A threat model approach to attacks and countermeasures in on-line social networks, by Borja Sanz et al. (in Proceedings of the 11th Reunión Española de Criptografía y Seguridad de la Información, RECSI), focuses on identifying attacks against users of online social networks and possible countermeasures to mitigate the risks.
- Mozilla’s Coral Project applies a threat modeling perspective to online communities. Caroline Sinders of the Coral project briefly talks about threat modeling’s application to harassment in SXSW canceled panels: Here is what happened, from 2016.
- Amanda Levendowski describes Conflict Modeling as “a predictable framework to structure thinking around online conflict by suggesting a methodology for conflict modeling, defining a taxonomy of conflict—safety, comfort, usability, legal, privacy, and transparency (SCULPT)—and examining common mitigation techniques adopted by systems to reduce the risk of certain conflicts.” A draft was presented at the 2017 Privacy Law Scholars Conference; as far as I know, the only public information is on her web site.
- Shireen Mitchell and I suggested applying threat modeling techniques to online harassment in our 2017 SXSW talk on Diversity-friendly Software. I went into a little more detail in Transforming Tech with Diversity-Friendly Software (the slides have a short example), and worked with the San Francisco-based startup O.school on applying this approach to their pleasure education platform; Shireen is working with Kaliya Young on applying a generalized threat modeling approach to social and user harms in the self-sovereign ID world.
While this work is very promising, the most striking thing to me is how little attention is getting paid to this issue. Twitter, Facebook, and Google spend zillions of dollars a year (and publish bunches of research papers) on AI; how much have they invested here? And the red-hot blockchain world has a golden chance to get things right from early on, but (with the notable exception of Kaliya), very few of the people I talked to at the recent Internet Identity Workshop were even thinking about stuff like this.
Still, change is in the air. The UN is discussing Facebook’s role in genocides, Amnesty International is reporting on Toxic Twitter, and Safiya Umoja Noble’s outstanding Algorithms of Oppression is getting excerpted in Time Magazine. More and more people are seeing computer science as a social science, and coming around to a point that Zeynep Tufekci, Sally Applin, and others have been making for quite a while: software companies need to get sociologists involved in the process. As Window Snyder (co-author of a 2004 book on threat modeling and now chief security officer at Fastly) said at the recent OurSA conference, “the industry changes when we change it.”
So I expect we’ll be seeing a lot more attention to this area over the next few months. It’ll be interesting to see which companies get ahead of the curve.
* If there’s other work that should be in this list, please let me know!
- Microsoft DREAD risk-ranking model from OWASP’s Application Threat Modeling page
- Simplified threat model for harassment from Transforming Tech with Diversity-friendly software
Also published on Medium.