Lessons from the Past: What Zoom can learn from Microsoft

The Zoom logo, with a black padlock

Summary: In the early 2000s, Microsoft faced challenges similar to the ones Zoom’s looking at today, and successfully turned things around. Some of the key lessons from Microsoft’s experiences include:

  • Think broadly about “trust”
  • Make trust the product teams’ responsibility
  • Fix your privacy practices and policies
  • Do threat modeling
  • Use the tools — and develop new ones
  • Learn from your experiences — and continue to update your processes
  • It’s a social problem, not just “technical”
  • It may take a while to address — but there’s a big potential upside

A tough time for Zoom

After a great start to the year, with usage soaring as people around the world stay home, the last few weeks have been a really tough time for Zoom.  The company has always focused on convenience and usability.  Now, they’re dealing with the consequences of not having paid much attention to security and privacy.

Facing an existential threat to its business, Zoom’s CEO Eric Yuan has announced that the company will be shutting down feature development for 90 days to focus on security and privacy. They’re also bringing in third-party security consultants, creating an advisory board, and engaging with security researchers.

Lessons from the past

Back in 2001, high-profile security problems (including one so severe the FBI issued a warning) had become an existential threat to Microsoft’s business.   In January of 2002, Bill Gates’ company-wide Trustworthy Computing memo announced that the company was shutting down Windows feature development to focus on security and privacy.

Michael Howard’s 10 years since the Bill Gates security memo: A personal journey is a great short summary of what Microsoft did as part of the effort — including bringing in third-party security consultants, creating an advisory board, and engaging with security researchers.

And it worked.  It took a few years, but Microsoft wound up turning things around.  By the mid-2000s, security and trustworthiness were becoming competitive advantages for the company.

I was at Microsoft Research at the time, and wound up pretty heavily involved in this work for several years — including helping plan the initial “security push”, researching attack surface reduction with Jeannette Wing and Michael, and modeling the effects of buffer overrun detection and mitigation technologies as part of a $200 million decision of whether or not to recompile the entire code base for a service pack release.  It was really stressful, an incredible sense of urgency crashing up against the complexities of evolving a culture that had been seen as core to the company’s successes.   At the same time, though, it was also a chance to work with some really great people and have an impact on the whole software industry.

Of course, it’s a different world today from the early 2000s.  Some of what we did looks downright quaint by today’s standards — for example all the time, energy, and money that went into flying consultants and advisors to Redmond, and flying employees to visit customers and conferences.  And Zoom’s very different than Microsoft was in quite a few ways, starting with being much more nimble.

Still, many of Microsoft’s experiences are extremely relevant.  Here are some of the lessons that might be especially useful to Zoom.

Think broadly about “trust”

“Trust online will not be achieved through security because that vision is founded on a misconstrued notion of trust” — Helen Nissenbaum,  Securing Trust Online: Wisdom or Oxymoron?, 2001

Zoom clearly understands this. In A Message to Our Users, Eric Yuan emphasized that “we want to do what it takes to maintain your trust”, and also talked about “shifting all our engineering resources to focus on our biggest trust, safety, and privacy issues” as well as committing to providing a transparency report.  That’s very encouraging!

That said, Zoom’s initial responses have primarily focused on the security side.  One clear example is their new CISO Advisory Board, made up of Chief Information Security Officers from large corporations.  Another is bringing in ex-Facebook Chief Security Officer Alex Stamos as an outside advisor, and Katie Moussouris of Luta Security to assess Zoom’s internal vulnerability handling processes.

“Trustworthiness is a much broader concept than security, and winning our customers’ trust involves more than just fixing bugs.” — Bill Gates, Trustworthy Computing, 2001

CISOs have a deep understanding of security, and Alex’s and Katie’s experiences and expertise are clearly relevant, so I can certainly see why Zoom started there.   Still, to make broad progress on trust, Zoom’s also likely to need:

  • consumer privacy experts, as well as an advisory board with representatives from groups that have deep knowledge of privacy and represent consumer interests (such as EPIC, Consumer Federation of America, Privacy International, and Privacy Rights Clearinghouse)
  • safety experts, as well as an advisory board with representatives from those who are most targeted online — including domestic violence survivors, reproductive justice advocates, trans and non-binary people, people in recovery, racial justice activists, and disabled people

Similarly, as Zoom refocuses its engineering, I really wonder how much of the training, code review, and testing they’re doing is informed by this broader perspective.  As Casey Fiesler says, user personas really need to include “user stalking their ex,”  “user who wants to traumatize vulnerable folks,” and “user who thinks it’s funny to show everyone their genitals”.   That clearly hasn’t been the case so far at Zoom.
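To make that concrete, here’s a minimal sketch of how abuse-case personas could be encoded so they systematically drive test planning. The personas, goals, and mitigations below are illustrative placeholders, not Zoom’s actual features or processes.

```python
# Hypothetical sketch: encoding abuse-case personas so they drive test
# planning alongside the usual "happy path" personas. The personas, goals,
# and mitigations are illustrative placeholders, not Zoom's actual process.

ABUSE_PERSONAS = [
    {"name": "stalker_ex", "goal": "locate or monitor a specific attendee"},
    {"name": "troll", "goal": "traumatize vulnerable participants"},
    {"name": "exhibitionist", "goal": "broadcast unwanted explicit content"},
]

MITIGATIONS = {
    "stalker_ex": ["attendee list hidden by default", "removed users can't rejoin"],
    "troll": ["host can mute/remove instantly", "waiting room on by default"],
    "exhibitionist": ["screen sharing restricted to host by default"],
}

def test_plan():
    """Yield one review item per persona/mitigation pair."""
    for persona in ABUSE_PERSONAS:
        for mitigation in MITIGATIONS[persona["name"]]:
            yield f"Verify '{mitigation}' holds against {persona['name']} ({persona['goal']})"

if __name__ == "__main__":
    for item in test_plan():
        print(item)
```

The point isn’t the code, of course; it’s that once abuse cases are written down in a structured form, they can feed training, code review checklists, and test suites the same way functional requirements do.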

Of course, you gotta start somewhere.   Zoom’s first steps are good ones.  Hopefully they’re already working on these other aspects as well.

Make trust the product teams’ responsibility

“Once Microsoft started using the Security Development Lifecycle, there was no stopping it.” — from Life in the Digital Crosshairs, 2014

Microsoft’s Security Development Lifecycle (SDL) continues to be one of the most significant contributions of the early-2000s work.  Zoom’s different enough from Microsoft that other security processes, or SDL variants for agile development and DevOps might be better starting points; but the same principles are likely to apply.  Zoom needs to find a way to operationalize security and other aspects of trustworthiness throughout their whole engineering organization, while evolving their culture to be more security-focused.

One of the most important principles of the SDL is to incorporate security into everybody’s role.  It’s important and valuable to have an empowered, well-resourced security team that focuses on security and privacy — and it’s equally important to have this expertise in the teams designing, developing, and testing the products.   As well as investing in training for the product teams, Microsoft wound up introducing new roles like Security Product Manager and Security Architect, and revising other job responsibilities to make the security focus explicit.

“Privacy must become integral to organizational priorities, project objectives, design processes, and planning operations.”  — Ann Cavoukian, Privacy by Design: the Seven Foundational Principles

The same is true for other aspects of trust.  Privacy and safety teams are useful; by themselves, they’re not enough.  Fortunately, as with the SDL, there are useful blueprints for the path forward — Privacy by Design is a great example.

Fix your privacy practices and policies

“This is a clear breach of GDPR” — Tara Taubman-Bassirian, in Zoom’s Security and Privacy Woes Violated GDPR, Expert Says

EPIC’s 2001 FTC complaint about Microsoft Passport’s privacy practices led to a 2002 consent decree which committed the company to cleaning up its privacy act.   Progress was imperfect, but substantial in many ways.   Today’s FTC ignored EPIC’s 2019 complaint against Zoom, but that doesn’t mean they’re off the hook.  In Europe, there’s the GDPR and regulators who don’t have a lot of patience with badly-behaving US companies. In the US, Zoom may well have problems with COPPA, FERPA, HIPAA, and potentially a bunch of state regulations as well.

Even after some improvements, Zoom’s privacy policy still has a lot of problems — including minimal restrictions on sharing users’ data with third parties.   It doesn’t have to be this way.  One very positive way in which Zoom today is similar to Microsoft in the early 2000s is that their business model primarily revolves around people paying for software — as opposed to advertising-based companies like Facebook and Google who rely on exploiting their users’ personal data.

Zoom really needs to fix their privacy policy — quite frankly, they shouldn’t expect any credibility in the privacy community until they do.   But that’s just the first step.   Getting privacy experts involved in the design and review of their products, auditing their software to learn what other unexpected data sharing is going on (and introducing tools and processes to prevent future problems), and applying the principles of Privacy by Design throughout their engineering process are also important.

Do threat modeling

“The risks, the misuse, we never thought about that.”

— Eric Yuan, in Zoom Rushes to Improve Privacy for Consumers Flooding Its Service

Threat modeling is a structured approach to looking at security threats — and what can be done in response.  As well as identifying specific threats that need to be prevented or mitigated, threat modeling also reminds developers and testers to keep security in mind, and forces the organization to document a system’s security properties — which in turn helps with tools, code review, and testing.

Microsoft’s early-2000s work on threat modeling, including Window Snyder and Frank Swiderski’s book and the broad use of the STRIDE model internally, had a significant impact not just on the company but on the broader industry.   Threat modeling’s come a long way since then, with well-developed techniques and methodologies as well as excellent resources like MITRE’s ATT&CK.
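As a concrete illustration, here’s a minimal sketch of what a STRIDE-style enumeration looks like for one component of a videoconferencing system. The component and the threats are made-up examples to show the structure, not an actual Zoom threat model.

```python
# Illustrative STRIDE-style enumeration for one component (the meeting-join
# flow). Each category gets at least one candidate threat, and each threat
# needs a documented mitigation. Examples are hypothetical.

STRIDE = {
    "Spoofing": "attacker joins a meeting impersonating an invited attendee",
    "Tampering": "attacker modifies meeting settings in transit",
    "Repudiation": "disruptive attendee denies having shared abusive content",
    "Information disclosure": "meeting IDs leak, revealing who attends which meetings",
    "Denial of service": "attacker floods the join endpoint, locking out attendees",
    "Elevation of privilege": "attendee gains host controls without authorization",
}

def report(component: str) -> None:
    """Print one line per STRIDE category for the given component."""
    print(f"Threat model for: {component}")
    for category, example in STRIDE.items():
        print(f"  {category}: {example}")

report("meeting-join flow")
```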

Still, many companies don’t do threat modeling very well, especially when it comes to social threats.   Facebook’s threat modeling, for example, didn’t pay attention to easy-to-predict threats such as companies like Cambridge Analytica lying to them, fake news sites trying to get more views by manipulating trending topics, intelligence agencies trying to influence elections in other countries, or communications channels being used to foment genocide.

Zoombombing is a great example of a high-profile problem that could have been anticipated and significantly reduced by even basic social threat modeling techniques.  The weakness of Zoom’s muting, blocking, and moderation support (leaving attendees open to bullying, hate speech, and harassment) is another major area where Zoom hasn’t paid attention to the threats.   And it’s worth noting that these aren’t just problems in the consumer and education worlds; they’re issues in corporate environments as well.

So hopefully, as Zoom focuses on threat modeling, they’ll get input from Window, Casey, Shireen Mitchell, Kaliya Young, Danielle Citron, Leigh Honeywell, and others who focus on the social aspects — as well as from content moderation experts like Sarah Roberts, who have a lot of experience with how to mitigate some of these threats.

Use the tools — and develop new ones

“Consider tools throughout the process, beginning in the planning phase” — me, in Steering the Pyramids: Tools, Technology, and Process in Engineering at Microsoft, ICSM 2002

Tools aren’t magic bullets — some of my most valuable contributions in the Microsoft security efforts were times I said “tools aren’t going to help with this particular problem.”   Still, tools can make a big difference on some kinds of problems.  As well as adopting commercially-available and research tools, Microsoft invested heavily in creating its own — static analysis tools (the focus of Righting Software, from 2004, which discusses the PREfix and PREfast tools I architected as well as SLAM, Vault, and ESP), as well as attack surface estimators, vulnerability scanners, and so much more.
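For a feel of what static analysis does — at a tiny fraction of the sophistication of tools like PREfix and PREfast — here’s a toy checker: it walks a parsed syntax tree and flags calls to functions that are easy to misuse. The blocklist is illustrative.

```python
# Toy static-analysis sketch: parse source into an AST and flag calls to
# easy-to-misuse functions. Real tools do interprocedural and path-sensitive
# analysis; this only makes the basic idea concrete.

import ast

RISKY_CALLS = {"eval", "exec", "pickle.loads"}  # illustrative blocklist

def flag_risky_calls(source: str, filename: str = "<input>"):
    findings = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            # Resolve simple names and one-level dotted names like pickle.loads
            if isinstance(node.func, ast.Name):
                name = node.func.id
            elif isinstance(node.func, ast.Attribute) and isinstance(node.func.value, ast.Name):
                name = f"{node.func.value.id}.{node.func.attr}"
            else:
                continue
            if name in RISKY_CALLS:
                findings.append((filename, node.lineno, name))
    return findings

print(flag_risky_calls("import pickle\ndata = pickle.loads(blob)\n"))
# [('<input>', 2, 'pickle.loads')]
```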

Zoom’s undisclosed, and apparently unintentional, data-sharing with Facebook is a good example of an area where tools can be helpful: analyzing dependencies’ security behavior could have identified the privacy-invasive behavior of Facebook’s iOS SDK.  Zoom’s recent, and welcome, announcement that users will soon be able to customize which data center regions their account can use for its real-time meeting traffic is another: information flow analyses, and better use of chaos testing and run-time monitoring tools, can help avoid the kind of unexpected behavior that led to meeting information getting routed through China a couple of months ago.
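Here’s a hedged sketch of what the simplest form of dependency auditing might look like: compare the third-party SDKs an app bundles against a maintained list of libraries known to share data. The blocklist entries below are hypothetical placeholders, not real audit data.

```python
# Hedged dependency-audit sketch: warn on any bundled SDK that appears on a
# blocklist of data-sharing libraries. Entries are illustrative placeholders.

KNOWN_TRACKERS = {
    "FBSDKCoreKit": "sends device and app-open events to Facebook",
    "SomeAnalyticsKit": "uploads usage telemetry to a third party",
}

def audit(bundled_sdks: list[str]) -> list[str]:
    """Return a warning for every bundled SDK that appears on the blocklist."""
    return [
        f"{sdk}: {KNOWN_TRACKERS[sdk]} -- review before shipping"
        for sdk in bundled_sdks
        if sdk in KNOWN_TRACKERS
    ]

for warning in audit(["FBSDKCoreKit", "WebRTC", "OpenSSL"]):
    print(warning)
```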

Zoom isn’t anywhere near as large as companies like Google, Facebook, and Amazon that have followed Microsoft’s playbook of building large internal tools teams that mix research with developing practical tools.  So they’ll need to think about where off-the-shelf tools can help, where they can get creative by applying technologies like Jepsen and Alloy, and where they’ll need to move the state of the art forward.

Tools are often deployed in a tactical way, helping to address particular problems.  Especially in a situation like this, it’s also worth thinking about tool usage strategically, for example looking at how tools can contribute to process and cultural change.

Learn from your experiences — and continue to update your processes

“Controls are created to prevent hazards. Accidents occur when the controls are ineffective.” — Nancy Leveson, in How To Learn More From Accidents

Microsoft’s products and processes evolved significantly as part of the focus on Trustworthy Computing.  In many cases the changes were driven by analysis of security vulnerabilities.  Any vulnerability is a chance to ask questions like “Why weren’t the controls — testing, code review, pen testing — that should have prevented this hazard from shipping effective?” Very often the answers point to training or process gaps, or identify patterns that highlight where other vulnerabilities may be lurking.

Root cause analysis was one popular technique at Microsoft.  The state of the art has progressed significantly over the last 20 years, so other approaches may make more sense for Zoom.  How To Learn More From Accidents is an excellent intro to Leveson’s Causal Analysis Using System Theory (CAST) approach; her 2019 CAST Handbook and Engineering a Safer World: Systems Thinking Applied to Safety, from 2012, go into a lot more detail.  No matter what approach Zoom winds up using, though, there’s a lot of leverage here.

It’s also useful to apply this kind of thinking to the system level.   Zoom has had indications for a while that there were some big security and privacy problems.  Why didn’t something get done about it before it hit the front pages and the FBI was issuing warnings?   Maybe (as with Microsoft back in the day) some people had been trying to get the word out that there was a big problem but they didn’t get heard.   Maybe executives and the board understood the risks, made a rational decision to focus on other priorities, but didn’t realize quickly enough that the risks had changed significantly as a result of the pandemic.

Whatever the explanation, it almost certainly points to opportunities for improvement going forward.

It’s a social problem, not just “technical”

“These are racist cyber attacks; not innocent party crashers just stopping by to say hey.” — Dr. Dennis Johnson, in Demand that Zoom immediately create a solution to protect its users from racist cyber attacks!

Software engineers like to think of security and privacy as purely “technical” problems.   The reality, though, is that software is used by people and organizations; you can’t separate the technology from the social aspects.  Alas, as Zeynep Tufekci,  Sally Applin, and others continue to point out, most software companies have a long track record of not getting anthropologists, sociologists and other social scientists involved in the process.

All of Microsoft’s work I’ve discussed here had a strong social focus, for example the cultural, organizational, and interpersonal aspects of the SDL and threat modeling, and the “analysis is necessary but by no means sufficient” attitude towards tools.

“Applying social science perspectives to the field of computer security not only helps explain current limitations, and highlight an emerging trend, but also points the way towards a radical rethinking of how to make progress on this vital issue.” — Sarah Blankinship, Tomasz Ostwald, and me in Computer Science as a Social Science: Applications to Computer Security, 2009

Another outstanding example of the social perspective is the work that people like Window Snyder, Kymberlee Price, Katie Moussouris, Terri Forslof, Celene Richenburg, and Sarah did to change the company’s attitude about working with the security community and move towards an ecosystem approach.  In an excellent Facebook discussion from a couple of years ago, Steve Lipner commented that he and other experienced security people at the company originally resisted this outreach until Window and others changed their minds.

Microsoft’s early-2000s work was also heavily influenced by people like Jeannette Wing, Helen Nissenbaum, Laurie Williams, Andreas Zeller, and Andrea Matwyshyn, whose work was infused with social perspectives.   Today, Microsoft is reportedly the world’s second-largest employer of anthropologists.

Of course, Zoom won’t necessarily use the same tactics as Microsoft.  For example:

  • Microsoft’s outreach strategy was very in-person focused, including conferences and parties.  As the conference circuit moves online, Zoom’s got a great opportunity to build on the kudos they’ve gotten for their initial engagement with security researchers.
  • Zoom doesn’t have anything equivalent to Microsoft Research, but there are plenty of other ways to engage with academia.
  • Some of the most important disciplines for Zoom to engage with, like intersectional internet studies and content moderation, didn’t even exist in the early 2000s.

The calls by civil rights groups like Color Of Change, the National LGBTQ Task Force, and the National Hispanic Media Coalition for Zoom to release a plan to combat racial harassment also highlight the need for expertise in diversity, equity, and inclusion.   Perspectives from people like Safiya Noble, Ruha Benjamin, Shireen Mitchell, André Brock, and others who focus on the intersection of race and technology are especially important here.

As well as bringing experts in as consultants, Zoom also needs to build capacity by hiring them throughout the organization — including at the executive level as well as senior product and engineering roles.

It may take a while to address — but there’s a big potential upside

“We needed to change some security settings, like password enforcement on day one. But we learned a lesson, we quickly made a change.”  — Eric Yuan, in Zoom’s CEO Wants You to Trust the Company Again

Zoom’s getting a lot of justifiable praise for their fast and forceful reaction: quickly releasing several important fixes, engaging with security researchers, freezing feature development, communicating regularly and candidly.  That said, they’re still at a very early stage.  They’re just starting to think through what security, privacy, safety, and trust mean for them.  Most likely, they’re still trying to fully understand the technical debt — and ethical debt — they’ve taken on by ignoring it for so many years.

Zoom will probably continue to make progress much faster than Microsoft did — their code base is a lot smaller, their development cycles are a lot faster, and they don’t have the same legacy problems.  Still, it’s instructive to look at Microsoft’s timeline:

  • In September 2001 (after Code Red, Nimda, and Gartner’s recommendation that companies consider Apache rather than Microsoft’s IIS), Microsoft knew they had a problem.
  • By early 2002, Bill Gates’ memo and the Windows security push signaled the start of significant sustained investment.
  • Windows Server 2003 included some significant improvements, but in the summer of 2003 the Blaster worm led to another major mobilization with the Sledgehammer task force trying to “squash the bugs.”
  • Things only really turned the corner on a sustained basis with the introduction of the Security Development Lifecycle (SDL) in July 2004 and the release of Windows XP SP2 later that year.

At the end of the day, though, Microsoft wound up in a much stronger position than they had been in before.  By the time I was GM of Competitive Strategy in 2006-7, security and customer perceptions of trustworthiness were starting to become significant competitive advantages.  Today, Microsoft’s reputation for security is one of the reasons that school districts like New York are replacing Zoom with Microsoft Teams.

So if Zoom continues to apply the lessons they’ve learned, and sustains their new focus on trust, there’s a big upside.

Zoom already has a remarkably usable, highly scalable, and very reliable product.  If they also become leaders in security, privacy, and other aspects of trust, they’ll be in a great position.

 


Thanks to Steve, David, George, Dragos, Matt, Kristen, Jeff, Pat, Michael, Jason, Deborah, and everybody else for feedback and discussions on earlier versions of this post.

12 Lessons from WT:Social

Lessons (so far) from WT:Social

Growth has tapered off significantly on WT:Social, the news focused social network from Jimmy Wales of Wikipedia fame.  Usability remains a huge problem.  There’s a lot of spam and other noise.  It’s still early days, and things may well improve over time, but it’s hard to be optimistic.

So now’s a good time to take a step back and look at what can be learned from the experience so far.  Over the years, I’ve done posts like this with Mastodon, Diaspora, Google+, and other social networks.  This time, I’m working on news focused social network software myself, so some of these lessons are likely to be especially relevant for me.

To start with, here’s a few I discussed in my 2017  Mastodon post (where I noted “we’ve seen them before with Dreamwidth, Diaspora, StatusNet, Gnu Social, Pinboard, Ello, and others”)  that are worth reiterating once again:

  • A lot of people want an alternative to corporate-owned ad-funded social networks.
  • A small team of developers can get something usable out quickly.
  • There’s interest across the world, not just in the U.S.

Moving on to some new lessons ….

  1. People like the idea of working together to help fight disinformation.   Perhaps the most encouraging takeaway from the WT:Social experience so far is that a lot of people understand that disinformation is a problem — and want to help do something about it.  True, WT:Social’s “everybody can edit anything” approach doesn’t work well; no surprises there.  Still, it’s worth exploring other approaches involving more nuanced collaboration between paid professionals and “the crowd” (with training available, and perhaps some kind of Slashdot-like meta-moderation), all assisted by solid tools. [1]
  2. There’s a good opportunity for a “better reddit”.  Jimmy positioned WT:Social as a Facebook alternative, but as I discussed in Why is an “intellectual dark web” site at the top of my feed?, it’s currently more like reddit … and that’s not a bad thing!  On many topics, reddit’s links are mediocre (or worse) and provide very limited perspectives.  reddit discussions are often toxic. And while there are alternatives to Facebook and Twitter with some traction (MeWe and Mastodon both have millions of users), none of them have the same news focus as reddit.
  3. Design and usability are key.  People understand that a new site won’t be as polished as reddit or Facebook, but if it’s too confusing they generally won’t invite their friends [2] — and are likely to stop coming back.  WT:Social would have been better off starting with less functionality (did they really need hashtags right off the bat?) and putting more attention on design and usability.
  4. Help people have good initial experiences.   My first impression of WT:Social included getting asked for money, seeing off-topic links that happened to be at the top of the default subwikis at the time, and then getting spam in my email.  Hooray!  And pity the new user who found stuff confusing: for quite a while, there was no easy way of asking for help or finding the FAQ. Fortunately, it’s not hard to improve “first use experiences” through techniques like better design, simple onboarding screens, and easy access to resources and support. [3]
  5. Focus on accessibility up front or it will be a problem. WT:Social is a horrible experience using a screen reader, and has many blatant accessibility bugs like missing alt-text and low color contrast that free site analyzers like Axe and WAVE can detect (see the sketch after this list). Many other social networks don’t do a great job here either, so there’s a big opportunity for a new offering to distinguish itself and serve a large audience of people whose needs aren’t being met today.
  6. Focus on harassment up front or it will be a problem.  WT:Social is filled with mechanisms that are optimized for harassers, doesn’t allow muting or blocking, and doesn’t even make it easy to find the code of conduct or anti-harassment policy.  Similarly, Wikipedia, Diaspora, Google+, Mastodon, and Twitter didn’t pay attention to harassment up-front, with the expected results.   Y’know, it doesn’t have to be this way.
  7. Think about how different cultural norms and legal systems will interact, including difficult areas relating to content that different people view as art, “porn”, and/or “NSFW”.  There are opportunities for innovation here: Mastodon worked through some similar issues, and came up with interesting techniques like tailorable content warnings and a mechanism to deal with images that are legal in some geographies but not others.
  8. Design for everybody, not just the kind of people the founder usually interacts with.  Lessons #3-7 are all examples of this (and I talked about another one, the term “subwiki”, in a previous post).[4]  I’ve made the same mistake myself.  Fortunately, it’s not hard to do better: work with a broad range of people, including those who are marginalized in different ways than you, from the very beginning — and listen to their ideas, suggestions, and feedback.
  9. Consider building on an existing discussion platform instead of rolling your own.  WT:Social’s initial discussion mechanism was pretty basic, and even after a couple of months of enhancements the lack of notifications can make it hard to have a good discussion there.  Does it make sense to leverage existing open-source commenting platforms like Coral Project or forum software like Discourse, NodeBB, or Vanilla Forums?
  10. Consider leveraging open standards based on decentralized identity and verifiable credentials.   Decentralized architectures are more complex but also a much better match for the real world.  Credit for this one goes to Kaliya Young (aka IdentityWoman) on Twitter, where she also provided some links to reading material.
  11. There’s a big opportunity for anti-oppressive social networks in general.  Today’s large social networks welcome racists, misogynists, alt-righters, and other bigots; Facebook goes even farther, siding with authoritarians and promoting genocide.  Most emerging alternatives either appeal even more blatantly to fascists (gab.ai) or strive for “neutrality” (WT:Social, MeWe, Minds). [5]   Dreamwidth continues to be a shining exception, and Mastodon’s early positioning as “Twitter without Nazis” is another (and there’s a lot to be learned from its challenges).  Still, it’s clear that there’s a very large under-served market here.
  12. It’s time for a different approach. What would a news focused social media site look like if it were grounded in design justice and built on best practices and research into anti-harassment, content moderation, online extremism, and amplifying marginalized voices?  It’s hard to know, because there aren’t any high-profile examples of this.  Seems like an opportunity to me!
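As promised in lesson #5, here’s a minimal sketch of the kind of automated accessibility check involved: flag images with missing or empty alt text. Real analyzers like Axe and WAVE cover far more (contrast, landmarks, ARIA roles); this just makes the idea concrete.

```python
# Minimal alt-text checker: flag <img> tags with missing or empty alt
# attributes. Requires beautifulsoup4 (pip install beautifulsoup4).

from bs4 import BeautifulSoup

def missing_alt_text(html: str) -> list[str]:
    """Return the <img> tags in the HTML that lack usable alt text."""
    soup = BeautifulSoup(html, "html.parser")
    return [str(img) for img in soup.find_all("img") if not img.get("alt")]

sample = '<img src="logo.png"><img src="chart.png" alt="Monthly signups chart">'
print(missing_alt_text(sample))  # ['<img src="logo.png"/>']
```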

One of the things that really struck me as I was working on this list is how WT:Social has repeated a lot of mistakes other social networks (including Wikipedia) have made.   But even though WT:Social hasn’t taken advantage of its opportunities to learn from other social networks, other social networks can learn from WT:Social.

I’m sure there are other good lessons as well – or aspects of these I’ve overlooked.  If you have thoughts, please share them!

 


Thanks to Deborah, Eve, and everybody else who gave feedback on earlier versions of this post!


[1] As I was working on this post, I stumbled on Amy X Zhang’s thesis, which has some intriguing ideas and prototypes on the tools front.  Starbird et al.’s paper on disinformation as collaborative work is also relevant.  How to apply collaborative approaches to countering disinformation?

[2]  The responses to Jimmy’s recent Why Inviting Friends Is Important highlight this.

[3] Indeed, WT:Social has recently made some progress here, thanks to Linda Blanchard’s excellent work on the Beginner’s Guide subwiki.

[4] Another example: the way new users automatically follow Jimmy Wales.  Jimmy’s said that this is done to make it more convenient for him to broadcast messages to everybody on the site … but there are plenty of other ways to accomplish this.  I get it that Jimmy wants to share the news when Rush’s drummer dies or a Turkish court rules in favor of Wikipedia, but it’s a classic case of assuming that users who haven’t expressed an interest in classic rock or Wikipedia share his interests.

[5] I talked at length about “neutrality” in WT:Social will have to pick a side.   Jimmy’s comment in the  discussion on WT:Social is illuminating: he thinks people are “yearning” for technology that “fosters the kind of social activity that promotes truth and civil discourse.”  For more on why “civility” is so problematic, see what Ijeoma Oluo, Jamilah Lemieux, Kitanya Harrison, @sassycrass,  and @AngryBlackLady have to say about it.

Why is “intellectual dark web” content at the top of my feed? Thoughts on WT:Social

WT:Social - News focused social network (the WT:Social logo)

On Friday, I signed up for WT:Social, a news focused social network from Jimmy Wales of Wikipedia fame.  There’s a lot of buzz about WT:Social, and membership is soaring — up from just a few thousand users at the beginning of the month to almost 100,000 when I signed up two days ago.  The waitlist is long, but if you get a paid account ($12.99/month or $100/year) you can skip the queue.

Since I’m also working on some news focused social network software, and so am interested to see how others approach the problem, I paid for a month.  If you’re also developing social media software, there’s a lot to learn here, so it might be worth it for you as well.

Otherwise, save your money. [1]

Red flags from the beginning

There were some red flags from the beginning, starting with the lack of up-front information about a code of conduct, anti-harassment policy, or content guidelines.  As Elisa Camahort Page said when we were discussing this:

A site that welcomes any content is inevitably a site that welcomes harassment, hate speech, threats, and misinformation. You cannot stave off one if you will not take a stand on the other.

Yeah really.  Eventually I discovered that the Terms and Conditions actually does link out to a Code of Conduct, as well as FAQs on Diversity and Ethics; from the dates on them, they seem to have been written for WT:Social’s previous incarnation as WikiTribune, but presumably they still apply.  Still, most people won’t invest the effort to find these, and so won’t know what’s expected of them.   It’s much better to make sure that people see these right up front — and explicitly agree to them.

Another immediately-obvious problem: the experience using a screen reader is really horrible.  There’s no “skip navigation” link, so the initial experience on the page starts with reading out all the menus and recommended sub-wikis.  Then when you finally get to a link, the title of the article is repeated multiple times, and it reads out the complete URL.  Yikes.

Also, it doesn’t seem like WT:Social has really thought through how people might try to game the system, let alone applied structured techniques like “social threat modeling”. [2]  For example, the notifications are all on by default — meaning new posts get sent to you via email.  What could possibly go wrong?  Here’s a screenshot of some email I got (with the subwiki’s name blanked out).

Email header. From: info@wikitribune.com Subject: WT:Social (wiki name blanked out): Subscribe to Read | Financial Times

In this particular case it was an accident [3] but you can certainly see how it could get abused.  Mechanisms like this make it open season for spammers, harassers, propagandists, and other unsavory types.

If you have an account there, you can turn the notifications off by going to “My Account” and then “Edit Notifications”.  The link https://wt.social/myaccount/notifications also works, at least for now … although, as Kathy Gill points out, the way the notification dialog uses red and green is problematic from an accessibility perspective.   Here’s what the initial settings look like via Coblis, the color blindness simulator.  Are they on or off?

Notifications dialog, with Off buttons in black and On buttons in grey

Even though I’ve turned all the notifications off, I still see some when I check the site.  Still, it’s a lot better than it was — and things aren’t showing up in my email.

It’s more like reddit than Facebook

Even though a lot of people are describing WT:Social as an alternative to Facebook, it’s really a lot more like reddit.  Links get organized into “subwikis”, which fill a similar role to reddit’s “subreddits”.  You can browse a subwiki, comment on posts there, or join it (which lets you submit links of your own).

The word “subwiki” doesn’t seem like a great choice to me.  Subwikis aren’t wikis, and they aren’t part of a wiki.  In my own informal survey, nobody found it a particularly appealing name.  But it probably sounded good to Jimmy Wales and the people he hangs out with.

Your home page is a “feed” of the most recent posts, along with the most recent comments, from any of the subwikis that you’ve joined.  There are also some “global links” that the people running the site decide everybody gets to see (no way to opt out yet, sorry, and no information about how they decide on which links to send out).   There’s also the additional twist of collaborative wiki-like editing of posts, although I haven’t been able to get it to work yet. [4]

It mostly works.  I was able to figure out how to make a post and share a link myself (although I had to hit refresh to see whether it had succeeded or not).   I like exploring new social networks, so I hunted around and found the FAQ and Known Bugs list. [5]  Putting my civil liberties hat on, I created the Section 215 subwiki to share links about the upcoming USA FREEDOM Act reauthorization battle, and seeded it with a post.  Then I sent invitation links to a couple of friends.

This was, in retrospect, a mistake.  My apologies.  If you’ve also signed up, and are considering inviting other people, please read this footnote first.[6]

How I spent my Friday evening

A few hours later one of the friends I had sent an invitation link to asked me

“Why is there an article from Quillette at the top of my WT:Social feed?”

Good question. I went back to check WT:Social again and there was an article from Quillette at the top of my feed as well. WTF?

For those of you who don’t know Quillette, it’s an online magazine usually described as part of the “Intellectual Dark Web” (IDW), which also includes prominent figures like Jordan Peterson, Ben Shapiro, and Jonathan Haidt.  Like others in the IDW, Quillette is polarizing.[7]  Some people see it as upholding values of free speech against the onslaught of SJWs and snowflakes. Others see it as … not the kind of content they want to be confronted with unexpectedly on a Friday night.

Most of my friends fall into the second category, so I hurriedly circled back to the people I had shared invitations with and let them know that they might be in for an unpleasant surprise if they signed up.  Then I looked to see what was going on.

Before we go into that, though, think for a moment about the effect this is likely to have on WT:Social. Lots of people are looking for alternatives to Facebook et al. When somebody like my friend goes to check out a new site and the first thing they see is IDW content … they’re likely to leave, and not come back.

And people who hear about this and don’t want to deal with IDW content might not even bother to check WT:Social out.  When I’ve told other friends that if they sign up for Jimmy Wales’ new social network they might well see IDW content at the top of their feed, their reaction is generally that they’ve got better things to do with their time.

Then again, there are plenty of people out there who actively like IDW content. They’re the ones who are likely to stick around, and invite their friends.  By placing this content so prominently, WT:Social is going to attract them — and drive away the people like me and most of my friends, who would rather not be confronted with IDW content on a Friday night.   This seems like good news for IDW fans who feel like they’re being oppressed by Facebook, Twitter, and reddit.  But as we’ll see, even for them, there are downsides.

Why should IDW fans have all the fun?

Once I looked into it, I realized that what had happened to my friend was fairly straightforward:

  • When they signed up for WT:Social, they were automatically joined to the “Long Reads” subwiki (along with a handful of other subwikis).
  • When somebody shared IDW content to Long Reads, all 16,000 people in the “Long Reads” subwiki (including people like my friend, who were automatically joined when they signed up) saw it at the top of their feed.  It’s quite possible some or all of them got it in their email as well.

It turned out that I had been automatically signed up for the “Long Reads” subwiki too.  When I left it, the Quillette article vanished from my feed.

But wait a second, why should IDW fans have all the fun? So I rejoined “Long Reads” and shared Jessie Daniels’ Twitter and White Supremacy: A Love Story. When I asked another friend to sign up, here’s what they saw at the top of their feed.

WT Social Feed, with "Twitter and White Supremacy" at the top

Of course, criticisms of large tech companies for helping white supremacists are also polarizing.  Some people see this as … not the kind of content they want to be confronted with on a Friday night. One WT:Social member appeared particularly incensed that this link was in his feed, and replied with multiple comments objecting to this “obvious nonsense” and “BS sensationalist headline”. And when I refreshed my front page, there was a heated debate on the Quillette post as well.

Since there isn’t any way to hide posts from your feed, or prevent WT:social from showing you the five most recent comments on every post, now there was something for everybody!

  • Conservatives looking for alternatives because they feel like they’re being oppressed by corporate social media sites will be immediately irritated by “obvious nonsense.”  Why use WT:Social instead of alt-right fave gab.ai?
  • People looking for alternatives because they feel like corporate social media sites are siding with white supremacists may get a better first impression — but then as soon as they scroll down they’ll see IDW content.  Thanks but no thanks.
  • And people from across the political spectrum will get to see bloviating in comments – with no way to turn it off.  Y’know, there are a lot of reasons people are looking for alternatives, but I don’t think I’ve ever heard people say “the real problem with Facebook and Reddit today is that there’s not enough arguing about white supremacy and the ‘intellectual dark web’.”

A good learning experience

People are continuing to flock to WT:Social: 75,000 new members over the last two days, and the wait list is over 100,000.  The potential is there; for example, somebody posted a link to a story about sexism in Wikipedia, and there were some really great comments.  There are interesting links on some of the subwikis as well.  But judging from the discussion on the site, most people signing up aren’t having good experiences.

WT:Social Subwiki / Spam requests happening, Created about 2 hours ago. Is there a way to block users or delete friend requests? I'm starting to get spam requests already. :-(

Admittedly, it’s early days yet.  WT:Social could learn from this, take a step back, and redesign their system yet again to pay more attention to things like harassment, abuse, and hate speech.  I’m not holding my breath, but we shall see.  I haven’t deleted my account yet [8], so if you want to friend or follow me, here I am.

More importantly, WT:Social is not the only game in town.  Their initial floundering is also a learning experience for other nascent social networks and news-focused social media.   True, many of the lessons about what not to do could also have been learned from Wikipedia’s own history and from projects like Mastodon and Diaspora that also set out to provide free speech-oriented alternatives to ad-funded, surveillance capitalism social networks.   Still, it’s a good reminder.

And fortunately, there are positive lessons as well.  One big takeaway is the huge amount of interest in WT:Social (as well as MeWe, the privacy-friendly Facebook alternative, which is also currently getting a lot of signups[9]).  A couple of years ago I wrote about a potential tipping point.  Since then, the pent-up demand is continuing to grow — and not just with techies; I’ve seen a lot of activists I know talking about WT:Social.

Another takeaway is that it’s time for a different approach.  What would a social media site look like if it built on best practices and research into anti-harassment, content moderation, online extremism, and amplifying marginalized voices?

Hopefully we’ll start to see some examples of this over the next few months.

Acknowledgements

Many thanks to Shireen, Kaliya, Shasta, Kathy, Elisa, Victoria, Jim, Vicki, Jim, Soren, Deborah and everybody else for the valuable discussion about WT:Social and feedback on earlier versions of this post!

Footnotes

[1] I certainly don’t mind paying for ad-free social media; I’ve had paid subscriptions to Dreamwidth for years, and support a couple of Mastodon instances on Patreon.  But these are all sites that I started using for free and have had good experiences with — and they are asking for a lot less than WT:Social.  Dragos Ruiu describes WT:Social’s approach as a “fee extortion waiting queue”, which is pretty much how I feel about it too.   Also, Wales’ track record is not encouraging; see for example Mathew Ingram’s Wikipedia’s co-founder wanted to let readers edit the news. What went wrong? and Julia Jacobs’ Wikipedia Isn’t Officially a Social Network. But the Harassment Can Get Ugly.

[2] Shireen Mitchell and I discussed social threat modeling in our 2017 SXSW talk.  There’s an overview of related work in The Winds of Change are in the Air.  My personal experience is that taking a social threat modeling approach early in a project is incredibly valuable.  Like so many other security-related issues, this kind of stuff is very hard and expensive to try to patch in after the fact.

[3] Somebody had shared a link to a story from the Financial Times (quite possibly the one about WT:Social) that turned out to be paywalled.  So when WT:Social tried to get the title of the article, it instead got the paywall message.  The software didn’t bother to check for this, but just posted it blithely, and sent out the email update to everybody following the subwiki who hadn’t yet turned off notifications.  The person who had posted the link realized their mistake, and deleted it quickly … but it was too late: the email had already gone out.
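For what it’s worth, the missing check is straightforward. Here’s a hedged sketch; the paywall marker phrases are illustrative guesses, and this obviously isn’t WT:Social’s actual code.

```python
# Hedged sketch of the missing check: fetch a shared link's title and refuse
# to auto-post it if it looks like a paywall page. Marker phrases are
# illustrative guesses. Requires the requests and beautifulsoup4 packages.

from typing import Optional

import requests
from bs4 import BeautifulSoup

PAYWALL_MARKERS = ("subscribe to read", "sign in to continue", "subscription required")

def fetch_title(url: str) -> Optional[str]:
    """Return the page title, or None if it smells like a paywall."""
    resp = requests.get(url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    title = soup.title.string.strip() if soup.title and soup.title.string else ""
    if not title or any(marker in title.lower() for marker in PAYWALL_MARKERS):
        return None  # caller should fall back to asking the poster for a title
    return title
```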

[4] Implementation bugs aside, I don’t understand how this is even supposed to work.  The impression I have is that you can set up posts that anybody can edit and people will then converge on a neutral point of view summary. What could possibly go wrong?

[5] Which has some scary stuff, like not being able to deny a friend request.

[6] Invitation links have some very unexpected behavior: everybody who accepts via the same link gets connected as friends, with no option to approve.  Once again, what could possibly go wrong?

[7] For example, when I shared an earlier draft of this on Facebook, somebody took exception to my classifying Jordan Peterson as “a mainstay” of the IDW.  So for a while the Facebook thread — which was supposed to be discussing WT:Social — turned into an argument about whether or not Peterson aligns with white supremacists, how misogynistic and anti-trans he is or isn’t, what some see as a pattern of passing off bullshit as “scientific studies”, and so on.

[8] Although I’ve cancelled future payments.

[9] Of course, MeWe has challenges of its own.  See Inside MeWe, Where Anti-Vaxxers and Conspiracy Theorists Thrive.

 

 

To Save Tech, #ListentoBlackWomen

To Save Tech, #ListentoBlackWomen

Community voting for the 2019 SXSW conference begins today, so I wanted to let people know about To Save Tech, #ListentoBlackWomen , a panel proposal by Shireen Mitchell of Stop Online Violence Against Women, Dr. Safiya Umoja Noble of USC (author of the excellent Algorithms of Oppression), and me.

Here’s the description:

The disinformation, hacking, harassment, and recruiting to extremist causes that we saw online during the 2016 elections highlight patterns Black women have long called attention to. So do the algorithmic biases of search algorithms, facial recognition software, and ad targeting; and the woefully inadequate responses of big tech companies, including their tendency to look to AI as a magic tech solution. Listening to Black women is a path for the tech industry to get beyond its history of aiding hate, racists, sexists, nativists, and anti-LGBTQ+ bigots, and move in the direction of justice, equity, diversity, and inclusion within the industry.

Please check out our proposal on the SXSW site. If you like it, here’s how you can support it:

  • Vote for it on the SXSW site. You’ll need to create an account to vote; once you do, the VOTE UP button is on the left-hand side.
  • Leave a comment saying why you’re voting for it. To leave a comment, you’ll need to log in separately via Twitter, Facebook, or Disqus… I hate software. Still, comments are doubly helpful: the selection committee takes them into account; and, if other people see that somebody has commented, they’re more likely to comment themselves.
  • Share it with your friends and colleagues who might be interested, in email or on social networks.

SXSW says that community voting counts for about 30% of their decision. Since white guys have historically been overrepresented at SXSW (and Black women historically underrepresented), and most voters are past attendees, there’s a built-in bias against panels like ours. So even though it’s inconvenient, your support is greatly appreciated.

The good news is that once you’ve created the account and logged in, it’s easy to support multiple proposals! There are quite a few others that are interesting (and in many cases great complements to ours). For example:

Having said all that, here’s a bit more background about our proposal.

Sign saying 'Listen to Black Women'

The origin for this specific proposal was a Twitter Moment that Shireen put together a few months ago called Hacking of 2016 would have never happened had folks #ListenedToBW. All three of us have focused on the underlying issues in our presentations and writings. To get an idea of where we’re coming from, as well as the videos on the SXSW page, check out

And while you’re at it, look around the SXSW site for other interesting panels featuring Black women – and vote them up so that SXSW attendees can listen to them as well 🙂

 


Image credit: Jeff Swensen, Getty Images, via Kiratiana Freelon’s March for Black Women Organizers Want to Put Our Issues Front and Center During March for Racial Justice on The Root

Torn Apart / Separados: immigrant detention after “zero tolerance”

Map of the United States with hundreds of orange circles on it

Torn Apart / Separados visualizes the geo-spatial, financial, and infrastructural dimensions of immigrant detention in the wake of the Trump Administration’s “zero tolerance” policy.  The map above is just one of their visualizations of the locations of ICE facilities and private detention centers, based on aggregating and cross-referencing publicly available data.

With this information, perhaps our communities will begin to see the magnitude of the threat to human dignity occurring on our watch and the complex machinery driving government policy. Perhaps rather than feeling helpless, we can recognize that we have skills to tread these troubled waters, particularly in collaboration with each other.

— Roopika Risam, in What We Have, What We Can

It’s an important project.   The data’s extremely useful for activists, advocates, and journalists.  If you have the skills and a bit of time to help, Torn Apart / Separados offers a chance to make a huge impact in a humanitarian crisis.  Here are a few links with more information:

So please consider getting involved.   Sylvia Fernández’ Torn Apart / Separados Call for Contributors and Reviewers, on HASTAC, describes several different ways people can help – as well as surveys for allies, activist and advocacy organizations and lawyers and legal advisors asking how the project’s resources could be useful to their work and whether they have any data or other resources to contribute.

And please also help get the word out – share the links above (or this post), and like and RT key tweets on the #TornApart and #Separados hashtags.

 

 

Sex, pleasure, and diversity-friendly software: the article the ACM wouldn’t publish

Sex, pleasure, and diversity-friendly software was originally written as an invited contribution to the Human to Human issue of XRDS: Crossroads, the Association for Computing Machinery’s student magazine.  After a series of presentations on diversity-friendly software, it seemed like an exciting opportunity to broaden awareness among budding computer scientists of important topics that are generally overlooked both in university courses and in the industry.

Alas, things didn’t work out that way.

Overriding the objections of the student editors, and despite agreeing that the quality of the work was high and the ideas were interesting, the ACM refused to publish the article. The ACM employees involved were all professional and respectful, and agreed on the importance of diversity.  Still, due to concerns about discussions of sex and sexuality offending ACM subscribers and members, they would not even consider publishing a revised version.

The CHI paper What’s at Issue: Sex, Stigma, and Politics in ACM Publishing (authored by Alex Ahmed, Judeth Oden Choi, Teresa Almeida, Kelly Ireland, and me) explores some of the underlying institutional and sociopolitical problems that this episode and others involved in editing the Human to Human issue highlight, and proposes starting points for future action for HCI-related research and academic publishing practices.

This revised version of Sex, pleasure, and diversity-friendly software is written as a companion piece to What’s at Issue. After a brief background section, it includes extended (and lightly-edited) excerpts from the earlier version of the article, and my reflections on the experience and the opportunities it highlights for software engineering. An appendix includes a brief overview of diversity-friendly software along with links to more detailed discussions.

Continue reading Sex, pleasure, and diversity-friendly software: the article the ACM wouldn’t publish