Disinformation Week of Action and the #DisruptDisinfo Block Party

  Twitter Block Party: Monday October 26 4-5 PM ET

MediaJustice and the Disinfo Defense League are kicking off their Week of Action Against Disinformation with the #DisruptDisinfo Twitter Block Party on Monday, October 26, at 1 PM Pacific / 4 PM Eastern.  It’s a great chance to learn how to respond when people share disinformation — and how to avoid sharing disinformation yourself.

Whether or not you’re on Twitter, you can follow along here.

The Disinfo Defense League is a group of non-partisan non-profits and researchers, and they’ve got a lot of other great events planned as part of the week of action.

And if that’s not enough, Joan Donovan will be starting off each day of the week with a stream on Big if True at 11:00 p.m. Pacific.

If you can’t wait till next week to get started … no worries!  Here are a couple of videos to tide you over.  First, here’s a short video by Shireen Mitchell with tips on cutting down the spread of disinformation.

“Everybody’s been targeted by disinformation and other kinds of propaganda,” from Indivisible Plus Washington on Vimeo.

And here’s Ruha Benjamin, Safiya Noble, Fadi Quran, and Shireen discussing voter suppression with the Real Facebook Oversight Board — great background for the Canaries in the Coal Mine and Democracy Dilemma events.

Lessons from the Past: What Zoom can learn from Microsoft

The Zoom logo, with a black padlock

Summary: In the early 2000s, Microsoft faced challenges similar to the ones Zoom’s looking at today, and successfully turned things around.  Some of the key lessons from Microsoft’s experiences include:

  • Think broadly about “trust”
  • Make trust the product teams’ responsibility
  • Fix your privacy practices and policies
  • Do threat modeling
  • Use the tools — and develop new ones
  • Learn from your experiences — and continue to update your processes
  • It’s a social problem, not just “technical”
  • It may take a while to address — but there’s a big potential upside

A tough time for Zoom

After a great start to the year, with usage soaring as people around the world stay home, the last few weeks have been a really tough time for Zoom.  The company has always focused on convenience and usability.  Now, they’re dealing with the consequences of not having paid much attention to security and privacy.

Facing an existential threat to its business, Zoom’s CEO Eric Yuan has announced that the company will be shutting down feature development for 90 days to focus on security and privacy. They’re also bringing in third-party security consultants, creating an advisory board, and engaging with security researchers.

Lessons from the past

Back in 2001, high-profile security problems (including one so severe the FBI issued a warning) had become an existential threat to Microsoft’s business.   In January of 2002, Bill Gates’ company-wide Trustworthy Computing memo announced that the company was shutting down Windows feature development to focus on security and privacy.

Michael Howard’s 10 years since the Bill Gates security memo: A personal journey is a great short summary of what Microsoft did as part of the effort — including bringing in third-party security consultants, creating an advisory board, and engaging with security researchers.

And it worked.  It took a few years, but Microsoft wound up turning things around.  By the mid-2000s, security and trustworthiness were becoming competitive advantages for the company.

I was at Microsoft Research at the time, and wound up pretty heavily involved in this work for several years — including helping plan the initial “security push”, researching attack surface reduction with Jeannette Wing and Michael, and modeling the effects of buffer overrun detection and mitigation technologies as part of a $200 million decision about whether or not to recompile the entire code base for a service pack release.  It was really stressful, an incredible sense of urgency crashing up against the complexities of evolving a culture that had been seen as core to the company’s successes.  At the same time, though, it was also a chance to work with some really great people and have an impact on the whole software industry.

Of course, it’s a different world today from the early 2000s.  Some of what we did looks downright quaint by today’s standards — for example all the time, energy, and money that went into flying consultants and advisors to Redmond, and flying employees to visit customers and conferences.  And Zoom’s very different than Microsoft was in quite a few ways, starting with being much more nimble.

Still, many of Microsoft’s experiences are extremely relevant.  Here are some of the lessons that might be especially useful to Zoom.

Think broadly about “trust”

“Trust online will not be achieved through security because that vision is founded on a misconstrued notion of trust” — Helen Nissenbaum,  Securing Trust Online: Wisdom or Oxymoron?, 2001

Zoom clearly understands this. In A Message to Our Users, Eric Yuan emphasized that “we want to do what it takes to maintain your trust”, and also talked about “shifting all our engineering resources to focus on our biggest trust, safety, and privacy issues” as well as committing to providing a transparency report.  That’s very encouraging!

That said, Zoom’s initial responses have primarily focused on the security side.  One clear example is their new CISO Advisory Board, made up of Chief Information Security Officers from large corporations.  Another is bringing in ex-Facebook Chief Security Officer Alex Stamos as an outside advisor, and Katie Moussouris of Luta Security to assess Zoom’s internal vulnerability handling processes.

“Trustworthiness is a much broader concept than security, and winning our customers’ trust involves more than just fixing bugs.” — Bill Gates, Trustworthy Computing, 2001

CISOs have a deep understanding of security, and Alex’s and Katie’s experiences and expertise are clearly relevant, so I can certainly see why Zoom started there.  Still, to make broad progress on trust, Zoom’s also likely to need

  • consumer privacy experts, as well as an advisory board with representatives from groups that have deep knowledge of privacy and represent consumer interests (such as EPIC, Consumer Federation of America, Privacy International, and Privacy Rights Clearinghouse)
  • safety experts, as well as an advisory board with representatives from those who are most targeted online — including domestic violence survivors, reproductive justice advocates, trans and non-binary people, people in recovery, racial justice activists, and disabled people

Similarly, as Zoom’s refocusing engineering, I really wonder how much of the training, code review, and testing they’re doing is informed by this broader perspective.  As Casey Fiesler says, user personas really need to include “user stalking their ex,” “user who wants to traumatize vulnerable folks,” and “user who thinks it’s funny to show everyone their genitals”.  That clearly hasn’t been the case so far at Zoom.

Of course, you gotta start somewhere.   Zoom’s first steps are good ones.  Hopefully they’re already working on these other aspects as well.

Make trust the product teams’ responsibility

“Once Microsoft started using the Security Development Lifecycle, there was no stopping it.” — from Life in the Digital Crosshairs, 2014

Microsoft’s Security Development Lifecycle (SDL) continues to be one of the most significant contributions of the early-2000s work.  Zoom’s different enough from Microsoft that other security processes, or SDL variants for agile development and DevOps, might be better starting points; but the same principles are likely to apply.  Zoom needs to find a way to operationalize security and other aspects of trustworthiness throughout their whole engineering organization, while evolving their culture to be more security-focused.

One of the most important principles of the SDL is to incorporate security into everybody’s role.  It’s important and valuable to have an empowered, well-resourced security team that focuses on security and privacy — and it’s equally important to have this expertise in the teams designing, developing, and testing the products.  As well as investing in training for the product teams, Microsoft wound up introducing new roles like Security Product Manager and Security Architect, and revising other job responsibilities to make the security focus explicit.

“Privacy must become integral to organizational priorities, project objectives, design processes, and planning operations.”  — Ann Cavoukian, Privacy by Design: the Seven Foundational Principles

The same is true for other aspects of trust.  Privacy and safety teams are useful; by themselves, they’re not enough.  Fortunately, as with the SDL, there are useful blueprints for the path forward — Privacy by Design is a great example.

Fix your privacy practices and policies

“This is a clear breach of GDPR” — Tara Taubman-Bassirian, in Zoom’s Security and Privacy Woes Violated GDPR, Expert Says

EPIC’s 2001 FTC complaint about Microsoft Passport’s privacy practices led to a 2002 consent decree which committed the company to cleaning up its privacy act.  Progress was imperfect, but substantial in many ways.  Today’s FTC ignored EPIC’s 2019 complaint against Zoom, but that doesn’t mean they’re off the hook.  In Europe, there’s the GDPR and regulators who don’t have a lot of patience with badly-behaving US companies. In the US, Zoom may well have problems with COPPA, FERPA, HIPAA, and potentially a bunch of state regulations as well.

Even after some improvements, Zoom’s privacy policy still has a lot of problems — including minimal restrictions on sharing their data with third parties.   It doesn’t have to be this way.  One very positive way in which Zoom today is similar to Microsoft in the early 2000s is that their business model primarily revolves around people paying for software — as opposed to advertising-based companies like Facebook and Google who rely on exploiting their users’ personal data.

Zoom really needs to fix their privacy policy — quite frankly, they shouldn’t expect any credibility in the privacy community until they do.  But that’s just the first step.  Getting privacy experts involved in the design and review of their products, auditing their software to learn whether other unexpected data sharing is going on (and introducing tools and processes to prevent future problems), and applying the principles of Privacy by Design throughout their engineering process are also important.

Do threat modeling

“The risks, the misuse, we never thought about that.” — Eric Yuan, in Zoom Rushes to Improve Privacy for Consumers Flooding Its Service

Threat modeling is a structured approach to looking at security threats — and what can be done in response.  As well as identifying specific threats that need to be prevented or mitigated, threat modeling also reminds developers and testers to keep security in mind, and forces the organization to document a system’s security properties — which in turn helps with tools, code review, and testing.

Microsoft’s early-2000s work on threat modeling, including Window Snyder and Frank Swiderski’s book and the broad use of the STRIDE model internally, had a significant impact not just on the company but the broader industry.   Threat modeling’s come a long way since then, with well-developed techniques and methodologies as well as excellent resources available like Mitre’s ATT&CK.

Still, many companies don’t do threat modeling very well, especially when it comes to social threats.   Facebook’s threat modeling, for example, didn’t pay attention to easy-to-predict threats such as companies like Cambridge Analytica lying to them, fake news sites trying to get more views by manipulating trending topics, intelligence agencies trying to influence elections in other countries, or communications channels being used to foment genocide.

Zoombombing is a great example of a high-profile problem that could have been anticipated and significantly reduced by even basic social threat modeling techniques.  The weakness of Zoom’s muting, blocking, and moderation support (leaving attendees open to bullying, hate speech, and harassment) is another major area where Zoom hasn’t paid attention to the threats.  And it’s worth noting that these aren’t just problems in the consumer and education worlds; they’re issues in corporate environments as well.
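
To make this concrete, here’s a minimal sketch, in TypeScript with hypothetical feature and threat names rather than Zoom’s actual threat model, of what a STRIDE-style checklist might look like once it’s extended to cover social threats like Zoombombing and harassment:

    // Classic STRIDE categories, extended with the social threats discussed above.
    type ThreatCategory =
      | 'Spoofing' | 'Tampering' | 'Repudiation' | 'InformationDisclosure'
      | 'DenialOfService' | 'ElevationOfPrivilege'
      | 'Impersonation' | 'Harassment'; // social extensions beyond classic STRIDE

    interface Threat {
      category: ThreatCategory;
      threat: string;        // who does what to whom
      mitigations: string[]; // controls that prevent or reduce the threat
    }

    // Hypothetical entries for a "join meeting" feature.
    const joinMeetingThreats: Threat[] = [
      {
        category: 'ElevationOfPrivilege',
        threat: 'Uninvited user guesses or scrapes a meeting ID and joins ("Zoombombing")',
        mitigations: ['Passwords on by default', 'Waiting rooms', 'Rate-limit join attempts'],
      },
      {
        category: 'Harassment',
        threat: 'Attendee floods the meeting with abusive chat or screen-shared content',
        mitigations: ['Host mute/remove controls', 'Attendee screen sharing off by default', 'In-meeting reporting'],
      },
    ];

    // The documentation itself is the point: reviewers, testers, and tool builders
    // can all work from the same list of threats and expected mitigations.
    console.log(`join-meeting: ${joinMeetingThreats.length} threats documented`);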

So hopefully, as Zoom focuses on threat modeling, they’ll get inputs from Window, Casey, Shireen Mitchell, Kaliya Young, Danielle Citron, Leigh Honeywell, and others who focus on the social aspects — as well as from content moderation experts like Sarah Roberts, who have a lot of experience with how to mitigate some of these threats.

Use the tools — and develop new ones

“Consider tools throughout the process, beginning in the planning phase” — me, in Steering the Pyramids: Tools, Technology, and Process in Engineering at Microsoft, ICSM 2002

Tools aren’t magic bullets — some of my most valuable contributions in the Microsoft security efforts were times I said “tools aren’t going to help with this particular problem.”  Still, tools can make a big difference on some kinds of problems.  As well as adopting commercially-available and research tools, Microsoft invested heavily in creating its own — static analysis tools (the focus of Righting Software, from 2004, which discusses the PREfix and PREfast tools I architected as well as SLAM, Vault, and ESP), as well as attack surface estimators, vulnerability scanners, and so much more.

Zoom’s undisclosed, and apparently unintentional, data-sharing with Facebook is a good example of an area where tools can be helpful: analyzing dependencies’ security behavior could have identified the privacy-invasive behavior of Facebook’s iOS SDK.  Zoom’s recent, and welcome, announcement that users will soon be able to customize which data center regions their account can use for its real-time meeting traffic is another: information flow analyses, and better use of chaos testing and run-time monitoring tools, can help avoid the kind of unexpected behavior that led to meeting traffic getting routed through China a couple of months ago.
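
As a small illustration of that first example, here’s a sketch of the kind of automated check that records every third-party host an app contacts and fails when one isn’t approved (for a hypothetical web client, with a made-up allowlist).  A check along these lines, run on each dependency update, is the sort of tool that could have flagged an SDK quietly phoning home to Facebook:

    import { chromium } from 'playwright';

    // Hosts the product is expected to contact; anything else gets flagged.
    // (Hypothetical allowlist, for illustration only.)
    const ALLOWED_HOSTS = ['zoom.us', 'zoomcdn.example'];

    function isAllowed(host: string): boolean {
      return ALLOWED_HOSTS.some((ok) => host === ok || host.endsWith('.' + ok));
    }

    async function auditEgress(url: string): Promise<void> {
      const browser = await chromium.launch();
      const page = await browser.newPage();
      const unexpected = new Set<string>();

      // Record the destination of every network request the page makes.
      page.on('request', (request) => {
        const host = new URL(request.url()).hostname;
        if (!isAllowed(host)) unexpected.add(host);
      });

      await page.goto(url, { waitUntil: 'networkidle' });
      await browser.close();

      if (unexpected.size > 0) {
        // In CI, this failure forces someone to look at the new data flow.
        throw new Error('Unexpected third-party hosts: ' + [...unexpected].join(', '));
      }
    }

    auditEgress('https://zoom.us').catch(console.error);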

Zoom isn’t anywhere near as large as companies like Google, Facebook, and Amazon that have followed Microsoft’s playbook of building large internal tools teams that mix research and practical tool development.  So they’ll need to think about where off-the-shelf tools can help, where they can get creative by applying technologies like Jepsen and Alloy, and where they’ll need to move the state of the art forward.

Tools are often deployed in a tactical way, helping to address particular problems.  Especially in a situation like this, it’s also worth thinking about tool usage strategically, for example looking at how tools can contribute to process and cultural change.

Learn from your experiences — and continue to update your processes

“Controls are created to prevent hazards. Accidents occur when the controls are ineffective.” — Nancy Leveson, in How To Learn More From Accidents

Microsoft’s products and processes evolved significantly as part of the focus on Trustworthy Computing.  In many cases the changes were driven by analysis of security vulnerabilities.  Any vulnerability is a chance to ask questions like “why weren’t the controls that should have prevented this hazard from shipping (testing, code review, pen testing) effective?”  Very often the answers point to training or process gaps, or identify patterns that highlight where other vulnerabilities may be lurking.

Root cause analysis was one popular technique at Microsoft.  The state of the art has progressed significantly over the last 20 years, so other approaches may make more sense for Zoom.  How To Learn More From Accidents is an excellent intro to Leveson’s Causal Analysis Using System Theory (CAST) approach; her 2019 CAST Handbook and Engineering a Safer World: Systems Thinking Applied to Safety, from 2012, go into a lot more detail.  No matter what approach Zoom winds up using, though, there’s a lot of leverage here.

It’s also useful to apply this kind of thinking to the system level.   Zoom has had indications for a while that there were some big security and privacy problems.  Why didn’t something get done about it before it hit the front pages and the FBI was issuing warnings?   Maybe (as with Microsoft back in the day) some people had been trying to get the word out that there was a big problem but they didn’t get heard.   Maybe executives and the board understood the risks, made a rational decision to focus on other priorities, but didn’t realize quickly enough that the risks had changed significantly as a result of the pandemic.

Whatever the explanation, it almost certainly points to opportunities for improvement going forward.

It’s a social problem, not just “technical”

“These are racist cyber attacks; not innocent party crashers just stopping by to say hey.” — Dr. Dennis Johnson, in Demand that Zoom immediately create a solution to protect its users from racist cyber attacks!

Software engineers like to think of security and privacy as purely “technical” problems.   The reality, though, is that software is used by people and organizations; you can’t separate the technology from the social aspects.  Alas, as Zeynep Tufekci,  Sally Applin, and others continue to point out, most software companies have a long track record of not getting anthropologists, sociologists and other social scientists involved in the process.

All of Microsoft’s work I’ve discussed here had a strong social focus: for example, the cultural, organizational, and interpersonal aspects of the SDL and threat modeling, and the “analysis is necessary but by no means sufficient” attitude towards tools.

“Applying social science perspectives to the field of computer security not only helps explain current limitations, and highlight an emerging trend, but also points the way towards a radical rethinking of how to make progress on this vital issue.” — Sarah Blankinship, Tomasz Ostwald, and me in Computer Science as a Social Science: Applications to Computer Security, 2009

Another outstanding example of the social perspective is the work that people like Window Snyder, Kymberlee Price, Katie Moussouris, Terri Forslof, Celene Richenburg, and Sarah did to change the company’s attitude about working with the security community and move towards an ecosystem approach.  In an excellent Facebook discussion from a couple of years ago, Steve Lipner commented that he and other experienced security people at the company originally resisted this outreach until Window and others changed their minds.

Microsoft’s early-2000s work was also heavily influenced by people like Jeannette Wing, Helen Nissenbaum, Laurie Williams, Andreas Zeller, and Andrea Matwyshyn, whose work was infused with social perspectives.  Today, Microsoft is reportedly the world’s second-largest employer of anthropologists.

Of course, Zoom won’t necessarily use the same tactics as Microsoft.  For example:

  • Microsoft’s outreach strategy was very in-person focused, including conferences and parties.  As the conference circuit moves online, Zoom’s got a great opportunity to build on the kudos they’ve gotten for their initial engagement with security researchers.
  • Zoom doesn’t have anything equivalent to Microsoft Research, but there are plenty of other ways to engage with academia.
  • Some of the most important disciplines for Zoom to engage with, like intersectional internet studies and content moderation, didn’t even exist in the early 2000s.

The calls by civil rights groups like Color Of Change, the National LGBTQ Task Force, and the National Hispanic Media Coalition for Zoom to release a plan to combat racial harassment also highlight the need for expertise in diversity, equity, and inclusion.   Perspectives from people like Safiya Noble, Ruha Benjamin, Shireen Mitchell, André Brock, and others who focus on the intersection of race and technology are especially important here.

As well as bringing experts in as consultants, Zoom also needs to build capacity by hiring them throughout the organization — including at the executive level as well as senior product and engineering roles.

It may take a while to address — but there’s a big potential upside

“We needed to change some security settings, like password enforcement on day one. But we learned a lesson, we quickly made a change.”  — Eric Yuan, in Zoom’s CEO Wants You to Trust the Company Again

Zoom’s getting a lot of justifiable praise for their fast and forceful reaction: quickly releasing several important fixes, engaging with security researchers, freezing feature development, communicating regularly and candidly.  That said, they’re still at a very early stage.  They’re just starting to think through what security, privacy, safety, and trust mean for them.  Most likely, they’re still trying to fully understand the technical debt — and ethical debt — they’ve taken on by ignoring it for so many years.

Zoom will probably continue to make progress much faster than Microsoft did — their code base is a lot smaller, their development cycles are a lot faster, and they don’t have the same legacy problems.  Still, it’s instructive to look at Microsoft’s timeline:

  • In September 2001 (after Code Red, Nimda, and Gartner’s recommendation that companies consider Apache rather than Microsoft’s IIS), Microsoft knew they had a problem.
  • By early 2002, Bill Gates’ memo and the Windows security push signaled the start of significant sustained investment.
  • Windows Server 2003 included some significant improvements, but in the summer of 2003 the Blaster worm led to another major mobilization with the Sledgehammer task force trying to “squash the bugs.”
  • Things only really turned the corner on a sustained basis with the introduction of the Security Development Lifecycle (SDL) in July 2004 and the release of Windows XP SP2 later that year.

At the end of the day, though, Microsoft wound up in a much stronger position than they had been in before.  By the time I was GM of Competitive Strategy in 2006-7, security and customer perceptions of trustworthiness were starting to become significant competitive advantages.  Today, Microsoft’s reputation for security is one of the reasons that school districts like New York are replacing Zoom with Microsoft Teams.

So if Zoom continues to apply the lessons they’ve learned, and sustains their new focus on trust, there’s a big upside.

Zoom already has a remarkably usable, highly scalable, and very reliable product.  If they also become leaders in security, privacy, and other aspects of trust, they’ll be in a great position.

 


Thanks to Steve, David, George, Dragos, Matt, Kristen, Jeff, Pat, Michael, Jason, Deborah, and everybody else for feedback and discussions on earlier versions of this post.

Lessons (so far) from WT:Social

Growth has tapered off significantly on WT:Social, the news focused social network from Jimmy Wales of Wikipedia fame.  Usability remains a huge problem.  There’s a lot of spam and other noise.  It’s still early days, and things may well improve over time, but it’s hard to be optimistic.

So now’s a good time to take a step back and look at what can be learned from the experience so far.  Over the years, I’ve done posts like this with Mastodon, Diaspora, Google+, and other social networks.  This time, I’m working on news focused social network software myself, so some of these lessons are likely to be especially relevant for me.

To start with, here are a few I discussed in my 2017 Mastodon post (where I noted “we’ve seen them before with Dreamwidth, Diaspora, StatusNet, Gnu Social, Pinboard, Ello, and others”) that are worth reiterating once again:

  • A lot of people want an alternative to corporate-owned ad-funded social networks.
  • A small team of developers can get something usable out quickly.
  • There’s interest across the world, not just in the U.S.

Moving on to some new lessons …

  1. People like the idea of working together to help fight disinformation.  Perhaps the most encouraging takeaway from the WT:Social experience so far is that a lot of people understand that disinformation is a problem — and want to help do something about it.  True, WT:Social’s “everybody can edit anything” approach doesn’t work well; no surprises there.  Still, it’s worth exploring other approaches involving more nuanced collaboration between paid professionals and “the crowd” (with training available, and perhaps some kind of Slashdot-like meta-moderation), all assisted by solid tools. [1]
  2. There’s a good opportunity for a “better reddit”.  Jimmy positioned WT:Social as a Facebook alternative, but as I discussed in Why is “intellectual dark web” content at the top of my feed?, it’s currently more like reddit … and that’s not a bad thing!  On many topics, reddit’s links are mediocre (or worse) and provide very limited perspectives.  reddit discussions are often toxic. And while there are alternatives to Facebook and Twitter with some traction (MeWe and Mastodon both have millions of users), none of them have the same news focus as reddit.
  3. Design and usability are key.  People understand that a new site won’t be as polished as reddit or Facebook, but if it’s too confusing they generally won’t invite their friends [2] — and are likely to stop coming back.  WT:Social would have been better off starting with less functionality (did they really need hashtags right off the bat?) and paying more attention to design and usability.
  4. Help people have good initial experiences.   My first impression of WT:Social included getting asked for money, seeing off-topic links that happened to be at the top of the default subwikis at the time, and then getting spam in my email.  Hooray!  And pity the new user who found stuff confusing: for quite a while, there was no easy way of asking for help or finding the FAQ. Fortunately, it’s not hard to improve “first use experiences” through techniques like better design, simple onboarding screens, and easy access to resources and support. [3]
  5. Focus on accessibility up front or it will be a problem. WT:Social is a horrible experience using a screen reader, and has many blatant accessibility bugs, like missing alt-text and low color contrast, that free site analyzers like Axe and WAVE can detect (see the sketch after this list). Many other social networks don’t do a great job here either, so there’s a big opportunity for a new offering to distinguish itself and reach a large audience of people whose needs aren’t being met today.
  6. Focus on harassment up front or it will be a problem.  WT:Social is filled with mechanisms that are optimized for harassers, doesn’t allow muting or blocking, and doesn’t even make it easy to find the code of conduct or anti-harassment policy.  Similarly, Wikipedia, Diaspora, Google+, Mastodon, and Twitter didn’t pay attention to harassment up-front, with the expected results.   Y’know, it doesn’t have to be this way.
  7. Think about how different cultural norms and legal systems will interact, including difficult areas relating to content that different people view as art, “porn”, and/or “NSFW”.  There are opportunities for innovation here: Mastodon worked through some similar issues, and came up with interesting techniques like tailorable content warnings and a mechanism to deal with images that are legal in some geographies but not others.
  8. Design for everybody, not just the kind of people the founder usually interacts with.  Lessons #3-7 are all examples of this (and I talked about another one, the term “subwiki”, in a previous post).[4]  I’ve made the same mistake myself.  Fortunately, it’s not hard to do better: work with a broad range of people, including those who are marginalized in different ways than you, from the very beginning — and listen to their ideas, suggestions, and feedback.
  9. Consider building on an existing discussion platform instead of rolling your own.  WT:Social’s initial discussion mechanism was pretty basic, and even after a couple of months of enhancements the lack of notifications can make it hard to have a good discussion there.  Does it make sense to leverage existing open-source commenting platforms like Coral Project or forum software like Discourse, NodeBB, or Vanilla Forums?
  10. Consider leveraging open standards based on decentralized identity and verifiable credentials.   Decentralized architectures are more complex but also a much better match for the real world.  Credit for this one goes to Kaliya Young (aka IdentityWoman) on Twitter, where she also provided some links to reading material.
  11. There’s a big opportunity for anti-oppressive social networks in general.  Today’s large social networks welcome racists, misogynists, alt-righters, and other bigots; Facebook goes even farther, siding with authoritarians and promoting genocide.  Most emerging alternatives either appeal even more blatantly to fascists (gab.ai) or strive for “neutrality” (WT:Social, MeWe, Minds). [5]   Dreamwidth continues to be a shining exception, and Mastodon’s early positioning as “Twitter without Nazis” is another (and there’s a lot to be learned from its challenges).  Still, it’s clear that there’s a very large under-served market here.
  12. It’s time for a different approach. What would a news focused social media site look like if it were grounded in design justice and built on best practices and research into anti-harassment, content moderation, online extremism, and amplifying marginalized voices?  It’s hard to know, because there aren’t any high-profile examples of this.  Seems like an opportunity to me!
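
Here’s the accessibility sketch promised in lesson #5: a minimal example of how a free analyzer can automatically catch blatant problems like missing alt-text and low color contrast.  It uses axe-core via Playwright (tooling that postdates some of this history), and the URL is just a placeholder:

    import { chromium } from 'playwright';
    import AxeBuilder from '@axe-core/playwright';

    async function a11yScan(url: string): Promise<void> {
      const browser = await chromium.launch();
      const page = await browser.newPage();
      await page.goto(url);

      // Run the axe-core ruleset against the live page.
      const results = await new AxeBuilder({ page }).analyze();

      // Rules like 'image-alt' (missing alt-text) and 'color-contrast'
      // (low contrast) catch exactly the problems described above.
      for (const violation of results.violations) {
        console.log(`${violation.id} [${violation.impact}]: ${violation.nodes.length} element(s)`);
      }

      await browser.close();
    }

    a11yScan('https://wt.social').catch(console.error);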

One of the things that really struck me as I was working on this list is how WT:Social has repeated a lot of mistakes other social networks (including Wikipedia) have made.  But even though WT:Social hasn’t taken advantage of its opportunities to learn from other social networks, other social networks can learn from WT:Social.

I’m sure there are other good lessons as well – or aspects of these I’ve overlooked.  If you have thoughts, please share them!

 


Thanks to Deborah, Eve, and everybody else who gave feedback on earlier versions of this post!


[1] As I was working on this post, I stumbled on Amy X Zhang’s thesis, which has some intriguing ideas and prototypes on the tools front.  Starbird et al.’s paper on disinformation as collaborative work is also relevant.  How to apply collaborative approaches to countering disinformation?

[2]  The responses to Jimmy’s recent Why Inviting Friends Is Important highlight this.

[3] Indeed, WT:Social has recently made some progress here, thanks to Linda Blanchard’s excellent work on the Beginner’s Guide subwiki.

[4] Another example: the way new users automatically follow Jimmy Wales.  Jimmy’s said that this is done to make it more convenient for him to broadcast messages to everybody on the site … but there are plenty of other ways to accomplish this.  I get it that Jimmy wants to share the news when Rush’s drummer dies or a Turkish court rules in favor of Wikipedia, but it’s a classic case of assuming that users who haven’t expressed an interest in classic rock or Wikipedia share his interests.

[5] I talked at length about “neutrality” in WT:Social will have to pick a side.   Jimmy’s comment in the  discussion on WT:Social is illuminating: he thinks people are “yearning” for technology that “fosters the kind of social activity that promotes truth and civil discourse.”  For more on why “civility” is so problematic, see what Ijeoma Oluo, Jamilah Lemieux, Kitanya Harrison, @sassycrass,  and @AngryBlackLady have to say about it.

Where are Black Women’s Voices in the Rolling Stone “Russian Troll” Story?


It’s great to see the recent surge of media interest in disinformation and the 2020 election. Errin Haines’ Manipulation Machines and Whitney Phillips’ The Toxins We Carry are two good examples.  Disappointingly, a recent Rolling Stone story with a clickbait headline has gotten a lot more exposure so far than either of these: over 60,000 shares on Facebook.  While the article makes some good points, it also has some major problems — starting with a complete erasure of Black women.

“Erasure” refers to the ways the media (and more generally society) ignores the existence and contributions of some people and groups. Moya Bailey and Trudy (aka @thetrudz) give an example in On misogynoir: citation, erasure, and plagiarism: despite coining the term misogynoir and writing about it for years, they experience, to varying degrees,

our contributions being erased, our writing not cited, or our words plagiarized by people who find the word compelling.

Another example: the Rolling Stone article doesn’t quote, cite, or even mention any Black women — even though it focuses on disinformation campaigns involving fake accounts claiming to be Black women, a topic that (rather unsurprisingly) Black women have been dealing with and developing expertise in for a very long time.

As a result, the article ignores techniques that Black women have successfully used to identify and combat this kind of disinformation.  Instead, the Rolling Stone article presents a recommendation with obvious flaws: it won’t work to reduce Russian disinformation, and it’s harmful to Black women.

Erasure of Black women is far too common, on this topic and many others.   So before we delve into the Rolling Stone article, I want to highlight a surprisingly easy and straightforward way to help you notice it.

Be very skeptical about any article that doesn’t include Black women with expertise.

I’ll expand on this suggestion with some additional techniques at the end of this article, after looking at the erasure in more detail, and briefly discussing one of the other problems it leads to.  First, though, let’s start with the Rolling Stone article’s positive aspects.

Gray Russian nesting dolls, with the Twitter logo on the right -- the image from the Rolling Stone article

In That Uplifting Tweet You Just Shared? A Russian Troll Sent It, authors Darren Linvill and Patrick Warren highlight the threat of disinformation.  The article does a good job of showing how “professional trolls” like Russia’s IRA start by building trust by sharing tweets and links their target audience agrees with, before mixing in disinformation.  As Linvill and Warren say,

Effective disinformation is embedded in an account you agree with

This isn’t a new observation.   Shireen Mitchell’s 2018 report How The Facebook Ads that Targeted Voters Centered on Black American Culture, for example, similarly notes the IRA’s initial ads were “designed to build a trusted community of Black and Latino voters” before pivoting to focus on digital voter suppression, a topic she continues to focus on in her ongoing work.   Still, it’s a very important point, and one that a lot of people tend not to think about.

It’s easy to assume that just because an account is tweeting things you agree with, it’s on the same side as you are … but that’s not always the case.  Don’t trust a Twitter account just because it’s got some good tweets.

Pictures of Joy Reid and Shireen Mitchell, with a chyron on the bottom saying "#AMJOY: Black Voters continue to be the target of digital disinformation campaigns" and the MSNBC logo

The Rolling Stone article’s examples, both of fake accounts impersonating Black women, highlight another valuable takeaway.  Of course, accounts of any race and gender can be faked and used for disinformation, so it would have been helpful to include some other examples as well; Linvill’s explanation of why they didn’t (in response to a question from Sabaah Folayan) is, as he admits, unsatisfying.  Still, as the Senate Intelligence Committee’s report discusses, Black Americans were the group targeted the most by Russian social media efforts in 2016, and they’re still doing it.

So it’s important to be aware that “impersonation” is one of the techniques manipulators use.

Again, though, this isn’t a new observation.  Impersonating Black women, for example, had already been discussed in earlier work by Black women long before the Rolling Stone article appeared.

Which brings us back to the erasure.   None of this previous work is quoted or cited in the Rolling Stone article.  No actual Black women are mentioned.  The only hints of Black women’s existence in the article are the fake accounts used as examples.

One consequence of this erasure shows up when the discussion turns to how to combat this kind of disinformation.  The Rolling Stone article doesn’t even discuss the techniques which Black women have been developing and refining for years.  The authors instead suggest teaching “digital civility.”

As @sassycrass points out on Twitter:

Hey, you know what ISN’T the issue here? Civility. How could it be? Per this SAME piece, trolls are GREAT at mimicking semi-civil discourse.

Conversely, as Black women including Ijeoma Oluo, Jamilah Lemieux, Kitanya Harrison, @sassycrass, @AngryBlackLady, and many others have noted, “civility” is a club that’s often used to attack Black women and other people of color.  One way this plays out on Twitter, Facebook, and other social networks is that dissenting voices, particularly those of people of color, are routinely dismissed as Russian bots or trolls.

So the authors’ suggestion of “digital civility” as a remedy to disinformation isn’t just unhelpful.  It actively aids the Russians in their goal of stoking racial dissent.  It’s a good example of a point that Shireen Mitchell and Whitney Phillips discussed last month: when covering disinformation, white journalists who don’t face the same daily threats as people of color

tend to add to the harms because they are not affected by it but also don’t know the nuances enough to tell the story effectively.

Sign saying "Listen to Black Women!"

Unless you’re an expert in disinformation techniques, you probably weren’t aware of the previous work that Black women have done on this front — it hasn’t received very much media attention.  As Manipulation Machines points out, too many journalists aren’t connected to the communities where this work is being done and discussed, so they tend not to write about it.

Still, you don’t need to be an expert to notice that Black women weren’t mentioned in an article about impersonating Black women. Here are a couple of straightforward techniques that have helped me a lot.

  • Always look to see whether an article includes perspectives from Black women with expertise. If not, think twice before sharing it, and be very skeptical about any recommendations it makes. Instead, look for other related articles that are by Black women, or at least include Black women’s perspectives.
  • Listen to what Black women are saying about an article before amplifying it. If they have critiques, amplify the critiques rather than the article.

Fortunately, there are journalists who do feature the work of Black women.  So here are a few additional suggestions:

  • Seek out perspectives from Black women. Read (and support!) publications that feature their work. Buy their books. Follow them on social media. The people I’ve mentioned and linked to here are all great starting points; Imani Gandy’s Fems of Color list is another.
  • When somebody else shares a link that erases Black women, point it out, and provide alternate links.
  • Look at the links you’ve shared, and the links that have been shared to your group. How many are by Black women? How many include Black women’s perspectives? Set a goal of sharing as many articles by Black women as by white men, and as many articles including Black women’s perspectives as white men’s perspectives.

It’s also worth highlighting that there’s often the same pattern of erasure and disinformation with other perspectives that are usually marginalized. So it’s worth rereading the bullet points above, and whenever you see “Black women”, also think about how these suggestions apply to trans, queer, and non-binary people, disabled people, and other groups that are often both erased and targeted with disinformation.

I certainly do agree with the authors of the Rolling Stone article that dealing with disinformation created and distributed with the skill that foreign and domestic actors have today is new ground for most of us.  Digital voter suppression focused on Black voters is going to be a key battleground in the 2020 campaign.  As Shireen Mitchell recently said on Facebook, discussing her appearance on Joy Reid’s MSNBC show:

We got work to do.

Now’s a good time to start.

 


Thanks to Toshiye, Candace, Shasta, Dragos, Deborah, Jacquie, and everybody else who gave feedback on earlier drafts of this article!


WT:Social will have to pick a side

WT:Social logo with pin question marks on top of it

Over 400,000 people have signed up for WT:Social, Jimmy Wales’ news focused social network. The potential is clearly there: I’ve found some high-quality links on WT:Social that I hadn’t seen elsewhere. There’s also a lot of spam and other kinds of noise, along with usability problems and harassment … still, it’s early days yet; these issues may well get addressed over time.  And the problem Jimmy’s trying to solve is a real one: a lot of people want a better way of getting and discussing the news that avoids the clickbait headlines and disinformation that are so common on today’s social networks.

There’s certainly an opportunity here.  Even though WT:Social’s often been described as an alternative to Facebook, its current functionality is very reddit-like: people can share links to “subwikis” (analogous to “subreddits”) that focus on different topics, and discuss them in comments.  Some people love reddit, but there are many topics where reddit’s links are mediocre (or worse) and provide very limited perspectives.  Not only that, reddit discussions are often toxic.   So it made a lot of sense for Jimmy to do an “Ask Me Anything” (AMA) on reddit, as a way of getting the word out and recruiting new users.

The AMA was certainly interesting, with Jimmy answering quite a few questions — including one from me, which I’ll talk about more below.  I don’t spend a lot of time on reddit, so it was also a vivid reminder to me of just how bad discussions on reddit often are.  At the same time, though, it also highlighted one of WT:Social’s biggest challenges.

One of the first questions Jimmy answered was from me, based on an experience I had on WT:Social a couple of days ago, where the software recommended I join a subwiki dedicated to attacks on trans and non-binary people.

Subwikis to join: Stop the Gender-Madness

My question had two parts.  Was that okay?  If not, how will they keep things like this from happening in the future?  Jimmy replied

I’m sure it was deleted quickly – if not let me know. That’s totally unacceptable.

The key to wikis is genuine community control – putting the power in the hands of the quality members of the community rather than having to wait for someone to do something. As we grow, we plan to have more and more tools to allow that kind of control.

It hadn’t been deleted.   So I followed up with a link, and Jimmy (or perhaps another admin) immediately deleted it.  To me, that’s a very good thing.  But not everybody agreed.  For example:

reddit comment from dickheadaccount1: I can't view the group, but did you seriously delete a group that has non-leftist views on gender?

Many of us really don’t want to see anti-trans hate speech — I agree with Jimmy that it’s totally unacceptable.  Others think that saying trans people’s existence is “unscientific and unrealistic” and “harmful to society” (as this subwiki did) is just a “non-leftist view of gender”, not an attack on trans and non-binary people.  Asking trans and non-binary people to “collaborate kindly” with bigots who think they or their friends shouldn’t exist isn’t a solution.   And who winds up getting treated as a “quality member” of the community?

Another question Jimmy answered, this one from sridc, highlights another aspect of the challenge.  The question cited my previous post, Why is “intellectual dark web” content at the top of my feed?, as an example of “victimhood culture” (I feel seen!) and asked how WT:Social would “encourage members to focus on the content, instead of discrediting a news source.”  Here’s Jimmy’s response:

My view is that collaboration and kindness as a part of the culture is a big part of it.

One reason we have a victimhood culture (which goes in many directions) on social media is that you typically have only 3 choices to deal with something awful: block the person so you don’t see them anymore (which doesn’t help the broader community), yell at the person (which is why so many places are poisonous), or report the person (into systems that don’t scale and get it wrong quite a lot).

Better is genuine community control in the wiki way.

There was skepticism in the replies.   JoeMobley complained that “the anti-trumpers get together and down-vote any message they object to.”  Jimmy agreed that voting isn’t particularly helpful in many cases, and talked instead about a technique Wikipedia uses to try to get consensus.  In response to that, sridc complained that his updates to the callout culture Wikipedia page had been edited out by three “leftist editors” who rejected his “reliable sources.”  sridc also gave another example of “leftists,” relating to a page about antifa, and to bolster his case provided a link to an article from … Breitbart.

Looks like not everybody agrees on what’s a “reliable source.”*

Similarly, in the reddit discussion, funknut described Quillette as “a glorified blog for the alt-right to trade misinformation about the humble city of Portland.”  Others objected to this message, and got together and down-voted it.   It’s not just an issue with voting, though.  On WT:Social, the Long Reads subwiki — which everybody joins by default — has featured a series of links from Quillette and other “intellectual dark web” sites, as well as critiques like Jordan Peterson & Fascist Mysticism and 21 Racial Microaggressions You Hear On A Daily Basis.  The discussions there are … very reddit-like.  As I said in my previous post:

Y’know, there are a lot of reasons people are looking for alternatives, but I don’t think I’ve ever heard people say “the real problem with Facebook and Reddit today is that there’s not enough arguing about white supremacy and the ‘intellectual dark web’.”

When the community is split on an issue that people feel passionately about, “community control” isn’t a good enough answer.  In response to another question, Jimmy shared his viewpoint that “Out of every 1,000 people I think 990 of them are perfectly nice and wonderful.”  Whatever the numbers are (your mileage may vary) — and no matter how “nice and wonderful” they are to Jimmy — anti-trans bigots, white supremacists, fascists, and their supporters can ruin a site’s experience for everybody else.

To have any chance of succeeding, WT:Social will have to pick a side.

 


* For what it’s worth, Facebook sides with sridc — they’re paying Breitbart a bunch of money for the right to include their articles as part of their new “high-quality news” page.  But that’s part of the reason that so many people I know are looking for Facebook alternatives, so … let’s just say there’s a range of opinions here, and gab.ai already provides an alternative for people who don’t think Facebook favors the Breitbarts of the world enough.


Thanks to Deborah and everybody else who gave feedback on earlier versions of this post.

Why is “intellectual dark web” content at the top of my feed? Thoughts on WT:Social

WT:Social - News focused social network (the WT:Social logo)

On Friday, I signed up for WT:Social, a news focused social network from Jimmy Wales of Wikipedia fame.  There’s a lot of buzz about WT:Social, and membership is soaring — up from just a few thousand users at the beginning of the month to almost 100,000 when I signed up two days ago.  The waitlist is long, but if you get a paid account ($12.99/month or $100/year) you can skip the queue.

Since I’m also working on some news focused social network software, and am interested to see how others approach the problem, I paid for a month.  If you’re also developing social media software, there’s a lot to learn here, so it might be worth it for you as well.

Otherwise, save your money. [1]

Red flags from the beginning

There were some red flags from the beginning, starting with the lack of up-front information about a code of conduct, anti-harassment policy, or content guidelines.  As Elisa Camahort Page said when we were discussing this

A site that welcomes any content is inevitably a site that welcomes harassment, hate speech, threats, and misinformation. You cannot stave off one if you will not take a stand on the other.

Yeah really.  Eventually I discovered that the Terms and Conditions actually does link out to a Code of Conduct, as well as FAQs on Diversity and Ethics; from the dates on them, they seem to have been written for WT:Social’s previous incarnation as WikiTribune, but presumably they still apply.  Still, most people won’t invest the effort to find these, and so won’t know what’s expected of them.   It’s much better to make sure that people see these right up front — and explicitly agree to them.

Another immediately-obvious problem: the experience using a screen reader is really horrible.  There’s no “skip navigation” link, so the initial experience on the page starts with reading out all the menus and recommended sub-wikis.  Then when you finally get to a link, the title of the article is repeated multiple times, and it reads out the complete URL.  Yikes.

Also, it doesn’t seem like WT:Social has really thought through how people might try to game the system, let alone applied structured techniques like “social threat modeling”. [2]  For example, the notifications are all on by default — meaning new posts get sent to you via email.  What could possibly go wrong?  Here’s a screenshot of some email I got (with the subwiki’s name blanked out).

Email header. From: info@wikitribune.com Subject: WT:Social (wiki name blanked out): Subscribe to Read | Financial Times

In this particular case it was an accident, [3] but you can certainly see how it could get abused.  Mechanisms like this make it open season for spammers, harassers, propagandists, and other unsavory types.

If you have an account there, you can turn the notifications off by going to “My Account” and then “Edit Notifications”.  The link https://wt.social/myaccount/notifications also works, at least for now … although, as Kathy Gill points out, the way the notification dialog uses red and green is problematic from an accessibility perspective.   Here’s what the initial settings look like via Coblis, the color blindness simulator.  Are they on or off?

Notifications dialog, with Off buttons in black and on buttons in grey

Even though I’ve turned all the notifications off, I still see some when I check the site.  Still, it’s a lot better than it was — and things aren’t showing up in my email.

It’s more like reddit than Facebook

Even though a lot of people are describing WT:Social as an alternative to Facebook, it’s really a lot more like reddit.  Links get organized into “subwikis”, which fill a similar role to reddit’s “subreddits”.  You can browse a subwiki, comment on posts there, or join it (which lets you submit links of your own).

The word “subwiki” doesn’t seem like a great choice to me.  Subwikis aren’t wikis, and they aren’t part of a wiki.  In my own informal survey, nobody found it a particularly appealing name.  But it probably sounded good to Jimmy Wales and the people he hangs out with.

Your home page is a “feed” of the most recent posts, along with the most recent comments, from any of the subwikis that you’ve joined.  There are also some “global links” that the people running the site decide everybody gets to see (no way to opt out yet, sorry, and no information about how they decide on which links to send out).   There’s also the additional twist of collaborative wiki-like editing of posts, although I haven’t been able to get it to work yet. [4]

It mostly works.  I was able to figure out how to make a post and share a link myself (although I had to hit refresh to see whether it had succeeded or not).   I like exploring new social networks, so I hunted around and found the FAQ and Known Bugs list. [5]  Putting my civil liberties hat on, I created the Section 215 subwiki to share links about the upcoming USA FREEDOM Act reauthorization battle, and seeded it with a post.  Then I sent invitation links to a couple of friends.

This was, in retrospect, a mistake.  My apologies.  If you’ve also signed up, and are considering inviting other people, please read this footnote first.[6]

How I spent my Friday evening

A few hours later one of the friends I had sent an invitation link to asked me

“Why is there an article from Quillette at the top of my WT:Social feed?”

Good question. I went back to check WT:Social again, and there was an article from Quillette at the top of my feed as well. WTF?

For those of you who don’t know Quillette, it’s an online magazine usually described as part of the “Intellectual Dark Web” (IDW), whose other prominent members include Jordan Peterson, Ben Shapiro, and Jonathan Haidt.  Like others in the IDW, Quillette is polarizing. [7]  Some people see it as upholding values of free speech against the onslaught of SJWs and snowflakes. Others see it as … not the kind of content they want to be confronted with unexpectedly on a Friday night.

Most of my friends fall into the second category, so I hurriedly circled back to the people I had shared invitations with and let them know that they might be in for an unpleasant surprise if they signed up.  Then I looked to see what was going on.

Before we go into that, though, think for a moment about the effect this is likely to have on WT:Social. Lots of people are looking for alternatives to Facebook et al. When somebody like my friend goes to check out a new site and the first thing they see is IDW content … they’re likely to leave, and not come back.

And people who hear about this and don’t want to deal with IDW content might not even bother to check WT:Social out.  When I’ve told other friends that if they sign up for Jimmy Wales’ new social network they might well see IDW content at the top of their feed, their reaction is generally that they’ve got better things to do with their time.

Then again, there are plenty of people out there who actively like IDW content. They’re the ones who are likely to stick around, and invite their friends.  By placing this content so prominently, WT:Social is going to attract them — and drive away the people like me and most of my friends, who would rather not be confronted with IDW content on a Friday night.   This seems like good news for IDW fans who feel like they’re being oppressed by Facebook, Twitter, and reddit.  But as we’ll see, even for them, there are downsides.

Why should IDW fans have all the fun?

Once I looked into it, I realized that what had happened to my friend was fairly straightforward:

  • When they signed up for WT:Social, they were automatically joined to the “Long Reads” subwiki (along with a handful of other subwikis).
  • When somebody shared IDW content to Long Reads, all 16,000 people in the “Long Reads” subwiki (including people like my friend, who were automatically joined when they signed up) saw it at the top of their feed.  It’s quite possible some or all of them got it in their email as well.

It turned out that I had been automatically signed up for the “Long Reads” subwiki too.  When I left it, the Quillette article vanished from my feed.

But wait a second, why should IDW fans have all the fun? So I rejoined “Long Reads” and shared Jessie Daniels’ Twitter and White Supremacy: A Love Story. When I asked another friend to sign up, here’s what they saw at the top of their feed.

WT Social Feed, with "Twitter and White Supremacy" at the top

Of course, criticisms of large tech companies for helping white supremacists are also polarizing.  Some people see this as … not the kind of content they want to be confronted with on a Friday night. One WT:Social member appeared particularly incensed that this link was in his feed, and replied with multiple comments objecting to this “obvious nonsense” and “BS sensationalist headline”. And when I refreshed my front page, there was a heated debate on the Quillette post as well.

Since there isn’t any way to hide posts from your feed, or prevent WT:Social from showing you the five most recent comments on every post, now there was something for everybody!

  • Conservatives looking for alternatives because they feel like they’re being oppressed by corporate social media sites will be immediately irritated by “obvious nonsense.”  Why use WT:Social instead of alt-right fave gab.ai?
  • People looking for alternatives because they feel like corporate social media sites are siding with white supremacists may get a better first impression — but then as soon as they scroll down they’ll see IDW content.  Thanks but no thanks.
  • And people from across the political spectrum will get to see bloviating in comments – with no way to turn it off.  Y’know, there are a lot of reasons people are looking for alternatives, but I don’t think I’ve ever heard people say “the real problem with Facebook and Reddit today is that there’s not enough arguing about white supremacy and the ‘intellectual dark web’.”

A good learning experience

People are continuing to flock to WT:Social: 75,000 new members over the last two days, and the wait list is over 100,000.  The potential is there; for example, somebody posted a link to a story about sexism in Wikipedia, and there were some really great comments.  There are interesting links on some of the subwikis as well.  But judging from the discussion on the site, most people signing up aren’t having good experiences.

WT:Social Subwiki / Spam requests happening, Created about 2 hours ago. Is there a way to block users or delete friend requests? I'm starting to get spam requests already. :-(

Admittedly, it’s early days yet.  WT:Social could learn from this, take a step back, and redesign their system yet again to pay more attention to things like harassment, abuse, and hate speech.  I’m not holding my breath, but we shall see.  I haven’t deleted my account yet, [8] so if you want to friend or follow me, here I am.

More importantly, WT:Social is not the only game in town.  Their initial floundering is also a learning experience for other nascent social networks and news-focused social media.   True, many of the lessons about what not to do could also have been learned from Wikipedia’s own history and from projects like Mastodon and Diaspora that also set out to provide free speech-oriented alternatives to ad-funded, surveillance capitalism social networks.   Still, it’s a good reminder.

And fortunately, there are positive lessons as well.  One big takeaway is the huge amount of interest in WT:Social (as well as MeWe, the privacy-friendly Facebook alternative, which is also currently getting a lot of signups[9]).  A couple of years ago I wrote about a potential tipping point.  Since then, the pent-up demand is continuing to grow — and not just with techies; I’ve seen a lot of activists I know talking about WT:Social.

Another takeaway is that it’s time for a different approach.  What would a social media site look like if it built on best practices and research into anti-harassment, content moderation, online extremism, and amplifying marginalized voices?

Hopefully we’ll start to see some examples of this over the next few months.

Acknowledgements

Many thanks to Shireen, Kaliya, Shasta, Kathy, Elisa, Victoria, Jim, Vicki, Jim, Soren, Deborah and everybody else for the valuable discussion about WT:Social and feedback on earlier versions of this post!

Footnotes

[1] I certainly don’t mind paying for ad-free social media; I’ve had paid subscriptions to Dreamwidth for years, and support a couple of Mastodon instances on Patreon.  But these are all sites that I started using for free and have had good experiences with — and they are asking for a lot less than WT:Social.  Dragos Ruiu describes WT:Social’s approach as a “fee extortion waiting queue”, which is pretty much how I feel about it too.   Also, Wales’ track record is not encouraging; see for example Mathew Ingram’s Wikipedia’s co-founder wanted to let readers edit the news. What went wrong? and Julia Jacobs’ Wikipedia Isn’t Officially a Social Network. But the Harassment Can Get Ugly.

[2] Shireen Mitchell and I discussed social threat modeling in our 2017 SXSW talk.  There’s an overview of related work in The Winds of Change are in the Air.  My personal experience is that taking a social threat modeling approach early in a project is incredibly valuable.  Like so many other security-related issues, this kind of stuff is very hard and expensive to try to patch in after the fact.

[3] Somebody had shared a link to a story from the Financial Times (quite possibly the one about WT:Social) that turned out to be paywalled.  So when WT:Social tried to get the title of the article, it instead got the paywall message.  The software didn’t bother to check for this, but just posted it blithely, and sent out the email update to everybody following the subwiki who hadn’t yet turned off notifications.  The person who had posted the link realized their mistake, and deleted it quickly … but it was too late: the email had already gone out.
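
A minimal sketch of the missing check (the helper and the list of paywall phrases are hypothetical; “Subscribe to Read” is taken from the email above):

    // Phrases that suggest we fetched a paywall or login page instead of an article.
    const PAYWALL_MARKERS = ['Subscribe to Read', 'Sign in to continue', 'Register to continue'];

    async function fetchShareableTitle(url: string): Promise<string | null> {
      const response = await fetch(url);
      if (!response.ok) return null; // don't broadcast error pages

      const html = await response.text();
      const match = html.match(/<title[^>]*>([^<]*)<\/title>/i);
      if (!match) return null;

      const title = match[1].trim();
      // If the "title" looks like a paywall prompt, fall back to asking the poster
      // for a title instead of emailing the prompt to every subscriber.
      return PAYWALL_MARKERS.some((marker) => title.includes(marker)) ? null : title;
    }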

[4] Implementation bugs aside, I don’t understand how this is even supposed to work.  The impression I have is that you can set up posts that anybody can edit and people will then converge on a neutral point of view summary. What could possibly go wrong?

[5] Which has some scary stuff, like not being able to deny a friend request.

[6] Invitation links have some very unexpected behavior: everybody who accepts via the same link gets connected as friends, with no option to approve.  Once again, what could possibly go wrong?

[7] For example, when I shared an earlier draft of this on Facebook, somebody took exception to my classifying Jordan Peterson as “a mainstay” of the IDW.  So for a while the Facebook thread — which was supposed to be discussing WT:Social — turned into an argument about whether or not Peterson aligns with white supremacists, how misogynistic and anti-trans he is or isn’t, what some see as a pattern of passing off bullshit as “scientific studies”, and so on.

[8] Although I’ve cancelled future payments.

[9] Of course, MeWe has challenges of its own.  See Inside MeWe, Where Anti-Vaxxers and Conspiracy Theorists Thrive.