DRAFT: A short reading list on Facial Recognition

Here are some links, mostly short articles by AI researchers and lawyers, that I’ve found very helpful in thinking about facial recognition. I’ve included excerpts, but if you’ve got the time, the full articles are really worth reading. A couple of them are paywalled, so I’ve included PDFs as well. At the end, I’ve also included some press coverage of current legislation in Washington State. This list is a work in progress, so please let me know about other links you think should be here!

San Francisco was right to ban facial recognition. Surveillance is a real danger, Veena Dubal (Associate Professor of Law at the University of California, Hastings)

The concerns that motivated the San Francisco ban are rooted not just in the potential inaccuracy of facial recognition technology, but in a long national history of politicized and racially-biased state surveillance…. Based on my years of working as a civil rights advocate and attorney representing Muslim Americans in the aftermath of September 11th, I recognize that the debate’s singular focus on the technology is a red herring.

Even in an imaginary future where algorithmic discrimination does not exist, facial recognition software simply cannot de-bias the practice and impact of state surveillance. In fact, the public emphasis on curable algorithmic inaccuracies leaves the concerns that motivated the San Francisco ban historically and politically decontextualized.

Should Government Halt the Use of Facial-Recognition Technology?, Meredith Whittaker (co-founder of AI Now Institute)

But inaccuracy is far from facial recognition’s only problem. Facial recognition is generally applied by those who already have power—like employers, landlords and the police—to surveil and in some cases oppress those who don’t. And because there are no restrictions on data sharing between private users and government, regulations that focus solely on government use don’t account for the full threat of this technology.

We already see alleged abuse of power in practice, in cases like the Baltimore Police Department and Freddie Gray protesters. The police reportedly scanned photos to identify and target people with outstanding warrants, according to material posted on the ACLU of Northern California website—chilling free speech and undermining the right to due process….

It is difficult to think of an industry where we permit companies to treat the public as experimental subjects, deploying untested, unverified and faulty technology that has been proved to violate civil rights and to amplify bias and discrimination. The dangers of bias and error, and the way in which facial recognition exacerbates existing inequalities, have serious implications even when the systems are automating “routine” functions like check-ins at airports.

Facial recognition is the plutonium of AI, Luke Stark (Researcher at Microsoft Research); the PDF is available here.

Yet even if facial recognition systems were ever able to map each and every human face with technical perfection, the core conceptual mechanism of facial recognition would remain, in my view, irredeemably discriminatory. If human societies were not racist, facial recognition technologies would incline them toward racism; as human societies are often racist, facial recognition exacerbates that animus.

In addition to the surveillance aspects, facial recognition systems also have a huge impact on trans, non-binary, and gender non-conforming people. Os Keyes (UW grad student and Ada Lovelace fellow, who testified in Olympia last year on a facial recognition bill) has done some outstanding work here. In A UW engineer explains how facial recognition tech erases trans people, they say:

Facial recognition systems generally have a lot of problems when it comes to identities that aren’t static. So people whose appearances change over time, for example. People who develop facial recognition are aware of this in the context of aging. They talk about how the system doesn’t work for really old or really young people, because apparently all wrinkles look the same and all chubby babies look the same.

The idea of transition, for example, just doesn’t factor in at all. Facial recognition developers seem entirely ignorant of the possibility that trans people exist and that, because we exist, who we are is not going to be a continuous, static, lifelong thing.

[Facial recognition has a] binary nature. It’s also the physiological nature. Also, frankly, the technology doesn’t just model gender, with consequences for people who don’t fit the model well; it also shapes our notion of what gender is.

The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition is a research paper by Keyes (the PDF is available here) looking at Automatic Gender Recognition (AGR), a subfield of facial recognition that aims to algorithmically identify the gender of individuals from photographs or videos.

AGR consistently operationalises gender in a trans-exclusive way, and consequently carries disproportionate risk for trans people subject to it.

Sasha Costanza-Chock’s outstanding Design Justice, A.I., and Escape from the Matrix of Domination is phrased in terms of A.I. systems in general, but this section in particular applies very much to facial recognition systems.

Universalist design principles and practices erase certain groups of people, specifically those who are intersectionally disadvantaged or multiply-burdened under capitalism, white supremacy, heteropatriarchy, and settler colonialism. What is more, when technologists do consider inequality in technology design (and most professional design processes do not consider inequality at all), they nearly always employ a single-axis [as opposed to intersectional] framework. Most design processes today are therefore structured in ways that make it impossible to see, engage with, account for, or attempt to remedy the unequal distribution of benefits and burdens that they reproduce. As Crenshaw notes, feminist or antiracist theory or policy that is not grounded in intersectional understanding of gender and race cannot adequately address the experiences of Black women, or other multiply-burdened people, when it comes to the formulation of policy demands. The same must be true when it comes to our ‘design demands’ for A.I. systems, including technical standards, training data, benchmarks, bias audits, and so on.

Washington State Facial Recognition Legislation, 2020 (SB 6280/SB 6281)

Inside Washington state’s battle with big tech over privacy

https://www.wired.com/story/microsoft-looms-privacy-debate-home-state/

Who is taking and selling your personal data? Washington lawmakers working to give people more of a say

https://www.seattletimes.com/seattle-news/politics/who-is-taking-and-selling-your-personal-data-washington-lawmakers-working-to-give-people-have-more-of-a-say/