In social theorist Michel Foucault’s classic formulation, “visibility is a trap.” Foucault explained how power is exercised through techniques of surveillance in which people are constantly watched and disciplined, “the object of information, never a subject of communication.” The less obvious the mechanism, the more powerful the disciplinary function of surveillance. It is tempting to point to the smart recording devices we carry around in our pockets and exclaim that “we are all caught inside the digital dragnet!” But, the fact is, we do not all experience the dangers of exposure in equal measure. Consider the “If you see something, say something” signs that litter public spaces, the brigade of White women reporting Black people to the police, the broken windows policies that license law enforcement to discipline small infractions like vandalism and toll-jumping, allegedly in order to deter and prevent larger crimes, and police body cameras that supposedly capture what “really” happened when an officer harasses or kills someone: clearly people are exposed differently to the dangers of surveillance.
In the most comprehensive study of its kind, a group of researchers at Georgetown Law School obtained over 10,000 pages of information from more than 100 police departments across the country to examine how the use of facial recognition software impacts different communities. They found that “[t]he databases they use are disproportionately African American, and the software is especially bad at recognizing Black faces, according to several studies.”34 What’s more, the different global settings in which AI is taught to “see” impact the technical settings designed to identify individuals from various groups. It turns out that algorithms “developed in China, Japan, and South Korea recognized East Asian faces far more readily than Caucasians. The reverse was true for algorithms developed in France, Germany, and the United States, which were significantly better at recognizing Caucasian facial characteristics.”35 This suggests that the political–geographic setting augments the default setting of Whiteness. The ethnoracial makeup of the software design team, the test photo databases, and the larger population of users influence the algorithms’ capacity for recognition, though not in any straightforward sense.
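The disparities these studies report are typically quantified by comparing error rates across demographic groups rather than looking at a single overall accuracy figure. The sketch below is a minimal illustration of that kind of audit, not code from any of the studies cited: the records, group labels, and decision threshold are invented placeholders.

```python
# A minimal sketch of a per-group error-rate audit for a face verification
# system. All data, group names, and the threshold are illustrative
# assumptions, not values from the studies discussed in the text.

from collections import defaultdict

# Each record: (group_label, similarity_score, is_true_match)
attempts = [
    ("group_a", 0.91, True), ("group_a", 0.42, False), ("group_a", 0.78, True),
    ("group_b", 0.66, True), ("group_b", 0.58, False), ("group_b", 0.49, True),
]

THRESHOLD = 0.6  # assumed system-wide decision threshold


def per_group_error_rates(records, threshold):
    """Return false non-match and false match rates for each group."""
    stats = defaultdict(lambda: {"fnm": 0, "genuine": 0, "fm": 0, "impostor": 0})
    for group, score, is_match in records:
        s = stats[group]
        if is_match:
            s["genuine"] += 1
            if score < threshold:      # a genuine pair wrongly rejected
                s["fnm"] += 1
        else:
            s["impostor"] += 1
            if score >= threshold:     # an impostor pair wrongly accepted
                s["fm"] += 1
    return {
        g: {
            "false_non_match_rate": s["fnm"] / s["genuine"] if s["genuine"] else None,
            "false_match_rate": s["fm"] / s["impostor"] if s["impostor"] else None,
        }
        for g, s in stats.items()
    }


print(per_group_error_rates(attempts, THRESHOLD))
```

A single threshold applied system-wide, as here, is exactly where disparities surface: the same cutoff can yield very different false match and false non-match rates for different groups.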
For instance, when it comes to datasets, a 2012 study found that an algorithm trained “exclusively on either African American or Caucasian faces recognized members of the race in its training set more readily than members of any other race.”36 Scholars at Georgetown University’s Center on Privacy and Technology point out that the disparities in facial recognition across racial groups may be introduced “at a number of points in the process of designing and deploying a facial recognition system”:
The engineer that develops an algorithm may program it to focus on facial features that are more easily distinguishable in some races than in others – the shape of a person’s eyes, the width of the nose, the size of the mouth or chin. This decision, in turn, might be based on preexisting biological research about face identification and past practices which themselves may contain bias. Or the engineer may rely on his or her own experience in distinguishing between faces – a process that is influenced by the engineer’s own race.
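Both of the sources of bias named here, the makeup of the training data and the engineer’s feature choices, are at least partly auditable. The sketch below is a minimal illustration of the simplest such check, under assumed inputs rather than anything drawn from the Georgetown report: tallying which groups a training manifest actually contains, which is the mechanism behind the 2012 finding that a model trained overwhelmingly on one group recognizes that group more readily.

```python
# A minimal sketch of a training-set composition audit. The manifest format
# and group labels are assumptions for illustration; real systems would read
# this information from whatever metadata accompanies the training images.

from collections import Counter

# Placeholder manifest: (image_path, group_label) pairs, invented for illustration.
training_manifest = [
    ("img_0001.jpg", "group_a"), ("img_0002.jpg", "group_a"),
    ("img_0003.jpg", "group_a"), ("img_0004.jpg", "group_b"),
    ("img_0005.jpg", "group_a"), ("img_0006.jpg", "group_c"),
]


def composition_report(manifest):
    """Report each group's share of the training set."""
    counts = Counter(group for _, group in manifest)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}


print(composition_report(training_manifest))
# e.g. {'group_a': 0.67, 'group_b': 0.17, 'group_c': 0.17}
```

A skewed report like this one does not by itself prove the resulting model will perform unevenly, but it flags exactly the condition the 2012 study identified as producing uneven recognition rates.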