Algorithmic discrimination is certainly only one facet of a much wider phenomenon, in which what it means to be human is called into question. What do “free will” and “autonomy” mean in a world in which algorithms are tracking, predicting, and persuading us at every turn? Historian Yuval Noah Harari warns that tech knows us better than we know ourselves, and that “we are facing not just a technological crisis but a philosophical crisis.”63 This is an industry with access to data and capital that exceeds that of sovereign nations, and it throws sovereignty itself into question when such technologies draw upon the science of persuasion to track, addict, and manipulate the public. At stake is a redefinition of human identity, autonomy, core constitutional rights, and democratic principles more broadly.64
In this context, one could argue that the racial dimensions of the problem are a subplot of (even a distraction from) the main action of humanity at risk. But, as philosopher Sylvia Wynter has argued, our very notion of what it means to be human is fragmented by race and other axes of difference. She posits that there are different “genres” of humanity that include “full humans, not-quite humans, and nonhumans,”65 through which racial, gendered, and colonial hierarchies are encoded. The pseudo-universal version of humanity, “the Man,” she argues, is only one genre, and one predicated on anti-Blackness. As such, Black humanity and freedom entail thinking and acting beyond the dominant genre, which could include telling different stories about the past, the present, and the future.66
But what does this have to do with coded inequity? First, it’s true, anti-Black technologies do not necessarily limit their harm to those coded Black.67 However, a universalizing lens may actually hide many of the dangers of discriminatory design, because in many ways Black people already live in the future.68 The plight of Black people has consistently been a harbinger of wider processes – bankers using financial technologies to prey on Black homeowners, law enforcement using surveillance technologies to control Black neighborhoods, or politicians using legislative techniques to disenfranchise Black voters – which then get rolled out on an even wider scale. An #AllLivesMatter approach to technology is not only false inclusion but also poor planning, especially by those who fancy themselves futurists.
Many tech enthusiasts wax poetic about a posthuman world and, indeed, the expansion of big data analytics, predictive algorithms, and AI animates digital dreams of living beyond the human mind and body – even beyond human bias and racism. But posthumanist visions assume that we have all had a chance to be human. How nice it must be … to be so tired of living mortally that one dreams of immortality. Like so many other “posts” (postracial, postcolonial, etc.), posthumanism grows out of the Man’s experience. This means that, by decoding the racial dimensions of technology and the way in which different genres of humanity are constructed in the process, we gain a keener sense of the architecture of power – and not simply as a top-down story of powerful tech companies imposing coded inequity onto an innocent public. This is also about how we (click) submit, because of all that we seem to gain by having our choices and behaviors tracked, predicted, and racialized. The director of research at Diversity, Inc. put it to me like this: “Would you really want to see a gun-toting White man in a Facebook ad?” Tailoring ads makes economic sense for companies that try to appeal to people “like me”: a Black woman whose sister-in-law was killed in a mass shooting, who has had to “shelter in place” after a gunman opened fire in a neighboring building minutes after I delivered a talk, and who worries that her teenage sons may be assaulted by police or vigilantes. Fair enough. Given these powerful associations, a gun-toting White man would probably not be the best image for getting my business.