A recent initiative called Auditing Algorithms is developing a research community around the practice of auditing, and in August 2018 it released a white paper that outlines a way forward.58 To date, the Data & Society Research Institute offers the most thorough elaboration of “algorithmic accountability,” noting that “there are few consumer or civil rights protections that limit the type of data used to build data profiles or audit algorithmic decision-making.”59 Advancing this effort, danah boyd and M. C. Elish pose three crucial questions that are a starting point for any tech equity audit of AI systems:
What are the unintended consequences of designing systems at scale on the basis of existing patterns in society?
When and how should AI systems prioritize individuals over society and vice versa?
When is introducing an AI system the right answer – and when is it not?
Crucially, such audits need to be independent and enforceable. There are currently no industry-wide standards for social impact that fully account for the ways in which algorithms are used to “allocate housing, healthcare, hiring, banking, social services as well as goods and service delivery.”60 Google’s AI ethics principles, created in the aftermath of the controversy over the company’s Pentagon contract, are a good start, but they focus too narrowly on military and surveillance technologies and, by relying on “widely accepted principles of international law and human rights,” sidestep the common practice of governments surveilling their own citizens. Nor do these principles ensure independent and transparent review; they follow instead a now-common pattern in corporate governance that maintains “internal, secret processes” and precludes public accountability.