Reconciling Legal and Technical Approaches to Algorithmic Bias

Alice Xiang

* External authors

Tennessee Law Review

2021

Abstract

In recent years, there has been a proliferation of papers in the algorithmic fairness literature proposing various technical definitions of algorithmic bias and methods to mitigate bias. Whether these algorithmic bias mitigation methods would be permissible from a legal perspective is a complex but increasingly pressing question at a time when there are growing concerns about the potential for algorithmic decision-making to exacerbate societal inequities. In particular, there is a tension around the use of protected class variables: most algorithmic bias mitigation techniques utilize these variables or proxies, but anti-discrimination doctrine has a strong preference for decisions that are blind to them. This Article analyzes the extent to which technical approaches to algorithmic bias are compatible with U.S. anti-discrimination law and recommends a path toward greater compatibility. This question is vital to address because a lack of legal compatibility creates the possibility that biased algorithms might be considered legally permissible while approaches designed to correct for bias might be considered illegally discriminatory. For example, a recent proposed rule from the Department of Housing and Urban Development (“HUD”), which would have established the first instance of a U.S. regulatory definition for algorithmic discrimination, would have created a safe harbor from disparate impact liability for housing-related algorithms that do not use protected class variables or close proxies. An abundance of recent scholarship has shown, however, that simply removing protected class variables and close proxies does little to ensure that the algorithm will not be biased. In fact, this approach, known as “fairness through unawareness” in the machine learning community, is widely considered naive. While the language around algorithms was removed in the final rule, this focus on the visibility of protected attributes in decision-making is central in U.S. anti-discrimination law. Causal inference provides a potential way to reconcile algorithmic fairness techniques with anti-discrimination law. In U.S. law, discrimination is generally thought of as making decisions “because of” a protected class variable. In fact, in Texas Department of Housing and Community Affairs v. Inclusive Communities Project, Inc., the case that motivated the HUD proposed rule, the Court required a “causal connection” between the decision-making process and the disproportionate outcomes. Instead of examining whether protected class variables appear in the algorithm, causal inference would allow for techniques that use protected class variables with the intent of negating causal relationships in the data tied to race. While moving from correlation to causation is challenging—particularly in machine learning, where leveraging correlations to make accurate predictions is typically the goal—doing so offers a way to reconcile technical feasibility and legal precedent while providing protections against algorithmic bias.
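
To make the “fairness through unawareness” critique concrete, the following is a minimal, hypothetical sketch (not drawn from the Article): it uses synthetic data, illustrative variable names (A, proxy, legit), and scikit-learn’s LogisticRegression. Dropping the protected attribute alone leaves the disparity intact because a correlated proxy remains; the final residualization step, which uses the protected attribute at training time, is only a crude stand-in for the causal-inference approaches the abstract discusses, not the Article’s proposal.

```python
# Illustrative sketch (assumptions: synthetic data, hypothetical variable names).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data: protected attribute A, a proxy correlated with A
# (e.g., a neighborhood-derived score), and a feature unrelated to A.
A = rng.integers(0, 2, size=n)
proxy = A + rng.normal(0, 0.5, size=n)
legit = rng.normal(0, 1, size=n)

# Historical outcome partly depends on A, encoding past bias.
y = (0.8 * A + legit + rng.normal(0, 1, size=n) > 0.5).astype(int)

# "Fairness through unawareness": train without A, but the proxy stays in.
X_unaware = np.column_stack([proxy, legit])
scores = LogisticRegression().fit(X_unaware, y).predict_proba(X_unaware)[:, 1]
print("unaware, mean score A=1:", scores[A == 1].mean())
print("unaware, mean score A=0:", scores[A == 0].mean())  # disparity persists

# Contrast: use A at training time to strip its influence from the proxy
# (a crude stand-in for the causal adjustments the abstract points toward).
group_means = {a: proxy[A == a].mean() for a in (0, 1)}
proxy_adj = proxy - np.array([group_means[a] for a in A])
X_adj = np.column_stack([proxy_adj, legit])
scores_adj = LogisticRegression().fit(X_adj, y).predict_proba(X_adj)[:, 1]
print("adjusted, mean score A=1:", scores_adj[A == 1].mean())
print("adjusted, mean score A=0:", scores_adj[A == 0].mean())
```

Note that the adjusted model must observe the protected attribute during training in order to neutralize its influence, which is precisely the tension with anti-discrimination doctrine’s preference for blindness that the Article examines.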

Related Publications

On the Validity of Arrest as a Proxy for Offense: Race and the Likelihood of Arrest for Violent Crimes

AIES, 2021
Riccardo Fogliato*, Alice Xiang, Zachary Lipton*, Daniel Nagin*, Alexandra Chouldechova*

The risk of re-offense is considered in decision-making at many stages of the criminal justice system, from pre-trial, to sentencing, to parole. To aid decision makers in their assessments, institutions increasingly rely on algorithmic risk assessment instruments (RAIs). The…

"What We Can’t Measure, We Can’t Understand": Challenges to Demographic Data Procurement in the Pursuit of Fairness

FAccT, 2021
McKane Andrus*, Elena Spitzer*, Jeffrey Brown*, Alice Xiang

As calls for fair and unbiased algorithmic systems increase, so too does the number of individuals working on algorithmic fairness in industry. However, these practitioners often do not have access to the demographic data they feel they need to detect bias in practice. Even …
