On the Validity of Arrest as a Proxy for Offense: Race and the Likelihood of Arrest for Violent Crimes

Authors
- Riccardo Fogliato*
- Alice Xiang
- Zachary Lipton*
- Daniel Nagin*
- Alexandra Chouldechova*

* External authors

Venue
- AIES-2021

Date
- 2021
Abstract
The risk of re-offense is considered in decision-making at many stages of the criminal justice system, from pre-trial, to sentencing, to parole. To aid decision makers in their assessments, institutions increasingly rely on algorithmic risk assessment instruments (RAIs). These tools assess the likelihood that an individual will be arrested for a new criminal offense within some time window following their release. However, since not all crimes result in arrest, RAIs do not directly assess the risk of re-offense. Furthermore, disparities in the likelihood of arrest can potentially lead to biases in the resulting risk scores. Several recent validations of RAIs have therefore focused on arrests for violent offenses, which are viewed as more accurate reflections of offending behavior. In this paper, we investigate biases in violent arrest data by analyzing racial disparities in the likelihood of arrest for White and Black violent offenders. We focus our study on 2007–2016 incident-level data of violent offenses from 16 US states as recorded in the National Incident Based Reporting System (NIBRS). Our analysis shows that the magnitude and direction of the racial disparities depend on various characteristics of the crimes. In addition, our investigation reveals large variations in arrest rates across geographical locations and offense types. We discuss the implications of the observed disconnect between re-arrest and re-offense in the context of RAIs and the challenges around the use of NIBRS data to correct for the sampling bias.
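To make the proxy problem concrete, below is a minimal sketch (in Python with pandas) of how race-conditional arrest rates could be tabulated from NIBRS-style incident-level data. This is not the authors' code, and the column names ("offender_race", "offense_type", "state", "cleared_by_arrest") are hypothetical stand-ins for the corresponding NIBRS fields.

```python
# Minimal sketch (not the authors' code): estimating arrest rates
# conditional on offender race and offense characteristics from a
# NIBRS-style incident-level table. Each row is one violent offense;
# "cleared_by_arrest" is a hypothetical 0/1 flag indicating whether
# the incident led to an arrest of the offender.
import pandas as pd

def arrest_rates(incidents: pd.DataFrame) -> pd.DataFrame:
    """Arrest rate and incident count per (race, offense type, state) cell."""
    return (
        incidents
        .groupby(["offender_race", "offense_type", "state"])["cleared_by_arrest"]
        .agg(arrest_rate="mean", n_incidents="size")
        .reset_index()
    )

# A gap in arrest_rate across offender_race within the same offense/state
# cell is exactly the kind of differential label noise that makes arrest
# an imperfect proxy for offense when training or validating an RAI.
```

The design point is that arrest rates must be compared within matched offense and location cells, since the abstract notes that the magnitude and direction of racial disparities vary with crime characteristics and geography, so a single pooled rate would obscure them.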