Alice Xiang

Profile

Alice Xiang is the Global Head of AI Ethics at Sony. As the VP leading AI ethics initiatives across Sony Group, she manages the team responsible for conducting AI ethics assessments across Sony's business units and implementing Sony's AI Ethics Guidelines. In addition, as the Lead Research Scientist for AI ethics at Sony AI, Alice leads a lab of AI researchers working on cutting-edge research to enable the development of more responsible AI solutions. Alice also recently served as a General Chair for the ACM Conference on Fairness, Accountability, and Transparency (FAccT), the premier multidisciplinary research conference on these topics.

Alice previously served on the leadership team of the Partnership on AI. As the Head of Fairness, Transparency, and Accountability Research, she led a team of interdisciplinary researchers and a portfolio of multi-stakeholder research initiatives. She also served as a Visiting Scholar at Tsinghua University’s Yau Mathematical Sciences Center, where she taught a course on Algorithmic Fairness, Causal Inference, and the Law.

She has been quoted in the Wall Street Journal, MIT Technology Review, Fortune, Yahoo Finance, and VentureBeat, among others. She has given guest lectures at the Simons Institute at Berkeley, USC, Harvard, and SNU Law School, among other institutions. Her research has been published in top machine learning conferences, journals, and law reviews.

Alice is both a lawyer and a statistician, with experience developing machine learning models and serving as legal counsel for technology companies. Alice holds a Juris Doctor from Yale Law School, a Master’s in Development Economics from Oxford, a Master’s in Statistics from Harvard, and a Bachelor’s in Economics from Harvard.

Message

“Our AI Ethics research team conducts cutting-edge research on fairness, transparency, and accountability in AI. Our projects aim to enable the development of more ethical AI within Sony and also to contribute to the global research discourse around AI ethics. Our goal is to make Sony a global leader in AI ethics.”

Publications

Considerations for Ethical Speech Recognition Datasets

WSDM, 2023
Orestis Papakyriakopoulos, Alice Xiang

Speech AI technologies are largely trained on publicly available datasets or on massive web crawls of speech. In both cases, data acquisition focuses on minimizing collection effort, without necessarily taking the data subjects’ protection or user needs into considerat…

Causality for Temporal Unfairness Evaluation and Mitigation

NeurIPS, 2022
Aida Rahmattalabi, Alice Xiang

Recent interest in causality for fair decision-making systems has been accompanied by great skepticism due to practical and epistemological challenges in applying existing causal fairness approaches. Existing works mainly seek to remove the causal effect of social categ…

Men Also Do Laundry: Multi-Attribute Bias Amplification

NeurIPS, 2022
Dora Zhao, Jerone T. A. Andrews, Alice Xiang

As computer vision systems become more widely deployed, there is increasing concern from both the research community and the public that these systems are not only reproducing but amplifying harmful social biases. The phenomenon of bias amplification, which is the focus of t…

A View From Somewhere: Human-Centric Face Representations

NeurIPS, 2022
Jerone T. A. Andrews, Przemyslaw Joniak*, Alice Xiang

We propose to implicitly learn a set of continuous face-varying dimensions, without ever asking an annotator to explicitly categorize a person. We uncover the dimensions by learning on a novel dataset of 638,180 human judgments of face similarity (FAX). We demonstrate the ut…

Human-Centric Visual Diversity Auditing

ECCV, 2022
Jerone T. A. Andrews, Przemyslaw Joniak*, Alice Xiang

Biases in human-centric computer vision models are often attributed to a lack of sufficient data diversity, with many demographics insufficiently represented. However, auditing datasets for diversity can be difficult, due to an absence of ground-truth labels of relevant feat…

Attrition of Workers with Minoritized Identities on AI Teams

EAAMO, 2022
Jeffrey Brown*, Tina Park*, Jiyoo Chang*, McKane Andrus*, Alice Xiang, Christine Custis*

The effects of AI systems are far-reaching and affect diverse communities all over the world. The demographics of AI teams, however, do not reflect this diversity. Instead, these teams, particularly at big tech companies, are dominated by Western, White, and male workers…

Reconciling Legal and Technical Approaches to Algorithmic Bias

Tennessee Law Review, 2021
Alice Xiang

In recent years, there has been a proliferation of papers in the algorithmic fairness literature proposing various technical definitions of algorithmic bias and methods to mitigate bias. Whether these algorithmic bias mitigation methods would be permissible from a legal pers…

On the Validity of Arrest as a Proxy for Offense: Race and the Likelihood of Arrest for Violent Crimes

AIES, 2021
Riccardo Fogliato*, Alice Xiang, Zachary Lipton*, Daniel Nagin*, Alexandra Chouldechova*

The risk of re-offense is considered in decision-making at many stages of the criminal justice system, from pre-trial, to sentencing, to parole. To aid decision makers in their assessments, institutions increasingly rely on algorithmic risk assessment instruments (RAIs). The…

"What We Can’t Measure, We Can’t Understand": Challenges to Demographic Data Procurement in the Pursuit of Fairness

FAccT, 2021
McKane Andrus*, Elena Spitzer*, Jeffrey Brown*, Alice Xiang

As calls for fair and unbiased algorithmic systems increase, so too does the number of individuals working on algorithmic fairness in industry. However, these practitioners often do not have access to the demographic data they feel they need to detect bias in practice. Even …

Blog

March 16, 2023 | AI Ethics

Being 'Seen' vs. 'Mis-Seen': Tensions Between Privacy and Fairness in Computer Vision

Privacy is a fundamental human right that ensures individuals can keep their personal information and activities private. In the context of computer vision, privacy concerns arise when cameras and other sensors collect personal in…

May 12, 2021 | Sony AI

Launching our AI Ethics Research Flagship

I recently joined Sony AI from the Partnership on AI, where I served on the Leadership Team and led a team of researchers focused on fairness, transparency, and accountability in AI. In that role, I had a unique vantage point in…

JOIN US

Shape the Future of AI with Sony AI

We want to hear from those of you who have a strong desire to shape the future of AI.