Alice
Xiang

Profile

Alice Xiang is the Global Head of AI Ethics at Sony. As the VP leading AI ethics initiatives across Sony Group, she manages the team responsible for conducting AI ethics assessments across Sony's business units and implementing Sony's AI Ethics Guidelines. In addition, as the Lead Research Scientist for AI ethics at Sony AI, Alice leads a lab of AI researchers working on cutting-edge research to enable the development of more responsible AI solutions. Alice also recently served as a General Chair for the ACM Conference on Fairness, Accountability, and Transparency (FAccT), the premier multidisciplinary research conference on these topics.

Alice previously served on the leadership team of the Partnership on AI. As the Head of Fairness, Transparency, and Accountability Research, she led a team of interdisciplinary researchers and a portfolio of multi-stakeholder research initiatives. She also served as a Visiting Scholar at Tsinghua University’s Yau Mathematical Sciences Center, where she taught a course on Algorithmic Fairness, Causal Inference, and the Law.

She has been quoted in the Wall Street Journal, MIT Tech Review, Fortune, Yahoo Finance, and VentureBeat, among others. She has given guest lectures at the Simons Institute at Berkeley, USC, Harvard, and SNU Law School, among other institutions. Her research has been published in top machine learning conferences, journals, and law reviews.

Alice is both a lawyer and statistician, with experience developing machine learning models and serving as legal counsel for technology companies. Alice holds a Juris Doctor from Yale Law School, a Master’s in Development Economics from Oxford, a Master’s in Statistics from Harvard, and a Bachelor’s in Economics from Harvard.

Message

“Our AI Ethics research team conducts cutting-edge research on fairness, transparency, and accountability in AI. Our projects aim to enable the development of more ethical AI within Sony and also to contribute to the global research discourse around AI ethics. Our goal is to make Sony a global leader in AI ethics.”

Publications

Measure dataset diversity, don’t just claim it

ICML, 2024
Dora Zhao*, Jerone T. A. Andrews, Orestis Papakyriakopoulos*, Alice Xiang

Machine learning (ML) datasets, often perceived as neutral, inherently encapsulate abstract and disputed social constructs. Dataset curators frequently employ value-laden terms such as diversity, bias, and quality to characterize datasets. Despite their prevalence, these ter…
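To make the paper's call concrete — measuring diversity rather than merely claiming it — one simple, reportable option (a hypothetical sketch, not the paper's own method) is the Shannon entropy of an annotated attribute's label distribution:

```python
import math
from collections import Counter

def attribute_entropy(labels):
    """Shannon entropy (in bits) of an attribute's label distribution.

    Higher entropy means labels are spread more evenly across classes,
    one quantifiable notion of dataset diversity along that attribute.
    """
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A perfectly balanced attribute maximizes entropy (log2 of the class count).
balanced = ["a", "b", "c", "d"] * 25
skewed = ["a"] * 97 + ["b", "c", "d"]
print(attribute_entropy(balanced))  # 2.0
print(attribute_entropy(skewed))    # ≈ 0.24
```

Reporting a number like this per attribute, alongside how the attribute was annotated, is one way a curator could substantiate a diversity claim.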

Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech Generators

FAccT, 2024
Wiebke Hutiri*, Orestis Papakyriakopoulos*, Alice Xiang

The rapid and wide-scale adoption of AI to generate human speech poses a range of significant ethical and safety risks to society that need to be addressed. For example, a growing number of speech generation incidents are associated with swatting attacks in the United States…

Ethical Considerations for Responsible Data Curation

NeurIPS, 2023
Jerone Andrews, Dora Zhao*, William Thong, Apostolos Modas, Orestis Papakyriakopoulos*, Alice Xiang

Human-centric computer vision (HCCV) data curation practices often neglect privacy and bias concerns, leading to dataset retractions and unfair models. HCCV datasets constructed through nonconsensual web scraping lack crucial metadata for comprehensive fairness and robustnes…

Beyond Skin Tone: A Multidimensional Measure of Apparent Skin Color

ICCV, 2023
William Thong, Przemyslaw Joniak*, Alice Xiang

This paper strives to measure apparent skin color in computer vision, beyond a unidimensional scale on skin tone. In their seminal paper Gender Shades, Buolamwini and Gebru have shown how gender classification systems can be biased against women with darker skin tones. While…
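The multidimensional measure pairs perceptual lightness with hue, so that skin color varies along both a light/dark axis and a red/yellow axis. A minimal sketch of how such coordinates could be computed from an sRGB skin pixel (an illustrative implementation, assuming a D65 white point; not the paper's released code):

```python
import math

def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIELAB (D65 white point)."""
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # Linear RGB -> XYZ using the standard sRGB matrix.
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def skin_color_coords(r, g, b):
    """Two-dimensional skin color descriptor: perceptual lightness L*
    and hue angle in degrees (the light/dark and red/yellow axes)."""
    L, a, b_star = srgb_to_lab(r, g, b)
    return L, math.degrees(math.atan2(b_star, a))
```

For a warm skin tone such as `skin_color_coords(233, 193, 164)`, lightness lands around 80 and the hue angle falls between the red (0°) and yellow (90°) directions, illustrating why a single tone scale misses the hue dimension.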

Flickr Africa: Examining Geo-Diversity in Large-Scale, Human-Centric Visual Data

AIES, 2023
Keziah Naggita*, Julienne LaChance, Alice Xiang

Biases in large-scale image datasets are known to influence the performance of computer vision models as a function of geographic context. To investigate the limitations of standard Internet data collection methods in low- and middle-income countries, we analyze human-centri…

Augmented data sheets for speech datasets and ethical decision-making

FAccT, 2023
Orestis Papakyriakopoulos*, Anna Seo Gyeong Choi*, William Thong, Dora Zhao*, Jerone Andrews, Rebecca Bourke, Alice Xiang, Allison Koenecke*

Human-centric image datasets are critical to the development of computer vision technologies. However, recent investigations have foregrounded significant ethical issues related to privacy and bias, which have resulted in the complete retraction, or modification, of several …

A View From Somewhere: Human-Centric Face Representations

ICLR, 2023
Jerone T. A. Andrews, Przemyslaw Joniak*, Alice Xiang

Few datasets contain self-identified sensitive attributes, inferring attributes risks introducing additional biases, and collecting attributes can carry legal risks. Besides, categorical labels can fail to reflect the continuous nature of human phenotypic diversity, making i…

Being 'Seen' vs. 'Mis-Seen': Tensions between Privacy and Fairness in Computer Vision

Harvard Journal of Law & Technology, 2023
Alice Xiang

The rise of facial recognition and related computer vision technologies has been met with growing anxiety over the potential for artificial intelligence (“AI”) to create mass surveillance systems and further entrench societal biases. These concerns have led to calls for grea…

Considerations for Ethical Speech Recognition Datasets

WSDM, 2023
Orestis Papakyriakopoulos*, Alice Xiang

Speech AI Technologies are largely trained on publicly available datasets or by the massive web-crawling of speech. In both cases, data acquisition focuses on minimizing collection effort, without necessarily taking the data subjects’ protection or user needs into considerat…

Causality for Temporal Unfairness Evaluation and Mitigation

NeurIPS, 2022
Aida Rahmattalabi, Alice Xiang

Recent interest in causality for fair decision-making systems has been accompanied by great skepticism due to practical and epistemological challenges in applying existing causal fairness approaches. Existing works mainly seek to remove the causal effect of social categ…

Men Also Do Laundry: Multi-Attribute Bias Amplification

NeurIPS, 2022
Dora Zhao*, Jerone T. A. Andrews, Alice Xiang

As computer vision systems become more widely deployed, there is increasing concern from both the research community and the public that these systems are not only reproducing but amplifying harmful social biases. The phenomenon of bias amplification, which is the focus of t…

A View From Somewhere: Human-Centric Face Representations

NeurIPS, 2022
Jerone T. A. Andrews, Przemyslaw Joniak*, Alice Xiang

We propose to implicitly learn a set of continuous face-varying dimensions, without ever asking an annotator to explicitly categorize a person. We uncover the dimensions by learning on a novel dataset of 638,180 human judgments of face similarity (FAX). We demonstrate the ut…

Men Also Do Laundry: Multi-Attribute Bias Amplification

ECCV, 2022
Dora Zhao*, Jerone T. A. Andrews, Alice Xiang

As computer vision systems become more widely deployed, there is increasing concern from both the research community and the public that these systems are not only reproducing but amplifying harmful social biases. The phenomenon of bias amplification, which is the focus of t…
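Bias amplification, the phenomenon this work generalizes to multiple attributes, can be illustrated with a simplified single-attribute sketch (a hypothetical example in the spirit of co-occurrence-based metrics, not the paper's multi-attribute formulation): compare how often an attribute co-occurs with a task in a model's predictions versus in its training data.

```python
def bias_score(pairs, attr, task):
    """P(attribute | task) estimated from (attribute, task) co-occurrence pairs."""
    task_total = sum(1 for a, t in pairs if t == task)
    both = sum(1 for a, t in pairs if a == attr and t == task)
    return both / task_total if task_total else 0.0

def bias_amplification(train_pairs, pred_pairs, attr, task):
    """Positive when the model's predictions skew the attribute-task
    co-occurrence further than the training data already did."""
    return bias_score(pred_pairs, attr, task) - bias_score(train_pairs, attr, task)

# Training set: 'cooking' co-occurs with 'woman' 66% of the time;
# a biased model predicts that pairing 80% of the time.
train = [("woman", "cooking")] * 66 + [("man", "cooking")] * 34
preds = [("woman", "cooking")] * 80 + [("man", "cooking")] * 20
print(bias_amplification(train, preds, "woman", "cooking"))  # ≈ 0.14
```

A positive score means the system did not merely reproduce the training skew but amplified it, which is the behavior the paper's metric is designed to surface across many attributes at once.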

Human-Centric Visual Diversity Auditing

ECCV, 2022
Jerone T. A. Andrews, Przemyslaw Joniak*, Alice Xiang

Biases in human-centric computer vision models are often attributed to a lack of sufficient data diversity, with many demographics insufficiently represented. However, auditing datasets for diversity can be difficult, due to an absence of ground-truth labels of relevant feat…

Attrition of Workers with Minoritized Identities on AI Teams

EAAMO, 2022
Jeffrey Brown*, Tina Park*, Jiyoo Chang*, McKane Andrus*, Alice Xiang, Christine Custis*

The effects of AI systems are far-reaching and affect diverse communities all over the world. The demographics of AI teams, however, do not reflect this diversity. Instead, these teams, particularly at big tech companies, are dominated by Western, White, and male workers…

Reconciling Legal and Technical Approaches to Algorithmic Bias

Tennessee Law Review, 2021
Alice Xiang

In recent years, there has been a proliferation of papers in the algorithmic fairness literature proposing various technical definitions of algorithmic bias and methods to mitigate bias. Whether these algorithmic bias mitigation methods would be permissible from a legal pers…

On the Validity of Arrest as a Proxy for Offense: Race and the Likelihood of Arrest for Violent Crimes

AIES, 2021
Riccardo Fogliato*, Alice Xiang, Zachary Lipton*, Daniel Nagin*, Alexandra Chouldechova*

The risk of re-offense is considered in decision-making at many stages of the criminal justice system, from pre-trial, to sentencing, to parole. To aid decision makers in their assessments, institutions increasingly rely on algorithmic risk assessment instruments (RAIs). The…

"What We Can’t Measure, We Can’t Understand": Challenges to Demographic Data Procurement in the Pursuit of Fairness

FAccT, 2021
McKane Andrus*, Elena Spitzer*, Jeffrey Brown*, Alice Xiang

As calls for fair and unbiased algorithmic systems increase, so too does the number of individuals working on algorithmic fairness in industry. However, these practitioners often do not have access to the demographic data they feel they need to detect bias in practice. Even …

Blog

March 29, 2024 | Life at Sony AI

Celebrating the Women of Sony AI: Sharing Insights, Inspiration, and Advice

In March, the world commemorates the accomplishments of women throughout history and celebrates those of today. The United States observes March as Women’s History Month, while many countries around the globe observe International…

January 18, 2024 | Events

Navigating Responsible Data Curation Takes the Spotlight at NeurIPS 2023

The field of Human-Centric Computer Vision (HCCV) is rapidly progressing, and some researchers are raising a red flag on the current ethics of data curation. A primary concern is that today’s practices in HCCV data curation – whic…

December 13, 2023 | Events

Sony AI Reveals New Research Contributions at NeurIPS 2023

Sony Group Corporation and Sony AI have been active participants in the annual NeurIPS Conference for years, contributing pivotal research that has helped to propel the fields of artificial intelligence and machine learning forwar…

September 21, 2023 | AI Ethics

Beyond Skin Tone: A Multidimensional Measure of Apparent Skin Color

Advancing Fairness in Computer Vision: A Multi-Dimensional Approach to Skin Color Analysis. In the ever-evolving landscape of artificial intelligence (AI) and computer vision, fairness is a principle that has gained substantial …

June 29, 2023 | AI Ethics

New Dataset Labeling Breakthrough Strips Social Constructs in Image Recognition

The outputs of AI as we know them today are created through deeply collaborative processes between humans and machines. The reality is that you cannot …

April 17, 2023 | AI Ethics

Exposing Limitations in Fairness Evaluations: Human Pose Estimation

As AI technology becomes increasingly ubiquitous, we reveal new ways in which AI model biases may harm individuals. In 2018, for example, researchers Joy Buolamwini and Timnit Gebru revealed how commercial services classify human …

March 16, 2023 | AI Ethics

Being 'Seen' vs. 'Mis-Seen': Tensions Between Privacy and Fairness in Computer Vision

Privacy is a fundamental human right that ensures individuals can keep their personal information and activities private. In the context of computer vision, privacy concerns arise when cameras and other sensors collect personal in…

May 12, 2021 | AI Ethics

Launching our AI Ethics Research Flagship

I recently joined Sony AI from the Partnership on AI, where I served on the Leadership Team and led a team of researchers focused on fairness, transparency, and accountability in AI. In that role, I had a unique vantage point in…
