
Attrition of Workers with Minoritized Identities on AI Teams

Jeffrey Brown*

Tina Park*

Jiyoo Chang*

McKane Andrus*

Alice Xiang

Christine Custis*

* External authors

EAAMO 2022



The effects of AI systems are far-reaching and affect diverse communities all over the world. The demographics of AI teams, however, do not reflect this diversity. Instead, these teams, particularly at big tech companies, are dominated by Western, White, and male workers. Strategies for preventing harms done by AI must also include making these teams more representative of the diverse communities that these technologies affect. The pipeline of students from K-12 through the university level contributes to this problem: those with minoritized identities are underrepresented in, or excluded from, pursuing computer science careers. However, relatively little attention has been given to how the culture at tech companies, let alone on AI teams, contributes to the attrition of minoritized people in the workplace. The current study uses semi-structured interviews with minoritized workers on AI teams, managers of AI teams, and leaders working on diversity, equity, and inclusion (DEI) in the tech field (N = 43) to investigate why these workers leave AI teams. The themes from these interviews describe how the culture and climate of these teams may contribute to the attrition of minoritized workers, and suggest strategies for making these teams more inclusive and representative of the diverse communities affected by the technologies these AI teams develop. Specifically, the current study found that AI teams on which minoritized workers thrive tend to foster a strong sense of interdisciplinary collaboration, support professional career development, and are run by diverse leaders who understand the importance of undoing traditional White, Eurocentric, and male workplace norms. These strategies go beyond the “quick fixes” that are prevalent in DEI practices.

Related Publications

A View From Somewhere: Human-Centric Face Representations

ICLR, 2023
Jerone T. A. Andrews, Przemyslaw Joniak, Alice Xiang

Few datasets contain self-identified sensitive attributes, inferring attributes risks introducing additional biases, and collecting attributes can carry legal risks. Besides, categorical labels can fail to reflect the continuous nature of human phenotypic diversity, making i…

Considerations for Ethical Speech Recognition Datasets

WSDM, 2023
Orestis Papakyriakopoulos, Alice Xiang

Speech AI Technologies are largely trained on publicly available datasets or by the massive web-crawling of speech. In both cases, data acquisition focuses on minimizing collection effort, without necessarily taking the data subjects’ protection or user needs into considerat…

Causality for Temporal Unfairness Evaluation and Mitigation

NeurIPS, 2022
Aida Rahmattalabi, Alice Xiang

Recent interests in causality for fair decision-making systems has been accompanied with great skepticism due to practical and epistemological challenges with applying existing causal fairness approaches. Existing works mainly seek to remove the causal effect of social categ…

