Attrition of Workers with Minoritized Identities on AI Teams

Jeffrey Brown*, Tina Park*, Jiyoo Chang*, McKane Andrus*, Alice Xiang, Christine Custis*

* External authors

EAAMO 2022

Abstract

The effects of AI systems are far-reaching and affect diverse communities all over the world. The demographics of AI teams, however, do not reflect this diversity. Instead, these teams, particularly at big tech companies, are dominated by Western, White, and male workers. Strategies for preventing harms done by AI must also include making these teams more representative of the diverse communities that these technologies affect. The pipeline of students from K-12 through the university level contributes to this: those with minoritized identities are underrepresented in, or excluded from, pursuing computer science careers. However, relatively little attention has been given to how the culture at tech companies, let alone on AI teams, contributes to attrition of minoritized people in the workplace. The current study uses semi-structured interviews with minoritized workers on AI teams, managers of AI teams, and leaders working on diversity, equity, and inclusion (DEI) in the tech field (N = 43) to investigate the reasons why these workers leave AI teams. The themes from these interviews describe how the culture and climate of these teams may contribute to attrition of minoritized workers, as well as strategies for making these teams more inclusive and representative of the diverse communities affected by the technologies they develop. Specifically, the current study found that AI teams in which minoritized workers thrive tend to foster a strong sense of interdisciplinary collaboration, support professional career development, and are run by diverse leaders who understand the importance of undoing traditional White, Eurocentric, and male workplace norms. These strategies go beyond the “quick fixes” that are prevalent in DEI practices.
