Feature and Label Embedding Spaces Matter in Addressing Image Classifier Bias

William Thong

Cees Snoek*

* External authors

BMVC 2021



This paper strives to address image classifier bias, with a focus on both feature and label embedding spaces. Previous works have shown that spurious correlations with protected attributes, such as age, gender, or skin tone, can cause adverse decisions. To balance potential harms, there is a growing need to identify and mitigate image classifier bias. First, we identify a bias direction in the feature space. We compute class prototypes for each value of the protected attribute in every class, and reveal a subspace that captures the maximum variance of the bias. Second, we mitigate biases by mapping image inputs to label embedding spaces. Each value of the protected attribute has its own projection head, where classes are embedded through a latent vector representation rather than a common one-hot encoding. Once trained, we further reduce the bias effect in the feature space by removing the bias direction. Evaluation on biased image datasets, covering multi-class, multi-label, and binary classification tasks, shows the effectiveness of tackling both feature and label embedding spaces in improving the fairness of classifier predictions while preserving classification performance.
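The feature-space step described above can be sketched in a few lines of NumPy: per-attribute class prototypes yield one difference vector per class, whose dominant singular vector serves as the bias direction, which is then projected out of the features. This is a minimal illustration under simplifying assumptions (a binary protected attribute, SVD in place of the paper's exact procedure); the function names and synthetic setup are hypothetical, not the authors' implementation.

```python
import numpy as np

def bias_direction(features, labels, attrs):
    """Estimate a bias direction: for each class, compute one prototype
    (mean feature vector) per protected-attribute value, then take the top
    singular vector of the stacked prototype differences.
    Assumes `attrs` is binary (two protected-attribute values)."""
    diffs = []
    for c in np.unique(labels):
        protos = [features[(labels == c) & (attrs == a)].mean(axis=0)
                  for a in np.unique(attrs)]
        diffs.append(protos[0] - protos[1])
    diffs = np.stack(diffs)
    # Top right-singular vector: the direction the attribute-induced
    # prototype shifts share across classes (maximum bias variance).
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[0]

def remove_bias(features, v):
    """Project features onto the orthogonal complement of direction v."""
    v = v / np.linalg.norm(v)
    return features - np.outer(features @ v, v)

# Synthetic check: inject a bias along axis 0 for one attribute group,
# recover the direction, and verify it is removed from the features.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, 200)
attrs = rng.integers(0, 2, 200)
features = rng.normal(size=(200, 8))
features[attrs == 1, 0] += 2.0          # spurious attribute correlation

v = bias_direction(features, labels, attrs)
debiased = remove_bias(features, v)
```

After removal, the debiased features have (numerically) zero component along the estimated bias direction, while the remaining dimensions are untouched.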

Related Publications

Beyond Skin Tone: A Multidimensional Measure of Apparent Skin Color

ICCV, 2023
William Thong, Przemyslaw Joniak*, Alice Xiang

This paper strives to measure apparent skin color in computer vision, beyond a unidimensional scale on skin tone. In their seminal paper Gender Shades, Buolamwini and Gebru have shown how gender classification systems can be biased against women with darker skin tones. While…

Augmented data sheets for speech datasets and ethical decision-making

FAccT, 2023
Orestis Papakyriakopoulos, Anna Seo Gyeong Choi*, William Thong, Dora Zhao, Jerone Andrews, Rebecca Bourke, Alice Xiang, Allison Koenecke*

Human-centric image datasets are critical to the development of computer vision technologies. However, recent investigations have foregrounded significant ethical issues related to privacy and bias, which have resulted in the complete retraction, or modification, of several …

Content-Diverse Comparisons improve IQA

BMVC, 2022
William Thong, Jose Costa Pereira*, Sarah Parisot*, Ales Leonardis*, Steven McDonagh*

Image quality assessment (IQA) forms a natural and often straightforward undertaking for humans, yet effective automation of the task remains highly challenging. Recent metrics from the deep learning community commonly compare image pairs during training to improve upon trad…


