Celebrating Black Voices in AI: Sony AI Team Member Jerone Andrews Discusses Mitigating Biases in Visual Datasets at UCL AI Centre Black History Month Event
by Jerone Andrews
December 16, 2021
This past October, in recognition of Black History Month, I was delighted to be invited to participate in a panel, Celebrating Black History Month in AI, organized by the UCL AI Centre. The event was shaped to reflect key issues that members of UCL’s Black community, both staff and students, said they would like to see addressed, and to recognize the Black community’s achievements in and contributions to the field of AI.
For the event, I was a panelist in a session entitled How to use AI to benefit the Black community. The aim was to consider current projects and to elicit discussion on ways to resolve the structural problems that the Black community continues to face.
This prompted meaningful conversations on bias, in particular how to mitigate dataset biases. This is particularly pertinent to me with respect to some of my current research here at Sony AI. In my presentation, I gave a brief introduction to my background and how I first came to be working in the field of AI, followed by an outline of my current research projects and how they relate to biases in visual datasets.
Lately, I have spent a considerable amount of time thinking about biases in people-centric computer vision datasets: specifically, how datasets can be made more inclusive and diverse, and what currently impedes this. During my talk, I noted that datasets are often subject to latent historical and representational biases, reflecting social inequities and representational disparities. Irrespective of their intent to faithfully depict the world, they render very narrow, discrete views of it. What I mean by this is that despite best efforts to make datasets objective representations of the world, they are invariably subjective. Just think about the narratives and reasoning behind the ‘impartial’ taxonomies that we employ to categorize people. As the philosopher Thomas Nagel suggested, it’s impossible to take a ‘view from nowhere’. Therefore, we need to make datasets more inclusive by integrating a diverse set of perspectives from their inception.
Datasets are invariably shaped by the problems that researchers and practitioners wish to resolve. However, datasets are informed not only by the perspectives of their developers, but equally by those of the people who create the data samples and of those tasked with annotating them. For example, when annotating people in image datasets, a homogeneous group of annotators risks perpetuating harmful social stereotypes. The perception that people have of others is undoubtedly shaped by their own cultural background. This is particularly the case for people’s ability to perceive skin color. In 2002, researcher Mark Hill found that Black and White annotators perceived greater color variation within their own race.
The highlight of the event for me was the keynote talk by Professor Chris Jackson. Jackson, a British geologist, spoke on the topic of Race, Racism, and Representation, centering on his experience as the first Black scientist to give a Royal Institution Christmas Lecture (2020). What I found most thought-provoking in Jackson’s talk was when he spoke about the pressure he felt leading up to the lecture. He explained that part of the tension stemmed from the fact that there are relatively few Black scientists in the UK. This makes every opportunity to enter spaces that were historically closed to marginalized communities a potentially stressful situation, and it makes “getting it right” all the more critical. This feeling of wanting to “get it right” resonated with me, to the extent that it represents a desire to disprove claims that your selection, recognition, or achievements are in any way a result of liberal tokenism or virtue signalling. I was also impressed by how, in the run-up to the lecture, Jackson spoke out on a whole range of issues, ‘urging organizations to help tackle racism, misogyny, and transphobia’.
Overall, Jackson’s talk further underscored for me the need for increased representation in the field of science and technology, as well as the value of events such as this, which create a forum for researchers to share their experiences and expertise, and to celebrate their accomplishments.
In my role here at Sony, I am researching and developing auditing tools that have the capacity to measure the visual diversity of human-centric image datasets. Through these tools, I hope to make datasets more representative of the heterogeneity that exists within demographic groups. My work explicitly focuses on visual diversity, since having a demographically balanced dataset does not guarantee that the individuals depicted are visually diverse. In particular, during the annotation process of unlabeled datasets, ethnic groups may be inadvertently excluded from racial categories if they do not conform to stereotypes. Ultimately, I hope we can move towards more nuanced methods for auditing and improving dataset diversity, and by extension contribute to building a more equitable and inclusive society.
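To make the distinction between demographic balance and visual diversity concrete, here is a toy illustration of one way the spread of a visual attribute could be summarized, using Shannon entropy over attribute labels. The function name and the label values are hypothetical examples for this sketch, not part of any Sony AI auditing tool:

```python
import math
from collections import Counter

def shannon_diversity(labels):
    """Shannon entropy (in bits) of a list of categorical labels.

    Higher values indicate a more even spread across categories;
    0.0 means every item shares the same label.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Two datasets of 8 images each. Both could be demographically "balanced",
# but the first spreads evenly over four visual types while the second
# collapses onto only two.
visually_varied = ["type_a", "type_b", "type_c", "type_d"] * 2
visually_uniform = ["type_a"] * 4 + ["type_b"] * 4

print(shannon_diversity(visually_varied))   # 2.0
print(shannon_diversity(visually_uniform))  # 1.0
```

The point of the sketch is only that a count-based balance check would score both datasets identically, whereas a diversity measure over visual attributes distinguishes them.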
Jerone is a Research Scientist and contributes to the Sony AI Gastronomy and AI Ethics flagship projects. His research interests and expertise span computer vision and deep learning, in particular self-supervised anomaly detection, transfer learning, and adversarial machine learning. Prior to joining Sony, Jerone received an MSci in Mathematics from King’s College London, which he followed with an EPSRC-funded MRes and Ph.D. in Computer Science at University College London (UCL). Subsequently, Jerone was awarded a Royal Academy of Engineering Research Fellowship, a British Science Association Media Fellowship with BBC Future, and a Marie Skłodowska-Curie RISE grant. While at UCL, Jerone also spent time as a Visiting Researcher at the National Institute of Informatics (Tokyo) and Telefónica Research (Barcelona).