Being 'Seen' vs. 'Mis-Seen': Tensions between Privacy and Fairness in Computer Vision

Alice Xiang



The rise of facial recognition and related computer vision technologies has been met with growing anxiety over the potential for artificial intelligence (“AI”) to create mass surveillance systems and further entrench societal biases. These concerns have led to calls for greater privacy protections and fairer, less biased algorithms. An under-appreciated tension, however, is that privacy protections and bias mitigation efforts can sometimes conflict in the context of AI. Reducing bias in human-centric computer vision systems (“HCCV”), including facial recognition, can involve collecting large, diverse, and candid image datasets, which can run counter to privacy protections. It is intuitive to think that being “unseen” by AI is preferable — that being underrepresented in the data used to develop facial recognition might somehow allow one to evade mass surveillance. As we have seen in the law enforcement context, however, the fact that facial recognition technologies are less reliable at identifying people of color has not prevented their use to surveil these communities and deprive individuals of their liberty. Thus, being “unseen” by AI does not protect against being “mis-seen.” While in the law enforcement context this tension could be resolved simply by prohibiting the use of facial recognition technology, HCCV encompasses a much broader set of technologies, from face detection for a camera’s autofocus feature to pedestrian detection in a self-driving car. The first contribution of this Article is to characterize this tension between privacy and fairness in the context of algorithmic bias mitigation for HCCV. In particular, this Article argues that the irreducible paradox underlying current efforts to design less biased HCCV is the simultaneous desire to be “unseen” yet not “mis-seen” by AI.
Second, the Article reviews the strategies proposed for resolving this tension and evaluates their viability for adequately addressing the technical, operational, legal, and ethical challenges surfaced by this tension. These strategies include: using third-party trusted entities to collect data, using privacy-preserving techniques, generating synthetic data, obtaining informed consent, and expanding regulatory mandates or government audits. Finally, this Article argues that solving this paradox requires considering the importance of not being “mis-seen” by AI rather than simply being “unseen.” Decoupling these notions (being seen versus unseen versus mis-seen) can help clarify what rights relevant laws and policies should seek to protect. For example, this Article will examine the implications of a right not to be disproportionately mis-seen by AI, in contrast to regulations around what data should remain unseen. Given that privacy and fairness are both critical objectives for ethical AI, it is vital for lawmakers and technologists to address this tension head-on; approaches that rely purely on visibility or invisibility will likely fail to achieve either objective.

Related Publications

Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech Generators

FAccT, 2024
Wiebke Hutiri*, Orestis Papakyriakopoulos, Alice Xiang

The rapid and wide-scale adoption of AI to generate human speech poses a range of significant ethical and safety risks to society that need to be addressed. For example, a growing number of speech generation incidents are associated with swatting attacks in the United States…

Ethical Considerations for Responsible Data Curation

NeurIPS, 2023
Jerone Andrews, Dora Zhao, William Thong, Apostolos Modas, Orestis Papakyriakopoulos, Alice Xiang

Human-centric computer vision (HCCV) data curation practices often neglect privacy and bias concerns, leading to dataset retractions and unfair models. HCCV datasets constructed through nonconsensual web scraping lack crucial metadata for comprehensive fairness and robustnes…

Beyond Skin Tone: A Multidimensional Measure of Apparent Skin Color

ICCV, 2023
William Thong, Przemyslaw Joniak*, Alice Xiang

This paper strives to measure apparent skin color in computer vision, beyond a unidimensional scale on skin tone. In their seminal paper Gender Shades, Buolamwini and Gebru have shown how gender classification systems can be biased against women with darker skin tones. While…
