Being 'Seen' vs. 'Mis-Seen': Tensions Between Privacy and Fairness in Computer Vision

AI Ethics

March 16, 2023

Privacy is a fundamental human right that ensures individuals can keep their personal information and activities private. In the context of computer vision, privacy concerns arise when cameras and other sensors collect personal information without an individual's knowledge or consent. The rise of facial recognition and related computer vision technologies has been met with growing anxiety over the potential for artificial intelligence to create mass surveillance systems and further entrench societal biases.

These concerns have led to calls for greater privacy protections and fairer, less biased algorithms. On closer inspection, however, these same privacy protections and bias mitigation efforts can conflict in the context of AI. Reducing bias in human-centric computer vision (HCCV) systems, including facial recognition, often requires collecting large, diverse, and candid image datasets, which can run counter to privacy protections.

It is tempting to think that being "unseen" by AI is preferable, that being underrepresented in the data used to develop facial recognition might somehow allow a person to evade mass surveillance. Yet in the law enforcement context, the fact that facial recognition technologies are less reliable at identifying people of color has not stopped them from being used to surveil these communities and deprive individuals of their liberty. Being "unseen" by AI therefore does not protect against being "mis-seen." And while this tension could be resolved in law enforcement simply by prohibiting facial recognition technology, HCCV encompasses a much broader set of technologies, from face detection for a camera's autofocus feature to pedestrian detection in a self-driving car.

My research on this topic was published in the Harvard Journal of Law & Technology. “Being 'Seen' vs. 'Mis-Seen': Tensions between Privacy and Fairness in Computer Vision” characterizes this tension between privacy and fairness in the context of algorithmic bias mitigation for human-centric computer vision systems. In particular, I argue that the basic paradox underlying current efforts to design less biased HCCV is the simultaneous desire to be “un-seen” yet not “mis-seen” by AI.

In this research, I also review the strategies proposed for resolving this tension and evaluate how well each addresses the technical, operational, legal, and ethical challenges it raises. These strategies include: using trusted third-party entities to collect data, applying privacy-preserving techniques, generating synthetic data, obtaining informed consent, and expanding regulatory mandates or government audits.

Solving this paradox requires considering the importance of not being "mis-seen" by AI rather than simply being "unseen." De-tethering these notions (being seen versus unseen versus mis-seen) can help clarify what rights relevant laws and policies should seek to protect. For example, this research examines the implications of a right not to be disproportionately mis-seen by AI, in contrast to regulations around what data should remain unseen. Given that privacy and fairness are both critical objectives for ethical AI, lawmakers and technologists need to address this tension head-on; approaches that rely purely on visibility or invisibility will likely fail to achieve either objective.

You can read “Being 'Seen' vs. 'Mis-Seen': Tensions between Privacy and Fairness in Computer Vision” now in the Harvard Journal of Law & Technology.

If you are interested in joining Sony AI to help define a future where AI is used to unleash human creativity while achieving fairness, transparency, and accountability, please visit our careers page.

