Being 'Seen' vs. 'Mis-Seen': Tensions Between Privacy and Fairness in Computer Vision
AI Ethics
March 16, 2023
Privacy is a fundamental human right that ensures individuals can keep their personal information and activities private. In the context of computer vision, privacy concerns arise when cameras and other sensors collect personal information without an individual's knowledge or consent. The rise of facial recognition and related computer vision technologies has been met with growing anxiety over the potential for artificial intelligence to create mass surveillance systems and further entrench societal biases.
These concerns have led to calls for greater privacy protections and fairer, less biased algorithms. Looking deeper into the issue, however, reveals that privacy protections and bias mitigation efforts can themselves conflict in the context of AI. Reducing bias in human-centric computer vision (HCCV) systems, including facial recognition, often requires collecting large, diverse, and candid image datasets, which can run counter to privacy protections.
It is tempting to think that being “unseen” by AI is preferable—that being underrepresented in the data used to develop facial recognition might somehow allow a person to evade mass surveillance. In the law enforcement context, however, the fact that facial recognition technologies are less reliable at identifying people of color has not prevented them from being used to surveil these communities and deprive individuals of their liberty. Being “unseen” by AI therefore does not protect against being “mis-seen.” And while in the law enforcement context this tension could in principle be resolved by prohibiting the use of facial recognition technology, HCCV encompasses a much broader set of technologies, from face detection for a camera’s autofocus feature to pedestrian detection in a self-driving car.
My research on this topic was published in the Harvard Journal of Law & Technology. “Being 'Seen' vs. 'Mis-Seen': Tensions between Privacy and Fairness in Computer Vision” characterizes this tension between privacy and fairness in the context of algorithmic bias mitigation for human-centric computer vision systems. In particular, I argue that the basic paradox underlying current efforts to design less biased HCCV is the simultaneous desire to be “un-seen” yet not “mis-seen” by AI.
In this research, I also review the strategies proposed for resolving this tension and evaluate their viability for adequately addressing the technical, operational, legal, and ethical challenges that surfaced from this tension. These strategies include: using third-party trusted entities to collect data, using privacy-preserving techniques, generating synthetic data, obtaining informed consent, and expanding regulatory mandates or government audits.
Solving this paradox requires considering the importance of not being “mis-seen” by AI rather than simply being “unseen.” De-tethering these notions (being seen versus unseen versus mis-seen) can help clarify what rights relevant laws and policies should seek to protect. For example, this research examines the implications of a right not to be disproportionately mis-seen by AI, in contrast to regulations around what data should remain unseen. Given that privacy and fairness are both critical objectives for ethical AI, lawmakers and technologists need to address this tension head-on; approaches that rely purely on visibility or invisibility will likely fail to achieve either objective.
You can read “Being 'Seen' vs. 'Mis-Seen': Tensions between Privacy and Fairness in Computer Vision” now in the Harvard Journal of Law & Technology.
If you are interested in joining Sony AI to help define a future where AI is used to unleash human creativity while achieving fairness, transparency, and accountability, please visit our careers page.