Being 'Seen' vs. 'Mis-Seen': Tensions Between Privacy and Fairness in Computer Vision
AI Ethics
March 16, 2023
Privacy is a fundamental human right that ensures individuals can keep their personal information and activities private. In the context of computer vision, privacy concerns arise when cameras and other sensors collect personal information without an individual's knowledge or consent. The rise of facial recognition and related computer vision technologies has been met with growing anxiety over the potential for artificial intelligence to create mass surveillance systems and further entrench societal biases.
These concerns have led to calls for greater privacy protections and fairer, less biased algorithms. On closer examination, however, privacy protections and bias mitigation efforts can conflict in the context of AI. Reducing bias in human-centric computer vision (HCCV) systems, including facial recognition, often requires collecting large, diverse, and candid image datasets, which can run counter to privacy protections.
It is tempting to assume that being "unseen" by AI is preferable, and that being underrepresented in the data used to develop facial recognition might somehow allow a person to evade mass surveillance. In the law enforcement context, however, the fact that facial recognition technologies are less reliable at identifying people of color has not prevented them from being used to surveil these communities and deprive individuals of their liberty. Being "unseen" by AI therefore does not protect against being "mis-seen." While in law enforcement this tension could simply be resolved by prohibiting the use of facial recognition technology, HCCV encompasses a much broader set of technologies, from face detection for a camera's autofocus feature to pedestrian detection in a self-driving car.
My research on this topic was published in the Harvard Journal of Law & Technology. “Being 'Seen' vs. 'Mis-Seen': Tensions between Privacy and Fairness in Computer Vision” characterizes this tension between privacy and fairness in the context of algorithmic bias mitigation for human-centric computer vision systems. In particular, I argue that the basic paradox underlying current efforts to design less biased HCCV is the simultaneous desire to be “un-seen” yet not “mis-seen” by AI.
In this research, I also review the strategies proposed for resolving this tension and evaluate their viability for adequately addressing the technical, operational, legal, and ethical challenges that surfaced from this tension. These strategies include: using third-party trusted entities to collect data, using privacy-preserving techniques, generating synthetic data, obtaining informed consent, and expanding regulatory mandates or government audits.
Solving this paradox requires considering the importance of not being "mis-seen" by AI rather than simply being "unseen." De-tethering these notions (being seen versus unseen versus mis-seen) can help clarify what rights relevant laws and policies should seek to protect. For example, this research examines the implications of a right not to be disproportionately mis-seen by AI, in contrast to regulations around what data should remain unseen. Given that privacy and fairness are both critical objectives for ethical AI, lawmakers and technologists need to address this tension head-on; approaches that rely purely on visibility or invisibility will likely fail to achieve either objective.
You can read “Being 'Seen' vs. 'Mis-Seen': Tensions between Privacy and Fairness in Computer Vision” now in the Harvard Journal of Law & Technology.
If you are interested in joining Sony AI to help define a future where AI is used to unleash human creativity while achieving fairness, transparency, and accountability, please visit our careers page.