Sights on AI: Alice Xiang Discusses the Ever-Changing Nature of AI Ethics and Sony AI’s Impact on Responsible AI
Sony AI
December 13, 2024
The Sony AI team is a diverse group of individuals working to accomplish one common goal: accelerate the fundamental research and development of AI and enhance human imagination and creativity, particularly in the realm of entertainment. Each individual brings different experiences, along with a unique view of the technology, to this work. This insightful Q&A series, Sights on AI, highlights the career journeys of Sony AI’s leaders and offers their perspectives on a number of AI topics.
Peter Stone ・ Erica Kato Marcus ・ Tarek Besold ・ Yuki Mitsufuji ・ Lorenzo Servadei ・ Alice Xiang
Alice Xiang is the Lead Research Scientist for the AI Ethics Flagship at Sony AI, where she leads a lab of AI researchers conducting cutting-edge sociotechnical research, particularly in ethical data collection, fairness benchmarks, and bias mitigation. She also serves as the Global Head of AI Ethics at Sony, where she leads AI ethics governance for Sony Group, including the establishment of governance frameworks and policies for evaluating AI development and use.
In this blog, Alice reflects on her motivation for a career in AI ethics, examines how the field has transformed over the past decade, and discusses the barriers facing researchers focused on the topic. She also explores what brought her to Sony AI, the work the organization is doing in AI ethics, and what she is most optimistic about as the industry moves forward.
What was your path to the field of AI ethics?
I was drawn to AI ethics because it uniquely leveraged my interdisciplinary, sociotechnical background while enabling me to tackle societal issues I was passionate about, such as bias and equity. As a Chinese American growing up in predominantly white areas of the U.S., I often experienced the alienation of being treated like an outsider. This feeling intensified when I later entered male-dominated fields of study and work.
Throughout my primary and secondary education, I gravitated toward STEM (Science, Technology, Engineering, and Math) subjects, an area with historically low female representation. I pursued my interest in math and the social sciences at Harvard, where I received a Bachelor’s degree in Economics and a Master’s degree in Statistics, and I then earned a second Master’s degree, in Economics, from the University of Oxford. These interdisciplinary experiences led me into the world of data science, which was just taking off as a field.
After graduating from my Master’s programs, I joined a technology company as a data scientist. This is where I first encountered some of the pervasive challenges in the AI space, particularly those relating to its potential harms. I was developing my first commercial machine learning model, and at the time (2014), there were no guidelines or guardrails to help me assess potentially problematic biases in my data, which could subsequently affect my model. Like many other developers at the time, I knew my data had certain biases and lacked broad representation, but there was no effective, systematic way to identify and address these issues. This made me worry that my work might unintentionally amplify bias and underrepresentation. I didn’t want the technology I was creating to marginalize people in the same ways I had experienced, especially with the use of such technologies growing rapidly.
This experience inspired me to explore the intersection of technology and law at Yale Law School, where I could deepen my understanding of how policymaking could create guardrails around emerging technologies. AI ethics requires integrating technical, sociological, legal, economic, and political perspectives, making it an ideal path for my career. Embracing this field enabled me to combine my technical, social science, and legal backgrounds to uncover new insights and make a meaningful impact on the world, such as limiting existing biases and improving representation for people whom technology might otherwise overlook.
Now, as AI grows more prevalent, the existence of a field like AI ethics becomes increasingly important for understanding the impact of the technology on businesses, individuals, and society.
What brought you to Sony?
My first AI ethics role was at the Partnership on AI, where I led research on fairness, transparency, and accountability in AI. This work gave me perspective on the different players in the AI ethics space and the potential impact companies could have on the industry.
Given that AI ethics is constantly changing, having an insider view of how organizations are tackling these challenges was of great interest to me. Not only did I get to see first-hand the roadblocks companies were encountering, but I also gained a better understanding of the role research could play in legal and policy discussions.
When I was presented with the opportunity to work at Sony, the combination of research and real-world application was compelling. It also gave me a unique opportunity to impact AI ethics both internally and externally in the broader technology industry.
After accepting the position, my first course of action was to build and lead the company’s AI ethics initiatives from the ground up. This started with creating our AI ethics research lab at Sony AI in 2020 and quickly expanded in scope to include overseeing the company’s AI ethics governance. Sony had already established its commitment to responsible AI development in 2018 with the release of its AI Ethics Guidelines, which aim to ensure that Sony’s development and utilization of AI technology enriches people’s lives and contributes to the development of society. In my role, I am responsible for working closely with our business units to ensure AI ethics compliance across the entire organization, including developing policies and governance frameworks to assess the potential risks and harms of such technologies. This, coupled with my research role, gives me a front-row seat in helping Sony chart an actionable path toward developing and deploying AI responsibly across the organization.
How would you describe Sony AI’s work in AI ethics and the impact your team is trying to make?
As part of our AI Ethics Flagship, we aim to develop solutions that enable AI ethics in practice and create AI technologies that are fair, transparent, and accountable. Our research thus far has focused on ethical data collection and on bias detection and mitigation, specifically within the domain of computer vision. To date, we have developed a new measure for skin tone diversity in computer vision, introduced a new method for measuring diversity within image datasets, and identified ethical issues in human-centric computer vision (HCCV) data curation. We have also started exploring the ethical and safety harms of speech generators.
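To make the skin tone work more concrete: a common critique of one-dimensional light-to-dark scales is that they collapse meaningful variation in hue. The sketch below is a minimal, hypothetical illustration of a multidimensional alternative, summarizing already-segmented skin pixels by perceptual lightness and hue angle in CIELAB color space. The function name and design choices here are illustrative assumptions, not Sony AI’s published measure.

```python
# Hypothetical illustration (not Sony AI's published measure): describing
# apparent skin color with two CIELAB quantities, perceptual lightness (L*)
# and hue angle (h*), rather than a single light-to-dark score.
import numpy as np
from skimage.color import rgb2lab

def apparent_skin_color(skin_pixels_rgb: np.ndarray) -> tuple[float, float]:
    """skin_pixels_rgb: (N, 3) array of skin pixels with RGB values in [0, 1].

    Returns (mean L*, mean hue angle in degrees). Two datasets can match on
    lightness yet still differ widely in hue, which a 1-D scale would miss.
    """
    lab = rgb2lab(skin_pixels_rgb.reshape(1, -1, 3)).reshape(-1, 3)
    L, a, b = lab[:, 0], lab[:, 1], lab[:, 2]
    hue = np.degrees(np.arctan2(b, a))  # hue angle in the a*-b* plane
    return float(L.mean()), float(hue.mean())
```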
We believe this work is vital not only for AI development across Sony, but also for the advancement of the broader research community and industry best practices.
In examining extensive industry research and AI practices, we have observed many tough, unanswered questions and significant practical barriers that prevent organizations from effectively adopting AI ethics. Our research has uncovered open issues that we are exploring deeply, such as how companies can collect and use data more ethically, what rights people should have to their own data, how privacy can be protected when data is so easily gathered, and more.
Ethical data collection is challenging in many ways—from time and resources to representation. Establishing suggested parameters will be paramount in helping organizations adopt ethical data collection practices. Giving organizations resources to execute AI ethics in practice is a critical step in the greater adoption of AI ethics globally.
Fairness is also hard to address in practice. While organizations don’t want to release biased models, significant obstacles remain when it comes to detecting and mitigating bias. At the industry level, there is little consensus on how to do this, and few benchmarks are available for developers to use. Until methods for detecting and mitigating bias mature, it will remain hard to check for or reduce it, and developers need tools that enable them to do so.
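To illustrate what even a basic bias check involves, here is a minimal, hypothetical sketch of one widely used (and contested) metric, the demographic parity gap: the difference in a model’s positive-prediction rate across groups. The data, function name, and choice of metric are illustrative assumptions, not a tool or standard that Sony AI prescribes.

```python
# Hypothetical sketch of one common bias check: the demographic parity gap,
# i.e. the spread in a model's positive-prediction rate across groups.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """predictions: binary model outputs (0/1); groups: group label per example.

    Returns the largest difference in positive-prediction rates between any
    two groups. A gap near 0 is necessary for demographic parity but far from
    sufficient for fairness overall: it says nothing about error rates, and
    the choice of metric is itself contested.
    """
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

# Hypothetical example: the model outputs 1 for 80% of group "a" but only
# 40% of group "b", giving a gap of 0.4.
preds = np.array([1, 1, 1, 0, 1, 0, 1, 0, 1, 0])
grps = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])
print(demographic_parity_gap(preds, grps))  # 0.4
```

Even a simple check like this requires group labels, which are often unavailable or ethically fraught to collect, which is one reason the tooling question connects back to ethical data collection.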
How have you built your team to achieve the goals you have set forth for the AI Ethics Flagship?
Given AI’s far-reaching impacts across industries, geographies, and societies, it is essential that the teams creating it bring diverse perspectives. This is particularly important in AI ethics because ethics always has a sociotechnical, subjective element that people’s experiences can inform.
For example, the types of bias you have been exposed to may vary depending on where you have lived or what you have studied. This can shape how you perceive the fairness, transparency, and accountability of AI systems and, ultimately, how you would evaluate a model along those dimensions. That’s why it is so helpful for AI developers to work in teams made up of people who can draw from their own experiences and share different perspectives.
Diversity is one of the major strengths of my team and the work we are doing. We believe this is what truly sets us apart. We have intentionally built a globally diverse team, not only in terms of geographical location but in lived experience, academic study, and career path.
We have individuals from the U.S., Europe, Japan, and other regions with core backgrounds in computer vision, human-computer interaction (HCI), law, policy, operations, and engineering. They have each worked or conducted research at companies, organizations, and universities around the world, and they provide unique insights into how AI could impact individuals, industries, and societies.
Since entering the field of AI, what are some of the most significant changes you’ve seen?
The field of AI ethics has made huge advances in the last 10 years. Early in my career, the terms “AI ethics” and “responsible AI” were not mainstream. There were no conferences dedicated to AI ethics; today, there are multiple, including ACM FAccT and AIES, where researchers from around the world gather to share insights. In higher education, AI ethics is now an established academic subfield in which students can take courses.
Outside of research and academia, there has also been a great deal of progress in policymaking. Considering ethical issues during AI development has gone from being a nice-to-have that companies can opt to invest in to being a requirement. We have seen jurisdictions around the world begin to offer guidance on AI or build out proposed regulatory frameworks specific to their countries, regions, or states. This activity is happening rapidly given the growing prevalence of AI across industries, and it has introduced new challenges that must be addressed, such as interoperability across these emerging frameworks.
What is the most significant barrier AI ethics researchers and AI developers are facing in the industry today?
It is difficult for people and companies to really grasp all of the different components of AI ethics. So many things can fall into this category: bias, explainability, transparency, accountability, and sustainability concerns, among others. With so many elements, there is an inherent tension between focusing deeply enough on one area to drive meaningful progress and avoiding an approach that is too narrow.
If we were looking at bias specifically, for instance, the biggest challenge is that we live in a biased society. If we had a perfectly fair society and were collecting data from it, we would only need to worry about the additional biases introduced by models. But countless biases exist around the globe, creating a critical challenge for researchers: How do we keep AI from reflecting, perpetuating, and entrenching those biases within our society?
This question underscores the difficulties in defining relevant biases for AI, identifying them, and addressing them. Coupled with differing perceptions of bias—influenced by our backgrounds and lived experiences—it is unlikely that we will ever find a solution that solves all biases. However, that doesn’t preclude us from striving to make AI less biased.
Adoption is also tricky. At Sony, there is a close connection between the compliance side and the research side of AI ethics, which makes it easier to incorporate our findings into the guidance and techniques we develop. As a general matter, though, AI ethics is such a new field that best practices are still in the research stage, so significant challenges remain in going from research to operationalization.
What are you most optimistic about in the field of AI ethics research?
Seeing how much the field has expanded in the last few years has been exciting, and it gives me a lot of optimism about the future of AI ethics, especially as AI only increases in importance and pervasiveness throughout society. With this, new ethical questions will continue to emerge, questions that may never be completely solved but will need to be addressed and refined over time. This gives us many compelling research questions that will drive innovation and progress, and it fuels continued advancement in AI ethics research, ensuring the field remains relevant and proactive in guiding the responsible development and deployment of AI technology.