Sony AI Values and Why They are Important for the Future of AI
By Michael Spranger, COO, Sony AI Inc.
October 12, 2021
Sony AI’s mission is to unleash human imagination and creativity with AI. This mission is only achievable if we follow three core values that underlie everything we do: extraordinary innovation requires diversity in people and methods; AI should be developed and deployed in a responsible, fair and transparent way; and technology should serve social good. We hold ourselves accountable to these values every day, as we conduct research, create technology, form partnerships, and build our team.
Diversity, Equity and Inclusion
As a foreigner living in Japan, as an AI researcher and as the COO of Sony AI, diversity is an issue that I spend a lot of time thinking about.
It’s crucial for any technology that impacts human life to be created by a diversity of voices. Historically, AI has largely been driven forward without the participation of people from different backgrounds – an obvious symptom being the underrepresentation of certain genders, sexual orientations, geographies and ethnicities. This can have dramatic consequences for marginalized and underrepresented communities. We have seen examples in pedestrian detection algorithms for autonomous cars that perform worse on people of color, and in AI hiring tools that favor male candidates over better qualified female candidates. AI systems get deployed in the real world, and they can do real damage to people’s lives. It’s often claimed that these problems can be reduced to bias in data. This is not correct. They arise fundamentally from missing voices in the process of creating algorithms, business processes and organizational structures. Diversity, then, is necessary to prevent harm to people.
But diversity and inclusion are not only crucial to prevent harm; they are the fundamental values on which globally impactful AI needs to be built. Without input from different experiences – including cultural, social and educational backgrounds – it is impossible to design, build and deploy successful, ethical AI systems on a global scale. One dimension we pay close attention to when expanding our teams is hiring people from different disciplines – including the arts, social sciences and philosophy. Some of this is personal: talking to people from diverse backgrounds is simply more fun and inspiring. But more importantly, I think AI cannot succeed in fulfilling its promise for humanity without input from the breadth of human experience.
For me personally, diversity has always been an interesting topic. I have always been romantically attached to traveling and moving around the world – to experiencing different cultures and meeting people from diverse backgrounds. I was born in Russia and have lived in Germany, the US, Bulgaria, France and Japan. My own journey into AI zigzagged through studying philosophy, archeology and psychology, among other topics. Just as science makes us aware of what we don’t know, travel and living in different places have made me aware of the experiences that I did not have. It is therefore my personal mission to push us toward more diversity.
Our ambition is to build AI that is globally impactful. We can only do that successfully if our teams are diverse – as a lot of research shows. But it’s also a moral obligation. Diversity, equity and inclusion are the values that need to guide us as we unlock the potential of AI.
Transparency
Transparency in AI is often understood as ensuring that decisions computed by AI systems are intelligible to users of the technology. This is obviously important. AI impacts people’s lives in very fundamental ways – be it through financial decisions or via recommendations of products, services and news. In such cases people have a right to know why decisions are being made, and this isn’t really an AI issue but one of human rights.
Transparency is important to Sony AI for other reasons as well. Our mission is to unleash human imagination and creativity – which we explore by building AI systems for creators. For us, creators are at the center. We want to build tools that unlock their creativity. For this, AI systems need to enable stories around creations. They need to inspire and empower. But we can’t achieve this with systems that are not transparent in how and why they reach certain recommendations or decisions.
For example, in our gastronomy project we work with top chefs to create new recipes using AI. We want to create AI systems that help chefs find inspiration for recipes never created before, while optimizing their food in terms of health and sustainability. To achieve this, we build human-in-the-loop systems that empower chefs to create recipes with new inspiration from AI. This doesn’t remove the chefs from the creative process but rather puts them at its center and augments their abilities with new tools. Ultimately, it is the creative vision of the chef in combination with AI that will unlock a new gastronomy. It’s also the chef who is in control of the creative process – but in order to use AI effectively, the chef needs to understand why certain suggestions are being made. The chef needs to be able to understand and build stories around their creations. For this, transparency of AI systems is critical.
AI is a technology that enables amazing progress but also carries considerable risk. Researchers may or may not be aware of these risks while they are developing the technology. I strongly believe in openness as a way to mitigate potential risks by exposing the technology during its development to a global audience. Ultimately, society needs to decide how to balance the opportunities and risks, and for that, society needs to know what research is being conducted and why. This can only be achieved through organizational transparency. Instead of just working in the lab, we are committed to sharing our research results early and often – so as to enable a global dialogue about the future of AI, our research, and the potential risks and opportunities of AI.
Technology companies are known for having very little transparency into their processes or development, particularly in fields like AI, where black-box data science has been the norm for years. Sony AI is aiming to change this approach. We believe AI should be developed in a transparent way, with both the process and the goal being to build organizations, research programs and systems that are transparent and trustworthy.
AI for Social Good
Last, but not least, there is the imperative of creating AI that serves the social good. There is tremendous potential in AI, but it remains unrealized because the technology is not applied to core societal problems in the same way it has been applied to other issues, such as business efficiency. We need to develop AI technology with the right mindset from the start, which for Sony AI means putting humans at the center of AI development and creating AI that empowers society.
As AI has advanced, the issue of ethics in AI has grown in urgency, and the industry has started to respond. As a global technology leader, we feel that it is our responsibility and privilege to lead the charge in AI ethics. Several members of the Sony AI team are active in internal and industry groups focused on AI ethics, including the Partnership on AI. Sony Corporation also has a dedicated ethics team focused on creating and implementing ethics guidelines for the Sony Group organizations. I hope to see industry collaboration in this area grow, as more companies work to preserve personal privacy, eliminate bias and create products that add measurable benefit to humankind.
But we need to do more. We are facing huge social issues, from the climate crisis to poverty, health and sustainability. Many of these issues will require sophisticated AI in order to be solved. For instance, we think a lot about how AI can help the food industry, which is one of the largest contributors to climate change and also has a huge impact on people’s health. Moreover, we think AI can help discover new science, for example in understanding the complex relationship between food, our nervous system and gut health. We believe robots will help give people access to better meals.
Sony AI’s journey into addressing these huge challenges has only just begun, and I hope there is much more to come. For me, AI has always been a tool of empowerment – but we all need to make sure that we realize its potential together. I believe it takes a diverse team, working in a transparent organization, with a clear goal of contributing to the social good, to realize this potential.
Meet the Team
Sony AI has been growing quickly in our first year. We have been hiring across the globe during these immensely challenging times, and I am particularly proud that we were able to attract the best talent from five continents, all with different backgrounds and approaches to AI. It’s therefore my great pleasure to announce a new series of interviews with the people of Sony AI that explores their individual hopes and motivations for working in AI. If you share our values and are interested in learning more about our open positions, please visit joinus and do not hesitate to reach out and talk to us.
COO, Sony AI Inc.
Michael Spranger is the COO of Sony AI Inc., Sony’s strategic research and development organization established in April 2020. Sony AI’s mission is to “unleash human imagination and creativity with AI.” Michael is a roboticist by training with extensive research experience in fields such as Natural Language Processing, robotics, and the foundations of Artificial Intelligence. He has published more than 70 papers at top AI conferences, including IJCAI and NeurIPS. Alongside his role at Sony AI, Michael holds a Senior Researcher position at Sony Computer Science Laboratories, Inc., and actively contributes to Sony’s overall AI ethics strategy.