Sights on AI: Lingjuan Lyu Discusses Her Career in Privacy-Preserving AI and Staying Inspired As The AI Landscape Has Advanced
Sony AI
July 8, 2025
The Sony AI team is a diverse group of individuals working to accomplish one common goal: accelerate the fundamental research and development of AI and enhance human imagination and creativity, particularly in the realm of entertainment. Each individual brings different experiences, along with a unique view of the technology, to this work. This insightful Q&A series, Sights on AI, highlights the career journeys of Sony AI’s leaders and offers their perspectives on a number of AI topics.
Peter Stone ・ Erica Kato Marcus ・ Tarek Besold ・ Yuki Mitsufuji ・ Lorenzo Servadei ・ Alice Xiang ・ Pete Wurman ・ Lingjuan Lyu
Lingjuan Lyu is the Head of the Privacy-Preserving Machine Learning (PPML) team in the Imaging and Sensing Flagship at Sony AI. A globally recognized expert in privacy and security, she leads a team of scientists and engineers dedicated to advancing research in AI privacy and security. Before joining Sony AI, Lingjuan spent over eight years in academia and industry, including time at Ant Group. Her contributions to the field are reflected in more than 100 publications in leading journals and conferences such as Nature, NeurIPS, CVPR, ICML, and ICLR.
In this blog, Lingjuan reflects on her journey into AI, the evolving challenges in privacy and security, and her team’s groundbreaking work at Sony AI.
What inspired you to pursue a career in AI?
Growing up in China, I was immersed in an education system that strongly emphasized STEM. I’ve always believed technologies in the STEM field can make our lives more intelligent and convenient. My passion for this field was driven by a combination of necessity and ambition: I saw a clear need for more trustworthy, privacy-conscious systems, and recognized the opportunity to tackle challenging issues in privacy, security, and intellectual property (IP) protection.
My research journey began with humanoid dancing robots to simulate humans and robust watermarking techniques to protect image IP. That focus deepened through secure chip design research in my master’s program, and later expanded into IoT and machine learning privacy research during my PhD at the University of Melbourne. These experiences have been instrumental to the work that I’ve focused on over the course of my career.
How have you seen AI, privacy, and intellectual property protection evolve over the past 15 years?
AI has evolved at an extraordinary pace, especially over the last decade. When I published my first paper on IP protection in 2011, concepts like privacy, security, and responsible AI were far from mainstream. Since then, privacy and IP protection issues have grown substantially as technological advancements have made them more critical, not less.
I’d divide this evolution into two phases: before and after the advent of transformers. Prior to the invention of the transformer, AI models were smaller, and privacy and security were relatively low-profile concerns in AI research. The emergence of transformers and foundation models fundamentally shifted the landscape. These large-scale models have brought unprecedented power, but also greater uncertainty, stronger memorization of sensitive data, and more potential attack surfaces. Traditional privacy techniques often fall short in this new context, either because they are too costly or because they degrade the user experience.
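For readers less familiar with these techniques, differentially private training (often implemented as DP-SGD) is a good example of the trade-off Lingjuan describes. The NumPy sketch below is purely illustrative, not any production system: the noise that protects individual training examples also perturbs every update, which is exactly why such methods can prove too costly for large models.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.0, lr=0.1, rng=None):
    """One simplified differentially private SGD step:
    clip each per-example gradient, average, then add Gaussian noise."""
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        # Clipping bounds any single example's influence on the update.
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise calibrated to the clipping bound hides individual examples,
    # but it also degrades the update: the privacy/utility trade-off.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

# Toy usage: five per-example gradients for a three-parameter model.
params = np.zeros(3)
grads = [np.random.default_rng(i).normal(size=3) for i in range(5)]
params = dp_sgd_step(params, grads)
```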
The rise of generative AI has added new challenges. With this technology’s rapid progress, generated images and music, for example, are becoming increasingly realistic, making it more difficult to differentiate between authentic and AI-generated content. This has introduced challenges in privacy, security, compliance, fairness, and IP protection.
Given the growing emphasis on AI, is the research community focusing more on security and privacy?
Privacy and security are becoming increasingly popular topics in both academia and industry, with the number of published papers growing each year. At the same time, this increased focus on privacy and security has been accompanied by growing research on protecting IP.
Ten years ago, when people talked about security, it was mainly in the context of hardware or systems. Research and industry practice have since shifted toward addressing risks in AI-powered applications, spanning vision, audio, video, speech, and other modalities, and ensuring that IP remains protected. For creator-centric companies like Sony, securing IP and user privacy is critical to maintaining trust and business value.
What are some common misconceptions about privacy-preserving AI?
One of the most common misconceptions about privacy-preserving AI is the belief that training models on consensual or pre-processed data automatically ensures privacy compliance. In reality, privacy and security risks can arise at any stage of a model’s lifecycle, from data collection and training to deployment and inference.
For example, removing sensitive data at the source can create a false sense of security. Given the complexity of modern models, especially large-scale ones, privacy leakage can still occur through the data structure, training dynamics, or model outputs. Simply put, sanitizing the input doesn’t always sanitize the outcome.
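To make that concrete, consider a toy loss-threshold membership inference attack. In the minimal sketch below, loss_fn is a hypothetical stand-in for a trained model’s per-record loss; even if a sensitive record is removed from every released dataset, an unusually low loss can still reveal that the model was trained on it.

```python
def loss_threshold_attack(loss_fn, candidates, threshold=0.1):
    """Toy membership inference: records the model fits unusually well
    (very low loss) are guessed to have been seen during training."""
    return [(x, loss_fn(x) < threshold) for x in candidates]

# Hypothetical stand-in for a trained model's per-record loss:
# memorized training records score near zero, unseen ones do not.
train_set = {1.0, 2.0, 3.0}
loss_fn = lambda x: 0.01 if x in train_set else 1.5

print(loss_threshold_attack(loss_fn, [1.0, 2.0, 9.0]))
# [(1.0, True), (2.0, True), (9.0, False)]
```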
Privacy and IP protection must be built into models from the start, not retrofitted after the fact. Ensuring proper privacy protection requires close collaboration and communication between developers, legal teams, and regulators, along with continued education for all of the researchers and product teams working on the project.
As AI models continue to grow and become more prominent, what challenges have you faced in your research?
One of the most significant challenges we’ve faced is the constantly evolving landscape of privacy and IP laws, regulations, and policies around the globe. This makes it difficult to create solutions that remain compliant over time, especially when privacy and IP laws differ so widely. For example, a solution that’s compliant in Japan might not be compliant in the US or EU markets.
When designing or researching an AI solution, practitioners must always consider the strictest IP or privacy laws to ensure it is compliant globally. At Sony, the goal is to develop AI solutions that can be deployed across business units worldwide, not just in one market. Designing for one region at a time requires duplicative effort, raising costs and introducing delays; that effort could otherwise be spent advancing innovation.
Privacy regulations and laws will continue to evolve, so researchers and engineers need to continuously adapt approaches to stay compliant and effective.
How has your team stayed motivated and inspired as the AI, Privacy, and IP landscape has evolved?
The PPML team was established in 2021, combining researchers and engineers working toward a shared goal: building trustworthy sensing platforms for imaging and sensing use cases. For us, innovation must be both cutting-edge and responsible.
Our primary focus for the first two years was building the most trustworthy sensing platform possible, an effort powered by federated learning, privacy-preserving computer vision, and on-device AI.
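For readers unfamiliar with federated learning, the core idea fits in a few lines. The sketch below is a minimal illustration of federated averaging (FedAvg), not Sony AI’s actual platform; local_update is a hypothetical stand-in for each client’s on-device training step. The key property is that raw data never leaves the device, only model updates do.

```python
import numpy as np

def federated_averaging(global_w, client_datasets, local_update, rounds=3):
    """Minimal FedAvg loop: clients train locally on their own data,
    and the server aggregates weights, never the raw data itself."""
    for _ in range(rounds):
        updates, sizes = [], []
        for data in client_datasets:
            w = local_update(global_w.copy(), data)  # training stays on-device
            updates.append(w)
            sizes.append(len(data))
        total = sum(sizes)
        # Weighted average of client models, proportional to data size.
        global_w = sum((n / total) * w for n, w in zip(sizes, updates))
    return global_w

# Toy usage: "training" nudges the weights toward each client's data mean.
local_update = lambda w, data: w + 0.1 * (np.mean(data) - w)
clients = [np.array([1.0, 2.0]), np.array([5.0, 6.0, 7.0])]
print(federated_averaging(np.zeros(2), clients, local_update))
```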
In 2023, we expanded our focus to include creating low-cost, high-impact Vision Foundation Models (VFMs) and visual generative AI. In this process, we’ve integrated federated learning techniques into our VFM development pipeline to reduce risk and enhance privacy. We’ve also initiated research projects that explore the benefits of training our own visual AI models while lowering costs. A recent paper on this subject, Stretching Each Dollar: Diffusion Training from Scratch on a Micro-Budget, was accepted at CVPR ’25.
Our team is highly self-motivated, driven by a combination of business impact and personal curiosity. That drive pushes us to explore new ideas and directions, take on more challenging work, and tackle harder problems.
In your opinion, what has been the most exciting piece of research you’ve worked on at Sony AI?
The VFM project, Argus: A Compact and Versatile Foundation Model for Vision, has been one of the most exciting and ambitious efforts for the team. Our goal was to develop a compact, efficient, and low-cost vision foundation model, as traditional, large-scale VFMs can be impractical for deployment due to their computational and memory demands.
The team delivered the first phase of the research, which we are calling version 1.0, in just nine months. It covers 12 vision tasks, ranging from image-level to pixel-level analysis. We took a design-first approach, prioritizing compact and efficient architectures rather than relying on downstream model distillation or compression, techniques that often come with performance trade-offs.
Six months later, we updated the research with version 1.5, which added five more vision tasks for a total of 17. We believe these models have the potential to be the most comprehensive and high-performing VFMs at the million-parameter scale. The team’s passion for developing this new model paid off, culminating in a CVPR paper acceptance, a major milestone.
We believe compact vision foundation models like Argus will help shape the future of edge AI, especially given the limited memory and computational capabilities of edge devices. These models provide numerous advantages beyond just deployment flexibility. They require fewer compute resources, enable faster inference speeds, and contribute to smaller carbon footprints.
We are now working on the next generation of our VFM research, exploring how we can unify vision perception, understanding, and generation into a single model. While I can’t share more details at this time, the project has been both inspiring and demanding, and we look forward to continuing to expand our research in this area.
What is your vision for the future of AI and its impact on the world?
AI is rapidly advancing from narrow systems to more general ones capable of understanding and creating. While true artificial general intelligence (AGI) remains a distant goal, we’re also starting to see the rise of embodied AI, where robotics and AI merge to assist in the physical world, a development that may have significant impact in the years ahead.
I hope AI will ultimately serve as a catalyst for human flourishing, augmenting our capabilities in different areas such as healthcare (early disease detection, personalized medicine), education (bridging gaps for underserved communities and lifelong learners), and creativity (music, art, writing, storytelling, and more).
To me, the true value of AI lies beyond productivity. It’s about expanding human potential. Success should be measured by how well AI helps reduce inequality, lessen drudgery, and preserve human agency. While there are risks, with proactive, thoughtful governance and interdisciplinary collaboration, AI could help address some of humanity’s greatest challenges. The future isn’t just about what AI can do; it’s about what we choose to do with it.