SXSW Rewind: From GT Sophy to Social Robots—Highlights from Peter Stone and Cynthia Breazeal’s SXSW Conversation

June 17, 2025 | Events, Sony AI

While SXSW 2025 may now be in the rearview mirror, the conversations it ignited continue to resonate. On March 10, 2025, Peter Stone, Chief Scientist at Sony AI and Professor at The University of Texas at Austin, joined the panel "Pushing Creativity to New Bounds: Future Robot Applications." Moderated by IEEE Spectrum's Evan Ackerman and featuring MIT human-robot interaction expert Cynthia Breazeal, the discussion delved into how advanced robotics and AI are transforming creative fields such as music, art, and storytelling. The panel emphasized the importance of responsible development and meaningful collaboration with creators to ensure these technologies augment human creativity rather than replace it.

As we reflect on the insights shared during this session, it's clear that the integration of AI and robotics into creative processes is not a fleeting trend but a significant shift in how we approach innovation. Peter Stone's contributions highlight Sony AI’s ongoing commitment to developing AI that enhances human potential and creativity.

INTERVIEW

Moderator: Evan Ackerman:

All right, my name is Evan Ackerman. You’re at the "Pushing Creativity to New Bounds: Future Robot Applications" panel. I'm a journalist with IEEE Spectrum. If you haven't heard of IEEE, it's the Institute of Electrical and Electronics Engineers. We've been around for a long time. Our founding members include Nikola Tesla and Thomas Edison, and we have 500,000 members worldwide. I write about robots for IEEE Spectrum, the society's magazine. I've been writing about robots for nearly 15 years, with around 8,000 articles published.

Part of what's exciting about writing about robots is that every year I think, “This is it–this is the year robots are going to make a real difference in our lives.” And every year I’m wrong. It’s always “five years away.” But this year, maybe it’s finally different. There's more investment in humanoids and AI than ever before, and that's going to change things–especially around productivity and work.

Today, though, we're talking about something more human than just productivity: creativity and imagination. How can robots help with that? And maybe... how shouldn't they? We'll get into all of that.

Let’s start with panelist introductions. Peter, do you want to begin?

Peter Stone:
Sure. Happy to. It’s a pleasure to be here. I wear a number of different hats. I'm a professor at the University of Texas at Austin, just down the road. I direct Texas Robotics and am a faculty member in the Computer Science department.

I've been working in artificial intelligence for more than three decades now. Some people say, “Wait–hasn't AI only been around for a few years?” But it's actually been around for about 75 years. My work focuses on reinforcement learning, multi-agent systems, and robotics.

Another major role I have is as Chief Scientist at Sony AI. We work on a variety of things–some robotics applications, others tied to creativity. I’m looking forward to the conversation.

Cynthia Breazeal:
Yeah, I'm Cynthia Breazeal. I also wear a number of hats, all in academia. At MIT, I'm a faculty member at the Media Lab and direct the Personal Robots Group. Peter and I were just chatting–we’ve been in the field about the same amount of time.

I’ve always worked on autonomous robots and systems; my academic claim to fame is founding and pioneering the field of social robotics and human-robot interaction.

I also serve as the MIT Dean for Digital Learning. Over time, I’ve started applying AI and robotics to human flourishing–things like learning, mental health, and well-being. I'm the founding director of an MIT-wide initiative called Responsible AI for Social Empowerment and Education (RAISE), which is about helping everyone–globally–understand and harness AI responsibly.

Moderator: Evan Ackerman:
Awesome. I want to start with a couple of all-time favorite robots–Kismet and Leonardo, which Cynthia worked on. Can you introduce us to them, and explain how they helped spark human-robot interaction?

Cynthia Breazeal:
Absolutely. This takes us back. I did my graduate work at what was then called the MIT Artificial Intelligence Lab, now CSAIL. I was a student of Rodney Brooks–a rock star in robotics and a pioneer in autonomous systems.

At that time, autonomous robots were mostly inspired by insects–biological designs that operated in extreme, unstructured environments. These robots were built to be far from humans–to handle tasks that were dull, dirty, or dangerous. In academia, no one was seriously thinking about robots and people interacting.

Fast forward to 1997. NASA lands Sojourner and Pathfinder on Mars–a huge moment. I'm in grad school thinking: we’ve sent robots into oceans, volcanoes, now Mars... I grew up with Star Wars and The Jetsons–where are the robots in our homes?

So, I marched into Rod’s office and said, “I need to change everything.” I wanted to build robots that interact with people–all kinds of people.

I took inspiration from the personal computer revolution: if we want a robot in every home, what’s the interface? For personal computers, it was the graphical user interface. For robots, I realized people anthropomorphize them even when they aren't humanoid. So I thought: social interaction could be the universal interface.

That was the birth of social robotics. Kismet was the first recognized example–an expressive robotic head inspired by infant-parent interactions. It wasn’t human, but it could engage emotionally and socially. That was the start of a journey toward emotionally intelligent machines.

Moderator: Evan Ackerman:
And Leonardo?

Cynthia Breazeal:
If Kismet was a mechanical cartoon–expressive, but just a head–Leonardo was a full character. Around the time Spielberg’s A.I. Artificial Intelligence came out, I was approached by Kathleen Kennedy, one of the film's producers. She wanted someone to connect the science behind the movie with the real world.

In the film, there’s a robot teddy bear named Teddy–a “super toy.” I loved Teddy. I went to meet Stan Winston, the legendary animatronics designer behind the movie, and pitched an idea: let's actually build Teddy–not as a puppet, but a real robot. That became Leonardo.

Leonardo was designed with Stan’s full creative vision–more organic than Kismet, with 70 degrees of freedom. He looked and moved like a creature out of a movie, but was fully autonomous. He became the flagship platform of my lab for six years.

Moderator: Evan Ackerman:
And we’ll talk more about why robots like Leonardo are important. But first–Peter, while Cynthia was doing this work, you were teaching robots to play soccer. Can you talk about the genesis of RoboCup, and where the decades since have taken it?

Peter Stone:
Listening to Cynthia, I realize how parallel our experiences were. We were grad students at the same time–she at MIT, me at Carnegie Mellon–both top robotics schools.

At the time, roboticists typically worked on just one aspect of a system: vision, locomotion, manipulation. Few tackled the full picture–building robots that integrated all of these. Similarly, Cynthia explored how robots might interact with humans. I was thinking about how robots might interact with each other, which was also a relatively unexplored area.

Soccer has always been one of my creative outlets. Around 1994, I attended an AI conference–AAAI–and saw a demo of one robot playing a basic soccer-like game against another on a ping-pong table. They called it “robot soccer,” but to me, that wasn’t really soccer. There were no teams.

So I went to my advisor and said, “I want to stop what I’m doing and work on robot soccer.” She said, “Okay–let’s do it.”

Soon after, I connected with a group in Japan, including Hiroaki Kitano, now the CTO of Sony. He had a vision: inspired by the looming Deep Blue vs. Kasparov chess match, he wanted to raise the bar. Instead of beating humans at board games, he proposed a grand challenge: by 2050, develop a team of humanoid robots that can beat the FIFA World Cup champions.

It was incredibly ambitious. Soccer is real-time, physical, continuous. It requires teamwork, spatial reasoning, and adaptability–far beyond chess or Go. And just like Cynthia’s goal of getting robots into people’s homes, this became a lifelong challenge for me–one that’s still driving our research.

We’re not there yet–not even close. But these are exactly the kinds of challenge problems that inspire generations of innovation and creativity.

Moderator: Evan Ackerman:
Let’s talk more about why that interaction matters. These days, we see companies building humanoids and saying, “Robots will take over our jobs–problem solved.” But why not just let robots handle all the stuff we don’t want to do? Why focus on robots that interact with humans?

Cynthia Breazeal:
Social robotics was never about robots working instead of people–it was about people and robots working together. Whether it’s learning, caregiving, or creative collaboration, most of what we do as humans happens in relationship with others. It’s dialog, shared experience, goal setting–all deeply social.

Also, robots aren’t people. Even if we try to make them human-compatible, they’re fundamentally different. That’s a strength, not a weakness. My work explores long-term interaction–not just how robots act in the moment, but how they evolve with users over time.

For example, I founded a company to create a social robot named Jibo. We deployed it in schools, hospitals, and homes across the U.S. for up to a year–all cloud-connected. It helped us study the real-world experience of living with a robot. How do relationships form? How do they fade? What makes people come back?

We found that the robot had to provide real value. It needed to be emotionally uplifting and engaging–but also not something people became overly dependent on. In health and education especially, we want robots to help people grow stronger and more capable–not reliant. We design for the robot to eventually step back.

Peter Stone:
That’s such an important point. At UT Austin, we have a project called Living and Working with Robots. It's interdisciplinary–engineers, social scientists, humanists–and the focus is that “with.” Robots shouldn’t replace us; they should collaborate with us.

Especially in fields like elder care, where we see a real tension: robots can support longer independent living, which is great. But if they reduce human contact, that’s a real concern. We have to ask not only what can we do, but what should we do?

Cynthia Breazeal:
Exactly. And when you look at aging, health care, education–we’re seeing a huge shortage in human professionals. Teachers are leaving. Nurses are leaving. Demand is rising faster than we can train replacements.

AI and robotics won’t replace human care, but they can help support it–by extending reach, by helping family caregivers, by making existing staff more effective. It’s a sociotechnical system–the robot doesn’t solve everything, but it plays a meaningful part.

Peter Stone:
And we’re just beginning to explore how behavior changes over time. We’re launching a new robotics honors program at UT, where all the students live in the same dorm. We’re proposing to put robots in the space–not just for research, but for real-world social interaction.

A robot meeting someone for the first time needs to introduce itself. But once you’ve interacted for weeks, it should remember you. And humans change too–people adapt to the presence of robots. These longitudinal studies are key to unlocking meaningful interaction.

Cynthia Breazeal:
Exactly. In our work, we explore how humans and robots grow together. How do they personalize over time? How do they become more helpful? How do we avoid the novelty wearing off?

Long-term engagement is a major challenge. If a robot feels boring or unhelpful, people stop using it. We want the opposite–a system that brings lasting value.

We’ve even studied positive psychology coaches–robots that teach people emotional resilience. The goal isn’t to stay forever. It’s to help people build skills and independence. When the robot goes away, people should be stronger because of it.

Moderator: Evan Ackerman:
I love that focus on support rather than takeover. Let’s shift to creativity. How do we design robotic systems that can enhance human creativity–without dominating it or making people feel inadequate?

And Peter, maybe you can speak to GT Sophy and what it revealed about how robots do things differently–and how that can open up new creative opportunities.

Peter Stone:
You mentioned GT Sophy, so I’ll start there. It’s a project I led at Sony AI where we set out to build an AI agent that could outdrive the best human players in the racing simulator Gran Turismo. And this isn’t just a game–it’s so realistic that professional race car drivers actually train in it.

Some of the top Gran Turismo players even go on to race professionally. There's a movie based on that true story. So the idea was: can we teach an AI to be the best in the world at this–in real-time, against the best human players?

We didn’t know if it would be possible, but in 2022 we published the results in Nature. GT Sophy beat four of the world’s top human players in head-to-head races.

But here's the most fascinating part: the human players learned from it. One top racer said, “GT Sophy goes into corners slower than I would–but exits faster.” She tried it, and it improved her driving.

That’s where you see AI acting as a creative partner, not just a competitor. It discovers novel strategies–and humans can adopt and evolve them. That’s the dream of human-AI collaboration.

It reminded us of AlphaGo, where the AI made a move that Go experts thought was a mistake–until they realized it was actually brilliant. It changed how humans played Go. That’s creativity.

Moderator: Evan Ackerman:
But how do you keep that from being frustrating? You don’t want a robot that always wins, right? If an AI always solves a puzzle or draws better than a child can, that could be discouraging. How do you balance helping without taking over?

Cynthia Breazeal:
Great question. In my work with kids, we've explored this with social robots as creativity support tools. We designed robots that behave more or less creatively in tasks like drawing or captioning images. Then we studied how kids responded.

What we found was fascinating: kids who interacted with more creative robots became more creative themselves. They were more expressive, more resilient, and even more confident. They started to see themselves as creative thinkers.

It’s a powerful reminder that we emulate the behaviors of those we interact with–including robots. And yes, that means we need to design these systems carefully. They can be persuasive–even unintentionally. So we focus on amplifying creativity, not replacing it.

Peter Stone:
On the question of discouragement–I’d say it’s more about how the robot interacts than whether it’s better.

We’ve known since Deep Blue beat Kasparov in 1997 that AI can outperform humans in chess. But people still care about human chess. They watch the tournaments. They play. The human element still matters.

In fact, for a while, the best chess players were actually “centaurs”–teams made up of humans plus AI. The AI suggested ideas, and the human made the call. That combo was better than either alone.

Now, in some domains, like chess, AI is simply better. But that doesn’t eliminate human interest or engagement.

With GT Sophy, we knew beating players wasn’t enough. The goal was to make it fun. So we worked with game designers to make it feel fair–giving humans faster cars or adjusting difficulty levels.

We wanted players to learn, not just lose. And it worked. GT Sophy is now integrated into Gran Turismo 7 on PlayStation 5. The reviews have been fantastic. Players know they’re up against an AI that could be faster, but they’re enjoying the interaction–and improving from it.

Moderator: Evan Ackerman:
I remember something similar at a drone race a few years ago–a Swiss lab built a drone AI that beat the best human pilot. The human was stunned… and inspired. But the AI system only worked in a very controlled environment.

So here’s the question: should we be maintaining a “moat” around human creativity? Are there places we should protect as uniquely human, or is that unrealistic?

Cynthia Breazeal:
It depends on the task. Something like chess is fully defined–it’s easy to optimize. But when you're talking about education, healthcare, aging, those are messy, human-centered domains. They're deeply emotional and relational. You need trust, empathy, cultural nuance.

You also need to consider alignment with values–something AI can’t always guarantee. In these spaces, I believe human relationships will always be essential.

That said, we should always ask: what is the unique value people bring to each other? And how can AI collaborate, not compete?

Peter Stone:
That reminds me–I recently attended a workshop in London called Science in the Age of AI. One speaker was Daniel Bedingfield, a pop star from the UK who’s been experimenting with AI-generated music.

He gave a live demo: in under an hour, he wrote a song using AI tools, crowdsourced lyrics, generated melodies, and layered it all together.

It was impressive–but also kind of heartbreaking. He said, “What used to take me weeks in the studio now takes an hour. I’m having a creative crisis.”

But what he didn’t quite realize is that his experience still mattered. When he picked among 60 generated melodies, he knew which two would work. That judgment–built from years of practice–was essential.

So while the tools are evolving, we still need creative people. We’re just giving them new ways to express themselves.

Like Beethoven–if there were dozens like him back then, his music might not have stood out. What makes creativity special is doing something no one else is doing. AI won’t change that–it will just create new pathways to stand out.

Cynthia Breazeal:
And many of those pathways will come from new tools. Generative AI is exciting, but still very hard to steer. We need better interfaces, better control. There’s lots of room for innovation.

I think the next generation–today’s kids–will grow up as AI natives. They’ll expect to collaborate with these tools. That’s why we need to teach AI literacy early. Not just how it works–but when it doesn’t, and what the risks are.

They’re going to invent the next wave of tools. We need to empower them to do it thoughtfully.

Moderator: Evan Ackerman:
That raises the question–do we need to worry? Are we at the moment where robots really will change everything?

Peter Stone:
If I had to bet… no, not yet. We’ve seen this hype cycle before. MOOCs were supposed to replace college. GPT was supposed to end writing. Instead, we’re learning how to use these tools in better ways.

We’ll do the same with robots: we’ll learn how they fit into our lives, rather than letting them take over.

Cynthia Breazeal:
Exactly. And while we’ve had great success in certain research areas, turning those into sustainable, real-world companies is another story.

So yes, we’re building impressive systems. But we still have a long way to go to bring them to market responsibly and at scale.

Moderator: Evan Ackerman:
We’re going to save the last 10 minutes or so for questions. So be thinking of a question. I’ll ask just one more.

Where do you actually think we’ll be with robots–creative or otherwise–in the next few years? There’s a lot of hype out there. It’s a constant problem for robotics. So maybe two questions: Where are we going to be... and where would you like us to be?

Cynthia Breazeal:
There are still a lot of hard problems in robotics. Even with generative AI, as Peter said, real-time multimodal interaction is still difficult. These models often have a lot of latency.

When you bring robots into the real world–where they need to physically interact with people and environments–it’s tough. There’s still a lot of work to be done to make them truly robust and scalable.

That said, robotics has always been systems-driven. We take the technologies available and use what works. When deep learning came along, it didn’t change how I built robots–it just helped things become more reliable.

But generative AI is different. Now I can imagine whole new ways to design social robots. It’s exciting–but we need a lot more innovation before it’s ready for real-world scale.

Peter Stone:
To answer directly: where will we be in a few years? I think... pretty close to where we are now. That doesn’t mean progress won’t happen–it will. Labs are moving quickly. But I don’t think we’ve had our “ChatGPT moment” in robotics just yet.

Where I want to be? I want robotics to have its iPhone moment–not just cool hardware, but a platform that others can build on. Apple created a device that worked, but then opened it up to creative people outside of engineering to make apps. That’s what made it transformative.

We need robust robot platforms that can navigate, manipulate, interact, and speak naturally–and then open them up to designers, artists, teachers, and creators to come up with applications we haven’t imagined yet.

Cynthia Breazeal:
Exactly. And we can’t forget: the real world is not the internet.

Robots don’t live in curated training datasets. They live in dynamic, physical spaces–full of noise and unpredictability. That’s what makes this hard.

Conclusion

As robotics continues to evolve–from the racetrack to the classroom to the home–one thing is clear: technology should serve to amplify human creativity, connection, and capability, not replace it.

At Sony AI, we believe in building AI that works with people. Whether through social interaction, adaptive learning, or creative collaboration, our goal is to develop tools that help humanity flourish–thoughtfully, ethically, and inclusively.

Because the future of robotics isn’t just about what machines can do. It’s about what they should do–and who we become when we build them to work alongside us.
