
Fei-Fei Li, known as the “Godmother of AI,” sat down for an in-depth, wide-ranging podcast conversation with Jany Hejuan Zhao, the founder and CEO of NextFin.AI, chair of TMTPost, and publisher of Barron’s China.
NextFin -- As 2025 draws to a close, Fei-Fei Li, the Stanford University professor known as the “Godmother of AI,” has been ushering in wave after wave of new developments at World Labs, the frontier AI company she founded in 2024. These include the release of Marble, the first commercial “world model,” which has finally made people realize that “world models” are not merely a concept but something real and practically useful.
Looking back, my first meeting with the visionary AI pioneer dates back to 2017, inside an academic building at Stanford. That year, Tianqiao Chen, the founder of Shanda Group and a renowned tech philanthropist who had just settled in Silicon Valley, introduced her to me and several other longtime friends, noting, “She is one of the most outstanding scientists in the United States.” At the time, the ImageNet initiative launched by Professor Li was still in full swing. It was also during that first meeting and conversation with her that I encountered a new idea: that the size of a dataset determines the level of intelligence a system can reach. This was the original intention of ImageNet—building the largest possible data pool to advance artificial intelligence (AI). Although the scale of data processed in the AI world has since grown by many orders of magnitude, ImageNet was, at the time, the largest dataset ever created. More importantly, the ImageNet project she led proved—amid widespread skepticism—to both academia and industry that data, just like algorithms, is a cornerstone of AI development.
Over the following eight years, we witnessed ImageNet become a milestone in the history of modern AI. The pioneer’s efforts in artificial intelligence have never slowed. From leading the ImageNet initiative—driving a major leap in datasets and the transition from AI 1.0 to AI 2.0—to taking on a new mission today, leading the development of “world models” that break through the limitations of large language models by generating 3D worlds, she once again finds herself facing a data bottleneck, this time in world models.
Driven by curiosity about her new entrepreneurial venture, I had an in-depth video podcast conversation with Professor Li, who served as Vice President at Google and Chief Scientist of AI/ML at Google Cloud during her sabbatical from January 2017 to September 2018. In the nearly two-hour discussion, which felt more like a relaxed chat, we covered a wide range of topics—from studying abroad as a teenager to choosing a scientific path; from becoming a member of the three most prestigious U.S. academies in arts and sciences, engineering, and medicine to starting a tech firm in Silicon Valley; from the different challenges AI has faced at various stages of its development to the possible solutions at each stage. Along the way, she has also endured rumors and doubts. This time, in response to my question, she did not shy away from public speculation about her family background, allowing me to see the story of a girl from an ordinary Chinese family who crossed the ocean and grew with resilience in an unfamiliar reality and academic world.
Speaking to NextFin—the world’s first AI agent platform for financial news and data analysis, which I founded as a serial entrepreneur—Li wove the technological evolution of world models and spatial intelligence together with her personal values, methodology, and entrepreneurial judgment into a coherent, clear narrative: the world is more than just language, and the next step for AI is enabling machines to “see, generate, and interact” within a continuous three-dimensional world; and before all grand promises, AI is, and will remain, a tool whose steering wheel must always stay in human hands. “This is the agency humanity must never give up, and the belief humanity must never abandon ... AI is just a tool. I believe in humanity, not AI,” she said. Those offhand remarks by the woman who revolutionized AI stirred a deep and lasting resonance in me.
This podcast conversation coincided with World Labs’ recent launch of Marble, its latest commercial spatial intelligence model. From a single image or text prompt, Marble can generate “a persistent, freely navigable, and geometrically consistent” 3D world, which can be exported in formats such as Gaussian Splat for exploration and further creation on the web and VR devices. It marks a tangible step from “content generation” to “world generation.” Media coverage has highlighted Marble’s “larger, clearer, and more consistent” worlds, as well as its usable engineering pipeline for creators and developers, including export, web and VR rendering, and interaction.
At the same time, world models are becoming a new battleground for the industry. Google DeepMind has successively launched Genie 3 and Gemini Robotics 1.5, emphasizing a model direction focused on “generating interactive environments with spatial understanding and planning capabilities.” Earlier this year, it also formed a dedicated world-modeling team focused on applications in gaming, film, and robotics.
Progress in the field has outpaced expectations from a year ago. In this episode of NextFin’s podcast Jany Talk, Professor Li predicted that the transition from “language generation” to “world generation” would bring an application-level explosion in spatial intelligence within the next two years. Since securing significant funding in 2024, World Labs has consistently advanced its vision of Large World Models (LWM), pushing the boundaries of what AI can achieve.
Professor Li admitted to feeling overwhelming pressure—fearing that her models might not be good enough, that she might let down the young colleagues who follow her, and that she might let down her investors. But as she put it, “If you ever stop feeling uncertain, it means you’ve stopped being challenged, and that means what you’re doing may not matter as much.” She spoke calmly of setbacks: “If you fail, you fail—it’s not a big deal,” and emphasized the need for patience: “People always expect things to happen quickly, but they rarely do.” Amid the noisy restlessness of the AI world, her words felt like a steadying anchor.
On a personal note, I have embarked on a new entrepreneurial journey in Silicon Valley, launching NextFin.AI powered by native AI technology. Professor Li encouraged me, saying, “Your efforts to explore new AI product forms in media are absolutely in the right direction—AI should better serve humanity.” Her persistence amid global skepticism has also been a source of strength for me.
The following is the transcript of the video podcast conversation between Fei-Fei Li, founder of World Labs and professor of computer science at Stanford University, and Jany Hejuan Zhao, a serial entrepreneur: founder and CEO of NextFin.AI, the world’s first AI agent platform for financial news and data analysis; founder and CEO of TMTPost; and publisher of Barron’s China. The transcript has been edited for brevity and clarity.
Staying Curious and Facing Fear
Jany Hejuan Zhao: Your book The Worlds I See left a very deep impression on me. The first time I read it, I cried several times. It was very touching. I even had my daughter read it; she’s studying abroad, so she could really relate to it. My first question on Jany Talk is: do you have any advice or perspectives for teenagers on how to observe the world? This would be very helpful, not just for international students, but also for the current generation of teenagers in China.
Fei-Fei Li: Thank you for liking my book, and thank you for having your daughter read it too. To be honest, I think today’s teenagers are really incredible. Whether it’s my students or colleagues who are young entrepreneurs, I often feel that I’m learning more from them than they are learning from me. So, I’m a bit reluctant to say I have anything to teach teenagers, but I can share some thoughts. I think the first word in the subtitle of my book is key—curiosity. It really is the starting point for everything. Curiosity is at its purest in childhood, when the world is still simple and we approach it with wonder.
When I wrote this book, my biggest feeling was that it was a sort of sorting out of my own scientific journey. I feel very lucky because, whether due to my family or my educational path, my curiosity has been nurtured. Looking back, many people protected my curiosity, which I consider a blessing. And I hope to share that blessing and insight with young people. Life often starts with curiosity. Don’t lose that curiosity, because it can really light a fire in your heart. Whether it’s curiosity about the world or pursuing your dreams, this fire can accompany you for a long time and lead you to do many things.
Jany Hejuan Zhao: So, how can we maintain this curiosity? Actually, I found some of the perspectives you shared in the book about how you observe the world and the people around you really interesting.
Fei-Fei Li: I think there are many different factors. A child can’t deliberately maintain or even discover curiosity because I think curiosity is innate. But as I mentioned, I feel very lucky because I might have a strong curiosity built into my genes. Looking back at the path I’ve walked, there have been many people who nurtured my curiosity. When there are so many like-minded people around me who also maintain their curiosity, it becomes easier for me to keep it. So, there are indeed many factors.
Many people say that you can’t copy someone else’s path to success. To some extent, that’s true because everyone is unique. But I also believe some things are universal—children’s curiosity is universal. Many parents and teachers are willing to nurture children’s curiosity, and that’s also common.
Jany Hejuan Zhao: You’re very humble. But perhaps behind this luck, as you said, it’s not just about you—it’s about the people around you, including your parents and teachers who protected your curiosity. This is also quite inspiring for us adults. For us as adults, how can we protect a child’s curiosity? What can we do to avoid destroying it?
Fei-Fei Li: That’s a great question, Hejuan, because you’re a mother too, and so am I. I’ve also been a teacher for many years. So what is the essence of curiosity? Curiosity is essentially a source of joy. It’s not utilitarian—it’s not about getting more knowledge or better grades, or achieving more, just because you’re curious. That would be a superficial form of curiosity, one driven by more utilitarian purposes.
True curiosity is joyful. In science, in research, even if it’s a small discovery or something insignificant, when it satisfies your true curiosity, you’re happy. And I think as parents and teachers, we need to empathize with that joy. A child’s curiosity comes from their genuine joy.
If you can’t empathize with that joy, it’s difficult to appreciate the curiosity. So I think, from the heart, to nurture that joy is also a kind of joy for yourself. When you see a child happy because of their curiosity—whether it’s because their curiosity is satisfied or because it motivates them—you can feel that joy too.
I think adults often can’t feel that joy because they’re wearing too many lenses or filters: the lens of life, of utilitarianism, of pressure, of their own perceptions. These filters make it hard for adults to empathize with that joy, to feel that curiosity. So, unconsciously, we often fail to protect children’s curiosity.
Also, to be honest, as we grow older, there are many things that may seem not as good as when we were younger. But the joy of curiosity never changes. So, I think adults should also have the ability to experience joy. They should appreciate their own curiosity because it’s a source of happiness. Personally, I enjoy being with young people, learning from them, seeing all the things I don’t know, or sometimes learning something new. That’s a joy—it’s instinctual.
Jany Hejuan Zhao: That’s really important. Sometimes, adults lose the ability to feel joy, and we end up passing that pressure onto children. So, maybe we need to learn to feel joy ourselves first, and then teach them.
Fei-Fei Li: Yes, adults need to appreciate their own curiosity first.
Jany Hejuan Zhao: So, what brings you joy now?
Fei-Fei Li: I think creating really brings me joy—whether it’s creating technology, creating a team to solve tough problems, or coming up with new ideas, or learning new ideas. All of these things make me incredibly happy. That’s why I’ve enjoyed working on the front lines of scientific research for many years, working with students, and even starting businesses with young people. All of this brings me immense joy.
Jany Hejuan Zhao: That’s wonderful. But we also know that along with joy, there’s often confusion. So, how do you deal with confusion or self-doubt? For example, when you transitioned from physics to AI, you must have had a long period of “hesitation,” deciding between practical applications or pure science. How did you get through that period?
Fei-Fei Li: Honestly, I’ve always felt profound apprehension. Because when you’re in the process of exploration, whether it’s in scientific research or in life, you’re always in a state of uncertainty. If you’re not apprehensive, then you’re too comfortable, and that means you’re not challenging yourself. I’m someone who likes to challenge myself, so I feel like I’ve been in a state of profound apprehension for my whole life. And since I’m always in fear, I’ve learned to accept it and deal with it. It’s just part of the process—you can’t completely get rid of fear.
I feel profound apprehension every day. But beyond apprehension, there are other things as well. First, excessive apprehension is not useful. You have to take things one step at a time. For example, if you’ve gone through a tough immigration experience, you’ll realize that with so many unknowns, all you can do is focus on today and finish it well. So, when I was chatting with a group of young entrepreneurs at Y Combinator in Silicon Valley recently, I told them about the concept of Gradient Descent in machine learning. It’s a way to deal with apprehension.
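For readers unfamiliar with the term: gradient descent is the workhorse optimization method of machine learning. Rather than leaping to a solution, it repeatedly takes one small step in the direction that reduces error, which is the "focus on today and finish it well" idea she alludes to. A minimal sketch in Python, using a made-up one-variable loss purely for illustration:

def loss(x):
    return (x - 3.0) ** 2          # a toy objective, minimized at x = 3

def grad(x):
    return 2.0 * (x - 3.0)         # the slope of the loss at x

x = 0.0                            # start far from the optimum, unknowns ahead
learning_rate = 0.1
for step in range(50):
    x -= learning_rate * grad(x)   # take today's small step downhill; repeat

print(f"x = {x:.4f}, loss = {loss(x):.8f}")   # x has converged toward 3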
Another thing is to have faith. You need courage, some belief, and confidence. That belief might be the kind of belief that even with courage and confidence, and after working for a long time, you might still lose. But, that’s okay. If you fail, you fail, but at least you tried. Failure isn’t something to fear.
Jany Hejuan Zhao: So, what are you most fearful of right now?
Fei-Fei Li: I fear many things. Right now, I’m starting a company, and I’m worried our model isn’t right, or our product hasn’t found its position. But the biggest fear for me is that I have this amazing group of young people working with me, and I can’t let them down. That’s my biggest fear. Of course, I also don’t want to disappoint investors, but honestly, not letting down these young people feels more important to me than not disappointing investors.
I call my colleagues "young people" because they’re so young and talented. In my heart, these young people are my teachers, but they also trust me. So, I really care deeply—I try my best not to let them down.
Jany Hejuan Zhao: I see that some of your students are also starting their own businesses, including some robotics projects. How do you feel about your students starting businesses?
Fei-Fei Li: I support them very much, and I’m proud of them. Starting a business requires a certain belief—everyone has different beliefs. Especially as founders, you need to have more belief than others. These young people, who grew up in the era of AI, have broader perspectives than I did. So, I’m really happy for them.
Is “Spatial Intelligence” the Way to AGI, or Just AI?
Jany Hejuan Zhao: Perhaps because you come from a computer vision background, I can sense that you have a kind of persistence—almost an obsession—with computer vision. That is different from large language models.
But many people are also saying things like “language is the world” or “information is the world.” For example, the founder of Anthropic has said that the entire future world will be a datafied world. I think everyone is actually talking about the same thing: how we observe and understand the world, and how AI can ultimately represent the world. You may care more about vision, while large language models may focus more on language.
After more than a year of starting a company, do you feel that the obsession is still there? Or do you still believe that the world cannot be composed of language alone—that it is something richer, more three-dimensional, four-dimensional, or even more spatially intelligent?
Fei-Fei Li: Yes, I firmly believe that the world is not just language. But let me first explain my belief, because technically there is indeed a shared underlying concept, which is why I can understand why some people say “language is the world.”
At a high level, I firmly believe the world is not only language. Language is discrete, tokenized information, and relatively speaking, it is one-dimensional. Even though what language expresses does not have to be one-dimensional, the representation of language itself is still fairly one-dimensional.
I think the world is actually much richer. As I’ve emphasized repeatedly, spatial intelligence has many properties, including physical properties, that go beyond the concept of language. And many things—whether human behavior or natural phenomena—cannot be fully described by language, nor can language accomplish everything we want to do.
From the moment we open our eyes every day, just imagine our daily human lives—from survival to work, to creation, to feeling and perception, to richer human-to-human emotions and all aspects of life—these are not things that language alone can achieve.
Of course, saying “language is the world” sounds nice, and it doesn’t sound wrong, because it’s an extremely broad statement. When a statement is that broad, it’s kind of hard for it to be wrong.
But technically speaking, digitization is inevitable. That includes vision models, spatial intelligence, and robotics models—they will all be digitized. But if “digital” and “language” are treated as exactly the same thing, then the concept of language has simply been redefined. If you call all digital representations “language,” then fine—everything is language, and there’s nothing left for me to argue about.
Jany Hejuan Zhao: Right, that’s a bit like Wittgenstein, who said “information is the world.” If we use the idea of “information is the world,” then perhaps everyone is actually understanding things under the same conceptual framework.
Fei-Fei Li: But in my view, information is not only language. It also includes spatial information. Spatial information is, I think, just as beautiful and just as significant as language-based information.
Jany Hejuan Zhao: But we’re also encountering the reality that spatial intelligence—or world models—haven’t progressed as fast as people imagined. This is also the direction you’re currently pursuing through your startup. So how long do you think it will take before people can really perceive tangible changes in this area, whether in entrepreneurship or exploration?
Fei-Fei Li: Honestly, it’s hard to say what counts as fast or slow. From the time we started the company to now, it’s only been a bit over a year. We’ve seen progress from video models to real-time video models, to multimodal models, and to our own 3D models. Even though we haven’t scaled them massively yet, that pace of change is actually quite fast.
But the broader AI environment has created extremely aggressive expectations for AI.
Jany Hejuan Zhao: It always feels like it’s still not fast enough.
Fei-Fei Li: Exactly. Whether something is fast or slow is subjective. But I can tell you why I chose to start a company: I felt the timing was right.
Entrepreneurship is different from academic research—it must align with the market and deeply respect it. Many entrepreneurs who are better than me say that timing is the most important thing. You can’t be too early, when the market and technology aren’t ready; and you can’t be too late, when there’s no room left for you.
When World Labs was founded, spatial intelligence was still a bit early—but not so early that it would take another 5 to 10 years. I believe that in the next one to two years, it will experience explosive growth.
Just look at the dramatic progress in video generation, and then at world models. I firmly believe we’ll see major advances within one or two years, and I can already see the potential for market applications. So I don’t know whether that’s fast or slow—I just think this is a very good time to work on spatial intelligence.
Jany Hejuan Zhao: You’re saying there could be explosive growth within one or two years? That’s already very fast—much faster than I imagined. I originally thought it would take at least five years.
Fei-Fei Li: I hope so. I find the models we’re building now very exciting.
Jany Hejuan Zhao: Then let’s talk about the current progress of World Labs’ models.
Fei-Fei Li: We’re working on world generation—generating worlds. And we see many applications, from digital creatives to game development, film, design, architecture, VR, XR, AR, and robotic simulation.
Each of these markets can be subdivided into many more niches, all of which have strong demands for 3D space. And generative AI has a special characteristic: by lowering the difficulty of things that were previously very hard to do, it opens up many markets you couldn’t have imagined before.
Generating 3D spaces is extremely difficult. How many people in the world truly have the ability to do that? The tools they use are very cumbersome. I’ve tried Blender and Unity myself—it was overwhelming.
But creators often have great ideas in their minds; they’re limited by tools, not by imagination. AI can empower them. It can empower existing creators, and it can also empower people who never realized they could do this before—because it used to be too hard.
People like me never used Blender or Unity before—I found them too annoying and didn’t have the time. But once AI gives me that capability, of course I’ll use it, because it brings new inspiration and new possibilities.
That’s why I think 3D world models are so exciting. They tackle something that’s very hard for ordinary humans to do. When AI lowers the barrier to that capability, it creates an incredible opportunity to open up the market.
Jany Hejuan Zhao: If you manage to conquer this fortress, does that mean the final bottleneck of artificial general intelligence is broken, and AGI is achieved?
Fei-Fei Li: I think without spatial intelligence—or without generative 3D world models—it doesn’t count as AGI. But AGI is like a door with many locks, each requiring a different key. I do believe spatial intelligence is one of those keys.
That said, the metaphor isn’t perfect, because the door isn’t simply open or closed—it opens gradually.
So I’ve always said that I don’t really know what “AGI versus AI” even means. Because AI and AGI seem to share the same dream. At its core, this is a scientific curiosity: can machines think, can they do things? That was the original dream of AI, and the dream of AGI doesn’t seem all that different. So I don’t really see a clear distinction between AI and AGI.
Regardless of whether we call it AI or AGI, this dream is realized step by step. With every step we take, we move a little closer to that dream.
Spatial intelligence is definitely part of that journey. Whether it’s empowering human creativity, applications from games and design to industry, robotics, or imagined worlds like the metaverse, AR, VR—spatial intelligence is essential.
Jany Hejuan Zhao: One example you gave left a deep impression on me—the story of trilobites and vision. Trilobites took hundreds of millions of years to evolve a complex visual system. Now we’re trying to give AI a similarly complex visual system—not just to perceive a simulated world, but to generate worlds. The difficulty is obvious: how can a few years compete with hundreds of millions of years?
Fei-Fei Li: I can’t even imagine it. But at the same time, you can’t think about it that way. Because I think engineering and mathematics follow paths that are very different from the path of biological evolution. So this is really a comparison between apples and oranges. To put it this way, evolutionary iteration is extremely slow—much, much slower than the iteration of algorithms. And carbon-based systems and silicon-based systems operate very differently in terms of computation. So from a time-scale perspective, I don’t think they are really comparable. Still, evolution gives us a lot of insight and inspiration.
For example—and this brings us back to data—why is data so important? Why did our lab originally emphasize the concept of data? A lot of that inspiration came from evolution. Because the long course of evolution is actually a course of big-data training, right? The difference is that today, in the digital age, we don’t need to wait billions of years to collect data. We can collect data at massive scale.
In the end, it all comes back to the same underlying idea—the concept is similar, but the way it is carried out is completely different. It is fundamentally different from how evolution and nature operate.
Jany Hejuan Zhao: It’s not a time-based evolutionary process. It might even be exponential rather than linear.
Fei-Fei Li: Because there’s so much data. In one pass, you might process as much data as evolution saw over tens of millions of years. So you really can’t make a direct comparison.
A Bias: Data Versus Algorithm
Jany Hejuan Zhao: Speaking of data, we can go back to when you first ushered in ImageNet. ImageNet was essentially about data. But it used a more community-driven approach and much larger-scale data to push AI forward by a big step.
Fei-Fei Li: Looking back now, it seems very small. But at the time, it really was the largest.
Jany Hejuan Zhao: But when you were doing it back then, you actually faced many challenges. A lot of people questioned it. First, they questioned ImageNet itself. Second, they questioned the underlying principle behind ImageNet—the idea that the more training data and computing capability you have, the higher the level of intelligence you can achieve.
At that time, hardly anyone believed in this principle. Looking back now, why were you able to firmly believe that this was something worth sticking to?
Fei-Fei Li: I don’t think believing in your own hypothesis is that strange. On the scientific path, after deep thinking, you naturally form some hypotheses—and you have to believe in some of them. Of course, as a scientist, you also have to accept that some hypotheses will turn out to be wrong. I’ve certainly had many hypotheses that turned out to be wrong.
But this particular hypothesis was something I had thought about for a long time. Mathematically, it’s a concept about generalization. I spent my PhD years working on models and algorithms, so I accumulated a lot of insights and gradually came to this realization.
At the end of the day, AI—mathematically speaking—has always been about one thing: generalization. That’s really it. And how do you achieve generalization? There are two aspects: algorithms and data. And the two are tightly connected.
If the algorithm is too complex and the data is scarce, you overfit. If the data is abundant but the algorithm isn’t good enough, you also overfit. There’s a mathematical relationship between the two.
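To make that relationship concrete, here is a toy sketch in Python (an assumed illustration, not code from the interview): with only eight noisy training points, raising a polynomial model's complexity drives training error toward zero while error on held-out data grows, which is the overfitting she describes.

import numpy as np

rng = np.random.default_rng(0)
true_fn = lambda x: np.sin(x)              # the "world" we want to generalize to

x_train = rng.uniform(0.0, 3.0, size=8)    # scarce data
y_train = true_fn(x_train) + rng.normal(0.0, 0.1, size=8)   # noisy observations
x_test = np.linspace(0.0, 3.0, 100)        # plentiful held-out data

for degree in (1, 3, 7):                   # increasing "algorithm complexity"
    coeffs = np.polyfit(x_train, y_train, degree)            # fit the polynomial
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - true_fn(x_test)) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, held-out MSE {test_mse:.4f}")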
At the time, after thinking about it for so long, I firmly believed in this. And I was part of the earlier generation of computer vision PhD students who worked with machine learning. I was lucky—my PhD years coincided with a turning point in computer vision, when many machine learning concepts were being adopted. So I had a relatively deep understanding of this.
I wasn’t the only one who understood it, of course. But I saw the importance of data back then, so I stuck to it. It really came down to curiosity. I actually found the whole process quite fun. When you’re trying to prove a hypothesis, it’s exciting. You’re full of passion, and you just keep fighting your way forward—like battling monsters in a game. As long as you’re not defeated, you keep fighting.
Jany Hejuan Zhao: Like leveling up while fighting monsters. From ImageNet back then to World Labs today, you’re once again at a new crossroads between algorithms and data. Now, for world models or vision models, data has become an especially difficult problem again.
Fei-Fei Li: A bottleneck, yes.
Jany Hejuan Zhao: How do you break through this bottleneck? Because when you think about space—how do we acquire that data? I touch something and feel whether it’s hot or cold. That feels even harder.
Fei-Fei Li: Exactly. This is a spiral of progress. Back then, ImageNet gave computer vision its largest dataset, and the field flourished. Then the internet brought massive amounts of natural language data, and large language models flourished.
Now we’re back to vision—though AI as a whole is much bigger now, so it’s not just about vision. Look at how fast video models are developing—that’s because there’s a lot of video data. Look at how fast autonomous driving is developing—that’s because some companies have accumulated massive amounts of driving, road, and environmental data.
So you’re right: we’re back to data and algorithms. Actually, it’s not even “back”—we never left. But we are indeed at a very critical point again.
Jany Hejuan Zhao: Yes, exactly.
Fei-Fei Li: Sometimes I find it interesting that even today, people still place more emphasis on algorithms. But everyone who truly works in AI—whether in startups or large companies—knows that data, if not more important, is at least equally important.
Yet when people talk about it, algorithms still sound more “fancy.” In fact, data is truly a science in its own right.
Jany Hejuan Zhao: Yes, many people value algorithms so much that algorithm engineers are paid far more than data engineers. People’s perceptions of the difficulty and importance of these two things really are quite different. Data just doesn’t seem as “sexy.”
Fei-Fei Li: One of humanity’s weaknesses is bias.
Jany Hejuan Zhao: Do you think this is a very big bias?
Fei-Fei Li: Honestly, if it’s biased, then it’s biased. The world isn’t perfect anyway. As for me, I have pretty thick skin about this. If you ask me whether it’s a bias, fine—I’ll say it is. But does that mean I need to fight it? I’m too lazy to fight it. As long as I know the truth myself, that’s enough.
Jany Hejuan Zhao: So how does World Labs address this data bottleneck now?
Fei-Fei Li: That I can’t tell you.
Jany Hejuan Zhao: Because it’s a business secret?
Fei-Fei Li: Exactly.
Jany Hejuan Zhao: But I can imagine that if you truly believe there will be an explosion of progress within one or two years, then you must have found some ways to break through the data bottlenecks for world models. I’m really looking forward to seeing that.
Long Road Ahead For Robot Models
Jany Hejuan Zhao: Let’s go back to autonomous driving. I’ve been wondering—are autonomous driving models essentially a scaled-down or simplified version of world models?
Fei-Fei Li: They should be. They really should be. At least, I hope they are. Of course, I don’t know exactly what Tesla or Waymo have internally, or how much 3D information is involved in their systems.
Autonomous cars are actually robots—the earliest mass-produced robots created by humans. But they are extremely limited robots. What are they? They’re box-shaped robots, essentially rectangular boxes, operating in a largely two-dimensional world, because roads are basically two-dimensional, not three-dimensional. And in this 2D world, they do just one thing: avoid colliding with other objects. Those objects may be cars, pedestrians, or roadside obstacles. But in essence, it’s a box-shaped robot in a 2D world whose sole goal is not to bump into things.
Now think about the 3D robots we want to build in the future. In a three-dimensional world, their purpose is precisely to touch all kinds of objects—helping us wash dishes, cook, fold clothes. That comparison tells you how simple a robot a car really is.
Jany Hejuan Zhao: It really is very simple.
Fei-Fei Li: Exactly. That’s why I say the world model for cars is also simpler—it’s simpler because the task itself is simple. Of course, I’m not saying autonomous driving isn’t impressive. Tesla and Waymo are both remarkable. But from a scientific, macro-level perspective on world models and robotics, this is just the beginning. What comes next is far more complex.
Jany Hejuan Zhao: So if we think of current autonomous driving systems—the spatial perception models we can understand and experience today—as a low-end version of world models, they indeed handle relatively simple problems. They’re still very far from true robot models.
Fei-Fei Li: And generally speaking—though I truly don’t know what Tesla is doing internally—I don’t think their approach is centered on generative spatial models or world models, because they don’t really need generation. Maybe they use generation during training, but I don’t know. Their main tasks aren’t generation; they focus on judgment, recognition, detection, and so on.
So when it comes to Tesla’s “world model,” I don’t think it’s a strongly generative model, because it doesn’t need to be. But robots do need that. Robot training needs it. You simply can’t collect enough real-world data. What we’re doing is closely related to creativity and design, and those inherently require generation—generation itself is a use case.
Jany Hejuan Zhao: About robot models, I’ve seen that you also collaborate with Nvidia on robot-related models. In China, the robotics industry is very hot right now—lots of startups, lots of funding—but the focus is more on mechanical intelligence, manufacturing, and hardware. On the AI model side, especially generative models, breakthroughs seem more limited so far. How do you see the current stage of generative models for robotics?
Fei-Fei Li: I think robotics is fascinating. Robotics is incredibly hot in Silicon Valley right now. My own lab has been working on robotics for more than a decade, and many of my former students are now leading robotics research across startups and large companies alike. I really love this field and I’m very positive about it.
That said, I also believe we need to stay very calm and rational. Robotics research is still in its early stages. First, as we discussed, robots truly lack data. Think about autonomous driving—it’s been worked on for decades, and cars constantly collect data while people are driving them. Robots, on the other hand, have very limited commercial use cases, especially in daily life, so data collection is extremely difficult.
That’s why taking the generative AI route is both interesting and promising. Generative AI—especially video generation—opens up new possibilities for training. You can do simulations. What we’re doing with robot simulation is very promising. You can even use video models at inference time to assist with online planning.
So there are many exciting possibilities. In a way, robotics is benefiting from the rapid development of neighboring fields like generative AI. That’s why I’m excited—but we still need to wait and see. Robotics still has a long road ahead, especially when it comes to commercialization and everyday-use robots.
Jany Hejuan Zhao: Industrial robots may move faster, right?
Fei-Fei Li: Industrial robots have been in use for a long time already.
Jany Hejuan Zhao: I mean more intelligent industrial robots.
Fei-Fei Li: Yes, because their scenarios are relatively constrained. They operate in controlled environments and have access to plenty of data.
Jany Hejuan Zhao: If robotics still has a long way to go, and one major bottleneck is data, does that create new opportunities—like startups focused on robotic simulation data? Would data-focused startups around robotics be more promising than building robots directly?
Fei-Fei Li: Data companies can definitely be very successful. Just look at Scale AI—it’s a great example. So yes, data is a real business opportunity. But as the saying goes, the devil is in the details. How you do it, and how well you do it, really matters.
The most important things in a data business are: first, how big the market is; and second, whether you can deliver the data your customers actually need. Robot data is especially hard to collect, because you need robots to collect robot data. If humans collect it instead, scaling becomes very slow. It’s not like cars, which are already everywhere and can gather data very quickly.
Jany Hejuan Zhao: So the robotics industry currently faces two major challenges: data and application scenarios. Without sufficient data, application scenarios remain limited, and the two issues are closely linked. People also feel that there aren’t many compelling use cases yet—companies like Unitree, for example, are still largely focused on performance and demonstrations.
If we view robotics as being in a very early stage of development, what other challenges remain? And how many years might it take to complete this cycle? What key milestones still need to be crossed?
Fei-Fei Li: I can give you one data point, a simple fact, about the path from autonomous driving as a concept to real commercialization: Google formed a small autonomous driving team in 2006, and Waymo reached commercial operation on public roads at scale around 2024. That’s nearly 20 years. There are similarities and differences here. The automotive industry was already very mature—its supply chain, OEMs, and use cases were well established—so that helped. But AI itself wasn’t mature back then, which is why autonomous driving had such a long AI development path.
Today, AI is far more mature, so that part should move faster. But aside from industrial robots or very limited scenarios, robotics doesn’t yet have application environments as mature as cars. So whether this journey will be faster than 20 years or slower is hard to say.
I do believe AI will accelerate things compared to autonomous driving back then. But as we said earlier, the problem is also harder—it’s a truly three-dimensional world. I’m often asked how many years this will take, and I honestly don’t like answering that question because it’s very complex. I can only say this: I believe that within our lifetimes, we will definitely see it.
Jany Hejuan Zhao: Let’s wait and see. I know there are commercial secrets involved, but if we imagine the long arc of spatial intelligence and complex visual systems—comparing it to the 400-million-year evolutionary journey from trilobites onward—where do you think we are now? Early stage, or already somewhere in the middle?
Fei-Fei Li: Wow, it is hard to compare. You asked a great question. I think about this myself sometimes. In some aspects, today’s spatial intelligence—especially multimodal models—has already far surpassed humans. For example, object recognition has long exceeded human capabilities. How many breeds of dogs, species of birds, or types of cars can an average person recognize? AI is far better than most people at that.
Another example is 3D generation. Humans actually have quite good 3D understanding, but we’re very poor at generating 3D mentally: unless you’ve had specialized training, generating 3D purely in your head is something most people do badly. This is different from children playing with clay, where the 3D creation involves embodied interaction. But if you ask someone to imagine a 3D structure in their mind and then draw it, most people perform quite poorly. In this respect, AI can already achieve some very impressive results.
But when it comes to the deep understanding humans have of the 3D world—the physical relationships between objects, materials, physical properties, and all the rich intelligence embedded in that understanding—AI still falls far short. And that’s not even mentioning social understanding: how humans understand each other, which is also a form of visual understanding.
Humans are extraordinarily complex. So in some dimensions, AI is already comparable to—or even beyond—humans, while in others, it remains far behind.
And even though I believe deeply in spatial intelligence as an AI researcher, my belief isn’t blind. It’s grounded in scientific understanding and years of work in this field—seeing both the opportunities and the direction of the technology. Passion is necessary, especially for entrepreneurship, but judgment about technology requires strong logic and scientific rigor.
Jany Hejuan Zhao: Scientific rigor and careful reasoning underpin it all.
A Golden Time for Startups, or Big Tech Takes It All?
Jany Hejuan Zhao: Right now, most of our attention is focused on a few big tech companies. For example, Google Gemini, or OpenAI, which has grown from a small company into a giant. Anthropic has also effectively become a giant. Everyone is watching these giants. In the U.S. stock market, people talk about the “Magnificent Seven.”
Do you think small companies still have opportunities in this wave of AI development? And where do those opportunities lie, especially for new entrepreneurs?
Fei-Fei Li: I hope they do—because my own company is a small one. But hope aside, this is a valid question. When it comes to the integration of data, resources, computing power, and talent, companies that can consolidate these resources do have higher chances of survival and success.
That said, I don’t think we should only look at these more obvious factors. Obvious factors are easy to see, easy to talk about, and therefore spread easily.
Let me give a very simple example: AI coding. Microsoft was the first to do AI coding, right? Copilot. It had everything in its favor—the timing, the position, the people. It had all the resources, all the use cases, and GitHub even belongs to Microsoft. So why didn’t it fully dominate?
Today, what’s hot in Silicon Valley are Cursor and Claude Code. How is it that, under such circumstances, small companies were able to break through? This shows that obvious factors alone are not enough.
If everyone keeps judging solely based on these visible factors, their conclusions will be biased. In human history, there has never been an era where only big companies had a chance to win—never. In every era and in every society, big companies often had strong resource-integration capabilities as well. So what does this come down to? Creativity, opportunity, execution, and timing. These are all essential elements.
On top of that, AI is truly a horizontal technology. That means it creates opportunities at many application levels—far more than big companies can possibly cover. Small companies have countless opportunities to build applications extremely well, push them to the limit, and gradually carve open the market. All of that is possible.
Jany Hejuan Zhao: So for small companies, would choosing vertical application opportunities be better and more promising?
Fei-Fei Li: Exactly. It depends on what kind of small company. If you don’t have the capability to build foundation models or large models, then you definitely need to focus on applications. But applications aren’t only vertical. Take our company, for example—I don’t know whether you’d call it small or not; I’d still say it’s small. But we do have enough capability to build foundation models, so we also build models.
Jany Hejuan Zhao: To build models, you really need someone with your kind of background.
Fei-Fei Li: Right. Building models requires a very different talent structure.
Jany Hejuan Zhao: This brings us to a concept you often talk about: AI for Good. You believe AI should be more inclusive and bring benefits to ordinary people, rather than being controlled by a small elite. It should be used to serve humanity and promote good, not to do harm. This is a very interesting topic, and for scientists, it often has two sides.
You—and also Professor Geoffrey Hinton—have recently emphasized the need to be vigilant about AI’s potentially destructive power, which could be even greater than that of nuclear weapons. But there’s another view that says we are still in a development phase and shouldn’t overemphasize AI risks right now. From your perspective, at this stage, should we focus more on development, or should we, as Professor Hinton suggests, simultaneously put more effort into safety and alignment?
Fei-Fei Li: I actually think this is just common sense. AI is a tool, and tools are double-edged swords. Every human tool—from something as small as fire or a stone axe, to nuclear weapons, biotechnology, or AI—is a double-edged sword. Of course, I believe tools should be used for good. But at the same time, we must prevent them from being misused—whether intentionally or unintentionally.
So I think both extremes are irrational. If we only focus on development and don’t care at all about safety or ethical use, that would be a disaster. But if we only talk about ethics every day and refuse to develop the technology, we would also miss many opportunities. Good technology can bring enormous benefits.
That’s why I often tell the media that I’m actually quite boring. I don’t say sensational things or take black-and-white positions. I always say the most boring things.
Jany Hejuan Zhao: But that’s the rigor of a scientist.
Fei-Fei Li: I don’t think this has anything to do with being a scientist. It’s just basic human common sense. Think about parenting: would you teach your child how to use fire? Of course you would—how to cook, for example. When you teach them, you explain the benefits of fire, but you also explain its dangers. That’s really just common sense.
Jany Hejuan Zhao: So how do we ensure that, in the development of AI, it becomes more widely accessible and benefits the public, instead of turning into a form of power? I increasingly feel that when technology is controlled by a few giants or by governments, it can become a tool of power. How do we prevent it from becoming a means of controlling humanity, and instead make it a way to benefit humanity?
Fei-Fei Li: You’re right. AI is a tool of power, and it is also a tool for good. It will always be a tool. In my view, this tool will become increasingly powerful. But before it becomes uncontrollable, it is still a human tool, and humans have the responsibility to keep it controllable.
Like all tools, we should never expect the tool itself to figure out what it ought to do. Whether it is used for good is a human responsibility. So controlling AI and guiding how it is used—that responsibility lies with humans: with laws, institutions, education, and society as a whole. Every society is different, and every individual is different, but the responsibility ultimately lies with humanity.
The Upper Hand: Humans or AI?
Jany Hejuan Zhao: You also mention this at the end of your book—that AI is not meant to replace humans. There are many things AI cannot replace, including empathy. Emotional connection and communication are deeply human needs. So in the development of AI, how can we design it—or guide its development—in a way that preserves the parts of humanity that shine most brightly, and ensures that humans themselves are not replaced?
Fei-Fei Li: That’s a very good question, Hejuan. I think we really need to look at AI rationally—understand what it is, and then think rationally about what society needs today. Take education, for example. In the age of AI, we urgently need to update our educational philosophies and methods. We need to let children use this tool, and help them understand that it can empower their creativity and learning in many ways. At the same time, we must also teach them about the potential problems this tool can bring.
And this isn’t just about educating children. I think the biggest issue in the adult world is that we assume children are the ones who need education, when in fact the people who most need to be educated are ourselves. So we need to educate ourselves, educate the public, provide the public with sufficient information, and give policymakers and lawmakers more opportunities to learn and understand these technologies. All of this is extremely important.
In the end, how we develop and govern AI is really about our own learning, growth, and self-governance. Ultimately, it all comes back to people.
Jany Hejuan Zhao: Yes, educating ourselves is actually harder—much harder. Struggling with human nature is often more difficult than grappling with AI.
Fei-Fei Li: That’s absolutely true.
I think in the age of AI—especially with tools that possess cognitive abilities—the real lesson for us is that we should understand ourselves better and govern ourselves better. That “self” refers both to individuals and to groups. Sometimes I feel that all the heated discussion around AI misses the point. In the end, what’s lacking isn’t discussion about AI, but self-reflection on human nature—both individual and collective.
Jany Hejuan Zhao: Perhaps during the development of AI, we actually need more opportunities to discuss the development of human nature itself. Many young people are confused right now. There have been layoffs in Silicon Valley recently. I hear from many people who studied computer science—once in extremely high demand, including Stanford graduates—who are now facing layoffs and uncertainty. People say AI will replace many jobs, leading to unemployment and many other ripple effects.
So in this process—whether through education, or through how we understand the world and reflect on ourselves—how should we view the impact AI may have on our work, our lives, and even our emotional well-being? What should we, as humans, do?
Fei-Fei Li: What individuals need to do and what society as a whole needs to do are different.
For individuals, the first thing is to recognize that the era is changing. Pretending nothing is happening—like an ostrich burying its head in the sand—is not helpful. The world is changing, and jobs will change. Every major technological revolution brings job transformations, and often periods of pain. Some transitions are smoother; others are not and can cause social disruption.
So as individuals, we need to learn and adapt. Again, it comes back to maintaining curiosity—curiosity about life and about the world. Even if that curiosity comes from fear in adulthood, that’s okay. At least it gives you the motivation to learn. That is what individuals need to self-reflect on.
As for society, I believe our educational structures urgently need reform. Take K–12 education, for example. We ask teenagers to spend years on exam-oriented learning or on finding standard answers. In the United States, it’s not purely exam-driven, but it still emphasizes testing, and many teaching methods are based on “filling” students with knowledge. These approaches can—and should—be updated, and urgently so.
AI is rapidly demonstrating that many tasks can be done by machines. Asking humans to spend decades learning to do things that machines can already do is a waste of human potential. That’s why I strongly call on those who think about education, shape education policy, and implement education to seize the opportunity of the age.
For more than 100 years, our educational methodology has barely changed. My greatest hope is that when historians look back a century from now, at the early decades of the 21st century, they will say that humanity carried out an educational revolution.
Jany Hejuan Zhao: What would that educational revolution look like to you? In terms of direction or concrete changes, what are you most hoping for?
Fei-Fei Li: I believe we should use AI to empower both educators and students. By using AI to save time and energy, we can allow students—under the guidance of teachers and through self-guidance—to develop cognition and capabilities that AI cannot achieve.
Humans have enormous potential. Every individual has immense potential. Our brains are not fully utilized, and neither individuals nor societies have realized their full potential. You only need to look at the vast differences between individuals to see how great that potential is. Some people possess almost superhuman abilities, which shows that such capacity exists within human nature—we just haven’t unlocked it for most people.
With AI as a tool—and even with the disruption AI brings to human work—we have an opportunity to rethink education entirely. Our educational methodology hasn’t fundamentally changed in over a century. Now is the moment to transform it completely—from knowledge-based education, to skills-based education, to cognitive development, and ultimately to education about being human.
Jany Hejuan Zhao: Yet what we’re seeing now is that AI development seems to be pushing societies—not only in the U.S. but also in China—to place even greater emphasis on STEM (science, technology, engineering and mathematics). Education focused on cognition or the humanities is becoming less valued. Even the U.S. is talking about manufacturing reshoring and training more engineers. I find this somewhat confusing.
Fei-Fei Li: If education truly changes, we shouldn’t divide it into science versus humanities anymore. AI can enable everyone to learn coding—so are those people scientists or humanists? AI can also help people better appreciate beauty, read literature, and even write poetry. The entire methodology can change. Previously, we separated disciplines; AI gives us the chance to move beyond that.
The other day, my child was reading Harry Potter and asked me about a complicated plot point in the fifth volume—something neither of us fully understood. So we asked AI. We used ChatGPT and Gemini, asking step by step: what did Dumbledore do at that moment? What did Harry do? What did McGonagall do? After a series of questions, we finally understood the situation. This small example shows how many opportunities AI gives us.
But in the end, it still comes down to how people use this tool. What I fear most is human surrender—when people think, “AI is so smart, there’s nothing left for me to do.” That’s very frightening.
Jany Hejuan Zhao: People just “lie flat” and give up.
Fei-Fei Li: Right? I hadn’t heard that phrase before—it’s very vivid. And it’s scary. Humans have immense potential, countless opportunities to shape the world, and countless opportunities to make it a better place. AI is just a tool.
Jany Hejuan Zhao: Listening to you repeatedly say “AI is just a tool” today has really struck me. I know many people, including AI researchers, and ironically, those who don’t use or understand AI often think of it as a tool. But many people working in AI say the opposite—that AI isn’t just a tool, that AI is everything, the future, the world itself, and that we shouldn’t treat it merely as a tool.
Because you are a true AI expert and scientist, hearing this from you is particularly powerful. It’s a simple sentence, but it shapes how we perceive and understand AI. Language is a gate—it shapes how we understand the world.
Fei-Fei Li: Human nature—and human agency—is the most important thing. If we give up our agency, we give up our curiosity and motivation to change ourselves and the world.
I honestly don’t understand what people mean when they say “AI is the world.” I really don’t. One could just as well say “a single flower contains an entire world.” I don’t know what “AI is the world” means. Behind the phrase “AI is just a tool” is a view of the relationship between humans and AI: seeing AI as a tool means seeing humans as more important, and placing greater emphasis on humanity itself.
Ultimately, when I say “AI is a tool,” it reflects my faith in humanity—my faith in human nature and human society. I believe in humans. I do not believe in AI.
Dark Side of AI and AI Safety
Jany Hejuan Zhao: Earlier you mentioned that your family gave you many precious things. Could you share one or two examples of what your family gave you that you treasure most?
Fei-Fei Li: I actually mention this in my book. After finishing it, I realized that the book was really about my mother, not about me.
Jany Hejuan Zhao: Yes. I read the stories about your mother and your father, and I found them deeply moving. Coming from an ordinary family and growing step by step through your own efforts—it’s very inspiring.
Fei-Fei Li: Yes, my family was very ordinary, and quite small. In my childhood memories, there was my maternal grandmother, but on my father’s side there was no one. It was just a small, very ordinary family. My mother was in very poor health. But again, this wasn’t anything unusual—many families are like this.
What it gave me, though, were many precious things that I only understood after growing up. When you’re young, you spend so much time under survival pressure. But once you’ve walked that road, you realize—first of all—it truly forges your willpower. “Forging willpower” sounds like a big, abstract phrase, something people say that doesn’t mean much. But once you’ve been through those experiences, you don’t need to say it—it’s already there.
Second, although my work as an AI scientist is very “machine-like”—working with computers, algorithms, and data—my life experiences gave me a deep understanding of human nature. Those experiences, especially witnessing birth, aging, illness, and death, and seeing human vulnerability, gave me many perspectives. I think these are extremely valuable perspectives.
In Chapter 10 of my book, I wrote specifically about my mother’s illness. Why did I do that? Because I’m one of the very few AI researchers—perhaps the first or second—who is also a member of the U.S. National Academy of Medicine. And why did I become a member of the Academy of Medicine? Because over many years, I didn’t only work on AI as a professor; I also did a lot of work related to healthcare, especially healthcare delivery. Decades of accompanying my mother through illness meant I was constantly struggling with, and navigating, the healthcare system.
Jany Hejuan Zhao: Long illness makes a doctor.
Fei-Fei Li: Absolutely. So many surgeries, so many illnesses big and small, daily caregiving—every experience gave me a deep understanding of healthcare. When I later worked on AI-enabled healthcare projects, I realized that my understanding was very different from others’; I truly had deeper insight. It also allowed me to work better with colleagues in hospitals, because they felt I respected them. They could see I wasn’t someone who talked only about computer science: “You actually understand our work and our pain points!” That perspective was incredibly special.
Jany Hejuan Zhao: That’s remarkable—going from “long illness” to “a doctor,” and to a Member of the National Academy of Medicine.
Fei-Fei Li: No experience in life is wasted.
Jany Hejuan Zhao: So this was also driven by curiosity.
Fei-Fei Li: It was driven by both curiosity and survival—but ultimately, it was driven by love. I loved my mother and wanted her to live, to be healthy. That’s why I devoted so much energy. That motivation really came from love.
Dark Side of AI and AI Safety
Jany Hejuan Zhao: Professor Li, I personally have a major concern. As you know, the media industry has been discussing this a lot in recent years. AI is already having a huge impact on journalism, and it will push our industry through major changes. I’m also working on new products myself, hoping to build a better company and better products in the AI era.
But on the other hand, we’re seeing growing conflicts with long-held professional journalistic values. AI can now generate enormous amounts of text and images, and fake news and fabricated images are everywhere online. Many people can’t tell what’s real anymore. Even videos can be faked. How should we view the flood of misinformation AI may bring? I know you’ve also been personally affected—as a public figure, you’re especially vulnerable to online rumors. How should we think about this?
Fei-Fei Li: That’s true. And honestly, I deeply empathize. People often call me asking, “Where are you? What happened to you?” And I say, “Nothing—I’m at home sleeping.” There are all kinds of rumors. Some people who care about me are so worried that they don’t even dare to call, saying, “Something so big happened to you, we didn’t want to disturb you.” And I have to tell them: nothing happened.
Jany Hejuan Zhao: Exactly—everything’s fine.
Fei-Fei Li: Yes. So I really understand and empathize. As I said earlier, there are several layers to this issue. The first is public education. AI is a new thing. When cars were first invented, they were extremely unsafe—there were no seat belts, speeds were uncontrolled, and many problems existed. Humanity paid a heavy price in blood and tears before we gradually made cars safer. Even today, when your child becomes a teenager and starts learning to drive, you’re extremely nervous and provide extensive education.
I remember when I was young, my father repeatedly told me, “Never touch an electrical outlet with wet hands.” He must have said it 200 times. That’s education. When it comes to AI’s risks and fake news, public education is absolutely critical. This tool will always be used by people. In media, I’m sure you see how impossible it is to guard against everything. When fake news about me appeared recently, I asked a journalist friend, “Why are there so many fake stories? It’s unbelievable.”
Jany Hejuan Zhao: When I saw those stories, I wanted to message you and ask whether they were true.
Fei-Fei Li: Exactly. I didn’t even realize at first that AI was involved. Someone with ulterior motives could write one piece of content, and in the past, without AI, they’d have to write it themselves or hire others. Now with AI, they press a button and generate 1,000 or even 10,000 pieces instantly. AI truly empowers harmful behavior as well.
In this situation, I think the first step—for individuals and for society—is understanding. We will all encounter these things, and we need to recognize them. Once I realized something was AI-generated, I actually found it amusing and became curious about how AI could write like that—and I stopped feeling hurt.
More seriously, though: first, individual education and collective education are essential. Second, institutions and policies matter—and they must be built on understanding. Without recognizing the destructive potential of this tool, we won’t just face fake news, but many other harms. Without awareness, we’ll never develop better systems or ways to deal with it.
Third, I strongly support what you’re doing. The tool is here—it has both good and bad sides. As a media professional, how do you update your products? How do you use your wisdom and execution to create new products that are original and distinctive? For example, when I read your writing, I know it wasn’t written by AI—or even if AI was used, it still delivers real value. That depends on human creativity. It’s not easy.
In the AI era, we are sometimes victims of the tool and often deeply affected by it. But as I said before, the responsibility ultimately lies with us—how we use the tool, how we avoid being harmed by it, how we prevent it from harming others, and how we accomplish what we want to do. All of these responsibilities remain human responsibilities.
Jany Hejuan Zhao: Your remarks are insightful. Many young people admire you greatly. If you could say something to your 16-year-old self—just starting out on her journey of study—what message would you give to today’s youth?
Fei-Fei Li: I’m actually very poor at giving advice to young people. But I truly believe this is an extraordinary era. Technology is changing, society is changing, and young people today have countless opportunities. In the end, it’s up to you to seize them.
Carry your curiosity, and what I call the “North Star” in my book—the passion, belief, and sense of purpose in your heart. Be yourself, and work to change the world. That, I think, is the greatest opportunity this era offers you. I hope young people recognize this opportunity and give themselves the chance to go for it—just do it.
Jany Hejuan Zhao: Thank you, Professor Li. To close today’s conversation, I’d like to end with one of your own sentences: I hope humanity never gives up on itself.
Thank you again, Professor Li, for taking time out of your very busy schedule to have this conversation with me.
Fei-Fei Li: Thank you.