Can a Virtual Fish Outsmart Your Cat? The Surprising Link Between Zebrafish and the Future of AI
January 12, 2026
By: Marylee Williams
Email: [Email]
Media Inquiries: Aaron Aupperlee, School of Computer Science, [Email]
Aran Nayebi, an assistant professor at Carnegie Mellon University’s School of Computer Science, jokingly remarks that his robot vacuum has a larger computational capacity than his two cats, Zoe and Shira. Yet, while the vacuum mindlessly follows a preset path, his feline companions leap, play, and explore their environment with genuine autonomy. This stark contrast sparked Nayebi’s curiosity: What if we could create AI that mimics this natural, self-driven exploration?
Now Nayebi and his team at CMU have developed a virtual zebrafish that exhibits animal-like autonomy in a simulated environment, without any prior training. This isn’t just about mimicking movement; it’s about replicating the curiosity and adaptive behavior that define biological intelligence. Their work hints at a future in which autonomous “AI agent scientists” sift through vast datasets, uncovering patterns and insights free from human bias. But can machines truly replicate the serendipity of human discovery?
The team’s research, published with collaborators Reece Keller, Alyn Kirsch, Felix Pei, Xaq Pitkow and Leo Kozachkov, focuses on a computational model called 3M-Progress (Model-Memory-Mismatch Progress). The model lets the virtual zebrafish explore its environment by comparing new sensory experiences against its prior understanding of how the world works. For instance, if the zebrafish expects a tail flick to propel it through the water but no movement results, the mismatch triggers an update to its internal model, driving curiosity-like exploration.
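To make the idea concrete, here is a minimal sketch of a mismatch-driven curiosity signal: an agent keeps a simple forward model of its world and is "rewarded" by how much each surprise improves that model. Every name and equation below is an illustrative assumption for this article, not the published 3M-Progress code.

```python
import numpy as np

class WorldModel:
    """Tiny linear forward model: predicts the next observation from the current one."""
    def __init__(self, dim, lr=0.1):
        self.W = np.zeros((dim, dim))  # starts knowing nothing
        self.lr = lr

    def predict(self, obs):
        return self.W @ obs

    def update(self, obs, next_obs):
        """Nudge predictions toward what actually happened."""
        err = next_obs - self.predict(obs)
        self.W += self.lr * np.outer(err, obs)

def intrinsic_reward(model, obs, next_obs):
    """Curiosity signal: how much did this experience shrink prediction error?"""
    before = np.linalg.norm(next_obs - model.predict(obs))
    model.update(obs, next_obs)
    after = np.linalg.norm(next_obs - model.predict(obs))
    return before - after  # positive while the agent is still learning

rng = np.random.default_rng(0)
model = WorldModel(dim=4)
obs = rng.normal(size=4)
# The "tail flick that doesn't work": the next observation violates expectations.
next_obs = obs + rng.normal(scale=0.5, size=4)
r = intrinsic_reward(model, obs, next_obs)
```

The key design choice, as the article describes, is that `r` comes from the agent's own model improving, not from any externally defined task reward.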
The memory component of 3M-Progress is twofold: it combines real-time experience with an “ethologically relevant prior memory,” a built-in understanding of how the world should work. This dual-memory system gives the AI agent just enough flexibility to construct its own intrinsic goals, mirroring the exploration behavior of real zebrafish. But does this mean the AI is truly ‘thinking’ like an animal, or is it just sophisticated pattern recognition?
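The dual-memory idea can be sketched in a few lines: novelty is scored against both a rolling buffer of lived experience and a fixed innate prior. This is a toy illustration under my own naming assumptions, not the authors' implementation.

```python
import numpy as np

class DualMemory:
    """Toy dual memory: recent experience plus a fixed 'ethological' prior."""
    def __init__(self, prior_mean, buffer_size=50):
        self.prior_mean = np.asarray(prior_mean, dtype=float)  # innate expectation
        self.buffer = []                                       # lived experience
        self.buffer_size = buffer_size

    def observe(self, obs):
        self.buffer.append(np.asarray(obs, dtype=float))
        if len(self.buffer) > self.buffer_size:
            self.buffer.pop(0)  # forget the oldest experience

    def mismatch(self, obs):
        """Return (experience_mismatch, prior_mismatch) for a new observation."""
        obs = np.asarray(obs, dtype=float)
        exp = (min(float(np.linalg.norm(obs - m)) for m in self.buffer)
               if self.buffer else float("inf"))
        pri = float(np.linalg.norm(obs - self.prior_mean))
        return exp, pri

mem = DualMemory(prior_mean=[0.0, 0.0])
mem.observe([0.1, 0.0])
exp_mis, pri_mis = mem.mismatch([2.0, 0.0])
# An observation far from both memories would be a candidate intrinsic goal.
```

A large mismatch on both channels marks an observation as genuinely new, which is roughly the flexibility the article says lets the agent construct its own goals.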
One of the most striking findings came when the researchers recreated a scenario from previous biological studies. When real larval zebrafish lose the ability to swim due to severed tail function, they enter a state of futility-induced passivity—they try to swim, realize they can’t, and temporarily stop moving. Remarkably, the virtual zebrafish exhibited the same behavior without any prior knowledge of this state. Is this a breakthrough in AI autonomy, or are we merely replicating biological mechanisms without understanding the underlying consciousness?
Nayebi argues that this cyclical behavior—trying, failing, and trying again—is a key step toward recreating animal-like autonomy in AI. By focusing on intrinsic motivation rather than external rewards, the team’s approach challenges traditional AI training methods. For example, while a robot vacuum is rewarded for cleaning a mapped area, the virtual zebrafish explores its environment driven by curiosity, not rewards. But if AI can’t feel curiosity, can it ever truly be autonomous?
Not everyone is convinced. Some critics argue that the AI’s exploration is merely an illusion of autonomy, a product of programmed algorithms rather than genuine self-awareness. Nayebi counters that as AI tackles more complex problems, its solutions increasingly resemble how the brain works, not because the AI is becoming conscious, but because certain problems admit only a limited set of optimal solutions. So are we creating intelligent machines, or just building better tools?
The team’s next steps include applying this autonomy across different AI embodiments, not just zebrafish. But as we push the boundaries of AI, how do we ensure these systems remain aligned with human values?
What do you think? Is AI’s autonomy a step toward true intelligence, or are we overestimating its capabilities? Share your thoughts in the comments below—let’s spark a conversation about the future of AI and its place in our world.