Google DeepMind has unveiled an experimental AI system that can generate interactive, explorable virtual worlds from a single sentence or image, pushing the boundaries of how artificial intelligence can simulate reality.
Announced on January 29 via Google’s Innovation and AI blog, the project, called Project Genie, is a research prototype designed to transform simple prompts into dynamic digital environments. The system is powered by Genie 3, DeepMind’s latest “world model” AI.
Unlike traditional image or video generators, Genie 3 creates worlds that respond to user actions in real time. Users can walk, fly, or drive through landscapes while the AI generates new surroundings on the fly. These environments can range from realistic settings such as forests and deserts to imaginative, fantasy-style worlds.
Thrilled to launch Project Genie, an experimental prototype of the world’s most advanced world model. Create entire playable worlds to explore in real-time just from a simple text prompt – kind of mindblowing really! Available to Ultra subs in the US for now – have fun exploring!
— Demis Hassabis (@demishassabis) January 29, 2026
Google says world models like Genie 3 mark a major shift in AI research. Rather than producing static visuals or short video clips, they aim to simulate living environments that maintain consistency and logic as users explore them.
DeepMind first revealed Genie 3 in 2025, describing it as a breakthrough for its ability to sustain interactive worlds for several minutes.
According to Google, Project Genie allows users to sketch worlds using text or images, explore them freely, remix environments, and export creations as videos.