World Models AI: Why $2 Billion Is Flowing Away from LLMs

World models are the next big shift in AI. Here's what AMI Labs' $1B bet means for non-technical founders building AI-powered apps without code.


Over $2 billion has flowed into world model startups in the first three months of 2026 alone. First, Fei-Fei Li's World Labs closed a $1 billion round in February. Then, Yann LeCun's AMI Labs matched it with a $1.03 billion seed round in March, the largest seed ever for a European company. Add in earlier bets like Runway's world model release and NVIDIA's open-source Cosmos platform, and a pattern becomes hard to ignore.

The AI industry is placing its biggest bets not on better chatbots, but on AI that understands the physical world. If you're building products with no-code tools, this shift matters more than you might think.

What Are World Models in AI?

World models are a category of artificial intelligence systems designed to build internal representations of how physical environments work. Unlike large language models (LLMs) such as ChatGPT or Claude, which predict the next word in a sequence, world models predict the next state of a physical environment, accounting for physics, spatial relationships, object permanence, and cause-and-effect dynamics.

Here's a simple way to understand the difference: if you ask an LLM to describe a ball rolling down a ramp, it can produce a fluent written description. But it has no internal sense of gravity, momentum, or what happens when the ball reaches the bottom. A world model, by contrast, would maintain an abstract representation of the ramp, the ball, and the forces acting on it, allowing it to reason about what comes next in the physical scene, not just in the sentence.
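To make the contrast concrete, here is a toy next-state predictor for the ramp example: a few lines of Python encoding the physics that an LLM has no internal handle on. The incline, timestep, and frictionless assumption are illustrative choices for this sketch, not anything from a real world model:

```python
import math

# Toy "world model" state transition: a ball on a frictionless ramp.
# State = (position along ramp in m, speed in m/s). The model predicts
# the next physical state, not the next word.
G = 9.81                   # gravitational acceleration, m/s^2
ANGLE = math.radians(30)   # ramp incline (illustrative)
DT = 0.1                   # timestep, s

def next_state(position, speed):
    """Predict the state one timestep ahead using constant acceleration."""
    accel = G * math.sin(ANGLE)          # acceleration down the ramp
    new_speed = speed + accel * DT
    new_position = position + speed * DT + 0.5 * accel * DT**2
    return new_position, new_speed

# Roll out a short trajectory from rest.
state = (0.0, 0.0)
for _ in range(5):
    state = next_state(*state)
print(state)  # position and speed after 0.5 s of rolling
```

An LLM can describe this scene fluently; the point of a world model is that something like `next_state` lives inside the system itself.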

World models are currently being developed for use in robotics, autonomous vehicles, healthcare, industrial automation, and 3D environment generation. Key organizations building world models in 2026 include AMI Labs (founded by Yann LeCun), World Labs (founded by Fei-Fei Li), NVIDIA (through its open-source Cosmos platform), and Runway.

Where LLMs predict words, world models learn abstract representations of how real environments behave: physics, cause and effect, spatial relationships, time. As Fei-Fei Li put it when announcing World Labs' funding, "If AI is to be truly useful, it must understand worlds, not just words."

LeCun's approach at AMI Labs is based on JEPA (Joint Embedding Predictive Architecture), a framework he first proposed in 2022. Rather than generating pixel-perfect predictions (the approach behind today's glitch-prone AI video), JEPA learns higher-level patterns: the kind of understanding that lets a system know a ball will keep rolling even after it disappears behind a wall.
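For the technically curious, the joint-embedding idea can be sketched in a few lines: encode both the context and the target observation, predict the target's embedding from the context's, and score the prediction in embedding space rather than pixel space. Everything here, the linear encoder, the dimensions, the random data, is an illustrative stand-in, not AMI Labs' actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Joint-embedding predictive sketch: instead of reconstructing the target
# observation pixel-by-pixel, predict its *embedding*. All shapes and the
# linear encoder/predictor below are illustrative assumptions.
DIM_OBS, DIM_EMB = 16, 4
W_enc = rng.normal(size=(DIM_EMB, DIM_OBS))   # shared encoder
W_pred = rng.normal(size=(DIM_EMB, DIM_EMB))  # predictor in latent space

def encode(obs):
    """Map a raw observation to a low-dimensional embedding."""
    return W_enc @ obs

def predict(context_emb):
    """Predict the target's embedding from the context's embedding."""
    return W_pred @ context_emb

context = rng.normal(size=DIM_OBS)  # e.g. frame t
target = rng.normal(size=DIM_OBS)   # e.g. frame t+1

pred = predict(encode(context))
loss = float(np.mean((pred - encode(target)) ** 2))  # compared in embedding space
print(loss)
```

The design choice that matters is the loss: it penalizes errors in the abstract representation, so the model is never forced to predict every pixel of the next frame.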

The Funding Tells a Story

AMI Labs announced its $1.03 billion seed round on March 10, 2026, reaching a $3.5 billion pre-money valuation. Its investor list reads like an AI power map: Nvidia, Bezos Expeditions, Toyota Ventures, Samsung, and several major European funds including Cathay Innovation and Daphni. The company is headquartered in Paris, with offices planned in New York, Montreal, and Singapore.

But AMI Labs isn't alone. The world models space has attracted major capital from multiple directions in a short window:


  • World Labs (Fei-Fei Li): $1 billion in February 2026, with $200 million from Autodesk alone for integrating world models into 3D design workflows. The company's product, Marble, already lets users generate persistent 3D environments from text, images, or video.

  • NVIDIA Cosmos: An open-source world foundation model platform trained on 20 million hours of real-world data. Cosmos models are already being used by companies like Figure AI, Uber, and XPENG for robotics and autonomous vehicle development.

  • Runway: Released its first world model, GWM-1, in December 2025, adding physics-aware generation to its AI video toolkit.

PitchBook projects the world model market in gaming alone could grow from $1.2 billion (2022-2025) to $276 billion by 2030. The spatial computing market more broadly is projected to reach $1.2 trillion by 2035.

AMI Labs CEO Alexandre LeBrun is candid about the timeline, though. This isn't a startup chasing quick revenue. LeBrun has indicated the first usable models could take about a year, with initial focus areas including healthcare, robotics, wearables, and industrial automation. He's also predicted that world models will become the next buzzword in AI, with every company soon claiming the label to attract funding.

The company also plans to open-source much of its research and publish papers as it goes, a notable choice in an industry increasingly moving toward closed models.

What This Means for AI-Powered Apps

Here's where things get practical. World models don't just matter for robotics labs and self-driving car companies. They represent a fundamental upgrade in what AI can perceive and reason about, which eventually flows into the tools everyone uses.


  1. Smarter simulations and previews

Imagine building an app that lets users visualize how furniture would look in a room, not just as a flat overlay, but with accurate lighting, shadows, and spatial awareness. World Labs' Marble product already generates persistent 3D environments from simple inputs. When these capabilities arrive as APIs, no-code builders will be able to add physics-aware rendering to their apps without custom engineering.


  2. More reliable AI agents

If you've ever used an AI agent that loses track of context or makes decisions that ignore obvious real-world constraints, that's partly because current models lack spatial and causal reasoning. As Vikram Taneja, head of AT&T Ventures, told TechCrunch, physical AI is poised to hit the mainstream in 2026 as new AI-powered device categories enter the market. World models could make AI agents significantly more grounded: better at planning, sequencing tasks, and understanding consequences.


  3. New categories of no-code apps

As world model capabilities trickle into platforms and APIs, expect new building blocks for non-technical creators: drag-and-drop components for 3D environments, physics simulations, or sensor data dashboards that actually understand what they're measuring. The gap between "AI that writes text" and "AI that understands environments" is where new product categories will emerge.

How No-Code Builders Can Start Thinking About This

World models are still early-stage technology. AMI Labs won't have commercial products for at least a year, and broad platform integration will take longer. But that doesn't mean there's nothing to do right now.


  1. Prototype interfaces for spatial and physical data

If you're building with a no-code platform like Emergent.sh, you can already start designing apps that present 3D data, sensor readings, or environment simulations. When world model APIs become available, you'll be ready to plug them in.


  2. Watch for world model APIs

Just as OpenAI's API made text generation accessible to non-developers, world model APIs will eventually let builders add physics-aware reasoning to their apps without understanding the underlying math. NVIDIA's Cosmos models are already open-source and available on Hugging Face. World Labs' Marble offers free and paid tiers. Keep an eye on AMI Labs' planned open-source releases, too.


  3. Think beyond text-based AI

The next wave of AI-powered products won't just chat. They'll understand spaces, objects, and physical processes. If you're brainstorming your next app, consider use cases where understanding the real world, not just language, is the core value. Logistics dashboards that model warehouse flow. Fitness apps that understand body mechanics. Real estate tools that simulate renovations. With 94% of companies globally now using AI in at least one business function, the demand for more capable, spatially aware applications is only going to grow.

Key Takeaway for Builders

World models represent a shift from AI that processes language to AI that understands environments. That shift is still early, but over $2 billion in funding in a single quarter makes the direction clear. For non-technical founders and solo builders, the opportunity isn't to build world models yourself. It's to be ready to use them when they arrive as accessible APIs and platform features. The builders who prototype spatial, physics-aware, and environment-driven apps now will have a head start when the tooling catches up.

If you want to get moving today, Emergent.sh gives you the tools to go from idea to live product without writing code. And for practical starting points, check out our guide on how to make money with AI without being a developer, explore the best AI app builders for 2026, or browse vibe coding examples for non-developers to see real projects you can replicate.

We publish two new articles every week covering the AI developments, tool launches, and trends that matter most for builders like you. Keep following this space.

Build production-ready apps through conversation. Chat with AI agents that design, code, and deploy your application from start to finish.

Copyright

Emergentlabs 2026

Designed and built by

the awesome people of Emergent 🩵
