Tuesday, November 18, 2025

The Death of LLMs Will Be the Birth of Humanoid Robotics

Image: A robot saluting deceased LLMs for their sacrifice (created with ChatGPT via MS Copilot).

TL;DR for this post: 2035 will be "The Year of the Humanoid Robot."

I'm not alone in this prediction; Nvidia CEO Jensen Huang expects 1 billion humanoid robots in use by that same year. While I'm not that bullish, even my measured optimism will no doubt come as a shock to anyone who knows my pessimistic views about the current state of artificial intelligence. (We're in an AI bubble that's going to crash very hard in the next 18-36 months.) But it's precisely because the current incarnation of AI is doomed that I'm so bullish on the near-term future of robotics.

Stick with me; it will all make sense in the end.

The AI Arms Race (to Nowhere)

I have some sympathy with the current AI heavyweights building pharaonic monuments to hyper-scale computation, not least because they have no choice. The infamous Bitter Lesson has taught these companies that there is no apparent elegant solution for Artificial General Intelligence -- AKA the Holy Grail of AI research -- so they simply have to keep feeding their models more computing power and more training data until some magic tipping point is passed and a human-equivalent synthetic mind pops out.

Meta and Anthropic and OpenAI and Alphabet and Microsoft and Amazon have no choice but to pursue AGI -- and spend heretofore incomprehensible amounts of money chasing it -- because whoever gets there first is assumed to claim the greatest first-mover advantage in computing history. To secure this advantage, they need bottomless sources of computing power and data, so they are building data centers on a Biblical scale and wedging AI "assistants" into every conceivable app to "talk" to us and capture ever more contextual language data.

First one to Skynet wins.

Also, chasing AGI boosts stock valuations beyond all previous fundamentals, while abstaining from the race would wreck those same valuations. CEOs at AI companies are trapped in the same feedback loops that have created past tech bubbles, only with orders of magnitude more capital in play.

That's why the AGI arms race is happening. Yet, if so many smart people are putting their money where their mouths are when it comes to AGI, why would I doubt that AGI is coming? 

The AI Nuclear Winter

My skepticism is based on the fact that everyone is chasing Large Language Model/Transformer AI, and there's no evidence that improving LLMs leads to AGI. Yann LeCun just bailed on Meta because he doesn't think their LLM investment is going anywhere. (LeCun thinks World Models are the future. So do I.)

Moreover, Large Language Models are trained on language samples, and we've already trained LLMs on the entire Internet (legally or not) -- there's no giant, untapped corpus of language left to feed them. Again, this is why Microsoft is wedging Copilot into every conceivable app: they want you to "talk" to your apps every day so they can feed that fresh language to their LLMs. The same goes for every other non-Microsoft descendant of MS Clippy we encounter in daily life. In fact, companies are so desperate for new language data that LLMs are starting to train on the current LLM-poisoned public web rather than language written by humans, causing them to hallucinate in new, weirder, less predictable ways.

LLMs probably aren't going to significantly improve, let alone evolve into AGI, because we've taught them all we can with all the language we have.

The counter-argument from LLM optimists is: So what if we don't get AGI out of this spending spree? Even AI that falls short of AGI still makes us wildly more productive and these companies wildly more profitable.

Does it, though?

For starters, every AI company is price-dumping right now, which means they're losing money building AI and losing more money by selling it. (And they can't raise prices because China is price-dumping even harder to stay in the game and preserve its role as the cheap tech supplier of choice.) AI developers care more about accessing your data than generating revenue right now because your training data is that valuable. Thus, ubiquitous AI features will remain unspoken loss-leaders for AGI research for the foreseeable future.

It's unlikely anyone will pay more than today's subscription prices for the current (or even improved) versions of LLM AIs. The technology also isn't going to get cheaper while AI developers are investing GDP-level capital into building hyperscale data centers at a wartime industrial pace. AI won't get profitable because AI creators simply don't care to try to make money on it right now and likely couldn't even if they did.

As to AI boosting productivity, 95 percent of AI business projects fail: companies don't know how to use AI effectively or what tasks to assign it, leaders fundamentally don't understand why AI's shortcomings can't be easily fixed, and the workers who are "given" AI tend to be less productive and actually see their output negatively warped by AI. For crying out loud, AIs still can't read tabular data; current LLMs barf on CSV files that MS Excel and first-year interns have parsed effectively (and far more cheaply) for 40 years.
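For contrast, here's roughly what that 40-year-old parsing job looks like as a few lines of deterministic code -- a minimal sketch; the file name "sales.csv" and the "amount" column are placeholder assumptions, not anything from the post.

# Minimal sketch: deterministic CSV parsing with Python's standard library.
# The file name and column name below are hypothetical placeholders.
import csv

def column_total(path: str, column: str) -> float:
    """Sum a numeric column from a CSV file -- no model, no hallucination."""
    total = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += float(row[column])
    return total

if __name__ == "__main__":
    print(column_total("sales.csv", "amount"))

No GPU cluster required, and the answer comes out the same every time.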

Frankly, the only thing the current version of AI appears good at is crime, which suggests consumers are more likely to turn against AI than embrace it. Soured public sentiment may lead to AI regulation that constrains its capabilities rather than expanding them.

AI makes most people and most companies worse at their jobs; they just can't afford to quit it right now because a) they're afraid of missing the boat on AI productivity gains and b) everyone is convinced AI will actually become good any second now. AI productivity is a myth and, as soon as that myth is disbelieved, the whole AI market will crash.

Also, people hate data centers, so the AI house of cards may crumble before they even finish building it.

This, strangely, is all good news for roboticists, the economy, and us. We just have to endure the fallout first.

Eyes on the Prize

We're building absurd levels of computing infrastructure to improve an AI technology that's probably already as good as it's going to get. The crash is inevitable, but when it comes, that infrastructure will still be here -- and will be rentable for pennies on the dollar. This mirrors the dot-com bubble of the Y2K era, whose overbuilt infrastructure powered the online and mobile app renaissance that followed a decade later. Pets.com died so that enough cheap infrastructure was available to allow Amazon and Google to conquer the world.

So why am I saying the next tech renaissance will be in robotics, not AGI? 

That's easy: Smart glasses.

LLMs are demonstrably bad at running robots, but that's probably a training data problem. Roboticists don't really know how to build good hands, but even if they did, hands are really hard to program for. That's why Tesla has a whole lab dedicated to documenting how humans use their bodies to perform tasks: it wants that data to train its Optimus robots. Apple is making similar investments.

Humanoid robots are critical because we don't have to redesign our lived environment to accommodate them. Humanoid 'bots can in theory use all the same doors and stairs and tools and appliances we do without having to adapt either the robots or those places and things to each other. But humans are remarkably sophisticated mechanisms with highly evolved balance and environmental manipulation features. Building equivalent mechanical devices is hard; writing software for them is significantly harder.

Tesla's approach is expensive, and comparable efforts would be just as costly for any other robot developer. Even if the data centers to train robot-driving AIs are cheap, you also need a cheap, semi-ubiquitous source of human physical interaction data. You need the physio-kinetic equivalent of MS Copilot: an AI data siphon that's everywhere, all the time, building a constant flow of source data to train your humanoid robotics software.

Smart glasses can and will be that siphon. 

Despite a clumsy rollout, Meta's AI glasses are fairly impressive and they (or gadgets like them) are already being put to productive use. Physicians are using augmented reality glasses to assist in so many medical procedures that there are academic meta-studies of their efficacy. Start-ups are building smart glasses apps for skilled trades. Any place humans are performing sophisticated tasks with their hands, smart glasses are there to assist and to learn. This is training data gold.

Smart glasses can provide high-quality video recordings of humans using their hands in the real world, annotated and contextualized by AI apps in real time. The makers of these devices will have an unfair advantage in designing humanoid robotics software because they will have all the critical sample data for common and edge cases. 
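To make "annotated and contextualized" concrete, here's a purely hypothetical sketch of what a single training record from such a device might contain; the HandPoseSample type and every field name are illustrative assumptions, not any vendor's actual schema.

# Hypothetical sketch of one smart-glasses training record.
# All names here are illustrative assumptions, not a real vendor schema.
from dataclasses import dataclass, field

@dataclass
class HandPoseSample:
    timestamp_ms: int                 # when the frame was captured
    video_frame_uri: str              # pointer to the raw egocentric video frame
    hand_keypoints: list[float]       # estimated 3D joint positions for both hands
    gaze_target: tuple[float, float]  # where the wearer is looking in the frame
    task_label: str                   # AI-generated annotation, e.g. "tightening a bolt"
    transcript: str = ""              # anything the wearer said while working
    metadata: dict = field(default_factory=dict)  # tool, location, app context

# A robot-control model would train on millions of records like this,
# mapping what a person sees and says to how their hands actually move.

Multiply something like that by millions of wearers performing billions of everyday tasks, and you have the corpus that robot software has been missing.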

Once enough of this data has been gathered, there will be a plethora of cheap data centers to train robot operating models. The chip-makers that overbuilt capacity to supply hyperscale data centers will also be ready to build the edge computing processors to run the robots at shockingly affordable prices. Finally, when a compelling confluence of robot bodies, software, and capabilities arrives, all the manufacturing capacity built for the LLM boom, now sitting idle, can ramp up to build the army of robot CPUs we've always dreamed of (and maybe feared).

The Waiting is the Hardest Part

Now, I don't expect smart glasses to become common consumer gadgets in the next 18 months, but perhaps over the next 60 they'll get less dorky and more compelling. There's a social acceptance shift that needs to happen, too; turning everyone wearing glasses into a surveillance drone is only going to be palatable when we have good privacy systems in place and the benefits outweigh the costs.

This is why I suspect it will be 10 years before we see a humanoid robotics boom, not five. Smart glasses are critical to the development of humanoid 'bots, and we're still a few years away from Meta Ray-Bans and the like being useful enough that people are willing to give up even more personal privacy to adopt them. (The same privacy concerns will accompany household robots that listen to and watch you in order to receive instructions and perform their tasks; smart glasses will lay the groundwork for this comfort.)

In 2030, we'll be crawling out of the economic recession caused by the LLM crash. In 2035, we'll be riding the wave of a robotics revolution. We just have to survive the AI Nuclear Winter first.