The race toward artificial general intelligence (AGI) is colliding with hard physical limits: the computing capacity driving smarter systems is expected to run head-on into grid capacity constraints.
For all the talk of breakthroughs in algorithms and scale, the real AGI bottleneck may not be code but current: the electricity that makes computing capacity possible. In short, AGI power, the juice needed to run envisioned future AI-driven systems, faces a major grid capacity crunch.
We’re not talking power for chatbots here, no matter how much their use is exploding. This is about AGI power, which is a completely different beast.
What makes AGI power unique is how its appetite multiplies as the models shift from chat into planning mode. Chat is one pass; planning is many.
AI-powered planning often uses reinforcement learning (trial-and-error updates guided by rewards), test-time computation (extra tries or searches at answer time to pick the best), and multi-agent orchestration (a broader “team” of AI tools that plans, verifies, and executes tasks in turns).
Each planning step adds more passes, more “what-ifs,” longer context, and tool calls. That is why AI planners can require 10 to 1,000 times more computation than a single chat response, and every extra cycle needs electricity.
To put it simply, chat is one and done; planning involves many steps. With trial-and-error rewards, extra search at answer time, and teams of AI tools talking in loops, the work multiplies and so do the watts.
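As a back-of-envelope illustration, here is a minimal sketch of that multiplication. Every number in it is an assumption chosen only to show how quickly the multipliers compound, not a measurement from any real system:

# Illustrative sketch only: how planning-mode work multiplies the energy of a
# single chat pass. All figures below are assumptions, not measurements.
CHAT_PASS_WH = 0.3  # assumed energy for one chat response, in watt-hours

def planning_energy_wh(agents=4, retries=8, search_branches=16, context_factor=2.0):
    # Rough multiplier: agents x trial-and-error retries x test-time search
    # branches x longer-context overhead.
    passes = agents * retries * search_branches * context_factor
    return CHAT_PASS_WH * passes

print(planning_energy_wh())  # ~307 Wh, roughly 1,000x the single chat pass above

With those assumed settings, one planning session burns on the order of a thousand chat passes, which is why the 10x-to-1,000x range above matters for the grid.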
The Mach-2 race car with no road
Imagine a race car designed to hit Mach 2, twice the speed of sound. For you numbers nerds, that’s 1,534 mph. On paper the car is a marvel of engineering. In practice, there’s no road to drive it on and precious little of the exotic fuel it needs to burn. That’s today’s, and especially tomorrow’s, AI power problem. Algorithms are advancing at breathtaking speed, but the power infrastructure beneath them is struggling to keep up.
Eric Schmidt, former Google CEO and longtime AI observer, recently testified before the US House Energy & Commerce Committee that the US alone may need 90 gigawatts of additional power to feed AI advances that are on the horizon. That is comparable to dozens of nuclear power plants’ worth of capacity.
Elon Musk has gone even further, bluntly warning that the fundamental limitation on AI is going to be electricity generation.
Arm CEO Rene Haas, who leads the company that designs the chips inside most of the world’s smartphones and is expanding into energy-efficient processors for data centers, has called AI’s energy appetite “insatiable,” and has warned it could exceed 20% of U.S. electricity by decade’s end if current trends accelerate.
The message is consistent: scaling artificial intelligence at this pace is scaling demand on the global power grid just as fast.
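To see why Schmidt’s figure translates into “dozens of nuclear power plants,” here is the rough arithmetic, assuming a typical large plant delivers about 2 gigawatts (an assumed figure; real plants range from roughly 1 to 4 GW):

# Rough arithmetic behind "dozens of nuclear plants' worth of capacity".
additional_demand_gw = 90   # Schmidt's estimate of added US demand for AI
plant_output_gw = 2         # assumed output of a typical two-reactor plant
print(additional_demand_gw / plant_output_gw)  # 45.0 -- dozens of plants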
Data center needs meet physical power limitations
The reasons outlined above are why one campus-scale data center already equals the power load of a mid-sized city. Multiply that by dozens or hundreds worldwide, and the grid starts to look less like background infrastructure and more like the ultimate throttle on innovation.
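For a sense of scale, a minimal sketch of that comparison, assuming a roughly 500 MW campus and an average household demand of about 1.2 kW (both assumed figures for illustration):

# Illustrative comparison of a campus-scale data center with household load.
campus_mw = 500          # assumed draw of a large AI data center campus
avg_household_kw = 1.2   # assumed average household demand
homes_equivalent = campus_mw * 1000 / avg_household_kw
print(round(homes_equivalent))  # ~417,000 homes -- a mid-sized city's worth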
This isn’t only an engineering problem. It’s a national strategy question. If power becomes the rate limiter for AGI, then countries with abundant, affordable electricity gain an edge as decisive as the best chips or smartest researchers.
Canada’s hydroelectric resources, the Gulf states’ willingness to fund grid expansion, and China’s aggressive power grid buildout all are examples of energy shaping AI’s competitive map.
For the United States, Schmidt’s 90-gigawatt estimate translates into a dilemma: how to expand grid capacity at the scale the tech sector is pushing for when traditional energy projects take decades to build. Without clear planning, the mismatch could stall progress, no matter how brilliant the software.
AI is often discussed as though its growth is unstoppable, as if it is a curve on a chart racing up and up. But the energy side of the equation tells a different story. Computing capacity is advancing at exponential speed. Grid capacity is not. The intersection is the grid capacity crunch: a point where ambition hits amperage.
The crunch is not here yet, but it’s coming into view. Schmidt’s 90-gigawatt call, Musk’s warnings, and Haas’s industrial outlook all suggest the same reality. The next stage of AI will be measured not just in model parameters but in megawatts.
The road ahead: now, near-term, later, never
AGI power is the headline, but electricity is the fine print. The road to artificial general intelligence must be wired into grids capable of carrying the load.
Here’s how the timeline likely unfolds, from what is already happening to what will never see the light of day.
Happening now — Helpful steps underway: placing large data center campuses near low-cost hydro, upgrading transmission where surplus power already exists, and getting more work per watt from newer chips and better cooling. These moves relieve pressure, but they don’t reverse the trajectory of AI-driven energy demand.
Near term through 2030 — The center of gravity. Big cloud companies and utilities are funding firm power and large interconnects, such as new natural gas plants to anchor reliable power, large solar or wind where land and permits allow, and billion-dollar server campuses tied directly into substations. Globally, several countries have announced multi-gigawatt buildouts for AI infrastructure, including roughly 10 gigawatts of new grid capacity in India and 5–10 gigawatts of new capacity across parts of the Gulf states. These are not hypotheticals; they’re underway.
Later, after 2030 — Larger grid upgrades that take time: major transmission corridors, long-lead generation projects, and market changes that better match supply and demand across regions. This phase can accelerate the power curve, yet not on today’s timelines. Goals like doubling U.S. grid capacity by 2030 may look possible on paper, but they face a daunting list of hurdles that include permitting, construction, and material supply chains.
Happening never — Endless AI scaling without more electricity. However appealing the theory that smarter code and more efficient microchips will erase the power problem, history shows the opposite: efficiency gains help, but bigger ambitions usually outpace them. The reward for good work is more work—and more watts.
That’s the paradox: intelligence that feels limitless is tethered to the most tangible of limits—available power to support its growth. The Mach-2 car is being assembled; the question is whether we can lay the road and produce the fuel to let it run.
FAQs
Q1: What is AGI power?
The electricity needed to run advanced AI systems, especially planning-mode models that repeat many passes and searches.
Q2: Why does AI energy demand keep rising?
Efficiency gains encourage more compute use—larger models, longer context, and extra searches—so total demand grows.
Q3: How much electricity could AI use by 2030?
Estimates suggest U.S. data centers could rise from ~4% of grid use today to over 20% by decade’s end.
Q4: Can smarter chips solve the crunch?
They help, but planning workloads grow faster than efficiency gains. The real limit is electricity supply.
This is part one of a three-part series.






