A "Dark Data Center" Crisis
As much as we fear a financial bubble bursting, the real worry is what happens in its aftermath.
Full disclosure: I work with several tech companies doing interesting and valuable things with AI, and all are focused on customer needs. They are exceptional, and I believe they will be the survivors you’ll see on the other side of the coming grind. I’m careful who I work with.
Yesterday I wrote about one of the biggest problems Artificial Intelligence faces in becoming a widespread and important part of the economy: Most tech companies have a hard time understanding, or even liking, the companies that they count on to pay them.
That has been a longstanding problem in tech, and it leads to periodic clean-outs of ill-conceived, low-empathy companies that offer alienating, nonessential products. But there’s also the hard technical problem of building out an AI ecosystem that businesses can use. This doesn’t get headlines, since it’s as unsexy as an engine room, but it’s essential, and uniquely hard where AI is concerned.
What’s more, if an AI ecosystem isn’t built before the bubble bursts, it’s going to be hard for tech to reuse its big investments in productive ways.
This problem won’t simply affect tech. It matters for the larger economy too. Much of the world’s capital spending now funds chips, data centers, networks, AI startups, and much else in the race to expedite the expected transformational (and positive) economic effect of AI. The longer the transformation from investment to economic productivity takes, the harder it becomes for companies to pay off the debt, and the less valuable much of the core capital equipment, like AI chips, becomes.
You don’t see this problem on the surface. The eye-catching AI powerhouses of Large Language Models, which enable contemporary AI to hold conversations and make quirky videos, are now well understood and managed by the likes of OpenAI, Google, Anthropic, and a few others. The AI products are far from perfect, and they never will be,1 but they are impressive, and the companies have lots of cash and talent.
To enable LLMs to connect and function with the rest of the economy, however, lots more remains to be done. The big players are doing some of this work, but the necessary ecosystem of independent developers, midsize players, and integration consultants is, at best, primitive.
Deploying and managing the expensive chips in AI is still hard and inefficient. Power supplies are a nightmare.2 The data LLMs use is of uneven quality, and often leads to hallucinations. The value for most companies will come from their own proprietary data, but getting it suited to AI is often laborious, up-close work. It will require lots of new software to make that data cleaner and more reliable.
Governance and security, key issues for big companies, are mostly an afterthought. Companies each build their AI products and services in their own way, when standardization is the only path to efficiency and uptake.
Moreover, many corporations, in a rush to show they “get” AI, are cranking out junk versions of trendy technologies, then leaving them unimproved and untended. AI developers are laughing at this, while recognizing it’s a huge problem for them: They need successful standard installations, spread across the industry, in order to build sophisticated task-completing AI “agents.” The current quality of a customer service chatbot, a relatively simple task, is pretty low, and at most companies that’s about as good as it gets.
This is one area where a crash would help. There will be plenty of pain, failure and consolidation before the industry has the kind of standards around which companies can start to develop important products. That kind of development usually happens during downturns, after a bubble bursts and speculative work gives way to the focus of necessity. Personal computers really got going during the 1981-1983 recession, and the computer servers that eventually proved core to the Internet came into their own during the 1992-1994 downturn.
History shows what often results from this harsh necessity. After the 1873 market crash destroyed many railroad businesses, there was a lot of consolidation, and eventually standardization of things like track widths. But most locomotive parts didn’t have to change much, and could be reused after demand recovered. After the 1929 crash, Radio Corporation of America never recovered its pre-crash high, but vacuum tubes and radio spectrum could be reused and made valuable again, as people invented new radio entertainments and ads for Depression-era folks looking for cheap fun.

The Internet bubble, our most recent big bust, was mostly a telecommunications capital spending bubble which burst to the tune of some $50 billion. A few years after the crash much of the wireless spectrum and fiber optic cable that people had strung, buried, and sunk was sold for dimes on the dollar. It soon found new life as the means to all that outsourcing and offshoring, as cash-strapped companies looked for new ways to save money.
More fatefully, the early staff at Google made use of this so-called “dark” fiber to build out the basis of what’s probably still the world’s largest private network. They and others bought up excess and defective computers from production overruns, and developed software that worked around the hardware errors and let the machines operate as a single big computing system. This facilitated the early development of what would become cloud computing.
The AI bubble looks like a dangerous exception to this pattern of creative destruction. There are big data centers going up around the world, and they’re being filled with all kinds of expensive chips. It’s still pretty hard to get these chips up, networked, and tuned. The longer it takes to put them to work, the longer the dead time when they’re not earning for their owners, and helping to pay down debt.
And not just any earning will do: For all of this to work, AI companies need valuable demand from high-paying corporate customers. Consumers aren’t worth as much, and tend to be fickle. That’s why the lack of connection in the overall AI ecosystem is a big problem. If the ecosystem isn’t there soon, kablooey.
That’s only part of the worry, though. The dark fiber we had in 2001 could be lit up again and it was still very valuable when demand picked up. But would the same be true for dark data centers, if that is the outcome of this bust? Lighting them up would be tougher. Given the speed of change in the semiconductor business, old AI chips will be far less valuable in three years than they are today. How does something like that get sold for pennies on the dollar, repurposed, and made into something even more valuable?
In fairness, I’m the kind of person who would ask a negative question like that. By nature I’m skeptical, and for more than two decades as a journalist, I was paid to be paranoid. Thus I have a bias to see the problem, and I have to work harder than most to trust human ingenuity, and the innovative desperation of necessity. I’ve recently talked with someone using AI to bring AI chips online faster than ever, and with a data expert who has a plan to make all data suitable for AI-type analysis.
So maybe I’m wrong, and something even more magnificent might be made from the ruins of dark data centers.
On the other hand, not all bubbles burst and create wonderful new productivity. Japan, whose financial markets I covered from 1991-1994, had almost nothing good to show from its corrupted balance sheets, and spent 30 years recovering. In the US, lots of the construction behind the mortgage-centered 2008 financial crisis was bought up by big real estate investors who didn’t have to react quickly to price pressures. This is one reason why housing prices are still out of whack, and rentals are so high. Shopping mall busts gave us…empty shopping malls. They make for great photographs, but little else. Not every bust has a productive outcome.

The issue is whether inside a dark data center there’s something valuable that can be reused, or whether old chips are old chips. If the latter, those now-magical semiconductors may come to be seen as only a proxy for financial speculation. We’ll find out what kind of a bubble this was, productive or wasteful, if or when creative technologists start doing interesting new things with the spare parts they have before them. If they can’t, we’ll have $1 trillion of broken dreams, scattered across the globe.
Hallucinations, or what happens when an AI starts making stuff up, are in an important sense a feature, and not a bug, of Generative AI, the type of AI at the center of the AI boom. GenAI is based on statistically analyzing unimaginable amounts of data, and making a best guess at the next word/sentence/paragraph/pixel in an image. Statistically, which is to say “without total certainty.” The makers of these systems have gotten very good, and in the future will probably add all kinds of fact-checking safeguards, but certainty is impossible.
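The mechanics can be sketched with a toy example (every name and probability below is invented for illustration, not taken from any real model): the system picks the next token in proportion to probabilities learned from data, so even a well-trained model always carries a small chance of emitting a wrong answer.

```python
import random

# Toy "next token" distribution for the prompt "The capital of France is ..."
# All probabilities are made up for illustration.
next_token_probs = {
    "Paris": 0.90,   # usually right...
    "Lyon": 0.07,
    "Berlin": 0.03,  # ...but a wrong guess is always possible
}

def sample_next_token(probs):
    """Pick a token in proportion to its probability (roulette-wheel sampling)."""
    r = random.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fall through on floating-point edge cases

# Each call is a fresh statistical guess, never a certainty:
print(sample_next_token(next_token_probs))
```

Roughly one answer in ten here would be a "hallucination," and no amount of tuning the probabilities reaches exactly zero.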
Matt Levine, whose brains and productivity awe me, had an excellent piece Wednesday on OpenAI’s need for $10 billion in power supplies, along with the many challenges and promises of OpenAI’s CEO, Sam Altman. A highly rational person, Matt finds Altman hard to take entirely seriously.
One lasting outcome of the AI build is likely to be increased focus on new energy sources. Altman likes nuclear power, even fusion, but the more interesting development is how much effort China is putting into deploying and creating new efficiencies in solar power, as it emerges as a clean-energy “electrostate.” The U.S., unfortunately, has walked away from its leadership in this important area.
There is a recent development I think might be worth a look: corporate insurers are seeking regulatory approval to exclude AI failures from their coverage obligations.
This is primarily an LLM issue, but it could affect more AI technologies than just generative ones.
The key concern is that if it goes through, the risks of deploying the current generation of models will have to be borne by the organizations using the products.
That could lead more corporate clients to delay commercialization of AI-based products for fear of realizing massive potential losses, further hurting adoption of the technology that requires these massive GPU-specific data centers.
Quentin, I can always count on your writing to make me think about something in a useful and fresh way. Really good as ever.