ru24.pro
News in English
July 2024

AI Investors Are Starting to Wonder: Is This Just a Bubble?

Photo-Illustration: Intelligencer; Photo: Getty Images

The clearest winner of the recent AI boom isn’t OpenAI, which arguably kicked the whole thing off, nor is it Microsoft or Google, which plowed money into AI and AI into their products in exchange for healthy boosts to their stock prices. The greatest beneficiary of the flow of capital into any and all things LLM is Nvidia, which designs and sells the best chips for training and running modern AI models. Tech giants have been buying them up by the tens of thousands, using them for their own purposes but also renting them out to other, smaller firms. Cloud providers are all in. Venture firms are building their own clusters to win over startups. Early this month, this multi-year buying briefly made Nvidia the most valuable company in the world.

The tech industry’s hundreds of billions of dollars of investment in AI is, in other words, largely an investment in Nvidia chips, hardware and infrastructure needed to support and deploy Nvidia chips, and the power needed to power Nvidia chips. One common way to think about what’s been happening for the last few years is that the biggest players in AI have all been competing to design, train, and deploy the most capable AIs, primarily in the form of large, expensive, general-purpose “foundation” models, hoping that they’ll win customers through a combination of better engineering, better data, and smarter research or product bets. Another way to describe it is that a lot of players in AI believe that computing power is destiny — models themselves can quickly become obsolete — and are hoarding as much physical hardware as possible, and building facilities to contain it.

A multi-hundred-billion-dollar race to build fundamentally similar supercomputers raises one extremely straightforward question that AI firms have been able to remain vague about for a couple of years now: For what? AI executives have treated the answer as too obvious to explain: AGI (artificial general intelligence) or more lately ASI (artificial superintelligence). They believe, or say they believe, that they’re on the cusp of building machines that can automate, at minimum, a great deal of economically productive labor. The more AI chips they have, the greater share of labor they can automate. The upside is so incredible that there’s no need to get bogged down in the details. AGI will provide its own business models when it arrives.

As we approach the two-year anniversary of the release of ChatGPT, however, demand for AI chips appears to be cooling slightly, and notable investors and analysts are starting to ask for a little bit more detail. At Sequoia, the venerable VC firm, partner David Cahn revisited a nagging question he posed late last year:

At that time, I noticed a big gap between the revenue expectations implied by the AI infrastructure build-out, and actual revenue growth in the AI ecosystem, which is also a proxy for end-user value… This week, Nvidia completed its ascent to become the most valuable company in the world. In the weeks leading up to this, I’ve received numerous requests for the updated math behind my analysis. Has AI’s $200B question been solved, or exacerbated?

If you run this analysis again today, here are the results you get: AI’s $200B question is now AI’s $600B question.

Cahn is unpersuaded by the argument that buying up GPUs is like building railroads — GPUs depreciate, rapidly become obsolete, and don’t get you much in the way of monopolistic pricing power. Even if the analogy held, he argues, lots of people lost a lot of money at the front end of the railroad boom. Cahn is generally optimistic about the long-term potential of AI, which he describes as potentially a “generation-defining” technology wave, and suggests lots of potential upside for some investors. But, he says, a dangerous “delusion” has taken hold: “that we’re all going to get rich quick, because AGI is coming tomorrow, and we all need to stockpile the only valuable resource, which is GPUs.”

Meanwhile, at Barclays, a group of analysts tried to run some numbers, estimating industry capital expenditure on AI and using research publications from Google to hazard a guess at how much all this new infrastructure can support, in terms of actual AI products:

“Based on the 2026 consensus AI inference capex above,” the report says, “we estimate that the industry is assumed to produce upwards of 1 quadrillion AI queries in 2026. This would result in over 12,000 ChatGPT-sized products to justify this level of spending, illustrated below.” This is, one could argue, at least consistent with the most extreme AGI rhetoric, in that it seems to assume total and indiscriminate deployment of AI across every industry and beyond. It’s also consistent with not having much of a specific plan at all, especially given recent moves by Microsoft and Apple to bring AI processing back onto users’ devices from the cloud, and the proliferation of smaller, more efficient AI models that don’t have as much use, either during training or operation, for 10,000-GPU supercomputer clusters.
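For a rough sense of scale, the two figures quoted from the Barclays report can be turned into a back-of-envelope calculation. Only the 1-quadrillion-query and 12,000-product numbers come from the report as quoted above; everything else here is simple arithmetic, not Barclays’s methodology.

```python
# Back-of-envelope arithmetic using the two figures quoted from the
# Barclays report: roughly 1 quadrillion AI queries in 2026, and "over
# 12,000 ChatGPT-sized products" needed to absorb that capacity.
total_queries_2026 = 1e15          # ~1 quadrillion queries (Barclays estimate)
chatgpt_sized_products = 12_000    # products needed to justify the spend

# Implied annual query volume for a single "ChatGPT-sized" product.
queries_per_product = total_queries_2026 / chatgpt_sized_products

print(f"Queries per product per year: {queries_per_product:.2e}")
print(f"Queries per product per day:  {queries_per_product / 365:.2e}")
```

The division implies each “ChatGPT-sized” product would field on the order of 80 billion queries a year, or a couple hundred million a day — which gives some texture to just how many such products 12,000 would be.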

Most notable, perhaps, is a research newsletter from Goldman Sachs, in which Head of Global Equity Research Jim Covello makes the case that the AI boom has a lot in common with the Dot-com bubble:

Over-building things the world doesn’t have use for, or is not ready for, typically ends badly. The NASDAQ declined around 70 percent between the highs of the dot-com boom and the founding of Uber. The bursting of today’s AI bubble may not prove as problematic as the bursting of the dot-com bubble simply because many companies spending money today are better capitalized than the companies spending money back then… The more time that passes without significant AI applications, the more challenging the AI story will become. And my guess is that if important use cases don’t start to become more apparent in the next 12-18 months, investor enthusiasm may begin to fade.

Covello is broadly dismissive of AI in terms that probably feel familiar — it’s “bullshit,” as Ed Zitron paraphrases — but his most important claims are probably his most restrained: that the high level of investment in AI is largely about FOMO within the tech industry, which has struggled to articulate with any specificity, or demonstrate in the form of products, the actual trillion-dollar opportunity of AI; and that investor pressure on companies outside of tech is driving companies with completely unclear uses for current AI technology to invest anyway, suggesting a rather classic investment bubble.

Arguments about AI have a tendency to slide into abstract, speculative territory, converting narrow questions about reducing errors in LLMs into online fights about the nature of intelligence and discussions about safe AI deployment into sci-fi writing prompts about whether imminent superintelligences will enslave, liberate, or simply exterminate all life on Earth. Allegedly more sober claims made by tech companies follow much the same pattern, diverting questions and criticism into fuzzy conversations about inevitable progress toward human-level machine intelligence and higher productivity, with occasional calculated performances of grave humility about the economic disruption such inevitabilities imply. This has been a useful rhetorical strategy for AI firms in general, as they raised early money, dealt with the press and critics, and had their first encounters with dazzled regulators. Most importantly, it helped produce the aforementioned FOMO.

If investor confidence falters, however, and if this really is the moment when VCs and major banks start to speak more cautiously about AI, then the tech industry’s speculative honeymoon could come to an end, and fast. Without a collective sense of momentum, the discourse around AI — whatever you make of its general potential — shrinks to the size of a balance sheet.