Never mind DeepSeek: here’s why the AI mania won’t last
DeepSeek Isn’t AI Speculators’ Biggest Worry
“DeepSeek has sparked a deep freakout,” reported The Wall Street Journal (“Nvidia Stock Sinks in AI Rout Sparked by China’s DeepSeek,” 27 January). “The Chinese artificial-intelligence upstart has trained high-performing artificial intelligence models cheaply – and without the most advanced gear provided by Nvidia and others.” DeepSeek claims that the cost of training its latest model was less than $6 million. According to Sam Altman, CEO of OpenAI, its GPT-4 model, which it launched in March 2023, cost more than $100 million to train; other AI CEOs, such as Dario Amodei of Anthropic, reckon that the cost of some models is now approaching $1 billion.
This enormous blowout has been a bonanza for companies such as Nvidia: their market caps have surged in response to exploding demand for ever more costly AI chips. AI’s high barriers to participation – as well as American restrictions on the sale of certain AI chips to China – have created competitive “moats” for American tech titans such as Amazon, Alphabet (Google), Meta Platforms (Facebook) and Microsoft. They’re among the few companies which can afford the scores of billions of dollars – or more – required to develop the AI capabilities that increasingly support their operations.
DeepSeek has suddenly “pulled the rug from under global companies riding the AI wave ... as investors question the outlook for AI spending.” If its claims are true (over the past week a growing number of knowledgeable people have doubted them), it could overturn prevailing assumptions about the computing power, storage and energy which AI requires. If so, then much of today’s AI capex, if it proceeds, might eventually generate losses.
In January, OpenAI, Oracle and SoftBank announced a JV (“Stargate”) which over the next few years intends to spend as much as $500 billion building AI-related infrastructure. This year, Microsoft and Meta plan $80 billion and $65 billion respectively of AI-related expenditure; indeed, Meta intends to build an AI data centre “so large that it would cover a significant part of Manhattan.” How much of this spending is now superfluous?
Will leading AI firms and their applications lose their “first-mover advantage”? Above all, does DeepSeek threaten the AI-powered bacchanalia that’s added ca. $15 trillion – almost as much as China’s current GDP – to the Nasdaq’s market cap since 2022?
There are also moral and national security issues. In a post on X (27 January), Marc Andreessen (co-author of Mosaic, one of the first widely used web browsers, and co-founder of Netscape and of venture capital firm Andreessen Horowitz) called DeepSeek’s AI model “one of the most amazing and impressive breakthroughs I’ve ever seen” and “a profound gift to the world.”
I’m not so sure: has he asked this “gift” about COVID-19’s origins? Human rights in Tibet? Tiananmen Square? The people of Taiwan’s right to self-determination? If so, what did it tell him?
And is this really, as Andreessen proclaimed, “AI’s Sputnik moment”? Analysts haven’t confirmed DeepSeek’s claims. But several cited by WSJ’s tech writers (“DeepSeek Won’t Sink U.S. AI Titans,” 27 January) doubt that it’s developed anything comparable to advanced U.S.-based AI models at such low cost.
Even if it has, DeepSeek could benefit rather than threaten Nvidia and other AI chipmakers (including Amazon, Google and Microsoft, who’ve been developing their own chips). Technological or other developments which slash the cost of a good or service can increase its quantity demanded. If AI at its pre-DeepSeek price wasn’t economical for some applications, and DeepSeek slashes its price, then it could convert latent into overt demand – and thereby increase the total demand for AI. Consumers and low-cost producers would benefit most; but others, perhaps including Nvidia, could also gain.
“Increased competition rarely reduces aggregate spending,” an analyst observed on 27 January. But it often alters its composition – and an industry’s leaders. As Marc Benioff, CEO of Salesforce, noted, the sudden rise of DeepSeek “is kind of classic in our industry. The pioneers are not the ones who end up being the victors.”
“As AI gets more efficient and accessible,” added Microsoft’s CEO, Satya Nadella, in a post on X (27 January), “we will see its use skyrocket, turning it into a commodity we just can’t get enough of.” “The AI spending war,” concluded WSJ, “might just be entering a new phase.”
What Should Most Concern Today’s Tech Speculators
If Nadella is correct and AI is becoming a commodity, then two consequences follow. The previous section implied the first: lowest-cost producers will thrive, and higher-cost ones will wither. This section introduces the second – more fundamental and still unnoticed – consequence: AI’s impact upon productivity and profitability won’t differ from the many “revolutionary” technologies which have preceded it.
“For two years,” emphasised The Wall Street Journal (“The Day DeepSeek Turned Tech and Wall Street Upside Down,” 27 January), “markets’ belief that the rise of artificial intelligence would usher in a new era of productivity growth has fuelled trillions of dollars in stock-market gains.” If this “new era” rests upon false foundations, what’ll happen to the gains?
James Mackintosh (“DeepSeek Undercuts Belief That Chip-Hungry U.S. Players Will Win AI Race,” The Wall Street Journal, 28 January) raises related crucial points: “more AI competition will make it hard for Big Tech to generate the oligopoly-like profit margins that investors hope for. If the companies can’t make fat profits, it will be even harder to justify their high valuations. These valuations, remember, rely on the assumption that AI tools will be both widely used and highly profitable, but even the experts have little explanation of how the business model will work. It will also be harder to explain why they are sinking so much money into AI data centers.”
For more than two centuries, cumulative improvements of technology have propelled countless advances in the efficiency of agriculture, mining and industry (including service industries) – and, therefore, enormous increases in living standards. By obsessing about the latest technology, downplaying or ignoring its antecedents and hyping its prospects, today’s AI bulls (and tech bulls generally) are ignoring this reality.
They’re once again – I assume unwittingly – parroting the mantra chanted by Alan Greenspan and other “Dot Com” zealots at the turn of the century:
- “Revolutionary technology” is allegedly generating or shortly will produce an enormous and permanent acceleration of productivity’s rate of increase. In the 1990s, such transformative technology included biotech and genomics, telecoms and above all the Internet and its applications; today, it includes crypto-currencies, EVs, solar and wind power and particularly AI.
- These large and lasting leaps of productivity, in turn, are supposedly producing or before long will generate the surges of profit which place tech stocks’ recent huge returns – and current valuations – on sound and sustainable bases.
This article debunks these two claims. In particular, I demonstrate that “tech revolutions” don’t accelerate the growth of productivity – at least, they never have since the Second World War. Because claim #1 is false, claim #2 can’t be true.
These results should disconcert bullish speculators, but they won’t surprise realistic investors. “You can see the computer age everywhere but in the productivity statistics,” Robert Solow famously quipped in 1987 (the year he won the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel, usually but erroneously called “the Nobel Prize in Economics”). In the 1990s, however, Alan Greenspan ignored Solow – and mistakenly inferred from the Internet’s rapid rise that a sharp acceleration of productivity, profits and returns was occurring. Today, Jerome Powell is much more sensible; specifically, he’s sceptical about AI’s ability to boost productivity.
Unfortunately, before 27 January few market participants were as cautious as Powell. Quite the contrary: they were mostly as credulous as Greenspan.
“The Maestro” Swallowed – and Spread – the Hype
In August 1995, at the meeting of the Federal Reserve’s Open Market Committee, its Chairman, Alan Greenspan, declared: “there is a major statistical problem.” In his view, accepted measurements of productivity such as output per hour worked didn’t reflect the strong acceleration of productivity that was obviously occurring. Why did he believe that its rate of growth was quickening? Why did he think this was evident?
Greenspan’s remarks in this and other FOMC meetings during the next several years contained three unspoken but nonetheless clear premises. Firstly, technological revolutions always beget rapid, economy-wide advances of productivity; secondly, the Internet was expanding rapidly; thirdly, it constituted a “once-in-a-century” leap of technology.
What subsequently became known as “hedonic adjustments,” which in Greenspan’s view equity markets were somehow taking into consideration, allegedly demonstrated that “there has indeed been an acceleration of productivity if one properly incorporates (as) output that which the (stock) market values as output ...”
In 1995, in short, Greenspan experienced an epiphany: the stock market was rising – and would continue to climb – because the growth of productivity was accelerating. Data couldn’t detect it, but “The Maestro” convinced himself that his superior insight, plus some statistical sleight of hand, could!
Furthermore, given this alleged boost to productivity, corporate profits were supposedly much higher (i.e., companies were expensing what they should have been capitalising) – and thus stocks were actually much cheaper – than was generally recognised.
Greenspan became one of the Dot Com bubble’s most ardent and influential apostles, and he used his public addresses as sermons to spread its gospel.
Perhaps most notably, on 13 January 2000 he delivered a speech entitled “Technology and the Economy.” Over the next three years he repeatedly delivered variants of this address. “When we look back at the 1990s from the perspective of 2010,” he reckoned in its first version, “the nature of the forces currently in train will have presumably become clearer. We may conceivably conclude from that vantage point that, at the turn of the millennium, the American economy was experiencing a once-in-a-century acceleration of (technological) innovation, which propelled forward productivity, output, corporate profits and stock prices at a pace not seen in generations, if ever.”
“Alternatively,” he acknowledged, “that 2010 retrospective might well conclude that a good deal of what we are currently experiencing was just one of the many euphoric speculative bubbles that have dotted human history. And, of course, we cannot rule out that we may look back and conclude that elements from both these scenarios have been in play in recent years.”
In this and later speeches, Greenspan invariably championed the first possibility: “... it is information technology that defines (the 1990s). The reason is that (IT) lies at the root of (this period’s) productivity and economic growth.”
Having clarified the present, at an address entitled “The Revolution in Information Technology” which he delivered at a “New Economy” conference on 6 March 2000, he also divined the future: “businesses continue to find a wide array of potential high-rate-of-return, productivity-enhancing investments. And I see nothing to suggest that these opportunities will peter out any time soon. Indeed, many argue that the pace of innovation will continue to quicken in the next few years, as companies exploit the largely untapped potential for e-commerce ...” On that basis, “so far as I can judge ..., it is not evident that we are seeing, as yet, a cresting in the growth of productivity.”
Four days later, on 10 March 2000, NASDAQ peaked at 5,048 – and during the next 30 months crashed nearly 80%. As its collapse quickened, in a speech entitled “Business Data Analysis” delivered on 13 June 2000, Greenspan subtly retreated: “that there has been some improvement in the growth of aggregate productivity is now generally conceded by all but the most sceptical.”
Wasn’t he aware that a couple of years earlier the Congressional Budget Office had commissioned an economist, Robert Gordon, to investigate this crucial issue? In “Has the ‘New Economy’ Rendered the Productivity Slowdown Obsolete?” (NBER revised version, 14 June 1999) Gordon delivered his verdict:
“There has been no productivity growth acceleration in the 99% of the economy located outside the sector which manufactures computer hardware ...”
“Indeed,” Gordon continued, “far from exhibiting a productivity acceleration, the productivity slowdown in manufacturing has gotten worse: when computers are stripped out of the durable manufacturing sector, there has been a further productivity slowdown in durable manufacturing in 1995-99 as compared to 1972-95, and no acceleration at all in non-durable manufacturing.”
In short, aided and abetted by Greenspan, in the late-1990s and early-2000s speculators assumed that the “Internet revolution” was boosting productivity’s growth – and that it would thereby support the skyrocketing prices of “tech” shares.
But as Gordon concluded (and I’ll show below), productivity wasn’t accelerating. Quite the contrary: it was sagging. When speculators finally came to their senses, “tech” stocks crashed.
Was the Internet Really so Important? How Significant Will AI and Crypto Be?
Recall Greenspan’s belief: the “Internet revolution” was a “once-in-a-century technological innovation.” Was that the characterisation of knowledgeable people at the time? “Poised as we are between the twentieth and twenty-first centuries,” the U.S. National Academy of Engineering (NAE) observed at the height of the Dot Com bubble in 2000, “it is the perfect moment to reflect on the accomplishments of engineers in the last century and ponder the challenges facing them in the next.”
Working with the professional societies of various branches of engineering, medicine and science, during 2000 NAE’s contributors selected and ranked the 20 greatest achievements of the twentieth century. “The main criterion for selection was not technical ‘gee whiz,’ but how much an achievement improved people’s quality of life. The result (which it released on 1 September 2000 as an article entitled “Great Achievements and Grand Challenges”) is a testament to the power and promise of engineering.”
“Reviewing the list,” NAE noted, “it’s clear that if any of its elements were removed our world would be a very different place – and a much less hospitable one. The list covers a broad spectrum of human endeavour, from ... advancements that have revolutionised virtually every aspect of the way people (live, work, play and travel).”
Here’s the NAE’s list, ranked in order of importance:
- “Electrification – vast networks of electricity provide power for the developed world;
- Automobiles – revolutionary manufacturing practices made cars more reliable and affordable, and the automobile became the world’s major mode of transportation;
- Aeroplanes – made the world accessible, spurring globalisation on a grand scale;
- Water Supply and Distribution – prevent the spread of disease, increasing life expectancy;
- Electronics – first with vacuum tubes and later with transistors, electronic circuits underlie nearly all modern technologies;
- Radio and Television – dramatically changed the way the world receives information and entertainment;
- Agricultural Mechanisation – numerous innovations led to a vastly larger, safer, and less costly food supply;
- Computers – are now at the heart of countless operations and systems that impact our lives;
- Telephones – changed the way the world communicates personally and in business;
- Air Conditioning and Refrigeration – beyond convenience, these innovations extend the shelf-life of food and medicines, protect electronics and play an important role in health care delivery;
- Highways – 44,000 miles of U.S. highways enable personal travel and the wide distribution of goods;
- Spacecraft – going to outer space vastly expanded humanity’s horizons and resulted in the development of more than 60,000 new products on Earth;
- Internet – provides a global information and communications system of unparalleled access;
- Imaging – numerous imaging tools and technologies have revolutionized medical diagnostics;
- Household Appliances – have eliminated many strenuous and tedious tasks;
- Health Technologies - from artificial implants to the mass production of antibiotics, these technologies have led to vast health improvements;
- Petroleum and Petrochemical Technologies – provided the fuel that energized the twentieth century;
- Laser and Fibre Optics – applications are wide and varied, including almost simultaneous worldwide communications, non-invasive surgery, and point-of-sale scanners;
- Nuclear Technologies – a new source of electric power;
- High-performance Materials – are lighter, stronger, and more adaptable than ever before.”
It’s easy to understand why electrification tops the list: most of the other items presuppose it. On that basis, however, #4 and #7 deserve higher rankings (quality of life presupposes drinkable water and plentiful food), and #2 and #11 merit much lower ones (rail transport, an innovation of the nineteenth century, can do much of what road transport does). I’d also have thought that #13 presupposes #18, and that #3 presupposes #20 – and therefore that the latter should receive higher rankings than the former.
But these are mere cavils: the list’s crucial point, for my purposes, is its assessment of the Internet. Was it a “once-in-a-century technological innovation”? According to NAE’s assessment at the height of the Dot Com bubble, it didn’t even rank in the top half of the 20th century’s innovations! The Internet was significant; but compared to the century’s other technological innovations, it wasn’t THAT important. Dot Com zealots of the late-1990s and early-2000s utterly rejected this assessment. They were adamant: it “was changing everything.”
Ultimately, their rigid – I’m tempted to say “delusional” – views cost them dearly. Will today’s enthusiasts of AI and crypto-currencies eventually suffer the same fate?
The “Fourth Industrial Revolution”?
Today’s tech bulls are adamant: AI is and will be transformative. Over the past couple of years, some have used the phrase “fourth industrial revolution” to describe its future impact. The first three industrial revolutions occurred over extended intervals during which disruptive technologies were introduced and ever more swiftly adopted. The first was the mechanisation of industry and agriculture in the 18th and 19th centuries respectively; the second was electrification during the 19th and 20th centuries; and the third has been digitisation since the late 20th century.
According to Forbes (“AI: Overhyped Fantasy or Truly the Next Industrial Revolution?” 15 August 2024), “the advent of AI … has kickstarted the next industrial revolution. It promises the most dramatic changes yet as electronic brains ... supercharge our productivity, creativity and capability across every field of human endeavour.”
Ed Yardeni, the President of Yardeni Research, Inc., told Fox Business on 3 January: “the Roaring 2020s scenario predicts scalable Artificial Intelligence and chronic labour shortages … In this scenario, productivity growth continues to improve …”

“The U.S. could be on the cusp of a productivity boom similar to the one triggered by internet technology in the 1990s,” added The Wall Street Journal (“The U.S. Needs a Productivity Miracle. It Might Just Get One,” 26 December 2024).
Let’s express this as clearly as possible, and draw a vital distinction. Do advances of technology cause productivity (output per hour) to rise? Unquestionably: in mathematical terms, that’s a first derivative – and examples since the 18th century are countless.
But that’s not the bulls’ contention. They’re asserting that “revolutionary” technology causes productivity’s rate of growth greatly to accelerate. That’s a big increase of a second derivative. They’re also saying that transformative tech’s boost of productivity will be long-lasting.
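The first-versus-second-derivative distinction can be made concrete with a toy calculation (synthetic numbers, purely illustrative – not actual productivity data): a one-off technological improvement permanently lifts productivity's level, yet after a single transition-year spike its growth rate reverts to trend.

```python
# Illustrative sketch (synthetic numbers, not actual BLS data): a one-off
# technology boost raises productivity's LEVEL, but unless gains compound
# faster every year, the long-run GROWTH RATE is unchanged.

def growth_rates(series):
    """Year-over-year percentage growth of a series."""
    return [(b / a - 1) * 100 for a, b in zip(series, series[1:])]

# Baseline: output per hour grows a steady 2% per year.
baseline = [100 * 1.02 ** t for t in range(10)]

# "Revolution": a one-time 10% level jump in year 5, then the same 2% trend.
boosted = [x * (1.10 if t >= 5 else 1.0) for t, x in enumerate(baseline)]

g_base = growth_rates(baseline)
g_boost = growth_rates(boosted)

# The level ends permanently higher ...
print(f"final level, baseline: {baseline[-1]:.1f}, boosted: {boosted[-1]:.1f}")
# ... but growth spikes only in the transition year, then reverts to 2%.
print(f"transition-year growth: {g_boost[4]:.2f}%")
print(f"growth thereafter: {g_boost[-1]:.2f}% (baseline: {g_base[-1]:.2f}%)")
```

The bulls' claim requires the last line to keep rising year after year, not merely the level shift the sketch produces.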
Powell Is Much Less Irrationally Exuberant than Greenspan Was
“America appears to be in the midst of a productivity boom the likes of which hasn't been seen in years,” Axios reported on 1 February 2024. In the fourth quarter of 2023, productivity rose at an annualised rate of 3.2% – the third consecutive quarter whose growth exceeded 3%. “But (Fed Chairman Jerome) Powell appears sceptical that those productivity gains will be sustained ...”
At a press conference on 31 January 2024, he “guessed” that productivity’s rate of growth during recent quarters would subsequently recede towards its long-term average. He also declined to view AI as a panacea: “will it be the case that we come out of this (allegedly post-inflationary) period more productive … on a sustained basis? I don’t know,” he mused. AI may lift productivity’s rate of growth, “but probably not in the short run. Probably, maybe in the longer run.”
Lisa Cook, a member of the Fed’s Board of Governors, agrees that AI “has yet to show signs of lifting productivity.” In a speech at the Fed’s Atlanta branch on 30 September 2024, she told reporters: “although I share the view that AI could lift productivity out of this period of low growth, it bears emphasis that recent productivity gains have been modest despite rather impressive changes in information technology.”
Cook concluded: “the modest productivity growth seen of late already incorporates gains from some types of AI. Whether generative AI delivers a similar, incremental contribution to productivity growth or something larger remains to be seen.”
What about Crypto?
In an address on 11 June 2024, Andrew Charlton, formerly a Rhodes Scholar and Accenture’s “Sustainability Services Lead for Growth Markets” and presently a federal ALP MP whom The Australian Financial Review in 2022 described as a “centrist, evidence-based, data-driven economist with entrepreneurial flair,” alleged that blockchain technology can reverse decades of sluggish productivity growth in Australia.
A blockchain is a digital ledger of all transactions across a network. Blockchains underpin crypto-currencies. A crypto-currency is (or is alleged to be) a medium of exchange, created and stored on a blockchain, which uses (1) an algorithm to control the creation of monetary units and (2) cryptographic techniques to verify the transfer of funds.
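The ledger structure just described can be sketched in a few lines (an illustrative toy only, omitting the consensus, mining and signature machinery of a real crypto-currency): each block commits, via a cryptographic hash, to its own transactions and to its predecessor, so rewriting any past entry breaks the chain.

```python
# Toy hash-linked ledger: each block's hash covers its transactions and the
# previous block's hash, so any tampering with history is detectable.
import hashlib
import json

def make_block(transactions, prev_hash):
    """Create a block whose hash commits to its contents and predecessor."""
    body = json.dumps({"tx": transactions, "prev": prev_hash}, sort_keys=True)
    return {"tx": transactions, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Recompute every hash and link; any edit to history fails the check."""
    for i, block in enumerate(chain):
        body = json.dumps({"tx": block["tx"], "prev": block["prev"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block(["coinbase: 50 to A"], prev_hash="0" * 64)
chain = [genesis, make_block(["A pays B 10"], genesis["hash"])]
assert verify(chain)          # the intact ledger passes

chain[0]["tx"] = ["coinbase: 50 to Mallory"]
assert not verify(chain)      # rewriting history is immediately detected
```

This tamper-evidence is the "digital ledger" property; whether it delivers economy-wide productivity gains, as Charlton alleges, is a separate question.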
By improving the efficiency of “a range of industries including healthcare, tax collection and real estate,” reckoned Charlton, blockchain has the “rare ability to help drive the growth of productivity across the economy.” Its impact, he asserted, could therefore be similar to other “fundamental technologies like air travel and the internet.”
What Say Actual Data?
Using data compiled by the Federal Reserve Bank of St Louis, Figure 1 plots the most valid and reliable measure of productivity: output per hour worked. It plots this measure as a 12-month percentage change and as a ten-year compound annual growth rate (CAGR). Since 1948, both measures have risen at an average rate of ca. 2% per year.
The short-term measure has fluctuated greatly (standard deviation of 1.8%) and without trend. The long-term one, in contrast, has varied much less (standard deviation of 0.6%) and has clearly been cyclical: sometimes it accelerates and other times it decelerates. This clearly disconfirms the conventional wisdom:
- From 1978 to 1999 – that is, the era when the personal computer and Internet were developed and rose to ubiquity – productivity’s long-term rate of growth slowed below its long-term average;
- its CAGR quickened above the long-term average in 2002-2011 – to a rate that was no higher than the one that prevailed in the 1960s and 1970s;
- since 2012, crypto-currencies and AI have risen to prominence and tech stocks’ returns have zoomed – yet productivity’s ten-year CAGR has sunk below its average since 1948.
Figure 1: Growth of Productivity (Output per Hour Worked), U.S. Non-Farm Workers, Quarterly, January 1948-July 2024
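The two measures plotted in Figure 1 can be reproduced from any quarterly output-per-hour index. Here's a sketch applied to synthetic data (the article itself uses the St Louis Fed's actual series); it also shows why the short-term measure is so much noisier than the ten-year CAGR.

```python
# Sketch of the two measures in Figure 1, applied to a synthetic quarterly
# output-per-hour index (the article uses actual St Louis Fed data).

def yoy_change(index, q):
    """12-month (4-quarter) percentage change ending at quarter q."""
    return (index[q] / index[q - 4] - 1) * 100

def ten_year_cagr(index, q):
    """Ten-year (40-quarter) compound annual growth rate ending at quarter q."""
    return ((index[q] / index[q - 40]) ** (1 / 10) - 1) * 100

# Synthetic index: a steady 2% annual trend, with one noisy quarter.
index = [100 * 1.02 ** (t / 4) for t in range(60)]
index[50] *= 1.03  # a single 3% blip

q = 54
# The blip whipsaws the 12-month measure but barely dents the 10-year CAGR.
print(f"12-month change: {yoy_change(index, q):.2f}%")
print(f"10-year CAGR:    {ten_year_cagr(index, q):.2f}%")
```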
“Due to the volatility of productivity data,” reported WSJ on 26 December, Yardeni “prefers to look at a rolling five-year average of productivity growth (that’s an arithmetic mean, not the ten-year geometric mean plotted in Figure 1), which hit an annualised pace of 1.9% in the third quarter of 2024, from a low of just 0.6% in the fourth quarter of 2015. Yardeni believes this could reach 3.5% in the second half of this decade.”
As Figure 1 makes plain, he’s super-bullish: bearing in mind that arithmetic means always equal or exceed their geometric counterparts – and exceed them whenever the underlying rates vary (see How you – and managed funds – overstate your returns, 17 October 2024) – he’s predicting an acceleration of productivity’s growth beyond anything that’s ever been observed. For that very reason, I doubt it.
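The arithmetic-versus-geometric gap is easy to verify with a few illustrative growth rates (not actual productivity data):

```python
# For volatile growth rates, the arithmetic mean equals or exceeds the
# geometric (compound) mean, so a rolling arithmetic average overstates
# the compounded rate. Numbers below are purely illustrative.
from math import prod

growth = [0.05, -0.03, 0.04, 0.00, 0.04]  # five annual growth rates

arithmetic = sum(growth) / len(growth)
geometric = prod(1 + g for g in growth) ** (1 / len(growth)) - 1

print(f"arithmetic mean:  {arithmetic:.4%}")
print(f"geometric (CAGR): {geometric:.4%}")
assert arithmetic >= geometric  # AM-GM: equal only if every rate is identical
```

The more volatile the series, the wider the gap – one reason to prefer the geometric measure in Figure 1.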
The conventional wisdom about revolutionary technology’s huge and permanent impact upon the economy’s overall productivity was wrong a quarter-century ago, and it’s mistaken today. When speculators finally relearn this vital lesson, will the AI boom – including DeepSeek – implode?
An Aside Lest I Be Misunderstood ...
It’s possible that you’re shaking your head and asking: “what do you mean, ‘tech revolutions don’t permanently boost productivity’s growth; nor do they generate lasting profits and investor returns’? Haven’t you heard of Apple, Microsoft, Google, etc.?”
I’m saying that these behemoths don’t owe their large profits and stellar long-term returns primarily to their technology; moreover, I’m suggesting (recall this article’s second paragraph) that their success stems principally from their market positions (“moats”).
The phrase “economic moat,” which Warren Buffett has popularised, refers to the existence and durability of a business’s ability to protect its profitability. Like a medieval castle, the moat (durable competitive advantage) defends those inside the fortress (profits) from outsiders (competitors). For full details, see Heather Brilliant, et al., Why Moats Matter: The Morningstar Approach to Stock Picking (John Wiley & Sons, 2014) and Pat Dorsey, The Five Rules for Successful Stock Investing: Morningstar’s Guide to Building Wealth and Winning in the Market (John Wiley & Sons, 2004).
Implications
“There’s usually a grain of truth that underlies every mania ... It just gets taken too far,” Howard Marks, co-chairman of Oaktree Capital Management, wrote in a note to investors on 2 January. “A lot of what’s been going on,” noted James Mackintosh, “is similar to when investors discovered the internet. They’ve grasped that AI is A Big Deal, but can’t yet see exactly how or when it will make money.”
“It’s clear,” added Marks, “that the internet absolutely did change the world. In fact, we can’t imagine a world without it. But the vast majority of internet and e-commerce companies that soared in the late ’90s bubble ended up worthless.” Much the same, I suspect, will be true of AI 25 years hence.
The internet, in other words, was a single instance of a well-known pattern to which AI will likely conform. “It has long been the prevalent view,” wrote Benjamin Graham in The Intelligent Investor, that “successful investment lies first in the choice of those industries that are most likely to grow in the future and then in identifying the most promising companies in these industries. For example, smart investors ... would long ago have recognised the great growth possibilities of the computer industry as a whole and of International Business Machines in particular. And similarly for a number of other growth industries and growth companies.”
Today’s advocates of “growth stocks” would certainly agree. They should (re)read Graham: this “prevalent view” is never as easy in prospect “as it always looks in retrospect.” People in general and speculators in particular, in Jason Zweig’s phrase, possess “a remarkable ability to make rear-view mirrors out of rose-coloured glass.”
To emphasise this point, Graham reinserted into the book’s 1973 edition a paragraph from its first (1949) edition: “Such an investor may for example be a buyer of air-transport stocks because he believes their future is even more brilliant than the trend the market already reflects. For this class of investor the value of (The Intelligent Investor) will lie more in its warnings against the pitfalls lurking in this favourite investment approach than in any positive technique that will help him along his path.”
In his Comments in the book’s 2008 edition, Zweig elaborated: “air-transport stocks … generated as much excitement in the late 1940s and early 1950s as Internet stocks did a half century later ... They ... turned out to be an investing disaster … (Graham’s lesson) is ... that you should never succumb to the ‘certainty’ that any industry will outperform all others in the future.”
The insuperable problem with “growth” – AI is but the latest example – is that boosters exaggerate its prospects and speculators push the prices of its securities to excessive heights. When realism finally prevails, speculators receive mediocre (if they’re lucky) or disastrous (if they’re not) returns.
Even its most fervent boosters agree: AI is embryonic, or at most in its infancy. That’s why it’s growing so rapidly. But it’s impossible to know how it’ll develop, which companies it’ll reward and punish, etc. Does Nvidia’s dominance (presently in the range of 70-95% of the AI chip market), together with the sudden rise of DeepSeek and its possible impact upon Nvidia and others, reprise the rise, collapse and bankruptcy of internet darlings like Global Crossing? It once controlled most of America’s internet traffic and was thus one of the most lauded – and largest (by market cap) – companies in the world.
It takes much less intestinal fortitude to ask: if NAE assessed the 20 greatest technological developments over the century to 2025, where would AI and crypto – the epicentres of the latest bout of euphoria – rank?
It’s reasonable to suppose that they’ll eventually replicate the fates of the other manias generated by the innovations on the NAE’s list. AI, in other words, surely won’t fade. Quite the contrary: it’ll probably grow strongly. But AI mania – and the confidence that AI stocks’ valuations will continue to rise – will surely deflate.
Conclusion
Enthusiasts and alleged “experts” are again proclaiming that a tech revolution is underway. As Alan Greenspan and others ardently believed a quarter-century ago, today’s AI and crypto boosters assert (or strongly imply) that this revolution is boosting – or before long will boost – productivity’s rate of growth. Like Greenspan, they also suppose that more rapid advances of productivity beget higher profits, and that such profits will support high valuations; at least until 27 January, they therefore concluded that AI and tech shares more generally could continue to skyrocket.
Tech enthusiasts always believe fervently but seldom investigate dispassionately (why bother when the latest craze is a “sure thing”?). In contrast, I’m sceptical precisely because I’ve enquired and analysed. I conclude that tech speculators’ most fundamental beliefs are false: I’ve found no evidence since the Second World War that “tech revolutions” permanently increase productivity’s rate of growth. Nor have I uncovered any grounds to believe that a long-term acceleration of its growth in the U.S. is underway; I therefore doubt that one is imminent.
In conclusion, AI and crypto are – like most “technology” at most junctures over the past century – grossly overhyped. As a result, and hardly for the first time, today’s speculators are woefully overconfident. “Pride goeth before destruction,” cautions the Book of Proverbs (KJV, 16:18), “and an haughty spirit before a fall.”