AI Unplugged: Is the AI bubble bursting?


Is AI really giving us value, or is it due a reckoning?

A few years after AI tools like ChatGPT reared their uncanny heads, the sheen seems to be wearing off.

The same tech that promised to transform everything is showing the classic signs of a bubble: the Bank of England has warned that market concentration in AI stock valuations hasn’t been this extreme in half a century, while veteran analysts posit that the current frenzy mirrors the dot-com era almost perfectly.

It seems like the returns aren’t keeping pace with the rhetoric, and it’s starting to feel less like a revolution and more like recursion.

And yet the digital marketing industry has gone all in on it.

Every agency is “AI-powered”, every platform “fuelled by machine learning”, and every tool seems to have added an AI feature no one asked for. Speed has become the new god, and originality the sacrifice.

What happens when the miracle stops performing?

If the AI bubble bursts (and if you read on, you’ll see it’s starting to creak), an industry that has built its identity on automation will have to reckon with an awkward truth: the machine was never that clever to begin with.

As a content writer at Bamboo Nine, I’ve ridden the AI wave from the moment it crashed into our industry. Like many on our team, I cut my teeth on copywriting before the prompts and plugins. I’ve watched the promises, the panic, and the pivoting first-hand, and what I’ve seen has left me sceptical.

How it started.

Let’s rewind to what, in digital timelines, count as the sepia-tinted days of yore: 2022.

For many in the industry, it was in the closing months of that year that AI slunk onto the scene.

Some called it as world-altering as the Cambrian explosion. Some.

By 2023, ChatGPT was genuinely changing our workflows. We were told that tools like it could easily do our jobs. For us in the Content Team, that meant that by feeding ChatGPT a few keywords and a short brief, it could supposedly produce 700 words of SEO-optimised content in seconds.

Shit.

But the reality was far less dazzling.

All pattern, no pulse.

The dust has settled somewhat from the initial hype, and it’s fair to say AI has not turned out to be as seismic a creative shift as portended.

If you’ve used them as much as we have, you’ll know that LLMs are fine. But they’re certainly not revolutionary.

The more we used it back then, the more we realised that everything it wrote sounded eerily similar, and that’s still the case today. Early versions were worse still: you had to re-feed context every few prompts, and after hours of re-editing to make it sound remotely human, the time saved was negligible and you could actively feel your brain cells dying.

But even as we saw its limits, the hype kept inflating. Clients began to ask the very understandable question: if AI can write faster and cheaper, why pay humans at all?

This is the same illusion fuelling the AI investment boom, explored below: speed mistaken for value, output mistaken for originality. ChatGPT is an imitator, not a creator.

And that’s the crux of the AI bubble: AI’s perceived worth keeps climbing, even as the real creative and economic returns flatten. It’s not a revolution, it’s replication, and replication doesn’t hold its value for long.

So what’s the AI bubble and what does it mean?

A bubble, in economic terms, is what happens when speculation inflates faster than value, and sooner or later, reality catches up.

The numbers don’t add up.

OpenAI and Anthropic are growing, yes, but they’re nowhere near profitable. Their business model hinges on continued faith — and funding — in a technology that’s improving more slowly with each generation. As The Atlantic notes, “newer models have been marred by delays and cancellations, and those released this year have generally shown fewer big improvements despite being far more expensive to develop.”

Then there’s the money. It’s been reported that Oracle recently inked a colossal $300 billion cloud infrastructure deal with OpenAI. But OpenAI doesn’t have that capital, which means it’ll need even more colossal funding rounds. So, if OpenAI isn’t as potently profitable as it’s predicted to be, the financial fallout of this could be disastrous.

And the returns so far aren’t inspiring confidence.

A recent Forbes summary of MIT research found that 95% of enterprise AI pilots produce zero return on investment. And, according to Rogé Karma at The Atlantic, “the entire U.S. economy is being propped up by the promise of productivity gains that seem very far from materializing”. Citing data from The Wall Street Journal, Karma notes that Alphabet, Amazon, Meta, and Microsoft, the tech titans supposedly riding the AI wave, have seen free cash flow decline by 30% in two years.

Collectively, the big five (those four plus Tesla) have spent around $560 billion on AI-related projects since 2024, yet generated only $35 billion in AI-related revenue.

Oh dear.

The pressure is building.

This is why we’re getting warnings of “a growing risk that the AI bubble could burst,” because Big Tech’s trillion-dollar valuations are built largely on expectation, not evidence.

If the AI bubble hasn’t burst yet, it’s only because we’re still inflating it.

The gap between expectation and outcome is widening. And as investors, businesses, and marketers all look for the “transformative impact” they were promised, it increasingly feels as though AI isn’t the dawn of a new era, but just another expensive dawn simulation.

How the AI bubble affects digital marketing.

The industrial complex of AI thrives on festering FOMO.

It convinces you that if you’re not prompting, you’re falling behind. That if you’re not using AI, you’re in the Stone Age.

That illusion of inevitability, that AI is the only future worth having, is what keeps the hype alive. It’s not just investors pouring billions into models that can’t yet turn a profit; it’s businesses and marketers racing to prove they’re “AI-powered” before they’ve asked what problem they’re actually solving.

I think that the result is an echo chamber of efficiency-speak and automation evangelism, where using AI is treated as a strategy in itself rather than a means to one.

AI limits our thinking.

Add to this that AI isn’t merely failing to deliver the productivity gains that justify its price tag; it may also be dulling the very skills it was supposed to enhance. Emerging research is showing the surprising cognitive cost of an overreliance on AI tools.

In the MIT Media Lab’s Your Brain on ChatGPT study, researchers used EEG (brainwave monitoring) across groups tasked with writing essays either with or without AI help. They found that participants relying heavily on language models showed weaker neural connectivity, diminished memory recall, and declining engagement in creative tasks — what the authors framed as an accumulation of “cognitive debt.”

Another study, published in Societies by Michael Gerlich, links frequent AI use with weaker critical-thinking performance, attributing the effect to “cognitive offloading”, or letting the machine do the mental lifting.

The pattern is clear: the more we outsource our words, the less we understand what we’re actually saying. 

If we let AI think for us, we stop thinking like marketers, the very people meant to understand emotion, timing, and persuasion. AI can mimic empathy, but it can’t feel it. It can reproduce the shape of persuasion, but not its substance.

Which brings us back to the bubble. Just as investors are overvaluing the economic promise of AI, we’re overestimating its cognitive one. The more faith we place in its output, the less capable we become of producing true value ourselves.

The AI content trough.

So if you’re using AI to mass-produce thoughtless content without an expert understanding of copywriting, of SEO, of strategy, of marketing, you’re doomed to fail. Good messaging needs a brain that already understands marketing, not one that needs a chatbot to explain it.

What good is speed and scale if what you’re scaling is slop?

And pure AI-driven content is slop of the highest order: pre-packaged, semi-coherent paste, shunted down a chute into the already-swirling trough of online content. Your feed.

Do you want to eat slop? No!

Neither does your audience.

Rage against the machines.

It’s no wonder people are starting to push back. The more synthetic our content becomes, the more audiences crave something real.

Last year’s viral fiascos said it all: Google’s “AI Overviews” recommending people eat rocks and run with scissors; the tongue-in-cheek slur “clanker” becoming shorthand for online frustration with AI’s growing ubiquity.

The backlash is becoming measurable, too: according to NIM research, labelling content as AI-generated tends to increase scepticism and reduce engagement. People are more cautious when they know a piece was machine-assisted.

Supporting this, a report from accounting firm KPMG found that only 8.5% of 48,000 people surveyed “always” trusted AI search results, while Gartner’s report on AI search summaries found that over 50% of consumers don’t trust AI searches at all, most citing “significant mistakes.”

If audiences don’t trust the machine, why would they trust a brand that sounds like one? In marketing terms, that’s not just a reputational risk, it’s a value collapse. The supposed efficiency of AI content is already undermining the very trust that gives brands their worth.

What’s the future?

The irony is that in chasing the future, much of the industry has started to regress: faster production, flatter ideas, and an overreliance on tools that promise transformation but deliver repetition.

If you’re hitching your future to a tool that’s burning through capital, water, and electricity faster than it creates value, is that really future-proofing? Or is it just inflating a bubble that’s one prompt away from popping?

But despite what personal misgivings you may have about it, AI isn’t going anywhere fast in digital marketing.

There may come a reckoning for AI’s environmental cost in the future (Google’s emissions have increased by a horrifying 50% since 2020, largely due to the enormous amount of energy that AI demands, while Sam Altman recently announced OpenAI’s next wave of AI data centres will apparently require “as much power as New York City and San Diego combined”). But the tools aren’t going away.

So, the challenge is to use them sustainably, to build smarter, not faster.

The Bamboo Nine way.

Our goal is simple: use the tools without becoming them. To create work that lasts because it’s driven by sense, strategy, and a very human kind of intelligence.

At Bamboo Nine, we use AI where it makes sense: to save time and make processes smoother so that we can deliver more bang for our clients’ buck.

But we don’t mistake it for strategy, creativity, or craft. AI can give you words, but not meaning. The best work still comes from people, from those who interpret, empathise, and push beyond what’s predictable.

And, really, that’s what audiences crave now more than ever: something real, made by someone who understands what it means to be human.

Find out more about how we set the right foundations through our Discovery Consultation service.
