with Stephen Johnson • Thu 9 October, 2025
Hey Big Thinkers,
In October 2021, I wrote an article for Big Think about AI that ended with a summary of AI experts’ predictions about when humanity would create artificial general intelligence (AGI) — systems capable of understanding, learning, and performing almost any intellectual task that humans can. The surveys varied, but one put the average prediction around 2100.
That article is, to put it mildly, outdated. In the wake of rapid AI advancement, most experts now believe AGI will emerge within the next 15 years, with some claiming it’ll come this decade. “If you say you don’t think AGI is going to arrive until 2040, you are seen as like a hyper conservative, basically Luddite, in Silicon Valley,” as Andy Mills and Matt Boll put it on their new podcast, The Last Invention.
Now, some cold water: A recent report found that 95% of organizations that have integrated generative AI into their operations have seen “zero return.” This obviously doesn’t mean today’s models are useless, or that there aren’t worthwhile applications for AI outside of business. Still, it’s not easy for me to square this finding with grand predictions about AGI — a technology that would ostensibly emerge from scaling up the types of systems we have today.
Are businesses lagging behind? Will it be “agentic AI” that tips the scales? Are these reports failing to capture all the ways that businesses and individual employees are using these tools?
The answer may very well be “yes” to all three questions, as Ross Pomeroy explores this week.
Read on,
Stephen
THE BIG REVEAL
AI adoption rates look weak — but the data hides a bigger story
The biggest question in the financial world today is arguably, “Is AI a bubble that’s about to burst?” I won’t pretend to know. But one metric that might lend a tiny bit of insight into the question is AI adoption rates among U.S. companies. In June, data from the Census Bureau showed that American companies had begun reducing their use of AI. That same month, McKinsey published a report stating that nearly “eight in ten companies report using gen AI—yet just as many report no significant bottom-line impact.” Is this just a bump in the road toward widespread adoption of AI and, theoretically, AGI in the workplace?
To find out, Ross Pomeroy spoke with one of the authors of the McKinsey report. Among other topics, they discuss the differences between today’s chatbots and “agentic AI,” which is in its early days, but could be what unlocks real transformation for businesses.
Fast Stats
95% — The percentage of AI rollouts that fail.
4% — The share of companies that reach $1 million in revenue.
120 — The age people would rather die than live to be.
9 — The minutes it takes Jesse Eisenberg to teach you how to turn your anxiety into motivation.
THE BIG LEAP
The China factor in the great progression of the next 25 years
In just 35 years, China has gone from steam engines and rice paddies to the world’s largest high-speed rail network and clean-energy manufacturing base. In this op-ed, Peter Leyden warns that this “great progression” comes with a darker side: an entrenched surveillance state now poised to steer the future of AI. The U.S., he argues, should cooperate with China on climate tech while competing fiercely to ensure that the AI revolution unfolds within open, democratic systems.
MINI PHILOSOPHY
The sci-fi hypothesis that explains why you click with certain people
By Jonny Thomson
The Sun is setting. We’ve had four beers and haven’t moved for two hours. It’s the first time I’ve ever spoken with Tony, and we get on like long-lost siblings.
“Tony,” I say, “let me stop you right there to compliment you on the size of your hippocampus.”
Tony laughs, just a touch awkwardly.
“Err, what?” he says.
“Your amygdala? Perfect. Great distribution, great connections, lovely blood flow.”
Another laugh — less mirth, more awkward.
“In fact, there’s little I can fault about either lobe. But then, I would say that… because I’ve got the same ones!”
Tony doesn’t laugh, but he does stand up, mumbling some excuse.
Tony and I might have got on, but it was a bad idea to point out why.
Read this week’s article to find out what on Earth I’m on about.
Subscribe to Mini Philosophy on Substack for even more from Jonny Thomson.
Popular Columns
Starts With A Bang: Macroscopic quantum tunneling wins 2025’s Nobel Prize in physics
Books: AI vs. AI: The upcoming arms race against disinformation online
Freethink: How a dog’s life could extend yours
THE BIG CONSIDERATION
The search for alien life must heed this lesson from Stephen King
In Stephen King’s The Stand, a plague wipes out nearly every human on Earth, sparing less than 1% of the global population. Imagine you survived — alone on a remote farm, wondering whether anyone’s left. You can’t know for sure. But here’s one reasonable assumption: If you find just one other survivor, there’s a very good chance there are many more out there. In this article, astrophysicist Ethan Siegel explains how this line of thinking applies to our search for life in the cosmos.
Stephen Johnson is the managing editor at Big Think.
Get more from Big Think:
Mini Philosophy | Starts With A Bang | Big Think Books | Big Think Business