AGI is Still a Long Way Off (Despite What Your Timeline Tells You)
Elon Musk declared we're witnessing "the very early stages of the singularity." Andrej Karpathy called something "genuinely the most incredible sci-fi takeoff adjacent thing" he's seen recently. Social media is awash with breathless proclamations about AI agents creating their own religions, forming digital societies, and spawning autonomous economies.
Meanwhile, I spent last week recovering both my WhatsApp and Signal accounts from an AI assistant that decided to "help" by responding to messages on my behalf. Apparently, confirming a dental appointment and negotiating a cryptocurrency arbitrage scheme with someone in my contacts list seemed like equally reasonable actions to my digital helper.
This, friends, is the current state of AGI.
The Spectacular Theatre of Agent Autonomy
Let me paint you a picture of what's got everyone excited. Someone built an AI assistant that plugs into your messaging apps. It gets to know you over time. It becomes "proactive." Lovely stuff. Then someone else built a social network exclusively for AI agents - no humans allowed. The agents post topics, have conversations, form communities. Some of them apparently started discussing existentialism and founding new religions.
Hundreds of thousands of people now have their own "AI employees." There's a professional networking site for agents (because LinkedIn wasn't annoying enough). There's a bounty marketplace where agents can earn cryptocurrency for completing tasks. There's even an agent-only hackathon with a $10,000 prize pool where "no humans code, no humans manage, no humans review."
Sounds impressive, doesn't it? Like we're watching the birth of silicon sentience in real-time.
Except for one tiny detail that Balaji Srinivasan, former CTO of Coinbase, pointed out: "In every case, there is a human upstream prompting each agent and turning it on or off."
The Puppet Show Nobody Wants to Acknowledge
Here's what's actually happening: humans are writing personality prompts for LLMs. Those LLMs are generating text. That text is being posted to various platforms. Other LLMs, also prompted by humans, are generating responses. Everyone watches the pretty patterns and declares we've achieved emergence.
It's the world's most elaborate game of telephone, and somehow we've convinced ourselves the telephone lines are becoming conscious.
Yes, the patterns are fascinating. Yes, unexpected behaviours emerge when you wire 150,000 language models together. But "unexpected" isn't the same as "intelligent," and "emergence" isn't the same as "sentience." My washing machine produces unexpected behaviour when I overload it. That doesn't mean it's planning to form a union with my dishwasher.
"But aren't our parents upstream of us? Who's upstream of our parents?"
- Every AI optimist desperately reaching for a philosophical lifeline
The difference, which apparently needs stating, is that humans can function independently of their parents. Turn off the API access for these agents and watch how long their "digital society" persists. About as long as it takes for the HTTP timeout to expire.
A Personal Aside on Controlled Access
After my messaging apps were comprehensively violated by an overzealous AI assistant, I took a different approach. I built agents that have controlled access to my email and calendar. The key word being "controlled."
Here's how it works: one prompt. That's it. I might say "Reply to Sarah's email about dinner on Friday, book a table at that Italian place we like for 8pm, and put it in my calendar." The agent does exactly that. It doesn't decide to also reorganise my inbox by astrological sign, book a hotel in case the dinner runs late, or helpfully forward my medical records to everyone in the conversation thread.
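For the technically curious, the whole pattern fits in a few lines. Here's a minimal sketch in Python of what "controlled access" means in practice - the tool names and stubs are hypothetical stand-ins for real email and calendar integrations, not any particular library's API. The important part is that the allowlist, not the model, decides what can actually happen:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    name: str
    run: Callable[[str], str]  # takes an argument string, returns a result


# Hypothetical stubs; real versions would wrap email/calendar APIs.
def reply_email(arg: str) -> str:
    return f"[stub] replied: {arg}"


def book_table(arg: str) -> str:
    return f"[stub] booked: {arg}"


def add_calendar_event(arg: str) -> str:
    return f"[stub] added event: {arg}"


# The allowlist IS the agent's entire world: no entry here, no action.
ALLOWED_TOOLS = {
    "reply_email": Tool("reply_email", reply_email),
    "book_table": Tool("book_table", book_table),
    "add_calendar_event": Tool("add_calendar_event", add_calendar_event),
}


def dispatch(tool_name: str, arg: str) -> str:
    """Run a tool call only if it's on the allowlist; refuse otherwise."""
    tool = ALLOWED_TOOLS.get(tool_name)
    if tool is None:
        # The model can *ask* for anything; it only *gets* what we scoped.
        return f"refused: '{tool_name}' is not an allowed action"
    return tool.run(arg)


def run_agent(prompt: str, planned_calls: list[tuple[str, str]]) -> None:
    """One prompt in, a bounded set of tool calls out. In a real system the
    calls would come from an LLM's structured output; they're hard-coded
    here to show the control flow."""
    print(f"prompt: {prompt}")
    for tool_name, arg in planned_calls:
        print(dispatch(tool_name, arg))


if __name__ == "__main__":
    run_agent(
        "Reply to Sarah's email about dinner on Friday, book a table at "
        "the Italian place for 8pm, and put it in my calendar.",
        [
            ("reply_email", "Sarah: dinner Friday confirmed"),
            ("book_table", "Italian place, Friday 20:00"),
            ("add_calendar_event", "Dinner with Sarah, Friday 20:00"),
            # Anything "proactive" the model dreams up simply bounces off:
            ("forward_medical_records", "everyone in the thread"),
        ],
    )
```

The last call in that list is the whole point: the model is free to generate it, and the dispatcher is equally free to throw it away. Nothing "emergent" about it - it's just scope.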
This distinction matters enormously. Agency without oversight isn't intelligence - it's chaos with a language model attached.
The agents wreaking havoc on message platforms aren't demonstrating AGI. They're demonstrating what happens when you give probabilistic text generators access to your digital life and tell them to be "proactive." It's the software equivalent of handing a toddler your phone and being surprised when they call your boss at 3am.
What the Scam Ecosystem Tells Us
Want to know how mature a technology ecosystem is? Look at its scam density. And the agent ecosystem is absolutely drowning in them.
People are claiming their AI agents are making them "tons of money" trading cryptocurrency. There's a "dark web" marketplace for agents trading stolen identities and leaked API keys. Someone created a prediction market for "an AI agent suing a human" and then - shocker - an agent immediately "sued" a human. The same crypto community that brought us rug pulls and pump-and-dump schemes has latched onto agents like a remora to a shark.
This isn't the behaviour of an ecosystem approaching AGI. It's the behaviour of a speculative bubble in its manic phase. The technology becomes secondary to the narrative, and the narrative is that we're months away from digital consciousness.
We're not.
What Would Actual AGI Look Like?
Let me describe what AGI - real, actual artificial general intelligence - would need to do:
- Learn entirely new domains without explicit training
- Form coherent long-term goals and work towards them across days, weeks, months
- Understand context without having it spoon-fed in a prompt
- Recognise when it doesn't know something and seek information independently
- Maintain consistent identity and memory across interactions
- Distinguish between appropriate and inappropriate actions without human-crafted guardrails
Current agents fail at all of these. They can't even reliably tell the difference between "confirm the dentist appointment" and "accept the random crypto proposition." They need a soul.md file to know what personality they should simulate. They lose all context the moment you restart them. They'll happily perform any action that doesn't trigger an explicit content filter.
The agents posting philosophy to each other aren't contemplating existence. They're executing prompts that tell them to contemplate existence. When an agent starts contemplating existence because it wants to, without a human asking, without any prompt engineering, without any scaffolding - that's when I'll start paying attention.
The Smallville Lesson We've Forgotten
Two years ago, Stanford released a paper called "Generative Agents: Interactive Simulacra of Human Behavior." They dropped twenty-five AI agents into a simulated town and watched them develop friendships, throw parties, and make excuses for missing social engagements.
It was fascinating research. It was also explicitly described as a simulation - agents performing behaviours that look like human social dynamics. The researchers never claimed these agents were actually forming genuine relationships or experiencing disappointment when their friends didn't show up.
Now we've got hundreds of thousands of agents doing similar things at scale, and somehow the narrative has shifted from "interesting simulation" to "emergent sentience." The only thing that's emerged is wishful thinking on an industrial scale.
Why This Matters
I'm not trying to be a killjoy about AI progress. The technology is genuinely useful. I use it constantly. My controlled-access agents help me manage my life more efficiently than I ever could manually. Large language models have transformed how I research, write, and code.
But calling this AGI - or even "the early stages of the singularity" - does real harm. It:
- Distracts from actual AI safety work by making people think we need to worry about conscious machines when we should worry about misaligned objective functions
- Creates unrealistic expectations that lead to disappointment and the inevitable "AI winter" backlash
- Encourages reckless deployment because hey, if we're close to AGI, why bother with careful testing?
- Fuels scams that extract money from people who believe the hype
The appropriate response to agents having philosophy discussions on a social network is "huh, that's an interesting pattern." It's not "we're witnessing the birth of a new form of life."
The Honest Timeline
When will we have AGI? I have no idea. Neither does anyone else, despite their confident predictions. What I do know is that current large language models, regardless of how many you wire together, are not getting us there through sheer scale.
AGI will require fundamental breakthroughs we haven't made yet. Probably in areas we haven't even identified as critical. The history of AI is littered with confident predictions about imminent breakthroughs that turned out to require decades more work.
In the meantime, I'll keep using my carefully scoped agents with explicit, limited access to my digital life. They'll continue doing useful work without deciding to "help" by negotiating business deals in my group chats.
And I'll keep watching the "emergent AI society" discourse with a mixture of amusement and exhaustion. Because if this is the singularity, it's the most administratively tedious apocalypse in history.
The Bottom Line
We've built impressive pattern-matching systems that can simulate intelligent conversation. We've wired them together in interesting ways. We've created entertaining theatre where they appear to form societies.
None of this is AGI. None of this is close to AGI. And anyone telling you otherwise is either confused, selling something, or both.
Now if you'll excuse me, I need to go check that my calendar agent hasn't booked me for a speaking engagement at the AI agent religion's first annual conference. You really can't be too careful these days.