“Artificial unintelligence more like it.”
So declared a reader of The Australian after one of my recent columns on artificial intelligence. Another chimed in with this observation: “AI can NOT work out what is a Spam or Phishing E-Mail, something that a human can do at just a glance.”
I stared at these comments, genuinely bewildered. Here were readers of Australia’s national broadsheet – presumably educated, thoughtful people – discussing artificial intelligence as if it were a malfunctioning spam filter from 1995. One complained about AI making humans obsolete, while simultaneously dismissing its capabilities. Another insisted the pre-internet world was “significantly better in most respects.”
The comments section read like a support group for technological anxiety. Each contributor seemed to outdo the last in finding reasons why AI was simultaneously useless and threatening, overhyped and dangerous, incompetent and too powerful.
The contradictions would have been amusing if they were not so revealing.
A few days later, I watched Eric Schmidt’s recent TED interview, and the cognitive dissonance nearly gave me whiplash.
The former Google CEO was blunt in his interview: AI is underhyped. Within five years, he predicted, nations might literally bomb each other’s data centres to prevent AI dominance. America alone needs 90 gigawatts of new power generation. That is 90 nuclear plants’ worth just to feed these systems.
Then came the kicker: Schmidt called this the most important development in 500 years. Maybe a thousand. The biggest thing since the printing press, perhaps since writing itself.
My readers fretted about email sorting. Schmidt warned about civilisational rupture.
The gulf between these two visions disturbed me enough that I turned to an AI system – yes, the very technology my readers dismissed – and asked it to analyse Schmidt’s claims.
The comprehensive dossier it produced – running over 26 pages with 90 footnotes – was sobering.
Current AI systems are already revolutionising drug discovery, identifying novel antibiotics in months rather than years. Companies are getting drugs to clinical trials in half the time at one-tenth the traditional cost. Protein-folding algorithms now map structures faster than laboratories can verify them.
They are designing battery materials with triple the energy density of today’s lithium-ion cells. They are proving mathematical conjectures that had baffled humans for decades. DeepSeek’s models match Western performance at a fraction of the cost.
Schmidt was not exaggerating. If anything, he was understating things.
The dossier went deeper. It detailed ‘agentic AI’ – systems that act autonomously, adapt in real-time, solve multi-step problems without human intervention. These are not chatbots but genuine digital agents, what one might call software organisms: capable of complex reasoning and action.
It explained recursive self-improvement, where AI systems enhance their own capabilities at accelerating rates. It outlined the exponential growth in computational demand – doubling every few months, not years. Metrics from January look quaint by December.
While The Australian’s readers debated junk mail, the technology was already transforming entire industries. The disconnect was not just wide; it was widening by the day.
Still mulling this over, I Zoomed my oldest friend back in Germany. We went to school together; he is now a law professor, thoughtful and measured in all things. His response to my AI anxieties was philosophical.
“If AI had arrived a century after my death,” he mused, “I would not feel I had missed anything. I would have lived perfectly contentedly without it.” A pause. “But it is here now. We see what is coming. So we engage with it, whether we like it or not.”
His resigned wisdom struck me as profoundly sensible. No hysteria, no evangelism, just clear-eyed recognition of reality. We talked for an hour about what this meant for education, law, society itself. He was already using AI for legal research, finding it helpful despite wishing it had never been invented. The pragmatism was so perfectly German, so perfectly him.
This resignation reminded me of something I had read years ago. When the Eiffel Tower went up, one Parisian aesthete hated it so much that he ate lunch in its restaurant daily. It was, he explained, the only spot in Paris where the damn thing was not visible on the horizon.
The story had always amused me. Now, it seemed a perfect metaphor for our moment. To escape what repels us, we must inhabit it.
And really, are we not already inhabiting our new AI-ffel Tower? Those dismissive readers typed their comments through AI-optimised networks, on AI-designed devices, on platforms using AI for content moderation and recommendation.
Google’s search algorithms, Amazon’s logistics, Netflix’s suggestions – all AI, all the time. They are dining at Gustave’s restaurant while insisting the tower does not exist.
Throughout history, technologies that terrify one generation become essential to the next, though not always without dislocation or cost. The printing press would corrupt youth – church leaders genuinely believed this. Telephones would end privacy – Victorian critics were certain. Television would create a generation of idiots – 1950s intellectuals wrote earnest books about it. The internet would make us all stupid – Nicholas Carr made this argument only fifteen years ago.
Yet each generation’s technological terror becomes the next generation’s normal. The Eiffel Tower itself – that ‘metal monstrosity’ that would ‘crush Paris under its weight’ – is now simply Paris.
But here is where AI differs from every technology before it.
Books did not replace conversation. Radio did not kill newspapers entirely. Television did not murder radio. Even the internet, for all its disruption, mostly transformed rather than eliminated earlier media. Each technology layered atop the previous, changing but not erasing what came before.
More importantly, these technologies revealed something profound about human nature. They do not transform us – they amplify us. Give internet access to the intellectually curious and they become amateur scholars, accessing papers and lectures from the world’s best minds. Give it to conspiracy theorists and they form digital cults, finding ‘evidence’ for any belief.
The educated watch documentaries from the BBC and university lectures from MIT. Others watch endless cat videos and reality TV clips. The technology does not change people. It reveals them. It amplifies them.
AI takes this amplification principle to the extreme. It does not just amplify our ability to communicate or consume. It amplifies capability itself. A good programmer becomes extraordinary. A creative writer becomes prolific. A researcher accesses insights at superhuman speed. But – and this is crucial – a person who cannot form a proper question gets nothing of value.
And this is precisely where my Australian readers stumbled into their trap.
AI presents itself as deceptively simple. Just chat with it! It is like texting a friend! This accessibility masks the skill required for genuine mastery. The interface invites casual use but rewards sophisticated engagement. Ask lazy questions, get lazy answers – and conclude AI is overhyped. Feel vindicated, stop trying to improve. The downward spiral accelerates.
I have watched this happen repeatedly. Intelligent people, accomplished in their fields, approach AI like they are ordering coffee. They type a vague request, receive a generic response, and conclude the technology is mediocre.
They never learn to prompt properly, to iterate, to guide the system toward excellence. They mistake the mirror for the view.
Nobody reads a quantum physics textbook, fails to grasp it, and then blames physics. We have cultural frameworks for recognising when we are out of our depth. But AI’s conversational interface tricks people into thinking they are experts after one bad interaction. They are looking in a mirror and blaming the reflection.
Schmidt’s timeline – five years to potential catastrophe – is not science fiction. The mathematics of exponential improvement are pitiless. When capabilities double every few months, not years, the future arrives faster than human institutions can adapt. Countries mastering AI will see every strength multiplied. Those resisting will watch every weakness magnified. What starts as a small gap becomes an unbridgeable chasm faster than most people can comprehend.
Australia and New Zealand face particular challenges here. We have perfected Tall Poppy Syndrome, that deplorable cultural reflex to cut down anyone who excels. We prefer lifestyle to achievement, comfort to ambition, consensus to competition. These were manageable quirks in the analogue age. Pleasant, even. They made our countries nice places to live.
But exponential technologies do not care about nice. They reward ambition, punish complacency, amplify both excellence and mediocrity without mercy.
When The Australian’s readers cite inadequate energy infrastructure as a reason to avoid AI, they are making excuses. The real infrastructure gap is in our minds.
Yet the situation is not hopeless: our part of the world has genuine AI capabilities too. CSIRO’s Data61 ranks among the world’s top publicly funded AI labs. Atlassian integrates machine learning into tools used by millions of software teams. Canva deploys AI for design at consumer scale.
In New Zealand, Rocket Lab uses AI for trajectory optimisation, Soul Machines creates eerily lifelike digital humans, and Xero embeds AI throughout its accounting platform. So the spark exists. What it lacks is cultural oxygen.
My conversations revealed essentially three responses to AI’s arrival, though they blur at the edges and most people combine elements of each.
First, denial. Some resist entirely – like those Australian readers. They will keep doing things the old way while competitors race ahead with AI-augmented everything. They will write every document from scratch while others generate, edit and refine in minutes. They will solve problems through trial and error while others get instant answers.
It is not just inefficiency; it is a choice to become irrelevant.
Second, evangelism. Others, like Schmidt, sprint toward the future. They build the data centres, train the models, write the algorithms. They create the infrastructure, set the rules, capture the value. They will own tomorrow because they are creating it today. Not everyone can be a prophet. But prophets, as they say, tend to profit.
Third, pragmatic acceptance. My German friend represents perhaps the wisest response: neither denial nor evangelism, but measured adaptation. He wishes this cup could pass but knows it cannot. And so he drinks. My friend will adapt enough to stay relevant without pretending to love what he must endure.
There is dignity in this position, and probably sustainability too.
There is something deeply odd about descendants of people who crossed the Pacific in wooden ships being frightened by a chatbot. Our ancestors took genuine risks – physical, financial, existential. We fret about whether AI might give us the wrong recipe for pavlova.
But that Parisian who hated the Eiffel Tower? He died, eventually. Paris kept both its controversial tower and its beauty. Today, there is hardly a postcard from Paris that does not feature the tower. It defines the city it was supposed to ruin. Time has a way of settling these arguments, usually in favour of the future.
The question is whether countries like Australia and New Zealand will participate in shaping AI’s impact or simply be shaped by it. The early choices matter enormously. Countries writing the rules today will embed their values and interests in systems that may run for centuries.
Countries complaining about spam filters will get whatever others decide to give them.
Those reader comments I started with – they are not really about AI at all. They are about a culture that has grown too comfortable to recognise uncomfortable truths. A society that mistakes its good fortune for good judgment, its isolation for independence, its luck for virtue. The mirror does not lie, even when we do not like what it shows.
I think about my German friend, using AI for his legal research despite wishing it did not exist. I think about Schmidt, warning about bombs and data centres while building the future anyway. I think about those readers, so certain AI cannot do anything useful, typing their certainty into AI-powered systems.
The gap between these worldviews is not just about technology. It is about whether we engage with reality as it is or as we wish it were. Whether we acknowledge the world has changed or keep our eyes shut. Whether we shape the future or let it shape us.
Both Australia and New Zealand have been fortunate for a very long time. They have prospered through geography more than strategy, thrived on natural resources more than innovation, and avoided catastrophes through isolation more than foresight. But luck, as any gambler knows, eventually runs out. And when it does, these countries had better have something more substantial than spam filter complaints to fall back on.
AI will amplify whatever we bring to it. The question is whether it will magnify the boldness that sent settlers across vast oceans, the ingenuity that built nations from scratch, the pragmatism that made small populations punch above their weight. Or whether it will amplify the complacency, the risk aversion, the Tall Poppy Syndrome that now threatens to leave both countries behind.
The technology itself is neutral. The choice of what gets amplified is ours.
To read the article on the Quadrant website, click here.