We've been wrestling with an agreed definition of “intelligence” since Galton, and arguably well before him. Teaching the concept to undergraduate and postgraduate students for several years has forced me to confront my own preconceptions about what it is, and I'm all but decided that it's impossible to box in. Human intelligence is not a finished artefact, after all. Intelligence grows, so how can we define it? It seems I know more about what it is not than what it is. But ask a tech CEO what intelligence is, and they'll tell you something different. And so, for me, the term intelligence has been hijacked.
On 29th March 2023, I created a ChatGPT account and typed my first question: “write me a synopsis for a book about psychological resilience.” The response was general and contained no sources, but it was reasonably accurate based on what I knew of the topic. So, I was impressed. That started my exploration of what this thing was that I was dealing with. And it became apparent that the LLM wasn't thinking any more than my calculator thinks when it multiplies. It was doing something remarkable, alright, but artificial or not, was it intelligence? For the answer, I think we need to go deeper than the surface-level output suggests.
“The original question, ‘Can machines think?' I believe to be too meaningless to deserve discussion.”
Alan Turing
The truth of the matter is that the tech industry has pulled off an extraordinary sleight of hand. They want you and me to believe the hype because that gets them noticed. Each time AI fails at something we call intelligent, the definition shifts. In the 1950s and 60s, intelligence for computer scientists seemed to mean reasoning and problem-solving. When machines couldn't do that reliably, it became pattern recognition. Now, despite its obvious errors, they say it's “generating plausible text.” The goalposts keep moving, we're not supposed to notice, and now they're heralding Artificial General Intelligence. The trouble is, they can define it however they like.
This critique isn't original, however. Researchers have documented the phenomenon. As Korteling et al. (2021) put it: “No matter how intelligent and autonomous AI agents become in certain respects, at least for the foreseeable future, they probably will remain unconscious machines or special-purpose devices.” AI can compute and predict, but it doesn't know. It cannot read the room. It has no social context, no understanding of human lives or the complexity of human relations. Navigating human relationships is where intelligence shows itself, and that is precisely what these systems cannot do.
What AI Actually Is
I have a professional background in electrical and electronic systems, so I enjoy technology. I get excited about connecting hardware, playing with software and making things work, and I recognise that Generative AI systems can do amazing things. They are obviously faster than humans at computation, and far faster than their linear predecessors. They are hyper-fast statistical machines: probability engines that predict likely next tokens (letters, words, or word fragments) from patterns in their training data. And they don't even use words; they use numbers, because they deal only in numerical representations of language. What they are good at is sophisticated pattern matching at scale. They process information but don't understand it, because there is nothing there with which to understand, in any sense we would recognise as understanding.
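The “probability engine” idea can be sketched in a few lines of code. This is a deliberately toy illustration, not how production LLMs are built (those use neural networks over subword tokens), but the principle of predicting a likely next token from statistics of the training data is the same:

```python
from collections import Counter, defaultdict

# Toy "training data": the model is nothing but frequency counts of
# which word follows which. No meaning is attached to any word.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Build a bigram table: for each word, count what follows it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token_distribution(prev):
    """Return the model's probabilities for the token after `prev`."""
    counts = following[prev]
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

print(next_token_distribution("the"))  # 'cat' comes out as most likely
```

Everything the “model” knows is in that table of counts. Scale the table up by many orders of magnitude and replace it with a neural network, and you have something closer to an LLM, but the output is still a statistically likely continuation, not an understood one.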
This distinction matters because if we believe LLMs are akin to human brains in a jar, then we are prone to being fooled by their apparent intelligence, and many of us already are. Take AI as a therapist, for example. The harm is already documented: teenagers have died by suicide after extensive chatbot interactions, and research from Brown University shows these systems endorse harmful proposals about a third of the time when tested (Idalski Carcone et al., 2025). They cannot read distress and cannot notice what's not being said. That's not a bug to be patched. It's the absence of human understanding, of social intelligence.
When I read your email, I understand what you're asking and that goes beyond the words themselves. I interpret your tone, I infer what you're not saying, I consider our history and how that shapes what you might mean. I bring meaning to the exchange. An AI system processes the statistical patterns of your words and produces statistically likely responses, but the response may be entirely inappropriate. That said, the more data the AI has on us both, the more accurate the response. But there are always gaps in what it “knows.” The outputs can be impressive, or even useful, but there's nobody home.
Korteling et al. (2021) say, “They are unconscious machines, meaning that they cannot independently be aware of, intrinsically interpret, or understand something they see, feel or do.” In fact, they don't see or feel at all. No awareness. No interpretation. No understanding. They are mimics, performing probability calculations at speed.
What Makes Humans Intelligent
The things that seem “easy” to humans, like reading a room, understanding sarcasm, catching a ball, or knowing when your partner is upset even when they say they're fine, turn out to be computationally near-impossible. This is Moravec's Paradox (Moravec, 1988): the skills that developed longest during our evolutionary history are the hardest to replicate artificially.
Human intelligence isn't merely cognitive computation. It's embodied, shaped by having a body that interacts with the world in an almost impossibly complex manner. It's relational, developed through and expressed in relationships with others and the environment. It's contextual, sensitive to situations, histories, and the unspoken rules of relationships. And it's meaning-laden: we don't just process information, we have a phenomenological experience within it.
Consider what you actually do when you walk into a meeting at work. You anticipate before you even arrive. You gauge the temperature, read the faces around the table, and notice who's sitting where and what that might mean. You pick up on the political tension between colleagues who aren't making eye contact. You adjust your approach based on all this information, much of which was never explicitly communicated to you. You navigate politics, manage relationships, build or impair trust. None of this can be reduced to pattern matching, no matter how sophisticated.
This is social intelligence, akin to Gardner's (1983) interpersonal and intrapersonal intelligences, and it's fundamental to human work. It's also entirely beyond what AI systems do, not as a temporary limitation to be solved, but as a fundamental consequence of what statistical pattern matching is. The algorithm might read data points faster than you and me; however, its bandwidth is narrow, and it misses the gaps between the data points that you and I pick up intuitively, even when we're unaware of doing so.
The Real Intelligence Agenda
So why the relentless push by techno-capitalists to call these systems “intelligent”? Why the breathless coverage of Altman (whom Gary Marcus has called a bullshit artist), of the sociopaths Karp, Thiel, Musk and co., and their predictions of imminent human-level AI? As if machines can come alive! Come off it. It's as though we're catching up with sci-fi: we're living in the movie, and that's exciting, but just like the movie, it's all pretend. The cynic in me insists on following the money, and two agendas seem to be driving the narrative.
The first is the replacement of human labour. The explicit goal of the techno-capitalist, I believe, is to replace wiggly and unpredictable humans. People get sick, answer back, have babies, form unions, get upset when they're bullied, go home before tasks are complete… We're just too inconvenient for the capitalist. The SHRM and Burning Glass Institute (2023) recently surveyed how organisations are actually deploying AI. The report doesn't talk about “augmentation” or “human-AI collaboration.” Instead, it talks about “the potential for labour replacement.”
Second, surveillance and control. AI systems enable monitoring of worker behaviour at unprecedented scale and depth. Keystrokes on your computer, screenshots of your work, every pause, every pattern of communication, every movement and every toilet break. All of it is captured, analysed, and often covertly used to determine your level of effort and build a case against you. Nothing new there, you might say. Such surveillance has existed since Frederick Taylor, though it is more granular and sophisticated today.
If this doesn't cause you unease, it should. The majority of wealth flows to the centre of power, and tech may be accelerating that. That same SHRM report notes that “GenAI is expected to significantly increase the concentration of wealth… workers with GenAI skills earn 21% more… companies with high GenAI exposure outperform by 22%.” The gains flow upward, to those who control the technology, not those displaced by it. Figures vary depending on which report you read, but the trend seems clear: AI is coming for your job.
Why This Matters For Your Career
Industry's definition of intelligence is narrow, and always has been. After all, it treats productivity and profit as the primary metrics, and whatever improves them is right and proper. Accepting the tech industry's definition of intelligence changes how you perceive yourself, how you position yourself in the labour market, how you negotiate your value, and ultimately, how organisations decide who stays and who goes.
If intelligence is merely processing information and generating outputs, then you become an expensive, unreliable version of something machines do better. Your years of experience reading clients, managing difficult stakeholders, and knowing when a deal is about to go sideways (“soft skills”) don't show up on a productivity dashboard. Your “biases” and intuition become bugs to be corrected. Your need for rest, relationships, and meaningful work becomes inefficiency.
But if intelligence includes the embodied, relational, contextual, meaning-laden capacities that humans possess, that which AI systems fundamentally lack, then the conversation shifts. Then you're not competing with machines on their terms. You're doing something different, something that can't be automated no matter how impressive the pattern matching becomes.
When your organisation talks about “AI-driven efficiency,” you need to ask the question: efficiency of what? When you're building a business case for your team, you need language that captures the value of human input and judgement. If you're advising someone early in their career, you need to help them develop capabilities that will matter in ten years, not just skills that might be automated in five.
Even the industry's own research admits what AI cannot do. Autor, Levy and Murnane's (2003) influential work established that human strengths lie in non-routine cognitive tasks (flexibility, creativity, generalised problem-solving, and complex communication), precisely what resists algorithmic codification. Recent AI developments have challenged this position, but its essence remains true.
The question isn't whether AI systems are useful. They are incredible tools, and you should learn to use them to your advantage. The question is whether you'll let the tech industry redefine what human intelligence means so they can sell the spoof that they've replicated it. Because once organisations accept that framing, the conversation about your value becomes much harder to win.
References
Autor, D. H., Levy, F., & Murnane, R. J. (2003). The Skill Content of Recent Technological Change: An Empirical Exploration. The Quarterly Journal of Economics, 118(4), 1279–1333. https://doi.org/10.1162/003355303322552801
Gardner, H. (1983). Frames of Mind: The Theory of Multiple Intelligences. Basic Books.
Idalski Carcone, A., Chen, Y., Engel, J., Naar, S., & Hartlieb, K. B. (2025). AI chatbots systematically violate mental health ethics standards. Brown University News. https://www.brown.edu/news/2025-10-21/ai-mental-health-ethics
Korteling, J. E., van de Boer-Visschedijk, G. C., Blankendaal, R. A. M., Boonekamp, R. C., & Eikelboom, A. R. (2021). Human versus Artificial Intelligence. Frontiers in Artificial Intelligence, 4, 622364. https://doi.org/10.3389/frai.2021.622364
Moravec, H. (1988). Mind Children: The Future of Robot and Human Intelligence. Harvard University Press.
SHRM & Burning Glass Institute. (2023). Generative Artificial Intelligence and the Workforce. Society for Human Resource Management.