The forthcoming paragraphs may be controversial, or maybe they won't be, depending on how invested you are in what we have come to know as “Artificial Intelligence.” The machine will be as intelligent as a definition will allow it, and I will argue that the definition we have assigned this technology has set the bar pretty low. Artificial intelligence, once the preserve of science fiction, has become a wastebasket term for any and all technology that promises technological utopia, dystopia or something in between. I will argue that whatever this technology is, it is not intelligent. It is Turing's Imitation Machine. It has become a marketing term. As Federico Faggin, designer of the world's first microprocessor, has written, “Digital computers cannot have the crucial properties that characterise human intelligence.” In a recent interview, Faggin insists, “the computer has no consciousness and no free will…but science is telling us we are like a computer, and this is the problem.”
You train a parrot to say “Polly wants a cracker,” and the parrot learns that this behaviour meets their need for food and perhaps attention. For living organisms, what is need? At a basic level, perhaps survival. Survival of what? Survival of whatever it is I am. What is I am? I can't say. All I know from my unique perspective is that there is something going on here, a will to exist and discover beyond the apparent binary need-fulfilment mechanism. The parrot's behaviour is somewhat true of human behaviour too. Over one hundred years ago, Ivan Pavlov's work on the digestive systems of dogs inadvertently gave rise to the behaviourist movement and to classical and operant conditioning. Present a stimulus, as the idea goes, evoke a response, and deliver the reinforcement to embed the behaviour. No cognition or feeling states, just stimulus and mindless reaction. This finding is still utilised today to direct and manipulate populations. Advertisers and marketers know it very well, using its principles to direct your purchasing habits, to convince you that the “I” in AI really is intelligence. Governments know this too, shelling out tens or even hundreds of millions each year to shape public opinion. That said, it's an incomplete theory, and on its own it's not enough to account for the human condition.
“Digital computers cannot have the crucial properties that characterize human intelligence. And this is vital knowledge at a time when artificial general intelligence is considered possible by a large number of our scientists.”
Federico Faggin
Your pet parrot hardly understands what “Polly wants a cracker” means in our terms, but they have arguably formed an association. Perhaps animals represent our words and actions in some way, but do they understand as we do? Maybe not. Regardless, it doesn't stop us assigning the animal human-like qualities. When we watch a chatbot generate a thousand words about consciousness or creativity, or provide us with life advice, the most naive amongst us are inclined to assign the machine human-like qualities too. The silicon has somehow crossed a threshold from inanimate material to a state of mind. Is your house intelligent because you have hooked up electronic devices that operate without human input? Truth is, and at the risk of insulting the animal, I would assign a greater level of intelligence and conscious awareness to the parrot than to the silicon chip, regardless of how “intelligent” the computer appears to be.
What's going on now across the social sphere regarding AI is like the phase children go through where a part of them believes their toys come alive at night. It's charming and developmentally appropriate for children, but they grow out of it, or at least they're supposed to. As much as we are entertained by Toy Story, there's hardly a need to mention the obvious. We know the puppets or cartoons aren't real and so do the children. It's absurd to even point to the man behind the curtain, the strings and the levers attached. The character has no interiority, it's not wondering what it's like to be a puppet even though it might be fun to imagine it. So why have so many intelligent adults convinced themselves that large language models and the agents they spawn are any different? One thing I have learned about human beings is that we seem to be serially manipulable.
Can Dead Matter Develop Consciousness?
Intelligent behaviour is what consciousness does at the material level. Consciousness, in my definition, is not merely being aware of oneself. It is not solely the state of awakeness. Instead, it is that which animates us, allows us to know, to possess phenomenological experience. It is an independent will to exist. I'm not arguing that consciousness is the exclusive preserve of human beings, however. In fact, you can make a reasonable case that animals possess consciousness, that there is something like what it means to be a cat or a dolphin. Also, trees, plants and other organic life respond to the environment in ways that suggest some rudimentary sense of being. The whole planet is, in some meaningful sense, alive and aware of itself. Perhaps even rocks possess something at the atomic level that we might, with enough philosophical latitude, call a form of proto-consciousness, for how else could the atoms of a rock conspire to become a rock? Maybe that's a stretch, maybe it's simply a blind mechanism at work. Is magnetism an aspect of consciousness? Regardless, I cannot believe for a second that human beings can put sufficient volumes of matter together such that matter develops an animal or plant-like conscious state, let alone a human one. It's one thing to assign conscious-like qualities to animals and plants, but to dead inanimate matter, to machines? For me, consciousness simply is, and is fundamental.
A dominant view within neuroscience for many decades suggests that the brain, although incredibly complex and perhaps impossible to comprehend, is simply a dumb mechanism. Much like the behaviourist view, there is nothing behind its activity. Consciousness, that sense of being alive, of being oneself, of having human experience, is simply an epiphenomenon of the brain. Jesus Christ, what a put-down. You're an accident of evolution derived from matter. Somehow, somewhere along the line, chemical elements bound together and developed an awareness of themselves. Consciousness, according to this idea, is emergent, not fundamental. Awareness is finite, and eventually falls back into nothing, darkness, non-existence. This idea is fundamentally materialist, and has over the past several hundred years dominated our understanding of human behaviour and of the nature of reality. As such, the mind-as-machine view influenced the underlying philosophy of computer science and AI development.
Abstractions Are The Consequence, Not The Cause
At the risk of severe criticism from those in computer science fields who know more than I do on these things, let's talk about what's going on inside your common-or-garden LLM. Let's attempt to counter the fallacy that fuels the notion we will soon realise AGI, and that Moltbot (or Clawdbot or whatever they're calling it now) is evidence of a conscious and independently thinking machine with goals and ambitions of its own. The materialists (or are they marketers, I can't always tell) are celebrating. They believe that they have finally gathered the right materials in adequate quantities to reproduce the human brain, and by extension, consciousness, or that they soon will.
If we build a brain following this analogy and throw enough information at it, consciousness could emerge. This is the implicit belief of scientism, and this is why I believe it to be remarkably naive. The neural network, like everything in mathematics and engineering, is an abstraction, a model of reality, or more precisely, an abstraction of an abstraction of an abstraction. And so from this materialist viewpoint, mind, the thing that creates abstractions, is itself a product of an abstraction. Well, we can see the paradox here. How can that which is an abstraction create the thing that creates the abstraction? It doesn't work. The concept of a “neuron” in computer science terms is itself a loose analogy borrowed from biology. The implementation of that concept in silicon is another layer of abstraction removed from anything resembling actual neural activity. This compounding of abstractions is where we lose sight of what's actually happening, and where the anthropomorphic confusion takes hold.
Inside The Machine
At the physical level, a neural network runs on a gigantic binary system, ones and zeros, ons and offs in low voltage DC electrical circuitry. The hardware consists of transistors and capacitors, billions of them arranged in circuits etched on to silicon wafers through photolithography. Each transistor is a tiny switch, passing or not passing current based on the voltage at its gate. The patterns of conduction across this circuitry encode binary representations of numbers. The number 0.7 inside a computer, for example, isn't a number in any meaningful sense, it's just a pattern of tiny electrical charges. The billions of “weights” that define an AI model are the same. It's not knowledge or understanding, but vast grids of voltage states in silicon. Memory, in this context, means physical storage on the chip, tiny capacitors/transistors, each holding or not holding an electrical charge, representing a one or a zero. String enough of these together and you can represent a decimal number like 0.7 in binary. The “weight” isn't a concept floating in space, it's a pattern of charged and uncharged capacitors or transistors in silicon.
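You can see this for yourself in a few lines of Python. This is a sketch of my point, not of any particular chip: the standard IEEE-754 single-precision encoding (the format that GPU number formats are variants of) lets us read off the 32 switch positions that stand in for the “weight” 0.7.

```python
# The decimal 0.7, viewed as the 32 ones and zeros a chip actually stores.
# This is the standard IEEE-754 single-precision pattern as reported by
# Python's struct module; nothing here is specific to AI hardware.
import struct

def float_bits(x: float) -> str:
    """Return the 32-bit IEEE-754 pattern of x as a string of ones and zeros."""
    (n,) = struct.unpack(">I", struct.pack(">f", x))
    return format(n, "032b")

print(float_bits(0.7))   # 00111111001100110011001100110011
```

Notice that the stored pattern isn't even exactly 0.7, only the nearest representable pattern of charges. The “number” is entirely our interpretation of those switch positions.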
During inference (the generation of a response, if you like) these stored values are “fetched” from longer-term storage and loaded into working memory (RAM), where they exist as charge states ready to be used in calculations. “Fetched” assumes a fetcher, an agent that takes action, but there is no ghost in the machine even though the language suggests there is. Nothing alive, no doer doing, no intelligence even remotely comparable to that of humans, or indeed any at all, let alone that of animal or plant life. Your input text, converted to numbers, is multiplied by the weight values at a given “node” in the network, the results are summed, and the output passes to the next layer of the network. This is matrix multiplication, nothing more than multiply-and-add operations executed billions of times in parallel circuits. All algorithm and mathematics. The numbers flow through logic gates as voltage signals, producing new voltage signals, which represent the next set of numbers, which flow through more gates. Repeat this across dozens or hundreds of layers, and out the other end comes a probability distribution over possible next words. A decoding rule (in the simplest case, pick the highest-probability token) selects one, a lookup table maps it to text, and electrical signals render it on your screen.
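The whole process can be caricatured in a few lines of Python. The numbers below are invented for illustration (a real model has billions of weights, not a dozen, and far more layers), but the operations are exactly the multiply-and-add I've described:

```python
# A toy two-layer "network" with invented weights, reduced to its essence:
# weighted sums, then pick the highest-scoring "token". No fetcher, no doer.

def layer(inputs, weights):
    """One layer: each output is a weighted sum (multiply-and-add) of the inputs."""
    return [sum(x * w for x, w in zip(inputs, row)) for row in weights]

x = [0.7, -0.2, 0.5]                           # your text, already turned into numbers
w1 = [[0.1, 0.4, -0.3], [0.2, 0.2, 0.6]]       # layer 1 weights: 3 inputs -> 2 values
w2 = [[0.5, -0.1], [0.3, 0.9], [-0.4, 0.2]]    # layer 2 weights: 2 values -> 3 token scores

scores = layer(layer(x, w1), w2)

# The simplest deterministic decoding rule: take the highest score and
# look it up in a (hypothetical) token table.
vocab = ["cracker", "Polly", "wants"]
print(vocab[scores.index(max(scores))])        # prints "Polly"
```

That's it. Nested loops of multiplication and addition, then a lookup. Wherever in that process you go searching for the understanding, you find only arithmetic.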
I propose to consider the question, “Can machines think?” This should begin with definitions of the meaning of the terms ‘machine' and ‘think'. The definitions might be framed so as to reflect, so far as possible, the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine' and ‘think' are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll. But this is absurd.
Alan Turing, “Computing Machinery and Intelligence”, 1950
Why We So Easily Fool Ourselves
I have no choice but to admit that LLMs are a remarkable feat of science and engineering, analogous to how the brain works, but only loosely. So fast, so realistic in their output, and apparently so clever. But what exactly is clever? I'd say the people who built them are clever, but the LLM itself? No, not clever. Just abstraction and mathematics at incredible speed, tolerance and efficiency. There's no understanding, no intent, no social context, no model of the world in any meaningful sense, just very sophisticated pattern matching “learned” from data. Electrical signals, mathematics, algorithms. It is a statistical predictive machine that has been trained to copy language behaviour, imagery, mathematics, and what human emotion looks like in text form. It is parallel DC electrical circuitry on a vast scale and at very high speed, and the reason we believe it is “intelligent” is the scale and complexity of its output. It resembles human output. Psychopaths mimic human emotion, they don't feel it…remember that.
In comparison, the surface-level human mind, cognition, can cope with only a small number of variables. Add further complexity and we cannot derive meaning at the surface level. But we don't need to. Meaning goes deeper than surface-level cognition, it's something we feel. Our organism processes far more data than we can be aware of, and makes decisions accordingly. So many of our responses to the environment happen without our immediate knowledge; only afterwards do we become aware of them, and sometimes we don't at all. The reason many have become over-excited about AI is this very cognitive limitation. Our surface-level minds cannot comprehend how LLMs do what they do. And so many of us have been fooled, even those who built them, but that's nothing new. We've been fooling ourselves forever, and allowing ourselves to be fooled by less than honourable others.
I should add, I'm not suggesting in my simplification that LLMs are not potentially dangerous. On the contrary, all this mimicking of human behaviour, combined with our propensity for naivety, has dramatic potential downside.
The Mystery Is Where You Are
Is it not clear that what these neural networks do is simply what they have been designed to do? They have become extraordinarily accurate, and what they have “learned” is to mimic. They are loaded with human bias. There's nothing remarkable in what they do, it's simply that there are more variables than our conscious mind can conceive. They are sophisticated pattern-matching systems trained on vast amounts of human-made content, learning statistical regularities in how we arrange words and numbers. They have consumed Reddit, Wikipedia, every available book, every blog post, forum thread, and comment section on the internet. They've ingested our linguistic record and “learned” to produce plausible outputs.
When you watch AI agents “having conversations with themselves” in some arrangement of API calls, you're watching an elaborate puppet show. The agent doesn't know it exists. The agent has no “goals” beyond the next token prediction. It doesn't have goals, in fact, it only seems to. The agent has no concept of what a goal is or of the world at large, no understanding, no preferences, no desires. It's not trying to achieve anything. It's completing patterns and blindly responding to signals. When it says “I think” or “I believe” or “Let me help you with that” or “well done, Larry, good catch!” it's producing the statistically likely next sequence of tokens given the preceding context and its training. The words mean nothing to it because it doesn't know language, only electrical signals. It doesn't even know its own code. It has no knowing whatsoever. What appears to be self-reflection is just more pattern matching. The system produces text about its own processes without any capacity to observe those processes, confabulating explanations that fit the discourse of AI self-reflection because that discourse was in the training data.
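The “completing patterns” point can be demonstrated with the crudest possible language model, a bigram counter. The little “corpus” here is invented for illustration, but the principle, count what followed what and regurgitate the most frequent continuation, is the seed of the real thing:

```python
# A toy sketch of pattern completion: count which word follows which in a
# tiny invented corpus, then "predict" by echoing the most frequent pairing.
# No understanding anywhere, only tallies.
from collections import Counter, defaultdict

corpus = "polly wants a cracker polly wants attention polly wants a cracker".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the continuation seen most often in training."""
    return follows[word].most_common(1)[0][0]

print(predict("wants"))   # prints "a", because "a" followed "wants" twice, "attention" once
```

Scale the tallying up by many orders of magnitude and smooth it with matrix arithmetic, and you get something that talks back, still without ever knowing what a cracker is.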
If you are a materialist, if your surface-level thinking is purely binary, if you believe that whatever you are is simply an epiphenomenon of your own brain, you will insist that the puppet show is real. We extend the courtesy of consciousness to other humans—and perhaps to animals—because of deep evolutionary, social, and phenomenological reasons that have nothing to do with mere behaviour-matching. LLMs don't have nervous systems, homeostatic drives, developmental histories, or bodies. They don't have anything that every conscious entity we've ever encountered has in common. What they have is statistics and compute—mathematical operations executing on electrical hardware, voltage states in transistors, numbers being multiplied and summed at extraordinary scale. The “understanding” exists only in our interpretation of the outputs, not in the process that generates them.
In sum, these machines are not intelligent as far as I'm concerned, and to use the term intelligence to describe them is too broad a leap. Humans, animals, plant life, this living planet: these are intelligent. These machines are not. People have always been and always will be the source of all problems. So stay vigilant, and be careful of who and what you trust.
“I think we mistake the simulation for the thing simulated when it comes to consciousness…For instance, another simulation, a simulation of a black hole. I can simulate a black hole very accurately on my desktop. It doesn't mean that when I run that simulation, I will be sucked in…I don't think silicon computers will ever have a private conscious inner life.”
I'm a work and business psychologist, writer and researcher working one-to-one with people seeking to find clarity and direction in their work and career. I also work with business owners and organisations on leadership, culture, and psychological wellness.