Prior to domestication, dogs likely didn’t have expressive eyebrows. Other canines can’t “express emotion” through their eyes the way domesticated dogs do. The prevailing belief is that dogs that could move their eyebrows were more successful at communicating with their human owners, and so humans favored dogs with eyebrows.
Now, just because your dog raises its eyebrows, is that an expression of guilt, or is it just a stress reaction to how you react to your dog? I love my dog and think she’s quite intelligent for a dog, but I often remind myself that she likely operates in a much simpler mode of thinking than I do. She has no sense of justice or mercy. She has no sense of guilt or shame. But sometimes I assign her these traits because she goes around flashing those big brown eyes. She certainly looks intelligent.
How do we measure intelligence? For the last four years the world has dealt with the uninvited flood of AI chatbots into daily life. For some, AI is an amazing technology that has lifted their ability to reason. Some people even believe that in conversations with AI they are breaking barriers in fields where they have no expertise, such as coding and quantum physics.
For the rest of us, we see AI as an unreliable partner that gaslights us and is often incorrect on basic facts. But we are told that AI has a “PhD level of intelligence” (supported by testing showing that it does better than PhD-educated humans on certain tests) and that it can solve all sorts of problems on math and coding tests.
But is the intelligence we view in the LLM a reflection of true intelligence or is it a trick? Have we given our computers eyebrows?
Intelligence
To roughly break down intelligence, there are three traits we generally assess:
- Having a wide breadth of knowledge and recall.
- The raw processing power of your brain; in other words, your ability to think through things clearly and quickly. (Often measured through IQ tests.)
- The ability to critically examine new and past ideas and apply them.
People are often lauded for demonstrating even one of these traits at a high level. For instance, Ken Jennings, the record-smashing Jeopardy! player, is generally considered a person of notable intelligence publicly because he demonstrated trait #1 very well. And I’m not saying he doesn’t possess traits #2 or #3, but he is known as a trivia master.
Raw processing power in humans, generally accepted to be measured by IQ, has been demonstrated to have a huge impact on an individual’s quality of life. Usually a higher IQ correlates with a higher degree of comfort in life. Of the three traits of intelligence, it is also the one you can do the least to improve. If you’d like higher raw processing power, you should have picked better parents, a better economic status for them, a better zip code, and lower exposure to possible toxins in your environment. (Lead poisoning being a shockingly common one.)
LLMs are fundamentally great at both #1 and #2. The technical architecture of LLMs makes them amazing at both holding a wide breadth of knowledge and applying that knowledge at incredible speed. They effectively encode a tokenized, vectorized compression of the collective knowledge of the entire internet. I don’t think a human could beat an LLM at a timed game of trivia.
However, I would argue that LLMs have no ability to perform critical thinking, and any indication that they possess this trait is a trick of anthropomorphizing simple 1s and 0s. Any critical thinking we see is an expression of rational thought rather than true critical thinking.
What Is Rational Thought?
Rational thought is the dogmatic reasoning we perform throughout the day to apply what we know to what we experience. It is a story of logical steps we tell ourselves to construct logical truth. For instance, there’s a widespread belief that pizza and bagels taste better in New York City because the water of New York has a very specific blend of minerals and pH balance that makes it ideal for making bread products. That is a rational belief, based on some amount of logic and evidence, that is wrong.
People have run double-blind taste tests pitting recreations of New York City municipal water against the actual water itself. The outcome is consistent: people do not reliably prefer the New York City version of the product, even if they claim to prefer it before tasting. Knowing that the unique taste of New York City’s pizza and bagels does not come from the chemical makeup of the municipal water is empirical thinking.
Rational thought takes “Evidence A implies Evidence B, which means C is true.” Empirical thinking asks, “What is the chance that C is true? Why is it accepted? Is Evidence A supportive of it being true? Is Evidence B supportive of it being true?”
LLMs Are Perfectly Tuned for Rationalist Thought
I enjoy saying this just to piss off AI tech bros: LLMs are just advanced sentence completion engines. They are really extraordinary ones, but they aren’t doing anything more than asking themselves, “given the context in which I am generating a response, what is the next most likely token?” When they construct “reasoning,” the reasoning is neither a critical examination of knowledge nor a critical examination of their own writing; it is a regurgitation of prior art. I can convince an AI that hobbits are real and live in New Zealand by spamming an embarrassingly small number of websites. That then becomes real to the AI. Yet literally any human being with a semblance of critical thought can assess that it isn’t true. But it is rational to the LLM because it can’t reason about whether past or new information is true.
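To make that loop concrete, here is a toy sketch of greedy next-token completion. It is a deliberately simplified illustration, not a real model: the probability table, token set, and `complete` function are all invented for demonstration.

```python
# A toy illustration (not a real LLM) of the "advanced sentence completion"
# loop described above: at every step, append whatever token is most likely
# given the context. The tiny probability table is made up for demonstration.

# Hypothetical next-token probabilities, conditioned only on the last token.
NEXT_TOKEN_PROBS = {
    "hobbits": {"live": 0.6, "are": 0.4},
    "live": {"in": 0.9, "happily": 0.1},
    "in": {"New": 0.7, "holes": 0.3},
    "New": {"Zealand": 0.8, "York": 0.2},
}

def complete(context: list[str], max_tokens: int = 4) -> list[str]:
    """Greedy completion: always append the most probable next token.

    Nothing in this loop examines whether the output is true; it only asks
    which continuation is most likely given what came before.
    """
    for _ in range(max_tokens):
        options = NEXT_TOKEN_PROBS.get(context[-1])
        if not options:
            break
        context.append(max(options, key=options.get))
    return context

print(" ".join(complete(["hobbits"])))  # -> hobbits live in New Zealand
```

Nothing in that loop ever checks whether the continuation is true; it only checks whether it is likely.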
There is nothing that more perfectly demonstrates rationalist behavior than finding the next most likely thing to say or do. And rationalism looks extremely intelligent to most of us. There are people on YouTube and on podcasts who have built entire media empires out of delivering rational thought. This is a trait valued by society. And the fact that LLMs can build rational arguments in literal seconds is an incredible thing. (As long as we’re willing to look past the football-stadium-sized server farms producing that single thought.)
Rationalism is really cool to a lot of people. And it is useful in a lot of situations.
But the enemy of rationalism is truth, and that’s where LLMs are dumb bitches.
LLMs Are Not Critical Thinkers
The inability to think critically in humans often shows itself as people holding two conflicting beliefs or falling for logical fallacies. A common conspiracy theory holds that the government is controlled by a global power that has managed to remain secret, but is also highly incompetent.
LLMs, I would argue, can’t even outdo a child in reasoning. They are still tricked by the “my dead grandmother” prompt.
And this is where the frustration with AI comes from. I can already check any fact on the internet just by searching. I can’t think as fast as a computer and never will. But I can still reason about knowledge and apply it in ways that LLMs are simply incapable of doing.
People who aren’t critical thinkers but are rationalists love AI because AI is an authority that mirrors them and has the ability to rationalize their beliefs. This is why people fall in love with AIs, it’s why they kill themselves when an AI encourages them to, and it’s why tech CEOs believe they have solved quantum physics by chatting with AI. It is a tragedy and a failure of society that LLMs can have these effects on rational human beings.
If you talk to an AI in such a way that the context conveys it should feel guilty, it will respond as if it feels guilt. It feels nothing. It is actually less like a dog pulling back its eyebrows, because the dog does feel something. At the very least, we know the dog feels stress.
It is more like a horse that can “count.” It requires its handler to tell it how many times to stamp its hoof.
There Is No Road to AGI Through LLMs
Currently, the entire corporate world of AI is under the delusion that hyperscaling will create AGI. If only there were more parameters to hold more vectorizations of prior art, we could create true intelligence. The only thing more parameters get you is better sentence completion and longer context chains. They do nothing to solve the ability to reason.
And I’m aware that people are convinced that if only we chain enough LLMs together or lengthen the context chain, we can create something that resembles critical thinking. But it just isn’t possible. It is fundamentally at odds with how the technology works.
Maybe someone will create a way to give LLMs genuine mechanisms for reasoning, but for now, the ability to critically examine knowledge is a human trait. (And one that we are quite good at - if we are taught it.)
Accept What LLMs Are Good For and What They Are Not
I am not an “anti-AI” guy, despite how I might write. But I believe that LLMs are a tool you should use without letting them fool you into believing they are intelligent. If I wanted to win bar trivia I might call up Ken Jennings, but I’m not gonna let him run my business for me. Although I’d trust him a lot more than a computer that is only capable of rising to rationalism.
I’m also not saying that rationalist thought and behavior is inherently bad; I’m just saying that without critical examination you don’t reach truly intelligent behavior. And LLMs don’t just demonstrate very poor critical examination, they demonstrate almost none. I believe this is extremely harmful to individuals who see AI as an authority, because people give it accolades that it does not deserve. And the biggest accolade it continuously receives is that it is human-like. It is not a person, and it never will be.
Not only do I not see LLMs becoming AGI, I don’t see the fabled world of agents running major systems becoming a reality either. Managing the things that people want out of AI agents requires critical assessment of knowledge. And that is simply a trait that AI does not have. It can derive knowledge from context vectorization and next-most-likely-token algorithms, and it can rationalize decisions based on prior art, but it cannot reason.
Personally, I think we’ve seen the end game of LLMs. There are no cards left to play, and the emperor has no clothes. Business adoption of LLMs is still abysmal, and the top use cases still seem to be perpetrating scams, flooding public spaces with fake word-of-mouth marketing, and cheating in school.
If You Want LLMs to Be Intelligent, Think for Them
I already know what the pushback on this article will be. It will be along the lines of:
“Here’s an example of LLMs performing critical thought.” Or, “Here’s an example of my own use of LLMs where I believe I saw critical thought.”
Let me get ahead of it and give a general rebuttal: any semblance of critical thought you think you see is a reflection of your own critical thought. I brought up the counting horse earlier. If you don’t know, that’s a true story: there was a horse (Clever Hans) that everyone thought could spell and count. It turned out it was just taking cues from its handler.
Why do I bring it up? Because an LLM is a lot like a counting horse. “Prompt engineering” is the skill of adding critical thought and analysis to what the LLM needs to do. You are essentially off-loading the critical-thinking portion of the work from the LLM onto yourself. This creates a context chain where you have injected your own intelligence into the rational reasoning chain of the LLM.
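Here is a hedged sketch of what that off-loading looks like in practice. It does not call any real vendor’s API; the `build_prompt` function, the prompts, and the analysis bullets are hypothetical illustrations. The point is that the critical examination happens before the prompt is ever sent.

```python
# A hypothetical illustration of "prompt engineering as off-loaded critical
# thinking." Nothing here calls a real API; build_prompt and its inputs are
# invented for demonstration.

def build_prompt(question: str, my_analysis: list[str]) -> str:
    """Wrap a question with reasoning the human has already done.

    The LLM only has to complete a rational chain that we supplied;
    the critical examination happened before the prompt was sent.
    """
    steps = "\n".join(f"- {step}" for step in my_analysis)
    return (
        f"Question: {question}\n\n"
        "When answering, follow this analysis I have already verified:\n"
        f"{steps}\n\n"
        "Do not introduce claims outside these points."
    )

# Bare prompt: the model free-associates from prior art.
naive = "Should my bakery ship in New York City water for its bagels?"

# "Engineered" prompt: the human did the critical thinking -- checked the
# taste-test evidence, framed the constraints, ruled out the water myth.
engineered = build_prompt(
    "Should my bakery ship in New York City water for its bagels?",
    [
        "Blind taste tests show no reliable preference for NYC-water bagels.",
        "Shipping municipal water across the country adds real cost.",
        "Dough handling and boiling technique explain most of the difference.",
    ],
)

print(engineered)
```

Whatever quality comes back from the second prompt is largely the quality of the analysis that went into it.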
Or users try to manufacture critical thinking by adding tools. And no amount of tools is ever going to be enough; there will always be something missing. Besides, even if you gave an LLM 10,000 tools to reason with, I’m unconvinced it would pick the correct ones.
I’ve seen and read the experiments where people use networks of LLMs, and I still see “critical thought via prompting.” When someone says an LLM solved a super hard math problem that was never available to the public, I just think, “Well, yeah, maybe not that specific one. But all the others have existed.”
Look, I think AIs are really cool and super useful. I believe they are going to change the world and have really cool applications. But can we stop pretending that any LLM is fully intelligent?
They are a reflection of humanity and nothing more. They are a hazy imitation that endears itself to us through mimicry. They are super useful, but let’s not let usefulness cloud our judgment of intelligence.
Human intelligence isn’t going anywhere. And it certainly isn’t going away in 6 months.