This is Part 2 of my LLM series. In Part 1, I discussed how in just a few short years, we went from the childlike joy of creating “Pirate Poetry” to the despair that our jobs would disappear. My main message was to relax a bit, as companies abuse the hype cycle to distort what is actually happening.

In this post I want to talk about how we fall prey to this distortion: we perceive LLMs as intelligent when they aren’t. A recent post from Jeppe Stricker put me on this path. He wrote, “AI produces fluent coherency, texts that follow the rules of argument and structures so well, it bypasses skepticism altogether.”

He’s right. LLMs are language models, and their superpower is fluency. It’s this fluency that hacks our brains, tricking us into seeing them as something they aren’t.

This is best seen when people claim that ChatGPT has “passed the Turing test.” The circular argument goes: if we can’t tell an LLM from a human, it is effectively human. This just makes me shake my head, as it profoundly misunderstands what is happening.

First, let me be a bit pedantic. In the original Turing Test, a human judge chats through a text-only interface with two hidden participants: one AI and one human. The judge’s goal is to spot the imposter. Today, the test is simplified from three participants to just two: a human and an LLM. This changes the test from a comparison to a judgment.

The problem? We really, really, really want to find the humanity in almost anything. This is a well-studied tendency called anthropomorphism, and the one-on-one test is basically setting us up to be hacked. This is why Stricker’s quote is so important: because LLMs are trained to recombine text written by other humans, their output bypasses our skepticism.

Back in the 1960s, Joseph Weizenbaum created a human-mimicking chatbot called ELIZA. It used no “AI”; it just relied on a long list of if-then-else clauses that recreated the questioning patterns of a Rogerian psychologist. The program was shockingly effective at convincing users that it understood them.
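To make concrete just how little machinery that took, here is a minimal sketch of an ELIZA-style pattern matcher in Python. The rules, templates, and function names are hypothetical stand-ins of my own; Weizenbaum’s actual DOCTOR script used ranked keywords and far more templates. The point is only that simple text substitution, with no model of meaning, can produce a passable conversational turn.

```python
import re

# Hypothetical ELIZA-style rules: each pairs a regex over the user's input
# with a reflective question template. These are illustrative stand-ins,
# not Weizenbaum's original rules.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]

# Swap first- and second-person words so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def respond(text: str) -> str:
    """Return a Rogerian-style reflection; no understanding involved."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return "Please, go on."  # fallback when nothing matches

print(respond("I am worried about my job"))
# -> Why do you say you are worried about your job?
print(respond("I feel nobody listens"))
# -> How long have you felt nobody listens?
```

A handful of rules like these, looped over whatever the user types, was enough to make people feel heard. Fluency, not understanding, did all the work.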