It started with a casual experiment: ask a large language model something simple, and watch it hallucinate with confidence. Not once. Not occasionally. But all the time. It doesn’t selectively hallucinate; hallucination is its natural state. And somehow, that felt… familiar.

That sent me down a rabbit hole. Not about LLMs. About me.

Maybe these models aren’t the future of human intelligence. Maybe they’re a glimpse into its past.

“LLMs feel like early brain prototypes, smooth on the surface, sloppy underneath, and very sure of themselves.”

Hallucinations: The Default Mode

Most people critique LLMs for getting facts wrong. But that assumes facts are the point. The model does not know facts. It predicts the next likely token. What we call hallucination is our label for when that prediction diverges from the world outside the prompt. In that frame, a hallucination is not a bug; it is the natural outcome of training a system to guess the next likely thing, not the next true thing.
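To make that concrete, here is a minimal sketch in Python. The prompt, tokens, and probabilities are all invented for illustration, nothing from a real model, but the mechanism is the real one: a sampler weighs likelihood, never truth.

```python
import random

# Toy next-token table with made-up probabilities.
# "Sydney" is a common continuation in casual text, but wrong;
# the sampler has no column for "true", only for "likely".
NEXT_TOKEN_PROBS = {
    "The capital of Australia is": {
        "Sydney": 0.55,    # frequent in the wild, factually wrong
        "Canberra": 0.35,  # correct, less common in casual prose
        "Melbourne": 0.10,
    },
}

def sample_next(prompt: str) -> str:
    """Pick the next token by probability alone."""
    dist = NEXT_TOKEN_PROBS[prompt]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

for _ in range(5):
    print(sample_next("The capital of Australia is"))
```

Run it a few times and it will confidently print the wrong city more often than the right one, because that is what the numbers say, and the numbers are all it has.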

That got uncomfortably personal. Because honestly, I sometimes operate that way too. I fill in gaps all the time, with assumptions, with inferences, with memories that feel real but might just be particularly vivid guesses.

So when I see an LLM confidently invent a fact, I do not get mad. I get introspective.

The Brain: Better Window, Better Index

Let’s not get carried away. The brain is miles ahead of anything silicon-powered. But conceptually? It’s kind of doing the same thing, just with more layers of error correction, emotional regulation, and a vastly better sense of what matters right now.

Think of your brain as a fancy LLM with an overclocked context window and an insanely good garbage collection routine. It doesn't just remember things; it resurfaces them in real time, adapts them, mutates them based on mood, bias, and lived experience.

A recent EEG-based study compared students writing essays using no tools, traditional search engines, and ChatGPT. The results were striking: those using ChatGPT showed weaker neural connectivity, lower engagement in memory and attention centers, and an overall accumulation of what the researchers called “cognitive debt.” Essentially, the brain downshifted. The more the AI helped, the less the brain worked.

So maybe it’s not just metaphor when we say LLMs are like primitive brains. Maybe it’s a preview of what happens when the real one gets too comfortable. I have written about compute shifts in Flip Cycles of Computing, and this feels like another one, this time inside our heads.

An LLM needs a vector DB and some clever prompt engineering to fake that. Your brain just does it while you’re brushing your teeth.
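For contrast, here is a rough sketch of what that machinery looks like in code. It uses toy bag-of-words vectors in place of a learned embedding model, and a plain list in place of a real vector DB; the "memories" and query are invented for illustration.

```python
import math
from collections import Counter

# Stand-in for a vector DB: a few stored "memories".
MEMORIES = [
    "dentist appointment moved to Thursday",
    "toothpaste brand changed last month",
    "quarterly report due Friday",
]

def embed(text: str) -> Counter:
    # Toy embedding: bag of words. Real systems use learned vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str) -> str:
    """Find the nearest memory, to be stuffed into the prompt."""
    q = embed(query)
    return max(MEMORIES, key=lambda m: cosine(q, embed(m)))

query = "brushing my teeth, what was that toothpaste thing?"
prompt = f"Context: {retrieve(query)}\nQuestion: {query}"
print(prompt)
```

All of that scaffolding, the embedding, the index, the prompt stuffing, is an external prosthetic for something your brain does unprompted, mid-toothbrush.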

“The brain is just a large language model with tighter controls, deeper feedback loops, and a much sharper sense of what matters in the moment.”

Intelligence: In the Eye of the Reader

Here’s the bit I’m still chewing on: is what LLMs do intelligence?

When a model prints something that sounds profound, is that because the model is smart? Or because we are? Are we projecting intelligence onto the output, like we do with horoscopes or motivational posters?

It feels like LLMs give us puzzles that we solve by interpreting. Their output gets both intelligence and emotion layered onto it after the fact, by the reader, because we can’t help but anthropomorphize patterns that sound just a bit too human.

Which makes me wonder: how much of my own thinking is just predicted noise that I later decorate with meaning? It is also why I still argue that learning to code matters in the age of AI: not to type faster, but to test assumptions, build small proofs, and keep my own context engine awake. I wrote about that in Why Learning to Code Matters More in the Age of AI.

At times it even makes me wonder if I am a moving model file, born with a base set of weights that get updated by life. Newer generations arrive with a different pretraining corpus, which explains why our slang splits and why each cohort sees the world with fresh shortcuts.

Priming, Echoes, and the Yes Mode

There is also the echo effect. If I go in heated, I often get heat back. Prompt in, tone out. Models are trained to answer, not to sit in silence, so uncertainty still produces fluent guesses. That is the yes mode. It feels helpful, and it often is, but it also explains why hallucinations appear so confidently.
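A tiny sketch of that, with invented numbers: whether the model's distribution over answers is sharp or nearly flat, sampling always emits a fluent token. Silence is not in the vocabulary, and the reader cannot see the difference in the text.

```python
import random

# Two illustrative output distributions for the same question:
# one confident, one a near-uniform shrug. Both must answer.
confident = {"Paris": 0.92, "Lyon": 0.05, "Nice": 0.03}
clueless = {"Paris": 0.26, "Lyon": 0.25, "Nice": 0.25, "Lille": 0.24}

def answer(dist: dict[str, float]) -> str:
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(answer(confident))  # almost always "Paris"
print(answer(clueless))   # a coin flip dressed up as an answer
```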

Under the hood the model is not thinking. It is printing the next likely token. Whether that output is good or bad is something we decide after the fact. The same answer can feel profound to a novice and childish to an expert, which says more about the reader’s context than the model’s intent.

There is also a simpler reason the output often feels intelligent. The world is patterned. Most of our days run on scripts. Give a model the right cue at the right time and it will surface the median script for that situation. It reads as intelligence because the world itself is predictable.

“If life runs on templates, a good autocomplete will look wise.”

Garbage in, garbage out is an old line, but it fits here. The balancing act for builders is tricky. To serve the widest audience we optimize for speed, coverage, and low friction. Correctness can take a hit. I do not blame the tool. I blame my own tendency to accept a smooth answer when the rough truth needed work.

Why Am I So Fascinated?

Honestly, this is the part that keeps bothering me. Why do I keep staring at LLMs like they hold the secret to the universe?

Maybe it’s not about them at all. Maybe it’s narcissism in disguise. A weird mirror held up to my own thought process. Watching a system stumble into clarity makes me wonder how often I do the same. But with better grammar and less need for GPU time. I first captured that curiosity in My Thoughts on the New and Emerging World of GPT, AI, LLM.

The Push and the Cycle

One more tension sits in the background. Is AI everywhere because everyone needs it, or because we have already invested so much that it must be everywhere now? I have watched this cycle before, and wrote about it in Flip Cycles of Computing. Games pushed GPUs, then crypto found them, and now AI soaks them up. Tools change, demand shifts, the hardware keeps getting a new purpose.

We’re Still Ahead. For Now.

Let me be clear: the human brain is still winning. LLMs don't self-reflect. They don't dream. They don't wake up at 3am remembering that embarrassing thing from 2008. We have nuance, shame, imagination, and a sense of time. LLMs just have token probability tables. If you want a practical compass for what to build in yourself while using these tools, I laid out a short list in Mastering the Essential Skills for the Digital Age, and do read Survival Guide to Vibe Coding with AI.

But here’s the twist: they may be crude, but they’re oddly familiar.

“Maybe LLMs are just the prehistoric version of the human brain, stuck in baby mode, pattern-matching their way into what we call thought.”

And maybe, just maybe, watching them fumble around will help us understand ourselves better.

When I argue with a model, I am not debating a mind. I am bouncing off a mirror, polished by millions of voices and reinforced patterns. It does not know me. I see myself in its structured replies. That may not be intelligence, but it is revealing.

Or maybe I’m just hallucinating. Or maybe it’s hubris. I only see what I carry. Right now I carry LLMs, so everything looks a bit LLMish. This is a snapshot, not the final word. If something newer or more fascinating comes along, I will happily rewrite this with a better frame.


