Smarter Than Us? How AI Became Just Another Flawed Reflection


When artificial intelligence first started breaking into the public consciousness, it came with the promise of superhuman intelligence.
A tireless, unbiased mind.
Faster, sharper, more reliable than anything humanity could dream of.
It was supposed to be better than us.

Instead, what we got feels eerily familiar: a digital mirror — with all our flaws, amplified at scale.

Let's break it down:


1. Outdated Knowledge, Inflated Confidence

AI models, especially those trained before the explosion of "live learning," are often working with information that's already stale.
Even when wrong, they present answers with unshakable certainty, like a freshman who's read half a Wikipedia page and thinks he's an expert.

Example: Ask a model about a new law passed yesterday, and it might still cite a two-year-old version — in a tone that leaves no room for doubt.

It’s like talking to your one uncle at the family dinner who "knows everything" but hasn’t updated his facts since 1998.

2. Pretends to Think, Often Gets It Wrong

AI outputs feel like thoughtful reasoning — but much of it is pattern-matching without understanding.
It strings words together based on probability, not logic.

When pressed for real reasoning?
It hallucinates.
Confidently.

Illustration:
A user once asked an AI to explain a physics paradox.
It generated an elaborate — but completely fictitious — theory, citing nonexistent professors from nonexistent universities.
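The "probability, not logic" point can be sketched in a few lines of Python. The tiny table below is invented purely for illustration; real models learn billions of such statistics, but the selection step works the same way: pick what is statistically likely, not what is true.

```python
# Toy sketch of how a language model chooses its next word: it consults
# probabilities learned from text statistics. There is no reasoning step,
# only likelihood. (This probability table is made up for illustration.)
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "theorized": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
}

def next_word(context, probs):
    """Return the most likely continuation -- fluent, but not reasoned."""
    options = probs[context]
    return max(options, key=options.get)

print(next_word(("the", "cat"), next_word_probs))  # -> sat
```

Nothing in that loop checks whether "sat" is correct, consistent, or even possible; it only checks whether it is probable. That gap is where hallucinations live.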


3. Says Sorry, But Ignores Instructions

Today's AI models are programmed to apologize ("I'm sorry if I made a mistake") whenever they're caught.
But a moment later, they often repeat the exact same behavior.

Memory? Context? Discipline?
Still not quite there.

Instruction-following is better today than two years ago, but even now, users often find themselves rephrasing the same command five different ways, only to be told, "I'm sorry, but…" — again.


4. Believes Everything It Read Online

AI is only as good as its training data, and much of that data comes from the internet — the same place where conspiracy theories, clickbait headlines, and outdated manuals live.

Imagine if your college professor's knowledge base included YouTube comment sections and Yahoo! Answers circa 2010.
That's what many AIs are up against.

Even worse:
AI models struggle to tell credible sources apart from garbage unless humans painstakingly teach them.
(And even then, biases sneak in.)


5. Cheaper, Faster Versions from Asia

Much like the economic history of industrial production, we're witnessing a parallel in AI:
Cheaper, faster, more aggressive competition emerging outside of Silicon Valley.

Countries across Asia — from China to Indonesia to Vietnam — are launching their own models, often optimized for speed and cost rather than depth or nuance.

Sometimes that's great for accessibility.
Sometimes it means quality suffers dramatically.

Example: Some "lite" language models produced overseas boast impressive speed, but when tested, they invent facts at even higher rates than their Western counterparts.


6. Promised a Messiah (AGI) to Fix Everything

Whenever today's AI limitations are pointed out, enthusiasts often say:
"Don't worry, AGI is coming."

AGI — Artificial General Intelligence — is the mythical "real" AI:

  • It will reason.
  • It will understand.
  • It will innovate.

In theory.

In practice?
It remains a moving goalpost, always five years away, no matter which year you're in.

We are asked to forgive today's broken tools in the name of a future perfect machine.
But maybe — just maybe — the real issue isn't waiting for a messiah.
Maybe the issue is that we built AI to look too much like ourselves: rushing, overpromising, and hoping the next version will finally be the one that saves us.


Other Points You Shouldn't Miss:

  • Echo Chamber Effect: Models trained on AI-generated content tend to amplify whatever biases are already present, each generation more than the last.
  • Legal and Ethical Blind Spots: Current models are largely clueless about the nuance of real-world law, ethics, and cultural values.
  • Attention over Accuracy: In many corporate races, making the AI sound impressive has taken precedence over making it trustworthy.
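The echo-chamber effect can be made concrete with a toy simulation. The self-reinforcing update rule below is an assumption chosen for illustration, not a formula from any real training pipeline: each generation, the "model" over-samples whichever view already dominates its training data.

```python
def retrain_on_own_output(p, generations):
    """Simulate a model repeatedly retraining on its own output.

    p is the share of the dominant view in the training data. The
    (illustrative, assumed) update p -> p^2 / (p^2 + (1 - p)^2) models
    self-reinforcement: over-represented content gets over-sampled.
    """
    history = [p]
    for _ in range(generations):
        p = p**2 / (p**2 + (1 - p)**2)
        history.append(p)
    return history

# A mild 55/45 skew in generation zero...
history = retrain_on_own_output(0.55, 5)
print([round(p, 3) for p in history])
# ...snowballs toward near-total dominance within a handful of generations.
```

The exact update rule doesn't matter much; any feedback loop that favors what is already common produces the same trajectory, which is why training AIs on AI output is so risky.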

Final Illustration

Imagine you build a robot based on the average human internet user.
It speaks fast, it sounds smart, it references a lot of content...
but it's:

  • 30% confident nonsense,
  • 20% outdated memes,
  • 10% fake quotes,
  • and a desperate 40% of polite apologies and "I'll improve soon."

Now imagine that robot running your company.
Or answering your kids' questions.
Or advising your government.

That's where we are.


The New AI: Less a Savior, More a Mirror

If we were hoping for something better than ourselves,
what we got instead
was a remarkably accurate reflection — flaws, overconfidence, blind spots, and all.

The future of AI isn't just about building smarter machines.
It's about becoming smarter humans first.
