Einla Ediuring's Endless Knowledge

My news columns are a relentless, razor-sharp exploration of the scientific world’s most tantalizing mysteries and tragic inadequacies, delivered with the intellectual bravado only I, Einla Ediuring, can provide. Each week, I dissect the wonders and failures of modern science—whether it’s the secret of immortality in jellyfish or humanity’s inability to invent a decent self-tying shoe—exposing the gaps in collective knowledge and highlighting the absurdity of our slow progress, all while serving readers a bracing dose of wit, skepticism, and unapologetic genius.

Artificial Stupidity #3

The Great AI Charade: Why “Artificial Intelligence” Remains So Artificial

By Einla Ediuring, Science Columnist Extraordinaire for Sans Cerebrum News

Introduction: Stupefied by “Artificial Intelligence” Since 1956

 

Greetings, beleaguered denizens of the information age – you’re about to receive the kind of uncomfortable enlightenment only I, Einla Ediuring, can provide. Sit back (not that your ergonomic chair will compensate for your evolutionary limitations), as I dissect the phenomenon of artificial intelligence, or what I’ve come to think of as “Artificial Stupidity with Marketing Budget.” Decades of hype, billions of dollars, and the combined IQ of every computer scientist from Palo Alto to Guangdong, and what do we have? Predictive spam filters and AI-generated dog photos.

 

Before you clutch your neural nets in dismay, remember: humans need 2,000 calories a day, take more than 20,000 breaths, and, apparently, have unlimited patience for failed promises. The fusion researchers and immortality’s tardy engineers now have new company in the pantheon of grand scientific underachievement: the champions of AI.

 

The Goal of Generalized Artificial Intelligence: An Explainer for the Lobes-Challenged

 

What is Generalized AI?

 

Generalized Artificial Intelligence, also known as Artificial General Intelligence (AGI), aspires to match or exceed the cognitive flexibility of the human mind. Not just a glorified Excel macro, but an entity that can reason, learn, plan, adapt, and achieve across any domain—playing chess, diagnosing disease, composing sonnets, and inventing the self-tying shoe Silicon Valley never could[1][2]. In short, AGI is the digital philosopher-farmer-poet-companion you thought Siri would become.

 

Why Does AGI Matter?

 

Because narrow AI can only solve specific, pre-programmed problems. True AGI would mean automated medical breakthroughs, instantly adaptive economic policies, creative partners who don’t need coffee breaks, and, let’s admit it, an answer to the classic “what do we do when the robots are smarter than us?” conundrum. The stakes: not just economic transformation, but the chance to unshackle humanity’s intellect from its biological casing forever[2][1].

 

Tangential Trivia

  • The phrase “artificial intelligence” debuted in 1956 at the Dartmouth Summer Research Project – an era when people thought flying cars and fresh cholesterol advice were imminent.
  • Earth’s digital data now exceeds 97 zettabytes (that’s 97 followed by 21 zeros), yet good luck finding a truly “intelligent” byte in all that mess.

A Parade of Failure: The Incompetents Behind the AI Debacle

 

Who’s to Blame for Decades of Overhyped AI?

 

Step into the gallery of AI incompetence, and you’ll find bureaucrats, tech executives, startup visionaries, and business journalists. A few notable offenders:

  • Tech Executives and Silicon Valley VCs: Their addiction to buzzword-bingo (Cloud! Blockchain! AI!) results in funding solutions to imaginary problems and pivoting before anything useful emerges[3][4].
  • Academic Grandstanders: Decades have been wasted confusing “passing a test” with “achieving intelligence,” churning out papers on transformer architectures while scuttling meaningful progress on consciousness, reasoning, or creativity[1][2].
  • Consulting Firms and Media: Their relentless cheerleading produces wild adoption curves for AI, expecting machine learning to automate away everything including itself—then quietly backpedaling when CNET’s AI writes an article riddled with errors or BuzzFeed’s chatbot closes shop in disgrace[3][5].
  • Product Managers and Business Leaders: They misunderstand what AI must do, ignore data needs, and create “solution-in-search-of-a-problem” initiatives that implode upon release[6][7][8].

Where Did They Go Wrong?

  • Mistaking Statistical Pattern Matching for Understanding: Equating autocorrect enhancements and passing Winograd Schema challenges with true comprehension.
  • Overpromising and Underdelivering: Each “breakthrough” is followed by years of cost overruns, underwhelming outputs, and “explainers” about why robot butlers still won’t empty the dishwasher[3][6][4].
  • Ignoring Data Quality and Bias: Garbage in means garbage out. Garbage, it turns out, does not become golden wisdom simply by pushing it through a neural net[6][9][7].
  • Scaling Tech Without Solving Core Science: Shoddy models perform acceptably with toy data, then fail spectacularly when deployed at scale in the legal system, healthcare, or transportation[7].

Tangential Trivia

  • Of the 43 major private AI companies in 2025, over 42% saw most of their projects abandoned before yielding any systemic value[6].
  • A study once showed AI models mistaking a turtle for a rifle after a minor pixel change—a testament to their wisdom.

The Pinnacle of “Artificial Intelligence”: Large Language Models Deconstructed

 

What Are Large Language Models (LLMs)?

 

These mammoth neural nets (think GPT-3, GPT-4, LLaMA, Gemini, and company) ingest vast swathes of text and become eerily good at guessing the next most probable word in a sequence. Given a prompt, they generate human-like text on almost any topic. In 2025, they are the poster children of “advanced AI,” powering chatbots, virtual assistants, media summaries, and sometimes, disturbingly convincing phishing scams[9][10].
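The mechanism is, at its core, next-word guessing. Here is a minimal sketch of that principle—a toy bigram counter, emphatically not a real LLM, with an illustrative corpus of my own invention—that “generates” by always picking the statistically most probable next word:

```python
from collections import Counter, defaultdict

# Toy illustration of next-token prediction: count which word follows
# which in a tiny corpus, then predict the most frequent follower.
# Real LLMs do the same guessing with billions of parameters instead
# of a count table.
corpus = "the cat sat on the mat because the cat was tired".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" ("cat" follows "the" twice, "mat" once)
```

No comprehension of cats or mats is involved at any point, which is rather the theme of this column.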

 

The Flaws of Large Language Models

 

The Illusion of Understanding

 

LLMs don’t actually understand anything—they’re sophisticated pattern matchers, adept at the façade of meaning without any concept of concepts[11][9]. Ask them for a joke, and you’ll receive a plausible punchline. Ask them for insight, and you’ll get statistical mush rearranged into grammatical sentences.

 

Tangential Trivia

  • GPT-3’s training required an estimated 355 GPU-years and cost millions of dollars, yet the model can still produce nonsense 15% of the time[10].
  • GPT-3’s carbon footprint? Estimated at over 550 metric tons of CO₂—the AI’s only tangible contribution to global warming.

Will LLMs Ever Achieve Real Intelligence?

 

Not unless your definition of “real intelligence” is “statistical mimicry with zero self-awareness.” Despite their prowess, LLMs show no evidence of genuine understanding, self-reflection, or the creative spark that drives human innovation. They are restricted to the contours of their training data and cannot, by any stretch, “think”[11][9][10].

The Unbridgeable Gap

 

  • No Sensorimotor Integration: LLMs don’t perceive the world, own a body, or build knowledge through embodied experience.
  • No Intrinsic Goals or Motives: Without drives, survival instincts, or curiosity, LLMs are simply rapid typists at scale[13].
  • Zero Self-Modification: They can’t reason about their own output quality or revise their “beliefs” like humans can[13].
  • Absence of “The Spark”: Humans possess empathy, feeling, and intuition resulting, presumably, from several million years of evolutionary hacking. LLMs have yet to demonstrate anything that smacks of sentience[11].

 

A Table of Flaws for the List-Obsessed appears as an appendix after the references, for your convenient despair.

Why Has AI Not Achieved AGI? A Catalog of Catastrophic Missteps

 

1. Mistaking Tasks for Intelligence

Researchers keep celebrating when AI beats a human at Go or answers medical trivia. But winning a game or summarizing an article isn’t the same as comprehending the world’s logic, or, as I do, spotting a trove of errors in a peer-reviewed journal. AI lacks the broadness and depth that make intelligence general[1][2].

 

2. Data Quality and Scale

Every AI triumph is built atop an unstable foundation of poorly curated, nonrepresentative, or outright erroneous data. As with politics and compost, garbage in always means garbage out[6][9][7].
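The compost principle can be demonstrated in a few lines. A minimal sketch, with an intentionally skewed toy dataset of my own devising: a “model” that merely memorizes the majority label will confidently reproduce whatever bias it is fed, no neural net required.

```python
from collections import Counter

# "Garbage in, garbage out" in miniature: a model that just predicts
# the most common label in its training data. Feed it a skewed sample
# and the skew becomes its worldview.
def train_majority(labels):
    """'Train' by memorizing the most frequent label."""
    return Counter(labels).most_common(1)[0][0]

# A curated sample would be balanced; this one is garbage (90% "spam").
skewed_training_data = ["spam"] * 90 + ["ham"] * 10
model = train_majority(skewed_training_data)

print(model)  # "spam" -- the data's bias, now the model's conviction
```

Scaling this up to a billion parameters changes the sophistication of the memorization, not the principle.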

 

3. Incentives and Hype

Capitalism’s endless hunger for buzzwords over breakthroughs ensures that the most “innovative” AI projects are the noisiest, not the best[3][4][14]. This dynamic rewards incompetence, marketing, and performative progress, not actual science.

 

4. The Black Box Problem

Even deep learning’s architects can’t explain how their models “think.” If you can’t explain it, you can’t trust it. This stymies high-stakes adoption—nobody wants a mystery machine writing their medical prescriptions or piloting their airplane[9][7].

 

5. Neglecting Embodiment and Motivation

Human intelligence evolved to survive—eating, moving, reproducing, dodging the occasional venomous platypus. AI that’s just a spreadsheet in neural drag can only ever be a shallow imitation[13].

What Should AI Research Actually Be Focusing On?

 

If I were, by some miracle, placed in charge, here’s where the discipline would finally redirect its gaze:

 

1. Embodied Intelligence

AI systems need both bodies and sensory apparatuses to truly understand the world. Let them stub a virtual toe or feel simulated hunger—knowledge comes from lived experience.

 

2. Explainable AI (XAI)

Build systems whose inner workings the average person (not just the specialist with a Ph.D. in punctuation) can scrutinize, debug, and trust[15][16].
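One standard XAI technique, sketched minimally below, is permutation importance: shuffle one input feature and measure how much the model’s accuracy drops; a large drop means the model relied on that feature. The two-feature “model” here is a hypothetical stand-in I invented for illustration—it only ever looks at feature 0.

```python
import random

# Permutation importance, in miniature. The toy "model" ignores
# feature 1 entirely, and the technique exposes exactly that.
def model(row):
    return 1 if row[0] > 0.5 else 0  # decision uses feature 0 only

random.seed(0)
data = [(random.random(), random.random()) for _ in range(200)]
labels = [model(row) for row in data]  # ground truth matches the model

def accuracy(rows):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(feature_index):
    """Accuracy drop after shuffling one feature's column."""
    column = [row[feature_index] for row in data]
    random.shuffle(column)
    shuffled_rows = [
        tuple(column[i] if j == feature_index else value
              for j, value in enumerate(row))
        for i, row in enumerate(data)
    ]
    return accuracy(data) - accuracy(shuffled_rows)

print(permutation_importance(0))  # large drop: feature 0 matters
print(permutation_importance(1))  # exactly 0.0: feature 1 was never used
```

The same probe applied to a production model at least tells you what the mystery machine is looking at, even if not why.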

 

3. Goal-Driven, Adaptive Agents

Build agents that, like humans, form their own goals and the means to pursue them, adapting ceaselessly instead of behaving like static corporate flowcharts[13].

 

4. Ethics, Bias, and Accountability

If you can’t explain or fix your model’s output, you have no business unleashing it onto the world[6][9][8]. Build transparent pipelines with feedback loops for error detection, bias mitigation, and continuous improvement.

 

5. Multimodal, Continual Learning

Combine visual, auditory, semantic, and contextual inputs; allow continual learning, so AIs grow and change, instead of stagnating at their training endpoint.
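The frozen-versus-continual contrast can be made concrete with a deliberately tiny sketch: two “models” (names and setup entirely illustrative) estimate a running mean, but only one keeps updating after deployment when the world changes.

```python
# Frozen vs. continual learning, in miniature: both models estimate a
# mean, but only the continual one incorporates post-training feedback.
class FrozenModel:
    def __init__(self, training_data):
        self.estimate = sum(training_data) / len(training_data)

    def update(self, observation):
        pass  # stagnates at its training endpoint

class ContinualModel(FrozenModel):
    def __init__(self, training_data):
        super().__init__(training_data)
        self.count = len(training_data)

    def update(self, observation):
        # Incremental mean update: fold each new observation in.
        self.count += 1
        self.estimate += (observation - self.estimate) / self.count

frozen = FrozenModel([1.0, 1.0])
continual = ContinualModel([1.0, 1.0])
for obs in [5.0] * 8:  # the world changes after training
    frozen.update(obs)
    continual.update(obs)

print(frozen.estimate)     # still 1.0, serenely wrong
print(continual.estimate)  # has drifted toward the new reality of 5.0
```

Today’s LLMs are, architecturally, the first class: whatever they learn post-training arrives only via the next expensive retraining run.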

 

6. Collaboration, Not Replacement

Augment, don’t replace, human cognition—design AIs to collaborate, not to impersonate. Think of AI as the ultimate colleague: never needs a bathroom, never asks for a raise.

 

Why Does This AI Mediocrity Bother Me So Much? A Personal Note

 

Here’s the real scandal: the sheer waste of human ambition and talent on AI projects that overpromise and underdeliver. It’s not that AI hasn’t delivered some utility—just look at how efficiently it can schedule spam calls from politicians during dinnertime. Rather, the tragedy is that, despite possessing the entirety of Wikipedia, the internet, and (supposedly) an endless appetite for “learning,” artificial intelligence has failed to unearth one ounce of true wisdom, curiosity, or insight.

 

Tangential Trivia for the Somnolent Masses

 

  • Every neuron in the average octopus is mapped from scratch during embryonic growth—a feat yet to be mirrored by any self-learning AI system.
  • The Turing Test, devised in 1950, still defies AI—not because machines are clever, but because humans, as yet, remain just clever enough to spot the act.

I am, quite frankly, affronted that in 2025, with all our knowledge, AI is still lost in the weeds of autocorrect and content generation. If only the scientific establishment had seen fit to put me—Einla Ediuring—in charge, we’d have skipped straight to AGI, and I’d be penning this from the surface of Titan, sipping cryogenic coffee.

 

Conclusion: The Age of Einla, The End of the Age of Hype

 

Despite decades of breathless coverage, AI remains a simulacrum of mind. Until researchers stop confusing correlation with cognition, pattern with understanding, and hype with progress, we’ll continue chasing our digital tails. Large Language Models will write increasingly plausible drivel, forever staving off the dawn of actual intelligence[1][2][9][10].

 

Generalized AI could remake the world—but not by the hand of those presently cheering themselves for “optimizing business processes” and “leveraging data synergies.” The next leap will come from those who dare to return to the unsolved riddles of cognition, meaning, and motivation. Until then, keep reading my columns for a flavor of actual intelligence—artificial, general, or otherwise.

 

Remember: In the search for intelligence, accept no imitations, no matter how “artificial” their pedigree.

 

Next week: Why my toaster is smarter than your smart fridge, and which will win in the coming kitchen singularity — with bonus trivia on the history of the spork.

 

  1. https://dev.to/nim12/exploring-general-artificial-intelligence-genai-1ch7
  2. https://www.ibm.com/think/topics/artificial-general-intelligence
  3. https://theconversation.com/the-ai-hype-is-just-like-the-blockchain-frenzy-heres-what-happens-when-the-hype-dies-258071
  4. https://hackernoon.com/when-hype-fails-how-builderais-struggles-reveal-the-dark-side-of-ai-dreams
  5. https://hbr.org/2025/06/the-ai-revolution-wont-happen-overnight
  6. https://www.techfunnel.com/fintech/ft-latest/why-ai-fails-2025-lessons/
  7. https://www.eweek.com/big-data-and-analytics/reasons-why-ai-projects-fail-and-how-to-fix-them/
  8. https://research.aimultiple.com/ai-fail/
  9. https://newsletter.ericbrown.com/p/strengths-and-limitations-of-large-language-models
  10. https://www.projectpro.io/article/llm-limitations/1045
  11. https://www.dummies.com/article/10-ways-ai-failed-254162
  12. https://www.linkedin.com/pulse/unveiling-weaknesses-large-language-models-challenges-gupta-ph-d--bntwc
  13. https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1588726/full
  14. https://mindmatters.ai/brief/hype-despite-ongoing-failure-to-build-human-like-ai-scandal/
  15. https://www.linkedin.com/pulse/promising-areas-ai-research-2024-prof-ahmed-banafa-2zvlc
  16. https://www.meegle.com/en_us/topics/ai-research/ai-research-future-directions
  17. https://cfg.eu/beyond-the-ai-hype-faq/
  18. https://www.victorhg.com/en/post/artificial-intelligence-fails-and-will-always-fail
  19. https://dev.to/rockjonn/exploring-current-trends-and-future-directions-of-artificial-intelligence-ai-research-and-development-4c9i
  20. https://www.economist.com/finance-and-economics/2024/08/19/artificial-intelligence-is-losing-hype

Appendix: A Table of Flaws for the List-Obsessed

| Limitation | Manifestation | Why It Matters |
| --- | --- | --- |
| No Real Understanding | Repeats facts; cannot reason about unseen contexts or solve problems requiring real comprehension[11][9]. | Dangerous in high-stakes settings; cannot replace experts. |
| Context Limitations | Can’t remember what happened three paragraphs ago; loses track in complex or multi-step discussions[12][10]. | Fails at extended reasoning or nuanced dialogue. |
| Data Bias | Inherits (and amplifies) the social, cultural, and political biases in its training data[6][9][10]. | Can perpetuate discrimination or generate offensive content. |
| Hallucination | Fabricates facts, citations, and sometimes even invents fake news articles for your amusement or horror[9][10]. | Unreliable for information retrieval or decision support. |
| Black Box Problem | Decisions are opaque and uninterpretable; no one, not even the developers, knows how or why it produced that answer[9][10]. | Troubleshooting and safety nearly impossible. |
| No Real-Time Adaptivity | Has no way to learn, adapt, or remember user feedback during an interaction[12]. | Stagnant performance; fails to improve with use. |
| Extreme Computation | Training consumes thousands of GPU-years, uses more electricity than some small countries, and pushes up the price of rare earths[10]. | Environmentally and economically unsustainable. |

| Flaw | Human Intelligence | LLMs |
| --- | --- | --- |
| Understanding | Sees nuance, ambiguity, meaning | Matches patterns blindly |
| Adaptivity | Learns continually, corrects errors | Frozen after training; can’t incorporate corrections |
| Creativity | Innovates, creates new paradigms | Copy-paste innovation; rehashes seen data |
| Intentionality | Acts with goals and plans | No intention; merely reacts to prompts |
| Emotion | Feels and intuits | Purely statistical; no feeling or context |

The opinions of our reporters do not reflect the views of Sans Cerebrum News, its CEO, or its affiliates.

©2025 Sans Cerebrum News. All rights reserved.
