Unpacking the hype, limitations, and the dawn of the human comeback
By Timothy Page
Researched in collaboration with the Grok.ai web-search algorithm; sources verified by both AI and a human. Also inspired by Farrell McGuire’s video “I Infiltrated a Bizarre AI Cult.”
Artificial intelligence has been sold as the ultimate game-changer—a tool that could revolutionize everything from creative work to business operations. Billions have been poured into AI development, with promises of superhuman efficiency and innovation. But as we hit mid-2025, a sobering reality is emerging: AI often falls short of these lofty expectations. Many companies are discovering that relying too heavily on AI can lead to costly missteps, prompting a wave of rehiring human workers.
This commentary explores the genuine limitations of AI—including the underlying reasons why they persist—the gap between how people perceive image-generating AI versus language models, and why businesses are pulling back from all-in AI strategies. Drawing on recent reports and data, we’ll see that while AI has its place, it’s no silver bullet.
The realities and limitations of relying on AI
At its core, AI is a powerful pattern-recognition system trained on vast datasets. It excels at tasks like summarizing text or spotting trends in data, but it struggles with nuance, context, and true understanding. One major limitation is data quality: AI systems are only as good as the information they’re fed. If the data is outdated, biased, or incomplete, the outputs can be unreliable or even harmful. [1] This happens because AI learns patterns from historical data, which often reflects real-world imperfections like societal biases or gaps in representation—essentially, “garbage in, garbage out.” [Ibid.]
For instance, in business settings, fragmented or inconsistent data makes it hard for AI to deliver accurate insights, leading to decisions that miss the mark. [2] High resource costs exacerbate this; training AI demands enormous energy, time, and computing power, stemming from the complexity of processing massive datasets, which limits accessibility and scalability for many organizations. [Ibid.]
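To make “garbage in, garbage out” concrete, here is a minimal sketch in Python. The lending scenario, the group/income features, and every number are invented for illustration; the point is only that a standard classifier trained on skewed historical decisions reproduces the skew for otherwise identical applicants.

```python
# A minimal "garbage in, garbage out" sketch. The lending scenario, the
# group/income features, and all numbers here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
income = rng.normal(50, 10, n)     # identical income distribution for both groups

# Biased historical labels: at equal income, group B was approved far less often.
p_approve = 1 / (1 + np.exp(-(income - 50) / 5)) * np.where(group == 0, 1.0, 0.4)
approved = rng.random(n) < p_approve

model = LogisticRegression().fit(np.column_stack([income, group]), approved)

# Two applicants, same income, different group: the model repeats the old skew.
print(model.predict_proba([[55, 0], [55, 1]])[:, 1])
```

Nothing in the code says “penalize group B”; the model infers that rule from the biased labels, which is exactly how historical prejudice gets laundered into seemingly objective predictions.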
Another key issue is AI’s lack of real reasoning. While models can mimic logical steps, they often fail at complex logic tasks where precision is critical, even when correct solutions exist. [3] Why? AI relies on statistical correlations rather than causal understanding or common sense, which humans develop through experience—it can’t intuitively grasp context or adapt to novel situations without explicit training. [1] This is especially problematic in high-stakes areas like healthcare or finance, where errors can have serious consequences.
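The gap between fitting patterns and grasping rules can be sketched just as briefly. The toy below is a polynomial curve-fitter, not a language model, and the setup is invented; but it shows a model matching its training data almost perfectly and then failing badly the moment an input falls outside the patterns it has seen.

```python
# A minimal sketch of pattern-fitting without understanding (toy example).
import numpy as np

rng = np.random.default_rng(2)
x_train = np.sort(rng.uniform(0, 6.28, 200))
y_train = np.sin(x_train)                    # the "world" the model gets to observe

# A flexible curve-fitter: a degree-9 polynomial fit by least squares.
model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

print(f"familiar input x=3:  true {np.sin(3):+.3f}, predicted {model(3):+.3f}")
print(f"novel input   x=12:  true {np.sin(12):+.3f}, predicted {model(12):+.1f}")
# Inside the training range the fit looks flawless; outside it, the prediction
# explodes, because the model learned the data's shape, not the rule behind it.
```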
Ethical concerns add another layer: AI can perpetuate biases from its training data, such as gender or ethnic stereotypes, which erode trust and limit its usefulness in diverse business environments. [4] These biases arise because training datasets are often sourced from the internet or historical records that embed human prejudices, and AI lacks the moral framework to self-correct. [Ibid.] Moreover, many AI models are “black boxes,” offering no clear explanation for their outputs, which stems from the opaque nature of deep learning algorithms, making it hard to build accountability or trust. [1]
AI also falls short in creativity; it remixes existing patterns but can’t innovate from emotion or intent, as it operates without genuine inspiration or moral reasoning. [Ibid.] Vulnerability to attacks is another flaw—small input changes can fool AI due to its pattern-based sensitivity (sketched in the example below). [Ibid.] Recent surveys highlight how these limitations play out in practice. A McKinsey report from early 2025 found that while 71% of organizations use generative AI, only a fraction see mature, organization-wide impact—most are still experimenting with pilots that don’t scale. [5] The U.S. federal government introduced 59 AI-related regulations in 2024 alone, up from previous years, reflecting growing awareness of these risks. [6] Overall, reliance on AI without addressing these flaws can lead to wasted resources and suboptimal outcomes.
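The fragility mentioned above can be demonstrated in a few lines. The sketch below perturbs a toy linear classifier in the style of the classic “fast gradient sign” attack; real attacks target deep networks, but the mechanism, nudging every input feature slightly in the direction the model is most sensitive to, is the same. The dataset and numbers are invented.

```python
# A minimal sketch of adversarial fragility on a toy linear model (invented data).
import numpy as np

rng = np.random.default_rng(1)
d = 100
# Two overlapping Gaussian blobs stand in for high-dimensional inputs (e.g., pixels).
X = np.vstack([rng.normal(-0.2, 1.0, (200, d)), rng.normal(0.2, 1.0, (200, d))])
y = np.array([0] * 200 + [1] * 200)

# Train a plain logistic regression by gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

score = lambda v: 1 / (1 + np.exp(-(v @ w + b)))
x = X[:200].mean(axis=0)         # a typical class-0 input, confidently classified
x_adv = x + 0.25 * np.sign(w)    # tiny per-feature nudge along the model's gradient
print(f"clean score: {score(x):.3f}, perturbed score: {score(x_adv):.3f}")
# The nudge is small next to the data's spread (std 1.0), yet the label flips.
```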
The disparity between image-generation AI and language models
People often lump all AI together, but there’s a clear divide in how image-generating AI (like tools that create pictures from text prompts) and language models (like those powering chatbots) are perceived and what they actually deliver. Image-gen AI is frequently viewed as a creative powerhouse—think of it as a digital artist that can whip up stunning visuals in seconds. However, users expect it to handle details like accurate text within images or realistic proportions, and it often falls short, leading to frustration. [7]
Why these shortcomings? Image models are trained on vast image datasets but struggle with fine-grained rendering due to technical constraints in processing complex elements like text or anatomy, often producing artifacts from incomplete learning of real-world variations. [8] These models can produce biased or stereotypical outputs, such as favoring certain ethnicities in generated faces, which highlights their limitations in representing the real world fairly. [9] This bias occurs because training data draws from existing images that may underrepresent diverse groups. [10]
Language models, on the other hand, are seen more as intellectual assistants—expected to provide factual answers, write reports, or generate code snippets. Yet they “hallucinate,” inventing details that sound plausible but are wrong, because they’re designed for pattern-matching rather than true comprehension. [11] This stems from their training on massive text corpora, where they learn to predict sequences but not to verify facts, leading to outputs based on probabilities rather than knowledge. [Ibid.]
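A toy model makes the mechanism visible. The sketch below trains a bigram (word-pair) generator on three invented sentences; like a language model scaled down to absurdity, it knows only which word tends to follow which, so it can fluently assert something no source in its training data ever said.

```python
# A minimal sketch of fluent fabrication via next-token prediction (toy bigram model).
import random
from collections import defaultdict

corpus = ("the study found the drug was effective . "
          "the study found the diet was harmful . "
          "the trial found the vaccine was safe .").split()

follows = defaultdict(list)              # word -> list of words seen after it
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(3)
word, out = "the", ["the"]
while word != "." and len(out) < 12:
    word = random.choice(follows[word])  # sample the next word by frequency alone
    out.append(word)
print(" ".join(out))
# A possible output: "the trial found the drug was harmful ." -- grammatical,
# plausible, and asserted by nothing in the training data.
```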
A recent study noted that language models favor AI-generated content over human-written text, potentially creating echo chambers of biased information. [12] While image AI’s flaws are visual and immediate (like a deformed hand in a picture), language models’ errors can be subtler and more insidious, spreading misinformation in business reports or customer interactions. Security risks also differ: language models may expose sensitive data if prompted cleverly, due to their open-ended nature. [10]
This disparity fuels mismatched expectations: image-gen is hyped for its “wow” factor but criticized for inaccuracies, while language models are trusted for knowledge but disappoint on reliability. Generative AI as a whole—including both—encompasses images, videos, and music, but language-focused models dominate business use due to their text-based nature. [Ibid.] In essence, image AI feels more artistic but limited in precision, whereas language models promise intelligence but deliver sophisticated guesswork, often due to outdated training data or an inability to scale without massive resources. [Ibid.]
Businesses wake up: AI isn’t the answer, and humans are coming back
The corporate world jumped on the AI bandwagon, but many are now hitting the brakes. A staggering 95% of generative AI pilot programs at companies are failing to deliver rapid revenue growth or measurable impact, according to a recent MIT report based on interviews with 150 leaders and analysis of 300 deployments. [13]
Why such high failure rates? Key reasons include unclear business objectives, where AI is deployed without defined goals, leading to misaligned efforts; poor data quality causing issues like overfitting or bias; lack of team collaboration, isolating technical and business sides; and talent shortages, as building skilled AI teams is costly. [14] This echoes broader trends: over 80% of AI projects fail outright, double the rate of non-AI IT initiatives, per a 2024 RAND study. [15] Gartner predicts that 30% of generative AI projects will be abandoned by the end of 2025 after proof-of-concept stages, often due to poor data quality or high costs. [16]
S&P Global’s 2025 data is even more telling: 42% of businesses scrapped most of their AI initiatives this year, up from just 17% in 2024, citing elevated failure rates amid rapid adoption. [17] Another report pegs failure rates at 70–98% for enterprise AI, with many stuck in pilot purgatory. [18] These numbers aren’t anomalies; they’re a pattern driven by overhype and underdelivery.
As a result, companies are rehiring humans to fill gaps AI couldn’t bridge. Swedish fintech Klarna, after replacing customer service staff with AI chatbots, backtracked in 2025 and brought back human agents to handle complex queries and rebuild trust, admitting AI had led to declining service quality. [19] Australia’s Commonwealth Bank laid off 45 call-center workers in 2025 to deploy AI voice bots, only to face backlash over poor service quality—such as increased call volumes and ineffective handling—prompting rehires and an apology for underestimating the complexities involved. [20] Duolingo, after considering replacing contractors with AI, reversed course and maintained its human hiring, recognizing AI’s limits in nuanced tasks. [21]
This trend is widespread: a 2025 analysis calls it “the year of rehiring humans after AI fails,” with firms opting for hybrid approaches where AI augments, rather than replaces, human expertise—55% of companies now regret AI layoffs. [Ibid.] McKinsey notes that while AI use jumped, companies are now focusing on retraining staff and redesigning workflows to integrate humans effectively. [5]
A look at a balanced path forward
AI isn’t going away—it’s too embedded in our tools and processes. But the past year has shown that blind reliance ignores its core limitations: from data dependencies and biases to high failure rates in real-world applications. The perceptual gap between image generation’s creative allure and language models’ factual facade only amplifies disappointment. And with the large majority of AI pilots falling short of their goals, businesses are wisely pivoting back to human strengths like empathy, adaptability, and critical thinking.
The lesson? Treat AI as a collaborator, not a replacement. By doing so, we can harness its potential without the pitfalls. As reports from MIT, RAND, and others underscore, sustainable success comes from realistic expectations and human-AI partnerships.
References
1. Top 10 Limitations of AI & Why They Matter in 2025 – VisionX – https://visionx.io/blog/limitations-of-ai/
2. The Surprising Reason Most AI Projects Fail – And How to Avoid It at Your Enterprise – Informatica – https://www.informatica.com/blogs/the-surprising-reason-most-ai-projects-fail-and-how-to-avoid-it-at-your-enterprise.html
3. The 2025 AI Index Report – Stanford HAI – https://hai.stanford.edu/ai-index/2025-ai-index-report
4. 5 Major Challenges of AI in 2025 and Practical Solutions to … – Workhuman – https://www.workhuman.com/blog/challenges-of-ai/
5. The state of AI: How organizations are rewiring to capture value – McKinsey – https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
6. Summary of Artificial Intelligence 2025 Legislation – NCSL – https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation
7. Navigating the Nuances and Pitfalls of AI Image Generation – LinkedIn – https://www.linkedin.com/pulse/navigating-nuances-pitfalls-ai-image-generation-mark-jones-93l6e
8. Why do most AI image-generation models have such a … – Reddit – https://www.reddit.com/r/artificial/comments/x2xpi8/why_do_most_ai_imagegeneration_models_have_such_a/
9. Generative Artificial Intelligence Biases, Limitations and Risks in … – ScienceDirect – https://www.sciencedirect.com/science/article/pii/S0001299824000461
10. Large Language Models (LLMs) vs. Generative AI: What’s the Difference? – Coursera – https://www.coursera.org/articles/llm-vs-generative-ai
11. When AI Gets It Wrong: Addressing AI Hallucinations and Bias – MIT Sloan EdTech – https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/
12. AI–AI bias: Large language models favor communications … – PNAS – https://www.pnas.org/doi/10.1073/pnas.2415697122
13. MIT report: 95% of generative AI pilots at companies are failing – Fortune – https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
14. AI Fail: 4 Root Causes & Real-life Examples in 2025 – AIMultiple – https://research.aimultiple.com/ai-fail/
15. The Root Causes of Failure for Artificial Intelligence Projects … – RAND – https://www.rand.org/pubs/research_reports/RRA2680-1.html
16. Why 75% of AI Projects Fail to Deliver ROI—and How Enterprises … – LinkedIn – https://www.linkedin.com/pulse/why-75-ai-projects-fail-deliver-roiand-how-can-turn-things-minett-jssac
17. AI project failure rates are on the rise: report – CIO Dive – https://www.ciodive.com/news/AI-project-fail-data-SPGlobal/742590/
18. AI Failure Statistics – RheoData – https://rheodata.com/ai-failures-stats/
19. Companies backtrack after going all in on AI – Information Age (ACS) – https://ia.acs.org.au/article/2025/companies-backtrack-after-going-all-in-on-ai.html
20. Now that’s an embarassing U-turn – bank forced to rehire human workers after their AI replacement fails to perform – TechRadar – https://www.techradar.com/pro/now-thats-an-embarassing-u-turn-bank-forced-to-rehire-human-workers-after-their-ai-replacement-fail-to-perform
21. 2025: the year of rehiring humans after AI fails – TechFinitive – https://www.techfinitive.com/2025-the-year-of-rehiring-humans-after-ai-fails/
Comments
This is good news.
“AI isn’t going away—it’s too embedded in our tools and processes.” It is also embedded inside our bodies and attached to our bodies. The wearables that monitor your stats and administer doses are all downloaded and monitored. If they want to take you out, and not to lunch, all they have to do is enter the right code. There is also the self-assembling nanotechnology, injected or ingested. When they began chipping your pets, that was just the start. Now it is us – as in taking the mark and being unable to buy or sell without it (digital currency, digital healthcare = digital, transformed, malleable, trackable, controllable humans). I remember a popular song from 1993 by the band Porno for Pyros called “Pets” – the verse: “My friend says we’re like the dinosaurs – Only we are doing ourselves in – Much faster than they ever did – We’ll make great pets.” How prophetic, looking at what is going on now.
If you think Yuval Harari is a demon – look into Peter Thiel (his name does spell out The Reptile) and Alex Karp of Palantir. Find out how many government contracts they hold and listen to what these guys say. Peter Thiel has a lecture series coming up in San Francisco (The Christian Post, August 27, 2025): “Scheduled for Sept. 15, Sept. 22, Sept. 29 and Oct. 6, Thiel’s lecture will explore the theological and technological dimensions of the Antichrist, drawing upon religious thinkers such as French philosopher René Girard, who Thiel studied under at Stanford University, Francis Bacon, Jonathan Swift, Carl Schmitt and John Henry Newman. Each lecture — which will not be transcribed or shared with the public — is designed to form a ‘cohesive series,’ and admission is only offered to the entire lecture series, rather than individual dates.” Ear ticklers and twisted tongues are going to tell us about the antichrist… sure; they probably know that beast personally and professionally.
The psychological and physiological warfare unleashed by the psychos with the biggest bank accounts should send chills down many spines. Moreover, they know exactly where, who, when and how those chills occur, to verify their plan is working. How do I get a copy of my Minority Report, or is it classified? Interesting how these freaks of misery are building our “book of life” through their intrusive and possessive AI – playing God? History shows that game doesn’t end well.
Nothing is more static than government regulation. Don’t fall into that trap. What’s worse: an “intrusive and possessive AI – playing God,” or an intrusive and possessive government – playing God?
In the final analysis, we must trust our own judgement based on the information (rational and intuitive) we have available to us at any given time.
Again, “The great virtue of a free market system … is [that it is] the most effective system we have discovered to enable people … to deal with one another and help one another.” ― Milton Friedman
The problem is that people DON’T help each other, despite the effective system. Our social and moral failure is clear. This is what happens when people secularize a society. https://vermontdailychronicle.com/page-from-reformation-to-redistribution/
Timothy: That people don’t help each other is not a deficiency of free markets. It’s a deficiency of those people’s character and experience, those who don’t realize that ‘what goes around, comes around’. Not helping, cheating, and fraud are regressive human conditions. They are, ultimately, doomed to fail, because those who have been cheated, defrauded, or not helped will stop doing business with the ne’er-do-wells. And morality can’t be legislated, no matter how benevolent a government oligarchy claims to be.
“The key insight of Adam Smith’s Wealth of Nations is misleadingly simple: if an exchange between two parties is voluntary, it will not take place unless both believe they will benefit from it. Most economic fallacies derive from the neglect of this simple insight, from the tendency to assume that there is a fixed pie, that one party can gain only at the expense of another.” ― Milton Friedman
What free market? The “free market” you speak of is nostalgia – a by-gone era. How does main street compete with Amazon or Walmart? Why were Home Depot and Walmart open during Convid-19 while all other “free market” businesses were ordered to shut down? There is no free market – it is all leveraged by big behemoths known as BlackRock, State Street and Vanguard. Do some digging and find out who actually owns what – who controls what – start with the USA debt of $37+ trillion – who is holding our bundled debt notes and what is the collateral? We are, and that is not cool, but it is a fact. No one can make America great again because we don’t own it anymore. We’re all renters if the Truth be known.
Indeed, Melissa. But what alternative are you proposing?
In one sense the ‘free market’ is nothing more than a state of mind. Even when a government – an institution that even its founders realized was inherently prone to corruption by ‘factions’ – is co-opted by ‘behemoths’, the basic concepts of individual liberty and freedom still exist, not only in, and protected by, that government’s by-laws (i.e., the U.S. Constitution), but in our frame of mind.
I’m reminded of the ‘Freedom Speech’ in the movie Braveheart.
“I see a whole army of my countrymen here in defiance of tyranny. You have come to fight as free men, and free men you are. What will you do without freedom? Will you fight?”
“Fight, against that? No! We will run, and we will live.”
“Aye. Fight, and you may die. Run, and you’ll live, at least a while. Dying in your beds many years from now, would you be willing to trade all the days, from this day to that, for one chance, just one chance, to come back here and tell our enemies that they may take our lives, but they’ll never take our freedom?”
Your remark, Melissa, gives me hope. Some continue to poo-poo the precepts of Aristotle, Adam Smith, Milton Friedman and many others – those who understood that free enterprise was, and remains to this day, the fairest, most productive and reasonable form of governance humankind has ever discovered.
But you, on the other hand, still get it. No matter what happens, they can never take your freedom of mind.