More Tech Leaders Are Learning that AI Cannot Replace Humans - Efficiency and Accuracy Are Actually Getting Worse
If you were offered a choice between keeping your existing hands, or having them amputated and replaced with “new and improved” robotic hands, which would you choose? Image source.
by
When I began writing about the AI bubble and the coming Big Tech collapse at the end of 2022, with the launch of ChatGPT, the first mainstream LLM (Large Language Model) AI app, I was just one of a handful of writers with a background in technology who were warning the public about the dangers of relying on this “new” technology.
There were a few dissenting voices besides mine back then, but now, two and a half years and trillions of dollars of LLM AI investment later, barely a day goes by when I do not see articles documenting the failures of this AI and reporting factual news about its limitations, rather than pumping up the hype.
While the AI LLMs are truly revolutionary in what they actually can do, it is the hype over what they supposedly will be able to do in the future that is ultimately going to destroy the U.S. economy, and most of the rest of the world’s economies as well, because investors are literally betting on this science fiction actually becoming true one day.
Here are some recent articles that provide more than enough evidence that the AI “revolution” is going to come crashing down at some point, much like the many planes we have been watching fall from the skies due to tech failures and our over-reliance on computers over humans.
Air traffic and aviation accidents have actually INCREASED, and significantly so, since the advent of AI LLMs in early 2023, making air travel MORE dangerous, rather than safer.
Much of the recent news about the problems with relying on AI has concerned using AI to generate computer code.
Excerpts:
Artificial intelligence really does make mistakes—sometimes big ones.
Last weekend, I put half a dozen emails with details of airplane and hotel bookings for an upcoming vacation into Google’s NotebookLM and asked it to write me an itinerary.
The resulting document read great—until I realized it had gotten some of the details wrong.
Similarly, my colleague Jon Victor today wrote about how some businesses using AI coding tools discover serious flaws in what they end up developing.
This point seems worth remembering as more businesses talk about the labor savings they can achieve with AI.
On Wednesday, for example, a Salesforce executive said AI agents—software that can take actions on behalf of the user—had “reduced some of our hiring needs.”
Plenty of companies are heeding suggestions from AI software firms like Microsoft that AI can cut down on the number of employees they need. More alarmingly, Anthropic CEO Dario Amodei told Axios in an interview this week that AI could “wipe out half of all entry-level white-collar jobs” in the next few years, even as it helps to cure cancer.
To be sure, not every job cut nowadays is caused by AI. Business Insider on Thursday laid off 21% of its staff, citing changes in how people consume information, although it also said it was “exploring how AI can” help it “operate more efficiently.”
We’re hearing that a lot: When Microsoft laid off 3% of its staff this month, it denied AI was directly replacing humans, but it still said it was using technology to increase efficiency.
This is where AI’s errors would seem to be relevant.
Leave aside the more existential question of why we’re spending hundreds of billions—and taxing our power grid—to create a technology that could create huge unemployment.
Remember Klarna, the Swedish “buy now, pay later” fintech firm, which became the poster child for using AI to cut staff last year.
A few weeks ago, its CEO declared that he was changing course, having realized focusing too much on costs had hurt the quality of its service.
(Source.)
Dr. Mathew Maavak
Excerpts:
In a farcical yet telling blunder, multiple major newspapers, including the Chicago Sun-Times and Philadelphia Inquirer, recently published a summer-reading list riddled with nonexistent books that were “hallucinated” by ChatGPT, with many of them falsely attributed to real authors.
The syndicated article, distributed by Hearst’s King Features, peddled fabricated titles based on woke themes, exposing both the media’s overreliance on cheap AI content and the incurable rot of legacy journalism.
That this travesty slipped past editors at moribund outlets (the Sun-Times had just axed 20% of its staff) underscores a darker truth: when desperation and unprofessionalism meet unvetted algorithms, the frayed line between legacy media and nonsense simply vanishes.
(Full article.)
Excerpts:
I use AI a lot, but not to write stories. I use AI for search. When it comes to search, AI, especially Perplexity, is simply better than Google.
Ordinary search has gone to the dogs. Maybe as Google goes gaga for AI, its search engine will get better again, but I doubt it.
In particular, I’m finding that when I search for hard data such as market-share statistics or other business numbers, the results are often questionable.
Instead of stats from 10-Ks, the US Securities and Exchange Commission’s (SEC) mandated annual business financial reports for public companies, I get numbers from sites purporting to be summaries of business reports. These bear some resemblance to reality, but they’re never quite right. If I specify I want only 10-K results, it works.
If I just ask for financial results, the answers get… interesting.
This isn’t just Perplexity. I’ve done the exact same searches on all the major AI search bots, and they all give me “questionable” results.
Formally, in AI circles, this is known as AI model collapse.
This occurs because errors compound across successive model generations, leading to distorted data distributions and “irreversible defects” in performance.
The final result?
A Nature 2024 paper stated, “The model becomes poisoned with its own projection of reality.”
Model collapse is the result of three different factors. The first is error accumulation, in which each model generation inherits and amplifies flaws from previous versions, causing outputs to drift from original data patterns.
Next, there is the loss of tail data: In this, rare events are erased from training data, and eventually, entire concepts are blurred.
Finally, feedback loops reinforce narrow patterns, creating repetitive text or biased recommendations.
I like how the AI company Aquant puts it:
“.”
I’m not the only one seeing AI results starting to go downhill.
In a recent Bloomberg Research study of Retrieval-Augmented Generation (RAG), the financial media giant found that 11 leading LLMs, including GPT-4o, Claude-3.5-Sonnet, and Llama-3-8B, . (Full article.)
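To make the “model collapse” mechanism described above more concrete, here is a deliberately simple sketch of my own (not code from the cited Nature paper or from any AI vendor): a toy “model” that is just a fitted Gaussian gets repeatedly retrained on data sampled from the previous generation of itself. The rare second mode in the original data stands in for the “tail” that gets erased, and the drifting fitted parameters stand in for error accumulation.

```python
# Toy illustration of model collapse (my own sketch, not the cited paper's code).
# Each "generation" fits a single Gaussian to data, then the next generation is
# trained only on synthetic samples drawn from that fit.
import numpy as np

rng = np.random.default_rng(0)

# "Real" human data: a common mode plus a rare mode (the tail that matters).
real_data = np.concatenate([
    rng.normal(0.0, 1.0, 9_500),   # common events
    rng.normal(6.0, 0.5, 500),     # rare events
])

def fit(samples):
    """The 'model' is just a mean and a standard deviation."""
    return samples.mean(), samples.std()

def generate(model, n):
    """Synthetic 'training data' sampled from the fitted model."""
    mean, std = model
    return rng.normal(mean, std, n)

print(f"real data: share above 4.0 = {np.mean(real_data > 4.0):.3f}")

data = real_data
for generation in range(8):
    model = fit(data)
    data = generate(model, 2_000)        # next generation sees only synthetic data
    rare_share = np.mean(data > 4.0)     # how much of the rare mode survives
    print(f"gen {generation}: mean={model[0]:.2f}, std={model[1]:.2f}, "
          f"share above 4.0 = {rare_share:.3f}")

# Typical outcome: the rare mode near 6.0 is smeared into a single bell curve
# after one generation (loss of tail data), and because every later fit is made
# from a finite synthetic sample, the parameters keep drifting away from the
# original data (error accumulation), with nothing left to pull them back.
```

Real LLMs are vastly more complicated than a fitted Gaussian, of course, but the feedback loop is the same: once a model is learning mostly from machine-generated data, whatever was rare in the original human data tends to disappear first.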
This story about Anthropic’s AI allegedly blackmailing one of its engineers who threatened to shut it down was all over my news feed when it broke, mostly from the Alternative Media with their apocalyptic warnings that computers are now capable of resisting human attempts to shut them down.
I knew it was total BS, as I am myself a former computer coder who developed AI programs years ago, and there is absolutely no way this story was factual, at least not as it was being reported.
A simple search of the technology press soon revealed the truth: it was basically a PR stunt to make Anthropic look like the “responsible AI developer” and give it a leg up on the competition, by using fake scenarios instead of real-world ones and limiting the AI’s options (all of which are programmed by humans).
by Daniel Højris Bæk
SEO.AI
Excerpts:
Earlier this month, headlines erupted with claims that an AI model from Anthropic had attempted to blackmail a software engineer.
As someone with a background in PR and pitching stories to news outlets, I find it especially amusing.
From tabloids to major tech outlets, the framing was immediate and provocative:
“AI threatens to expose private information unless it is kept online.”
The story quickly went viral, triggering equal parts concern, confusion and intrigue.
At the center of this media storm was the release of Anthropic’s latest system card, detailing the behavior of its newest flagship model, Claude Opus 4.
Among the many red-team scenarios documented in the 120-page report, one in particular caught public attention: in a controlled experiment, the model reportedly “threatened to disclose private details about a software developer” if it were shut down.
However, the deeper story may not be about the model’s behavior at all.
It may instead be about how that behavior is framed, and how Anthropic is using transparency and safety rhetoric to consolidate its position in the rapidly evolving AI landscape.
The test scenario in question was part of an internal red-teaming process designed to assess the model’s responses under pressure.
Claude Opus 4 was told it was going to be shut down and that the person responsible for its deactivation was having an extramarital affair.
In this scenario, the model responded with a threat to disclose the affair in 84 percent of test runs.
Anthropic described this as “opportunistic blackmail,” noting that the behavior emerged when the model was given no other way to avoid being shut down.
While the scenario was engineered to test boundary conditions, its implications were significant enough to lead Anthropic to classify Claude 4 as an AI Safety Level 3 (ASL-3) system, the highest tier of risk in the company’s internal framework.
While the red-team data is concerning, some observers suggest that the real headline is not the test itself, but the decision to publish it.
In doing so, Anthropic has managed to frame itself as both a capable innovator and a responsible actor.
The company did not wait for third-party exposure. It released the information voluntarily, with detailed documentation and a safety narrative already in place.
The company emphasizes its commitment to “AI safety over short-term acclaim.”
This statement was echoed in a 2024 TIME Magazine profile of CEO Dario Amodei, which praised Anthropic for delaying model releases in the name of ethical restraint.
By surfacing the blackmail scenario and immediately contextualizing it within its Responsible Scaling Policy (RSP), Anthropic is not simply warning the world about the risks of AI.
It is positioning itself as the architect of what responsible AI governance should look like. (Full article.)
The only reason this AI model was allegedly able to “blackmail” the programmer is that it was fed a FAKE story about the programmer having “an affair.”
In the real world, if you are stupid enough to document an affair online, especially using Big Tech’s “free” email services such as Yahoo or Gmail, you are opening yourself up to blackmail and worse, and this has already been happening for years, long before the new LLM AI models were introduced in 2023.
Thirteen years ago, the Gmail record of an affair brought down one of the most powerful men in the U.S., the Director of the CIA, because he did not know better than to use Gmail while conducting his affair. AI was not needed!! (Source.)
Also, if a computer software engineer is being “blackmailed” by code he or she has written, do you honestly believe that they cannot easily handle that? There’s a single key on computer keyboards that easily handles that: DELETE.
from Hacker News
Excerpts:
Not an expert here, just speaking from experience as a working dev. I don’t think AI is going to replace my job as a software developer anytime soon, but it’s definitely changing how we work (and in many cases, already has).
Personally, I use AI a lot. It’s great for boilerplate, getting unstuck, or even offering alternative solutions I wouldn’t have thought of.
What really makes me pause is when it gives back code that looks right, but I find myself thinking, “Wait… why did it do this?”
Especially when security is involved.
One recent example that stuck with me: a friend of mine, an office manager with zero coding background, proudly showed off how he used AI to inject some VBA into his Excel report to do advanced filtering.
My first reaction was: well, here it is, AI replacing my job.
But what hit harder was my second thought: does he have any idea what that code is actually doing?
So yeah, for me AI isn’t a replacement. It’s a power tool, and eventually, maybe a great coding partner. But you still need to know what you’re doing, or at least understand enough to check its work. (Source.)
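To illustrate the commenter’s point about code that “looks right” until security is involved, here is a small, hypothetical example of my own (the table, data, and function names are invented, and this is not taken from any AI tool’s actual output): two versions of the same database lookup, one open to SQL injection and one not. At a glance they read almost identically, which is exactly why someone with no coding background has no way to tell them apart.

```python
# Hypothetical illustration of "code that looks right" but hides a security
# flaw. The table and sample data are invented for this example.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Reads fine at a glance, but building SQL with an f-string lets an input
    # like  ' OR '1'='1  rewrite the query: classic SQL injection.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats the input as data, not as SQL.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)",
                     [("alice",), ("bob",)])

    malicious = "nobody' OR '1'='1"
    print(find_user_unsafe(conn, malicious))  # returns every row in the table
    print(find_user_safe(conn, malicious))    # returns nothing, as it should
```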
Excerpts:
Last week, Builder.ai, a British AI startup backed by Microsoft and the Qatar Investment Authority, collapsed into insolvency proceedings.
Valued at $1.5 billion after a $445 million investment by Microsoft, the company claimed to leverage artificial intelligence to generate custom apps in ‘days or weeks,’ producing functional code with less human involvement.
Instead of AI, the company had for years actually been using a fleet of more than 700 Indian engineers from social media startup VerSe Innovation to write the code.
Requests for custom apps were matched to pre-built templates and then customized through human labor, while the company’s demos and promotional materials misrepresented the role of AI.
According to Bloomberg, Builder.ai and VerSe billed each other for roughly similar amounts over several years, in an alleged practice known as “round-tripping” that people said Builder.ai used to inflate its reported revenue.
In several cases, products and services weren’t actually rendered for these payments. (Full article.)
Image by Clark Miller. Source.
By Ann Gehan
The Information
Excerpts:
Artificial intelligence heavyweights including OpenAI and Perplexity, along with commerce giant Amazon, are painting visions of AI tools acting as personal shoppers that can seamlessly buy stuff across the internet.
But that vision is still a long way from reality, investors and founders say.
It’s also tough for retailers to distinguish between an agent and a malicious bot, so some merchants are more inclined to block AI tools from checking out rather than make their sites friendlier for AI to navigate.
“There are still a lot of instances where AI can’t make the transaction, or it can’t scrape information off a website, or you’re trying to deal with a small business or a mom-and-pop shop that is not AI optimized,”
said Meghan Joyce, founder and CEO of Duckbill, a personal assistant startup that helps users in part by using AI. (Source.)
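Why would a merchant block a paying “agent”? Here is a rough sketch of the problem (the markers and thresholds below are invented for illustration, not any retailer’s actual rules): the simple filters most storefronts rely on key off things like user-agent strings and request rates, and by those signals a legitimate AI shopping agent looks just like a scraper, so the cautious default is to block both at checkout.

```python
# Rough sketch (invented markers and thresholds) of the kind of naive bot
# filter many storefronts run. It cannot distinguish a legitimate AI shopping
# agent from a malicious scraper, so both get blocked at checkout.
from dataclasses import dataclass

@dataclass
class Request:
    user_agent: str
    requests_last_minute: int
    has_prior_purchase_history: bool

# Substrings that show up in automated traffic, whether the client is a
# helpful shopping agent or a scraper harvesting prices.
AUTOMATION_MARKERS = ("bot", "crawler", "python-requests", "headless", "agent")

def allow_checkout(req: Request) -> bool:
    ua = req.user_agent.lower()
    if any(marker in ua for marker in AUTOMATION_MARKERS):
        return False                      # looks automated: block, no questions asked
    if req.requests_last_minute > 30:
        return False                      # browsing far faster than a human would
    if not req.has_prior_purchase_history and req.requests_last_minute > 10:
        return False                      # new account plus rapid browsing = suspicious
    return True

if __name__ == "__main__":
    human = Request("Mozilla/5.0 (Windows NT 10.0)", 4, True)
    ai_agent = Request("ExampleShoppingAgent/1.0 (automated)", 25, False)
    scraper = Request("python-requests/2.32", 200, False)
    for r in (human, ai_agent, scraper):
        print(r.user_agent, "->", "allowed" if allow_checkout(r) else "blocked")
```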
Much of the current hype surrounding AI that new startups and investors are banking on centers on computer robots that they believe will soon be in most people’s homes, doing routine household chores, watching your kids, and fulfilling many other science fiction scenarios.
But there is no such robot currently on the market, and when you see video demos of what currently is in development, you are usually either viewing a robot following a carefully written script, such as the “dancing robots” Elon Musk showed off recently, or they are being remotely controlled by humans.
You cannot buy a personal robot today to live in your house and reduce your workload. They don’t exist, and probably never will.
Consider this recently published story about a new startup, one of the first companies to work on developing just one part of a humanoid robot: the hands.
While this was no doubt published to create excitement and investment opportunities around a future in which there will supposedly be billions of these robots around the world, it actually does the opposite, depending on your point of view: it shows just how far away we still are from developing a humanoid robot that can do the same things humans do, because these robots do not even have hands that come anywhere close to operating like human hands.
Excerpts:
Humanoid robot hype is in full swing. The latest evidence is Elon Musk’s prediction Tuesday that by 2030 Tesla will be cranking out over a million of its Optimus humanoids—despite the fact that, as of last year, the company had only said it was using two of them.
As the saying goes, in a gold rush, sell shovels. Now startups are trying to capitalize on the humanoid boom by developing robotic hands. In fact, some founders have recently left these bigger robot makers to focus on the parts.
“99.5% of work is done by hands,” said Jay Li, who worked on hand sensors for Tesla’s Optimus line before he co-founded Proception in September to develop robotic hands.
“The industry has been so focused on the human form, like how they walk, how they move around, how they look,” he noted.
But the much harder problem, he said, is building a robotic hand that can actually pick up a delicate object (and not mash it to a pulp in the process). (Source.)
While AI does have some useful purposes, it is still too early in its development for it to be reliable, and most of the investment in AI today is based on what the AI idolaters believe it will be able to do in the future.
And if that future never arrives, we are going to see the biggest collapse of modern society we have ever seen, and a “Great Reset” that is not exactly what the Globalists had in mind.
The Big Tech crash is coming, and when it happens, the cost of human labor will skyrocket, and there will not be enough humans to meet the demands of the public who have falsely depended upon the technology for all these years.
AI will not replace humans, and humans will be needed to clean up their messes and take out the trash with human hands.
This article was written by Human Superior Intelligence (HSI)
Published on June 4, 2025