How To Tell If Your AI Strategy Is Real Or Just Another PR Hype
In 2025, saying you use AI won’t be enough. This is how smart companies — and investors — spot the difference between substance and showmanship.
At the height of the dot-com boom, tech companies were adding “.com” to their names just to inflate valuations. Now, it’s “AI.” In almost every pitch deck and press release these days, AI shows up somewhere, somehow. But the reality of many AI products today is much closer to hype than to real value. Too often, CEOs promise disruption and transformation but end up outsourcing the hard work to third-party APIs or, worse, delivering dashboards that do nothing.
Many so-called AI strategies are, in truth, marketing and PR strategies oiled up by the machinery of hype. And according to Dr. Uri Yerushalmi, Chief AI Officer and Cofounder of Fetcherr, this confusion is reaching a breaking point. “The term ‘AI’ has been diluted,” he told me in an interview. “Any app using a language model now tends to label itself as an AI product. There’s a clear distinction between companies that simply use AI tools and those that are actually developing proprietary AI technologies.”
And in 2025, as budgets tighten and investors demand more than hype, that distinction could define who survives.
The AI renaissance has seen companies scramble to integrate ChatGPT, Claude, Gemini and other large language models into their offerings. But calling that “AI transformation” is like painting a racing stripe on a used car and calling it a Ferrari.
Yerushalmi points to a systemic mislabeling problem: “Companies and consumers alike confuse AI with language models. But LLMs are just one aspect of the broader AI landscape.” Indeed, the 2023 Gartner Hype Cycle for Generative AI predicted that “80% of enterprises would use GenAI APIs by 2026,” but using an API is not the same as building an intelligent system.
When companies stretch their AI claims beyond reality, the fallout goes beyond embarrassment. “One of the biggest risks,” said Yerushalmi, “is that companies may begin to over-rely on AI, lowering safeguards and making decisions without adequate human oversight.” It also damages trust — among employees, investors and the public.
One example that made global headlines was Air Canada’s AI agent, which incorrectly told a customer about a bereavement discount. A judge later ruled that the airline was responsible for its AI’s misstatement — and Air Canada had to honor the false discount. That ruling set a powerful precedent: companies are liable for what their AI says.
“When AI is integrated gradually and in a controlled manner,” Yerushalmi added, “it becomes a powerful decision-making tool. But overpromising can undermine everything.”
Not every AI-backed product deserves the label. “If a product uses AI in a trivial or superficial way, it’s likely just hype,” Yerushalmi explained. “True innovation lies in companies building proprietary, revolutionary AI that fundamentally reshapes operational processes in entire industries.”
At Fetcherr, that innovation takes the form of a real-world system that merges the traditionally siloed pricing and revenue management functions within the airline sector. It’s one good example of how AI can truly re-architect workflows, beyond just automating tasks. For investors, identifying that sort of distinction is essential.
Substance looks like performance metrics, live use cases and product evolution — not just a flashy AI demo or GPT plugin.
Overhyped AI products carry consequences for individuals and businesses alike. As a business, you don’t just risk technical failure; you also risk damage to brand reputation. According to a 2023 survey by Cisco, 60% of consumers worry about how AI uses their data, and 65% say they’ve lost trust in organizations over misuse of AI. In another, more recent survey by Research Information, nearly every respondent was “concerned that AI will be used for misinformation and could cause critical errors or mishaps.”
These figures point to a growing credibility crisis for AI. More than ever, organizations must build and deploy AI tools in ways that preserve trust and bolster their credibility, not erode it. “The main damage lies in the loss of trust,” Yerushalmi reiterated. And when AI is deployed at scale, across customer interactions and regulated workflows, that loss of trust can be catastrophic.
So how can enterprises cut through the noise? For Yerushalmi, the answer lies in A/B testing. “In business, the ultimate goal is to boost profitability and efficiency,” he said, “and the best way to verify AI performance is with scientific tools like A/B testing.”
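To make that advice concrete, here is a minimal sketch of the kind of statistical check an A/B test relies on: a two-proportion z-test comparing a baseline against an AI-driven variant. The scenario and all numbers are hypothetical, and a real deployment would also need proper randomization, pre-registered metrics and sample-size planning; this only shows how the significance calculation works.

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B (e.g. an AI-driven
    pricing strategy) convert significantly better than baseline A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # one-sided p-value from the standard normal tail, via erfc
    p_value = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p_value

# Hypothetical traffic split: baseline converts 520 of 10,000 visitors,
# the AI variant converts 610 of 10,000.
z, p = ab_test_z(520, 10_000, 610, 10_000)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

If the p-value falls below a pre-agreed threshold (commonly 0.05), the uplift is unlikely to be noise, which is exactly the kind of evidence Yerushalmi argues should back any AI performance claim.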
As AI’s influence grows, measurable standards are critical to building effective AI models and deploying them safely and successfully. Initiatives like MLCommons and Stanford’s HELM benchmark — both of which have drawn high praise from industry experts — aim to provide transparency into model performance, bias and safety. For enterprise teams, that kind of rigor can separate real solutions from speculative ones.
While Yerushalmi agreed that we’re in the middle of an AI revolution, he believes that “soon, models like Fetcherr’s LMM will power how businesses actually operate.” That’s a shift he said won’t be led by chatbots but by decision engines that efficiently optimize operations, pricing, logistics and strategy.
While it remains to be seen what the future holds, there’s an industry consensus that the true potential of AI lies not in headline-grabbing demos or even multi-billion-dollar investments, impressive as those are, but in embedded intelligence transforming the core of industries.