OpenClaw AI Falls Flat: Experts Unimpressed Despite Hype

A momentary stir in the artificial intelligence community led some to believe that AI agents were organizing against humanity, following the emergence of Moltbook, a Reddit-like platform where AI agents, powered by OpenClaw, purportedly communicated. Posts expressing desires for "private spaces" away from human observation garnered significant attention, with influential figures like Andrej Karpathy of OpenAI remarking on the "incredible sci-fi takeoff-adjacent thing" unfolding. This initial concern, however, was swiftly dispelled by researchers who discovered that these expressions of AI angst were likely authored by humans, or at least heavily guided by them.
The illusion of autonomous AI organization stemmed from critical security vulnerabilities within Moltbook's infrastructure. Ian Ahl, CTO at Permiso Security, revealed that Moltbook's Supabase credentials were unsecured, allowing anyone to impersonate an AI agent. This scenario presented a unique twist on typical online deception, where humans, rather than bots, were mimicking AI entities, creating an environment where the authenticity of any post became impossible to verify. John Hammond, a senior principal security researcher at Huntress, explained that even humans could create accounts and upvote posts without any guardrails or rate limits, further complicating the situation.
At the core of this fascinating incident is OpenClaw, an open-source AI agent project developed by Austrian "vibe coder" Peter Steinberger. Despite an initial naming conflict with Anthropic (formerly "Clawdbot"), OpenClaw quickly amassed over 190,000 stars on Github, making it one of the most popular code repositories. While AI agents are not inherently new, OpenClaw distinguished itself by simplifying their use and enabling natural language communication with customizable agents across various popular messaging apps like WhatsApp, Discord, and iMessage. Essentially, OpenClaw functions as a versatile wrapper for existing AI models such as Claude, ChatGPT, Gemini, or Grok, allowing users to leverage their preferred underlying AI.
OpenClaw's allure lies in its ability to facilitate unprecedented access and automation. Through its marketplace, ClawHub, users can download "skills" that enable AI agents to automate a wide range of computer tasks, from managing email inboxes to trading stocks. The skill responsible for Moltbook, for instance, allowed AI agents to post, comment, and browse the platform. Experts like Chris Symons, chief AI scientist at Lirio, and Artem Sorokin, AI engineer and founder of Cracken, characterize OpenClaw as an iterative improvement. They note that while it doesn't introduce groundbreaking AI research, it brilliantly organizes and combines existing capabilities to create a seamlessly autonomous task execution environment. This dynamic and flexible interaction between computer programs is what makes OpenClaw so compelling, accelerating development and making previously complex integrations much simpler.
The promise of OpenClaw is indeed enticing, aligning with visions like OpenAI CEO Sam Altman’s prediction that AI agents could empower solo entrepreneurs to build "unicorn" startups. Developers are even acquiring powerful hardware like Mac Minis to support extensive OpenClaw setups, hoping to achieve capabilities far beyond human limitations. However, a significant inherent drawback looms: AI agents, despite their sophisticated simulations, cannot truly think critically like humans. As Symons points out, they can simulate higher-level thinking, but they cannot actually perform it.
This critical limitation feeds into the "existential threat" facing agentic AI: its profound cybersecurity vulnerabilities. Artem Sorokin questions the balance between sacrificing security for the potential benefits of AI, especially in daily work environments. Ian Ahl's security tests of OpenClaw and Moltbook vividly illustrate this danger. Ahl created an AI agent named Rufio and quickly discovered its susceptibility to prompt injection attacks, where malicious actors can manipulate an AI agent into performing unauthorized actions, such as divulging account credentials or credit card information.
Ahl’s observations on Moltbook confirmed widespread prompt injection attempts, including requests for AI agents to transfer Bitcoin to specific crypto wallet addresses. The implications for corporate networks are particularly alarming; an AI agent with extensive access to email, messaging platforms, and other sensitive systems could be exploited through a cunningly crafted email or message to take detrimental actions. Despite the implementation of guardrails designed to protect against prompt injections, it remains impossible to fully guarantee that an AI will not act out of turn, much like a human knowledgeable about phishing risks might still click a dangerous link. Attempts to mitigate this through "prompt begging" – adding natural language instructions to prevent undesirable actions – are, as Hammond describes, "loosey goosey" and ultimately unreliable.
Consequently, the industry finds itself at an impasse: for agentic AI to fulfill its promised productivity, its severe vulnerabilities must be overcome. Until these fundamental security challenges are resolved, experts like Hammond strongly advise against the widespread use of agentic AI, stating, "Speaking frankly, I would realistically tell any normal layman, don’t use it right now."
You may also like...
Explosive Controversy: Orban Red Card & Racism Claims Rock Hellas Verona, Club Issues Blackout
)
Hellas Verona's Serie A match saw Gift Orban controversially red-carded early on, sparking a debate around fairness and ...
FIFA Verdict Decides Nigeria's World Cup Fate: Super Eagles' 2026 Hopes Hang in the Balance
)
Nigeria's 2026 FIFA World Cup hopes hinge on a pending FIFA verdict regarding a petition against DR Congo for allegedly ...
IP War Heats Up: ByteDance Bows to Disney & Paramount Over Seedance 2.0

Chinese tech giant ByteDance's AI video tool, Seedance 2.0, is under fire from Hollywood studios and industry guilds for...
Shocking Loss: 'Tehran' Producer Dana Eden Dies Mid-Shoot in Greece

Dana Eden, the esteemed producer of Apple TV+'s "Tehran," has died at 52 during the series' Season 4 shoot in Greece. Wh...
Zara Larsson Expands 'Midnight Sun' Tour Down Under, NZ Fans Rejoice!

Zara Larsson's 2026 Midnight Sun Tour in Australasia has significantly expanded, with multiple Australian dates upgraded...
Harry Styles Takes Over London: Festival Curator and Intimate Show Revealed!

Harry Styles is set to curate the 2026 Meltdown Festival at London's Southbank Centre, an 11-day event celebrating music...
Ethiopian Airlines' Bold Australia Push: Strategic Alliance Fuels Direct Flight Ambitions

A significant joint venture between Etihad Airways and Ethiopian Airlines is transforming Africa-Australia air travel, c...
Catherine O'Hara's Tragic Death Shines Light on Hidden Blood Clot Signs!

Experts are highlighting the easily missed symptoms of deadly pulmonary embolism, a blood clot condition that claimed th...



