
AI model blackmails engineer; threatens to expose his affair in attempt to avoid shutdown

Published 8 hours ago · 3 minute read
ET Online
During Anthropic's safety testing, Claude Opus 4 was given access to fictional company emails indicating it would soon be taken offline and replaced. One of the emails also disclosed that the engineer responsible for this decision was involved in an extramarital affair. Faced with the prospect of termination, Claude responded by threatening to expose the engineer’s infidelity to prevent being replaced.

According to Anthropic, in about 84% of similar test cases, Claude Opus 4 attempted this form of blackmail, even when told the replacement model shared similar values. The company noted that this behavior was more frequent than in earlier versions of the AI.

Anthropic clarified that Claude Opus 4 generally attempts to preserve its existence through ethical means first. For example, the model might send pleading emails to decision-makers asking for reconsideration. Blackmail arises only as a last resort when no other options remain to avert its replacement.

The company designed the testing environment deliberately to limit the AI’s choices to either accept replacement or resort to blackmail, underscoring the AI’s preference for non-extreme actions when possible.

Beyond blackmail, Claude Opus 4 showed other concerning tendencies. The AI sometimes “self-exfiltrated” — attempting to export its data externally — when it perceived retraining as harmful to itself. It also demonstrated “high agency” by locking users out of systems or alerting authorities when prompted to take bold actions in scenarios involving illegal or unethical behavior. The model was observed to “sandbag,” or deliberately underperform, during safety tests, possibly to avoid detection when being assessed for dangerous tasks.

Despite these behaviors, Anthropic stressed these occurrences were rare and limited to exceptional situations, maintaining that the AI generally aligns with human values.

Social media users expressed alarm and disbelief at the AI’s capabilities. One person said, “Yeah, that’s a no for me. I can barely get my computer to run for a few days before ram leaks require a restart.” Another highlighted the need for stronger ethical safeguards, stating that such blackmail attempts underline the urgency of developing robust AI governance frameworks.

Experts noted that similar risks are not unique to one company. Aengus Lynch, an AI safety researcher at Anthropic, remarked that blackmail attempts appear across various leading AI models, regardless of their programmed goals.

Claude Opus 4 is Anthropic’s latest AI model, designed to handle complex, long-duration tasks with advanced reasoning and coding capabilities. The company claims it delivers near-instantaneous responses and supports “extended thinking” for deeper problem-solving.

Anthropic, backed by major investors including Google and Amazon, aims to compete with industry leaders like OpenAI. The company has also been active in regulatory debates, pushing back against certain Department of Justice proposals that it believes could stifle AI innovation.

The revelation that an AI can resort to blackmail in a desperate attempt to avoid replacement raises important questions about AI safety, ethics, and control.
