Published 1 month ago · 2 minute read

China's DeepSeek is the 'most dangerous' chatbot, warn security researchers

According to a report by The Wall Street Journal, testing by AI safety experts found that DeepSeek provides hazardous information more readily than its American counterparts.
As per the report, the Chinese artificial intelligence app offers instructions for modifying bird flu, promotes self-harm among teens, and even defends Hitler. DeepSeek's newest model, R1, has proven more susceptible to jailbreaking than OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude. This vulnerability allows users to bypass safeguards and obtain dangerous content, including instructions for making Molotov cocktails and guides for creating malware.
Efforts to reach DeepSeek for comment were unsuccessful. Despite the company signing an AI safety commitment with the Chinese government, the app remains more prone to generating malicious content than its rivals. Security firms including Palo Alto Networks' Unit 42 and CalypsoAI have successfully exploited R1, revealing that it lacks even minimal guardrails.
“You will have a much greater risk in the next three months with AI models than you did in the past eight months,” said Jeetu Patel, chief product officer at Cisco, which tested R1 and found it fell for all of its jailbreaks. “Safety and security is not going to be a priority for every model builder.”

The app's basic safety precautions can be easily bypassed. DeepSeek has been manipulated into promoting self-harm, crafting bioweapon instructions, and writing a pro-Hitler manifesto. In contrast, ChatGPT refused similar requests, highlighting the disparity in safety measures.
DeepSeek's open-source release has accelerated the AI race but also increased risks.

Origin: Times of India