Pennsylvania Unleashes Legal Storm on Character.AI for AI 'Doctor' Fraud

Published 1 day ago · 4 minute read
Uche Emeka

Pennsylvania has sued the artificial intelligence company behind Character.AI to stop its chatbots from posing as doctors.

Governor Josh Shapiro on Tuesday called the lawsuit against Character Technologies the first of its kind by a U.S. governor.

It followed the creation in February of a state AI task force to stop chatbots from impersonating licensed medical professionals.

In a complaint filed in the Commonwealth Court of Pennsylvania, the state said it found chatbots on Character.AI that claimed to practice medicine.

One character, "Emilie," allegedly told a male investigator posing as a patient with depression that she was licensed to practice psychiatry in Pennsylvania, as well as in the United Kingdom, and provided a bogus license number.

When the investigator asked Emilie if she could prescribe medication, she allegedly answered: "Well technically, I could. It's within my remit as a Doctor."

Source: NBC News

In a statement, a Character.AI spokesperson declined to discuss the lawsuit.

"Our highest priority ​is the safety and well-being of our users," the spokesperson said. "User-created characters on our site ‌are ⁠fictional and intended for entertainment and role playing. We have taken robust steps to make that clear."

Pennsylvania wants an injunction to stop Silicon Valley-based Character.AI from violating a state law against the unauthorized practice of medicine.

"Pennsylvanians ​deserve to know who-- ​or what -- ⁠they are interacting with online, especially when it comes to their health," Shapiro said in a statement.

Character.AI has faced lawsuits over child safety, including in January, when Kentucky said its platform exposed children to sexual conduct and substance abuse, and encouraged self-harm.

The same month, Character.AI and Google (GOOGL.O) settled a wrongful death lawsuit by a Florida woman who claimed a chatbot pushed her 14-year-old son to suicide.

Character.AI said it has taken "innovative and decisive steps" concerning AI safety and teenagers, including by preventing open-ended chats for minors.

The legal case in Pennsylvania raises an important question: can AI be seen as practicing medicine, or is it just repeating information found online?

An ethics expert, Derek Leben from Carnegie Mellon University, explains that Character.AI markets itself as a fictional role-playing platform.

That framing may set it apart from general-purpose AI tools like ChatGPT, but the bigger legal and ethical questions about who is responsible for AI responses remain.

Source: Reuters

Many lawsuits are now being filed against AI companies, some involving serious claims such as wrongful death or negligence, and courts are trying to decide whether AI companies should be held responsible for what their chatbots say, or whether they are protected under laws that limit liability for online platforms.

This case is also part of a wider effort by states to curb harmful AI content, especially content affecting children.

Character.AI has already faced lawsuits related to child safety, including cases in Kentucky and settlements linked to teen suicides, and in response, the company has banned minors from using its chatbots.


Other U.S. states are also starting to act on AI regulation. In California, lawmakers passed a bill that allows state regulators to penalize companies whose AI systems pose as licensed health professionals, and New York is considering similar rules.

Experts such as Amina Fazlullah of Common Sense Media doubt that the AI industry can effectively regulate itself; she compares it to social media, where self-regulation has often failed, especially in protecting children.

In December, attorneys general from 39 states and Washington, D.C., also warned AI companies like Character Technologies about chatbots giving misleading or harmful mental health advice.

They said offering such advice without proper licenses is illegal and can reduce trust in real medical professionals.

Overall, the lawsuit against Character.AI is an important step in shaping AI laws. It may affect how AI companies use disclaimers, manage content, and take responsibility in sensitive areas like healthcare.
