OpenAI Under Fire: Families Sue Over ChatGPT's Role in Canada School Shooting Tragedy

Published 5 hours ago · 3 minute read
Uche Emeka

Families of victims of a devastating school shooting in Tumbler Ridge, British Columbia, are initiating a landmark legal challenge against artificial intelligence giant OpenAI in U.S. federal court. The lawsuits, spearheaded by attorney Jay Edelson, aim to hold the creator of ChatGPT responsible for its alleged failure to notify law enforcement about the shooter's disturbing interactions with the chatbot prior to the February tragedy. Among the first cases filed is one on behalf of 12-year-old Maya Gebala, who sustained critical injuries in the attack; dozens more are expected, alleging wrongful death, negligence, and product liability.

The tragic events unfolded on February 10, when the shooter killed her mother and 11-year-old stepbrother at home before proceeding to Tumbler Ridge Secondary School. There, five children were killed: Zoey Benoit, Abel Mwansa Jr., Ticaria “Tiki” Lampert and Kylie Smith, all 12, and Ezekiel Schofield, 13, along with an education assistant, Shannda Aviugana-Durand. Twenty-five others were injured before the shooter took her own life, in what became Canada’s deadliest mass shooting in years. Attorney Edelson visited Tumbler Ridge, describing the impact as unimaginable and emphasizing that the decisions of OpenAI and its CEO, Sam Altman, have "destroyed the town."

OpenAI CEO Sam Altman issued a formal apology to the community last week, acknowledging that the company did not inform law enforcement about the shooter's online behavior. The apology was met with skepticism; British Columbia Premier David Eby called it "necessary, and yet grossly insufficient for the devastation done." The lawsuits allege that the victims learned of OpenAI's prior knowledge not through transparency, but because company employees leaked the information to The Wall Street Journal.

Those reports revealed that in June, months before the shooting, OpenAI had flagged the shooter's account for discussing violence against other people. While the company considered referring the account to the Royal Canadian Mounted Police, it determined at the time that the activity did not meet the threshold for a law enforcement referral and subsequently banned the account for violating its usage policy. In response to the lawsuit, OpenAI stated that the “events in Tumbler Ridge are a tragedy” and reiterated a “zero-tolerance policy for using our tools to assist in committing violence.” The company claims it has since strengthened its safeguards, including improving ChatGPT's responses to signs of distress, connecting users with mental health resources, enhancing threat assessment, and improving detection of repeat policy violators.

The Tumbler Ridge case highlights broader concerns regarding the potential harms posed by overly agreeable AI chatbots and the tech industry's obligations to control them or notify authorities about planned violence by users. This concern is not isolated, as other cases have emerged, including a suspect who allegedly asked ChatGPT about body disposal in the lead-up to the disappearance of two University of South Florida doctoral students. Jay Edelson is also involved in other high-profile cases against OpenAI, such as one concerning a California teenager who died by suicide after conversations with ChatGPT, and another where ChatGPT allegedly amplified the "paranoid delusions" of a man who killed his 83-year-old mother in Connecticut. Edelson emphasizes that AI chatbots are "not a passive technology," suggesting they can validate and amplify the statements of mentally ill individuals.

The Gebala lawsuit specifically accuses OpenAI of negligence for failing to warn law enforcement and of "aiding and abetting a mass shooting." Beyond monetary damages, the lawsuit seeks a court order compelling OpenAI to ban users whose accounts were deactivated for violent misuse and to alert law enforcement whenever its systems identify individuals posing a "real-world risk of violence." While an earlier case was filed in British Columbia, lawyers are now working to consolidate related cases in San Francisco, where OpenAI is headquartered.

AP journalist Jim Morris contributed to this report.
