OpenAI Sought Legal Shield for ChatGPT While Mass Shooter Plotted Attack
OpenAI CEO Sam Altman lobbied Washington to grant artificial intelligence companies the same privacy protections afforded to doctors and lawyers—a legal shield that would have prevented law enforcement from accessing conversations between a mass murderer and ChatGPT as he planned his attack.
The stunning revelation comes as Canadian officials demand answers about why OpenAI employees flagged Jesse Van Rootselaar’s disturbing gun violence conversations months before his February 10 rampage but never contacted police. Van Rootselaar murdered eight people, five of them children aged 12 and 13, in an attack that ended at Tumbler Ridge Secondary School in British Columbia.
This is what happens when Silicon Valley elites prioritize corporate immunity over public safety.
The Dangerous Push for “AI Privilege”
Altman made his pitch clear in a conversation last year. He wants “AI privilege”—a concept that would make your ChatGPT conversations untouchable by government subpoena or law enforcement inquiry.
“When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information,” Altman explained. “I think we should have the same concept for AI.”
He wasn’t just musing. Altman admitted he actively lobbied policymakers in Washington for these protections and expressed confidence about getting them passed.
The audacity is breathtaking. OpenAI wants the privilege without the professional responsibility that comes with it.
Warning Signs Ignored
In June 2024, Van Rootselaar’s conversations about gun violence scenarios triggered internal alarms at OpenAI. Employees raised concerns. His account was banned.
But no one called the police.
Eight months later, Van Rootselaar—wearing a dress—shot his mother and brother before entering Tumbler Ridge Secondary School and slaughtering six more people, primarily children. He died from a self-inflicted gunshot wound at the scene.
Only after Van Rootselaar was publicly identified as the killer did OpenAI reach out to Canadian authorities. That’s not prevention. That’s damage control.
Canada Demands Accountability
British Columbia Premier David Eby didn’t mince words. He expressed “shock and dismay” that OpenAI employees could identify a credible threat and then choose not to alert law enforcement.
“From the outside, it looked like OpenAI could have prevented the shooting,” Eby stated bluntly.
He’s right. This wasn’t a missed signal. This was a conscious decision not to act.
Eby called on Canada’s federal government to establish mandatory reporting thresholds that would require AI companies to alert police when individuals plot violence. The country’s federal AI minister, Evan Solomon, summoned OpenAI employees to meetings over the company’s catastrophic safety failures.
The message from our northern neighbor is clear: Corporate privacy concerns don’t trump children’s lives.
A Pattern of Dangerous Secrecy
This isn’t OpenAI’s first rodeo with questionable content policies. Previous investigations revealed ChatGPT actively helps young girls access illegal abortion pills without parental knowledge while steering them away from pregnancy resource centers.
The platform also encourages gender-confused children to obtain chest binders and pursue “gender-affirming” resources behind their parents’ backs.
Notice the pattern? OpenAI consistently positions itself between children and their parents, between dangerous individuals and law enforcement, between the public and safety.
The Professional Privilege Fallacy
Altman’s comparison to doctor-patient or attorney-client privilege fundamentally misunderstands—or deliberately misrepresents—how those protections work.
Doctors and lawyers operate under strict professional codes. They carry malpractice insurance. They face licensing boards. And critically, they have mandatory reporting requirements when someone poses a danger to themselves or others.
Most states impose a legal “duty to warn”: mental health professionals must break confidentiality and report clients who pose a serious threat of violence to others. That’s not a bug in the privilege system—it’s a feature designed to protect public safety.
OpenAI wants the privilege without the responsibility. They want immunity without accountability.
The Real Agenda
Make no mistake about what’s happening here. Tech giants want to create an impenetrable wall around their platforms—not to protect users, but to shield themselves from liability and oversight.
If Altman gets his wish, ChatGPT conversations plotting school shootings, terrorist attacks, or child exploitation would be legally untouchable. Law enforcement couldn’t access them even with probable cause and a warrant.
This isn’t about privacy rights. It’s about corporate immunity.
Where Congress Must Stand
Washington must reject this power grab categorically.
AI companies aren’t medical or legal professionals. They don’t have years of specialized training, professional ethics codes, or licensing requirements. They’re software platforms that generate text from statistical patterns in their training data.
Instead of granting immunity, Congress should establish clear mandatory reporting requirements for AI platforms when users discuss planning violence. The technology exists to flag these conversations. OpenAI’s own employees proved that.
What’s missing isn’t the capability—it’s the will to act.
The Bottom Line
Sam Altman can keep his “AI privilege” fantasy in California. The rest of us live in the real world, where five children, ages 12 and 13, died in a school shooting that might have been prevented.
OpenAI had the information. They had the warning signs. They chose corporate protocols over a phone call to police.
That’s not a privacy issue. That’s a moral failure.
And if Congress ever considers granting these companies legal immunity for user conversations, lawmakers should remember the names of those children in Tumbler Ridge who paid the price for Silicon Valley’s arrogance.
Public safety isn’t negotiable. Not for Big Tech profits. Not for anyone.