
# AI Safety Concerns Rise: Canada Probes OpenAI, Meta Restricts Abortion Info, ChatGPT Medical Errors
Financial Post • Monday, February 23, 2026 • Ottawa, ON, Canada
AI safety and regulation are under increasing scrutiny across multiple sectors. OpenAI is under investigation in Canada after a mass shooting suspect was found to have interacted with ChatGPT. Meta faces criticism for restricting minors' access to abortion information on its platforms. Meanwhile, a new study highlights the potential for dangerous errors in ChatGPT's medical advice, and legal experts are debating whether AI platforms should be liable for the content they generate.

## Latest Update

A study found that ChatGPT-4 failed to correctly triage approximately 10% of medical emergencies, potentially leading to delayed care and life-threatening outcomes. Experts emphasize that AI is not a substitute for clinical judgment in high-stakes situations.

## Timeline

* 2026-02-23: Canadian lawmakers summon OpenAI executives to explain safety protocols after revelations that a mass shooting suspect interacted with ChatGPT.
* 2026-02-25: Leaked documents reveal Meta's policy of restricting its AI chatbot from discussing abortion-related topics with users identified as minors.
* 2026-03-01: Legal experts debate whether Section 230 protections should extend to generative AI, potentially exposing AI companies to liability for harmful outputs.
* 2026-03-02: A study finds ChatGPT-4 failed to correctly triage approximately 10% of medical emergencies.

## What to Watch

* Legislative action in Canada on mandatory reporting of potential violent activity identified by AI systems.
* Further legal challenges to Section 230 protections for generative AI platforms in the US.
* Public health implications of AI-driven medical advice, including the potential for misdiagnosis or delayed care.