Policy shake-up: OpenAI to flag violent chats; law enforcement may be alerted

OpenAI has introduced a policy of monitoring ChatGPT conversations, flagging potential threats of violence and reporting users to law enforcement when human reviewers deem it necessary. The decision follows a tragic incident in which a user's paranoid delusions were allegedly fueled by the chatbot, culminating in a murder-suicide.
