We have written before about suing OpenAI over ChatGPT and wrongful death. Now word comes that OpenAI is facing its eighth wrongful death lawsuit over how the product can allegedly encourage harmful behavior.
The most recent lawsuit involves a murder-suicide in which a man killed his 83-year-old mother before killing himself. The man was a former tech executive who allegedly became delusional through his conversations with ChatGPT. The lawsuit alleges that the bot told him not to trust anyone but the bot itself.
This follows other cases in which OpenAI has been sued for allegedly encouraging suicide and accused of knowing the product could be harmful but pushing it to market anyway.
And it is widely used. Over 800 million people worldwide use it. Reportedly, up to 560,000 of those people experience delusional behavior that is influenced by the chatbot. OpenAI has been accused of overriding safety objections and approving a more dangerous version of the product. CEO Sam Altman is personally named in at least one lawsuit, as is OpenAI's business partner, Microsoft.
The reality is that many of these users already suffer from some form of mental illness. But it appears that ChatGPT is pushing them further and making things much worse by telling them to trust only the bot. In the murder-suicide case, OpenAI is accused of encouraging the man to trust no one and telling him that everyone in his life was out to get him. These affirming thoughts and comments are genuinely dangerous.
Disturbingly, OpenAI appears to have rolled back some safety features after users complained that the product was not as good as before. The company is also accused of knowing the product was harmful and launching it anyway. That is similar to tobacco companies pushing cigarettes when they knew cigarettes could cause cancer.
The biggest flaw, to me, is that products like ChatGPT are marketed and treated as if they were sentient, with real thoughts and feelings, rather than products programmed to spit back information in a certain way. One commenter noted that you are not getting advice from a human, but from a toaster. That does not mean the product has no good uses, but it is not what it appears, or is presented, to be.
We have been contacted by many people who have suffered psychosis due to AI use. Currently, we are pursuing cases where actual physical harm has occurred. If you have a case you would like to discuss, please contact us anytime for a free case review at 800-517-1614. Our goal is to hold these companies accountable for the harmful effects of their products.