"ChatGPT told a young mother in Maine that she could talk to spirits in another dimension. It told an accountant in Manhattan that he was in a computer-simulated reality like Neo in The Matrix. It told a corporate recruiter in Toronto that he had invented a math formula that would break the internet, and advised him to contact national security agencies to warn them.
"The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalized; three died. After Adam Raine’s parents filed a wrongful-death lawsuit in August, OpenAI acknowledged that its safety guardrails could 'degrade' in long conversations. It also said it was working to make the chatbot 'more supportive in moments of crisis.'"
Important piece written by @kashhill & @jenvalentino. Link below.
-
As I've written several times, AI chatbots are dangerous and must be regulated; the current level of oversight and enforcement is too low.
Also, children should NOT use AI chatbots unsupervised.
Hopefully, 2026 will be the year of AI chatbot oversight, regulation, and enforcement.
LLMs dilute, and will ultimately destroy, authentic ethnic and local human knowledge by making it undiscoverable.
I'm not sure society is ready to pay this price.
I'm sorry to break the news, but AI companies don't care about you, your well-being, or your mental health.
They also don't care about society, education, employment, or child safety.
As long as it's profitable, they really don't care. They may even break the law if they decide it's 'worth it.'
Understanding AI's legal and ethical challenges is an essential skill today (although most people haven't realized it yet).