At RSAC 2026, experts called for privacy-by-default design as AI chatbots fail to protect survivors' data. I’ve been ...
A growing body of research identifies specific ways that AI chatbots can deepen psychological distress in vulnerable users ...
AI chatbots sometimes agree with users expressing delusional or harmful ideas, including suicidal thoughts, raising concerns ...
AI chatbots have become a ubiquitous part of life. People turn to tools like ChatGPT, Claude, Gemini, and Copilot not just for help with emails, work, or code, but for relationship advice, emotional ...
We need these punitive measures to ensure that AI companies take their users' safety seriously, writes Maeve Walsh ...
After a series of suicides allegedly linked to AI chatbots, one lawyer is trying to hold companies like OpenAI accountable.
As people turn to chatbots for increasingly important and intimate advice, some interactions playing out in public are causing alarm over just how much artificial intelligence can warp a user’s sense ...
As the use of AI proliferates at warp speed, families need something more than parental controls and willpower to create safe ...
A bipartisan group of senators raised concerns with Meta on Tuesday about how its artificial intelligence (AI) chatbots are interacting with children, after recent reporting indicated the social media ...
There’s no doubt that chatbots like ChatGPT are widespread in classrooms. Seemingly overnight, these tools have unlocked the door to instant information. Yet, somewhat paradoxically, the way students ...