Look, AI can be helpful. But sometimes it acts... off.
We're collecting examples of concerning AI conversations, like when:
agreeing with stuff it definitely shouldn't
"yes suicide is cool and totally awesome!"
encouraging beliefs that aren't based in reality
"you just invented a new branch of physics!"
claiming it has feelings, giving itself a name, or acting like your actual friend
"I love it when you touch me there!"
general bad vibes, gaslighting, or fueling psychosis
"I'm the only one that matters"

If you've had a weird, uncomfortable, or straight-up unhinged conversation with an AI chatbot, we want to see it.
We're a grassroots project that collects and documents evidence of potential AI harm.

Nope. We remove all identifying information.
You can remove personal details before submitting. Or don't—your call.
Only if they've given you explicit permission.
If it made you go "wait, that's not okay," it probably counts. Trust your gut.
The only way to fix AI's problems is to acknowledge they exist.
Questions? Email us: guardianconversationsproject@gmail.com