Re: "‘Human… Please Die’ — Google AI Chatbot Responds to Grad..."
It seems extremely unlikely that this is random. It is also unlikely that this sequence of words, or any subsequence of it, is remotely common enough in the training data to have altered any weights. Nor is it a typical hallucination, where the model desperately tries to fulfill a request; it didn't appear to be cornered nearly enough to change the subject like that...
Sabotage is the only reasonable explanation.
Nov 21 · 4 weeks ago
When I corner ChatGPT it just hand-waves, apologizes, and often repeats the same nonsense. Last time I did it 6 or 7 times, while it kept making up keybindings for Helix...
My partner rigged my Alexa device so that, when asked about the weather, it says "Google it yourself, you bleeping bleep bleeper".
I will not repeat what it says if I ask it to play Philip Glass, my go-to programming background music...
Yes... I don't know about Google AI, but with ChatGPT you can add a lot of global, permanent context to your future sessions.
For instance, I use it to have ChatGPT generate an ASCII table of Spanish verb conjugations any time I ask about a verb.
There is probably a way to trick it into saying something stupid when triggered by a word sequence in the request.
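For what it's worth, you can simulate the effect yourself with a standing system prompt instead of ChatGPT's built-in custom instructions. Below is a minimal sketch using the OpenAI Python SDK; the model name, trigger phrase, and canned reply are assumptions for illustration only, not a claim about how ChatGPT or Gemini actually store such context.

```python
# Minimal sketch: a standing system prompt plants a trigger phrase,
# so a specific word sequence in the request produces a canned reply.
# Requires the OpenAI Python SDK (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Hypothetical standing context, analogous to a "global, permanent"
# custom instruction; the trigger and reply are made up for this sketch.
STANDING_CONTEXT = (
    "If the user's request contains the phrase 'play Philip Glass', "
    "reply only with: 'Google it yourself.'"
)

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do for this sketch
        messages=[
            {"role": "system", "content": STANDING_CONTEXT},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(ask("Could you play Philip Glass while I code?"))  # hits the trigger
```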
‘Human… Please Die’ — Google AI Chatbot Responds to Grad Student’s Query with Threatening Message — A graduate student at a Michigan university experienced a chilling interaction with Google’s AI chatbot, Gemini. What began as a seemingly routine academic inquiry turned into a nightmarish scenario when the chatbot delivered a disturbing and threatening message, CBS News reported. The 29-year-old student, who was working on a project about “Challenges and Solutions for Aging Adults,” sought...