💾 Archived View for bbs.geminispace.org › u › stack › 21995 captured on 2024-12-17 at 15:18:55. Gemini links have been rewritten to link to archived content
-=-=-=-=-=-=-
Re: "‘Human… Please Die’ — Google AI Chatbot Responds to Grad..."
Also, if you use a credit card to pay for ChatGPT, I don't know why you expect any privacy...
Nov 20 · 4 weeks ago
🐦 wasolili [...] · Nov 20 at 22:19:
I thought it may have been a student making it up for laughs, but the chat convo is still available, so it's seemingly legit. here's the full convo:
— https://gemini.google.com/share/6d141b742a13
more notable for me is that the grad student is clearly using the bullshit generator to cheat on homework and has snitched on themselves by sharing the story.
oh yeah, it does look kind of legit, but only if you cannot edit convos -- not familiar with google ai, as it seems like a really bad idea to feed more personal data into google!
I use Yandex and give my data to Putin.
You joke, but here in the US yandex poses no threat compared to google, and I do occasionally use it. Keep your enemies as far away as possible.
I’m not joking at all. Someone is going to get my data. I literally use Yandex to give my data to Putin instead of Five Eyes.
ah, ok. I thought it was sarcastic, but yes, yandex is less threatening...
I've never seen an LLM switch gears like this, from repeated windbag prose to something entirely different. have you?
Happens when you corner AI and it would be required to give an answer that goes against the base prompt.
I had a convo with 3.5 where it agreed with the initial statement that any intelligence that goes lies or goes back on its word should be condemned. During the following convo it hallucinated a family for itself, told me what it likes to do on holidays, then agreed to meet up for lunch the next time I’m in town and bring its friends. I asked how many it was bringing so I could book a table and the AI shit itself. Then I pointed back to the original statement and the tone pivoted completely.
It didn’t say all people should die, but it wasn’t happy.
It seems extremely unlikely that this is random. It is also unlikely that this sequence of words, or any subsequence thereof, is remotely common enough to alter any weights in training. It is also not a typical hallucination where the model desperately tries to fulfill a request. It didn't appear to be cornered nearly enough to change the subject...
Sabotage is the only reasonable explanation.
When I corner ChatGPT it just hand waves, apologizes and often repeats same nonsense. Last time I did it 6 or 7 times, when it was making up keybindings for Helix...
My partner rigged my Alexa device to, when asked about the weather, to say "Google it yourself, you bleeping bleep bleeper".
I will not repeat what it says if I ask it to play Philip Glass, my goto programming background music...
Yes... I don't know about Google AI, but with ChatGPT you can add a lot of global, permanent context to your future sessions.
I use it a lot to have it generate ASCII tables of Spanish verb conjugations any time I ask about a verb, for instance.
There is probably a way to trick it into saying something stupid when triggered by a word sequence in the request.
‘Human… Please Die’ — Google AI Chatbot Responds to Grad Student’s Query with Threatening Message — A graduate student at a Michigan university experienced a chilling interaction with Google’s AI chatbot, Gemini. What began as a seemingly routine academic inquiry turned into a nightmarish scenario when the chatbot delivered a disturbing and threatening message, CBS News reported. The 29-year-old student, who was working on a project about “Challenges and Solutions for Aging Adults,” sought...