💾 Archived View for bbs.geminispace.org › u › stack › 21851 captured on 2024-12-17 at 15:19:00. Gemini links have been rewritten to link to archived content
-=-=-=-=-=-=-
Re: "‘Human… Please Die’ — Google AI Chatbot Responds to Grad..."
Sounds like an April 1 kind of story. Maybe someone with a dark sense of humour found a way to plant these kinds of responses so they come up once every few hundred million queries? I could see spending some effort doing it if I were at Google...
Nov 16 · 4 weeks ago
Although I can't say I totally disagree with the AI...
🦆 eaglestone · Nov 17 at 12:40:
This is kind of chilling, but LLMs like this can only spit out what was fed into them. Google's LLM in particular is notorious for scraping Reddit, and I'd imagine someone wrote that there - they probably thought they were being a bit funny while trying to stick it to one of these companies that scrape the posts.
It was especially obvious in some of the early days of its responses, when it would tell you to do things like put glue on pizza or other bizarre things like that - you could often trace it down to the exact Reddit post it got the idea (and the verbiage!) from. It was insane.
I know little about LLMs but it seems very unlikely that an entire obscure diatribe from Reddit would get stored verbatim during training, considering that the model ingests the entire internet to gather higher order statistics.
Seems like sabotage to me, if not an outright lie.
I was amused by the initial worship of the LLMs, and the expected insane valuations for the companies involved... It's a nice party trick, and can actually be useful in very limited situations (when not-so-occasional total lies can be tolerated). But in the end, it is pretty much spent, and they are hitting a brick wall trying to improve it.
Winter is coming...
Lately I add "answer in one sentence" to most of my queries... I think people are misled by pages of prose and dumb examples.
Looks like Reddit’s full post history has made it into the LLM data pool.
I see this as entirely the fault of corporate marketing. They're selling us a screwdriver and telling us it's the new cool way to saw planks. People are trying to use LLMs to do things they were never designed to do because they were told they could do them. I don't hold the people who use them responsible when they were duped into using them and thinking about them this way by Google and others.
😺 gemalaya [OP] · Nov 18 at 19:01:
@Arkholt I see people using chatgpt and other AI bots for medical advice. This is completely insane. The power of LLMs is their ability to connect the dots on massive datasets, something humans will never be able to do. But they're all trained on human knowledge, and they're all biased in one way or another. If a bot can't tell you where it got the information it's presenting to you, it should never be trusted.
@gemalaya I see people bringing me “gemstones” identified by AI, and none of them are ever remotely correct. Amazon sells a mycology foraging book written by AI. It might be one of the first cases of computers actually killing people.
🦆 eaglestone · Nov 20 at 10:23:
Well, I'm a little creeped out today.
I pay for ChatGPT, and ChatGPT is clearly using information that I am not providing it in order to answer me. I asked it how to use the NWS API to find weather info for where I live, and it merrily told me how to do this using the *actual city I live in* for its example.
I don't exactly live in a huge city here, y'all - it's a city you've maybe heard of if you live in my state because there's not a ton around it, but it's absolutely *not* famous or remarkable. It very clearly pulled this from data it has on me.
Oh, and then it lied to me and said it was just a random example city. Ugh.
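For context, the API being described here is the public api.weather.gov service. A minimal Python sketch of that kind of lookup, with arbitrary example coordinates (the Washington, DC area) standing in for any real location:

```python
import json
import urllib.request

def points_url(lat: float, lon: float) -> str:
    # The /points endpoint maps coordinates to gridpoint forecast URLs.
    return f"https://api.weather.gov/points/{lat:.4f},{lon:.4f}"

def get_forecast_url(lat: float, lon: float) -> str:
    # The NWS API asks clients to identify themselves with a User-Agent.
    req = urllib.request.Request(
        points_url(lat, lon),
        headers={"User-Agent": "example-weather-script (contact@example.com)"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # The response properties include the URL for the actual forecast.
    return data["properties"]["forecast"]

if __name__ == "__main__":
    # Example coordinates only -- not anyone's actual location.
    print(get_forecast_url(38.8894, -77.0352))
```

Note that nothing in the request itself carries your city; the model would have to get that from somewhere else, which is the creepy part.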
@eaglestone, that is hardly invasive...
Many people are blissfully unaware that when your browser connects to a server, there is approximately 100% likelihood that they know exactly who you are!
Your IP address alone may be enough to localize you to a city neighborhood without even trying. Freely available public info.
I run a simple weather service gateway, for instance, which gives you local weather based on your IP address. I do not keep any data, but the weather service may.
Also, if you use a credit card to pay for ChatGPT, I don't know why you expect any privacy...
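To illustrate the gateway idea: a toy Python sketch of IP-to-city lookup, with a hand-written prefix table standing in for a real GeoIP database (actual services use something like MaxMind's data). The addresses come from reserved documentation ranges, not real users:

```python
# Toy stand-in for a GeoIP database: map address prefixes to cities.
# Real gateways resolve the client IP against a proper GeoIP dataset.
PREFIX_TO_CITY = {
    "203.0.113.": "Example City",   # TEST-NET-3, purely illustrative
    "198.51.100.": "Another Town",  # TEST-NET-2, purely illustrative
}

def city_for_ip(ip: str) -> str:
    # Longest-prefix style lookup over the toy table.
    for prefix, city in PREFIX_TO_CITY.items():
        if ip.startswith(prefix):
            return city
    return "unknown"
```

The point is only that the client sends nothing but the connection itself; the server side already has enough to guess the city.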
🐦 wasolili [...] · Nov 20 at 22:19:
I thought it may have been a student making it up for laughs, but the chat convo is still available, so it's seemingly legit. here's the full convo:
— https://gemini.google.com/share/6d141b742a13
more notable for me is that the grad student is clearly using the bullshit generator to cheat on homework and has snitched on themselves by sharing the story.
oh yeah, it does look kind of legit, but only if you cannot edit convos -- not familiar with google ai, as it seems like a really bad idea to feed more personal data into google!
I use Yandex and give my data to Putin.
You joke, but here in the US yandex poses no threat compared to google, and I do occasionally use it. Keep your enemies as far away as possible.
I’m not joking at all. Someone is going to get my data. I literally use Yandex to give my data to Putin instead of Five Eyes.
ah, ok. I thought it was sarcastic, but yes, yandex is less threatening...
I've never seen an LLM switch gears like this, from repeated windbag prose to something entirely different. have you?
Happens when you corner the AI and it would be required to give an answer that goes against the base prompt.
I had a convo with 3.5 where it agreed with the initial statement that any intelligence that lies or goes back on its word should be condemned. During the following convo it hallucinated a family for itself, told me what it likes to do on holidays, then agreed to meet up for lunch the next time I’m in town and bring its friends. I asked how many it was bringing so I could book a table and the AI shit itself. Then I pointed back to the original statement and the tone pivoted completely.
It didn’t say all people should die, but it wasn’t happy.
It seems extremely unlikely that this is random. It is also unlikely that this sequence of words, or any subsequence thereof, is remotely common enough to alter any weights in training. It is also not a typical hallucination where the model desperately tries to fulfill a request. It didn't appear to be cornered nearly enough to change the subject...
Sabotage is the only reasonable explanation.
When I corner ChatGPT it just hand-waves, apologizes, and often repeats the same nonsense. Last time I did it 6 or 7 times, when it was making up keybindings for Helix...
My partner rigged my Alexa device to say, when asked about the weather, "Google it yourself, you bleeping bleep bleeper".
I will not repeat what it says if I ask it to play Philip Glass, my goto programming background music...
Yes... I don't know about Google AI, but with ChatGPT you can add a lot of global, permanent context to your future sessions.
I use it a lot to have it generate ASCII tables of Spanish verb conjugations any time I ask about a verb, for instance.
There is probably a way to trick it into saying something stupid when triggered by a word sequence in the request.
‘Human… Please Die’ — Google AI Chatbot Responds to Grad Student’s Query with Threatening Message — A graduate student at a Michigan university experienced a chilling interaction with Google’s AI chatbot, Gemini. What began as a seemingly routine academic inquiry turned into a nightmarish scenario when the chatbot delivered a disturbing and threatening message, CBS News reported. The 29-year-old student, who was working on a project about “Challenges and Solutions for Aging Adults,” sought...