💾 Archived View for rawtext.club › ~winter › gemlog › 2023 › 2-23.gmi captured on 2023-07-10 at 14:37:40. Gemini links have been rewritten to link to archived content
-=-=-=-=-=-=-
I'm semi-active over at cohost, a lot less since I started writing in Geminispace and on the small web, mostly just posting pictures and some odd longer-form writing that fits better there than anything else. I feel a bit out of place there, being neither queer nor trans, but happy still to have a small presence in a community specifically welcoming to them.
Sometimes I see patch notes because someone rechosts them into my feed, and that was the case today: the devs talked about how Notion, maker of the software they use for drafting all their communications and patch notes, had just shoehorned in a very obnoxious feature, activating an open language model query every time you hit the space bar in an empty paragraph, and suggesting you use it whenever you start typing a slash command, which has traditionally been reserved for things like “creating section headings” or “embedding a file.”
> As a user, I want to be annoyed to shit.
> in the past month or two there has been a wave of otherwise uninvolved companies deciding to just rub GPT on their product and call it a feature. this is both obviously unsustainable and actively harmful to any sort of human-centered design, and we fucking hate it and wish it would go away.
I don't think I would've seen this at my last job, but I bet I would've at the job before that, where product management would try to hop on the latest trend to avoid, I don't know, falling out of some Gartner Magic Quadrant or some such thing. It's hard to overstate the hold that Gartner reports have on enterprise software and its buyers. Even if that's not the case in this particular instance, I'll bet some executive "surveyed the landscape" (read something stupid on LinkedIn) and decided that they weren't going to get beat on this world-changing Chat Jeep thing.
I'm hoping this is just a dumb trend, and that as the flaws of ChatGPT become better known, it comes to be seen less as a panacea and more as a tool that might be suitable in certain situations. But I don't think we're there yet. As James Vincent points out, ChatGPT is a mirror test, and a lot of ostensibly smart people are failing.
Introducing the AI Mirror test, which very smart people keep failing
(as an aside - are they actually smart if...never mind)
I recently heard the term "stochastic parrot", which I think is wonderful, and which as best I can tell comes from a paper by Emily M. Bender. What a lot of people don't seem to realize is that these chatbots aren't in any way attempts at intelligence, or sentience, or whatever else people are trying to project onto them - they're just predictive text, wildly better than previous generations, stringing together a few plausible sentences at a time.
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜
That's it. And in that sense, they're impressive, able to make sense of large amounts of data when something is particularly well known. But because they're trained on these massive datasets, they can only ever predict based on the training data. In that sense, they're delightfully simple. I can't get answers about Norm's World, a BBS I called in 1994 that ran on an Apple ][ in a guy's closet and had to be restarted periodically because of some glitch in the BBS software that caused it to freeze, because none of that information is online, at least not on the web, and if it's not on the web, it likely wasn't part of ChatGPT's training. Let's see what I can find out about Norm's World in ten years' time, if Geminispace gets hoovered up for training data as well.
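To make the "just predictive text" point concrete, here's a toy sketch of the core idea - a bigram model that counts which word follows which in a tiny made-up corpus, then generates by sampling likely next words. This is my own illustration, not how ChatGPT is actually built (real models learn billions of parameters over subword tokens), but the objective is the same: predict the next token from the ones before it, with no understanding involved.

```python
import random
from collections import defaultdict

# Tiny "training corpus" - the model can only ever recombine what's in here.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    # Sample the next word in proportion to how often it followed
    # `word` in training - just frequencies, no meaning.
    options = counts[word]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        if out[-1] not in counts:
            break
        out.append(next_word(out[-1]))
    return " ".join(out)

print(generate("the"))
```

Everything it emits is plausible-looking and grammatical within its tiny world, and it can never say anything about what wasn't in the corpus - which is the Norm's World problem in miniature.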
I'm waiting for people to fall out of love with ChatGPT, which seems to have got its hooks in people in a way that the last wave of exhausting bullshit (cryptocurrencies and NFTs) didn't. More "normal" people are falling for this. People aren't thinking critically about it, and I get it, not everyone has e.g. a computer science background and can reason about what these magical processes actually do. But smart people should know better and should try harder; smart people should at least be able to grasp the limits of these things and realize that the shine should wear off quickly. At some point maybe we'll stop seeing "ChatGPT for..." cluttering up the software and services we use. But we're not there yet. And the dystopia of the faintly dull marches on.