Jul 02, 2023
I'm not sure we talk enough about the impact of generative AI on our epistemic ecosystems (as Lorraine Code calls the intricate interconnection of systems of information flow that make it possible for us to know beyond our senses). I'm not convinced there's any merit to the vague Terminator-like threats of AI taking over the world, but I do see its potential to worsen some existing trends.
I should say up front that I'm not a huge fan of generative AI, just as I was never a big fan of chatbots. To me, natural language is a fairly clunky way to ask a computer to do something; I'd much rather use more formal means. I buy into Dutilh Novaes' characterization of formalisms as cognitive tools, and into Carnap's view of more precise language as progress. Maybe more fundamentally, I find chatting with ChatGPT or Vicuna an essentially frustrating experience, because these things always choose to interpret what I say in the way that is most convenient for them.
To me, whether we'll find valuable contributions in generative AI remains an open question. What it does well at the moment is output very generic text with better prose and structure than the average human's. But we typically write because we have something new or original to communicate, and our current AIs are more about learning what's recurrent. So, for lack of informative use cases, we now see them mostly filling uninformative ones: at their best, they help operators deal with angry clients, and at their worst, they are used to spew mountains of bullshit prose for search engine optimization.
My guess is that, even though they've already rendered search engines nearly useless, this is only the beginning. The issue is that, when evaluating the quality of content, we tend to look for outward signs, because we don't have any good method of determining whether something is new or original. Take research, a domain I have some acquaintance with. The first standard for research to be trusted is to have it peer reviewed. As a peer reviewer, I can tell whether a certain specific question is new in the field, in the sense that it has never been answered; but if it doesn't strike me as original, I can't tell whether other readers will feel the same, so I have to allow room for my own prejudices. So I'll likely be judging the methods, the structure of the argumentation, whether the author met their ambitions, etc.: all things that generative AI is likely to be good at, since it's basically copying the lowest common denominator of authors. Therefore, while peer review is a high bar, it can definitely be fooled by AI[^1]. You can be certain that journals are already overflowing with papers written by ChatGPT as we speak, and that lots of them are getting published.
[^1]: "AI makes plagiarism harder to detect, argue academics – in paper written by chatbot", The Guardian.
So you could pad a résumé with ChatGPT. But take someone a bit more creative and desperate. There is a "vanity press" industry in academia, made of journals that will publish almost anything you send them, for a substantial fee. But say you're aiming for government support programs for journals: you could pump up both volume and quality with papers from generative AI, which might attract some very junior researchers who need a few published papers to have a shot at scholarships. This is all well and good, but you still need to have some impact... how about sending fake submissions that cite your papers? You could even start new journals. Perhaps a whole empire based on bullshit. No doubt you would be found out at some point, but that would probably take a few years. And by then, your bullshit would be enshrined in the work of actual scientists, who would have built some hypotheses and rejected others on the basis of fake findings.
I doubt our institutions are ready for this.
🏷 A
To share a thought or add a comment: locha at rawtext dot club.
--EOF--