💾 Archived View for oberdada.pollux.casa › gemlog › 2023-04-18_creativity_repression.gmi captured on 2023-04-26 at 13:04:33. Gemini links have been rewritten to link to archived content
-=-=-=-=-=-=-
There's a good side, and then there's a doomsday scenario. I'll try to be nuanced.
Who is pushing us artists to use AI? Nobody, because it promises unheard-of artistic liberation and a creative explosion – an opportunity we don't want to miss? Or is someone nudging us? And in the near future, when everyone uses some form of AI-assisted artistic tool, why resist?
I would argue there are a few good reasons to resist, or at least to reflect on the pros and cons. First and foremost: outsourcing creativity may have the effect of weakening our inherent creative capability. It's like driving a car instead of walking and staying in shape, or using a pocket calculator and forgetting how to do mental arithmetic, or taking photos everywhere instead of trying to recall what a place looked like. The danger of all these useful tools is that we stop exercising our own capabilities and gradually lose them. Creating art or music is in part a problem-solving skill, and one that can be largely outsourced to AI tools.
I've seen someone express the sentiment that it's useless to make music or art when AI tools can do it better – if not yet, then maybe soon. My reply is as above: practising the skills is a value in itself. On the other hand, using AI tools doesn't have to kill the user's creativity. Instead, it may push it in new directions, to new levels of abstraction, if not necessarily of quality.
Then there is the derivative nature of generative AI systems, which must be trained on large corpora of data, copyrighted or not. If you are a believer in intellectual property rights, theft appears to be a virtually indispensable part of generative AI. However, one must remember that in composition, painting, or sculpting the artist also builds on predecessors. Borrowing from others without it being easily noticed, or alluding to famous works while adding something new, is standard practice and even encouraged. Now, with AI models trained on previous art and music, new works can be created with negligible effort. In a culture already drowning in pictures and music, there will be a deluge of ever more. Stylistic mélanges will be the norm and the easily recognisable trait.
Other arguments for resisting have to do with the fact that machine learning consumes a lot of computation, and hence energy. I suppose this is a one-time cost: once the model is trained on a particular corpus, it may be used many times over to generate a wide variety of material. However, why would users be content with a single fixed corpus when it would be fun to constantly extend the model?
There is also a more sinister side to it, although this is up for debate. It may be argued that using AI for artistic purposes legitimises any of its more nefarious uses. One can easily argue against that conclusion: it's not the technology, it's what we do with it. That argument is often heard in the form: guns don't kill people, people kill people. Hence those who use their pet AI thing for artistic creation may claim that they vehemently oppose its use in oppression, surveillance, paper clip manufacturing, and all the rest, and that their use in no way contributes to any of that. On the other hand, those who use AI for art projects in enticing ways inadvertently do a kind of advertising for AI in the broad sense. It may not be their intent, but the effect will be much the same.
Doomsday perspectives have again reached public consciousness in recent months with the campaign to halt AI development for half a year, while a new startup devoted to truth and understanding the universe can go ahead on its own.
> ...an AI that cares about understanding the universe is unlikely to annihilate humans because we are an interesting part of the universe.
Surely you are joking, Mr Musk? In case I'm not being clear, let me point out the anthropomorphism of a computer model purportedly being able to find something interesting, as opposed to merely giving such an impression, not to mention the anthropocentric self-satisfaction in the assumption that humans, to some non-human entity, would be found interesting.
A fellow geminaut writes about the dangers of an AI enforced police state:
=> gemini://willowashmaple.smol.pub/2023-03-17-artificial-intelligence-ultimate-police-state
An AI + facial recognition system known as "Gabriel" has been pushed on schools, churches, synagogues, and community centres for free by some unknown philanthropist. Predictive policing is proposed to avert mass shootings, etc. What could possibly go wrong?
A long, more philosophical article:
I have also touched on this topic in a few earlier glögg posts.