
let's talk about AI art i guess

unavoidable subject i suppose. as i mentioned in my microlog, i don't know why everyone's talking about their hatred for AI art these days; it's not exactly the same as when dall-e 2 was unveiled and half of art twitter was running for the hills, panicking about their jobs [1]. it's less fear and more... contempt? disgust? just sheer *hatred*?

now, i don't wanna come across as a techbro or something. i get rather defensive when i hear people talking shit about AI art; only natural given that's my research and how a lot of my work has involved GAN generated images for the past year and a half (since october 2020 to be exact) [2]. but don't think this is a blind defense of the corporations and exploitative systems behind the current state of the art in AI image generation. i would like to talk about the medium or the technology itself (not that it can be understood in a vacuum, but let's cross that bridge when we get to it).

the exploitation

there are two areas in which machine learning can be exploitative: hardware and software.

hardware can't quite be helped, in some ways, because the entire electronics production chain hinges on exploitation of the environment and of workers around the world (but mainly, in keeping with imperialism, in the global south), and as such, can anyone truly have spotlessly clean hands when tweeting from their phone? of course, AI research is part of the push for "better", more powerful new hardware, and with more compute power comes more energy use, and i'm certainly not denying that [3].

as for software, the path splits. what people are probably most aware of when it comes to image generation is stuff like dall-e, neuralblender (stole code for profit btw [4]) and artbreeder (the site where i first encountered GANs). these large models use fittingly large datasets like imagenet, which attempt to collect and classify the entirety-ish of our world (specifically, imagenet builds on wordnet and uses just nouns, because those are supposed to be possible to illustrate with e.g. a photo. it's built around image classification, which is what makes it possible to use prompts).
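
(quick aside: here's a tiny python sketch of what i mean by imagenet being built on wordnet nouns. this isn't from any of my actual work, just an illustration, and it assumes you have nltk plus its wordnet corpus downloaded; "n01440764" is just one example imagenet class id.)

```python
# a minimal sketch: imagenet class ids are wordnet noun synset offsets
# with an "n" prefix, which is why the dataset is noun-only and
# classification-shaped. assumes nltk + the wordnet corpus are installed.
from nltk.corpus import wordnet as wn

def wnid_to_synset(wnid: str):
    """map an imagenet wnid like 'n01440764' back to its wordnet synset."""
    return wn.synset_from_pos_and_offset(wnid[0], int(wnid[1:]))

syn = wnid_to_synset("n01440764")  # example imagenet class id
print(syn.name(), "-", syn.definition())
```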

well, where are these images coming from? internet scraping, no permissions, no nothing, because if it's online it's data and it's free for the taking. not to mention the way datasets are labeled (both the words used to describe photos and the crowdsourcing involved) [5].

i've also seen artists talk about how behind every algorithm there's also a mountain of stolen art, specifically. and... i'm not entirely sure how to feel about this sentiment. in a way, of course, they might be correct: how could the phrase "trending on artstation" work if the machine had not been fed with artwork scraped from artstation? (genuine question here, the field of AI isn't known for its transparency; it does strike me as odd that images would have been directly scraped from the website and properly labeled though. much to think about!).

on the other hand though, i just kinda also hear "how dare you put my artwork into the image soup that you use to create yours?", you know? i don't have much to elaborate on this; of course tone is a tricky thing to transmit through text, especially when we don't know each other personally and i have my own biases. but... doesn't it kinda sound like this artist finds AI art lazy because it is necessarily derivative, very clearly and physically pulling from other sources (which might also be other artwork), and as such the output images can only be good if they're mirroring good art that served as input fodder?

well, and what is the other road (the one less traveled by, maybe)? artists such as casey reas, of course, who use their own datasets, created and curated by them, tailor-made for their needs, often using their own artwork and photography. in that case, it boils down to each artist's ethics. if one were to create a dataset entirely out of a contemporary artist's body of work, that can lead to a weird spot. this hypothetical strawman artist might be doing it for commentary (say, criticizing banksy for making what feels like lame limp-wristed political commentary) or for profit (copying a successful artist's style to make up for their own lackluster artistic skills). you see, AI art is just not all built the same.
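
(for the curious, the "roll your own dataset" route usually starts with something like this pytorch/torchvision sketch. the folder name and sizes are placeholders i made up, not anything specific to my own setup.)

```python
# rough sketch of loading a personal, self-curated image folder for GAN training.
# assumes pytorch + torchvision; "my_artwork" and the 64px size are placeholders.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize(64),                        # small resolution keeps training cheap
    transforms.CenterCrop(64),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),   # scale to [-1, 1], the usual GAN convention
])

# ImageFolder expects my_artwork/<subfolder>/*.jpg; the class label is ignored for a GAN
dataset = datasets.ImageFolder("my_artwork", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True, num_workers=2)

for real_images, _ in loader:
    # feed real_images to the discriminator, sample noise for the generator, etc.
    pass
```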

the aesthetics

i've seen AI art (as a whole) be called boring, bland, uninteresting, lazy. which is wild to me, not only because GANs can generate very compelling ambiguous imagery, but because the variety of aesthetics generated (hinging on dataset curation and a certain degree of serendipity) makes it so the only thread linking all AI art tends to be that ambiguity — also shared with traditional artwork!

because of the dataset conundrum, there tends to be a certain homogenization among images generated from the same models, sure. there is AI work i find uninteresting, boring, so on and so forth, because there's a lot of bad art in the world overall. and so i wonder: how much of the aesthetic criticism comes from ideological criticism, and how much comes from a simple lack of interest in the medium, leading to a limited repertoire?

conclusion

look, i'll be the first to admit AI art has its flaws, many of which are caused by capitalism itself. have you any idea how hard it is for someone with no programming background to figure out how to simply write and train their own model for their own purposes? unlike other forms of generative art, like writing processing code, the barriers to entry are much higher, particularly when you're trying to find alternate, indie, DIY ways of using machine learning for generative art. it also does not help that NFTs absolutely ravaged the generative art landscape and made looking for resources after the year 2021 an extremely unsanitary task.
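
(to illustrate the contrast: a complete little generative sketch, done here in python with PIL rather than processing since that's what i had on hand. it's a made-up toy, but ten-ish lines and zero datasets versus wrangling GPUs, datasets and training loops is kind of the whole point.)

```python
# a toy generative sketch: random colored circles on a black canvas.
# assumes only pillow is installed; nothing here is from the post itself.
import random
from PIL import Image, ImageDraw

img = Image.new("RGB", (512, 512), "black")
draw = ImageDraw.Draw(img)
for _ in range(200):
    x, y = random.randint(0, 512), random.randint(0, 512)
    r = random.randint(5, 60)
    color = (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255))
    draw.ellipse((x - r, y - r, x + r, y + r), outline=color)
img.save("sketch.png")  # a whole generative piece, no dataset, no training
```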

entirely writing the artform off, though? who does that even help? why all the indiscriminate vitriol?

i personally suspect the saturation of bad generative art through NFTs is partially to blame, actually. if you're online enough to have seen those goddamn monkeys and shitty pixel avatars all the time, i guess you can start to really dread generative art. especially with the sadly significant amount of AI generated NFTs, some of which are actually aesthetically interesting (i truly fight a thousand wars every day with my interest in this medium). (i'd blame that intersection on the one thing uniting machine learning and blockchain, which is compute power. boo, hiss)

but this wholesale dismissal of generative art is also bonkers to me; it's an artform that's been around for as long as computers have (arguably even earlier! you could probably even call islamic geometric art generative!). it's as if early instagram's bad phone photos with those godawful filters suddenly made people dismiss photography entirely. i just find it all rather weird.

footnotes

[1] "AI will replace artists" is a smokescreen

[2] AI artist statement

[3] can AI be sustainable?

[4] neuralblender code stealing moments

[5] excavating AI: the politics of images in machine learning training sets