i think it's funny that looking up "AI sustainability" brings up a lot of "here's how AI can save our planet" or whatever, which is just... not quite true, at least not the way things currently are.
because... all the moving parts really complicate and obfuscate the environmental impacts of the tech. so i (just some guy online) will try to think about just the tip of the iceberg: the neural network training and image generating part of things, the part where i interface with AI to create art.
the biggest issue i'm thinking of is the time and electricity it takes to train a network and to output any given image, as well as the hardware / infrastructure for storage space. so maybe what i'm wondering about is specifically low-energy AI, which is probably paradoxical, isn't it?
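to put rough numbers on that, here's some napkin math in python. every number below is me guessing, not a measurement:

```
# napkin math. every number here is a guess, not a measurement.
gpu_watts = 250            # a desktop GPU under load (assumption)
train_hours = 24           # one smallish training run (assumption)
train_kwh = gpu_watts * train_hours / 1000
print(train_kwh, "kWh per training run")   # 6.0 kWh here

gen_seconds = 10           # time to generate one image (assumption)
gen_kwh = gpu_watts * gen_seconds / 3600 / 1000
print(gen_kwh, "kWh per generated image")  # ~0.0007 kWh
```

the point being: training seems to dominate, but the per-image cost adds up fast if a model gets used a lot.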
it'd take scaling down basically everything:
(and resolution here means both dimensions (i.e. width and height) and color depth)
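squashing a dataset down that way could look something like this: a python sketch using Pillow, where the folders and target sizes are entirely made up:

```
# shrink both kinds of resolution: fewer pixels, fewer colors.
# assumes Pillow; the paths and numbers are placeholders.
from pathlib import Path
from PIL import Image

SRC = Path("dataset/full")    # hypothetical input folder
DST = Path("dataset/small")   # hypothetical output folder
DST.mkdir(parents=True, exist_ok=True)

for img_path in SRC.glob("*.png"):
    img = Image.open(img_path).convert("RGB")
    img = img.resize((64, 64))       # fewer pixels (dimensions)
    img = img.quantize(colors=16)    # fewer colors (palette mode)
    img.save(DST / img_path.name)
```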
the last two items are very easily actionable on the average artist's end (and are things i already practice), and a compressed / low resolution dataset makes sense if your output will be equally low res. however, training is its own whole can of worms, as are datasets, and this is where my (lack of) knowledge fails me.
wouldn't smaller datasets be less effective? won't training less make for a shittier AI, and could transfer learning help with that? a low-energy AI will most definitely have to be "shittier", of course (as in, it cannot be expected to generate the hyperrealistic images that seem to be the field's major goal), but how low-energy can we get before we just make a fancy noise generator?
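as for transfer learning, my understanding is that it means reusing weights somebody else already spent the energy training, and only training a small new piece yourself. a rough pytorch sketch of the idea (shown on a classifier for simplicity, but the reuse-frozen-weights trick applies to generative models too):

```
# rough sketch of transfer learning, assuming pytorch + torchvision.
import torch
import torch.nn as nn
from torchvision import models

# start from a backbone someone already paid the training energy for
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# freeze the pretrained weights so they don't get retrained...
for p in model.parameters():
    p.requires_grad = False

# ...and only train a small new head for your own task
model.fc = nn.Linear(model.fc.in_features, 10)  # 10 = made-up class count
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```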
would it be possible to have a GAN in that solar-powered DIY e-ink computer everyone seems to dream of?
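i don't know, but for a sense of scale, here's a hypothetical, deliberately tiny DCGAN-style generator in pytorch, sized for 32x32 1-bit-ish output. it comes out to tens of thousands of weights instead of the millions in the big models, which feels closer to what a machine like that could plausibly run:

```
# a hypothetical, deliberately tiny DCGAN-style generator (pytorch),
# sized for 32x32 single-channel images. threshold the tanh output
# at 0 to get 1-bit pixels for an e-ink screen.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    def __init__(self, z_dim=32, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ch * 4, 4, 1, 0),   # 1x1 -> 4x4
            nn.ReLU(True),
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1),  # 4x4 -> 8x8
            nn.ReLU(True),
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1),      # 8x8 -> 16x16
            nn.ReLU(True),
            nn.ConvTranspose2d(ch, 1, 4, 2, 1),           # 16x16 -> 32x32
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

g = TinyGenerator()
print(sum(p.numel() for p in g.parameters()))  # ~74k weights, not millions
```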