It's crazy how much research attention is going into creating Animorphs book covers.
It's relatively easy* research with visually pleasing results, so it kind of makes sense to look into it if you have lots of GPUs and need to publish a paper to justify purchasing them.
* = The general field of GANs, encoders, and decoders, and StyleGAN in particular, has lots of working source-code examples in a variety of languages. Plus, for us humans it is really easy to judge the resulting face images. And there's an abundance of curated data sets and pre-trained models. So this is pretty much the ideal best-case scenario for doing AI research: all the difficult and/or expensive tasks have already been done by others.
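For instance, sampling a face from a pre-trained generator takes only a few lines; the sketch below is roughly the pattern from the NVLabs stylegan2-ada-pytorch README (it assumes the repo's code is on the Python path and that an FFHQ checkpoint, here called ffhq.pkl, has already been downloaded):

```python
import pickle
import torch

# Roughly the usage shown in NVLabs' stylegan2-ada-pytorch README.
# Unpickling the checkpoint requires the repo's dnnlib/torch_utils modules
# to be importable; 'ffhq.pkl' is an assumed local filename.
with open('ffhq.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda()   # exponential-moving-average generator

z = torch.randn([1, G.z_dim]).cuda()     # random latent code
img = G(z, None)                         # None = no class labels; NCHW output in [-1, 1]
```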
Are there websites where you can try such GAN-magic image editing conveniently online with a GUI, i.e. without having to install software, know how to program, or pay?
Using Colab isn't that bad - you can figure it out with some basic Python at worst, assuming their notebook extracts faces automatically.
I wish these tools would work with StyleGAN3; the earlier versions have some obvious artifacts where eyes/nose/mouth look 2D/"pasted on" even if the face is at an angle. Also, you can see in the examples here that the hair style editing is not good.
If StyleGAN3 is just StyleGAN with anti-aliasing, I feel there should be a post-processing pass to smooth out the discontinuities. Lots of commercial 2D image filters in the wild exhibit pixel artifacts, and a simple averaging blur is enough to compensate. I wonder if there is something equivalent ;)
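For a purely pixel-space fix like that, a box (averaging) blur is a one-liner in Pillow; a minimal sketch, with an assumed filename and an arbitrary radius:

```python
from PIL import Image, ImageFilter

# Smooth pixel-level discontinuities in a generated image with a simple
# averaging blur. Filename and blur radius are illustrative assumptions.
img = Image.open("stylegan_output.png")
smoothed = img.filter(ImageFilter.BoxBlur(1))  # radius 1 = 3x3 box average
smoothed.save("stylegan_output_smoothed.png")
```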
It is anti-aliased StyleGAN, but the anti-aliasing happens on the intermediate feature maps inside the network (on learned features rather than output pixels, sort of).
I forget how it works exactly, but it causes the image to look more like it's made of flat textures.
Not really: you need to install things, know how to program, and probably pay for GPU compute. Even with Colab (which is an OK choice if you don't own a beefy GPU; the consumer Nvidia cards, if you can even get them, don't have enough VRAM these days, so the subsidized Colab hardware seems like the best deal).
The real (fatal) problem with Colab is that you write a bunch of code that needs decent system specs, then they suddenly bump you down to a shitty machine and your code no longer works, even when you're paying for their newest "Plus Plus Max" membership. They keep adding membership tiers and it barely works. Still the cheapest option, though.
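For what it's worth, a couple of lines at the top of a notebook will at least show what hardware the session actually got before a long run starts (a sketch assuming PyTorch, which the standard Colab runtime ships with):

```python
import torch

# Report which GPU (if any) this Colab session was assigned, so a silent
# downgrade is visible before kicking off a long run.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.total_memory / 1024**3:.1f} GiB VRAM")
else:
    print("No GPU assigned to this runtime.")
```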
There's this:
Hugging Face Spaces has most trending models (including all the GANs) accessible online, e.g.
https://huggingface.co/spaces/nateraw/animegan-v2-for-videos
Artbreeder, though it's limited in what it can do.
This looks really interesting; I just wish there were an easier way for a total novice to use Colab.
Disclaimer: I'm on a sister team, but what are the friction points you run into when using Colab?