
HyperStyle: StyleGAN Inversion with HyperNetworks for Real Image Editing

Author: Hard_Space

Score: 85

Comments: 13

Date: 2021-12-01 07:25:38

Web Link

________________________________________________________________________________

Mizza wrote at 2021-12-01 09:40:40:

It's crazy how much research attention is going into creating Animorphs book covers.

fxtentacle wrote at 2021-12-01 15:20:59:

It's relatively easy* research with visually pleasing results, so it kind of makes sense to look into it if you have lots of GPUs and need to publish a paper to justify purchasing them.

* = The general field of GANs, encoders, and decoders, and StyleGAN in particular, has lots of working source code examples in a variety of languages. Plus, for us humans it is really easy to judge the resulting face images. And there's an abundance of curated data sets and pre-trained models. So this is pretty much the ideal best-case scenario for doing AI research: all the difficult and/or expensive tasks have already been done by others.

eMGm4D0zgUAVXc7 wrote at 2021-12-01 09:11:04:

Are there websites where you can try this kind of GAN-magic image editing conveniently online with a GUI, i.e. without having to install software, know how to program, or pay?

astrange wrote at 2021-12-01 09:13:43:

Using Colab isn't that bad - you can figure it out with some basic Python at worst, assuming their notebook extracts faces automatically.
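
For reference, the face-extraction step these notebooks run first is usually just detector- or landmark-based cropping. A rough stand-in sketch (not the actual HyperStyle notebook code, which does FFHQ-style landmark alignment; this assumes dlib is installed and the path is a placeholder):

```
# Rough stand-in for the face-extraction step such inversion notebooks run
# before encoding an image; the real notebooks use landmark-based alignment,
# this just detector-crops the largest face. Paths are placeholders.
import dlib
import numpy as np
from PIL import Image

detector = dlib.get_frontal_face_detector()

def crop_largest_face(path, margin=0.3, out_size=256):
    img = Image.open(path).convert("RGB")
    faces = detector(np.array(img), 1)  # upsample once to catch small faces
    if not faces:
        raise ValueError("no face found in " + path)
    face = max(faces, key=lambda r: r.width() * r.height())
    # Expand the box so hair and chin are included, square-crop, then resize.
    half = int(max(face.width(), face.height()) * (1 + margin)) // 2
    cx = face.left() + face.width() // 2
    cy = face.top() + face.height() // 2
    crop = img.crop((cx - half, cy - half, cx + half, cy + half))
    return crop.resize((out_size, out_size))
```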

I wish these tools would work with StyleGAN3; the earlier versions have some obvious artifacts where eyes/nose/mouth look 2D/"pasted on" even if the face is at an angle. Also, you can see in the examples here that the hair style editing is not good.

ArtWomb wrote at 2021-12-01 13:41:42:

If StyleGAN3 is just StyleGAN with anti-aliasing, I feel there should be a post-processing pass to smooth discontinuities. Lots of commercial 2D image filters in the wild exhibit pixel artifacts, and a simple averaging blur is enough to compensate. I wonder if there is something equivalent ;)
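
For what it's worth, that kind of pass is basically a one-liner; a minimal sketch with Pillow (filenames are placeholders):

```
# Simple averaging (box) blur over a generated image to soften pixel-level
# discontinuities, as suggested above. Filenames are placeholders.
from PIL import Image, ImageFilter

img = Image.open("stylegan_output.png")
smoothed = img.filter(ImageFilter.BoxBlur(1))  # 3x3 box average
smoothed.save("stylegan_output_smoothed.png")
```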

astrange wrote at 2021-12-01 14:01:47:

It is anti-aliased StyleGAN, but the anti-aliasing happens in the latent space inside the network (on concepts rather than pixels, sort of.)

I forget how it works exactly, but it causes the image to look more like it's made of flat textures.
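
Very roughly, the idea is that each nonlinearity inside the network is evaluated at a higher sampling rate and low-pass filtered before coming back down, so the intermediate feature maps stay band-limited instead of aliasing. A simplified sketch of that idea (not StyleGAN3's actual implementation, which is far more careful):

```
# Simplified illustration of an "alias-free" nonlinearity in PyTorch:
# upsample, apply the nonlinearity, then low-pass filter and downsample.
# Only meant to show the idea, not the real StyleGAN3 code.
import torch
import torch.nn.functional as F

def filtered_lrelu(x, up=2):
    # x: (N, C, H, W) feature map
    x = F.interpolate(x, scale_factor=up, mode="bilinear", align_corners=False)
    x = F.leaky_relu(x, 0.2)                # nonlinearity at the higher resolution
    return F.avg_pool2d(x, kernel_size=up)  # crude low-pass filter + downsample
```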

dave_sullivan wrote at 2021-12-01 11:51:43:

Not really; you need to install things, know how to program, and probably pay for GPU compute. That's true even with Colab, which is an OK choice if you don't own a beefy GPU: the consumer Nvidia cards--if you can get them at all--don't have enough memory these days, so Colab seems like the most heavily subsidized option.

The real (fatal) problem with Colab is that you write a bunch of code that needs decent system specs, then they suddenly bump you down to a shitty machine and your code no longer works, even when you're paying for their newest "Plus Plus Max" membership. They keep adding membership tiers and it barely works. Still the cheapest option, though.
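
One small mitigation is to check what you've actually been allocated at the top of the notebook, so the downgrade fails loudly instead of halfway through a run. A minimal sketch using standard PyTorch calls (the 12 GiB threshold is just an example):

```
# Fail fast if Colab handed us a weaker GPU than the notebook assumes.
import torch

if not torch.cuda.is_available():
    raise RuntimeError("No GPU allocated (Runtime > Change runtime type > GPU)")

props = torch.cuda.get_device_properties(0)
print(f"GPU: {props.name}, {props.total_memory / 1024**3:.1f} GiB")

# Example threshold only; pick whatever your model actually needs.
assert props.total_memory > 12 * 1024**3, "GPU has less memory than this notebook needs"
```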

knicholes wrote at 2021-12-01 15:10:51:

There's this:

http://gaugan.org/gaugan2/

Aliabid94 wrote at 2021-12-01 20:04:46:

Hugging Face Spaces has most of the trending models (including all the GANs) accessible online, e.g.

https://huggingface.co/spaces/nateraw/animegan-v2-for-videos

cabalamat wrote at 2021-12-01 13:36:18:

Artbreeder, though it's limited in what it can do:

https://www.artbreeder.com/

songeater wrote at 2021-12-01 15:54:54:

https://old.pollinations.ai/

mdrzn wrote at 2021-12-01 12:14:57:

This looks really interesting; I just wish there were an easier way for a total novice to use Colab.

moflome wrote at 2021-12-01 21:37:59:

Disclaimer: I'm on a sister team, but what are the friction points you hit when using Colab?