Now you can generate cheesy NFT art from public domain images.
This is very neat, but if you want to see a true (human) master at work, check out Mark Ferrari's GDC talk here:
I'd love to see an algorithm that could approximate this degree of mastery :)
The problem is that none of these posterisation algorithms (reducing bit depth of channels and optionally dithering) match the quality of somebody placing the pixels by hand. It always looks more like 'deep fried' images as opposed to pixel art. None of the samples are appealing (except for perhaps that one with palm trees).
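To make the point concrete, here's a minimal sketch of the kind of posterisation being described (plain channel bit-depth reduction versus palette reduction with Floyd–Steinberg dithering). It assumes Pillow and NumPy; the filename and colour counts are placeholders, not anything taken from Pyxelate:

```python
from PIL import Image
import numpy as np

img = Image.open("input.png").convert("RGB")

# Naive posterisation: keep only the top 3 bits of each channel
# (8 levels per channel), which produces the harsh 'deep fried' banding.
arr = np.asarray(img)
posterised = Image.fromarray(arr & 0b11100000)

# The same reduction to a small adaptive palette, but with Floyd-Steinberg
# dithering; the banding is smoothed, yet nothing is placed the way a
# pixel artist would place it.
dithered = img.convert(
    "P", palette=Image.ADAPTIVE, colors=8, dither=Image.FLOYDSTEINBERG
).convert("RGB")

posterised.save("posterised.png")
dithered.save("dithered.png")
```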
The Blade Runner one is missing the buildings, robocop has no mouth, BJ is missing part of the right side of his head.
Just started watching that talk; lots of extremely impressive artwork. I'd never heard of the colour-cycling techniques he talks about before, which are very neat.
Interactive demo here (which is how I became aware of the submitted repo, although neither are created by me):
https://huggingface.co/spaces/akhaliq/Pyxelate
One case in which ANN (neural-network) intervention could possibly help discard unwanted results.
See also...
https://github.com/hpjansson/chafa
Thanks so much for this! I'm not an artist at all, so this will be great for my indie game dev needs.
"I made a program that makes images look worse" seems like a realistic, but weird claim.
And accurate. It's just re-scaling and palette indexing, no different than what people toyed around with using Photoshop in 1998. It looks nothing at all like hand-pixeled art.
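For anyone curious what that rescale-and-index workflow looks like in code, here's a rough sketch assuming Pillow; the filename, block size, and colour count are arbitrary placeholders:

```python
from PIL import Image

img = Image.open("photo.png").convert("RGB")

# Downscale so each output pixel averages a block of the source...
small = img.resize((img.width // 8, img.height // 8), Image.BILINEAR)

# ...index it against a small adaptive palette...
indexed = small.convert("P", palette=Image.ADAPTIVE, colors=16)

# ...and blow it back up with nearest-neighbour so the blocks stay crisp.
pixelated = indexed.resize(img.size, Image.NEAREST)
pixelated.save("pixelated.png")
```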
I'm kinda curious how this compares to imagemagick -> limit the colorspace. Though getting imagemagick to produce some of those styles of images may be difficult or impossible (like the purple/pink pattern on the bottom left corgi).
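Something along these lines gets close with stock ImageMagick. This is only a rough sketch assuming the classic `convert` CLI (use `magick` on ImageMagick 7); the percentages and colour count are placeholders:

```sh
# Downscale (box average), quantise to 16 colours with dithering,
# then scale back up by pixel replication so the blocks stay sharp.
convert input.png -scale 10% -dither FloydSteinberg -colors 16 -scale 1000% output.png

# A fixed palette (e.g. the purple/pink one on the corgi) would need
# -remap with a palette image rather than letting -colors pick the colours.
```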