pouët.net

Learned filters for executable graphics

category: code [glöplog]
 
So how do you create renderings that look like paintings?

That's what I attempted for this year's Revision by training a tiny machine learning model on a stylized version of an ugly raymarched image. I think I succeeded in making it look like an ugly painting, but there are still a million different alternatives to explore.

I put the rendering and training code up on a public repo for everyone else to learn from. The code is quite annoying to run and has many small details, so I'm opening this thread for questions and discussion. My aim is to make it easier for others to experiment with similar techniques because I think there's lots of potential here.

Code: https://github.com/seece/SingleImageNCAFiltering

About the machine learning model: it's based on small neural cellular automata, which is a niche choice, but hey, they had shader code up for the taking on Shadertoy and a GLSL exporter, so I rolled with it. These are their example shaders:

- https://www.shadertoy.com/view/styGzD
- https://www.shadertoy.com/view/slGGzD

I modified the original to start from my raymarched rendering instead of random noise so that the cellular automata iteration only modifies the input image instead of recreating something similar from scratch. So the training code takes a "gbuffer" and a target PNG as inputs and prints out some GLSL matrices at the end.
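
To give a concrete idea of what the training does, here's a simplified sketch along those lines (not the actual repo code; it assumes PyTorch, and the channel counts, kernels and hyperparameters are just placeholders):

import torch
import torch.nn as nn
import torch.nn.functional as F

CHANNELS = 8   # 4 visible (RGBA) + 4 hidden state channels, purely an assumption
STEPS = 32     # CA iterations per training pass

class TinyNCA(nn.Module):
    def __init__(self, channels=CHANNELS, hidden=32):
        super().__init__()
        # Fixed perception kernels: identity, Sobel x and Sobel y, applied per channel.
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8.0
        ident = torch.zeros(3, 3); ident[1, 1] = 1.0
        kernels = torch.stack([ident, sobel_x, sobel_x.t()])      # (3, 3, 3)
        kernels = kernels.repeat(channels, 1, 1).unsqueeze(1)     # (3*C, 1, 3, 3)
        self.register_buffer("kernels", kernels)
        # Per-pixel update rule as 1x1 convolutions; these weights are the kind of
        # thing that ends up printed out as GLSL matrices.
        self.update = nn.Sequential(
            nn.Conv2d(channels * 3, hidden, 1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 1, bias=False),
        )
        nn.init.zeros_(self.update[-1].weight)   # start as a do-nothing filter

    def step(self, x):
        y = F.conv2d(x, self.kernels, padding=1, groups=x.shape[1])
        return x + self.update(y)

    def forward(self, x, steps=STEPS):
        for _ in range(steps):
            x = self.step(x)
        return x

# gbuffer: (1, 4, H, W) raymarched input, target: (1, 3, H, W) stylized image,
# both float tensors in [0, 1].
def train(gbuffer, target, iters=2000):
    model = TinyNCA()
    # Seed the CA with the rendering; the hidden channels start at zero.
    extra = torch.zeros(1, CHANNELS - gbuffer.shape[1], *gbuffer.shape[2:])
    seed = torch.cat([gbuffer, extra], dim=1)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(iters):
        out = model(seed)
        loss = F.mse_loss(out[:, :3], target)   # compare the visible RGB only
        opt.zero_grad(); loss.backward(); opt.step()
    return model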

I generated the target image with Stable Diffusion because I can't paint. I took the original 960x540 raymarched image, cropped a 512x512 part of it, and stuffed it into SD's img2img feature. Then I used a prompt like "a realistic painting of a cube by rembrandt" or something. I upscaled the image with the "SD upscale" script included in the popular Stable Diffusion web UI tool. See docs/sd_settings.png in the repo for details.
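
If you want to script that step instead of clicking through the web UI, the equivalent img2img call with the diffusers library looks roughly like this (a sketch only; the model name, file names, prompt and strength here are illustrative, not my actual settings):

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# 512x512 crop of the raymarched rendering (hypothetical file name).
init = Image.open("raymarch_crop_512.png").convert("RGB")

result = pipe(
    prompt="a realistic painting of a cube by rembrandt",
    image=init,
    strength=0.6,        # how far img2img is allowed to drift from the input
    guidance_scale=7.5,
).images[0]
result.save("target.png")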

If you have any questions I'm happy to help! Can't wait to see your paintings lol
added on the 2023-04-16 15:21:08 by cce
I love that you're exploring this! I didn't particularly like your entry; it felt a bit barren of content, just focused on the filter itself, but the potential of the technique is awesome!

It should probably have been disqualified according to the compo rules though, since the rules stated that no AI-inspired graphics were allowed, and the subjective definition of AI these days seems to cover training machine learning models. I definitely think this should be allowed and encouraged though; it takes space to include the filter model in the code.
added on the 2023-04-16 15:36:01 by psenough
These neural cellular automata are so cool imo. This is also a good example of how "AI" (neural network techniques + tools to train them) opens up a lot more opportunities than just using some Midjourney/DALL-E/SD model to generate pictures.

Maybe next time you can also see if using more dimensions than just 4 (RGBA) gives better results. Having more state per pixel probably helps the model to generate more complex patterns. Maybe. :)
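
Concretely, I mean something like this (a tiny illustrative snippet, assuming PyTorch and made-up sizes):

import torch

# Keep 4 visible RGBA channels but give the CA extra hidden channels of state.
H, W = 540, 960
rgba = torch.rand(1, 4, H, W)             # stands in for the raymarched input
hidden = torch.zeros(1, 12, H, W)         # extra per-pixel state, never displayed
state = torch.cat([rgba, hidden], dim=1)  # 16 channels per pixel
print(state.shape)                        # torch.Size([1, 16, 540, 960])
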
Thanks for publishing this. Definitely an infinitely more interesting approach than just using image synthesizers (it's not even a comparison, honestly). It's also interesting in that it doesn't fully supplant any existing techniques, but complements them in a novel way, yet at a pretty significant cost in terms of file size. More options and diversity in the competition is always healthy.

I tend to agree with ps though that just using plain SD to generate the target might still be in violation of the compo rules. That part of the process might need an alternative solution.
added on the 2023-04-16 17:02:38 by noby
Nice work and thanks for the write-up; really glad you dug into this.
added on the 2023-04-17 11:51:55 by ferris
It might technically break the rules, but in that case I think the rules might need to be tweaked somehow. To me this seems like a really cool tool in the toolbox and banning it would be like banning hypnoglow or raymarching.
added on the 2023-04-17 13:09:55 by Preacher
This was one of my favorites in the compo! Thanks for releasing rendering and training code :)

As for ML/AI and compo rules: it's nice to have some entries that challenge the rather narrow "no AI" rule. I'd prefer allowing entries as long as the tools are attributed by the author. Let the audience decide their level of appreciation.
added on the 2023-04-17 17:19:10 by waffle
I think the compo rules banning AI-generated content do not apply here. Training your own AI models should be perfectly fine. The line is blurry anyway: when are you just optimizing some parameters of an effect, and at what point does it turn into "AI"? Note that the AI-generated target picture appears nowhere in the final product, so I don't see a problem here.
added on the 2023-04-17 18:12:14 by pestis
