
Texture generator questions

category: general [glöplog]
So I was hoping to ask these questions at NVScene instead of here because of pouetification (I know I'm getting myself into trouble here), but lately the number of technical threads led me to ask here anyway (besides, people were busy with their entries and I didn't want to annoy anyone).

some technical details:
all data is vec4 (float4) (or at least will be)
size of a texture is always NxN where N = 2^x
three types of 'nodes' - generator, modifier and operator.

So to my questions:
1) How do you deal with creating textures of different sizes? I'm having a hard time making some of my filters produce the same output across different texture sizes (especially big jumps such as from 256x256 to 4096x4096). This happens mostly with kernel-based filters such as normal map and blur.

2) This is somewhat the same as the first question, but more to the point: for a generator that creates a buffer with random pixels on it, how would you deal with the same problem as above? Since one pixel is one pixel, should it be scaled to a rect when the texture grows? If so, should it shrink when the texture gets smaller? Since I'm using a 'default' texture size of 256x256 (0-255 byte range), I wonder what 'pixel' rect size a smaller texture (say, 32x32) would have.

3) Quality vs. speed vs. size: which one do you favor? I've seen some implementations that use a fake perlin noise instead of the original one, which increased speed and decreased size but severely hurt quality. I myself prefer quality over everything else and then size-optimize the code manually with various tricks - but perlin noise is still slow and will always remain so. I'd just like to hear some opinions.

4) What (uncompressed) code size should I aim for? Currently it's 15k including PNG export.

I'd appreciate help and some healthy discussion.
added on the 2008-08-28 10:43:29 by LiraNuna
Quote:
a generator that creates a buffer with random pixels on it [...] should it be scaled to a rect when the texture grows?

what benefit do you have from the bigger texture then?
let's translate it to sound:
if you're working at 44.1kHz instead of 22.05kHz, would you just store every sample twice?
I suggest visiting a lecture on signal processing (seriously).
added on the 2008-08-28 11:10:49 by hfr
LiraNuna, I'm not an expert in texture generation, but your question looks interesting, so here's my opinion:

1) I suppose that if you are using different texture sizes, the best approach is to use a vectorial format. For example, use width and height values in the range [0, 1) as floats. For blur you will need a blur radius, and for normal maps you have to take into account the step size when calculating the slopes.

2) In a vectorial system there are no points (points are not directly representable). You can have circles, or squares, with a radius or a width, for example.

3 and 4) I don't know.
added on the 2008-08-28 12:01:16 by texel
3) I do my procedural texture buildup with GLSL. You can make perlin noise in a pixel shader, and give it a random-value texture as input. Should be pretty fast. I can animate my textures (change perlin noise values) in each frame with a decent framerate; my textures are 512x512.
added on the 2008-08-28 12:05:22 by xTr1m
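For illustration, a rough CPU-side sketch of the same idea in plain C++ rather than GLSL (not xTr1m's actual shader; the hash constants and function names are arbitrary): value noise looked up from a hashed random lattice at normalized coordinates, so the result depends only on (u, v) and the chosen frequency, never on the output resolution.

Code:
#include <cmath>

// Hash a lattice point to a pseudo-random value in [0,1).
// The constants are arbitrary; any decent integer hash works.
static float latticeValue(int x, int y)
{
    unsigned int n = (unsigned int)x * 73856093u ^ (unsigned int)y * 19349663u;
    n = (n << 13) ^ n;
    n = n * (n * n * 15731u + 789221u) + 1376312589u;
    return (n & 0x7fffffffu) / 2147483648.0f;
}

// Value noise at normalized coordinates (u, v), with 'freq' lattice cells
// across the texture. The result is identical for any output resolution.
float valueNoise(float u, float v, int freq)
{
    float x = u * freq, y = v * freq;
    int xi = (int)std::floor(x), yi = (int)std::floor(y);
    float fx = x - xi, fy = y - yi;

    // smoothstep fade so cell borders don't show
    fx = fx * fx * (3.0f - 2.0f * fx);
    fy = fy * fy * (3.0f - 2.0f * fy);

    float v00 = latticeValue(xi,     yi);
    float v10 = latticeValue(xi + 1, yi);
    float v01 = latticeValue(xi,     yi + 1);
    float v11 = latticeValue(xi + 1, yi + 1);

    float a = v00 + (v10 - v00) * fx;
    float b = v01 + (v11 - v01) * fx;
    return a + (b - a) * fy;
}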
Quote:
what benefit do you have from the bigger texture then?

Uhm.. quality? Scalable operators mean that you can create huge textures with more detail than their smaller 'fathers'.

Comparison of 512x512 with 1024x1024 (not the same output, but still the same look)
[images: 512x512 and 1024x1024 renders]

About your comparison to sound: the answer would be to 'stretch' the sound to double its length, leaving blanks where samples are missing. But since it's generated, you would have actual extra values that give it the extra quality.

texel: I'm using 0-255 as 0-1 since bytes are easier to store (storing floats isn't fun either).

xTr1m: Thanks for your suggestion, GLSL textures will be the next step; I'm just playing around for now. Also, are your textures always 512x512? My concern is the increasing use of HD demoparty screens, which kind of forces you to use hi-res textures to avoid interpolation 'artifacts'.
added on the 2008-08-28 21:13:25 by LiraNuna
LiraNuna: the format for the floats should not be a problem. A float can be stored as a byte, 2 bytes, 5 bits, or whatever.

I was not talking about the format, but about working vectorially...
added on the 2008-08-28 21:50:39 by texel
Hi Liran!

I think your best bet is to always work at the biggest resolution needed, and then scale down later if necessary for memory or speed purposes.

For a tree of operations that would mean that if a node wants 1024x1024 then all its sources should be forced to 1024x1024, and that forcing should be recursive up to the root generators.

Regarding "pixels" when scaling up or down, I'd say that the generic case is an antialised circle with subpixel position (ie: position is a float from 0 to 1) and float radius where 0 = invisible and 1 = a dot as big as the texture. That solves your scaling problem nicely.

Sampling an antialiased circle whose effective resolution is less than 1 pixel can be solved easily by doing an analytical calculation on the 9 affected pixels, comparing the exact area that the circle covers in each square.
added on the 2008-08-28 22:51:48 by winden
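A simplified sketch of such a resolution-independent "pixel" (not winden's exact method: coverage here is approximated with a one-pixel-wide edge instead of the exact circle/square area calculation he describes, and the function and parameter names are made up):

Code:
#include <algorithm>
#include <cmath>

// Plot an antialiased dot into a single-channel float texture.
// Position and radius are normalized (0..1), so the same parameters give
// the same-looking dot at any resolution.
void plotDot(float *texture, int size, float cx, float cy, float radius)
{
    float r  = std::max(radius * size, 0.5f);  // radius in pixels, at least half a pixel
    float px = cx * size, py = cy * size;      // centre in pixel coordinates

    int x0 = std::max(0,        (int)std::floor(px - r - 1.0f));
    int x1 = std::min(size - 1, (int)std::ceil (px + r + 1.0f));
    int y0 = std::max(0,        (int)std::floor(py - r - 1.0f));
    int y1 = std::min(size - 1, (int)std::ceil (py + r + 1.0f));

    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
        {
            // distance from pixel centre to dot centre, mapped to coverage
            // over a one-pixel-wide transition band around the radius
            float dx = x + 0.5f - px, dy = y + 0.5f - py;
            float d  = std::sqrt(dx * dx + dy * dy);
            float coverage = std::min(1.0f, std::max(0.0f, r + 0.5f - d));
            texture[y * size + x] = std::max(texture[y * size + x], coverage);
        }
}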
When implementing kernel-based filters you cannot hardcode the kernel weights; you have to compute them on the fly depending on the sampling frequency (aka texture resolution), just like in sound synthesis.

I personally use floats for colors and floats for pixel coordinates, always from 0 to 1, so all textures are resolution independent.
added on the 2008-08-28 23:25:53 by iq
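A concrete sketch of that (an illustration, not iq's actual code): if the blur radius is specified as a fraction of the texture width, the kernel has to be rebuilt per resolution, so a 256x256 and a 4096x4096 texture get blurs that cover the same fraction of the image.

Code:
#include <cmath>
#include <vector>

// Build a 1D Gaussian kernel for a blur radius given as a fraction of the
// texture width. The number of taps grows with the resolution, but the
// blur looks the same relative to the image at every size.
std::vector<float> gaussianKernel(float normalizedRadius, int textureSize)
{
    float sigma = normalizedRadius * textureSize;   // radius in pixels
    if (sigma < 0.25f) sigma = 0.25f;               // avoid a degenerate kernel
    int taps = (int)std::ceil(sigma * 3.0f);        // cover roughly 3 sigma

    std::vector<float> kernel(2 * taps + 1);
    float sum = 0.0f;
    for (int i = -taps; i <= taps; ++i)
    {
        float w = std::exp(-(float)(i * i) / (2.0f * sigma * sigma));
        kernel[i + taps] = w;
        sum += w;
    }
    for (size_t i = 0; i < kernel.size(); ++i)      // normalize so the weights sum to 1
        kernel[i] /= sum;
    return kernel;
}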
In order of decreasing silliness:

4) Depends what you're trying to achieve.

3) Depends on the context. For a 4k, obviously size; for a 64k, size as well, but with more room for speed/quality concerns. Generally, there are much better ways to make visuals look good than with massive textures, I think. I'd rather see something fresh done in low res than a higher-res version of something I've seen a million times before. And on the big screen all that detail tends to disappear anyway.

2) Depends what you're trying to achieve with the texture. 256x256 texels of random colour won't have any clearly defined structure, so there's no "correct" way that its 512x512 counterpart should look in relation to that. If the idea is to stay "true to the original", then simply scaling the 256x256 texture (nearest-neighbour/bilinear/whatever-looks-best-for-your-particular-application) would give you the best results, but this would defeat the purpose of using a bigger texture anyway.

Another way to look at random noise is this: as you scale the texture down with filtering it converges on a 50% grey image. If the "ideal" then is a texture of infinite size and detail, anything of finite size would simply be a flat 50% grey.

1) Making filters behave the same way at different scales is anything from trivial to impossible. Gaussian blur, for instance, scales very easily. I think the key is to stick to precisely defined filters. If you have a clear idea of what the filter does, doing the same thing at a higher resolution wouldn't be hard. But if you've arrived at a result just by putting random numbers into a kernel to see what happens, then you have the same problem of no clear definition of what it's "supposed" to look like at a different resolution.

Woah big answer. You may pouetize the thread now.
added on the 2008-08-28 23:55:14 by doomdoom
As for upscaling your textures, have a look at this.

[images: exemplar, jacobian field, result]
added on the 2008-08-29 00:55:57 by Inopia
Quote:
For a tree of operations that would mean that if a node wants 1024x1024 then all its sources should be forced to 1024x1024, and that forcing should be recursive up to the root generators.

That's what I do, but the size is determined by the render node as a log2 number: 8 = 256, 10 = 1024, etc. The problem with your idea is that I cannot scale UP (for example to 4096x4096), and rendering at those resolutions will take a fair amount of time.

Quote:
When implementing kernel-based filters you cannot hardcode the kernel weights; you have to compute them on the fly depending on the sampling frequency (aka texture resolution), just like in sound synthesis.

Do you mind giving me an example of how the kernel should scale?

Inopia: wow, this is amazing - but it's not really what I meant. I want a 256x256 texture to look 'the same' as a texture rendered (using the same nodes/params) at a hi-res size such as 4096x4096, which will of course have more detail.
added on the 2008-08-29 07:23:58 by LiraNuna
@lira: I know what you wanted to achieve, I'm just saying it might be a nice thing to add to your toolbox.

I haven't read the paper, so I don't know if it's easy to implement in realtime, but it looks like a cool effect :)
added on the 2008-08-29 08:23:49 by Inopia
Inopia: Ah, I see, this is indeed a wonderful addition, and I'll have to look at it. I've got most of my core done; now I'm polishing up, size-optimizing, fixing bugs and then adding a GUI.

According to the demos (videos) they've shown, this seems to be more than possible to do in real time.
added on the 2008-08-29 08:25:34 by LiraNuna
and now its time to pouetificate everything!@!$@#$! tinfoil penis texture!@!@!
added on the 2008-08-29 09:08:35 by s0r
Lira:

Quote:
For a tree of operations that would mean that if a node wants 1024x1024 then all its sources should be forced to 1024x1024, and that forcing should be recursive up to the root generators.


I think what is meant there is that the final output resolution should feed back through the chain to the source generators. I.e. instead of specifying a bunch of filters at 1k x 1k and processing them, you set up a bunch of generators without any resolution and have a final 'render' stage where you set the resolution. The size from that gets passed back to the sources, so if you want to change the res it's just one setting and it's hard to make mistakes.

I use Quartz Composer on Mac as a base demo tool, and it works like that. It does get a little confusing at times when you're writing filters, but it works really well. The other thing you might want if you go that way is a 'crop' filter - the crop dimensions get passed back up the chain the same way as the render size; it's needed when you want to mix different-sized images (e.g. you want to place one texture inside another, say to add windows to a building texture).
added on the 2008-08-29 11:12:27 by psonice
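A minimal sketch of that feedback (the node structure and names are invented for the example, not Quartz Composer or anyone's actual tool): only the render stage knows the resolution, and it pushes it recursively into everything it depends on.

Code:
#include <vector>

struct Node
{
    std::vector<Node*> inputs;
    int size;                               // filled in from the render stage

    // Force this node and, recursively, all of its sources to one resolution.
    void propagateSize(int requestedSize)
    {
        size = requestedSize;
        for (size_t i = 0; i < inputs.size(); ++i)
            inputs[i]->propagateSize(requestedSize);
    }
};

// Usage: change one setting on the render node and the whole chain follows.
//   renderNode.propagateSize(1024);       // everything now renders at 1024x1024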
the biggest thing i can recommend is: make the thing based on what you actually need it for. i.e. get a few meshes in a scene, with a camera, lighting and shaders, then put the textures on them and see how it looks. that should lead your development imo - it should tell you a lot about what you need.

when it comes to the size issue i used to support all different sizes, but in panic room i went for everything at 1024x1024. it helped a lot - it made caching shared layers really easy for example.
for size vs speed vs quality, quality is the objective, size is a necessity, and as for speed - in a 64k if you're dealing with a lot of large textures it becomes a necessity too. yep, you have to achieve all three.
added on the 2008-08-29 12:48:44 by smash
i think you should watch the chaos/farbrausch seminar about texture generation in wk3, where they give a full review of the subject + some technical tips
added on the 2008-08-29 13:02:49 by Tigrou
hmm.. might as well try and ask.
while developing an optimizer for my msc thesis i've been dwelling on this idea of applying it to procedural texture generation as well. useful applications would be things like finding operand stacks that compress better, and automatic generation of matching textures from a given photo.
i'm a bit behind on the current state of the art of procedural texture generation though, not being involved with gamedev or 64k coding and all; the last time i messed with a texgen was 3 years ago. so, does anyone have or know of a decently advanced opensource texgen that i can do some tests with? preferably c#. please mail me ps at scene dot org if you're interested.
added on the 2008-08-29 13:21:12 by psenough
Just a question I had in mind today...

In your tiny intros texture generators, how do you store step sequences?

I mean, there could be some data plus a function that reads that data in a "texture generator language", or you could just apply the filters by calling the functions directly, as code... so, how do you do it?

Do you try both? I've been thinking about it, and it looks like in some cases one might take less space than the other...
added on the 2008-08-29 20:11:14 by texel
Quote:
I think what is meant there is that the final output resolution should feed back through the chain to the source generators. I.e. instead of specifying a bunch of filters at 1k x 1k and processing them, you set up a bunch of generators without any resolution and have a final 'render' stage where you set the resolution. The size from that gets passed back to the sources, so if you want to change the res it's just one setting and it's hard to make mistakes.

That's what I do. My trees are upside-down; the tree that rendered that golden leaf was:
Code:
RenderNode
     |
  BumpMap
   /    \
Normal  Fill
  |
Perlin

I do that because then I can traverse the tree using DFS which is very compact. If there is a 'better' way of doing it, I'd love to hear suggestions.

The RenderNode has information about the size, and without it the tree cannot be created. The size is described by three variables (for reuse): the log2 size (8 = 256x256), 1<<log2 (256) and the total pixel count (256*256).

Quote:
i think you should watch the chaos/farbrausch seminar about texture generation in wk3

Link please?

Quote:
does anyone have or know of a decently advanced opensource texgen that i can do some tests with? preferably c#.

I can share mine, it's still not finished, but it has a lot already, although it's written in C++ and is targeted at linux (compiles just fine on MSVC).
added on the 2008-08-29 20:20:56 by LiraNuna
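A bare-bones sketch of that kind of depth-first evaluation (the Texture type and the node interface are assumptions, not LiraNuna's actual classes): each node evaluates its inputs first, then applies its own operation, so a plain recursion walks the whole tree from the RenderNode down.

Code:
#include <vector>

struct Texture { /* pixel data, size, ... */ };

struct TexNode
{
    std::vector<TexNode*> inputs;
    virtual ~TexNode() {}

    // The node-specific operation (generator, modifier or operator).
    virtual Texture process(const std::vector<Texture> &in) = 0;

    // Depth-first evaluation: inputs first, then this node.
    Texture evaluate()
    {
        std::vector<Texture> results;
        results.reserve(inputs.size());
        for (size_t i = 0; i < inputs.size(); ++i)
            results.push_back(inputs[i]->evaluate());
        return process(results);
    }
};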
I'm more into something like this:

void generateCoolWoodTexture( vec3 *rgb, float x, float y )   // rgb is the output pixel
{
    // do magic math here (sinf, perlin, fmodf mainly)
}

So, no operators or primitives or anything, just plain code (it compresses very well). I also make the cubemaps and do the 3D modeling like this.

However, at some point I implemented the tree thing. To decompress the tree I had the operators in one stream, the connections in another, and the constant params in another. As usual, I guess.
added on the 2008-08-29 20:29:54 by iq
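A rough sketch of how such a per-pixel function can be driven (generateCoolWoodTexture is iq's example name from above; the vec3 struct and the bakeTexture helper are assumptions): the same generator fills any resolution because it only ever sees normalized coordinates.

Code:
struct vec3 { float r, g, b; };             // assumed minimal color type

// Evaluate a per-pixel generator over an NxN buffer at normalized coordinates.
void bakeTexture(vec3 *pixels, int size, void (*generator)(vec3*, float, float))
{
    for (int y = 0; y < size; ++y)
        for (int x = 0; x < size; ++x)
        {
            float u = (x + 0.5f) / size;    // sample at pixel centres
            float v = (y + 0.5f) / size;
            generator(&pixels[y * size + x], u, v);
        }
}

// Usage:
//   bakeTexture(buffer, 1024, generateCoolWoodTexture);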
i found it : http://theprodukkt.com/downloads/slides_procTextures.pdf
during my search i also found this http://conspiracy.hu/articles.php?id=1 (seems to be aimed at beginners)

not sure it will help...
added on the 2008-08-29 23:30:20 by Tigrou
Quote:
I can share mine, it's still not finished, but it has a lot already, although it's written in C++ and is targeted at linux (compiles just fine on MSVC).


are they ever finished? :D

guess i'll have to update my c++ port of the optimizer. the thing is, i had already ported it 3 months ago (also to java/processing), but then i added more stuff to the c# version and now i need to do the port all over again. considering that the whole optimizer "framework" is still under development, i predict it will get really annoying (and possibly hard to track back) in the future. i like parallel processing but this is a bit ridiculous. :(
where is all that promised programming language interoperability when you need it :(
added on the 2008-08-30 00:28:15 by psenough
why don't you just stick to C? You have to simplify, man.

(and don't come at me with the portability issue; most if not all platforms have a C compiler)
added on the 2008-08-30 04:27:08 by iq
