pouët.net

New rendering way?

category: code [glöplog]
Inspired by the raymarching threads and the 4kb procedural images, I've been thinking that a different rendering method for procedural stuff might be possible. I'm not sure if something like this already exists, but here's the idea:

Basically montecarlo rendering:

1) Let's define N 3D shapes. They could be volumetric or surfaces; this shouldn't matter. A surface, for instance, could be a function(x,y) that returns a z value.
2) We generate a random number from 0 to N-1 to pick a shape.
3) We generate as many random values as the shape function has arguments.
4) We call the function with the random values and project the resulting point to the screen, drawing it with a z-buffer test if it lands inside the screen.
5) We go back to 2) and repeat until enough points have been drawn that the image is statistically complete.

It would need a good random generator, but I think it would make shapes very easy to define... rough sketch below. What do you think?
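Something like this in C (the two example shapes, the resolution and the projection here are placeholder choices, not part of the idea; the naive sphere mapping will bunch points at the poles):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define WIDTH   320
#define HEIGHT  240
#define SAMPLES 2000000

typedef void (*shape_fn)(double u, double v, double *x, double *y, double *z);

/* example shape 1: unit sphere centred at z = 4 (naive lat/long mapping) */
static void sphere(double u, double v, double *x, double *y, double *z)
{
    double theta = u * 2.0 * M_PI;
    double phi   = v * M_PI;
    *x = sin(phi) * cos(theta);
    *y = cos(phi);
    *z = 4.0 + sin(phi) * sin(theta);
}

/* example shape 2: a flat quad acting as a floor */
static void quad(double u, double v, double *x, double *y, double *z)
{
    *x = u * 2.0 - 1.0;
    *y = -1.2;
    *z = 3.0 + v * 2.0;
}

int main(void)
{
    static double        zbuf[WIDTH * HEIGHT];
    static unsigned char img[WIDTH * HEIGHT];  /* zero = black background */
    shape_fn shapes[] = { sphere, quad };
    int nshapes = sizeof shapes / sizeof shapes[0];

    for (int i = 0; i < WIDTH * HEIGHT; i++)
        zbuf[i] = 1e30;

    for (long s = 0; s < SAMPLES; s++) {
        int    k = rand() % nshapes;             /* 2) pick a shape     */
        double u = rand() / (double)RAND_MAX;    /* 3) random arguments */
        double v = rand() / (double)RAND_MAX;
        double x, y, z;
        shapes[k](u, v, &x, &y, &z);             /* 4) evaluate...      */

        if (z < 0.1) continue;                   /* behind the camera   */
        int px = (int)(WIDTH  * 0.5 + x / z * WIDTH * 0.5);  /* ...and project */
        int py = (int)(HEIGHT * 0.5 - y / z * WIDTH * 0.5);
        if (px < 0 || px >= WIDTH || py < 0 || py >= HEIGHT) continue;

        int idx = py * WIDTH + px;
        if (z < zbuf[idx]) {                     /* z-buffer test       */
            zbuf[idx] = z;
            double shade = 510.0 / z;            /* brighter when closer */
            img[idx] = (unsigned char)(shade > 255.0 ? 255.0 : shade);
        }
    }

    FILE *f = fopen("out.pgm", "wb");            /* 5) done: dump a PGM */
    fprintf(f, "P5\n%d %d\n255\n", WIDTH, HEIGHT);
    fwrite(img, 1, WIDTH * HEIGHT, f);
    fclose(f);
    return 0;
}

Compile with -lm and look at out.pgm; with a couple of million samples the shapes should read as solid.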
added on the 2011-05-30 21:29:36 by texel
Doesn't compute.
added on the 2011-05-30 21:35:58 by xernobyl
The idea is that you don't shoot rays, but you do single-point intersection with the objects in the world, right?

Then, you would get an unlit volume...


added on the 2011-05-30 21:43:24 by T21
Speaking of volume, would it help raymarching to create a distance field volume as a first pass?
A voxel would then point to the object ID(s) of the closest object. Something like the sketch below.
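Roughly this at load time (just a sketch - the grid size, the [-1,1]^3 world bounds and object_sdf() are placeholder assumptions, not a real API):

#include <float.h>

#define GRID 64
#define NOBJ 8

typedef struct { float dist; int obj; } cell;

static cell volume[GRID][GRID][GRID];

/* assumed: a signed distance function per object */
float object_sdf(int obj, float x, float y, float z);

static void build_distance_volume(void)
{
    for (int i = 0; i < GRID; i++)
    for (int j = 0; j < GRID; j++)
    for (int k = 0; k < GRID; k++) {
        /* map voxel indices into a [-1,1]^3 world box */
        float x = i / (GRID - 1.0f) * 2.0f - 1.0f;
        float y = j / (GRID - 1.0f) * 2.0f - 1.0f;
        float z = k / (GRID - 1.0f) * 2.0f - 1.0f;

        cell c = { FLT_MAX, -1 };
        for (int o = 0; o < NOBJ; o++) {
            float d = object_sdf(o, x, y, z);
            if (d < c.dist) { c.dist = d; c.obj = o; }  /* closest object wins */
        }
        volume[i][j][k] = c;
    }
}

The raymarcher could then step by volume[...].dist and read off the object ID on a hit. The catch is the build cost: even at 64^3 that's 262,144 voxels, each evaluating every object's SDF, plus the memory for the grid.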

added on the 2011-05-30 21:55:15 by T21
nice idea.
added on the 2011-05-30 22:06:01 by yumeji
T21: doing that with enough resolution would take ages.
added on the 2011-05-30 22:14:51 by xernobyl
Just to clarify the idea a bit as I understood it: texel is suggesting a parametric surface representation of objects, i.e. f is a function that maps a number of parameters to a 3D point on the surface of the object (this could for instance be an f(theta, phi) that returns a point on the surface of a sphere).

Then repeatedly choosing random inputs to the function f gives you a random pixel on screen, which you draw; and you're stochastically filling the screen with pixels.
added on the 2011-05-30 22:18:31 by revival
You might as well just render the shapes to a volume or point array at load time, maybe with different LODs.

But rendering so many points in realtime that the shapes seem solid is not going to be feasible, I think.

And while using random points could be fine for a gritty effect, you would still need heuristics for choosing the number of points to sample for each object. This might of course not be a problem.

Also, like T21 said, you're going to have a hard time doing lighting unless you render to volumes first or do it in screenspace.

Generating shapes might be easy, but generating functions so that they return a somewhat uniform point cloud might not be (and if it is, it might not be size-efficient), and all non-rigid transformations applied to the data will need different sampling methods.


But it could all make a pretty good 4k I suppose, I'm looking forward to it!
Quote:
montecarlo rendering:

1) shapes. should not matter. a function(x,y) that returns a z value.
2) a random number to pick.
3) random values as the function arguments.
4) call the function, and drawing it with a z-buffer.
5) go to 2) until the image is "completed".

a z-buffer filled with noize. huh?
added on the 2011-05-30 23:02:19 by yumeji
Quote:
And while using random points could be fine for a gritty effect, you would still need heuristics for choosing the number of points to sample for each object. This might of course not be a problem.


My guess is that you don't need any heuristics... just time to render. Of course, little objects will get more points than big objects, but if you wait long enough, everything will get rendered, won't it?

The heuristic would speed up the rendering... but for 4k procedural images, speed might not be a problem I think...

@revival: yes.

In fact, the idea is somewhat similar to REYES/micropolygons... but much simpler to execute.

added on the 2011-05-30 23:15:30 by texel
OK, I was overlooking the fact that you meant this technique for procedural images, and not realtime demos.
flure: I called it montecarlo rendering, but it has nothing to do with montecarlo GI rendering... in any case, I can't find a better name to describe it...
added on the 2011-05-31 01:02:32 by texel
You need a good way to "predict" where to sample the "function" in order to hit the image plane - else this will be horrible.
added on the 2011-05-31 01:19:43 by las
If I understood right, you're basically raytracing random points on random objects in the scene to render it, and using the z buffer to get the right result at the end. Is that right?

If so, I guess you don't fill all pixels, and you save some time by not drawing all objects at all pixels, so it's potentially faster. But... against that, you can't easily do this on the GPU I think, and the GPU would do this in an 'ordered' way extremely fast. Likely so much faster that there's no reason to use monte carlo methods, no?

Still, interesting idea! Maybe there is some potential here. Instead of drawing the pixels, how about adding points to a mesh representing the selected object? This way you interpolate the unrendered spaces, and longer render time adds more detail. A mesh can be reused in the next frame too, where objects aren't animated.

(Oh, and why would this type of render be unlit? When you find the surface point, it's usually easy to find the normal + texture coords too. Shadowing might be hard.)

Another way could be something like this:
1. Raytrace random rays from the light source.
2. Where a ray hits, find the pixel coordinates and draw the pixel (again, using z). Light intensity falls off with ray length (inverse square...).
3. Bounce the ray from this point (in a random direction if it's a matte surface, or a straight reflection/refraction for metal/glass etc.) and draw the next point it hits. Intensity now depends on the total ray length and the material brightness/reflectivity.
4. Bounce a few more times.

You'd get excellent lighting in this case. Shadows would happen naturally, indirect illumination would be built in, so would caustics. And it should be possible to accelerate the rendering a lot by marking light directions that do not affect screen space after several bounces.
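A rough C sketch of steps 1-4 (trace(), project(), splat() and the random-direction helpers are assumed stand-ins for a real scene, camera and framebuffer, not any actual API; only the matte case is shown):

typedef struct { float x, y, z; } vec3;
typedef struct { float t; vec3 point, normal; float brightness; } hit;

/* assumed stand-ins, declared only: */
int  trace(vec3 origin, vec3 dir, hit *out);          /* nearest intersection */
int  project(vec3 p, int *px, int *py, float *depth); /* world -> screen      */
void splat(int px, int py, float depth, float value); /* z-tested pixel write */
vec3 random_unit_vector(void);
vec3 random_hemisphere_dir(vec3 normal);

/* steps 1-4, matte bounces only */
void shoot_light_ray(vec3 light_pos)
{
    vec3  pos      = light_pos;
    vec3  dir      = random_unit_vector();   /* 1) random ray from the light */
    float path_len = 0.0f;
    float energy   = 1.0f;

    for (int bounce = 0; bounce < 4; bounce++) {      /* 4) a few bounces */
        hit h;
        if (!trace(pos, dir, &h))
            break;                            /* ray left the scene */
        path_len += h.t;

        int px, py;
        float depth;
        if (project(h.point, &px, &py, &depth))       /* 2) draw the hit */
            splat(px, py, depth, energy / (path_len * path_len)); /* inverse square */

        energy *= h.brightness;               /* 3) attenuate by the material */
        pos = h.point;
        dir = random_hemisphere_dir(h.normal);        /* matte bounce */
    }
}

Splatting every bounce, not just the last one, is what makes direct light and the indirect bounces show up in the same pass.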

Is it possible to render like that on the GPU? I know it's fast enough to render a large number of samples with a fairly simple scene. You could render in 'light space' instead of screen space, and just output, say, screenspace coordinates, light intensity and object ID to the output image.

With that information the 'output' shader can render the scene very quickly, but we need to sort the pixels into the right order. Can that be done efficiently?
added on the 2011-05-31 01:26:59 by psonice
Hey, hey, hey... it works!!!

http://www.romancortes.com/ficheros/rendertest.html

Could you please help me find out whether I invented this rendering method!? If not, I'm enjoying it as if I did... :P
added on the 2011-05-31 02:35:38 by texel
Well, it does work :) Bonus for doing it in javascript too. No idea if you're the first to try it or not, but who cares? Thinking of something like this, then making it - that's where the enjoyment is.
added on the 2011-05-31 02:59:28 by psonice
Okay that's kinda cool now. ;)
added on the 2011-05-31 03:29:46 by las
Well, congratulations, but sorry, it's not new. There was this company that claimed to get "realtime raytracing" by randomly tracing 1 out of 16 pixels each frame, e.g.

http://www.redway3d.com/pages/technology.php

(was it this one? or am I misremembering? :) )

Anyway, why would you want to trace random pixels? You get cache coherency by writing pixels in a sequential manner. Or do you wish to provide an approximation of the final picture this way?
added on the 2011-05-31 12:32:31 by nystep
we use monte carlo raymarching (combined with temporal reprojection - the smart way to "do a few each frame") for ambient occlusion (since frameranger actually) - but monte carlo isn't that nice for shaders if texture lookups are involved, because as nystep mentioned you lose the cache coherence.
added on the 2011-05-31 12:35:19 by smash
texel, if I get the idea, isn't it a special case of point cloud rendering where the points are generated on the fly (as you have simplified parametric surfaces)? Splatting could be used to make things converge faster, as in this video.
added on the 2011-05-31 14:58:26 by auld
@nystep, @smash: I believe my explanation was not clear enough.

I'm not tracing (raytracing) anything at all. I'm projecting. I'm also not raymarching.

And also, I don't want any texture coherence, nor to use it on GPUs... this could just be good for 4kb procedural images. Software rendering. The advantages: it is very easy to define any shape. You can even define a volume and it will be rendered. And the rendering algorithm is stupidly easy and small to implement... ideal for size coding. It could easily render NURBS, for example, just from their definition.
added on the 2011-05-31 15:01:09 by texel
You could still GPU accelerate this. Feed the GPU a shader describing how to calculate surface points on the current object, and a texture containing your randomised parameters. It outputs a list of screenspace coordinates, plus Z, lighting, object ID or whatever; then you take that back to the CPU and use it to draw your random pixels to screen. Not ideal, but for objects that are complex to evaluate and that suit the GPU, it'd be many times faster.
added on the 2011-05-31 15:26:56 by psonice
Ideally you would need a uniform distribution of your points in screen space, which is unlikely, since the very use of a parametric function will most of the time result in a non-uniform distribution. The sphere case below shows the surface-density half of the problem.

Maybe it could be used cleverly for some non-photorealistic rendering, with patches instead of pixels...
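To make that concrete (a standard trick, not specific to this thread): mapping random (u,v) straight to sphere angles bunches points at the poles, while sampling the cosine of the latitude instead gives an area-uniform distribution on the surface.

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* naive: phi = v * pi concentrates points near the poles */
void sphere_naive(double u, double v, double *x, double *y, double *z)
{
    double theta = u * 2.0 * M_PI;
    double phi   = v * M_PI;
    *x = sin(phi) * cos(theta);
    *y = cos(phi);
    *z = sin(phi) * sin(theta);
}

/* area-uniform: sample cos(phi) uniformly in [-1,1] instead of phi */
void sphere_uniform(double u, double v, double *x, double *y, double *z)
{
    double theta = u * 2.0 * M_PI;
    double cphi  = 1.0 - 2.0 * v;
    double sphi  = sqrt(1.0 - cphi * cphi);
    *x = sphi * cos(theta);
    *y = cphi;
    *z = sphi * sin(theta);
}

Even with area-uniform surface sampling, perspective and orientation still leave the screen-space density non-uniform, which is the harder part.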
added on the 2011-05-31 18:43:31 by Zavie
psonice, if you need a better random number generator than rand(), use this:
http://www.flipcode.com/archives/07-15-2002.shtml
It was tested using http://www.stat.fsu.edu/pub/diehard/
You can modify it to give you four random floating point values in the 0.0f-1.0f range by masking with 0xFFFFFF and then multiplying by 1/0xFFFFFF, as sketched below.
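The conversion itself looks like this (xorshift32 is used as a stand-in generator here; the flipcode article describes its own):

#include <stdint.h>

static uint32_t rng_state = 0x12345678u;

/* stand-in generator: Marsaglia's xorshift32 */
static uint32_t rng32(void)
{
    uint32_t x = rng_state;
    x ^= x << 13;
    x ^= x >> 17;
    x ^= x << 5;
    return rng_state = x;
}

/* the trick above: mask to 24 bits, scale into [0.0f, 1.0f] */
static float rng01(void)
{
    return (rng32() & 0xFFFFFF) * (1.0f / 0xFFFFFF);
}

24 bits is a good choice because it matches a float's mantissa, so every masked value is exactly representable.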
added on the 2011-05-31 18:51:03 by T21
