Raymarching Beginners' Thread
category: code [glöplog]
How fast is really fast? Does it converge at realtime framerates or is it just "fast" as in "not horribly slow"? :)
He claims it's anywhere from 60 fps to 60 spf (seconds per frame), depending on scene complexity and (more importantly) tweaks to the path tracer.
How many spp? And let's define converge as "it might look like it has a slight bit of film grain".
Are we talking about straightforward PT or BDPT or something else?
I'm guessing asking Inigo will probably be the easiest :)
Hi Sherlock!
I guess he's reading here anyway and I don't need an answer at the speed of light; besides, Ferris seems to have some connections there.
Oh, forgive me for thinking that a back-and-forth between you guys felt odd, given that a link to Inigo's Twitter account was what started the discussion of "what ifs" :)
Ah, there's moss on the upper side of the thing. That at least explains the green stuff in the GI bounces of that other image he posted. :)
from facebook:
iq wrote:
so, this is it. yesterday i implemented another path-tracer to do global illumination (full Monte Carlo integration, although not really importance sampled yet, and with diffuse and glossy BRDFs only for now). thing is, i'm tired of tuning fake bounce lights and spending hours tweaking occlusions and soft shadows in my procedural experiments. so there, this must be my 4th or so path-tracer/GI thing i write, but this time it runs on the GPU
two answers by him:
it varies a lot. anything from 60 frames per second to 60 seconds per frame, depending on the resolution, amount of noise you want, etc, on my laptop (GeForce 560M)
sort of. it's not based on a BVH/kd-tree, but on procedural raymarching, not implicit. that means that different rays need a different iteration count until finding the intersection, meaning that ray coherency is still important, although not as critical as in a bvh-tracer cause i don't have caches to thrash, but still i do have threads idling while the slowest ray in the tile finishes its job, which depends on the random light path that was chosen for it. so, being clever about how to cast the rays is still important, and in fact as soon as i have some time i'll try to make the rays coherent, and randomize them in the external integration step, not in the inner tracing part. as usual, GPUs are fast, but it's not easy to feed them properly.
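For anyone following along at home, the inner loop he's describing is the usual sphere-tracing march; a minimal GLSL sketch (map() here is just a placeholder unit sphere, not iq's scene) shows why different rays finish after very different iteration counts:
Code:
// minimal sphere-tracing sketch; map() returns the distance to the
// nearest surface at point p (placeholder: unit sphere)
float map(vec3 p) { return length(p) - 1.0; }

float trace(vec3 ro, vec3 rd)
{
    const int   MAX_STEPS = 128;
    const float EPS       = 0.001;
    const float MAX_DIST  = 100.0;
    float t = 0.0;
    for (int i = 0; i < MAX_STEPS; i++)
    {
        float d = map(ro + rd * t);   // distance to nearest surface
        if (d < EPS) return t;        // hit: some rays get here in a few steps
        t += d;                       // others need all 128, hence the idling
        if (t > MAX_DIST) break;
    }
    return -1.0;                      // miss
}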
sorry, iq, for posting before you did, but i wanted to enlighten the ones still just discussing what could be.... the ones not connected to you via fb! :/
hey, yes, so this is a straightforward implementation of a path-tracing algo. the screenshot posted above and the one on my facebook wall are computed with 3 and 4 light bounces, 128 rays per pixel (hence the noise/grain). the thing is not especially fast, and i think it can be sped up a lot by casting the rays more coherently, but there's no intro coming, i don't think this can be used as it is just yet, not in this simple form at least (one could bake the illumination into a pointcloud or a 3d texture and make a nice 4k intro perhaps, but that wasn't the point as much as just having a simple framework to produce pretty static procedural images and stuff like that without spending hours tweaking lights and occlusions and shadows).
don't get confused though, as you probably know writing a path-tracer needs only two page-downs of code and one hour of work. making an efficient one takes years of development.
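To make the "two page-downs" concrete, here's a rough GLSL sketch of such a brute-force integrator - not iq's code; trace(), surfaceNormal(), albedoAt(), skyColor() and cosineHemisphere() are assumed helpers - just a few diffuse bounces averaged over many paths per pixel, which is exactly where the film grain comes from:
Code:
// brute-force path-tracing sketch; only sky lighting, diffuse bounces only.
// trace(), surfaceNormal(), albedoAt(), skyColor(), cosineHemisphere()
// are assumed helpers, not part of any real renderer
vec3 render(vec3 ro, vec3 rd, int raysPerPixel, int numBounces)
{
    vec3 sum = vec3(0.0);
    for (int s = 0; s < raysPerPixel; s++)          // e.g. 128 paths per pixel
    {
        vec3 o = ro, d = rd;
        vec3 throughput = vec3(1.0);
        for (int b = 0; b < numBounces; b++)        // e.g. 3-4 bounces
        {
            float t = trace(o, d);
            if (t < 0.0) { sum += throughput * skyColor(d); break; } // hit the sky
            vec3 pos = o + d * t;
            vec3 nor = surfaceNormal(pos);
            throughput *= albedoAt(pos);            // diffuse BRDF only
            o = pos + nor * 0.002;                  // offset to avoid self-hits
            d = cosineHemisphere(nor, float(s * numBounces + b)); // random bounce dir
        }
    }
    return sum / float(raysPerPixel);               // Monte Carlo average -> grain
}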
iq: about the framerate question, i was curious to know how long it took for that specific image on fb :) given that level of noise, resolution, features etc.
I did some tests myself, and with almost no algorithmic optimizations, I was able to get similar quality, but nowhere near actually being usably realtime. A similar fractal with similar detail and image quality took around 40 seconds to render (without using accumulation over multiple frames ofc). This was after just a couple hours of mathturbation tho; curious to see more of iq's actual data :)
i cannot recall right now. but that should probably take something like half a minute?
most of the time is spent right now in the marching. if you decrease the quality/precision/epsilon of the fractal (and the iteration count), suddenly you get very reasonable speeds.
also, if you don't raymarch but raytrace implicit surfaces like spheres, planes, etc, it IS fully realtime.
lastly, i guess it'd be worth baking the distance field into a 3D texture or sparse voxel octree.
anyway, it's definitely usable for procedural graphics entries just as it is, out of the box, without doing anything clever at all as Ferris pointed out (i also spent only a couple of hours on this, i didn't even implement accumulation)
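(For reference, the "raytrace instead of raymarch" case he mentions is just solving the intersection analytically - e.g. the standard ray-sphere quadratic in GLSL, nothing specific to his renderer:)
Code:
// analytic ray-sphere intersection: one quadratic instead of ~100 march steps.
// returns distance along the ray, or -1.0 on miss (rd assumed normalized)
float intersectSphere(vec3 ro, vec3 rd, vec3 center, float radius)
{
    vec3  oc = ro - center;
    float b  = dot(oc, rd);
    float c  = dot(oc, oc) - radius * radius;
    float h  = b * b - c;              // discriminant
    if (h < 0.0) return -1.0;          // ray misses the sphere
    return -b - sqrt(h);               // nearest intersection
}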
Hey, does anyone know how to do domain warping like in the demo "Texas" by Keyboarders?
http://www.youtube.com/watch?v=oRqzfyhn9Ig
It looks like it's not pure raymarching - but that's just my humble opinion and I'm probably terribly wrong.
the latter is the case :)
texas doesn't do raymarching at all.
it's normal meshes which get deformed and displaced by vertex and geometry shaders.
It looks like it's mesh based.
Really? is this 100% fact? it looks too smooth for geometry mangling - but that could be my broken eyes :P
i'd say it's a 100% fact cause i checked the shaders used :)
since there's no raymarching code inside and geometry shaders are used for the deformation, it must be mesh-based
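(Not the actual Texas shaders of course, but mesh deformation in a vertex shader generally looks something like this GLSL sketch - uTime/uAmount are made-up uniforms:)
Code:
// vertex shader sketch: push each mesh vertex in/out along its normal
uniform float uTime;    // assumed uniform, not from the actual prod
uniform float uAmount;  // assumed uniform, displacement strength

void main()
{
    vec3 p = gl_Vertex.xyz;
    float wave = sin(p.x * 3.0 + uTime) * cos(p.z * 2.0 + uTime * 0.7);
    p += gl_Normal * wave * uAmount;   // displace along the normal
    gl_Position = gl_ModelViewProjectionMatrix * vec4(p, 1.0);
}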
people did make 4ks without raymarching once upon a time.. :)
you mean oldschool 4ks? :)
thx gopher - that is awesome! geom mangling FTW!
Does anyone think this is possible with raymarching? I can see that deforming the domains would overlap and you'd get ambiguity errors - there must be a simple way to do this with RM?
When all you have is a hammer...
...everything looks like a nail!
HAMMER IT.
Actually I'm tempted to do a 4k with polys which looks like sphere tracing - just for the lulz.
Quote:
I can see that deforming the domains would overlap and you'd get ambiguity errors - there must be a simple way to do this with RM?
Handle multiple domain repetitions and build the union of them.
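In case it helps, my reading of las's answer as a GLSL sketch (primitive() is just a placeholder shape, repetition only along x): evaluate the cell you're in plus its neighbours and take the min, i.e. the union of the candidates, so a warp that pushes a shape across a cell boundary can't be missed:
Code:
// union over neighbouring repetition cells
float primitive(vec3 p) { return length(p) - 0.4; }   // placeholder shape

float mapRepeated(vec3 p)
{
    const float cell = 2.0;                    // repetition period along x
    float idx = floor(p.x / cell + 0.5);       // which cell we're in
    float d = 1e10;
    for (int i = -1; i <= 1; i++)              // this cell + both neighbours
    {
        vec3 q = p;
        q.x -= (idx + float(i)) * cell;        // local coordinates of that cell
        d = min(d, primitive(q));              // union
    }
    return d;
}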
Quote:
Handle multiple domain repetitions and build the union of them.
Las, can you explain that a bit further? My feeble mind doesn't properly understand what you wrote.
I think this might work also: if you modify the ray "in flight", i.e. change its y based on some xz sin modulation. I know that warping the ray at the start can give you sphere-like domain warps etc.
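FWIW, the usual raymarching trick for that effect is to warp the sample point inside the distance function rather than bending the ray itself, and scale the result down so the distance bound stays roughly valid - a sketch with arbitrary numbers:
Code:
// domain-warp sketch: bend the space the shape lives in by offsetting
// the sample point before evaluating the distance field
float shape(vec3 p) { return length(p) - 1.0; }        // placeholder

float mapWarped(vec3 p)
{
    p.y += 0.3 * sin(p.x * 2.0) * sin(p.z * 2.0);      // the xz sin modulation
    // the warp stretches space, so the returned distance is no longer a
    // strict lower bound; scaling it down keeps the march conservative
    return shape(p) * 0.7;
}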