Raymarching Beginners' Thread

category: code [glöplog]
Nope. HLSL. It's the shader that went a bit more complex.
added on the 2011-08-05 04:34:12 by las las
Uncovering Static. That is all. Cannot wait for the Direct to Video post. xD Also, I learnt a new lesson in demomaking: release your demo before Smash does all the effects you were planning to use... but in 64k! Perlin noise destruction of raymarched buildings was my effect dammit xD
added on the 2011-08-08 09:27:40 by Mewler Mewler
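Not Mewler's actual effect, but a rough sketch of how "Perlin noise destruction" of a raymarched building might be wired up: carve the building's SDF with a noise volume whose threshold grows over time. The box stand-in, the cheap hash "noise" and the destruction parameter are all made up for illustration.

float noise3(float3 p)
{
    // Blocky hash noise as a stand-in; swap in real Perlin noise for quality.
    return frac(sin(dot(floor(p), float3(12.9898, 78.233, 37.719))) * 43758.5453);
}

float building(float3 p)
{
    // Box SDF standing in for the building geometry.
    float3 d = abs(p) - float3(1.0, 3.0, 1.0);
    return length(max(d, 0.0)) + min(max(d.x, max(d.y, d.z)), 0.0);
}

float map(float3 p, float destruction)  // destruction: 0 = intact, 1 = gone
{
    // max(a, -b) is SDF subtraction; the noise term is a density, not a
    // true distance, so shorten the march step a bit to avoid overshooting.
    float holes = noise3(p * 2.0) - (1.0 - destruction);
    return max(building(p), -holes);
}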
Raymarched buildings? I could have sworn those were marching cubes.
added on the 2011-08-08 10:27:53 by pommak pommak
Yeah, that didn't look raymarched. It looked very polygonal. Knowing smash it'll probably be something else entirely new though!
added on the 2011-08-08 10:35:13 by psonice psonice
"this is completely realtime in every way. the only precalc is to compile the shaders.
the synth is based on physical modelling; the rendering on distance field manipulation; the lighting on raycasted
ambient occlusion." Read the readme xD
added on the 2011-08-08 10:39:34 by Mewler Mewler
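For anyone new to the thread wondering what "distance field manipulation" plus "raycasted ambient occlusion" can look like in practice, here is a minimal sphere-tracing sketch. It is not smash's code; the scene SDF, step counts and AO constants are all placeholder choices.

// Minimal sphere tracer over a signed distance field, plus a cheap
// distance-field ambient occlusion term. Illustrative only.

float map(float3 p)
{
    return length(p) - 1.0;  // scene SDF: a unit sphere; replace with yours
}

float march(float3 ro, float3 rd)
{
    float t = 0.0;
    for (int i = 0; i < 128; i++)
    {
        float d = map(ro + rd * t);
        if (d < 0.001) return t;  // close enough to the surface: hit
        t += d;                   // safe step: nearest surface is d away
        if (t > 100.0) break;     // left the scene
    }
    return -1.0;                  // miss
}

float ambientOcclusion(float3 p, float3 n)
{
    // Sample the field at a few distances along the normal; nearby
    // geometry makes map() return less than the step distance.
    float occ = 0.0, w = 1.0;
    for (int i = 1; i <= 5; i++)
    {
        float h = 0.1 * i;
        occ += w * (h - map(p + n * h));
        w *= 0.5;
    }
    return saturate(1.0 - 2.0 * occ);
}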
Mewler: Just do it with more style and less artifacts - and you will be fine ;)

Hmm, I guess I can't finish this intro until Evoke. Still experimenting with a lot of stuff - seems you will have to wait until tUM.
added on the 2011-08-08 10:43:16 by las las
Mewler: Distance field doesn't mean it has to be raymarched.
added on the 2011-08-08 10:47:49 by pommak pommak
Hmm, what else could you do with distance fields? :0 I guess we'll find out.
added on the 2011-08-08 11:23:09 by Mewler Mewler
Maybe the scene is being rendered into a 3d texture using distance fields, then the 3d texture is rendered with marching cubes, something like that? Look at the first scene, it looks very much polygon-rendered - you'd have to actually add code to fake that in a raymarcher.

I'm looking forward to the blog post on this one. It's strange to say smash's blog posts are better than his demos, and that NOT be insulting to his demos, no? :D
added on the 2011-08-08 11:42:07 by psonice psonice
Mewler: you could render them with marching cubes for example :P
added on the 2011-08-08 11:46:52 by pommak pommak
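A rough sketch of the idea psonice and pommak are guessing at above: sample the distance field once per voxel into a 3D texture with a compute shader, then let a marching cubes pass pull the zero isosurface out of it. The grid size, world bounds and the map() stand-in are all assumptions, not anything confirmed about the 64k.

// Bake the SDF into a volume; a marching cubes pass would then extract
// the 0-isosurface. Dispatch(8, 8, 8) fills the 64^3 grid used here.

RWTexture3D<float> DistanceVolume : register(u0);

float map(float3 p) { return length(p) - 1.0; }  // stand-in scene SDF

[numthreads(8, 8, 8)]
void VoxelizeSDF(uint3 id : SV_DispatchThreadID)
{
    // Map voxel coordinates into a world-space box around the scene.
    float3 p = (float3(id) / 64.0) * 4.0 - 2.0;  // [-2,2]^3
    DistanceVolume[id] = map(p);
}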
psonice has the lead in the technique quiz :)
added on the 2011-08-08 12:03:51 by gopher gopher
It's just glorified metaballs then :P
added on the 2011-08-08 14:06:43 by jalava jalava
psonice: I thought exactly the same thing...
added on the 2011-08-08 15:12:35 by flure flure
back on topic.. this is something that we came up with some years ago, just released at HPG, which might be helpful for ray marching too (as one could exploit coherency between neighboring pixels much, much better, especially if one has many, many objects in a scene)?!
http://ainc.de/Research/MemlessRT.pdf
added on the 2011-08-08 19:53:25 by toxie toxie
interesting, toxie. did you implement it? how does it compare to the good old kd-tree or bih?
added on the 2011-08-08 20:28:58 by iq iq
the implementation so far was/is rather simplistic, especially the 'parallelization' (i.e. there is none ;)). so performance on a single core is very good and one gets a ton of additional benefits (coherence, memory bandwidth, cache usage, fully dynamic scenes, etc.), but, as said, making it run on a GPU (1000s of threads) is not exactly super trivial and would take some time to get speedy..
but IMHO there is a lot of potential in this scheme that just waits to be exploited for all kinds of stuff.. ;)
added on the 2011-08-08 20:36:19 by toxie toxie
I'll have a look at the paper tomorrow ;)
added on the 2011-08-08 20:51:23 by las las
on the 64k: pommak told it like it is, to my eye as well :)

paper -_- good!
added on the 2011-08-08 21:03:08 by superplek superplek
las: it would be interesting to know if you see some potential for raymarching whole regions (instead of one single 'pixel' at a time), cause i somehow have the feeling that this could work out easily for complex scenes (i.e. getting to the silhouette of, or very near to, an object could be amortized between neighboring pixels?), but i don't have that much experience with all the dirty tricks involved in the ray marching inner loop and whether they would play badly with this kind of scheme..
added on the 2011-08-08 21:41:29 by toxie toxie
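One possible reading of toxie's "march whole regions" idea, as a hedged sketch: march a single cone per tile of pixels in a coarse pass, stopping as soon as the cone no longer fits inside the empty space, and let every pixel in the tile resume marching from that conservative distance. The camera setup, tile cone radius and stand-in SDF below are all assumptions, not anything from the paper.

// Coarse pass: one cone per 8x8 tile. TileRadius * t approximates the
// cone's world-space radius at distance t; stepping by d - TileRadius * t
// keeps every ray inside the tile in known-empty space.

cbuffer Cam : register(b0)
{
    float3   CameraPos;
    float    TileRadius;    // tan of the tile cone's half-angle (made up)
    float4x4 InvViewProj;   // unprojects tile centres (convention may differ)
    float2   TileGridSize;  // tile counts in x and y
};

RWTexture2D<float> TileStartDist : register(u0);

float map(float3 p) { return length(p) - 1.0; }  // stand-in scene SDF

float3 tileDir(uint2 tile)
{
    // Centre of the tile in NDC, unprojected to a world-space direction.
    float2 ndc = ((float2(tile) + 0.5) / TileGridSize) * 2.0 - 1.0;
    float4 w = mul(float4(ndc, 1.0, 1.0), InvViewProj);
    return normalize(w.xyz / w.w - CameraPos);
}

[numthreads(8, 8, 1)]
void CoarseTilePass(uint3 id : SV_DispatchThreadID)
{
    float3 rd = tileDir(id.xy);
    float t = 0.0;
    for (int i = 0; i < 64; i++)
    {
        float d = map(CameraPos + rd * t);
        if (d < TileRadius * t + 1e-3) break;  // cone touches geometry
        t += d - TileRadius * t;               // conservative cone step
    }
    TileStartDist[id.xy] = t;  // per-pixel marching starts from here
}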
toxie: Doesn't seem that gpu (/compute api) mappable. Am I right that the rays are handled individually once you have decided the partition of the primitives?
You could do all the bounding tests, partitioning, recursion etc. on the cpu and then just produce the terminating DirectlyIntersect() jobs for the gpu (which will all run in the end, with like 1 thread per ray). And then maybe the resulting gpu intersects + shading would produce new rays for another iteration.
Otherwise we would have to move the ray determination/partitioning to the gpu (batching them up in breadth-first fashion), but I guess that would still be too many gpu-cpu iterations per frame.

For primary rays something like I did in traskogen would probably be more efficient, so I would not look too much into spatial coherence between rays - this is mostly useful for the "random" secondary rays.
added on the 2011-08-08 23:25:11 by Psycho Psycho
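As a hedged illustration of the split Psycho describes: the CPU-side recursion emits flat "intersect this ray against this small primitive range" jobs, and a compute kernel chews through them with one thread per ray. The struct layout, buffer names and the sphere primitive are all made up here; DirectlyIntersect() in the paper may look nothing like this.

// One thread per ray job; each job carries the primitive range that the
// CPU-side partitioning assigned to it. Spheres stand in for whatever
// primitive type is actually used.

struct RayJob
{
    float3 origin;
    float3 dir;        // assumed normalized
    uint   primStart;  // first candidate primitive for this ray
    uint   primCount;  // number of candidates, decided on the cpu
};

struct Hit { float t; uint prim; };

cbuffer Params : register(b0) { uint NumJobs; };

StructuredBuffer<RayJob>  Jobs    : register(t0);
StructuredBuffer<float4>  Spheres : register(t1);  // xyz = centre, w = radius
RWStructuredBuffer<Hit>   Hits    : register(u0);

[numthreads(64, 1, 1)]
void IntersectJobs(uint3 id : SV_DispatchThreadID)
{
    if (id.x >= NumJobs) return;

    RayJob job = Jobs[id.x];
    Hit best;
    best.t = 1e30;
    best.prim = 0xffffffff;

    for (uint i = job.primStart; i < job.primStart + job.primCount; i++)
    {
        // Standard ray/sphere intersection.
        float3 oc = job.origin - Spheres[i].xyz;
        float b = dot(oc, job.dir);
        float c = dot(oc, oc) - Spheres[i].w * Spheres[i].w;
        float h = b * b - c;
        if (h < 0.0) continue;  // ray misses this sphere
        float t = -b - sqrt(h);
        if (t > 0.0 && t < best.t) { best.t = t; best.prim = i; }
    }
    Hits[id.x] = best;  // shading can then spawn new jobs for a next pass
}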
you're right that this is not very GPU friendly out-of-the-box.. and if one tried, it would definitely require CUDA/OpenCL/DC to not drive you insane while optimizing.. ;)
but hybrid solutions are even more painful in my opinion, so doing some stuff on CPU and some on GPU is not an option for me..
my hope is that the upcoming GPU generation will be very useful for this kinda scheme though.. so not too much trickery required anymore (like multiple kernels/calls) and maybe almost as simple to code as the single threaded version (of course there are some more algorithmic issues there, but we think that there are some more or less simple ways to solve them)..
added on the 2011-08-08 23:39:01 by toxie toxie
While you're all busy cranking those GPUs as hard as possible with all this shit, you all have quad core CPUs sitting around with nothing to do but render one lousy quad now and then.

Try moving a bunch of work to the CPU, take advantage of those idle cores that actually LIKE branching and stuff, and leave just the highly parallelised rendering stuff at the end to the GPU. Then it can kick back and relax for a while, and you can go plan some seriously awesome lighting or something to get it busy again.

(Or in other words, think a bit more about load balancing between the CPU + GPU. I've just done that with a very GPU heavy video processing app, and spreading the load like this gave *huge* rewards: speed, resolution + number of concurrent fx all went up by a big amount.)
added on the 2011-08-08 23:55:20 by psonice psonice
psonice, how did you do it in a way that made it practical? Was lots of rewriting involved? CUDA?
@psonice: correct, but the load balancing for this kind of stuff can be painful.. it might slow down things on high end (multiple?) GPUs vs. lower end CPUs (not that unlikely a setup), but speed up everything else (of course also not that unlikely a setup ;)).. so in the end you end up with a hell lotta special cases to suit both.. :(
added on the 2011-08-09 00:09:27 by toxie toxie
