Raymarching Beginners' Thread
category: code [glöplog]
toxie: I would like to get in contact with you - is the *inc.de email address still working?
@las: yup, go ahead..
@T21: The one serious issue the intel stuff has is that all the AVX madness hinders the average coder from actually coding something really useful and fast for the board (i.e. everything that is not just PURE ray traversal and tri intersection). Even with all the compiler help there, all complex code will still fail to be SIMDed efficiently (or SIMDed at all). While one can argue that this problem is the same on GPUs (it's just hidden away by the hard- and software layers), at least there one doesn't go insane from all the low level coding that is necessary to have it running somehow efficiently -at all-.
@iq: does the mail-addy on your homepage still work? Alex Keller wanted to contact you but either all mails got lost, or.. ..
Just in case everyone is asleep... www.shadertoy.com
https://twitter.com/iquilezles/status/306158359831277569
wow
wow. thx iq!
Cool, thanks! Now it's time for AMD to fix their shader compiler so the history does not crash :-(
Feature request: add support for JavaScript (for setting uniforms), for keeping state (e.g. games, or controlling more than 2 axes of a camera in a 3d scene) or for doing complicated math stuff that would otherwise have to be done per-pixel (or at least per-vertex, if that were supported). You can use web workers for sandboxing; all browsers that support WebGL also seem to support web workers.
Damn, forgot that web workers, while they cannot directly interact with the DOM, can still issue XMLHttpRequests and a few other things. There are people who try to solve that by nulling all known potentially hazardous objects, but I've got an awkward feeling about that - I don't like blacklisting. Google Caja looks much better, especially since they use whitelisting and have given a lot of thought to possible attack vectors.
Well, I've got a philosophical question: OpenGL 4.x or DirectX 11? I'm talking somewhat capable PCs (and at least Win7, maybe) here, if it isn't obvious. Some people say that DX is better and some people say that OGL is better. But which one has the better shader compiler (w/ optimisation, I think)? Which one produces smaller code (1k, 4k possibilities)? Which one is more suitable for marching? Which one is more suitable for anything else?
I might be an OGL fanboy, but anyway... If you might want to port to other platforms, go OGL; other than that they're pretty much equal, I'd say. You're only running a shader anyway, and speed depends on the shader compiler, which varies between platforms, vendors and versions. The DX compiler seems to suck less for most people, though.
1k and 4k is doable on both platforms too as pouet shows...
So. Uhm. A matter of choice?! ;P
If I wanted to mix raymarching in a fullscreen quad with some regular geometry, how would I go about setting up the camera in the raymarch fragment shader so that it had the same coordinate system as my traditionally-rendered stuff? I'm having a bit of a math-related brain-fade trying to figure out how to turn my projection and view matrices into something the shader can turn into ray_origin and ray_direction...
I wrote about this exact topic in my blog: http://franktovar.de/2012/03/26/combining-raytraced-and-mesh-rendering-with-multisampling-using-direct3d-10-1/
Thanks! I also managed to find http://sizecoding.blogspot.co.nz/2008/12/code-for-moving-camera-in-glsl.html, so I should be able to get things going. All I'm using it for is writing out the ray direction to the gbuffer so that I can render the sky in the next pass. The geometry so far is a mix of mesh and raycast terrain tiles :)
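For anyone else stuck on the same math, here is a minimal sketch of the unprojection route (uInvViewProj is an assumed uniform holding inverse(projection * view), and uv is the fragment position in [0,1]; this uses the GL depth convention where NDC z runs from -1 to 1, while in D3D the near plane unprojects at z = 0):
Code:
vec2 ndc = uv * 2.0 - 1.0;                         // fragment position in NDC
vec4 pNear = uInvViewProj * vec4(ndc, -1.0, 1.0);  // unproject a near-plane point
vec4 pFar  = uInvViewProj * vec4(ndc,  1.0, 1.0);  // unproject a far-plane point
vec3 ro = pNear.xyz / pNear.w;                     // ray origin, world space
vec3 rd = normalize(pFar.xyz / pFar.w - ro);       // ray direction, world space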
@T21, toxie: Actually, I did a comparison of OptiX, Aila-Laine's raytracer and Embree for my master's thesis, of which the report is soon available on the internet. In short, the speed result is that GPU raytracers on a GTX 680 are 1.5-3 times faster than a 4-core IVB i7.
I mainly did a comparison of power efficiency though, and you can have 2-3 such i7s while not consuming more power than a GTX 680, which makes them roughly equivalent in terms of speed if you throw the same amount of energy at them. This ignores hw price, of course, which I haven't looked at.
@xTr1m, iq: thanks - got it working - after (ahem) getting the triangle winding correct and multiplying the projection and view matrices in the correct order... I hate it when I waste time on simple mistakes. I was also going to post a couple of screens last night, but the mother in law is staying in MY coding room and kicked me out.
Hi there. Does somebody know good techniques to add some "glow" around objects?
What I have tried so far:
In the main distance function (f) I accumulate the distance to one object, then use this information to modulate the color. This works but the results are not so nice.
(sorry, but it seems indentation is broken (spaces are automatically removed), at least that's how it looks in the preview :()
Code:
float acc = 0.0;                  // accumulated glow term

float f(vec3 p)
{
    float s = cylinder(p);
    float b = box(p);
    acc += b;                     // accumulate distance to the box (glow around box)
    return min(s, b);
}

// main raymarching loop
vec3 p = ro;
for (int i = 0; i < MAX_STEPS; i++)
{
    float d = f(p);
    if (d < 0.01) break;          // hit a surface
    p += rd * d;
}
float r = 1000.0 / acc;           // add red glow stuff around the box
You can use the number of steps to add some glow near intersections. And it works: the closer the ray passes to a surface (without actually touching it), the more steps it will need to get past it.
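A minimal sketch of that step-count glow, reusing f and the march from the code above (col and the constants here are placeholder choices):
Code:
float steps = 0.0;
for (int i = 0; i < MAX_STEPS; i++)
{
    float d = f(p);
    if (d < 0.01) break;
    p += rd * d;
    steps += 1.0;                            // count marching steps
}
float glow = steps / float(MAX_STEPS);       // high when the ray grazed a surface
col += vec3(1.0, 0.3, 0.1) * glow * glow;    // reddish halo around silhouettes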
Good idea. To only glow a specific object and not the whole scene: should I check if the ray goes below a certain distance from the object to glow? (because it will never touch it)
I'd say do 1000*iterations/acc, or replace 1000 with your favorite constant. Accumulating the box distance is definitely a good approach.
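Another way to restrict the glow to a single object, sketched under the same setup (box() is the function from the post above; the falloff constant is a placeholder): track the ray's closest approach to just that object during the march, then map it through a falloff.
Code:
float t = 0.0;
float minB = 1e5;                    // closest approach to the box so far
for (int i = 0; i < MAX_STEPS; i++)
{
    vec3 p = ro + rd * t;
    minB = min(minB, box(p));        // only the glowing object contributes
    float d = f(p);
    if (d < 0.01) break;
    t += d;
}
float glow = exp(-8.0 * minB);       // near 1.0 when the ray grazed the box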
I have one question regarding raytracers/raymarchers:
When setting up the ray origin and direction, what is the best practice?
To modify the origin or the direction depending on the screen pixel position?
I have seen both practices.
Code:
ro = vec3(x, y, -1);
rd = vec3(0, 0, 1);

vs

Code:
// makes more sense to me
ro = vec3(0, 0, -1);
rd = vec3(x, y, 1);
Same about the initial "z" values: which is best? To go from -1 to 1, or the opposite?
The way I see it, the ray origin is the position of the eye, or the camera. It makes no sense for the camera to have many positions (therefore, no x and y in ro).
Then you need some way of generating ray directions which are fanned out to span a frustum, that's why I take x and y positions for the ray direction vector, which is normalized afterwards.
There's no "viewing plane" as such, there's no near plane clipping. Everything your ray hits is visible by the camera.
Guess it depends on whether or not you want perspective? 1st method = parallel rays (an orthographic projection), 2nd method = point camera with diverging rays (perspective). The 2nd method is obviously a lot more common.
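Concretely, the common perspective setup looks something like this (uResolution and the 1.5 focal-length constant are placeholder values, not from the posts above):
Code:
vec2 uv = (2.0 * gl_FragCoord.xy - uResolution.xy) / uResolution.y; // aspect-correct coords
vec3 ro = vec3(0.0, 0.0, -1.0);       // single eye position
vec3 rd = normalize(vec3(uv, 1.5));   // fanned-out directions; bigger z = narrower FOV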
I think the question relates to the picture: what distance do people use from the camera to the projection plane, right?
Strangely, I rarely if ever see graphics books, lectures, articles, etc. where the system is set up using real-life optics.
Both the eye and a camera CCD are surface receptor arrays, so each registered ray of light is identified by its location on that surface, the true 'origin'.
Side note: 'origin' makes no sense in the real world or in a path tracer.
To emulate a microscope, the light path might look like this:
(I hope my 'english' can be understood.)
Note how the ray ends in this microscope case: on a 2D array.