Raymarching Beginners' Thread
category: code [glöplog]
From a 3D point (where the ray intersects) plus the surface normal, is it possible to generate u,v map coordinates? (In order to apply a 2D texture to an object defined just by a 3D formula...)
It's certainly very simple, but I've been thinking about this since the beginning of the afternoon and cannot find the solution. Maybe it's not possible.
Basically that's a parametrization problem and you need a mapping from your R^3 position to your R^2 texcoord. There are tons of different ways to map 2d textures on 3d objects.
One trivial example: You have a plane in XZ and you want texture coords for it... just take xz from the hitpoint then.
Another rather simple one: procedural 3d textures - just take xyz from the hitpoint and insert it into your mighty procedural texture formula.
Some buzzwords to google for:
polar coordinates, cylindrical coordinates, spherical coordinates, projective texture mapping...
In case you have a discrete 3D SDF you could even store u/v for each voxel.
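To make the "spherical coordinates" buzzword concrete: if the object is roughly centered at the origin, the hit point alone already gives you usable UVs. A minimal sketch (in Python rather than GLSL, function name is made up):

```python
import math

def spherical_uv(x, y, z):
    """Map a 3D hit point to [0,1]^2 texture coords via spherical
    coordinates. Assumes the object is centered at the origin."""
    r = math.sqrt(x * x + y * y + z * z)
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)  # longitude
    v = 0.5 - math.asin(y / r) / math.pi          # latitude
    return u, v

# A hit point on the +X axis lands in the middle of the texture:
print(spherical_uv(1.0, 0.0, 0.0))  # (0.5, 0.5)
```

In GLSL this is a one-liner with `atan` and `asin`; the only catch is the seam where longitude wraps around.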
Well, after some thinking I realize my question was quite stupid: for a given point and normal you have a plane, which means an infinite number of u,v coordinates...
Uhmmm... and? That doesn't tell you anything about a proper parametrization for UVs.
"proper" is of course up to what you want to do with your texture.
OK I give up and ask for help -- been fighting with this thing (on and off) for days now. Anyone care to look at this screenshot?
Let's say I wanna have both an infinitely wavy "floor" and an infinitely wavy "ceiling". I get "holes" if the xz position of the camera is near the floor or ceiling; farther away it looks fine.
Distance functions:
float DistFloor (const in vec3 vPos) {
    float f1 = 0.5 + sin(vPos.x);
    float f2 = vPos.y;
    return (f2 < f1) ? f1 - f2 : f2 - f1;
}

float DistCeiling (const in vec3 vPos) {
    float f1 = 30.5 + cos(vPos.z);
    float f2 = vPos.y;
    return (f2 < f1) ? f1 - f2 : f2 - f1;
}
The branching was just experimental, to (sub-optimally) support the cam being either above or below either of those planes, since elsewhere I'm interpreting negative distance as "inside an object".
But one cannot ever be "inside a plane", hence distance should always be positive.
Now what's up with those holes? The rays at those points obviously completely skip the plane and travel further into space. But why... at each marching step, no matter the position or epsilon or direction, the "terrain" / height-field / plane distance functions above should return a mathematically correct value matching the overall curvature, right? I'm not getting it! What am I missing? Sucks to be a n00b :D
OK, think I got it; seems to be similar to the non-distance-preserving deformations problem... if I reduce the step size as in vPos = vCamPos + (vDir * fTotalDist * 0.25) the issue goes away, even with a factor as high as 0.7. I guess it's better to "step back" from inside the distPlane() func only, though, so that other simpler geometry (spheres lol) doesn't suffer from the slowdown. Let's see :P
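The "reduced step" workaround above can be sketched as a plain sphere-tracing loop where each step is scaled by a factor below 1, so the non-distance-preserving height field is never overshot. This is a CPU-side Python sketch with illustrative values, not the poster's actual shader:

```python
import math

def dist_floor(x, y):
    # Signed vertical "distance" to the wavy floor from the post.
    return y - (0.5 + math.sin(x))

def march(px, py, dx, dy, scale, eps=1e-3, max_steps=512):
    """Sphere-trace along (dx, dy) from (px, py), scaling every step
    by `scale` (< 1) as in the vDir * fTotalDist * 0.25 trick."""
    t = 0.0
    for _ in range(max_steps):
        d = dist_floor(px + dx * t, py + dy * t)
        if d < eps:
            break          # close enough: count it as a hit
        t += d * scale     # step only a fraction of the estimate
    return t

# A 45-degree downward ray from (0, 5) converges onto the floor:
s = math.sqrt(0.5)
t = march(0.0, 5.0, s, -s, scale=0.5)
```

The cost is more iterations; the gain is that the vertical distance estimate, which is too optimistic for slanted rays, no longer punches through the surface.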
No, the problem is that you're trying to raymarch an infinitely thin plane. You need *some* volume.
So either give your planes some thickness or just make them completely solid on the outside.
Are you sure that's it? If there's no y-distortion (i.e. a flat plane at a given y position -- or even sloping up or down, but linearly) with no sin()/cos() deformations, the "infinite thinness" does not seem to be a problem -- plus, as above, a reduced step size resolves this, but I don't like to step more than should be necessary.
Well... a "plane with some volume" would just be an infinitely wide and deep box of some height, right? -- guess I can try =)
Code:
return (f2 < f1) ? f1 - f2 : f2 - f1;
Why measure absolute distance at all? Why not signed distance:
Code:
return f1 - f2;
OK with a box in place of the plane -- 0.125 units high, 9999 units wide and deep -- and this deformation function:
float DistWavyThinBox (const in vec3 vPos, const in vec3 vBoxSize) {
    float d1 = DistBox(vPos, vBoxSize); // standard box distance
    float d2 = 0.5 + sin(vPos.x) * cos(uTime);
    return d1 - d2;
}
I get almost the same results -- with each step reduced by a factor less than 1, looks OK; with non-reduced stepping, holes and artifacts. But then of course that's just what iq said at http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm under "distance deformations".
Well I knew about that before but somehow I was just hoping I could ray-march a however-deformed plane without "smaller than should be necessary" step sizes, seeing as the height function should return a smooth correct value at any coordinate.
doomdoom that was just a leftover artifact from experimentation. Of course I'd revert to the shorter non-branching subtraction once this issue would be resolved =)
Either you go with return abs(f1-f2)-.1 to have a thin floor, or, if you never intend to be below the floor anyway, you just return f1-f2;
Of course you may overshoot an infinitely thin plane, as your distance function is not correct for anything but purely vertical rays, so if you really need a really thin plane you'll have to reduce your step size a bit.
voxelizr: I don't understand how you can see anything through the floor if you use a signed distance function. I can understand marching past the precise boundary of the floor because the distance function is incorrect, but your rays seem to just march on if they miss the surface, even while the distance function is returning negative values. I can't imagine why that happens. Shouldn't you have logic more or less like this:
Code:
while (true)
{
    float minimumDistance = distFloorA(...);
    minimumDistance = min(minimumDistance, distFloorB(...));
    minimumDistance = min(minimumDistance, distTorus(...));
    minimumDistance = min(minimumDistance, distSphere(...));
    ...
    if (minimumDistance < epsilon) break;
    vPos += minimumDistance * rayDirection;
}
And if so, how can you see anything through the floor? The ray should stop as soon as any of the distance functions returns less than zero.
That's because, as per the above, the distFuncs with the temporary (f2 < f1 ? f1 - f2 : f2 - f1) logic for the floor/ceiling always returned positive values, so the ray marched on and on until maxDist was reached, hence "holes" rather than... artifacts. In fact, it was the weird artifacts that got me started on this...
OK I resolved this, Psycho was of course correct, the step-size needs to be down-adjusted. Now I'm banging my newbie-head once again... trying to graduate into "real" terrains. See how fine this now works for sine/cosine-based "terrains":
Now the next experiment toward a terrain -- instead of a height function based on sin()/cos() maths, let's use a noisy heightfield texture! Well... that doesn't go down so well at all.
I'm not even loading an image file at this point. I'm just uploading a 3x3 R16F data texture, specifically the following grid:
1, 1, 1
1, 10, 1
1, 1, 1
This should give nice pyramids. I'm within the min/max-height range of my previously used sin()/cos()-based heightfield distance function (-10 .. 10) -- that one used in the above screenshot was:
float DistFloor (const in vec3 vPos) {
    float fHeight = 10.0 * sin(vPos.z * 0.125 /* SmoothCurve(uFlags[FLAG_TIME]) */) * cos(vPos.x * 0.125);
    return abs(vPos.y - fHeight);
}
Now the above data texture that's basically a flat floor with a single elevation is set to GL_LINEAR filtering and GL_REPEAT wrapping. The idea is that the GPU should linearly interpolate a pyramid out of the flat-floor elevation values (1) and the peak elevation value (10). Just like sin()/cos() returns a smooth curve for any given value, so should the hardware bilinear texture interpolation give a smooth elevation from those control-points, right?
Wrong:
I'm getting pyramids all right. But once again they're full of holes! Surely can't be the step-size *again*? I believe it's fairly conservative and captures the sin/cos curves within the same elevation interval after all.
Or is GPU texture filtering just really not as linear as I'd hoped? There still seem to be discrete steps (causing holes). This is from just beneath the "pyramid" (rather, the bumpy floor):
Gonna play with this more of course. Just wondering who else ran into such issues before? I guess I need a bigger texture to begin with; seems like the hardware linear interpolation is still too discrete. Though when sending varyings from vertices to fragments, it works much more smoothly, just like I need it. Isn't the same stuff done with textures? Might try to roll my own interpolation then, I guess.
(And yeah I really don't want to evaluate snoise(x, z) at each marching step for 10s or 100s of thousands of pixels.)
Btw the texture-replacement distance function was similar to the sin/cos one:
float DistFloor (const in vec3 vPos) {
    float fHeight = texture(uTex0, vPos.xz * 0.05).r;
    return abs(vPos.y - fHeight);
}
I think what people were trying to explain is that your problem was (and still is) that you are trying to raymarch infinitely thin surfaces, and due to numerical issues you get holes. The easiest way to avoid that is using signed distances, i.e. instead of
return abs(vPos.y - fHeight);
you use
return (vPos.y - fHeight);
or (if you have problems with negative distances)
return max(vPos.y - fHeight, 0.0);
or (if you want to go below your scene as well)
float thicknessOfFloor = 2.0;
return abs(vPos.y - fHeight) - thicknessOfFloor;
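A tiny numeric illustration of why the variants above behave so differently (Python, for a hypothetical sample point 0.3 units *below* a floor at height 0):

```python
def dist_abs(y, h):            # unsigned: always reports "outside"
    return abs(y - h)

def dist_signed(y, h):         # signed: negative below the surface
    return y - h

def dist_slab(y, h, thick):    # floor with thickness: a solid slab
    return abs(y - h) - thick

# The unsigned form tells the marcher to keep stepping forward,
# deeper into the floor...
print(dist_abs(-0.3, 0.0))     # 0.3 (positive: "march on")
# ...while the signed / thickened forms report "inside", so the
# marcher can stop or step back:
print(dist_signed(-0.3, 0.0))  # -0.3
print(dist_slab(-0.3, 0.0, 2.0))  # -2.3
```

So with abs() a ray that slightly penetrates the surface is actively pushed out the far side, which is one plausible source of the holes.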
Thanks chock! I was hoping that would be it, but -- no difference with any of those alternatives sadly. So that's not it, at least not the whole explanation.
Interestingly, when using a floorThickness of 5, the (still porous) bumps have their upper half chopped off. Kinda makes sense though.
voxelizr: The texture interpolators on your GPU have limited precision, so you're bound to get some variation of this issue no matter what you do. But make sure you use a floating-point texture; some hardware has higher-precision interpolators for those.
voxelizr: Now that really baffles me. Would you mind posting the whole shader on pastebin or to send it to me (chock at nuance-family point de)
Hell yeah, will do! Will port it to WebGL and put it on GLSL Sandbox shortly.
voxelizr: Yes... encountered this problem back when doing nevada in 2008 ;)
IIRC I figured the texture samplers had 6-bit subtexel precision, and while the Radeons were regular, the GeForces jittered somewhat.
It's especially a problem if you want to do multi octave perlin noise landscapes..
And btw, you should still return the signed distance to the *volume*, instead of the absolute distance to the *surface*, i.e. no return abs(vPos.y - fHeight);
voxelizr: it looks to me like the lack of signing causes it, too. Does your marching function handle negative distances correctly?
What I suspect happens: the distance function is based on the vertical distance to the texture, NOT the distance to the nearest surface. Worst case, if you had a vertical 'wall' and the ray is travelling horizontally along the ground, at a point just next to the wall the distance function might return a value of 1. If it moves forwards by 1, it's deep inside the wall -- or worse, it passes through the wall completely and continues on the other side.
Your use of abs() in the distance function makes it even worse. If the ray steps slightly inside that wall, the distance from the surface might be -0.1. But your function says the distance is +0.1, so the ray will move forwards - deeper into the floor. Then it might have a distance of -0.2, but you get +0.2, and the ray accelerates out the other side of the material.
It's essential to return a negative value (ideal) or worst case max(dist, 0.) like chock said. If you use the max(dist, 0.) you'll very likely see artefacts without a very small step size (which hurts performance), so you really want to return a signed distance.
Make sure your march algo is handling negative values correctly. If the distance is negative, it should step backwards to the surface. I suspect this might be your problem.
Btw, I've done some work on this myself, and it works well :) Small tip: use mipmapping on your textures, and select higher mipmap levels the further away from the camera you get. This gives you very cheap smoothing, which can increase visual quality *a lot*, and also lets you use bigger step sizes at higher distances, which helps speed too.
You can also use a normal map to save the normal calculation step (this is an 'extra' texture read of course, but the alternative is several texture reads from the height map, so you still gain).
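The "step backwards on negative distance" advice above can be sketched as a march loop where a negative d moves the ray back out of the surface instead of pressing on. A Python sketch with an illustrative scene (not the poster's exact shader):

```python
import math

def dist_scene(x, y):
    # Signed wavy-floor field, just for illustration.
    return y - (0.5 + math.sin(x))

def march_signed(px, py, dx, dy, eps=1e-3, max_steps=256):
    """March loop that handles negative distances: a negative d
    steps the ray *backwards* toward the surface."""
    t = 0.0
    for _ in range(max_steps):
        d = dist_scene(px + dx * t, py + dy * t)
        if abs(d) < eps:
            break        # converged onto the surface
        t += d           # d < 0 moves t back out of the floor
    return t

# Straight-down ray from (0, 5): the floor there is at y = 0.5,
# so the ray should stop after travelling 4.5 units.
print(march_signed(0.0, 5.0, 0.0, -1.0))  # 4.5
```

Note the hit test uses abs(d) < eps rather than d < eps, so the loop terminates cleanly from either side of the surface.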
chock: here's the beast, codebase reduced to what's currently required to run, fog removed... DistFloor() is line 32ff... texture stuff is commented-out as there isn't any in the sandbox.
Psycho: ah well, if that's the case it seems like I can't rely on HW samplers to do my maths for me? Damn shame.
Re "signed distance to volume, not surface" -- I get that for normal geometry, but aren't "terrain planes" a special case with no intrinsic volume? But even for normal geometry... the ray just marches to the surface of the sphere/box/whatever.
psonice -- great advice, will look into these points. I suspect you may be correct that my marching isn't handling negative distances properly. I tweaked it at some other point so that inside-geometry "sort of works" but precisely this may now break my "terrain marching". Great idea on the mipmaps too. My first brute-force try was a 1200x1200 height-map and I got way less than 1fps, I think (max-dist of 3000000 doesn't help here too I guess). At least I was glad to see that sampling isn't that slow with small maps. By the way, how do you interpolate between height values, just linearly or with some sort of fractal magic? The normal map idea sounds really smart but doesn't linear interpolation in-between fetched normal texels give you the weirdest artifacts?