## Raymarching Toolbox Thread

**category:** code [glöplog]

las: your version causes artifacts for me

a simple two-state *sign* implementation would be to use *step*:

**Code:**

`dot(p,normalize(step(0.,p)-.5))`

for the sake of completeness, if one wanted a generic "no zero case" *sign* implementation, that would be:

**Code:**

`step(0.,p)*2.-1.`

In our specific case though we can spare ourselves the multiplication since we're normalizing anyway.

rudi had asked for a tetrahedron in some other thread, so i came up with this compact (so very usable in a 4k) version:


**Code:**

```
// tEtrAhEdrOn
float tetrahedron(float3 p, float a)
{
    return max(
        (abs(p.x)+abs(p.y)+abs(p.z)-a)/3,
        p.y);
}
```

And if you want it really compact, use this macro instead! ;)

**Code:**

```
#define tetra(p,a) max((abs(p.x)+abs(p.y)+abs(p.z)-a)/3,p.y)
```

This is the same as:


**Code:**

`max((dot(abs(p),vec3(1))-a)/3.,p.y);`

Which in turn is just another scaling on my octahedron cut in half:

**Code:**

`max(dot(p,normalize(step(0.,p)-.5))-a,p.y);`

(flip sign on p.y to have it point up) :)

oh, haha!

i was rotating it on all axes... didn't even realize it's upside down! :)


Do you guys have a good source for the theoretical background for raymarching? I'm not sure I understand the background on these.

google a lot, and one idea would be to read the important bits in iq's articles.

**Quote:**

google a lot, and one idea would be to read the important bits in iq's articles.

http://iquilezles.org/www/articles/distfunctions/distfunctions.htm

Like this one? Is that who you mean by IQ?

**Quote:**

Is that who you mean by IQ?

That is the right one, yes.

baordog: Yes. Also study raytracing and the general idea behind it. Raymarching then reduces the intersection test to a series of steps (marching) along the ray until it reaches an object (i.e. an intersection). There are other tricks too; see sphere tracing. Distance functions are a way to represent different primitive objects: as that iq article says, you build more complex objects up from elementary ones. So it's a good idea to separate the internal workings of raymarching (the actual rendering system) from the signed distance functions that represent the objects. Anyway, good luck!

**Quote:**

Do you guys have a good source for the theoretical background for raymarching? I'm not sure I understand the background on these.

I suggest reading this excellent article on ray tracing: https://www.scratchapixel.com/lessons/3d-basic-rendering/introduction-to-ray-tracing

As already pointed out, there is a lot in common.

In both, the main idea is to scan the screen (you can imagine it happening in 2D, like scanning an array of screen pixels) -- then creating a ray from the eye (camera) to the direction of that pixel on screen. For each created ray, you need to then find out if it hits an object, and if so, you find the color and paint it.

There are many different ways to find which object was hit, and the pixel color for that hit. One of them is using Distance Fields and Sphere-Tracing, which is what Inigo Quilez explains in a very cool way.

Here is an easy to follow link from his site: http://iquilezles.org/www/articles/simplegpurt/simplegpurt.htm

And of course, you should definitely browse and read some ShaderToy samples: https://www.shadertoy.com/

Just adding: when you are ray tracing on the CPU, you need to write the "screen scan" loop yourself. When you are ray tracing on the GPU, the GPU does that for you: your pixel shader is called for every screen pixel, so you start from the ray step there.

So, the pixel-shader is called for each screen pixel that is about to be processed. You then find the eye-screen ray, and continue from there, finding which object is hit and the color (and of course anything else you want to calculate). Then you write to the pixel and you're done.

Hope this makes sense.


I never thought Hugi would be a useful resource, but here is another one: Raytracing-Primer

**Quote:**

Is that who you mean by IQ?

That is the right one, yes.

Yeah, I know ray marching is used for some of the cool generative geometry stuff. That's why I couldn't square it with my limited knowledge of ray tracing: I thought ray tracing was usually used to make realistic reflections / shadows / light.

I'll have to read IQ's articles to get a better hold on how you can use that to generate those cool generative geometries.

@baordog

Raytracing and raymarching both just shoot rays through each pixel to find what is visible on the screen (the closest intersection); much easier to explain than rasterization, really :)

Raymarching is an alternative that intersects the geometry iteratively, with less complicated intersection code (compute the distance from a point to a primitive's surface, as opposed to intersecting a line with that surface).

It then has the added benefit that repeating and warping space become very intuitive.

This shadertoy tutorial definitely makes things click after you've read some and don't fully grasp it yet:

https://www.shadertoy.com/view/4dSfRc

There's this example showing a lot of iq's code if you like poking around code more than reading:

https://www.shadertoy.com/view/Xds3zN

Cupe explains a bit about Mercury's distance function library and shows do's and don'ts for distance fields, which was very helpful for me when I was learning more about this:

https://www.youtube.com/watch?v=T-9R0zAwL7s

At the end of the day, you need to know what you can and can't do with a distance function, and how to avoid or solve glitches.


Thanks for the help so far guys!

One thing I have been wondering about lately is how you would implement reflection/refraction with these techniques.

I almost never see people do glass, or semi-transparent surfaces with this technique. Is it expensive, or difficult to implement?


The way I did translucency was to do limited recursion with a ray stack, "ray" meaning a combination of ray-instance variables like vec3 position, vec3 direction, float contribfactor. Instead of just fully reflecting/bouncing the ray off, push or "spawn" a refracted ray on the stack that travels to the other side of the material boundary, ie. from outside to inside, or inside to outside. There will be another refraction when the ray exits the object on the other side. Every refracted and reflected ray only gets a part of the color-contribution factor that its "parent" ray gave it, depending on the material's properties, and when it hits something, it adds/mixes to the total color value. This looked somewhat nice, but didn't seem to add enough compared to the amount of shader code added. Maybe I should try it again sometime.

Thinking about it more, maybe the contribution factor should be a vec3 as well, so the material seems to filter the light passing through it according to its own color. Try it and you'll see. It's a hack anyway. Adjust the parameters until it looks nice.

Ray marching glass: https://www.shadertoy.com/view/llcXD8

Code is a mess (I was doing a daily shader back then, no time for tidying up ;) but hopefully the refraction part is simple enough.

One thing you have to keep in mind is that for realistic glass you need both reflection and refraction: viewed close to parallel, glass is reflective; straight on, it's transparent (see fresnel). There are a few ways to handle that:

- Fake it (like I did in that shader toy - I just add a reflection of the cube map for basic fresnel effect, then the ray gets refracted and the reflection part is ignored)

- Split the ray when you hit glass, and trace both the refracted and reflected parts. This gets messy; especially if you want fresnel reflections of other glass objects, you end up with a ray stack like yzi's

- Monte Carlo it. On hitting the glass, randomly either reflect or refract the ray, with the randomness weighted by the fresnel factor. This gives you a noisy mess, but if you integrate the result of multiple rays you get accurate glass (and pretty much free AA and depth of field; and since you're almost there already, you may as well path trace it all and get amazing lighting too ;)


Glass like things are certainly possible:

http://www.pouet.net/prod.php?which=61721

http://www.pouet.net/prod.php?which=58262

http://www.pouet.net/prod.php?which=70383

http://www.pouet.net/prod.php?which=68695

http://www.pouet.net/prod.php?which=73153

There's also this paper which - among other things - explains how to handle the epsilon to avoid self intersections in case you are trying to do any glass like / translucent stuff:

http://erleuchtet.org/~cupe/permanent/enhanced_sphere_tracing.pdf

There are some further simple tricks involved... Beyond the scope of this post. :D


So, given all the cool functions in the ray marching tool box, I know how to tile a shape across the scene.

What I can't wrap my head around quite yet is how to tile my shape into a circular shape. See, what I want to do is make a big tunnel of primitives and zoom through it.

Or maybe even tile them in a helix?

I don't think I saw anything here addressing curved repetitions like that. What are the basic building blocks?

I found things on shadertoy like:

https://www.shadertoy.com/view/XtXBD2

but I haven't found one simple enough to tell me how it's done.


well, that ain't one of the easy tasks, although it ain't that hard once you know how it works.


Let me try to explain it with some code in between:

If I use float2/float3/float4, that's DirectX's equivalent of OpenGL's vec2/vec3/vec4.

First we need some standard rotation code for one dimension:

**Code:**

```
float2 Rotate1Axis(float2 rotationPosition, float rotationInRadians)
{
    return
        cos(rotationInRadians)*rotationPosition +
        sin(rotationInRadians)*float2(-rotationPosition.y, rotationPosition.x);
}
```

This eats a 2-dimensional point in object space... it's always the two components that aren't the rotation axis, so if you want to rotate around the X axis, you feed it your p.yz here.

The second input is your rotation in radians, ranging from -PI to PI: -PI = -180°, -PI/2 = -90°, 0 = 0°, PI/2 = 90°, PI = 180° ...so -PI and PI describe the same angle, as you've just closed the circle. There are helpers in HLSL/GLSL you can use; in DirectX it's "radians(numberOfDegrees)", e.g. "radians(45)" yields "PI/4". Using these helpers may make it easier to understand what you're doing. There's also "degrees(floatOfRadians)", of course.

We need some more Rotation-Code:

**Code:**

```
float2 RotatePolar(float2 rotationPosition, float segments)
{
    // rotate by half a segment so the segment borders line up
    float2 segmentedSpace = Rotate1Axis(rotationPosition, -PI / (2.0*segments));
    // float2 segmentedSpace = Rotate1Axis(rotationPosition, radians(-180.0) / (2.0*segments)); // version using radians()
    // snap the angle to its segment and rotate by that amount
    return
        Rotate1Axis(rotationPosition,
            floor(atan2(segmentedSpace.x, segmentedSpace.y) / PI * segments) * (PI / segments));
}
```

Ok, this is where it gets a bit hard, and I don't exactly know how to explain it, with my lacking math skills! ;) But let me try:

In the first line of code we calculate the segments (didn't find a better word for it, sorry!) of the axis rotation, meaning we find out what rotation our current point in space (or rather the axis) should get.

In the second line we apply the rotation using the angle we got via "atan2()" (arctangent) from the first line. The "floor()", "/PI*segments" and "*PI/segments" stuff is there to get it into the correct domain, sort of; that's where my brain knows why it works but can't explain it properly! :/

To make use of this code you need to offset a bit in the negative direction.

Also to make it a Tunnel you want to add standard domain-repetition again.

So code for a Tunnel could look like this, goes into your map()-function:

**Code:**

```
// standard domain repetition on the Z axis
p.z = fmod(p.z+1000.0, 2.0) - 1.0;
// polar rotation around one axis -> Z in this case
p.xy = RotatePolar(p.xy, 8.0);
// mandatory offset, so you don't have a big blob of geometry in the middle of the screen
p.y -= 8.0;
return Sphere(p, 1.0);
```

The "+1000.0" is an HLSL-specific thing; fmod() works a bit differently than mod() in GLSL.

I hope this code works; I wrote it all from the top of my head, so if it doesn't, please report back!

I also hope I could at least help a bit with my half-explanation! (I didn't know what I was getting myself into when I started answering here; it turned into quite a wall of text!)

Maybe someone else here can explain it a bit better!

But at least you now have some code to play around with and know a bit better how it works!