depth sorting error
category: code [glöplog]
Ok, so I made myself a nice little 3d floppydisk to use in what will probably be my last demo and the biggest one for nerve (wink).
However, I'm getting a bug in the depth sorting code. Exactly what's happening:
Imagine there is a big rectangular quad with width = 100, z pos = 0,
and there is a small quad in the middle of the top quarter of the big quad with width = 10, z pos = 0.2.
Now, while the quads are facing the camera head-on, they are depth-sorted properly by taking z-averages.
However, if I rotate both quads by about 60-75 degrees so they are mostly facing away from the camera, the z-average of the bigger quad ends up closer to the camera than that of the smaller one, so the smaller one gets occluded.
Is there any way I can fix this?
Or do I have to use a z-buffer in which the z-values are interpolated? (Currently my z-buffer uses z-averages, yielding the same result.)
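For reference, the sorting I'm describing is roughly this (a simplified sketch; names and structures are made up, the real code differs):
Code:
#include <stdlib.h>

typedef struct { float x, y, z; } Vec3;

typedef struct {
    Vec3  v[4];     /* quad vertices in view space */
    float sortkey;  /* average view-space z */
} Quad;

/* larger z (further away, assuming +z points away from the camera) first */
static int cmp_far_to_near(const void *a, const void *b)
{
    float za = ((const Quad *)a)->sortkey;
    float zb = ((const Quad *)b)->sortkey;
    return (za < zb) - (za > zb);
}

void sort_quads(Quad *quads, int count)
{
    int i;
    for (i = 0; i < count; i++)
        quads[i].sortkey = (quads[i].v[0].z + quads[i].v[1].z +
                            quads[i].v[2].z + quads[i].v[3].z) * 0.25f;
    qsort(quads, count, sizeof(Quad), cmp_far_to_near);
    /* ...then draw back to front */
}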
Sorting polygons by a single point per polygon does not give a robust ordering, even in the case of non-intersecting triangles (as you observe here; the average z is identical to the z-coordinate of the center point). A z-buffer-less fix for non-intersecting polygons is to find the polygons that have overlapping z-ranges and sort them by finding out which side of each other's plane they are on. By doing the same check for the camera, you should be able to produce a consistent ordering with respect to the camera. This is basically what's called Painter's Algorithm (although there seems to be widespread confusion as to what actually constitutes the algorithm - I consider it to mean anything that sorts polygons to solve occlusion).
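In code, that side-of-plane test might look roughly like this (a sketch with made-up names, restricted to non-intersecting convex polygons and with no handling of cyclic overlaps - not a drop-in implementation):
Code:
/* Decide the draw order of two non-intersecting polygons whose z-ranges
   overlap, by checking which side of each other's plane they are on
   relative to the camera.  Illustrative sketch only. */
typedef struct { float x, y, z; } Vec3;

static Vec3  vsub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static Vec3  vcross(Vec3 a, Vec3 b) {
    Vec3 r = { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
    return r;
}
static float vdot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* signed distance (up to scale) of point p from the plane through v0,v1,v2 */
static float plane_side(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 p)
{
    return vdot(vcross(vsub(v1, v0), vsub(v2, v0)), vsub(p, v0));
}

/* Returns 1 if polygon A (na vertices) should be drawn before polygon B
   (nb vertices), i.e. A is behind B as seen from eye. */
int draw_a_first(const Vec3 *a, int na, const Vec3 *b, int nb, Vec3 eye)
{
    int i, all;

    /* A entirely on the opposite side of B's plane from the camera
       -> A cannot occlude B -> draw A first. */
    float cam = plane_side(b[0], b[1], b[2], eye);
    for (i = 0, all = 1; i < na; i++)
        if (plane_side(b[0], b[1], b[2], a[i]) * cam > 0.0f)
            all = 0;
    if (all) return 1;

    /* B entirely on the same side of A's plane as the camera
       -> B is in front of A -> draw A first. */
    cam = plane_side(a[0], a[1], a[2], eye);
    for (i = 0, all = 1; i < nb; i++)
        if (plane_side(a[0], a[1], a[2], b[i]) * cam < 0.0f)
            all = 0;

    /* If neither test was conclusive the order is ambiguous (cyclic overlap)
       and the polygons would need splitting - or a z-buffer. */
    return all;
}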
Of course, a much easier hack is to simply tessellate the polygons so the problem becomes less likely, or to use a z-buffer. A z-buffer will also trivially solve intersecting polygons, which Painter's Algorithm requires polygon-clipping to solve.
Shouldn't the order in which you paint the objects depend on the camera angle? In other words, don't take the original Z value, but take the transformed Z value depending on the camera angle. Maybe this will suffice to solve the problem.
Cheap trick that should work in your case: sort the triangles using their furthest vertex.
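As a sketch (assuming view-space z grows away from the camera; names are illustrative):
Code:
typedef struct { float x, y, z; } Vec3;

/* Sort key per triangle: the z of its furthest vertex instead of the average.
   Sort descending on this and draw back to front. */
static float furthest_z(const Vec3 *v, int n)
{
    float zmax = v[0].z;
    int i;
    for (i = 1; i < n; i++)
        if (v[i].z > zmax)
            zmax = v[i].z;
    return zmax;
}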
Adok: What on earth makes you think zorke means model-space rather than view-space z?
kusma: zorke wrote:
Quote:
However, if I rotate both quads about 60-75 degrees so they are mostly facing away from the camera, the z-average of the bigger quad is closer to the camera than the z-average of the smaller one, therefore the smaller one gets occluded.
So you are right, he is computing the view space coordinates. Sorry.
give me my world space, noob!
If it's a 3D floppy disk, assuming it's simple, isn't it convex? In which case you simply need backface culling and no other hidden surface removal is required. You don't even need to sort.
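The backface test is just the sign of a dot product; a sketch (assuming counter-clockwise front faces and an eye position in the same space as the vertices):
Code:
typedef struct { float x, y, z; } Vec3;

/* Front-facing if the face normal (derived from the winding order) points
   towards the eye; for a closed convex object, drawing only these faces
   already solves hidden surfaces. */
static int is_front_facing(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 eye)
{
    Vec3 e1 = { v1.x - v0.x, v1.y - v0.y, v1.z - v0.z };
    Vec3 e2 = { v2.x - v0.x, v2.y - v0.y, v2.z - v0.z };
    Vec3 n  = { e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x };
    Vec3 to_eye = { eye.x - v0.x, eye.y - v0.y, eye.z - v0.z };
    return n.x * to_eye.x + n.y * to_eye.y + n.z * to_eye.z > 0.0f;
}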
You can take the projection of the camera eye onto the polygon's plane, but it's a trick.
Bartoshe: That sounds like kusma's suggestion. This projection you mentioned can indicate the side of the plane that the camera eye is on. Could you explain it some more?
Sounds like a local BSP. Anyway, if you don't want to use a z-buffer, you won't escape splitting your object into several ones. Either you use a BSP to know in which order your sub-objects should be rendered, or, as already suggested, you can tessellate. Tessellation can be pre-computed, or done in real time only when necessary. When working on PlayStation 1 games, we had the same issues, and in general our criteria to tessellate were (roughly as sketched after the list):
- Size of the polygon on screen
- abs(dot(plane_normal, camera_lookat))
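Something along those lines (a sketch; the threshold values, names, and which direction of the dot product triggers subdivision are made up for illustration):
Code:
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* screen_area   : projected polygon size in pixels
   plane_normal  : unit polygon normal
   camera_lookat : unit camera view direction */
int should_tessellate(float screen_area, Vec3 plane_normal, Vec3 camera_lookat)
{
    float facing = fabsf(dot3(plane_normal, camera_lookat));
    /* example policy: subdivide polygons that are big on screen
       or seen nearly edge-on (facing close to 0) */
    return screen_area > 4000.0f || facing < 0.3f;
}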
If you mean my suggestion, then almost. It's kind of building a BSP-tree, except you can skip most of the common-case work by only looking at polygons with a screen-space z-overlap. If you also make sure you only do this for polygons that actually overlap, then the real workload might not be that bad.
However, conditional tessellation has some other neat benefits as well, like allowing you to get away with affine texture mapping rather than perspective-correct.
Applying the plane equation gives the orthogonal distance between the eye and each polygon, so the absolute value gives the order to draw, but it's specific to the two-quad case built here.
thanks for the input guys :)
I think I'll go for an interpolated z buffer instead... if I have any problems, they will be posted here!
A z-buffer certainly is the easiest solution, indeed.
hot tip: remember to compute 1/z and interpolate that rather than z directly. No need to take the reciprocal again for the comparison, though.
@kusma: what benefits result from interpolating 1/z? And I don't know if this would work when I'm using a fixed-point z-buffer, which works like:
Code:
*(zbuffer+i) = (int)(zpos * 65536); // zpos is a float; multiplying by 65536 stores it as 16.16 fixed point, preserving the fractional part (to get the actual integer value back, read it and shift right by 16)
never mind: http://www.hugi.scene.org/online/coding/hugi%2016%20-%20cozbuf.htm for the record :)
Sorry, I know you're already going the z-buffer route (and that allows way more things), but did you try the simple trick I mentioned, i.e. using the furthest Z instead of the average? It's super cheap computationally and works for scenes without intersecting objects and polygons that don't vary insanely in size... which sounds like the scene you are dealing with.
zorke: Basically, perspective correction. But when I think about it, I might be wrong. For some reason, I seem to remember some detail that z should *not* be interpolated perspective correctly. *Gets coffee*
p01: what makes the furthest point on the poly better for sorting than any other single point on the poly? It's super easy to construct a case that breaks it, and my experience is that none of these simple choices (nearest, furthest, midpoint) works particularly well.
1/z is linear in screenspace, so that's what you should interpolate.
If you want to interpolate other gradients, you can interpolate u/z linearly, and then perform u/z/(1/z) to get perspective-correct u at every pixel.
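As a concrete sketch of one span (illustrative names and buffer layout, floats for clarity - a real inner loop would be fixed-point):
Code:
/* Interpolate 1/z and u/z linearly across a scanline; 1/z doubles as the
   depth value (bigger 1/z = closer, so the compare flips vs. a plain
   z-buffer), and u is recovered per pixel as (u/z) / (1/z). */
void draw_span(float *zbuffer, unsigned char *framebuffer, int pitch, int y,
               int x0, int x1,
               float inv_z0, float inv_z1,        /* 1/z at span ends */
               float u_over_z0, float u_over_z1)  /* u/z at span ends */
{
    int len = x1 - x0;
    if (len <= 0) return;

    float inv_z  = inv_z0;
    float u_z    = u_over_z0;
    float d_invz = (inv_z1 - inv_z0) / (float)len;
    float d_uz   = (u_over_z1 - u_over_z0) / (float)len;

    for (int x = x0; x < x1; x++) {
        int i = y * pitch + x;
        if (inv_z > zbuffer[i]) {
            zbuffer[i]     = inv_z;
            float u        = u_z / inv_z;      /* perspective-correct u */
            framebuffer[i] = (unsigned char)u; /* e.g. 1D texture index, clamping omitted */
        }
        inv_z += d_invz;
        u_z   += d_uz;
    }
}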
If your platform is fast enough, z-buffer is generally the best solution. For low-end systems (eg Amiga, 486 or lower PC), you generally need to resort to other methods, since performing a test per-pixel is too slow. Z-sorting per poly is very fast, and sorting errors can usually be mostly hidden (some methods are already mentioned above).
Another approach is a span-buffer, where you solve overlapping polygons on a per-scanline basis (where you can use the linear 1/z-based equations to calculate intersections in screenspace).
Midpoint never works.
In my experience using the furthest/closest point + back_to_front/front_to_back drawing works for simple scenes. Yes, it is super easy to find a scene where this doesn't work but that should be enough to render a floppydisk in 3D.
OTOH that's a good opportunity for zorke to cut his teeth on z-buffers and co, so if he's not rushed by a deadline, sure. A z-buffer opens the door to a million little tricks and cool effects.
+1 for taking the furthest point. I remember p01 telling me the same thing back when I was doing 3D in JS 2D canvas (where z-buffering isn't an option), and it improved things a lot. No idea *why* it works better than any other choice, but it does.
Well a floppy disk is pretty thin, that's the problem.
If it has a label polygon on one side that's not as wide as the disk, then while it rotates the hidden label will wrongly appear in front of the disk.