John Carmack plans to switch to realtime raytracing...
category: general [glöplog]
@iq: actually mi (mental images) wasn't bought only to demonstrate NVIDIA's power.. i can't go into details, but be sure that NVIDIA is of course aware that RT must be used in the future, no matter how..
Gargaj: If you have specific anecdotes in mind, just give me a few keywords so I can dig them up myself and stop bothering you for details. :)
toxie (and Auld), then that's good news to my ears. For now I just dream of raySamplers in GLSL (plus raycast() and shadowRayCast() funcs).
it would be nice indeed. what about gatherPhotons() too?
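Purely to make the wish concrete, here is a sketch of what a fragment shader could look like with such an extension. None of this exists in any GLSL spec: raySampler, raycast() and shadowRayCast() are the names dreamed up above, and the rayHit struct and its fields are invented here as well.

// PURELY HYPOTHETICAL GLSL -- no such extension exists. raySampler,
// raycast() and shadowRayCast() are the imagined API from the posts
// above; the rayHit struct and its fields are made up for this sketch.

uniform raySampler scene;    // the prebuilt scene / acceleration structure
uniform vec3 lightPos;

varying vec3 rayOrigin;
varying vec3 rayDir;

void main()
{
    // primary ray into the static scene
    rayHit hit = raycast(scene, rayOrigin, normalize(rayDir));
    if (!hit.didHit)
        discard;

    // one shadow ray towards the light: 1.0 = visible, 0.0 = occluded
    float lit = shadowRayCast(scene, hit.position, lightPos);

    // (a gatherPhotons() for photon mapping would slot in the same way)
    float diffuse = max(dot(hit.normal, normalize(lightPos - hit.position)), 0.0);
    gl_FragColor = vec4(hit.albedo * diffuse * lit, 1.0);
}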
iq: The most obvious problem with GLSL support for ray intersections is that efficient data structures can be slow to build (i.e. should/must be done offline), and can be very much implementation dependent. It's not a complete show-stopper: you could always standardize on a given layout, or supply a slow API method for building these structures combined with a way of serializing/deserializing them to memory (and/or disk), and have applications cache the results. The problem actually reminds me a bit of texture compression. I believe we'll see something like this in the not-too-distant future.
@gargaj: wth? Doom? where's the love for the chainsaw and rocket launcher? You will have to nullify your statement immediately, or else I will call upon my good friends in ODD and make them release 50 new moose demos before summer! THIS IS NOT A HOLLOW THREAT!!!!1
lug00ber: DO YOUR WORST
I was wondering: can you, in current geometry shaders, use texture access, an array of temporaries, and an integer to index that array of temporaries? It's all we fucking need to traverse a kd-tree on the GPU and output the triangles that shall be rasterized. The day you do this, your 3D engines will fly... currently, CPU raytracers can outperform a GeForce 8800 GTX on really big models...
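For illustration, a minimal sketch of the traversal loop being described, in the spirit of the published GPU kd-tree traversal work of that era. The node packing, the texture layout and the fetchNode() helper are assumptions invented here; a complete version also needs leaf/triangle intersection and should clip the ray's [tmin, tmax] interval so it can skip far children instead of always pushing them.

// Minimal sketch of stack-based kd-tree traversal in a shader, using
// exactly the three ingredients above: texture fetches for node data,
// a local array of temporaries, and an integer indexing that array.
// The node layout below is an assumption, not an existing format.
#version 130

uniform sampler2D nodeTex;   // assumed texel layout per node:
                             // x = split pos, y = near child index,
                             // z = far child index, w = axis (3.0 = leaf)
uniform vec3 rayOrig;
uniform vec3 rayDir;

vec4 fetchNode(int i)        // nodes stored row-major in a 1024-wide texture
{
    return texelFetch(nodeTex, ivec2(i & 1023, i >> 10), 0);
}

vec4 traverse()
{
    int stack[32];           // the "array of temporaries"...
    int sp = 0;              // ...indexed with an integer
    int node = 0;            // start at the root

    for (int iter = 0; iter < 512; ++iter)  // bounded loop for the compiler
    {
        vec4 n = fetchNode(node);
        if (int(n.w) == 3)                  // leaf reached
        {
            // intersect the leaf's triangles here; on a hit, return it
            if (sp == 0) break;             // stack empty: traversal done
            node = stack[--sp];             // otherwise pop a far child
        }
        else
        {
            int axis = int(n.w);
            int nearChild = int(n.y);
            int farChild  = int(n.z);
            if (rayOrig[axis] > n.x)        // ray starts on the far side:
            {                               // swap which child is "near"
                int tmp = nearChild; nearChild = farChild; farChild = tmp;
            }
            stack[sp++] = farChild;         // visit the far child later
            node = nearChild;               // descend into the near child
        }
    }
    return vec4(0.0);                       // no hit found in this sketch
}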
nystep: that's just wrong!
a) you gotta build that damn acceleration structure -> at least O(n), if not O(n log n)
b) if you say "oh my, the mesh is static anyway, so we can just build the structure once and keep it in memory", then consider the exact same thing for rasterization too! so while the rasterizer may still need to render a few more triangles than the raytracer, it's also possible to use a data structure to find out which portion of the triangles actually must be rendered by the GPU!
yep, but the fact that it's done like that today (occlusion queries on scene kd/oc-trees) doesn't mean that what he says wouldn't work too.
What I like about his idea is that it's all GPU based, so none of the synchronization and stall problems you get with the current cpu-plus-gpu occlusion-query octree-traversal approaches. Also, the granularity of today's triangle chunks is around 8k triangles, which might (or might not, I'm not sure) be a disadvantage compared to what he describes. I don't know, I would happily give his idea a try if I had the time :)
toxie, I'm talking about static scenes of course :) .. so we can forget about the time it takes to build an acceleration structure. If it takes a couple of days, we just don't care. The same thing for rasterization? That's what I said in previous posts: you can build all the acceleration structures you want and use all the LOD you wish, you always end up with popping artifacts, and the rendering-time-versus-triangle-count curve always goes in favor of raytracing. Rasterization time grows roughly linearly with triangle count while raytracing time grows roughly logarithmically, so there is always a magic number of triangles above which any raytracer can be faster than any rasterizer, be it hardware or software. If you use more CPU cores, this crossover point arrives sooner in favor of raytracing; if you buy tri-SLI, it only pushes the point further out in favor of rasterization. But in the end, considering that you have to multiply the number of triangles by roughly 1000 to see raytracing speed divided by 2, there is always a point beyond which raytracing is faster.
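To make that crossover claim concrete, here is the back-of-the-envelope version (the constants a and b are assumptions standing in for per-triangle and per-tree-level cost, nothing measured):

$$ t_{\text{raster}}(n) = a\,n \qquad t_{\text{rt}}(n) = b\,\log_{10} n $$

Going from n to 1000n multiplies the raster time by 1000 but only adds 3b to the raytracing time; from 10^3 to 10^6 triangles it takes t_rt from 3b to 6b, which is exactly the "multiply triangles by 1000, halve the speed" figure above. And since a*n outgrows b*log10(n) for any positive a and b, a crossover triangle count n* always exists.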
and well, in my vision of things, you should use raytracing only for what it's good at: static scenes. If you need dynamic data, you can take advantage of both worlds by merging the frame/depth buffers on the GPU side after an upload. So that's my vision: raytracing for the static data (most of your game scenes, actually; just consider what percentage of the geometry is dynamic in a game like Crysis, I'd be very surprised if it's above 10) and rasterization for the dynamic triangles...
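For illustration, a minimal sketch of that merge step as a full-screen GLSL pass, assuming both passes already wrote color and depth into textures in the same depth space (making the raytracer output depth values compatible with the rasterizer's is the real work, and is glossed over here):

// Minimal sketch of merging a raytraced static pass with a rasterized
// dynamic pass on the GPU. Assumes both passes wrote color + depth to
// textures in the SAME depth space; the sampler names are made up here.
#version 120

uniform sampler2D rtColor;    // raytraced static geometry
uniform sampler2D rtDepth;
uniform sampler2D rastColor;  // rasterized dynamic geometry
uniform sampler2D rastDepth;

void main()
{
    vec2 uv = gl_TexCoord[0].xy;
    float dRt   = texture2D(rtDepth,   uv).r;
    float dRast = texture2D(rastDepth, uv).r;

    // per-pixel depth test: keep whichever surface is nearer
    gl_FragColor = (dRast < dRt) ? texture2D(rastColor, uv)
                                 : texture2D(rtColor,   uv);
}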
ah okay.. but all that hybrid stuff scares me to death! cause even current RT and/or GPU engines are already complex enough, and a hybrid will then contain even more hacks..