nVidia RTX innovation or marketing bullshit?
category: code [glöplog]
So, what's your opinion on nVidia's new announcement of its "10 years in the making" accelerated realtime raytracing cards?
Innovation and marketing bullshit aren't mutually exclusive.
At face value it's cool as heck, but I'll save the excitement for after I see actual performance numbers. The "10GRays/s" sounds like a somewhat cherry-picked figure since the memory bandwidth of the device would give a bound of about 62 bytes per ray, enough to read maybe one triangle and a single BVH node.
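Back of the envelope, assuming the ~616 GB/s peak bandwidth of the top card and taking the 10 GRays/s at face value (the struct sizes below are just typical compact layouts I made up, nothing NVIDIA has published):
[code]
#include <cstdio>

// Typical compact data layouts, purely illustrative guesses.
struct BvhNode {            // 32 bytes: an AABB plus child/leaf indices
    float bounds[6];
    int   left_or_first;
    int   count;
};
struct Triangle {           // 36 bytes: three positions, nothing else
    float v0[3], v1[3], v2[3];
};

int main() {
    const double bandwidth_bytes_per_s = 616e9; // assumed peak GDDR6 bandwidth
    const double rays_per_s            = 10e9;  // the quoted 10 GRays/s
    std::printf("budget per ray: ~%.0f bytes\n",
                bandwidth_bytes_per_s / rays_per_s);
    std::printf("one node + one triangle: %zu bytes\n",
                sizeof(BvhNode) + sizeof(Triangle));
}
[/code]
Which is roughly the "one triangle and a single BVH node" per ray mentioned above, before any caching.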
I'm very excited about a future where raytracing is finally practical for all kinds of use but a) I'd like to have a single trace() function at shader level instead of a billion new hoops to jump through, and b) I'd like to see it on all vendors.
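What I mean by a single trace() is literally one call that takes a ray and returns the nearest hit, with any recursion left to the caller. A toy CPU sketch of that shape (one hard-coded sphere, every name invented for illustration, not any vendor's API):
[code]
#include <cmath>
#include <cstdio>

// Toy sketch of the "single trace() call" interface shape; the scene
// (one hard-coded unit sphere) and all names are made up for illustration.
struct Vec { float x, y, z; };
static Vec   sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Hit { bool valid; float t; };

// The one entry point: shoot a ray, get the nearest hit back.
Hit trace(Vec origin, Vec dir) {
    const Vec center = {0, 0, 5};               // unit sphere at z = 5
    Vec oc = sub(origin, center);
    float b = dot(oc, dir), c = dot(oc, oc) - 1.0f;
    float disc = b * b - c;
    if (disc < 0) return {false, 0};
    return {true, -b - std::sqrt(disc)};
}

int main() {
    Hit h = trace({0, 0, 0}, {0, 0, 1});
    std::printf("hit=%d t=%.2f\n", h.valid, h.t);   // expect hit=1 t=4.00
}
[/code]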
@gargaj xor should be a word
that "trace" function would always have to be somewhat async since it's run time isn't constant
It just works...
https://www.youtube.com/watch?v=uJ-TMAD8dJY
well, people are complaining they're fucking expensive for just a 15% performance boost on existing, conventional benchmarks/games, but that's comparing apples with pears. looking forward to the 2170 when the RT tech is hopefully more established in modern day gaming :) it looks bloody sexy in this https://www.youtube.com/watch?v=KJRZTkttgLw, but a day later they released a showcase of Shadow of the Tomb Raider with apparently RTX arch shaders, and to be fair you hardly see any lighting difference/improvement over more conventional non-RT techniques other than magically disappearing beer glasses (i assume they forgot to port the glass shader or smth :P)
xernobyl: that property also holds e.g. for texture fetches, which can easily take a hundred cycles, but if the data is in some cache it will be much faster. The GPU swaps out the warp that is waiting for such a request (memory access, texture access, instruction fetch, ...) and runs a different warp that is not blocked. This "latency hiding" makes it appear, from the perspective of the individual warp, as if the request didn't take any time, because the warp simply wasn't running. The rescheduling is super lightweight (but it only works if the workload is sufficiently heterogeneous between warps; if all eligible warps are waiting for something, you get an actual stall and your GPU is underutilized).
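To put a rough number on how much parallelism that needs, with assumed round figures (real latencies and issue rates vary by GPU generation):
[code]
#include <cstdio>

// Back-of-the-envelope for the latency hiding described above, using
// assumed round numbers; real values differ between GPUs.
int main() {
    const int memory_latency_cycles = 400; // assumed global-memory latency
    const int issue_interval_cycles = 4;   // assumed cycles between issues per warp

    // Little's law: to keep the scheduler busy while requests are in flight,
    // you need roughly latency / issue-interval warps with independent work.
    std::printf("warps needed to hide the latency: ~%d\n",
                memory_latency_cycles / issue_interval_cycles);
}
[/code]
so on the order of a hundred warps' worth of independent work before a long memory wait stops hurting.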
It just works... you open up your commercial engine, add a cube, add a light source, turn on RTX, submit to Revision... it just works!
Does anyone have any details on how they've implemented it? Like how the scene description is delivered to the GPU...
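From what has been public since DXR was shown at GDC earlier this year: you hand the API plain vertex/index buffers plus per-instance transforms, the driver builds opaque acceleration structures from them (one bottom-level structure per mesh, a top-level structure over the instances), and shaders trace against that top level. A conceptual sketch of the two-level layout (my own structs, not the real D3D12 ones):
[code]
#include <cstdint>
#include <vector>

// Conceptual sketch only -- it mirrors the two-level scheme DXR exposes,
// but these are illustrative structs, not the actual API; the built BVH
// itself stays opaque inside the driver.
struct BottomLevel {                       // one per mesh: plain geometry in
    std::vector<float>         positions;  // x, y, z per vertex
    std::vector<std::uint32_t> indices;    // three per triangle
};

struct Instance {                          // a placed copy of a bottom level
    const BottomLevel* blas;
    float              transform[12];      // 3x4 object-to-world matrix
};

struct TopLevel {                          // what rays are actually traced against
    std::vector<Instance> instances;
};

int main() {
    BottomLevel tri{{0,0,0,  1,0,0,  0,1,0}, {0, 1, 2}};
    TopLevel scene{{{&tri, {1,0,0,0,  0,1,0,0,  0,0,1,0}}}};
    return scene.instances.size() == 1 ? 0 : 1;
}
[/code]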
it's all possible because nvidia licensed sega blast processing technology
NVidia does what AMDon't
It's probably a bit of both, yes.
Knowing nvidia, the numbers are probably real, but only for some synthetic test case that has nothing to do with real-world performance.
That doesn't mean that the cards aren't absolute beasts, though..
and just after he says "all the shadow mapping artifacts are gone" there's some kind of a depth biasing/post-filtering problem
It's actually pretty interesting: RTX technology seems to combine rasterization with raytracing where required (lighting, reflections and stuff). It's good to see graphics become more photorealistic with the support of the GPU, without faking too much stuff that is. :)
NVIDIA GeForce RTX - Official Launch Event
Quote:
...without faking too much stuff that is. :)
We're not gonna fake it
No! We ain't gonna fake it
We're not gonna fake it
Anymooooore
Or maybe we will, cuz a cubemap is cheaper than 16k rays. :)
I foresee a transition period where tracing rays will only be viable for small and super sharp surfaces. And not with PBR, which would really be awesome.
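That transition-period compromise would presumably look something like this threshold check (everything here is a made-up stub, just to show the shape of the fallback):
[code]
#include <cstdio>

// Made-up stubs, only here to show the shape of the fallback logic.
struct Color { float r, g, b; };
Color trace_reflection_ray() { return {1.0f, 0.9f, 0.8f}; } // expensive, sharp
Color sample_cubemap()       { return {0.5f, 0.5f, 0.6f}; } // cheap, pre-baked

// Trace only where the result is visibly better than the cubemap fake:
// sharp (low-roughness) surfaces. Everything else keeps the cheap path.
Color reflection(float roughness) {
    const float kTraceThreshold = 0.2f; // arbitrary cutoff for this sketch
    return roughness < kTraceThreshold ? trace_reflection_ray()
                                       : sample_cubemap();
}

int main() {
    Color mirror = reflection(0.05f); // traced
    Color wall   = reflection(0.8f);  // cubemap
    std::printf("mirror r=%.1f  wall r=%.1f\n", mirror.r, wall.r);
}
[/code]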
Thinking of it, it really is about complexity (polygons) vs fidelity.
And complexity is hard to give up.
So it only gets interesting once you no longer need more polygons.
we gotta take these lies
and make them true
somehow
Funny how people talk about raytracing as not faking it, because obviously calculating rays bouncing off polygons is a 100% accurate representation of reality.
@Zplex, this made my day!
Zplex: awesome, thank you! :-D
sauli: and mostly from simply treating light as particles instead of waves