pouët.net

nVidia RTX: innovation or marketing bullshit?

category: code [glöplog]
Give a creative man a pencil and he creates a masterpiece.
Give a dumb man lots of tools and he still can't create shit.
RTX could be a pencil for a creative man.
added on the 2018-08-24 13:43:56 by Volantis
Useless crap. We already have this https://www.youtube.com/watch?v=x19sIltR0qU
Looks very promising. If they can get hardware acceleration going full on with good-quality software optimizations, it looks very cool to be honest. I love raytracing and how lifelike it looks compared to regular rasterization; voxels are very cool too. The Delta Force 1 & 2 games still look unique and amazing with their voxel engine.
added on the 2018-08-26 18:01:26 by ae
According to this page, the RT core is used for computing ray-triangle intersections and BVH traversal.

My guess is that the SM cores are also used for all the rest you need when doing raytracing.

There is also a new tensor core which can do some computations on 4x4 matrices (more info here). AFAIK it can be used for AI/physics/deep learning.
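To make the ray-triangle part concrete: this is roughly the kind of per-triangle test that gets baked into fixed function, here as a plain C++ sketch of the classic Möller-Trumbore algorithm (illustration only; the Vec3 helpers are made up, and it's certainly not how the hardware actually implements it):

    // Minimal Möller-Trumbore ray/triangle test: the kind of work the RT core
    // reportedly does in fixed function. Plain C++, for illustration only.
    #include <cstdio>
    #include <cmath>

    struct Vec3 { float x, y, z; };
    static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
    static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Returns true (and the hit distance t) if the ray (orig, dir) hits triangle (v0, v1, v2).
    bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t)
    {
        const float eps = 1e-7f;
        Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
        Vec3 p = cross(dir, e2);
        float det = dot(e1, p);
        if (std::fabs(det) < eps) return false;   // ray parallel to the triangle plane
        float inv = 1.0f / det;
        Vec3 s = sub(orig, v0);
        float u = dot(s, p) * inv;
        if (u < 0.0f || u > 1.0f) return false;   // outside barycentric range
        Vec3 q = cross(s, e1);
        float v = dot(dir, q) * inv;
        if (v < 0.0f || u + v > 1.0f) return false;
        t = dot(e2, q) * inv;
        return t > eps;                           // hit must be in front of the origin
    }

    int main()
    {
        float t = 0.0f;
        bool hit = rayTriangle({0,0,-5}, {0,0,1}, {-1,-1,0}, {1,-1,0}, {0,1,0}, t);
        std::printf("hit=%d t=%f\n", hit, t);     // expected: hit=1, t=5
    }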
added on the 2018-08-27 12:45:10 by Tigrou
They're stepping up their game too fast. To my knowledge nobody even uses the tensor cores (for ML, that is) to their full extent, and those have existed for quite some time now. The best I've read about was a speedup of 2x, compared to the theoretical speedup of ~9x. Maybe that's why the audience was talked into loving raytracing: driver developers and game developers are not ready yet to properly use RTX.
added on the 2018-08-27 13:07:18 by HellMood
Ray tracing is conceptually simple and very often useful, even if just for hard shadows and perfect mirrors. What makes the RTX thing risky is requiring new hardware; most users won't have it for many years. So you can either target a small subset of the potential users, only use RTX for small eye candy and not the main rendering, or write a fallback for everything. None of these seem ideal. Having it as an easy replacement functionality for envmaps etc in UE4 might help with this a lot tho.

For ML you target people with actual money so buying a few GPUs might not be a problem -- I haven't followed closely enough to know if the bubble has completely burst by now.
added on the 2018-08-27 15:11:46 by msqrt
AFAIK the tensor cores were first introduced with the Titan V card and can also be found in the latest Nvidia Quadro pro series, so this is indeed the first time tensor cores are available on a consumer card. I don't know if they are also used for the RT stuff or if that is done on additional cores, but that would make for three different kinds of cores on one chip, and I doubt that.
Another interesting feature is DLSS, which is a sort of anti-aliasing based on an algorithm trained with deep learning; it should improve performance and visual quality a lot while costing far less than today's (T)AA mechanisms.
At least that's what I read on teh intarwebs.
added on the 2018-08-27 15:54:49 by wysiwtf
I think RTX is the first big breakthrough since DX9; looking forward to when everyone has it.
added on the 2018-08-27 16:11:08 by ae
Never heard of compute shaders?
added on the 2018-08-27 16:11:39 by msqrt
Can they best a raytraced scene?
added on the 2018-08-27 16:15:01 by ae
They can raytrace a scene, and do pretty much anything else.
added on the 2018-08-27 16:18:31 by msqrt
Accelerated raymarching... would it help to have hardware support for that?
I mean the main loop part that goes step by step. For the rest (evaluating the distance functions), I think the SMs (which AFAIK process pixel shaders) will do the job, as they do now.
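To show what I mean, a plain C++ sketch of that split (the scene SDF here is just a sphere; in practice map() is arbitrary shader code, which I guess is exactly what makes it hard to pull into fixed function):

    // Sphere tracing: a generic "main loop" stepping along the ray, plus an
    // arbitrary distance function. Toy example, CPU-side, for illustration only.
    #include <cstdio>
    #include <cmath>

    struct Vec3 { float x, y, z; };
    static float length3(Vec3 v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

    // "The rest": the scene's signed distance function. Here just a unit sphere
    // at the origin, but in a real shader this is where almost all the time goes.
    static float map(Vec3 p) { return length3(p) - 1.0f; }

    // "The main loop part that goes step by step": generic and scene-independent.
    static bool march(Vec3 ro, Vec3 rd, float& t)
    {
        t = 0.0f;
        for (int i = 0; i < 128; ++i) {
            Vec3 p = { ro.x + rd.x*t, ro.y + rd.y*t, ro.z + rd.z*t };
            float d = map(p);             // the distance evaluation dominates the cost
            if (d < 1e-4f) return true;   // close enough: count it as a hit
            t += d;                       // safe step: we cannot skip past a surface
            if (t > 100.0f) break;        // ray escaped the scene
        }
        return false;
    }

    int main()
    {
        float t = 0.0f;
        bool hit = march({0,0,-3}, {0,0,1}, t);
        std::printf("hit=%d t=%f\n", hit, t);   // expected: hit=1, t close to 2
    }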
added on the 2018-08-27 19:50:21 by Tigrou
Isn't hardware-accelerated raytracing many times faster than doing it in software with compute shaders? I don't get why everyone is so upset about RTX; it looks like a win for everyone, unless someone just hates raytracing. If it can be improved over time like most other things, it will become faster and faster down the line and possibly overtake everything else.
added on the 2018-08-27 21:17:21 by ae
I think people have been saying that about raytracing for decades. :)
added on the 2018-08-27 21:30:57 by fizzer
i001, yes, the tracing part can be many times faster, but as I pointed out on the previous page, earlier methods (i.e. this) show that with current GPUs the slowest part is not the tracing logic itself but simply reading the nodes/triangles from memory. I haven't seen any independent numbers for RTX and I'll be very surprised if it surpasses software solutions by even a 2x difference. There are other factors (being able to trace and compute simultaneously, and saving memory bandwidth by not needing wavefront approaches since there's no extra register pressure from doing traces and other compute in the same kernel), so we'll see how it turns out in the end.
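For the curious, the software traversal inner loop is roughly the following (a hand-built toy BVH in plain C++, purely to illustrate; real traversal does near/far child ordering and runs ray-triangle tests in the leaves). The point is that each iteration starts with a dependent node read from memory rather than heavy math:

    // Toy stack-based BVH traversal. Note that every iteration begins with a
    // dependent read of a node from memory: that is the bandwidth/latency cost.
    #include <cstdio>
    #include <algorithm>

    struct Aabb { float lo[3], hi[3]; };
    struct Node {
        Aabb box;
        int  left, right;   // child indices, or -1 for a leaf
        int  prim;          // primitive index if leaf (placeholder here)
    };

    // Standard slab test: does the ray o + t*d, t in [0, tmax], hit the box?
    static bool hitAabb(const Aabb& b, const float o[3], const float inv[3], float tmax)
    {
        float t0 = 0.0f, t1 = tmax;
        for (int a = 0; a < 3; ++a) {
            float ta = (b.lo[a] - o[a]) * inv[a];
            float tb = (b.hi[a] - o[a]) * inv[a];
            if (ta > tb) std::swap(ta, tb);
            t0 = std::max(t0, ta);
            t1 = std::min(t1, tb);
            if (t0 > t1) return false;
        }
        return true;
    }

    int main()
    {
        // Hand-built 3-node BVH: node 0 is the root, nodes 1 and 2 are leaves.
        Node bvh[3] = {
            { {{-2,-2,-2},{ 2,2,2}},     1,  2, -1 },   // root
            { {{-2,-2,-2},{-0.5f,2,2}}, -1, -1,  0 },   // left leaf (prim 0)
            { {{ 0.5f,-2,-2},{ 2,2,2}}, -1, -1,  1 },   // right leaf (prim 1)
        };
        // Direction chosen so no component is zero, to keep the toy slab test simple.
        float o[3]   = {-1.25f, 0.0f, -5.0f};
        float d[3]   = { 0.05f, 0.05f, 1.0f};
        float inv[3] = { 1.0f/d[0], 1.0f/d[1], 1.0f/d[2] };

        int stack[32], sp = 0;
        stack[sp++] = 0;
        while (sp > 0) {
            const Node& n = bvh[stack[--sp]];            // <- the dependent memory read
            if (!hitAabb(n.box, o, inv, 100.0f)) continue;
            if (n.left < 0) {
                // A real tracer would run ray-triangle tests on the leaf's primitives here.
                std::printf("reached leaf with prim %d\n", n.prim);
            } else {
                stack[sp++] = n.left;                    // near/far ordering omitted
                stack[sp++] = n.right;
            }
        }
    }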

My main dislikes about this are that it's a vendor-specific thing and that their advertised numbers seem funky. Otherwise yeah, ray tracing is the way to go.

Tigrou, the distance evaluation part is by far the most expensive in ray marching and it has to be generic code to be flexible enough, so what exactly would you want to accelerate?
added on the 2018-08-27 21:55:56 by msqrt
Tigrou: nope

msqrt: the API is not vendor-specific: DXR, as part of DirectX, could be implemented by any hardware vendor, same with the upcoming Vulkan support. It's not one of those GameWorks things; it is actually part of DirectX/Vulkan. Of course, at the moment the only hardware is single-vendor. The paper you link was written by three people all working at Nvidia (who have tons more raytracing research published), so I think it's safe to assume that someone at Nvidia has been thinking hard about the hardware implications you mention. But yeah, I agree the published perf data is not exactly comprehensive. First people will have the hardware in their hands at the end of September, IIRC.

fizzer: I remember a very nice SIGGRAPH talk a few years back titled "raytracing is the future and ever will be" :)
added on the 2018-08-27 22:33:38 by cupe
Yes, the HW being NV-only is what I worry about. I'd guess AMD and Intel are already working on implementations, at least software ones. Let's just hope the performance isn't too bad (AMD hasn't released much research on ray tracing, might be a lot of catching up to do) so this would actually be widely usable.
added on the 2018-08-27 22:52:24 by msqrt
Dunno about research papers but AMD has actively been working on their OpenRays / RadeonRays technology for years.
added on the 2018-08-28 08:24:22 by MuffinHop
Yes, but the only perf numbers I've seen look pretty rough and there seems to be no info on how the newest cards would perform, so I'm expecting it's not too good either.
added on the 2018-08-28 13:02:55 by msqrt
It seems people don't understand too well how Nvidia are handling this. Rough description as I understand it (may be wrong in places):

1. Rasterise scene same as it's done now
2. Trace from the rasterised surfaces for lighting / shadows / reflections. For matte surfaces this is done at 1 sample per ray (i.e. terrible quality), possibly at lower resolution too. For shiny surfaces 1 ray is enough anyway.
3. Hand over to the tensor cores, where a machine learning algorithm does noise reduction. Think something like temporal AA where it's getting data from several frames plus general noise reduction, but using ML to get better results.

So when it's used, it'll be using the standard compute / rasterising cores, the RT cores and the tensor cores together. And because it's basically tracing at terrible quality levels, perf on that side doesn't need to be that high - the ML stuff will clean it up and upscale.
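A heavily dumbed-down, CPU-only sketch of that flow (toy scene; the "denoise" here is a plain box blur standing in for whatever the actual ML filter does, and all the names are made up):

    // Toy version of the hybrid idea: (1) "rasterize" a G-buffer, (2) trace one
    // noisy shadow ray per pixel, (3) filter the result. The filter is a plain
    // box blur standing in for the ML denoiser. Illustration only.
    #include <cstdio>
    #include <cstdlib>
    #include <cmath>

    const int W = 40, H = 20;
    float shadow[H][W], filtered[H][W];

    struct V3 { float x, y, z; };
    static V3    sub(V3 a, V3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
    static float dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Does the (normalized) ray (o, d) hit the occluder sphere (center c, radius r)?
    static bool hitSphere(V3 o, V3 d, V3 c, float r)
    {
        V3 oc = sub(o, c);
        float b = dot(oc, d), cc = dot(oc, oc) - r*r;
        float disc = b*b - cc;
        return disc > 0.0f && (-b - std::sqrt(disc)) > 1e-3f;
    }

    int main()
    {
        V3 sphere = {0.0f, 1.0f, 0.0f};   // occluder floating above the floor
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                // (1) "Rasterize": pretend the G-buffer says this pixel sees floor point p.
                V3 p = { (x - W/2) * 0.1f, 0.0f, (y - H/2) * 0.1f };
                // (2) Trace ONE shadow ray toward a jittered light direction (1 spp = noisy).
                float jx = (std::rand() / (float)RAND_MAX - 0.5f) * 0.3f;
                float jz = (std::rand() / (float)RAND_MAX - 0.5f) * 0.3f;
                float len = std::sqrt(jx*jx + 1.0f + jz*jz);
                V3 toLight = { jx/len, 1.0f/len, jz/len };
                shadow[y][x] = hitSphere(p, toLight, sphere, 0.5f) ? 0.0f : 1.0f;
            }

        // (3) "Denoise": a 3x3 box blur as a stand-in for the ML reconstruction pass.
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                float sum = 0.0f; int n = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx) {
                        int yy = y + dy, xx = x + dx;
                        if (yy >= 0 && yy < H && xx >= 0 && xx < W) { sum += shadow[yy][xx]; ++n; }
                    }
                filtered[y][x] = sum / n;
            }

        // Crude ASCII dump of the filtered mask (1 = lit, 0 = shadowed).
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) std::putchar(" .:-=+*#%@"[(int)(filtered[y][x] * 9.0f)]);
            std::putchar('\n');
        }
    }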
added on the 2018-08-28 13:47:05 by psonice
@psonice, sounds like we can get this then, just looking much better? Fingers crossed =)
added on the 2018-08-28 14:28:42 by HellMood
... that’s most likely exactly what they’re doing
added on the 2018-08-28 17:01:59 by msqrt
psonice: I think (3) is not being done in real-time applications; it's all spatial/temporal denoising (bilateral filters and TAA).
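(For reference, the spatial half of that boils down to weights like these; a tiny 1D bilateral sketch in plain C++, not anyone's actual implementation, and real game denoisers are also guided by depth/normals and temporal history:)

    // Tiny 1D bilateral filter: average neighbours, but down-weight them both by
    // distance and by how different their value is, so edges stay sharp.
    #include <cstdio>
    #include <cmath>

    int main()
    {
        const int   N = 12, R = 3;                 // signal length, filter radius
        const float sigmaS = 2.0f, sigmaR = 0.3f;  // spatial and range falloff
        float in[N]  = {0.1f, 0.0f, 0.2f, 0.1f, 0.0f, 0.1f,    // noisy dark half
                        0.9f, 1.0f, 0.8f, 1.0f, 0.9f, 1.0f};   // noisy bright half
        float out[N];

        for (int i = 0; i < N; ++i) {
            float sum = 0.0f, wsum = 0.0f;
            for (int j = i - R; j <= i + R; ++j) {
                if (j < 0 || j >= N) continue;
                float ds = float(j - i);           // spatial distance
                float dr = in[j] - in[i];          // range (value) difference
                float w  = std::exp(-ds*ds / (2*sigmaS*sigmaS))
                         * std::exp(-dr*dr / (2*sigmaR*sigmaR));
                sum += w * in[j]; wsum += w;
            }
            out[i] = sum / wsum;
        }
        for (int i = 0; i < N; ++i) std::printf("%.2f ", out[i]);  // note the edge at i=6 stays sharp
        std::printf("\n");
    }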
added on the 2018-08-28 18:31:27 by smash
psonice/smash: there was a very nice keynote at HPG by Colin Barré-Brisebois of EA/SEED on what they do. Looking at his blog, it seems that talk isn't online (yet?), but from glancing over his GDC talks they seem to cover most of it. After nvidia's announcement, those talks now make more sense ;) See his blog: https://colinbarrebrisebois.com/

Also, it's not like Nvidia is doing 1, 2, and 3: the gamedevs/engine devs do that. And for 3, I agree with smash: it seems to be mostly SVGF or derivatives thereof, so no inferencing, although some of the newer demos seem to use Nvidia's DLSS. I think the jury is still out on classic vs. ML reconstruction/AA/denoising filters.
added on the 2018-08-28 19:23:11 by cupe
Honestly, I'm mostly excited to see what kinds of unintended applications can be accelerated with the RT hardware (same for the tensor cores). In any case, can't complain about more/novel hardware :)

Hopefully they can find some way to get higher (real) raytrace throughput on future hardware without relying on die shrinks, tho...
added on the 2018-08-29 06:34:28 by shuffle2
