pouët.net


AMD-ATI vs NVIDIA, the OpenGL showdown

category: general [glöplog]
Quote:
Where's the problem in sharing between windows of the same process? I've been doing that for ages (not on an ATI, though). Just use the same render context and switch between windows using wglMakeCurrent.
Afaik what you need glsharelists for is when you deal with multiple processes, each with its own render context.

Quoting from the docs for wglShareLists:
"When you create an OpenGL rendering context, it has its own display-list space. The wglShareLists function enables a rendering context to share the display-list space of another rendering context; any number of rendering contexts can share a single display-list space."
and
"You can only share display lists with rendering contexts within the same process. However, not all rendering contexts in a process can share display lists. Rendering contexts can share display lists only if they use the same implementation of OpenGL functions."

In short, the separate namespaces are simply mandated by the spec (regardless of whether you think that makes sense or not), and there's no support for cross-process sharing.
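
To make that concrete, here's roughly what the in-process options look like in WGL terms - just a sketch, with hdc1/hdc2 standing in for the DCs of two windows that were set up with the same pixel format, and error handling mostly omitted:

    #include <windows.h>
    #include <GL/gl.h>

    // Two windows in one process sharing an object namespace.
    static void setupSharedContexts(HDC hdc1, HDC hdc2)
    {
        HGLRC rc1 = wglCreateContext(hdc1);
        HGLRC rc2 = wglCreateContext(hdc2);

        // Share rc1's display-list/texture space into rc2.  Best done before
        // rc2 has created any objects of its own; it can fail if the two
        // contexts are backed by different GL implementations.
        if (!wglShareLists(rc1, rc2))
            MessageBoxA(NULL, "wglShareLists failed", "oops", MB_OK);

        // The simpler option from the quote above: one context, re-bound per
        // window (works as long as both windows use a compatible pixel format).
        wglMakeCurrent(hdc1, rc1);
        // ... draw into window 1 ...
        wglMakeCurrent(hdc2, rc1);
        // ... draw into window 2, same namespace by definition ...
    }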

There's a general pattern there - aside from the usual driver bugs (and yeah, ATI definitely has more OGL-related bugs than NV does), NV drivers are usually pretty lenient in what they accept whereas ATI, Intel, 3Dlabs etc. try to follow the spec to the letter. Which is a pain if your code ends up being unintentionally broken without you noticing since "it works just fine here".
added on the 2008-07-29 18:01:54 by ryg
all you say makes me think that the radeon HD 4850 i raised a question about lately is definitely not a bad choice as a cheap gpu for 3d experiments

if it works with ati's glsl, it works with nvidia --> can i take that as a rule of thumb ?
ATI drivers are generally pickier, but especially with GLSL, you just have to test. The really stupid bugs are fixed everywhere, but nontrivial shaders are still a hit-and-miss affair on all HW. For me, that's one of the main advantages of D3D nowadays - the HLSL compiler is also kind of fickle (prepare to fall back to an older version sometimes), but it compiles to shader binary code, and the shader binary->actual GPU code translators inside drivers are orders of magnitude less complex than GLSL compilers and hence significantly more mature.
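
For reference, the D3D9 flow that comparison refers to looks roughly like this - a sketch only, with the file name, entry point and target profile as placeholders:

    #include <d3dx9.h>

    // Compile HLSL to a binary token stream once, then hand only the blob
    // to the driver; it never sees the HLSL source.
    static IDirect3DPixelShader9 *loadPixelShader(IDirect3DDevice9 *device)
    {
        LPD3DXBUFFER code = NULL, errors = NULL;
        IDirect3DPixelShader9 *ps = NULL;

        if (SUCCEEDED(D3DXCompileShaderFromFileA("my.psh", NULL, NULL, "main",
                                                 "ps_3_0", 0, &code, &errors, NULL)))
        {
            // The driver-side translator only has to deal with this bytecode.
            device->CreatePixelShader((const DWORD *)code->GetBufferPointer(), &ps);
            code->Release();
        }
        else if (errors)
        {
            OutputDebugStringA((const char *)errors->GetBufferPointer());
            errors->Release();
        }
        return ps;
    }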
added on the 2008-07-29 20:17:02 by ryg
does opengl 3.0 change the approach about glsl btw ? (for instance what you mention about no shader byte code)
No idea. OpenGL ES has compiled shader code. Among other things. Actually, GL ES is basically the better GL in a lot of respects :)
added on the 2008-07-29 20:30:21 by ryg
Quote:
if it works with ati's glsl, it works with nvidia --> can i take that as a rule of thumb ?


ATI forces you to write stricter syntax, so at least you won't get compile errors on nvidia.

We do most of our shader development on nvidia, and we usually need to tweak shaders when testing them on ATI for the first time. Sometimes because of bad syntax and other times because of ATI bugs.
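
Typical example of the kind of thing that bites: a literal like "float f = 1;", which nvidia's lenient compiler lets through but a stricter GLSL compiler rejects (older GLSL has no implicit int-to-float conversion). A sketch of the compile-and-check-the-log routine - the shader string is purely illustrative, GL 2.0 entry points assumed to be loaded (GLEW or similar):

    #include <stdio.h>

    static GLuint compileFragShader(void)
    {
        // Illustrative only: the "1" is what strict compilers choke on.
        const char *src =
            "void main() {\n"
            "    float f = 1;            // strict GLSL wants 1.0 here\n"
            "    gl_FragColor = vec4(f);\n"
            "}\n";

        GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(sh, 1, &src, NULL);
        glCompileShader(sh);

        // Always read the status and log back - this is where ATI's better
        // error messages actually pay off.
        GLint ok = 0;
        glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[4096];
            glGetShaderInfoLog(sh, sizeof(log), NULL, log);
            fprintf(stderr, "shader failed:\n%s\n", log);
        }
        return sh;
    }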

On the other hand I've also had a glsl related driver crash bug only on nvidia+vista. ati+xp & nvidia+xp worked like a charm. So I guess you never can be totally sure that your code will work.

Still, I've learned to hate ATI while doing glsl stuff.
added on the 2008-07-29 21:18:30 by pommak
when i first did 3d accel stuff, i had nv hardware, but i had more problems with matrox and s3 than i did with ati cards. then i had all ati cards for a pretty long time (from 2001 on), and was seriously annoyed when i had to fix stuff to make it work on nv hardware (all using d3d though, i only used more of gl functionality than basic gl 1.2 from 2005 on or so). now i have a nv card again (gf8600gt), but i like to think i'm past vendor-related whining. you have to test it everywhere anyway.

of course, that is me now, without any problem of that kind to solve. just wait for me to bitch and curse loudly if i actually have to debug such crap :)
added on the 2008-07-29 22:09:54 by ryg
Quote:
does opengl 3.0 change the approach about glsl btw ? (for instance what you mention about no shader byte code)


sometimes it seems to me that the opengl 3 designers will never agree to implement loading of (pre)compiled shaders (to an intermediate, hw-neutral and easy-to-parse bytecode) even if they know it's a superb idea, just cause they don't want to admit that the microsoft guys had a good idea doing so. Like the device caps...
added on the 2008-07-30 00:49:47 by iq
iq: That's because it's not a good idea. Fictional assembly languages add fictional limitations - compiling directly from high-level code can in many cases generate better code than going through a bytecode language. At least that's what the shader-compiler team at work tells me.
added on the 2008-07-30 01:20:50 by kusma
ryg: "In short, the separate namespaces are simply mandated by the spec"
Wait...WGL... spec?! Let me quote the GL_NV_render_texture_rectangle extension spec:
"Additions to the WGL Specification

First, close your eyes and pretend that a WGL specification actually
existed. Maybe if we all concentrate hard enough, one will magically
appear.
"
added on the 2008-07-30 01:30:11 by kusma
then they have a problem if they expect games to use gl one day... who can wait for the compile time while rendering? (and another thread will not do since only one thread can own the rendering context at a time) Opengl will become just a toy like this...
added on the 2008-07-30 01:36:30 by iq
Quote:
Fictional assembly languages add fictional limitations - compiling directly from high-level code can in many cases generate better code than going through a bytecode language.

adding one feature shouldn't really imply the removal of another though.
added on the 2008-07-30 01:45:46 by Gargaj
iq: I'm not sure about what nv/at doesi, but we're perfectly able to compile shaders without noticeable performance drops. On mobile phones.

Gargaj: No, but what's the point in adding it? OpenGL ES 2.0 did the right thing and added implementation-specific binary formats right away... Now THAT's actually useful.
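
In API terms, a sketch - the binary blob, its size and the format enum are placeholders for whatever the vendor's offline compiler produced and the implementation advertises via GL_SHADER_BINARY_FORMATS:

    #include <GLES2/gl2.h>

    // Load an offline-compiled shader binary (GLES 2.0 core).
    static GLuint loadBinaryProgram(const void *binary, GLsizei size,
                                    GLenum binaryFormat)
    {
        GLuint vs = glCreateShader(GL_VERTEX_SHADER);
        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        GLuint shaders[2] = { vs, fs };

        // No glCompileShader at runtime - the heavy lifting happened offline.
        glShaderBinary(2, shaders, binaryFormat, binary, size);

        GLuint prog = glCreateProgram();
        glAttachShader(prog, vs);
        glAttachShader(prog, fs);
        glLinkProgram(prog);
        return prog;
    }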
added on the 2008-07-30 01:51:16 by kusma
That should of course be "nv/ati does"...
added on the 2008-07-30 01:51:51 by kusma
kusma: maybe that's because on mobile phones you aren't making long enough shaders to really stress the compiler. :)
(something that's pretty easy to do on pc)
added on the 2008-07-30 10:09:38 by smash
smash: Nope, it's not :)
added on the 2008-07-30 10:28:08 by kusma
Quote:
On ATI too. This is Windows-related: if a driver doesn't react or just runs a spin loop for a certain amount of time (5 seconds I think) Windows will assume either the driver or the HW has crashed and automatically BSOD. So now drivers do the same check and take care to abort and execute an internal reset before that happens, since BSODs don't look too good :).


Under Vista? In XP I can still render a single quad taking like 30s. I guess the driver can answer even though the gpu is working..


And yes, writing glsl on ati hardware is easier if you want your code to run on both. There may still be weird compiler bugs, but you won't have all the syntactical problems, and you get better error messages.
added on the 2008-07-30 17:39:39 by Psycho
Quote:
Under Vista? In XP I can still render a single quad taking like 30s. I guess the driver can answer even though the gpu is working..

Under Win2k and XP too, at least with older drivers. Guess they've worked around that now. I guess on Vista the GPU core being busy for a long time is more of a problem since the 3D HW is actually being used for the GUI (with Aero enabled anyway).
added on the 2008-07-30 21:11:04 by ryg
This kind of means that for offline rendering, one couldn't use a "very lengthy" shader?
added on the 2008-07-30 21:36:44 by _-_-__
again, it depends on the driver/os combo you're running.
added on the 2008-07-30 21:41:00 by ryg
Or split the screen into tiles.
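
Something along these lines, say - a sketch with drawFullscreenQuad(), width and height as placeholders; each scissored tile is forced to finish before the next one is queued, so no single batch runs long enough to trip the watchdog:

    // Render one expensive fullscreen pass in 64x64 scissored tiles.
    glEnable(GL_SCISSOR_TEST);
    const int TILE = 64;
    for (int y = 0; y < height; y += TILE) {
        for (int x = 0; x < width; x += TILE) {
            glScissor(x, y, TILE, TILE);
            drawFullscreenQuad();   // the heavy shader, clipped to the tile
            glFinish();             // wait for this tile before queueing the next
        }
    }
    glDisable(GL_SCISSOR_TEST);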
