Strange behaviour of ATI 5xxx with regards to multisampling
category: general [glöplog]
Hi all,
I have a strange problem with my new ATI 5xxx which I can't fix.
I use FBOs like this:
glGenRenderbuffersEXT(1, &FBOId2[num]);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, FBOId2[num]);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, multisample, GL_RGBA8, sizex, sizey);
glGenRenderbuffersEXT(1, &FBODepthBuffer[num]);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, FBODepthBuffer[num]);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, multisample, GL_DEPTH_COMPONENT32, sizex, sizey);
and everything works fine when multisample is 0. As soon as I set multisample > 0 (2, 4 etc.) I get weird artifacts on anything involving depth buffer calculations (shadow mapping etc.). I can describe the artifacts as some sort of pixelization. The higher the multisample count, the worse the effect (with multisample == 8 I just see 4x4 pixel blocks). I have also tried this with GL_DEPTH_COMPONENT24 and 16; it is the same.
This is not an issue with the ATI 4xxx or my 8-series nvidia. I tried to multisample just the RGBA8 renderbuffer (so that the depth buffer has no multisampling) but it doesn't work (won't show anything on screen).
Any ideas? It is very weird that the behaviour is so different to the old series ATIs.
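For what it's worth, here is a rough sketch of the attach + completeness check for that setup (fboId and the attach calls are assumptions; the storage calls are the ones from the snippet above). One thing the EXT_framebuffer_multisample spec requires is that all attachments of an FBO use the same sample count; if only the RGBA8 renderbuffer is multisampled the FBO goes incomplete (GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE_EXT), which might be why that variant shows nothing on screen.
// sketch: attach both renderbuffers to an FBO and verify completeness
// (FBOId2, FBODepthBuffer, num, multisample are assumed to be set up as above)
GLuint fboId;
glGenFramebuffersEXT(1, &fboId);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_RENDERBUFFER_EXT, FBOId2[num]);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, FBODepthBuffer[num]);
// both renderbuffers must have been allocated with the same 'multisample' value
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
    printf("FBO incomplete: 0x%x\n", status);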
And this is the depth map that I get with multisampling = 0 and = 8. No other processing is involved in between.
My machine does not get through to asd.gr
Looks like an issue with the depth-buffer compression. Have you tried allocating a stencil buffer as well? AMD have previously packed depth/stencil together, so that could potentially be a workaround.
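Something along these lines, maybe (just a sketch: GL_DEPTH24_STENCIL8_EXT comes from EXT_packed_depth_stencil, and whether it actually dodges the artifacts on the 5xxx driver is an assumption):
// workaround sketch: allocate a packed depth/stencil renderbuffer instead of
// a pure GL_DEPTH_COMPONENT32 one, and attach it to both attachment points
glGenRenderbuffersEXT(1, &FBODepthBuffer[num]);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, FBODepthBuffer[num]);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, multisample,
                                    GL_DEPTH24_STENCIL8_EXT, sizex, sizey);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, FBODepthBuffer[num]);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, FBODepthBuffer[num]);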
When I read about it, it is usually called the z/stencil buffer... I think the traditional stencil buffer is "obsolete". (Whatever happened to w-buffering? I remember the DX7 SDK sample.)
If AA works the way I think it does, it could be including an outlier (the stencil value) in the mean (of the depth samples). I dunno, I've never done 3D stuff. I need to start coding someday.
I don't allocate a stencil buffer.
Navis - no, but try to allocate a stencil buffer as well and see if anything changes. In all likelihood it's just a driver bug, but possibly you can work around it.
not that it fixes your problem but:
Quote:
It is very weird that the behaviour is so different to the old series ATIs.
actually that's quite normal with ati cards/drivers :)
looks like a depth compression / tiling problem to me, or maybe something to do with zcull? You should check the specifics on zcull, ROP tiling and compression of rendertargets in VRAM. Maybe the resolve is in the incorrect format? I can't comment on GL implementations though tbh, it's been a while since I used the GL API.
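On the resolve point: with EXT_framebuffer_blit a multisampled FBO has to be explicitly resolved into a single-sample FBO before its contents can be read or sampled. A sketch of that step, assuming a second non-multisampled FBO (resolveFbo, a placeholder name) with matching attachment formats already exists:
// resolve sketch: blit from the multisampled FBO into a single-sample one
// (msaaFbo / resolveFbo are placeholder names, both sized sizex x sizey)
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, msaaFbo);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, resolveFbo);
// depth blits require GL_NEAREST and identical depth formats on both FBOs
glBlitFramebufferEXT(0, 0, sizex, sizey, 0, 0, sizex, sizey,
                     GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT, GL_NEAREST);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
That said, a resolved (downsampled) depth buffer isn't terribly meaningful for shadow-map comparisons anyway, so rendering the shadow map pass without multisampling might be the saner route.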
navis: why the hell are you casting the shadow of a chicken??!?!??!
Mostly what gopher said. Old ATI cards sometimes generate lots of problems, even in the simplest renderings.
Defiance, I find calling HD5XXX 'old' hardcorely hi-fi.
Maali, :D
actually i forgot to draw the chin-thing that chickens have, i was already wondering 'wtf am i forgetting?' when i drew it :DDD
It's not a chicken, but good drawing anyway.
I've discovered that the size of the "tiles" is the same regardless of the size of the rendering window. It is 6x6 pixels!?!
lol i meant msqrt.
It's also mentioned in the opengl wiki.
Oh, renderbuffer versus render-to-texture, my bad :) I've been wondering what renderbuffers are good for for some time anyway, time to google ->
If it's not a chicken, then what is it?
To me it looks like a guy sniffing the nose of a very big sleeping dog...
fail...
Wtf ARE Renderbuffers anyway? Why not use an FBO with a texture?
Renderbuffers are your best choice when you're not going to read the result as a texture, because they don't have to live by the texture-mapping hardware's rules. This means that they can sometimes get a more efficient memory representation. For instance, some hardware only supports writing to 24-bit packed RGB, but requires 32-bit XRGB if it's going to be read as a texture. I think depth-buffer compression is also such a case on some hardware. At worst they are the same as a texture, so you don't really lose anything. A typical use case for this is depth/stencil buffers for scene rendering. They can also be used for some purposes that textures can't, like render-to-vertex-arrays.
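To make that concrete, a small sketch (hypothetical names, EXT-style calls to match the code earlier in the thread) of the usual combination: a texture for the colour attachment because it gets sampled in a later pass, and a renderbuffer for the depth attachment because it is only ever written during scene rendering:
// colour: render-to-texture, because the result is sampled later
GLuint colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, sizex, sizey, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, colorTex, 0);
// depth: plain renderbuffer, never read back, so the driver is free to keep
// it in whatever compressed/tiled layout it prefers
GLuint depthRb;
glGenRenderbuffersEXT(1, &depthRb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, sizex, sizey);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, depthRb);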
Quote:
Wtf ARE Renderbuffers anyway? Why not use an FBO with a texture?
They're exactly the same thing, only different.