D3D11 abstract those darn resource views
category: code [glöplog]
Meh... Seems like I've become a D3D11 fan over one weekend. But there is one thing that bugs me, those utterly annoying resource views.
How do you handle them? On demand, i.e. create them when needed and discard them afterwards? Sounds like abysmal performance... Store in some fat wrapper class? That is, a union of all resource types and pointers to all the views?
I have a material class that stores all the required D3D11-resources for a given material, such as shaders and textures.
It doesn't store the actual textures, but only the views.
So basically the views are created when a material uses them (they go via a cache, so they are re-used for shared materials, using refcounting), and discarded when nobody uses them anymore.
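Roughly what such a cache can look like (a sketch, not Scali's actual code; ViewCache and the garbage-collection scheme are made up, and it assumes the default view descriptor is good enough):

```cpp
// Sketch of a refcounted view cache; assumes the default SRV descriptor
// (nullptr) is sufficient. Adjust if you need typed/array/mip-range views.
#include <d3d11.h>
#include <unordered_map>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

class ViewCache   // hypothetical helper, not from the thread
{
public:
    explicit ViewCache(ID3D11Device* device) : m_device(device) {}

    // Returns a shared SRV for the given texture; creates it on first use.
    ComPtr<ID3D11ShaderResourceView> GetSRV(ID3D11Resource* texture)
    {
        auto it = m_srvs.find(texture);
        if (it != m_srvs.end())
            return it->second;

        ComPtr<ID3D11ShaderResourceView> srv;
        m_device->CreateShaderResourceView(texture, nullptr, &srv);
        m_srvs[texture] = srv;   // the cache keeps one COM reference
        return srv;              // each material keeps its own reference
    }

    // Drop entries that only the cache itself still references.
    void CollectGarbage()
    {
        for (auto it = m_srvs.begin(); it != m_srvs.end(); )
        {
            ID3D11ShaderResourceView* raw = it->second.Get();
            raw->AddRef();
            ULONG refs = raw->Release();   // count incl. the cache's reference
            // (the returned count is only informational, but fine for a sweep)
            it = (refs <= 1) ? m_srvs.erase(it) : ++it;
        }
    }

private:
    ID3D11Device* m_device;
    std::unordered_map<ID3D11Resource*, ComPtr<ID3D11ShaderResourceView>> m_srvs;
};
```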
Quote:
Store in some fat wrapper class? That is, a union of all resource types and pointers to all the views?
That's the way I'm doing this. I create the resource together with all views. This way, I'll always have all I need, wherever and whenever I need it.
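For reference, a minimal sketch of that kind of wrapper (the names and the SRV+RTV combination are my own assumptions; extend with a DSV or UAV as needed):

```cpp
// Sketch: create a 2D texture together with the views it will ever need.
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

struct RenderTexture   // hypothetical "fat wrapper"
{
    ComPtr<ID3D11Texture2D>          texture;
    ComPtr<ID3D11ShaderResourceView> srv;   // sample it in shaders
    ComPtr<ID3D11RenderTargetView>   rtv;   // render into it

    static RenderTexture Create(ID3D11Device* device, UINT width, UINT height)
    {
        RenderTexture rt;

        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width            = width;
        desc.Height           = height;
        desc.MipLevels        = 1;
        desc.ArraySize        = 1;
        desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.Usage            = D3D11_USAGE_DEFAULT;
        desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;

        device->CreateTexture2D(&desc, nullptr, &rt.texture);
        // Default view descs (nullptr) are fine for a plain non-array, non-MSAA texture.
        device->CreateShaderResourceView(rt.texture.Get(), nullptr, &rt.srv);
        device->CreateRenderTargetView(rt.texture.Get(), nullptr, &rt.rtv);
        return rt;
    }
};
```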
I have a bunch of thin wrappers (subclasses of a D3D render device style thing) that include the view or views that I use. So regardless of who owns the resource, the object that's passed around has got what you need. This is fine for most (demo) purposes really.
However if we're talking about a more professional/performance-oriented renderer I'd go with what Scali offered and be smart about where and when you actually need that particular object.
EvilOne, you just scratched the surface of the many "interesting" problems you will face when going d3d11.. Let's see if you are still a fan in a couple of weeks ;)
Good luck!
Nothing a few strategically placed hashtables can't fix :)
true :) this is a useful approach for blend/raster/bla state objects and also input layout management. you can find some useful tips in my NVscene 2014 talk ..
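A sketch of such a cache for blend states (names are made up; the same pattern works for rasterizer and depth-stencil states). It just hashes the bytes of the desc struct, which only works reliably if the desc is zero-initialized so padding bytes are deterministic, and a real implementation should compare descs on hash collision:

```cpp
// Sketch: cache blend states keyed by a hash of the zero-initialized desc.
#include <d3d11.h>
#include <cstdint>
#include <unordered_map>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

static uint32_t HashBytes(const void* data, size_t size)   // FNV-1a, 32-bit
{
    const uint8_t* p = static_cast<const uint8_t*>(data);
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < size; ++i) { h ^= p[i]; h *= 16777619u; }
    return h;
}

class BlendStateCache   // hypothetical helper
{
public:
    explicit BlendStateCache(ID3D11Device* device) : m_device(device) {}

    // desc must come from a zero-initialized struct ("= {}") so padding bytes
    // hash consistently. On a collision this returns the wrong state; a real
    // implementation would also store and compare the desc itself.
    ID3D11BlendState* Get(const D3D11_BLEND_DESC& desc)
    {
        const uint32_t key = HashBytes(&desc, sizeof(desc));
        auto it = m_states.find(key);
        if (it != m_states.end())
            return it->second.Get();

        ComPtr<ID3D11BlendState> state;
        m_device->CreateBlendState(&desc, &state);
        m_states[key] = state;
        return m_states[key].Get();
    }

private:
    ID3D11Device* m_device;
    std::unordered_map<uint32_t, ComPtr<ID3D11BlendState>> m_states;
};
```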
Another tip: the tight coupling between input layouts and vertex shader blobs is really annoying. you have 3 options:
1) just restrict yourself to a single vertex format for all your materials / mesh shaders.
2) use the hashtable cache approach as described by scali (hash the input layout structure contents to a uint32 for example) (see the sketch after this list)
3) avoid the issue altogether and simply don't use vertex buffers at all! just put your vertex data in a texture and use a texture fetch in the vertex shader. that's what the hardware does internally nowadays anyway.
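A sketch of option 2 (names are made up; spike suggests hashing down to a uint32, this version uses a string key to sidestep collision handling): the cache keys the layout on the element descriptors, so a given vertex format only needs to be created and validated once against one compatible shader blob.

```cpp
// Sketch: cache input layouts keyed by the element descriptors, so a vertex
// format is only created/validated once per compatible shader signature.
#include <d3d11.h>
#include <string>
#include <unordered_map>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

class InputLayoutCache   // hypothetical helper
{
public:
    explicit InputLayoutCache(ID3D11Device* device) : m_device(device) {}

    // CreateInputLayout still needs *some* vertex shader bytecode, but any
    // shader whose input signature matches the layout will do.
    ID3D11InputLayout* Get(const D3D11_INPUT_ELEMENT_DESC* elems, UINT count,
                           const void* vsBytecode, SIZE_T vsLength)
    {
        // Build the key from the fields that define the layout; don't hash the
        // raw structs, because SemanticName is a pointer. (Add InputSlotClass /
        // InstanceDataStepRate to the key if you use instancing.)
        std::string key;
        for (UINT i = 0; i < count; ++i)
        {
            key += elems[i].SemanticName;
            key += '|';
            key += std::to_string(elems[i].SemanticIndex) + '|';
            key += std::to_string(static_cast<int>(elems[i].Format)) + '|';
            key += std::to_string(elems[i].InputSlot) + '|';
            key += std::to_string(elems[i].AlignedByteOffset) + ';';
        }

        auto it = m_layouts.find(key);
        if (it != m_layouts.end())
            return it->second.Get();

        ComPtr<ID3D11InputLayout> layout;
        m_device->CreateInputLayout(elems, count, vsBytecode, vsLength, &layout);
        m_layouts[key] = layout;
        return m_layouts[key].Get();
    }

private:
    ID3D11Device* m_device;
    std::unordered_map<std::string, ComPtr<ID3D11InputLayout>> m_layouts;
};
```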
Still a fan... I'm going the easy route and cramming anything and everything into a single class, including the resource views. And if I need a special view I have a simple ResourceView class. Not the best solution, but it works.
Hey spike, that pdf was a nice read.
While we are at it... I'm currently wrapping the constant buffers. Can I expect the driver to do buffer renaming when I reuse the same size-class buffer ten times in a row?
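For reference, the pattern that allows renaming is a D3D11_USAGE_DYNAMIC buffer updated via Map with D3D11_MAP_WRITE_DISCARD; the DISCARD flag tells the driver the old contents are dead, so it can hand back fresh memory instead of stalling while the GPU still reads the previous version. A minimal sketch (the constant layout is made up):

```cpp
// Sketch: a dynamic constant buffer updated with MAP_WRITE_DISCARD, which is
// what lets the driver rename (re-allocate) the buffer behind the scenes.
#include <d3d11.h>
#include <cstring>

struct PerObjectConstants        // hypothetical layout, multiple of 16 bytes
{
    float world[16];
    float tint[4];
};

ID3D11Buffer* CreatePerObjectCB(ID3D11Device* device)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth      = sizeof(PerObjectConstants);   // must be a multiple of 16
    desc.Usage          = D3D11_USAGE_DYNAMIC;
    desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

    ID3D11Buffer* cb = nullptr;
    device->CreateBuffer(&desc, nullptr, &cb);
    return cb;
}

void UpdatePerObjectCB(ID3D11DeviceContext* ctx, ID3D11Buffer* cb,
                       const PerObjectConstants& data)
{
    D3D11_MAPPED_SUBRESOURCE mapped;
    // DISCARD = "I don't care about the old contents": the driver may give us
    // a fresh chunk of memory even if the GPU hasn't finished with the
    // previous one, so updating the same buffer many times per frame is fine.
    if (SUCCEEDED(ctx->Map(cb, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        std::memcpy(mapped.pData, &data, sizeof(data));
        ctx->Unmap(cb, 0);
    }
}
```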
glad i could help a bit.
Regarding constant buffer usage: I think we had some discussion about that in another thread some time ago, iirc.
anyways, here's a good read: Constant Buffers without constant pain (NVidia developer site)
IIRC the big difference between D3D10+ and other APIs (D3D8/9 and OpenGL) is that the constant buffers are persistent between shaders.
So you can use them for storing global data.
(Note also that the same constant buffer can be used for multiple parts of the pipeline, so you can share them between vertex and pixel shaders for example).
One approach is to order your data by frequency of updates... E.g. rarely updated, updated once per frame, updated for each material change, updated for each object change.
That way you can make the most of the persistent state.
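A sketch of that split (slot numbers and names are my own convention, not a rule): one buffer per update frequency, bound once to both the vertex and pixel shader stages and then only re-uploaded when its data actually changes.

```cpp
// Sketch: bind constant buffers by update frequency to both VS and PS.
// In D3D11 the bindings persist across shader changes, so you only rebind
// when the buffer object itself changes, not on every draw.
#include <d3d11.h>

// Hypothetical slot convention (matches register(b0..b2) in the shaders):
enum CBSlot
{
    CB_PER_FRAME    = 0,   // view/projection, time, lights: once per frame
    CB_PER_MATERIAL = 1,   // material parameters: on material change
    CB_PER_OBJECT   = 2,   // world matrix etc.: on every object
};

void BindFrequencyBuffers(ID3D11DeviceContext* ctx,
                          ID3D11Buffer* perFrame,
                          ID3D11Buffer* perMaterial,
                          ID3D11Buffer* perObject)
{
    ID3D11Buffer* buffers[3] = { perFrame, perMaterial, perObject };

    // The same buffers are visible to both stages, so the data is uploaded
    // once and shared instead of duplicating "VS constants" and "PS constants".
    ctx->VSSetConstantBuffers(0, 3, buffers);
    ctx->PSSetConstantBuffers(0, 3, buffers);
}
```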
in theory, persisting buffers and ordering by update frequency seems to make sense, yes.
on the other hand, iirc, a lot of people say that in practice it doesn't make a big difference..
(and esp. on new APIs like DX12 and Vulkan there are a lot of suggestions out there to just upload the whole constant data every time..)
Yes, I guess in practice you need to find a certain balance.
Too many different constant buffers won't be too efficient either (they tie up resources inside the driver/GPU, even if you don't update them... so the driver has to work around these when you update other buffers).
And there's a certain overhead involved with mapping/unmapping to update a buffer and upload it to the GPU registers, so a certain amount of data is 'free' to update anyway.
I guess this is a typical scenario where you can't find a single optimal solution, but it depends very much on what kind of scenes you are rendering, what GPU and what driver you are using.
In practice I generally just use 1 or 2 constant buffers anyway.