serious D3D9 coding question
category: general [glöplog]
As already suggested in other topics, I did a Google search first, but I just couldn't find any helpful advice on my problem:
I have to port a hardware-accelerated graphics rendering interface from OpenGL to Direct3D in what is now quite a large collection of code, and I am stuck on setting up a suitable vertex declaration in D3D9 with 'IDirect3DDevice9::CreateVertexDeclaration' and 'IDirect3DDevice9::SetVertexDeclaration'. The vertex data not only has strides between the single vertex instances but also within each vertex, meaning that the position and normal of every vertex in fact occupy four components (16 bytes) instead of three, for alignment reasons (SSE optimization).
OpenGL had no problem with this vertex array format and rendered everything fine, but the D3D debugger tells me that this is not allowed.
My question is: is there any way to make D3D swallow this data format, or am I completely stuck here, facing a complete rewrite of the code's internal data format? (The latter would be quite a disaster... but you've probably figured that out by now ;-))
Also, the color format in a vertex declaration seems to be only a 4-byte unsigned ARGB integer and nothing else. The only integer color format OpenGL supports in the current code is unsigned int ABGR, and I found no support for ARGB uint color in OpenGL vertex arrays or ABGR uint color in D3D9... so is there really no other way than to store the color for each vertex twice, so that in future the renderer interface can be graphics-API independent?
thanks in advance,
Jan Panier
in your vertex declaration, declare e.g. the Pos as Vector3, but set the offset of the next element to be 16 bytes.
you can store the color as TEXCOORD.
You have full control over this in D3D9 using the offset parameters in the vertex declaration. Assuming that your vertex instances have a constant stride (anything else would be weird), what you refer to as stride "within the vertex data" is actually just padding.
So if your vertex format looks like this:
struct { // assuming 1-byte alignment
    uint8 px, py, pz, padding0;
    uint8 nx, ny, nz, padding1;
};
A proper vertex declaration would involve something like:
elm[0].Usage = position;
elm[0].Offset = 0;
elm[1].Usage = normal;
elm[1].Offset = 4;
and the total stride for the vertex buffer is 8.
As for the color format, as far as I can tell D3D doesn't support unsigned int32 vectors in vertex declarations. It's an odd choice of format anyway. It's not like it was going to look any better with int32 than int8... If you are going to spend 32 bits on a color component, I recommend using float instead, as you will never have any trouble finding support for them, now or in the future.
Okay, first of all, thanks for the serious answers!
Duckers:
What I meant was indeed uint8 as color components, with the uint32 describing all 4 components of the color, so there is no waste of memory there... ;-)
Duckers & pantaloon:
That padding you both describe was the first thing I tried, with the D3D debugger telling me that there is no padding allowed in D3D9. In fact I did set the position and normal offsets to the correct values (0 and 16 bytes respectively), and it did not render my vertex array at all, printing the error message mentioned above...
Quote:
What I meant was indeed uint8 as color components and the uint32 describing all 4 components of the color
Quote:
and I found no support for ARGB uint color in OpenGL vertex arrays or ABGR uint color in D3D9.
uhm.. D3DDECLTYPE_UBYTE4 ?
Color doesn't need to be sent in to the shader as usage=color; you can use whatever semantic you like. We use texcoords for everything
re. padding, can you have D3D read non-interleaved data from the same buffer?
yeah, use multiple streams for that. but interleaved data is better for the pre-transform vertex cache.
it's funny how someone plainly answers a question correctly and then more answers follow :)
Quote:
That padding you both describe there was the first thing I tried with the D3D Debugger telling me that there is no padding allowed in D3D9
That doesn't sound too answered to me.
DJSelwynGummer: yes, that's exactly why I was not satisfied with the first reply...
the "color as texcoord" or "anything as texcoord" suggestions, despite being good, were not as helpful as one might think: I have to leave the renderer shader-free.
ryg: so I understand that multiple streams are the solution to my problem. Do you have any experience regarding the performance hit compared to only one stream?
aside from the cache hit, basically none.