Question about fragment programs and floating/16bit format
category: general [glöplog]
Hi,
I've been using opengl with hlsl, trying to work out how to sample a 16/32 bit format sampler. I set up my texture with glTexImage2D and GL_UNSIGNED_SHORT or GL_FLOAT.
Then I try float c = tex2D(sample, UV).x;
How is my value represented? They seem to be packed into the 0..1 range, but I'm still not sure. Any ideas?
Navis: GL_UNSIGNED_SHORT is normalized to [0..1] range, yes. I don't know about floating point textures.
I don't think hlsl will work in opengl ;)
Integer formats are normalized to [0..1] range, while float textures have their full value.
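For example, in glsl it'd look something like this (untested sketch, "tex" is just a placeholder sampler name):
uniform sampler2D tex;
void main()
{
    float v = texture2D( tex, gl_TexCoord[0].xy ).x; // GL_RGBA16 & co: v comes back normalized to [0..1]
                                                     // GL_RGBA32F_ARB & co: v is the raw float you uploaded
    gl_FragColor = vec4( v );
}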
Thanks, I mean glsl (through cg), sorry.
So, do the integer formats retain all significant bits after normalization?
I don't think there's a generic answer; I think it's up to the driver, and thus the implementation, to decide what to do with the significance issue.
I'm using glsl with both float32 and float16.
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, xres, yres, 0, GL_RGBA, GL_FLOAT, buffer );
or
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, xres, yres, 0, GL_RGBA, GL_FLOAT, buffer );
Buffer can be null for dynamic textures, or just a buffer of regular floats if you want to copy some data to them. When sampling the texture in glsl you get a regular floating point value as expected. I find it quite nice, you don't have to worry about normalizations. For GL_UNSIGNED_SHORT-ed textures I don't know, I never used them. I would go for GL_FLOAT even in F16, it's very easy that way and you can represent any float number (bigger than 1.0 or whatever). I think the latest opengl introduced the GL_HALF external format flag, but I didn't try it yet...
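The full setup is roughly like this (a sketch from memory, untested; GL_NEAREST because linear filtering of float textures isn't supported on every card):
GLuint tex;
glGenTextures( 1, &tex );
glBindTexture( GL_TEXTURE_2D, tex );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, xres, yres, 0, GL_RGBA, GL_FLOAT, buffer );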
sometimes reading pouet makes me feel incredibly stupid. i think i only understood 4 or 5 separate words from all of the above text :(
Navis: Yes. But make sure you have an internal-format that does.
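Something like this is what I mean (sketch, untested):
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16, xres, yres, 0, GL_RGBA, GL_UNSIGNED_SHORT, buffer ); // 16 bits per channel internally, all 16 bits survive the normalization
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, xres, yres, 0, GL_RGBA, GL_UNSIGNED_SHORT, buffer ); // only 8 bits per channel internally, the low bits of your shorts are gone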
iq: right, I see. So the trick is to use GL_RGBA32F_ARB instead of GL_FLOAT? (haven't checked if they are the same thing yet tho)
xrs: haha, cheer up. Programming languages are designed to be as functional and understandable as possible (ok, at least they should be) whereas natural languages are riddled with incoherency and pitfalls. It's much more impressive to witness a native chinese person speak English than for anyone to speak HLSL :)
Navis: be careful which 16bit format you use, nvidia and ati support different things. I wrote some stuff before that worked fine on my ati card but gave a lovely black screen on nvidia. From what I remember, it's something like both companies support the 16bit half-float format (RGBA16h) but only ati supports the 16bit integer version (RGBA16). This was on osx, so possibly it's a driver thing that doesn't affect windows..
Also, I think both float + int textures get normalised to 0..1, but you're free to do what you want with it inside the shader. The catch though is how you output it.. I've always normalised to 0..1, but I understand you can use full range and pass it to another shader or whatever BUT it's implementation specific, probably varies from vendor to vendor, and generally best avoided.
hm..., GL_RGBA32F_ARB or GL_RGBA16F_ARB are used to select the internal format of the texture in video memory. GL_FLOAT, GL_UNSIGNED_SHORT, GL_UNSIGNED_BYTE, GL_HALF (the external formats) are used to tell the driver in which format you are sending the data from the CPU, in case you want to fill the texture with some content from the CPU; they don't have any effect at shading time (they are not even used if buffer==NULL).
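To make the split concrete (hedged sketch, floatBuffer is just a placeholder name):
// internal format = float16 in video memory, external format = regular floats coming from the CPU:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, xres, yres, 0, GL_RGBA, GL_FLOAT, floatBuffer );
// same internal format, but no CPU data at all (e.g. a render target), so the external format is ignored:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, xres, yres, 0, GL_RGBA, GL_FLOAT, NULL );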
I'm basically only interested in reading the 16bit (8bit output). So would that be then:
int res = 65535 * tex2D(sampl, UV).x;