pouët.net

Question about fragment programs and floating/16bit format

category: general [glöplog]
 
Hi,

I've been using opengl with hlsl, trying to work out how to sample from a 16/32-bit format sampler. I set up my texture with glTexImage2D and GL_UNSIGNED_SHORT or GL_FLOAT.

Then I try float c=tex2D (sample,UV).x;


How is my value represented? They seem to be packed into the 0..1 range, but I'm still not sure. Any ideas?
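
For reference, my setup is roughly this (a sketch from memory, names made up, not the exact code):

// 16-bit integer data:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16, xres, yres, 0, GL_RGBA, GL_UNSIGNED_SHORT, data16 );

// or 32-bit float data:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, xres, yres, 0, GL_RGBA, GL_FLOAT, dataf );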
added on the 2009-02-04 13:04:02 by Navis
Hah. I can perfectly see how such a thread would go:

Navis: GL_UNSIGNED_SHORT is normalized to [0..1] range, yes. I don't know about floating point textures.
added on the 2009-02-04 13:17:06 by kusmabite
I don't think hlsl will work in opengl ;)
Integer formats are normalized to [0..1] range, while float textures have their full value.
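
Roughly like this in glsl (just a sketch, sampler names made up):

uniform sampler2D intTex;   // e.g. GL_RGBA16 internal format
uniform sampler2D floatTex; // e.g. GL_RGBA32F_ARB internal format

void main()
{
    float a = texture2D( intTex,   gl_TexCoord[0].xy ).x; // always lands in [0..1]
    float b = texture2D( floatTex, gl_TexCoord[0].xy ).x; // raw value, can be >1.0 or negative
    gl_FragColor = vec4( a, b, 0.0, 1.0 );
}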
added on the 2009-02-04 13:20:06 by Psycho
thanks, I meant glsl (through cg), sorry.

So, do the integer formats retain all significant bits after normalization?
added on the 2009-02-04 13:22:13 by Navis
I don't think it's a generic question; I think it's up to the driver, and thus the implementation, to decide what to do with the significance issue.
added on the 2009-02-04 14:15:38 by Decipher
I'm using glsl with both float32 and float16.

glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, xres, yres, 0, GL_RGBA, GL_FLOAT, buffer );

or

glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, xres, yres, 0, GL_RGBA, GL_FLOAT, buffer );

Buffer can be NULL for dynamic textures, or just a buffer of regular floats if you want to copy some data to them. When sampling the texture in glsl you get a regular floating point value as expected. I find it quite nice, you don't have to worry about normalizations. For GL_UNSIGNED_SHORT-ed textures I don't know, I never used them. I would go for GL_FLOAT even in F16, it's very easy that way and you can represent any float number (bigger than 1.0 or whatever). I think the latest opengl introduced the GL_HALF_FLOAT external format flag, but I didn't try it yet...
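
If you wanted to feed it halves directly it would be something like this, I suppose (untested sketch, assumes ARB_half_float_pixel is present; half_buffer would be your 16-bit half data):

glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, xres, yres, 0, GL_RGBA, GL_HALF_FLOAT_ARB, half_buffer );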
added on the 2009-02-04 14:22:28 by iq

sometimes reading pouet makes me feel incredibly stupid. i think i only understood 4 or 5 separate words from all of the above text :(
added on the 2009-02-04 14:27:40 by xrs
Navis: Yes. But make sure you have an internal-format that does.
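
You can query what the driver actually gave you (quick sketch, assumes the texture is currently bound):

GLint bits = 0;
glGetTexLevelParameteriv( GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &bits );
// bits should be 16 for something like GL_RGBA16; some drivers silently fall back to 8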
added on the 2009-02-04 14:30:54 by kusmabite
iq: right, I see. So the trick is to use GL_RGBA32F_ARB instead of GL_FLOAT? (haven't checked if they are the same thing yet tho)
added on the 2009-02-04 14:37:49 by Navis
xrs: haha, cheer up. Programming languages are designed to be as functional and understandable as possible (ok, at least they should be), whereas natural languages are riddled with incoherency and pitfalls. It's much more impressive to witness a native Chinese person speak English than for anyone to speak HLSL :)
added on the 2009-02-04 14:38:16 by Hyde
Navis: be careful which 16bit format you use, nvidia and ati support different things. I wrote some stuff before that worked fine on my ati card but gave a lovely black screen on nvidia. From what I remember, it's something like both companies support the 16bit half-float format (RGBA16h) but only ati supports the 16bit integer version (RGBA16). This was on osx, so possibly it's a driver thing that doesn't affect windows..

Also, I think both float + int textures get normalised to 0..1, but you're free to do what you want with it inside the shader. The catch though is how you output it.. I've always normalised to 0..1, but I understand you can use full range and pass it to another shader or whatever BUT it's implementation specific, probably varies from vendor to vendor, and generally best avoided.
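
If you want to be safe about vendor support, check the extension string before picking a format (rough sketch, needs string.h for strstr):

const char *ext = (const char *)glGetString( GL_EXTENSIONS );
if( ext && strstr( ext, "GL_ARB_texture_float" ) )
{
    // GL_RGBA16F_ARB / GL_RGBA32F_ARB should be available here
}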
added on the 2009-02-04 14:42:58 by psonice
hm..., GL_RGBA32F_ARB or GL_RGBA16F_ARB are used to select the internal format of the texture in video memory. GL_FLOAT, GL_UNSIGNED_SHORT, GL_UNSIGNED_BYTE, GL_HALF_FLOAT (the external formats) are used to tell the driver in which format you are sending the data from the CPU, in case you want to fill the texture with some content from the CPU; they don't have any effect at shading time (they are not even used if you are doing buffer==NULL).
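
i.e. something like this (sketch, byte_buffer is made up):

// stored as 32-bit floats internally, but uploaded from 8-bit bytes -- the driver converts:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, xres, yres, 0, GL_RGBA, GL_UNSIGNED_BYTE, byte_buffer );

// no upload at all -- the external format/type are ignored in this case:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, xres, yres, 0, GL_RGBA, GL_FLOAT, NULL );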
added on the 2009-02-04 14:45:22 by iq
I'm basically only interested in reading the 16bit (8bit output). So would that be:

int res = 65535 * tex2D( sampl, UV ).x;
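
or maybe with explicit rounding (untested):

float v = tex2D( sampl, UV ).x;        // should be in [0..1] for normalized 16bit formats
int res = int( v * 65535.0 + 0.5 );    // round instead of truncate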

added on the 2009-02-04 14:45:55 by Navis
Navis
omg, we got a clairvoyant among us.
added on the 2009-02-04 16:07:35 by xTr1m
I pressed F5 by mistake (or some other random combination of keys that resent the question).
added on the 2009-02-04 16:09:16 by Navis
