
Question about GLSL floating point constants

category: general [glöplog]
Is it cool to leave out periods from floating point constants in GLSL? Specifically, I noticed that my NVIDIA driver is happy with

Code:vec3(1,2,3)


while I believe the spec says that should be written as

Code:vec3(1.,2.,3.)


Furthermore, I noticed that this works both in ShaderToy (WebGL) and in OpenGL, but in OpenGL, my driver is also happy with something like

Code:void someFunc(float a) ... someFunc(1);


while this gives an error in ShaderToy. I'm a bit short of bytes, so is it ok to optimize those periods away? In particular, will ATI hate my intro if I leave out those periods?

Alternatively, if you have an ATI/AMD card and want to help me out by testing my prod, it would be fantastic! Drop me an email -> veikko.sariola at Google's well known mail service.
added on the 2021-03-27 16:51:39 by pestis pestis
If you don't have an AMD card at hand, the AMD Shaderanalyzer usually does a good job testing compilation for you.
added on the 2021-03-27 17:16:00 by Gargaj Gargaj
That's neat, thanks! I'll check it out and pray I don't have to add those periods, as I have no clue where to gain those bytes back :)
added on the 2021-03-27 17:50:04 by pestis pestis
Tested. Compiles. So I assume it's ok!
added on the 2021-03-27 17:58:23 by pestis pestis
Quote:
spec says


OpenGL driver compilers do not really know what the spec is.

The only GLSL compiler that follows the spec "in most cases" is glslang, which compiles GLSL to SPIR-V shaders (you can download the Vulkan SDK; a compiled binary ships with it).

Quote:
WebGL

On Windows, it is translated by ANGLE to DX11, so you are not writing GLSL: you are writing code that will be converted to HLSL, and the HLSL compiler compiles your code.

On Linux, or on Windows using
chrome.exe --use-angle=gl
WebGL will run on OpenGL instead, so your ANGLE-tested code won't work.

And Shadertoy has a LOT of shaders that "do not work in ANGLE WebGL" and/or "do not work in OpenGL WebGL".

On macOS, WebGL is just a pure nightmare: less than 20% of "what you expect to work" will actually work, and Apple does everything to sabotage advanced browser technologies because they only care about money from the App Store.
On Mac, WebGL is translated to the Metal shading language and, in most cases, does not work at all (if your shader is longer than two lines of code, something will be broken, 100%).

WebGL and WebGL2 in their current state are very dead technologies that no one uses. They exist just for fun, are becoming deprecated in upcoming browser releases (Apple will drop WebGL at the end of this year, I think), and have been killed by the corporations (Google has not updated WebGL for years; thousands of bugs are ignored, and no one cares about this useless technology).

Quote:
ATI hate my intro if I leave out those periods?

Develop and test your code in an external Vulkan application, then port it to Shadertoy.
I know people who make demos love to leave variables uninitialized, or to do something like "float a;a=-a;", initializing variables in a broken way that breaks on any video card other than the demo developer's own.
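For example (a minimal sketch of the problem; variable names made up):

Code:
float a;      // uninitialized: its value is undefined
a = -a;       // reads the undefined value; may "work" only on the developer's card
float b = 0.; // explicit initialization behaves the same on every card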

I can link you the app that I use to test my own shaders, but to compile shaders you have to download the Vulkan SDK (or compile glslang yourself):
https://github.com/danilw/vulkan-shadertoy-launcher/releases/
Using this (launching your shader there), you can be 99% sure that you follow the spec and your code will work on most video cards.

Quote:
Alternatively, if you have an ATI/AMD card and want to help me out by testing my prod, it would be fantastic!


AMD's closed-source OpenGL drivers are a joke; if you are on Windows, I feel bad for you :(
On Linux, the AMD open-source driver (AMDGPU, https://en.wikipedia.org/wiki/AMDGPU ) works much, much better, with no bugs in OpenGL or Vulkan. The Windows AMD Vulkan driver is also "very bad" (but better than their OpenGL one).
added on the 2021-03-27 19:03:45 by Danilw Danilw
Quote:
If you don't have an AMD card at hand, the AMD Shaderanalyzer usually does a good job testing compilation for you.

The AMD Shaderanalyzer actually has the same bugs as their closed-source Windows driver.
It's absolutely broken.
I reported a lot of bugs in it and in their driver last year; they don't care.
I won't report bugs to them anymore.
I'm not a "free beta tester for corporate bullshit software".
added on the 2021-03-27 19:10:11 by Danilw Danilw
That's awesome.
added on the 2021-03-27 19:20:25 by Moerder Moerder
Quote:

the spec says

Which GLSL specification exactly (i.e. which version)? Usually you specify the GLSL version in the first line of your shader using the "#version XXX" pragma (if you don't, you get something like GLSL 1.20 if I recall correctly, and you really don't want that). Additionally, there is the glslang reference compiler, which you can use.
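For example, to request GLSL 3.30 (the version pestis mentions below), the first line of the shader would be:

Code:#version 330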

The GLSL "dialect" you use in connection with WebGL adheres to a slightly different specification (GLSL ES something thing) than native OpenGL GLSL. It's a mess.
added on the 2021-03-27 20:25:13 by las las
I currently have #version 330 there.
added on the 2021-03-27 20:32:49 by pestis pestis
I think you could use glslangValidator to check your code for spec conformance.
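For example, with a fragment shader in a file named shader.frag (glslangValidator infers the stage from the file extension):

Code:glslangValidator shader.frag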
added on the 2021-03-27 20:40:59 by cce cce
I read the specs; it seems like ints should be implicitly converted to floats in quite a few places, but in my experience, I get a lot of errors when trying to mix integers with floats. Oh well.
added on the 2021-03-27 20:49:17 by pestis pestis
Quote:
I think you could use glslangValidator to check your code for spec conformance.


Thanks! I'll do this. If it passes, and still doesn't work out there, I can say at least I tried my best.
added on the 2021-03-27 20:51:22 by pestis pestis
There's a lot of mentions of "the spec", but not a whole lot of actually quoting it, so here's some relevant parts from the GLSL 3.30 specification.

Since the main question is about vector type constructors, we can look at chapter 5.4 on Constructors (emphasis mine):
Quote:
Constructors use the function call syntax, where the function name is a type, and the call makes an object of that type. Constructors are used the same way in both initializers and expressions. (See section 9 “Shading Language Grammar” for details.) The parameters are used to initialize the constructed value. Constructors can be used to request a data type conversion to change from one scalar type to another scalar type, or to build larger types out of smaller types, or to reduce a larger type to a smaller type.

This is further elaborated in 5.4.2 Vector and Matrix Constructors, which says:
Quote:
If the basic type (bool, int, or float) of a parameter to a constructor does not match the basic type of the object being constructed, the scalar construction rules (above) are used to convert the parameters.

And the rules above refer to 5.4.1 Conversion and Scalar Constructors:
Code:
int(bool)   // converts a Boolean value to an int
int(float)  // converts a float value to an int
float(bool) // converts a Boolean value to a float
float(int)  // converts a signed integer value to a float
bool(float) // converts a float value to a Boolean
bool(int)   // converts a signed integer value to a Boolean
uint(bool)  // converts a Boolean value to an unsigned integer
uint(float) // converts a float value to an unsigned integer
uint(int)   // converts a signed integer value to an unsigned integer
int(uint)   // converts an unsigned integer to a signed integer
bool(uint)  // converts an unsigned integer value to a Boolean value
float(uint) // converts an unsigned integer value to a float value

So, it should be pretty clear that all the built-in type constructors can and will convert any basic type to another, as the constructors themselves also serve as type conversion utilities; there is no need to initialize a float vector type explicitly with floats. (Sidenote: these constructors and conversion rules are also present in the GLES/WebGL versions of GLSL.) However, we can also look more generally at what the spec says on implicit type conversions, quoting from the chapter 4 Variables and Types introduction (emphasis mine):
Quote:
The OpenGL Shading Language is type safe. There are no implicit conversions between types, with the exception that an integer value may appear where a floating-point type is expected, and be converted to a floating-point value. Exactly how and when this can occur is described in section 4.1.10 “Implicit Conversions” and as referenced by other sections in this specification.

There we have it then: ints can be implicitly converted to floats, and nothing else (the table in 4.1.10 essentially just lists what the above quotation says in words). The cases where the implicit conversion can happen are referred to in multiple places in the specification, and they include at least assignments, expressions (between any two operands), comparisons, the resulting type of any of the three ternary operator expressions, and function return values. (Sidenote 2: to my understanding none of these implicit conversion rules are present in the GLES/WebGL versions of GLSL.) All in all, in most cases you should be able to rely on types based on the integer basic type being convertible to floating-point based types. Also worth mentioning: while I believe function parameters are also subject to implicit conversion (it seems to always work, but I can't find a mention in the specification), the built-in functions to my understanding are not, and instead often provide multiple overloads. The consequence is that, with smoothstep for example, you cannot write:
Code:smoothstep(0, 1, myFloatValue)
but instead have to use:
Code:smoothstep(0., 1., myFloatValue)

Check the available overloads for the rest of the functions you need from the specification or the OpenGL reference pages for your specific GLSL version. Other than that, empirically verify that your drivers actually behave like the spec suggests they should, and validate everything with the validator tool las and cce mentioned.
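To put the allowed implicit conversions in code form, a minimal sketch (identifiers made up):

Code:
float a = 1;     // assignment: the int literal is implicitly converted to float
float b = a + 2; // binary expression: the int operand is converted to float
bool  c = a < 3; // comparison: the int operand is converted to float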
added on the 2021-03-28 03:28:28 by noby noby
+1 for why method overloading is a shit concept :P
Yes, my initial post was flat out wrong; noby's summary of the spec is correct. Part of the confusion is exactly from the overloaded functions not doing implicit type conversions and from WebGL. I'll use the validators to see if I find actual differences from the spec.

Should have realized that constructors are also type converters.
added on the 2021-03-28 09:02:14 by pestis pestis
lol I can remove most of my periods, which I put there for ShaderToy prototyping. New scene, here we come! Thanks guys!
added on the 2021-03-28 10:22:34 by pestis pestis
noby: Even smoothstep actually works without periods, presumably because smoothstep has two overloads:

Code:
genType smoothstep(genType edge0, genType edge1, genType x)
genType smoothstep(float edge0, float edge1, genType x)

I assume the second version also makes it accept ints as the first and second parameters, causing implicit type conversion. At least glslangValidator & NVidia drivers are happy.
added on the 2021-03-28 10:48:40 by pestis pestis
On NVIDIA, more recent GLSL versions do not like mixed smoothstep arguments, FYI.
added on the 2021-03-28 15:56:36 by LJ LJ
By mixed, do you mean the first is a float and the second is e.g. a vec3? But if both the first and second are floats, is that ok?
added on the 2021-03-28 20:36:27 by pestis pestis
Quote:
I assume the second version makes it also accept ints as the first and second parameter, causing implicit type conversion. At least glslangValidator & NVidia drivers are happy.

It should, assuming built-in function parameters are eligible for implicit conversion. However, I couldn't find such a mention in the specification earlier, though admittedly I didn't read it all the way. On my NVIDIA drivers, int parameters to smoothstep only work if the interpolant is an int as well (which would match the first of those overloads). Interesting to hear that the validator deems it correct; might be a driver issue then :)

Quote:
On nvidia more recent GLSL versions do not like mixed smoothstep arguments. fyi

This is my experience as well, and not just with recent drivers but for the 2+ years that I've had an NVIDIA card.
added on the 2021-03-28 23:33:37 by noby noby
Another relevant issue: on NVIDIA, type conversions confuse function overload resolution. If you have functionName(float param) and functionName(float param1, float param2) and then call functionName(7), the NVIDIA driver will complain that there is no function with matching parameter types. AMD does the sane thing (which may well differ from what the spec requires) and first uses the number of arguments to narrow down the options before trying type conversions.
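In code, a minimal sketch of the same situation:

Code:
float functionName(float param) { return param; }
float functionName(float param1, float param2) { return param1 + param2; }
// ...
float x = functionName(7); // NVIDIA: no matching overload; AMD converts 7 to float and picks the one-argument version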
added on the 2021-03-29 00:44:55 by Seven Seven
Reporting in:

I currently have a line like the following, where p is a vec2.

Code:smoothstep(0,2,min(25-p.x,11-abs(p.y)))


From my reading, that min will produce a float, so this is effectively

smoothstep(<int>,<int>,<float>)

Passes glslangValidator & works with my driver (#version 330).
added on the 2021-03-29 10:15:37 by pestis pestis
If I remember correctly, it changes with the version... something like
Code:smoothstep(.5,1,sin(time)) // (float, int, float)
would work with #version 130 but fail with 430.
added on the 2021-03-31 00:39:00 by LJ LJ
What you really get totally depends on the vendor's implementation of the GLSL compiler.
A driver update to version Y can potentially break something that works and compiles just fine with version X (you can massively improve your chances by sticking to the specification and using proper validation, as mentioned above).

This has been a serious issue in the past and probably one of the reasons for the existence of tools like gshaderreplacer.
added on the 2021-03-31 13:27:13 by las las
In the end, in addition to developing on an NVidia card, I 1) put a glslangValidator step into the CMake script; 2) tested the final submission with the AMD Shaderanalyzer.

Shaderanalyzer complained about ternaries mixing <int> and <float> branches (i.e. ... ? 1 : 1.), which passed both NVidia and glslangValidator. Fortunately, I had a few bytes left to fix these and make AMD happy.
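In code, roughly (with a hypothetical bool cond):

Code:
float a = cond ? 1 : 1.;  // mixed int/float branches: Shaderanalyzer rejects this
float b = cond ? 1. : 1.; // both branches float: AMD is happy too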
added on the 2021-03-31 13:51:21 by pestis pestis
