Best way to send time to a shader in 4k/1k?
category: code [glöplog]
I see in IQ's 4k framework that he uses glColor3f to send time data to the shaders. Doesn't seem to work for me. Any other clever hacks to send the current time to a shader in 4 kilobytes or less?
Why doesn't it seem to work? What are the symptoms?
glColor3f to send time data to the shaders ??
How is it possible to get the color from OpenGL into the shader? Using the background color? Or just writing a pixel somewhere on the screen?
Worksformetoo(tm)
Check it out in the Accio demo found here: http://www.iquilezles.org/www/material/isystem1k4k/isystem1k4k.htm
Basically he's doing glColor3f(t,sinf(.25f*t),0.0f); every frame and then using gl_Color.x in the fragment shader to grab the data. Saves quite a few bytes not having to set up the necessary extensions to do it the proper way.
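For reference, the whole CPU side of that trick is tiny; a rough sketch (t is your running time in seconds, and glRects is just the usual fullscreen quad):

// push the time through the fixed-function vertex color every frame;
// with no vertex shader bound, the fragment shader sees it as gl_Color
glColor3f(t, sinf(0.25f*t), 0.0f);
glRects(-1, -1, 1, 1);   // fullscreen quad, no buffers or attributes needed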
Symptoms of it not working for me? It's always 0 in the shader xD
You're doing something wrong. What happens if you pass in glColor3f(0.5, 0.5, 0.5) and try drawing with it in the shader?
Best way to use shaders in OGL 4k, thanks to ARB:
Quote:
/* an array of shader code strings, in my case I have an include shader code with many useful functions as the first array element, and then the actual shader code with the main function as the second array element. This makes small multi-shader intros with code-reuse possible :)
*/
__forceinline unsigned int createProgram(const char** shaders)
{
    return ((PFNGLCREATESHADERPROGRAMVPROC)wglGetProcAddress("glCreateShaderProgramv"))(GL_FRAGMENT_SHADER, 2, shaders);
}
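A possible call site, matching the two-element array layout described in the comment (the string names here are just placeholders for your GLSL sources):

static const char* shaders[2] = { commonGLSL, mainGLSL };   // include code + the shader with main()
unsigned int shaderProgram = createProgram(shaders);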
Then somewhere in your C++ main function:
Quote:
((PFNGLUSEPROGRAMPROC)wglGetProcAddress("glUseProgram"))(shaderProgram);
((PFNGLUNIFORM4FPROC)wglGetProcAddress("glUniform4f"))(0, width, height, introTime, get_Envelope(1));
glRects(-1, -1, 1, 1);
You don't need a vertex shader, just pass width&height into your uniform vec4 of the shader, then do the math there (gl_FragCoord.xy/U.xy). That's how I also get the intro time in U.z and sync data from one 4klang instrument in U.w.
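The shader side of that looks roughly like this (a sketch, not the actual intro code; it assumes a GL 4.3 driver for the explicit uniform location, though with a single uniform most drivers hand out location 0 anyway):

#version 430
layout(location = 0) uniform vec4 U;   // x,y = resolution, z = intro time, w = 4klang envelope
out vec4 c;
void main()
{
    vec2 uv = gl_FragCoord.xy / U.xy;                            // 0..1 screen coords, no vertex shader needed
    c = vec4(uv, 0.5 + 0.5*sin(U.z), 1.0) * (0.5 + 0.5*U.w);     // something pulsing to the synth
}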
Hey thanks xTr1m and Preacher. Managed to get it working, just needed to copy gl_Color to a varying vec3 in the vertex shader. But I'm probably just going to use xTr1m's method, looks neat xD
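For anyone hitting the same wall: the passthrough described there is just something like this (a sketch; the varying name is made up):

varying vec3 col;                // read col.x in the fragment shader to get the time
void main()
{
    col = gl_Color.rgb;          // whatever was set with glColor3f() on the CPU side
    gl_Position = gl_Vertex;     // glRects(-1,-1,1,1) already gives clip-space corners
}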
what do you think about using the lightsource? shouldn't that be equal small...
Putting a constant value through a vertex interpolator (which is what you're doing when using glColor3f()) doesn't seem smart; just use a constant/uniform instead, that's what it's made for. That reminds me: there used to be some obvious precision loss in shader calculations, I wonder how that is now and how it is for vertex interpolators. Google ahoy!
(doesn't seem smart -> you're doing extra work and perhaps suffering precision loss; then again, the code to do it vs. the uniform setup might just be smaller! i'm not into pc ogl)
i use something like glRectf(time/5000, time/5000, -time/5000, -time/5000); in this piece of shame: http://www.pouet.net/prod.php?which=57718
it is like 20-30 bytes more convenient to just use glRecti(t,t,-t,-t), but for whatever reason nvidia proprietary drivers lose float precision too soon (noticeable after ~10sec). this doesn't happen with opensource/mesa (ati, intel) drivers, but who cares about opensource these days.
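The receiving end of that glRectf trick is a tiny vertex shader along these lines (a sketch; the 5000 matches the divisor above, use whatever unit you pass in):

varying float time;
void main()
{
    time = abs(gl_Vertex.x) * 5000.;                   // recover the value smuggled in via glRectf
    gl_Position = vec4(sign(gl_Vertex.xy), 0., 1.);    // stretch the tiny rect to fullscreen (degenerate at t==0, but that's one frame)
}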
Hm, can't find anything that would claim you can't just depend on the fully specified precision these days. Good.
w23: hm. so it's the driver that screws over the precision. i wonder where, how and why.
If you have the capability, use callbacks from the synth for perfect time-sync
My synth has something like this function:
void registerCallback( CALLBACK_T*, void* arg)
where "CALLBACK_T" is a typedef to the function pointer shape of blarrrrghh(int note, int channel, void* arg). Then you pass in your function and anything you like as "arg", and it gets called every time a note gets pressed or released and passes "arg" back.
Auto-triggered events on instruments are by no means "perfect time-sync".
ok, "time-sync to within accuracy of buffer size, latency, and speed of routine that processes the event" for the pedantic.
You missed my point completely: auto-triggering stuff on instrument-events is super-highway to boring-ass, predictable sync.
sorry. I'll try to throw in a few curveball events next time just to surprise you.
just measure your time in e.g. quarter notes instead of seconds (so that 1.0 == 1 beat) and suddenly it's easy to do procedural sync stuff, e.g. "float strobo=1.-mod(2.*time,1.);", or do synced camera cuts, synced motions (sin and cos with time*pi*some_power_of_two to the rescue), etc. Or do more complicated stuff either on the CPU or as a texture lookup / shader constant array / both.
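Concretely, with a beat-based time uniform that boils down to snippets like these (a sketch):

uniform float time;   // measured in beats: 1.0 == one beat

float strobo() { return 1. - mod(2.*time, 1.); }      // flash decaying twice per beat
float camera() { return floor(time/8.); }             // cut id, changes every 8 beats, feed it into a hash
float wobble() { return sin(time*3.14159265*4.); }    // motion locked to quarter beats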
beatsync ftw