pouët.net

4k intro scene timings

category: code [glöplog]
Hi 4kb people

How do you store scene timings in your intros? Which approach or solution is best, and why? Example:
if (currtime > 00000 && currtime < 12000)
    drawScene1();
if (currtime > 12000 && currtime < 25000)
    drawScene2();
if (currtime > 25000 && currtime < 36000)
    drawScene3();
...
Thank you!
Depends entirely on your context; you can do a jump table if that fits your code/data setup better.
added on the 2016-08-15 15:54:49 by Gargaj Gargaj
This is (as you may be aware) an actual pain in the ass to write, *and* it compresses worse than having an array:

int16_t scene_lengths[NUM_SCENES];

just iterate through it, find out which scene you are currently in, and then either use a switch statement or a second array with function pointers to call the corresponding code for it.

(Note that storing lengths instead of absolute start values is smaller, since scene lengths will probably all be multiples of the same duration, if you sync to music at all. And your time unit should be "beats"; otherwise syncing always hurts.)
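
For example, something like this (a rough, untested sketch; all names and numbers are made up):

#include <stdint.h>

#define NUM_SCENES 3

// scene lengths in beats; small repeated values compress well
static const int16_t scene_lengths[NUM_SCENES] = { 32, 32, 16 };

static void scene1(float beat) { /* render scene 1 */ }
static void scene2(float beat) { /* render scene 2 */ }
static void scene3(float beat) { /* render scene 3 */ }

typedef void (*scene_fn)(float);
static const scene_fn scene_funcs[NUM_SCENES] = { scene1, scene2, scene3 };

static void draw(float beat)
{
    int i = 0;
    // walk the lengths until we find the scene containing 'beat'
    while (i < NUM_SCENES - 1 && beat >= scene_lengths[i])
    {
        beat -= scene_lengths[i];
        ++i;
    }
    scene_funcs[i](beat); // 'beat' is now local to the current scene
}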
added on the 2016-08-15 15:58:18 by urs urs
urs: What do you mean by "beats"? Could you please clarify?
What does the above code do if currtime == 12000 or currtime == 25000?
added on the 2016-08-15 16:09:36 by pohar pohar
I write all the timing stuff in the shader. No x86 machine code, except to get the time into a float in GLSL. Everything is based on the music's beats and bars, so all transitions and hits that happen in the demo are expressed as musical bar/beat times. Calculate a float multiplication factor to convert the time variable into a musical beat or bar counter. Make the song in blocks of, say, 8 bars, so you can use that as a demo part counter if you like.
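
Roughly like this (an untested sketch; the BPM and block sizes are made-up values):

#define BPM           120.0f
#define BEATS_PER_BAR   4.0f
#define BARS_PER_PART   8.0f

typedef struct { float beat, bar; int part; } MusicTime;

static MusicTime music_time(float seconds)
{
    MusicTime t;
    t.beat = seconds * (BPM / 60.0f);      // seconds -> beat counter
    t.bar  = t.beat / BEATS_PER_BAR;       // beats -> bar counter
    t.part = (int)(t.bar / BARS_PER_PART); // 8-bar blocks as demo part index
    return t;
}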
added on the 2016-08-15 16:34:28 by yzi yzi
While I'm hesitant to disagree with urs, I think the approach you suggested is totally viable as well. Example: https://github.com/armak/pbr-whitespace/blob/master/src/shaders/fragment_original.fs#L160 (the intro)

I didn't consider that too arduous to write (it took less than a day), and looking at the Crinkler report, each character compresses to roughly 0.05-0.5 bits on average, so to me at least the compression ratio is more than good enough. Just parameterize your code and scenes well and write everything in a consistent structure. I think the ratio of the minified sync code before vs. after compression was at least 10:1, or better.
added on the 2016-08-15 16:41:14 by noby noby
Quote:
urs: What do you mean by "beats"? Could you please clarify?


Your timing source should always be your audio device. This is the only reliable way to remain in audio-video sync, as opposed to any sort of timer or frame counter. It will normally give you the play position in samples; calculate music beats from that (by dividing by samples per beat). That way, your sync points will most likely be nice integer values (and probably multiples of 4).

The mercury demotool, for example, doesn't even have a concept of "time in seconds"; all times are specified in terms of music beats.

This is of course useless if you use music with a highly non-even rhythm, in which case you'll have to figure it out yourself. :)
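
For the samples-to-beats conversion, something like this (untested; the sample rate and BPM are made up, and how you read the play cursor depends on your audio API):

#define SAMPLE_RATE 44100.0f
#define BPM           140.0f

static float current_beat(unsigned int play_pos_samples)
{
    const float samples_per_beat = SAMPLE_RATE * 60.0f / BPM;
    return (float)play_pos_samples / samples_per_beat;
}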
added on the 2016-08-15 16:49:53 by urs urs
Non-even rhythms can usually be constructed from even rhythms by adding small patterns of positive or negative delays. Storing the "groove" pattern as relative differences to the predicted beat is smaller than storing the entire pattern as absolute values.
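
Something along these lines (a rough sketch; the delta values are made up):

#define GROOVE_LEN 4

// small signed offsets (in ms) relative to the predicted, even beat grid
static const signed char groove[GROOVE_LEN] = { 0, 12, -8, 4 };

static float swung_beat_time_ms(int beat, float ms_per_beat)
{
    return beat * ms_per_beat + groove[beat % GROOVE_LEN];
}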
added on the 2016-08-15 16:59:48 by psenough psenough
having done a reasonable amount of sync, fully agree with urs. Convert time to beats, save your sanity (and some bytes ;)
added on the 2016-08-15 17:05:13 by psonice psonice
Shader source for Reionization can be found here.

Basically all the timing is done in shaders, using the beat as the time unit.
added on the 2016-08-15 17:16:06 by pommak pommak
Beats (or position) work great in 1kb too.
added on the 2016-08-15 17:21:31 by p01 p01
You could use a pointer to a function that is updated by a callback function once the time is reached. Then you would not need if/else branches. It would probably run faster. Never tried it though.
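
Roughly like this, perhaps (untested; the names and the trigger time are made up):

typedef void (*scene_fn)(void);

static void scene1(void) { /* render scene 1 */ }
static void scene2(void) { /* render scene 2 */ }

static volatile scene_fn current_scene = scene1;

// called periodically from a timer or audio callback
static void on_tick(float beat)
{
    if (beat >= 32.0f)          // scene 2 starts at beat 32
        current_scene = scene2;
}

// the render loop just does: current_scene();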
added on the 2016-08-15 17:26:31 by Adok Adok
Like most of the others, I use the beat number based on the audio sample being played at a particular time.
added on the 2016-08-15 18:46:18 by merry merry
Back in the day when demo music was always MOD, I used to use a "get song position" function that returned timings in the form "100 * songpos + row", so it was easy to write specific musical time values like 416 = song position 4, row 16. A similar function can easily be used in shader code as well, assuming the song is composed of equal-length pattern blocks.
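
In C that would be something like this (the example values are from the post above; the function names are made up):

static int music_timecode(int songpos, int row)
{
    return 100 * songpos + row;   // song position 4, row 16 -> 416
}

// usage: if (music_timecode(pos, row) >= 416) start_next_part();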
added on the 2016-08-15 21:32:28 by yzi yzi
It is also useful not only to base the timing on beats (or whatever your basic music unit is), but also to go a step higher and use whole patterns as a key for switching between scenes etc.

I would not waste too much time wondering whether a table/switch/if... might be smaller unless you're running out of space, since the compiler usually optimizes it quite well. But as a rule of thumb, avoiding large numeric values (especially odd ones requiring a lot of semi-random bytes; this includes absolute addresses/pointers) tends to compress better.
added on the 2016-08-15 23:34:11 by T$ T$
So far I've just used ifs and careful tweaking of animation params, but I just started looking into marrying GroundControl and noby's 4k framework...
added on the 2016-08-15 23:41:37 by visy visy
I use the audioTime|beat|row as a float: it's easy to interpolate, normalize within a beat, and cast to an int to check against a bitmask that triggers other sounds or effects. ... at least that works for JavaScript 1kb prods. That should scale to 4kb, even if 4kb audio is more elaborate.
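
In C terms, something like this (a rough sketch; the mask value is made up):

static float beat_frac(float beat)
{
    return beat - (int)beat;   // 0..1 within the current beat, handy for envelopes
}

static int effect_fires(float beat)
{
    const unsigned int mask = 0x00008421u;   // beats 0, 5, 10, 15 of a 32-beat loop
    return (mask >> ((int)beat & 31)) & 1u;
}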
added on the 2016-08-16 01:01:42 by p01 p01
Is there any "state of the art" (programming-wise) 4k or 64k source someone would recommend looking at?
added on the 2016-08-16 11:24:05 by Tigrou Tigrou
tigrou: if I were starting a 4k in 2016, I would go with this: https://github.com/in4k/pbr-introsystem. Not sure about 64k; probably mercury's signed distance field library would be the best starting point.
added on the 2016-08-16 15:09:09 by psenough psenough
Yep, we used pbr-introsystem for our latest 4k. Seems minimal and workable enough.
added on the 2016-08-16 15:52:51 by visy visy
Aaaand now I really need to get the repo up to date... :)
added on the 2016-08-16 15:53:18 by noby noby
Tigrou, you could base it on the sources of Detached. Having looked at that pbr-introsystem, I can easily conclude that there's still unused sizecoding optimization potential there.
added on the 2016-08-16 16:46:40 by xTr1m xTr1m
xTr1m: Yes, the current version there is quite suboptimal size-wise. I think I've sliced close to 200 bytes off of it since then, iirc. Your examples are quite a bit more streamlined, more so than the current one I'm using too. I should consider an asm rewrite.
added on the 2016-08-16 16:55:36 by noby noby
On top of scene timing, I usually have a mechanism by which I can look up, for any point in time, when each music channel was last triggered prior to that time. This is very useful for making things happen synchronized to the music. Just don't overdo it. :)

Clinkster has a built-in feature for generating and querying this information.

Some useful extensions:
- Separate trigger information for individual tones, so that the different notes in a melody can trigger different events.
- Total number of times a channel (or individual tone) has been triggered so far.

Used heavily in e.g. Wishful Twisting. :)
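
A rough sketch of such a lookup (untested; the tables and channel layout are made up; Clinkster generates the real data for you):

#define NUM_CHANNELS 4
#define MAX_TRIGGERS 64

// trigger times in beats per channel; -1 terminates each list
static const float trigger_beats[NUM_CHANNELS][MAX_TRIGGERS] = {
    { 0, 4, 8, 12, -1 },   // kick
    { 2, 6, 10, -1 },      // snare
    { 0, 1, 2, 3, -1 },    // hats
    { 16, -1 },            // lead
};

// beat of the last trigger of 'channel' at or before 'beat', or -1 if none yet
static float last_trigger(int channel, float beat)
{
    float last = -1.0f;
    for (int i = 0; i < MAX_TRIGGERS && trigger_beats[channel][i] >= 0.0f; ++i)
    {
        if (trigger_beats[channel][i] > beat)
            break;
        last = trigger_beats[channel][i];
    }
    return last;
}

// e.g. a kick-synced flash: exp(-(beat - last_trigger(0, beat)) * 4.0f)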
added on the 2016-08-17 16:26:05 by Blueberry Blueberry
