pouët.net


Crinkler & shaders: how to achieve the best compression ratio?

category: code [glöplog]
I'd like to open a discussion about shader compression in Crinkler. First, I've noticed that macros like "#define r return" are counterproductive and result in bigger compressed code (no matter how many macros I inserted, the result was worse).

For the variable names, I decided to first reuse the most frequent letters from the rest of the code. Then, I wondered if I should name a variable "W" (where "W" is an unused letter), or "ee" (where "e" is the most frequent letter), or "fl" (where "fl" is the most frequent bigram in the code). I found that single letters were the best choice. Then, it was better to combine the most frequent letters instead of using the most frequent bigrams (that surprised me).
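As a toy illustration of that renaming strategy (this is not GLSL Minifier's actual algorithm; the shader snippet and helper below are made up, and a real tool would also have to avoid collisions with existing identifiers):

```python
from collections import Counter
import re

def rename_by_frequency(source, old_names):
    # Count letter frequencies in the code that will remain after
    # renaming (i.e. everything except the old identifiers).
    stripped = source
    for name in old_names:
        stripped = re.sub(r'\b%s\b' % re.escape(name), '', stripped)
    freq = Counter(c for c in stripped if c.isalpha())
    # Reuse the most common remaining letters as the new one-letter names.
    letters = [c for c, _ in freq.most_common()]
    mapping = dict(zip(old_names, letters))
    for old, new in mapping.items():
        source = re.sub(r'\b%s\b' % re.escape(old), new, source)
    return source

src = "float speed = 3.0; float offset = speed * time;"
print(rename_by_frequency(src, ["speed", "offset"]))
# → "float t = 3.0; float f = t * time;"
```

Here 't' and 'f' win because the surviving keywords ("float", "time") already use them a lot, which is exactly the single-letter-frequency heuristic described above.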

That's the strategy currently used in GLSL Minifier. It works quite well, although I believe it's possible to improve it. I had a quick look at PAQ compression, but some of you might have already tried different strategies, or have suggestions?

At some point, I thought I could use the type information, as variables of similar type are more likely to be used in the same way. But I can go further: for each letter, I can look at the contexts where it's used (and give the same name to things used in similar contexts).

What is the size of the context Crinkler uses to predict the next bit? Should the naming of variables depend on the binary representation of the letters? Does it make sense to do the analysis on the bits, or is looking at bytes (i.e. characters) enough?


In a future version of GLSL Minifier, I will also try to inline and remove variables when possible. For instance, in the expression "float f = 4. * x; return f;" the variable f could be removed. This will require a more complex analysis and detection of side effects.
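The inlining idea could look roughly like this (a naive regex-based sketch that only handles the exact quoted pattern; a real implementation needs a parser, use counting and side-effect detection):

```python
import re

def inline_single_use(code):
    # Naive sketch: rewrite "TYPE NAME = EXPR; return NAME;" into
    # "return EXPR;".  Only safe when NAME is used exactly once and
    # EXPR has no side effects -- neither of which is checked here.
    pattern = r'\b(?:float|vec[234]|int)\s+(\w+)\s*=\s*([^;]+);\s*return\s+\1\s*;'
    return re.sub(pattern, lambda m: 'return %s;' % m.group(2).strip(), code)

print(inline_single_use("float f = 4. * x; return f;"))
# → "return 4. * x;"  (the variable f disappears)
```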
added on the 2010-06-20 01:19:03 by LLB LLB
well i did some weird shit a while ago with a small grammar that always replaces strings according to some rules until only terminal symbols are left in the string. using the built-in preprocessor could still be more beneficial, if used correctly.
added on the 2010-06-20 01:48:18 by abductee abductee
crinkler is using a context order of 1 to 8 bytes (those "mask-models" are the ones selected by crinkler while compressing). It's not a surprise that bigrams are less efficient than single letters, since false bigrams (like "fl") perturb the probabilities for real ones ("fl(oat)").
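That point about false bigrams can be illustrated with a toy next-character entropy measurement (this is not Crinkler's model, which mixes several context orders; the two snippets are made up):

```python
import math
from collections import Counter

def next_char_entropy(text, context):
    # Shannon entropy (in bits) of the character following each
    # occurrence of `context` in `text`.
    nxt = Counter(text[i + len(context)]
                  for i in range(len(text) - len(context))
                  if text[i:i + len(context)] == context)
    total = sum(nxt.values())
    return -sum(n / total * math.log2(n / total) for n in nxt.values())

clean = "float a=0.;float b=a*a;"        # "fl" only appears inside "float"
noisy = "float fl=0.;float c=fl*fl;"     # "fl" also used as a variable name
print(next_char_entropy(clean, "fl"))    # 0.0 -- always followed by 'o'
print(next_char_entropy(noisy, "fl"))    # ~1.92 -- 'o', '=', '*', ';' all occur
```

The variable "fl" makes the context "fl" much less predictable, which is exactly why it compresses worse than an unrelated single letter.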

For #define, it can compress well when the definition of your function is much smaller with a #define than with a plain glsl declaration... otherwise it won't help on substitution... context modeling is much more efficient at this "template substitution". Although, using #define in certain cases helps a lot to templatize large portions of the code...

For bytes vs bits, well, you probably have to take care of both! ;) I guess that selecting letters that share a longer sequence of common bits could help, but you'd have to check it...
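For the bit-level question, a quick way to eyeball which letters share leading bits (purely illustrative; whether this actually helps Crinkler's bit-level models would have to be measured, as the post says):

```python
def bit_pattern(c):
    # 8-bit binary representation of a character's ASCII code.
    return format(ord(c), '08b')

# Lowercase ASCII letters all share the leading bits 011, and letters
# that are close in the alphabet share even more of their high bits:
for c in "efgh":
    print(c, bit_pattern(c))
# e 01100101
# f 01100110
# g 01100111
# h 01101000
```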
added on the 2010-06-20 01:48:57 by xoofx xoofx
Thanks a lot!
added on the 2010-06-20 12:52:19 by LLB LLB
After some experimentation, I got interesting results. Instead of always reusing the same names (which increased the frequency of some letters), my tool now uses the context in which variables are used. The goal is to increase the frequency of bigrams. In the end, the shader compresses even better (~10 bytes saved on average).
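A rough sketch of that context-based naming idea (hypothetical, not the actual GLSL Minifier code; it also naively counts bigrams over the whole source, including the old name's own occurrences):

```python
from collections import Counter
import re

def choose_name(source, var, candidates):
    # For each occurrence of `var`, record the character just before and
    # just after it, then pick the candidate letter that forms the most
    # frequent bigrams with those neighbouring characters.
    bigrams = Counter(source[i:i + 2] for i in range(len(source) - 1))
    neighbours = []
    for m in re.finditer(r'\b%s\b' % re.escape(var), source):
        if m.start() > 0:
            neighbours.append(('before', source[m.start() - 1]))
        if m.end() < len(source):
            neighbours.append(('after', source[m.end()]))
    def score(letter):
        return sum(bigrams[ch + letter] if side == 'before'
                   else bigrams[letter + ch]
                   for side, ch in neighbours)
    return max(candidates, key=score)

src = "float foo=sin(foo);return foo;"
print(choose_name(src, "foo", "aeio"))
# → 'o', because "o=", "o)" and "o;" already occur in the source
```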

I have updated GLSL Minifier today and tried it on real 4k intros. It was able to save 41 bytes on Retrospection, 56 bytes on Valleyball, 48 bytes on the pre-release version of Another Theory. I'd love to get more data and see what I could improve.

See more detailed stats
added on the 2010-10-15 02:05:05 by LLB LLB
Brief thread derail: why aren't shaders compiled into an intermediate binary (like MSIL or Java bytecode)? Is what nvidia and amd do so radically different that such a thing is impossible? Wasn't HLSL converted into assembly at one point?
added on the 2010-10-15 03:44:29 by QUINTIX QUINTIX
HLSL was (and still is) compiled into a bytecode which is then translated by the driver. The idea is sound, the problem is just that the HLSL compiler tries to do a lot of optimizations (minimizing number of registers used, everything gets inlined, loop unrolling, conditional stripping, etc.) that drivers then partially or fully undo to actually generate code for the target hardware.

It'd make life easier for drivers (and shorten compile times considerably) if the HLSL compiler tried a little less hard :)
added on the 2010-10-15 05:11:34 by ryg ryg
word
@QUINTIX, afair, the bytecode generated from HLSL is usually much larger than the optimized HLSL itself, and also doesn't compress as well...
added on the 2010-10-15 10:00:47 by xoofx xoofx
Nice one LLB!
added on the 2010-10-15 14:08:37 by raer raer
ryg: I believe he meant why aren't they compiled before runtime, i.e. included in the binary as binary themselves rather than packed ASCII. But yes, those compilers really do try way too hard :) .

Also, what @lx said.
added on the 2010-10-15 15:39:43 by ferris ferris
and then the drivers, like ryg said, spend a lot of time 'deoptimizing' and recompiling the bytecode which in turn spawns a whole new hurdle if you actually have a lot of them (i.e in a game)
added on the 2010-10-15 17:49:32 by superplek superplek
"and then the drivers, like ryg said, spend a lot of time 'deoptimizing' and recompiling the bytecode which in turn spawns a whole new hurdle if you actually have a lot of them (i.e in a game)"
well the "deoptimization" doesn't take much time - it's basically just the driver converting the code into its own IR, which destroys things like the exact register assignment. it's just pointless for the HLSL compiler to be spending a lot of time trying to find the "optimal" unroll level for loops etc. when it doesn't know what HW it's targeting, what the scheduling constraints (or even actual microcode instructions) are, or even how the number of registers used influences performance. it just makes compilation of dynamic branch-intensive shaders take forever and a day without actually being useful to anyone :)
added on the 2010-10-15 19:33:00 by ryg ryg
preach!
added on the 2010-10-15 20:39:47 by ferris ferris
yeah 'recompiling' was to be interpreted loosely :)
added on the 2010-10-15 21:31:48 by superplek superplek
Shader Minifier (yes, that's the new name) now supports HLSL. You should all use it now - even Elevated would be smaller by using this tool!

(detailed statistics will come later)
added on the 2011-02-10 00:43:52 by LLB LLB
Sounds as if that tool will save me a lot of time ;)
Nice job - I'll give it a try.
added on the 2011-02-10 00:53:15 by las las
url please
added on the 2011-02-10 00:54:28 by xernobyl xernobyl
too late :D
added on the 2011-02-10 01:00:03 by las las
whilst i like the tool - i also find the stats demotivating. from what i see you only gain 3-10 bytes using this tool. that's not massive :-)
How can a tool that does better than hand-optimizing a shader file NOT be useful? Just wondering...
added on the 2011-02-10 12:06:07 by raer raer
rasmus: It depends on the shader.

If you use Shader Minifier on an already optimized shader, you can save up to 50 bytes. For instance, try it on Cdak (shader extracted from their release). It removes 109 bytes on the uncompressed shader, which become 41 bytes after Crinkler compression. I think that's not bad.

Sure, it's not perfect and it might sometimes give you a bigger result - but you can often fix it by hand to get the best compression ratio.

Anyway, it also saves time.
added on the 2011-02-10 13:19:16 by LLB LLB
@LLB
is it for party version or the final?
party version was fairly unoptimized=)
added on the 2011-02-10 14:02:11 by unc unc