pouët.net

Re: mesh encoding for 4k intros

category: general [glöplog]
skrebbel: Because generation usually gives you extruded cubes and cylinders.
added on the 2007-10-22 10:27:13 by kusma
thx, it's uploaded
added on the 2007-10-22 10:27:16 by iq
iq, maybe a "current best" column in your table would be good.
Also, any idea how to judge the quality of lossy compression?
added on the 2007-10-22 11:31:24 by auld
iq: Why would you want to only include the compressed size of the data in the results, and not the size of the decompression code? This makes it sort of irrelevant for 4k coding. It actually makes the challenge irrelevant in general as you can keep refining your model to better predict a specific mesh to a point where the entropy under this model is basically zero.

Imo the challenge should be to find a good trade-off between the compressed size of the decompression code and the compressed size of whatever data you might use during decompression. Basically the Kolmogorov complexity in a crinkler/kkrunchy environment :)
added on the 2007-10-22 11:47:39 by mentor
mentor, I completely agree. Both compressed mesh data and decoding code will be taken into account for the results.
added on the 2007-10-22 11:51:13 by iq
doesn't this all sound very interesting for 64ks, though? :)
added on the 2007-10-22 12:02:40 by smash
absolutely :)
added on the 2007-10-22 17:57:14 by makc
uuhhh, smash, are you interested in 64k intros again??...

auld, I will build the tables tonight, and highlight the "best score so far" row.
added on the 2007-10-22 18:24:22 by iq
iq: god no, but hopefully someone who is - and is interested in doing more than some cubes and so on - might try it, and then we might see something cool :)
added on the 2007-10-22 18:36:50 by smash
cubes OR ribbons :)
added on the 2007-10-22 18:38:01 by keops
spikecubes!
spikeribbons!
added on the 2007-10-22 18:50:57 by kusma
i'll just post some of my experiences with mesh compression here: candytron used this algorithm ("Near-Optimal Connectivity Encoding of 2-Manifold Polygon Meshes", kusma posted this already) for mesh compression. don't remember details about the mesh - it was pretty close to 2000 edges (1950 or so), but i don't remember the vertex/face counts anymore :) (probably both around 1000). if i were to do it again, i would probably use "Compressing Polygon Mesh Connectivity with Degree Duality Prediction", which uses an equivalent traversal scheme, but maps more nicely to the halfedge variant we used and would probably reduce code size.

the candytron mesh also used arbitrary-degree polygons (though the vast majority were quads), which in hindsight was probably unnecessary and certainly made things more complicated. the code for the mesh decompressor was around 1.5k (before packing), mesh geometry+connectivity was around 1.2k or so - compressed, but not with kkrunchy; the kkrunchys back then didn't adapt quickly to new data but were good at storing basically uncompressible data, so this worked out.

actually we originally wanted to store texture coordinates for the mesh too, which meant we needed to support multiple copies of the same vertex with different attributes (normals, texture coords, etc.), which incidentally is something that is really important in practice (in my opinion anyway) but isn't directly supported by most mesh coders, so you need to jump through a bunch of additional hoops to make it work (also, it's completely ignored in most texts on mesh data structures, and with good reason, because it complicates things immensely). also, it's a real joy to support this properly throughout a mesh generator - e.g. some edge may be a cut in the mapping coordinates, so you may not smooth texture coordinates across that boundary when subdividing, but you still need to smooth the positions. fun fun fun.

anyway, the mesh coder for candytron supported this, but in the end we didn't have the space for the uv coordinates per vertex and also had to dumb down the texture generator to a ridiculous level (basically just envmaps, as anyone who watches the intro will notice :), so it wouldn't have done us any good anyway.

so why was it this big? animation, mainly. first, the skeleton we used was ridiculously complex (90 bones), and nearly all vertices had 4 weights. just storing the default poses for the skeleton and the weight map was bigger than the mesh data IIRC, but we also had to store the animation, which was predictably huge given the number of bones, even though we completely quantized it to death (as is clearly visible in the party version; final is a lot better in that regard - i also quantized the geometry more nicely there). overall animation data was around 4.5k or so (compressed), and all mesh-related data together was around 8k (compressed). looking back, we should've insisted on a simpler skeleton. well, as they say, "experience is what you get when you didn't get what you wanted".

finally, some comments and pointers on the points made in iq's presentation and the posts in this thread:

  • i agree that quad-meshes are way more useful than tri-meshes, especially with subdivision. catmull-clark subd is simpler (in terms of subd stencils) than most tri-based schemes and looks way better. and the lack of "redundant" edges is definitely helpful too.
  • similarly, yes, proper geometry coding is way harder than getting the topology down, because most simple predictors simply don't work well for low-poly meshes. though i guess you could try fitting coefficients for an "optimal" linear predictor based on adjacent vertices per mesh - not sure if it helps much, but it's only a couple of floats to store and very little code, so it wouldn't need to be great to be worthwhile (see the predictor sketch after this list). martin isenburg also goes into the topic of which predictors are optimal for (regular) high-degree polygons - in case you're interested, read this.
  • in general, reading the papers by martin isenburg (just google it) is a good time investment, because he doesn't stop at the very basic stuff (i.e. coding of triangle meshes) but also has done a lot of work on arbitrary polygon meshes and prediction schemes for different types of data. in early 2003, i read pretty much everything there was on mesh compression then, and his papers gave a lot of answers i couldn't get elsewhere :).
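To make the predictor talk above concrete, here is a minimal sketch of the classic parallelogram rule (predict the new vertex as a + b - c across the shared edge, store only the residual). The names and layout are illustrative, not candytron code:

```cpp
#include <cstdio>

struct V3 { float x, y, z; };

static V3 add(V3 a, V3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static V3 sub(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Parallelogram prediction: when a new triangle (a, b, v) is reached across
// the edge (a, b) of an already-decoded triangle (a, b, c), predict
//   v_pred = a + b - c
// and only store/transmit the (quantized) residual v - v_pred.
static V3 predictParallelogram(V3 a, V3 b, V3 c) {
    return sub(add(a, b), c);
}

int main() {
    // already-decoded triangle (a, b, c) and the actual new vertex v
    V3 a = { 0, 0, 0 }, b = { 1, 0, 0 }, c = { 0.5f, 1, 0 };
    V3 v = { 0.4f, -0.9f, 0.1f };

    V3 pred     = predictParallelogram(a, b, c);
    V3 residual = sub(v, pred);          // what an encoder would store
    V3 decoded  = add(pred, residual);   // what a decoder reconstructs

    printf("residual %.2f %.2f %.2f\n", residual.x, residual.y, residual.z);
    printf("decoded  %.2f %.2f %.2f\n", decoded.x, decoded.y, decoded.z);
    return 0;
}
```

A fitted linear predictor, as suggested in the second bullet, would simply replace the fixed (+1, +1, -1) weights with coefficients optimized per mesh and stored alongside the data.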
added on the 2007-10-22 19:09:39 by ryg
keops: ribbon
added on the 2007-10-22 19:29:32 by p01
ryg: 90 bones with 4 weights? don't you have a big artist-beating stick to sort out that kind of thing? :)
added on the 2007-10-23 10:47:20 by smash
:)
added on the 2007-10-23 12:29:58 by iq
in the interests of sharing and sharing alike, here's a bit about my experiences in mesh compression:

on dead ringer, meet the family and so on i did something rather naive compared to all this edgebreaker malarky. however, we got some excellent results.
1. we only store half the mesh, and mirror it on load. seeing as the character was modelled like this in the first place, it makes sense.
2. the vertices are quantised to 8 bits. we save x, y and z in separate streams because it helped the compression rates with kkrunchy. your mileage may vary.
3. all the geometry is triangulated. we simply couldn't get it all as quads for such low poly meshes - the modeller needed the freedom to do it how he wanted. the mesh is designed for subdivision, with creases in the right places. the dead ringer character was around 700 tris for a half; the meet the family monkey was just over 1000 for a half; the face in dead ringer was closer to 2000 for a half.
4. here's the strange part - we tristrip it, then just store vertex data, no indices. while that gives some vertex duplication, for a mesh that strips well it can be a lot smaller than storing the indices and the vertices. in theory, a perfect stripping mesh would give us one new vertex per triangle and no duplications, and index data would be redundant. of course that's not going to be the case in real life, but our data stripped pretty well. and the decompression code is almost nothing.
we got around 1-1.5k for the mesh data for the character in dead ringer. this was dwarfed by all the data for the logos and tags, which was more like 8k.
we weld the mesh on startup (see the welding sketch at the end of this post).

5. skinweights, normals, uvs (if necessary) and tangents etc are all generated in the tool. the skin weights are generated using distance to bone (with tweakable params per bone), generally hard-skinned to one bone per vertex for the sake of tweakability. that was good enough for lightwave for some years, and it turned out to be good enough for us.
the mesh is then subdivided and the weights are smoothed, which gives us some softness to the skinning. we use butterfly subdivision due to its small size and simplicity and because we're only dealing with tris. (i use catmull clark in our demos in order to match lightwave, make the subdivision work correctly with atlas uvs for lightmaps and so on, and also because we have to deal with arbitrary polygons.)
due to the rendering style we were able to get away with generating the uvs for the meshes in dead ringer and meet the family using simple mapping techniques. to be honest, uv mapping an arbitrary mesh is still the hardest thing i've found about 64k mesh generation, if you need actual good uvs for lightmapping, normal maps or similar. i tried a lot of methods, some home-rolled ones involving edge following and also the start of an implementation of lscm - which would be great, but pretty large code and very slow. in the end i just avoided the whole ugly issue as much as possible - simple uv mapping on arbitrary meshes, baking lighting to vertex colours and not textures, etc.
normals are generated using auto-generated smoothgroups using some region growing jobby. incidentally we used the same thing to split meshes into sections for uvmapping when we absolutely needed to uv map organic or complex meshes. (che for example rendered lightmaps to textures, so needed better uv mapping.)

6. our skeleton had around 16 bones, which is the perfect amount for vertex shader skinning on vs1.1. i go by the philosophy of avoiding locks if at all possible - dead ringer had approximately 0 dynamic vertex buffers.
7. we bake ao to the vertex colours during precalc, like most other meshes.
8. the "clever part" was the way the animation was handled in dead ringer. i started off with several (>10)mb of fk motion capture data, then made an auto process to convert it to ik targets. ik data is just massively smaller than fk for several reasons. for a start you dont need the whole hierarchy, just the leaf nodes. then if you think about fk, to keep a node globally stationary (e.g. a foot on the ground in a walk cycle) you need a lot of rotation data locally to keep it like that as the hierarchy above it moves. in ik you just have a static position (and some term to generate a rotation that keeps the foot flat when it's at 0 height :) ). after some curve fitting and quantisation, we got around 20-30 seconds of animation into something like 7k. in comparison, the single walk cycle in che guevara was a quantised fk animation and took > 4k.

if i had to deal with more general meshes, i probably would have put the effort into implementing some mesh compression papers. as it was, we had a scheme that worked well enough for the mesh we had, and it was very easy to implement and simple to decompress. the animation code was probably more worthwhile to spend the time and effort on.
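And a small sketch of the weld-on-startup step from point 4, assuming 8-bit quantized positions: duplicates introduced by the non-indexed strip are merged back into an indexed mesh with a map. Names are illustrative, not the actual tool code:

```cpp
#include <cstdint>
#include <cstdio>
#include <map>
#include <tuple>
#include <vector>

struct Vtx { uint8_t x, y, z; };   // 8-bit quantized position, as stored

// Rebuild an indexed mesh from a non-indexed vertex stream: every vertex that
// has been emitted before (same quantized position) maps to the existing
// index, so the duplicates introduced by stripping disappear.
static void weld(const std::vector<Vtx>& in,
                 std::vector<Vtx>& outVerts,
                 std::vector<uint16_t>& outIndices) {
    std::map<std::tuple<int, int, int>, uint16_t> seen;
    for (const Vtx& v : in) {
        auto key = std::make_tuple(v.x, v.y, v.z);
        auto it = seen.find(key);
        if (it == seen.end()) {
            it = seen.insert({ key, (uint16_t)outVerts.size() }).first;
            outVerts.push_back(v);
        }
        outIndices.push_back(it->second);
    }
}

int main() {
    // tiny strip-order stream with one duplicated position
    std::vector<Vtx> stream = { {0,0,0}, {1,0,0}, {0,1,0}, {1,0,0}, {1,1,0} };
    std::vector<Vtx> verts;
    std::vector<uint16_t> indices;
    weld(stream, verts, indices);
    printf("%zu unique vertices, %zu indices\n", verts.size(), indices.size());
    return 0;
}
```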
added on the 2007-10-23 13:15:12 by smash
Generating elements gives cool results, too.
added on the 2007-10-23 14:06:17 by L.C.F.
Anything more complex than a cube is wasted effort anyway.
added on the 2007-10-23 14:31:37 by doomdoom
smash: 90 bones/4 weights: that was the first time i ever worked with bones, and tbh i didn't have a clue whether the values were reasonable or not - i just assumed they were because gizmo told me "i need that much or it looks like crap". well :)

storing half+mirroring: used it for the geometry in candytron, but not the connectivity (because a lot of polygons cross the mirror axis - making that work would've probably been a lot of hassle, and more importantly it would've traded probably >100 bytes of code for maybe ~150 bytes of extra data, so who cares).

triangulate+stripify: tried that in an early attempt for candytron, but since we had a general polymesh, we needed to tag which edges were introduced by triangulation. together with the index data (didn't think about using a nonindexed strip+welding, good idea!) that was still pretty large.

bone weights: next time i'd probably use fitted spherical (elliptical?) weights to start with, then code the differences to the actual weights. minimally more code than using spherical weights directly and gives the artist the freedom to fix weights where they need it.

agree that uvs are hard. we mostly work around it by starting with simple meshes, mapping those properly, then freezing the uv coordinates and bending the mesh into shape afterwards (just look at the kkrieger level geometry, should be pretty obvious what i mean). we originally meant to use lightmaps in kkrieger, but ditched them for precalc time reasons, and that was at a point where i only did direct lighting (for testing purposes): it took a lot of time just to generate UVs (just grouping "nearly enough coplanar" faces+using a planar projection with the best-fit axis-aligned plane) and packing them into a texture. damn annoying stuff (especially since i had spent nearly two weeks fulltime on this when it became pretty clear that it wasn't going to work out for bigger rooms, not with the precalc time budget we had anyway).
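A tiny sketch of the planar-projection part described above (drop the dominant axis of the group normal and keep the other two coordinates as uv); this is just the generic technique, not the kkrieger tool code:

```cpp
#include <cmath>
#include <cstdio>

struct V3 { float x, y, z; };

// Project a point onto the axis-aligned plane that best matches a normal:
// pick the normal's dominant axis and use the remaining two coordinates as uv.
static void planarUV(V3 p, V3 n, float& u, float& v) {
    float ax = std::fabs(n.x), ay = std::fabs(n.y), az = std::fabs(n.z);
    if (ax >= ay && ax >= az) { u = p.y; v = p.z; }   // facing mostly +/-X
    else if (ay >= az)        { u = p.x; v = p.z; }   // facing mostly +/-Y
    else                      { u = p.x; v = p.y; }   // facing mostly +/-Z
}

int main() {
    float u, v;
    planarUV({ 1.0f, 2.0f, 3.0f }, { 0.1f, 0.9f, 0.2f }, u, v);
    printf("uv = %.2f %.2f\n", u, v);   // projected along Y
    return 0;
}
```

The expensive parts ryg mentions (grouping "nearly enough coplanar" faces and packing the charts into a texture) sit on top of this and are where the precalc time went.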

animation: candytron was 100% fk. we planned to do ik to at least get rid of the knee/elbow joints and some other easy targets, but we didn't have the time. kkrieger used ik and basically had a state machine that did the walk animations (did the legs directly, arms were hand-animated and the code kept them synchronized to the legs - simple but worked great). it didn't use bones though (didn't have the space). that definitely worked out nicely, especially since that one piece of code (maybe 2.5k or so) generated plausible walk cycles for 4 enemies with completely different anatomy (a biped, a quadruped, the three-legged walking-mushroom-thingies and the spiders).
added on the 2007-10-23 18:04:08 by ryg
In our case implementing the mirroring was about 50 bytes, and didn't introduce any extra vertex since the modelers were working in mirror mode anyway (195/95/156 and Paradise). In the 4k version the mirroring is 25 bytes or so if I remember well, but it was done with a hack: we directly duplicate all vertices and just adjust the index arrays, which means a few vertices end up unused (good enough for 4k).
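A sketch of that duplicate-everything mirroring hack, assuming the half-mesh lives on one side of the x=0 plane; a real 4k version would do this in far less code, and vertices on the plane simply end up duplicated:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

struct V3 { float x, y, z; };

// Mirror a half-mesh across the x = 0 plane the blunt way: append a mirrored
// copy of every vertex and a flipped-winding copy of every triangle. Vertices
// on the plane become co-located duplicates, which is acceptable when decoder
// size matters more than a handful of redundant vertices.
static void mirrorMesh(std::vector<V3>& verts, std::vector<uint16_t>& tris) {
    size_t vcount = verts.size();
    size_t icount = tris.size();
    for (size_t i = 0; i < vcount; ++i)
        verts.push_back({ -verts[i].x, verts[i].y, verts[i].z });
    for (size_t i = 0; i < icount; i += 3) {
        // reverse winding so the mirrored triangles still face outwards
        tris.push_back((uint16_t)(tris[i + 0] + vcount));
        tris.push_back((uint16_t)(tris[i + 2] + vcount));
        tris.push_back((uint16_t)(tris[i + 1] + vcount));
    }
}

int main() {
    std::vector<V3> verts = { {0,0,0}, {1,0,0}, {1,1,0} };
    std::vector<uint16_t> tris = { 0, 1, 2 };
    mirrorMesh(verts, tris);
    printf("%zu vertices, %zu indices\n", verts.size(), tris.size());
    return 0;
}
```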

For bone weights, for now I have only used distance to the bone (point-segment distance) and then apply a gaussian falloff (not even per-bone decay control... maybe in the future); it seems to work well enough for human characters.
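A minimal sketch of that point-segment distance plus gaussian falloff weighting, with sigma standing in for the missing per-bone decay control; the weights would still be normalized over all bones per vertex afterwards:

```cpp
#include <cmath>
#include <cstdio>

struct V3 { float x, y, z; };

static V3 sub(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance from point p to the segment [a, b] (a bone).
static float pointSegmentDist(V3 p, V3 a, V3 b) {
    V3 ab = sub(b, a), ap = sub(p, a);
    float t = dot(ap, ab) / dot(ab, ab);
    t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);   // clamp to the segment
    V3 closest = { a.x + ab.x * t, a.y + ab.y * t, a.z + ab.z * t };
    V3 d = sub(p, closest);
    return std::sqrt(dot(d, d));
}

// Gaussian falloff turns that distance into a skinning weight; sigma is the
// (possibly per-bone) decay control.
static float boneWeight(V3 p, V3 boneStart, V3 boneEnd, float sigma) {
    float d = pointSegmentDist(p, boneStart, boneEnd);
    return std::exp(-(d * d) / (2.0f * sigma * sigma));
}

int main() {
    V3 vertex = { 0.3f, 1.2f, 0.0f };
    float w = boneWeight(vertex, { 0, 0, 0 }, { 0, 1, 0 }, 0.5f);
    printf("weight = %.3f\n", w);
    return 0;
}
```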

For animation, fk is a pain, especially for procedural animation. For compressed curves... ryg, I guess you can use more aggressive quantization as you move down the bone hierarchy... any experience with that?
added on the 2007-10-23 19:06:56 by iq
candytron just quantized based on the variance of the curves. most curves were quantized to 2 or 3 bits/key. doesn't get much lower than that in practice :)
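The post doesn't spell out the scheme, so the following is only one plausible reading of "quantized based on the variance of the curves": derive a per-curve bit depth from how much the curve actually moves, then quantize its keys uniformly over that range:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Pick a bit depth per curve from its range and a target step size, then
// quantize all keys uniformly within [min, max]. This is only a guess at the
// kind of scheme meant by "quantized based on the variance of the curves".
static std::vector<int> quantizeCurve(const std::vector<float>& keys,
                                      float targetStep, int& bitsOut,
                                      float& minOut, float& maxOut) {
    minOut = *std::min_element(keys.begin(), keys.end());
    maxOut = *std::max_element(keys.begin(), keys.end());
    float range = maxOut - minOut;

    int levels = (int)std::ceil(range / targetStep) + 1;
    bitsOut = std::max(1, (int)std::ceil(std::log2((double)levels)));
    bitsOut = std::min(bitsOut, 8);

    int maxCode = (1 << bitsOut) - 1;
    std::vector<int> codes;
    for (float k : keys) {
        float t = range > 0.0f ? (k - minOut) / range : 0.0f;
        codes.push_back((int)std::lround(t * maxCode));
    }
    return codes;
}

int main() {
    std::vector<float> keys = { 0.0f, 0.05f, 0.12f, 0.10f, 0.02f };
    int bits; float mn, mx;
    std::vector<int> codes = quantizeCurve(keys, 0.03f, bits, mn, mx);
    printf("%d bits/key, range [%.2f, %.2f]\n", bits, mn, mx);
    return 0;
}
```

Curves that barely move end up at 1-2 bits per key, which lines up with the 2-3 bits/key figure quoted above.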
added on the 2007-10-23 19:40:47 by ryg
Here is the video of the seminar that 'iq' gave at bcnparty 111 (2007).

http://www.scenesp.org/ftp/2007/bcn111/videos/talks/bcnparty111_events_-_mesh_codification_for_4k_intros_by_iq.avi

enjoy!
added on the 2007-11-07 16:34:09 by xphere
cool
added on the 2007-11-07 19:06:33 by psenough

Really interesting thread, this one. So here is how we did it in Benitoite:

The connectivity is compressed using EdgeBreaker encoding, but with a twist: Whenever we encode an S (split) symbol, we additionally store which of the known vertices the S triangle hits. While this information is redundant, it allows for a much simpler implementation. With this information, the decompressor always knows which vertex is referred to, which means we can do without the whole backtracking/zipping part of the corner table algorithm. The connectivity decompressor is around 100 bytes.

The vertex positions are quantized to fit within one byte, and then the values are delta coded per coordinate. The vertex ordering imposed by EdgeBreaker places adjacent vertices together, so delta coding based on this ordering makes sense. We also experimented with parallelogram encoding, but we found that simple delta coding gave better results with the mesh we had.
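A minimal sketch of the quantize-then-delta-code step for one coordinate stream (the real decoder works in the vertex order EdgeBreaker produces; here it is just an array):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Per-coordinate delta coding of 8-bit quantized positions: store the first
// value, then only the (wrapping) difference to the previous one. A packer
// like Crinkler/kkrunchy then sees lots of small, similar byte values.
static std::vector<uint8_t> deltaEncode(const std::vector<uint8_t>& vals) {
    std::vector<uint8_t> out;
    uint8_t prev = 0;
    for (uint8_t v : vals) {
        out.push_back((uint8_t)(v - prev));   // wraps mod 256, which is fine
        prev = v;
    }
    return out;
}

static std::vector<uint8_t> deltaDecode(const std::vector<uint8_t>& deltas) {
    std::vector<uint8_t> out;
    uint8_t acc = 0;
    for (uint8_t d : deltas) {
        acc = (uint8_t)(acc + d);
        out.push_back(acc);
    }
    return out;
}

int main() {
    // one coordinate stream (e.g. all x values in EdgeBreaker vertex order)
    std::vector<uint8_t> xs = { 120, 122, 125, 124, 130 };
    std::vector<uint8_t> enc = deltaEncode(xs);
    std::vector<uint8_t> dec = deltaDecode(enc);
    printf("roundtrip ok: %d\n", (int)(dec == xs));
    return 0;
}
```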

The subdivision method was the simplest we could think of. First the mesh is subdivided using D3DXTessellateNPatches and stitched together using D3DXWeldVertices, and then the resulting mesh is smoothed many times (25 times to be exact). This technique is probably only useful for very smooth meshes.
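The post doesn't say which smoothing was used, so here is a sketch assuming the simplest candidate, repeated Laplacian smoothing over a vertex adjacency list (illustrative only):

```cpp
#include <cstdio>
#include <vector>

struct V3 { float x, y, z; };

// One pass of Laplacian smoothing: move every vertex part of the way towards
// the average of its neighbours. Repeating this many times (e.g. 25) gives a
// very smooth result, which only suits meshes that are meant to be smooth.
static void smoothOnce(std::vector<V3>& verts,
                       const std::vector<std::vector<int>>& neighbours,
                       float amount) {
    std::vector<V3> next = verts;
    for (size_t i = 0; i < verts.size(); ++i) {
        const std::vector<int>& n = neighbours[i];
        if (n.empty()) continue;
        V3 avg = { 0, 0, 0 };
        for (int j : n) { avg.x += verts[j].x; avg.y += verts[j].y; avg.z += verts[j].z; }
        float inv = 1.0f / (float)n.size();
        avg.x *= inv; avg.y *= inv; avg.z *= inv;
        next[i].x += (avg.x - verts[i].x) * amount;
        next[i].y += (avg.y - verts[i].y) * amount;
        next[i].z += (avg.z - verts[i].z) * amount;
    }
    verts = next;
}

int main() {
    // a degenerate 3-vertex "mesh", just to show the call pattern
    std::vector<V3> verts = { {0,0,0}, {1,0,0}, {0,1,0} };
    std::vector<std::vector<int>> neighbours = { {1,2}, {0,2}, {0,1} };
    for (int i = 0; i < 25; ++i) smoothOnce(verts, neighbours, 0.5f);
    printf("v0 = %.3f %.3f %.3f\n", verts[0].x, verts[0].y, verts[0].z);
    return 0;
}
```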

Texture coordinates were not an issue, as we map the model using 3D textures. The normal vectors are computed using D3DXComputeNormals.

The model in Benitoite consists of 150 vertices and 296 triangles. The connectivity data (the clers symbols) compressed to around 60 bytes (about 2 bits per triangle), while the vertex position data was quite a lot bigger - around 300 bytes.
added on the 2007-11-17 21:28:39 by Blueberry
