Low poly image effect / could this be done in realtime?
category: code [glöplog]
Hello Internet,
For some VJing experiments I would love to use an effect like this one. Obviously it should be a post-effect that runs automatically in realtime and yet returns pleasing results. A resolution parameter, e.g. the number of triangles, would be great.
Having the resulting set of triangles with vertex colors would be ideal, because that might come in handy for transitions.
I have seen a couple of approaches using genetic algorithms but maybe there are simpler approaches.
A semi-automatic pre-computing approach would be okay too.
Do you have any pointers or ideas?
Initial guess on approaching something like this at a reasonable framerate:
Blur image
Compare with the original; a large delta means more points are needed, so generate a sort of bias map
Scatter points, perhaps randomize for each pixel & compare with bias map + global bias to determine how many points there will be
Add points on corners
Compute delaunay triangulation
Sample colors (of blurred image?) at each vertex & average
disclaimer: I have no idea what I'm doing
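The steps above could be sketched roughly like this in Python (numpy + scipy assumed; the function name, parameter names, and default values are my own guesses, not anything from the thread):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.spatial import Delaunay

def lowpoly_points(img, n_target=500, sigma=4.0, rng=None):
    """Scatter triangle vertices where a blurred copy differs most from the
    original image, then triangulate. img: float grayscale array in [0, 1].
    sigma and n_target are arbitrary knobs; tune to taste."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = gaussian_filter(img, sigma)
    bias = np.abs(img - blurred)            # large delta -> more detail wanted
    bias = bias / (bias.sum() + 1e-12)      # normalise to a probability map

    # Draw point locations proportionally to the bias map.
    h, w = img.shape
    flat_idx = rng.choice(h * w, size=n_target, replace=False, p=bias.ravel())
    ys, xs = np.unravel_index(flat_idx, (h, w))
    pts = np.column_stack([xs, ys]).astype(float)

    # Pin the image corners so the triangulation covers the whole frame.
    corners = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]], float)
    pts = np.vstack([pts, corners])

    tri = Delaunay(pts)
    # Per-vertex colour sampled from the blurred image, as suggested above.
    colors = blurred[pts[:, 1].astype(int), pts[:, 0].astype(int)]
    return pts, tri.simplices, colors
```

The per-vertex colors could then be averaged per triangle, or interpolated for the transition idea mentioned earlier.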
Likewise no idea what I'm doing, but I would go with a local entropy metric, and use its gradient to move vertices around (maybe counteracted by a repulsive force between vertices). That way, the tessellation should be temporally stable.
Not sure how much parameter fudging that would require to prevent vertices simply clumping in the wrong areas.
Also, I believe a transmission from planet kewltron kinda does what you are proposing, doesn't it?
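A minimal sketch of that gradient-plus-repulsion relaxation, assuming numpy; the "detail" map could be local entropy, edge magnitude, or anything else, and step/repel/iters are invented knobs:

```python
import numpy as np

def relax_vertices(pts, detail, step=1.0, repel=5.0, iters=10):
    """Move vertices up the gradient of a per-pixel 'detail' map while a
    short-range pairwise repulsion keeps them from clumping. Purely a
    guess at the idea above."""
    h, w = detail.shape
    gy, gx = np.gradient(detail)
    pts = pts.astype(float).copy()
    for _ in range(iters):
        xi = np.clip(pts[:, 0].astype(int), 0, w - 1)
        yi = np.clip(pts[:, 1].astype(int), 0, h - 1)
        force = np.column_stack([gx[yi, xi], gy[yi, xi]])  # attraction to detail

        # Pairwise repulsion (O(n^2); fine for a few hundred vertices).
        diff = pts[:, None, :] - pts[None, :, :]
        d2 = (diff ** 2).sum(-1) + 1e-9
        np.fill_diagonal(d2, np.inf)
        force += repel * (diff / d2[..., None]).sum(axis=1)

        pts += step * force
        pts[:, 0] = np.clip(pts[:, 0], 0, w - 1)
        pts[:, 1] = np.clip(pts[:, 1], 0, h - 1)
    return pts
```

The balance between the two forces is exactly the "parameter fudging" worry: too much attraction and vertices still clump, too much repulsion and they ignore the image.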
Maybe generate a random set of points with ~even spacing and create a voronoi diagram from those?
Yeah. Maybe apply edge-detection filter (sobel?) to the image to find edges, downsample to find areas with high entropy and tesselate based on that.
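A tiny sketch of that Sobel-then-downsample idea (scipy assumed; the cell size is an arbitrary knob, and box-averaging stands in for the entropy estimate):

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def edge_density(img, cell=8):
    """Sobel edge magnitude, box-averaged and subsampled to a coarse grid
    of per-cell densities to drive the tessellation."""
    mag = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    smooth = uniform_filter(mag, size=cell)   # local average of edge energy
    return smooth[::cell, ::cell]             # one density value per cell
```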
Step 1: Render the scene normally, and single-textured with a "density map" (bright values want higher vertex density, darker values want lower vertex density), producing a screen-space density map.
Step 2: Pull vertices from a fixed grid towards areas that want higher densities, along the gradient of the screen-space density map. Restrict each vertex from moving too far away from its original home so you don't get a totally degenerate solution. Probably do something to prevent crossing edges also, dunno.
Step 3: Perform one iteration of iterative delanuay triangulation.
Step 4: Render, and sample the normal render at the vertex locations, but shifted slightly towards the triangle center.
Step 5: Win Assembly, share prize money with me.
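Step 2 of this pipeline might look roughly like the following numpy sketch; gain and max_pull are invented knobs, and the clamp on displacement length is the "don't stray from home" restriction (edge-crossing prevention is left out):

```python
import numpy as np

def pull_grid(density, grid_step=16, max_pull=6.0, gain=50.0):
    """Start from a fixed vertex grid and pull each vertex along the
    gradient of a screen-space density map, clamping the displacement so
    no vertex strays far from its home position."""
    h, w = density.shape
    gy, gx = np.gradient(density)
    ys, xs = np.mgrid[0:h:grid_step, 0:w:grid_step]
    home = np.stack([xs, ys], axis=-1).astype(float).reshape(-1, 2)

    xi = home[:, 0].astype(int)
    yi = home[:, 1].astype(int)
    pull = gain * np.column_stack([gx[yi, xi], gy[yi, xi]])

    # Clamp the displacement length to max_pull.
    norm = np.linalg.norm(pull, axis=1, keepdims=True)
    pull = np.where(norm > max_pull, pull * (max_pull / (norm + 1e-12)), pull)
    return home + pull
```

The displaced grid would then feed the Delaunay step; iterating the pull over a few frames would make it converge more smoothly.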
Primary problem that I see is that we care about some things (like eyes) more than others, and that kind of thing is sorta difficult for a computer to figure out.
My commission rate is 75% of the winnings, by the way.
@sol: Thus the density map. It allows the artist to pick what areas are more important.
There should probably also be a step that pushes nearby vertices apart a bit, by the way. There, now my commission rate is 85% of the winnings.
And if you don't give a crap about detail / density control then precalced voronoi works like a charm. So simple that even AMIGAAAH can do it.
I've got good looking results only from the genetic algo ones, so if there is a solution to a realtime version I'd be happy to know about it as well!
it's definitely a realtime effect (actually we did something similar in evolution of vision), and for those points you could use "interesting features to track" from opencv. They will give you exactly what you need.
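In practice you would just call OpenCV's goodFeaturesToTrack, which is based on the Shi–Tomasi min-eigenvalue corner measure; purely for illustration, a dependency-free numpy sketch of that underlying response (sigma is an arbitrary smoothing knob):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def shi_tomasi_response(img, sigma=1.5):
    """Min-eigenvalue corner response over the smoothed structure tensor,
    the measure behind 'good features to track'. Pick the strongest local
    maxima of this map as triangulation points."""
    ix = sobel(img, axis=1)
    iy = sobel(img, axis=0)
    # Structure-tensor entries, smoothed over a window.
    sxx = gaussian_filter(ix * ix, sigma)
    syy = gaussian_filter(iy * iy, sigma)
    sxy = gaussian_filter(ix * iy, sigma)
    # Smaller eigenvalue of [[sxx, sxy], [sxy, syy]].
    tr = sxx + syy
    det = sxx * syy - sxy * sxy
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    return tr / 2 - disc
```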
We do this effect on video in realtime by:
- computing good points on corners, tracking the points
- performing delauney triangulation from the points (this part is quite difficult to do efficiently with thousands of points on gpu)
- computing colours either from the points or averaging under the triangle
all on the gpu in compute of course. it looks cool! it even makes an appearance in hold and modify: https://youtu.be/EOiFWSHPFrk?t=96
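The "averaging under the triangle" option could be sketched on the CPU like this (brute force over a point-in-triangle mask; a GPU compute version would instead accumulate per-triangle sums with atomics — this is just to show the idea, not the demo's actual code):

```python
import numpy as np

def triangle_mean_colors(img, pts, tris):
    """Average the image under each triangle. pts is (n, 2) as (x, y),
    tris is (m, 3) vertex indices. Works for grayscale or color images."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    out = np.zeros((len(tris),) + img.shape[2:])
    for t, (a, b, c) in enumerate(tris):
        pa, pb, pc = pts[a], pts[b], pts[c]
        # Sign of the cross product against each edge: inside if all agree.
        def side(p, q):
            return (q[0] - p[0]) * (yy - p[1]) - (q[1] - p[1]) * (xx - p[0])
        s1, s2, s3 = side(pa, pb), side(pb, pc), side(pc, pa)
        mask = (((s1 >= 0) & (s2 >= 0) & (s3 >= 0)) |
                ((s1 <= 0) & (s2 <= 0) & (s3 <= 0)))
        if mask.any():
            out[t] = img[mask].mean(axis=0)
    return out
```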
first: what kusma said (needs to take human cognition into account).
second: as with all effects that apply some style in post to the pixel data (pencil lines, oil painting style, etc): a still image may look really good, but when in motion, everything will jump and jitter all over the place. you need coherence between frames. restricting vertex movement and color change will not be enough, because sometimes there will be fast movement or strobing lights or foreground parallax etc. So you need to track features. Either like Navis suggests (but I point out that Evolution Of Vision constantly jitters, probably because they just embraced the artifact. As does A Transmission From Planet Kewltron.) - or, if you are rendering 3d content live, with a velocity map similar to what you would use for motion blur. The vertices absolutely have to move with the content, otherwise you'll just have really big triangular pixels.
Instead of kusma's grid approach maybe subdividing triangles might be nice? You'd have to split/merge triangles when more/less detail is needed in the image or where vertices have become squished/stretched too much. And fade in the splitting/merging either geometrically or by color.
It's not trivial to make something like this look good :)
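The velocity-map idea above, in its simplest possible form: advect each vertex by the per-pixel motion vector under it. The velocity map would come from the renderer's motion vectors or from optical flow on video; this sketch just assumes it exists as an (h, w, 2) array:

```python
import numpy as np

def advect_vertices(pts, velocity, dt=1.0):
    """Move triangulation vertices with the content by sampling a
    per-pixel velocity map. velocity: (h, w, 2) array of (vx, vy) in
    pixels per frame. Nearest-pixel sampling for simplicity."""
    pts = np.asarray(pts, dtype=float)
    h, w = velocity.shape[:2]
    xi = np.clip(pts[:, 0].astype(int), 0, w - 1)
    yi = np.clip(pts[:, 1].astype(int), 0, h - 1)
    moved = pts + dt * velocity[yi, xi]
    moved[:, 0] = np.clip(moved[:, 0], 0, w - 1)
    moved[:, 1] = np.clip(moved[:, 1], 0, h - 1)
    return moved
```

On top of this you would still need the split/merge handling for squished or stretched triangles, which is where it stops being trivial.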
This thread is already my favorite just by seeing how many ways people can misspell "Delaunay".
Wow thanks for all the replies. Seems like I picked a rather complicated topic.
@kusma: Super nice description. You can keep the 85% for the purple-demo once it's done and won at Revision.
@navis: Tracking the eyes/faces with openCV is a super interesting idea!
@smash: I forgot about HAM, even though that scene was one of my favorites. Maybe I was subconsciously digesting it and now came up with the desire to do something like this. With the additional distortion it looks soo cool. As usual, I'm quite jealous.
@cupe: I think that "Transmission from..." looks awesome already. But you're right: stable transitions between different frames and temporal coherency would bring this to another level. Using the velocity vectors of the scene would probably make quite a difference as well. Not really sure how to pull this off, though.
@gargaj: hadn't even heard of dilowny before reading this thread :-)
well, the title of the article you referred to starts with "how to..", so read the article and you'll learn how to do it ;-)
afaik the vectorization in Transmission and OneFinger was precalced from video, not a post-pro.
Gargaj: the guy was called 'Delone' (Boris Nikolaevich Delone), the Russian mathematician who published in French under the name Delaunay. So did Voronoy (Ukrainian), aka Voronoi in French or Woronoj in Polish. Interestingly enough, Voronoi was a teacher of Delaunay (and Sierpinski). But let's just say the only 'correct' form is the one in Cyrillic letters ;-) Other forms are just various transliterations.
And for the topic now;) Those corner/edge features are extracted exactly for (relatively) stable tracking over time. SIFT is the keyword.
@tomkh: "Boris Delone got his surname from his ancestor French Army officer De Launay, who was captured in Russia during Napoleon's invasion of 1812." hihi :)
_-_-__: Apparently, you are also wrong, lol. According to a remark by John Conway, Delone got his surname from an Irish ancestor, Deloney, who happened to be known in the Napoleonic army as Delaunay. Now, I am not really sure if this is John Conway trolling (could he do that?) or the actual history, but nevertheless it's funny ;-)
then everyone's correct and that's cool :)
Apparently his first name, Boris, is a Cyrillic interpretation of the Irish "boring", right after Napoleon read the surname discussion in this thread during his exile on St. Helena. Napoleon spoke fluent French.