Polygons from intersecting line arrays
category: code [glöplog]
So i wrote this effect based on my abstract line drawing style, where intersecting lines can go above or below each other. You can see a wip here:
https://psenough.github.io/line_algo_test/
It's still very raw compared to what i have in mind, still just tests and additional tweaks to make it more interesting, but one of the next steps i've been trying to wrap my head around, and failing miserably to find a fast way to accomplish, is the following:
Having these arrays of intersecting lines, how do i obtain the polygon areas they are bounding? In realtime. The point is to be able to draw a different texture on each of the bounded spaces as they get segmented or broken up.
In software i would just do a floodfill algorithm / fullscreen pass and go through each pixel:
- if it's not green, skip it
- if it's green, floodfill that whole region with a new color index, keeping track of how far right the floodfill went on each scanline so those pixels can be skipped
Then use that buffer of color indexes to do more interesting stuff in another pass. Or it might even be possible to do the interesting stuff on a secondary buffer while running the floodfill loop.
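For reference, a minimal sketch of that CPU-side labeling pass, assuming the frame is available as an RGBA buffer from getImageData(); the function name and the isGreen() test are just placeholders for whatever marks the fillable background:

```javascript
// Label each connected "green" region with its own index using an
// iterative flood fill. `pixels` is the RGBA Uint8ClampedArray from
// getImageData(); isGreen(pixels, i) is a placeholder that inspects the
// four bytes at offset 4 * i.
function labelRegions(pixels, width, height, isGreen) {
  const labels = new Int32Array(width * height).fill(-1);
  let nextLabel = 0;

  for (let start = 0; start < width * height; start++) {
    if (labels[start] !== -1 || !isGreen(pixels, start)) continue;

    // Flood fill this region with an explicit stack (no recursion).
    const stack = [start];
    labels[start] = nextLabel;
    while (stack.length) {
      const p = stack.pop();
      const x = p % width, y = (p / width) | 0;
      const neighbours = [
        x > 0          ? p - 1     : -1,
        x < width - 1  ? p + 1     : -1,
        y > 0          ? p - width : -1,
        y < height - 1 ? p + width : -1,
      ];
      for (const n of neighbours) {
        if (n >= 0 && labels[n] === -1 && isGreen(pixels, n)) {
          labels[n] = nextLabel;
          stack.push(n);
        }
      }
    }
    nextLabel++;
  }
  return labels; // one region index per pixel, -1 on the lines themselves
}
```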
I figured this must be just as easy to do on the GPU as well, but as it turns out i'm just fucking clueless and it's not trivial at all: it seems i can't pass state information from pixel to pixel, and any for loop that walks the entire texture just breaks/slows things down, because that's not how GPUs are designed to work.
Since i'm doing this in the browser i can't really do realtime floodfills at HD resolutions without the framerate dropping like hell.
So i guess i'm down to noting down all the points where the line arrays intersect each other and then traversing all the lines to find the minimum-length closed polygons, triangulating them and sending them to the GPU. Are there known algorithms for accomplishing this? array of intersecting lines => polygons
Or can anyone think of any smarter way to extract these sections using the GPU?
I'm a bit low on CPU (doing all the line calc is already heavy enough), so pushing any of this new stuff to the GPU would really be nice.
I've considered using asm.js to do the floodfill stuff as well, but i haven't done any work with asm.js yet, so i'm a bit reluctant to open that can of worms right now.
Opinions?
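For the "array of intersecting lines => polygons" part: the usual route is to split every line at every crossing, which turns the drawing into a planar graph, and then walk that graph face by face (at each node, sort the outgoing edges by angle and always take the next-most-clockwise one; this is essentially a doubly connected edge list). A rough sketch of the segment-segment intersection test that feeds it, with names of my own:

```javascript
// Intersection point of segments p1-p2 and p3-p4, or null if they don't
// cross. Standard parametric test; both t and u must lie in [0, 1].
function segmentIntersection(p1, p2, p3, p4) {
  const d1x = p2.x - p1.x, d1y = p2.y - p1.y;
  const d2x = p4.x - p3.x, d2y = p4.y - p3.y;
  const denom = d1x * d2y - d1y * d2x;
  if (Math.abs(denom) < 1e-9) return null;           // parallel / collinear

  const t = ((p3.x - p1.x) * d2y - (p3.y - p1.y) * d2x) / denom;
  const u = ((p3.x - p1.x) * d1y - (p3.y - p1.y) * d1x) / denom;
  if (t < 0 || t > 1 || u < 0 || u > 1) return null; // crossing is off-segment

  return { x: p1.x + t * d1x, y: p1.y + t * d1y, t, u };
}
```

Each non-null hit becomes a graph node, and the t/u values say where to cut the two segments before the face walk.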
are these lines made out of small segments?
yes
Delaunay triangulation? In general, it sounds like you're trying to solve your problem "in post", when a drawing method / data structure that facilitates your needs in the first place would be the better approach.
or if your lines are not moving, precalc those polygons like a boss :)
tomaes: my next attempt will be revisiting the data structure, yes.
theyeti: the lines are procedural and i want to add some realtime interactivity to them at some point, so precalc is a no-go.
Rather than tracking the lines, could you track the polygons between the lines? E.g., the lines coming in from the top left: each pair of lines defines a triangle strip between them. When you compute the coordinates, keep track of which polys they belong to. Then you can change gl.LINES / gl.drawArrays to gl.TRIANGLE_STRIP / gl.drawElements. E.g., use two triangles for each n pixels in the direction of motion, or something like that.
You won't have to change the coordinate calculation, just change what you do with the result. Since you'll already be tracking polys, you'll be able to track which textures go with each one. (Says me as if I actually know something ;) )
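A rough sketch of that ribbon idea, assuming two adjacent lines sampled with the same number of points (the function name and layout are invented, not taken from the demo):

```javascript
// Interleave two adjacent polylines into one vertex/index pair for
// gl.TRIANGLE_STRIP. `lineA` and `lineB` are flat [x0, y0, x1, y1, ...]
// arrays with the same number of points.
function stripBetween(lineA, lineB) {
  const n = lineA.length / 2;
  const vertices = new Float32Array(n * 4);
  const indices = new Uint16Array(n * 2);
  for (let i = 0; i < n; i++) {
    vertices.set([lineA[2 * i], lineA[2 * i + 1]], i * 4);     // point i on line A
    vertices.set([lineB[2 * i], lineB[2 * i + 1]], i * 4 + 2); // matching point on line B
    indices[2 * i] = 2 * i;         // A_i
    indices[2 * i + 1] = 2 * i + 1; // B_i
  }
  return { vertices, indices };
}

// Then, roughly:
//   gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.DYNAMIC_DRAW);
//   gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.DYNAMIC_DRAW);
//   gl.drawElements(gl.TRIANGLE_STRIP, indices.length, gl.UNSIGNED_SHORT, 0);
```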
Might also be easier if you used long, straight lines, then added the wiggles in the fragment shader.
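Something like this, maybe: draw each line as a thin screen-space quad and let the fragment shader pick out the pixels of a sine-perturbed centreline. The varying names, amplitudes and colour below are invented for the sketch:

```javascript
// Hypothetical WebGL1 fragment shader for a quad that is wide enough to
// contain the wiggle. vAcross runs -1..1 across the quad, vAlong 0..1
// along it.
const wiggleFragmentShader = `
precision mediump float;
varying float vAcross;
varying float vAlong;
uniform float uTime;

void main() {
  // Perturb the centreline with a travelling sine wave.
  float centre = 0.35 * sin(vAlong * 20.0 + uTime);
  float d = abs(vAcross - centre);
  // Hard cut-off for a crisp line; smoothstep() would give soft edges.
  if (d > 0.08) discard;
  gl_FragColor = vec4(0.2, 0.9, 0.4, 1.0);
}
`;
```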
Just use a grid of adjacent rectangles, subdivide the edges and permute & transform your nodes. Since the fill-space is known and cells with the same fill color can easily be matched & joined (thanks to your grid-cell data), this should work fine. ;)
And as far as shaders go: You probably want to do this with a Geometry shader or Vertex shader, not (primarily) with a fragment shader.
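One way to read that suggestion, as a sketch with invented names and parameters: keep a regular grid of nodes, jitter/transform the interior ones, and keep every cell as an explicit quad, so the regions are known by construction rather than extracted afterwards:

```javascript
// Build a jittered grid whose cells stay trackable. Nodes are in
// normalised 0..1 coordinates; border nodes are left untouched so the
// grid still covers the whole screen.
function buildGrid(cols, rows, jitter) {
  const nodes = [];
  for (let y = 0; y <= rows; y++) {
    for (let x = 0; x <= cols; x++) {
      const onEdge = x === 0 || y === 0 || x === cols || y === rows;
      nodes.push({
        x: x / cols + (onEdge ? 0 : (Math.random() - 0.5) * jitter),
        y: y / rows + (onEdge ? 0 : (Math.random() - 0.5) * jitter),
      });
    }
  }
  const cells = [];
  for (let y = 0; y < rows; y++) {
    for (let x = 0; x < cols; x++) {
      const i = y * (cols + 1) + x;
      // Corner indices of this cell, in order around the quad.
      cells.push([i, i + 1, i + cols + 2, i + cols + 1]);
    }
  }
  return { nodes, cells }; // each cell keeps its own texture/colour id
}
```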
with those types of lines you risk self-intersecting polygons if you try it with a grid
If I understand correctly, what you're looking for is a GPU polyfiller given the (oriented) boundary edges? This sounds a lot like font rendering. Modern solution: Draw the winding number of each boundary edge into a temporary float buffer, then do parallel prefix sum horizontally.
See e.g. http://pcwalton.github.io/blog/2017/02/14/pathfinder/.
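Roughly the arithmetic behind that idea, written out as a CPU reference (my own simplification, not Pathfinder's actual code): each boundary edge drops a signed delta where it crosses a scanline, and a horizontal prefix sum turns those deltas into per-pixel winding numbers. On the GPU, the deltas would be splatted with additive blending and the sum done in a second pass.

```javascript
// `edges` are directed boundary segments {x0, y0, x1, y1}; the result
// holds the winding number of every pixel (non-zero = inside a region).
function windingBuffer(edges, width, height) {
  const buf = new Float32Array(width * height);

  // Pass 1: each edge adds +1/-1 at the pixel where it crosses a scanline.
  for (const e of edges) {
    const dir = Math.sign(e.y1 - e.y0);
    if (dir === 0) continue;                 // horizontal edges contribute nothing
    const yMin = Math.min(e.y0, e.y1), yMax = Math.max(e.y0, e.y1);
    for (let y = Math.max(0, Math.ceil(yMin)); y < Math.min(height, yMax); y++) {
      const t = (y - e.y0) / (e.y1 - e.y0);
      const x = Math.round(e.x0 + t * (e.x1 - e.x0));
      if (x >= 0 && x < width) buf[y * width + x] += dir;
    }
  }

  // Pass 2: horizontal prefix sum turns the deltas into winding numbers.
  for (let y = 0; y < height; y++) {
    let acc = 0;
    for (let x = 0; x < width; x++) {
      acc += buf[y * width + x];
      buf[y * width + x] = acc;
    }
  }
  return buf;
}
```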
Quote:
with those types of lines you risk self-intersecting polygons if you try it with a grid
Err, no. :) You start with adjacent rectangles that form a grid. "Those types of lines" don't come into it. You build inter-cell triangle meshes from edge-divided rectangles. There are some obvious things you have to pay attention to, but the cells don't need to be convex, for example. Again, this is about framing the problem differently, so you get a similar result with a simpler solution.
Quote:
If I understand correctly, what you're looking for is a GPU polyfiller given the (oriented) boundary edges?
I read it as: it's not about poly-filling as such, but about extracting the polygons. The filling part would then be "free" ;)
Quote:
I read it as: it's not about poly-filling as such, but about extracting the polygons. The filling part would then be "free" ;)
yeah, this is a more correct description.
if i already had the boundary edges, the triangulation would be trivial :/
pathfinder might still be useful though, i'm gonna look into it.
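For the triangulation step once the boundary rings exist, a ready-made ear-clipping library such as mapbox's earcut keeps it to a couple of lines (the coordinates below are just made up):

```javascript
import earcut from 'earcut';

// One extracted polygon as a flat [x, y, x, y, ...] ring (made-up values).
const ring = [0, 0, 120, 10, 140, 90, 60, 130, 5, 80];

// earcut returns flat triangle indices into the ring, three per triangle,
// ready to upload for gl.drawElements(gl.TRIANGLES, ...).
const triangleIndices = earcut(ring);
```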
Quote:
Might also be easier if you used long, straight lines, then added the wiggles in the fragment shader.
i considered this, but the premise is to have them follow different dynamic curvatures, which doesn't really come across much in the example. and since i have weird transparency settings, i wasn't sure how to keep track of those states on the vertex shader side of things.
Quote:
Just use a grid of adjacent rectangles, subdivide the edges and permute & transform your nodes. Since the fill-space is known and cells with the same fill color can easily be matched & joined (thanks to your grid-cell data), this should work fine. ;)
And as far as shaders go: You probably want to do this with a Geometry shader or Vertex shader, not (primarily) with a fragment shader.
not sure i'm understanding what you're describing, i'll have to read it carefully a few more times. but i'm pretty sure i don't have access to geometry shaders in webgl. o_O