particles & DOF
category: code [glöplog]
bonzaj - yea, that's an inherent problem, it sounded to me like you overcame that by using an offscreen buffer though?
Some corporate bastard did this presentation, mentioning depth-masking to reduce the performance impact of explicitly checking every depth-sample.
And just to complete the link-bonanza, this article, while probably not applying directly to your problem, describes one of the most used dof-methods on current consoles.
hornet :)
the problem that's been bugging me is how to properly use alpha stuff with solid stuff and apply post processing dof on all of it together.
post processing dof (done right) works by checking every pixel in a given max radius and determining its weight against the centre pixel based on their depths, effectively emulating a scattering blur using gather.
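roughly, that gather loop looks like this. a minimal cpu-side c++ sketch, with made-up names and a placeholder coc model; the real thing lives in a pixel/compute shader:
[code]
#include <algorithm>
#include <cmath>

// hypothetical framebuffer layout: colour plus linear depth per pixel.
struct Pixel { float r, g, b, depth; };

// placeholder circle-of-confusion model, not real lens maths.
static float coc(float depth, float focusDepth, float focusRange) {
    return std::min(std::abs(depth - focusDepth) / focusRange, 1.0f);
}

// gather-based dof for one pixel: visit every sample inside the max
// radius and weight it against the centre pixel based on depth, so a
// blurry sample "scatters" onto its neighbours even though we only gather.
Pixel dofGather(const Pixel* img, int w, int h, int x, int y,
                int maxRadius, float focusDepth, float focusRange) {
    const Pixel& centre = img[y * w + x];
    float centreCoc = coc(centre.depth, focusDepth, focusRange);
    float accR = 0.0f, accG = 0.0f, accB = 0.0f, accW = 0.0f;
    for (int dy = -maxRadius; dy <= maxRadius; ++dy) {
        for (int dx = -maxRadius; dx <= maxRadius; ++dx) {
            int sx = std::clamp(x + dx, 0, w - 1);
            int sy = std::clamp(y + dy, 0, h - 1);
            const Pixel& s = img[sy * w + sx];
            float sCoc = coc(s.depth, focusDepth, focusRange);
            float dist = std::sqrt(float(dx * dx + dy * dy));
            // a sample contributes if its own blur radius reaches us;
            // taking the max with centreCoc keeps in-focus edges sane.
            float reach = std::max(sCoc, centreCoc) * float(maxRadius);
            float wgt = (dist <= reach) ? 1.0f : 0.0f;
            accR += s.r * wgt; accG += s.g * wgt;
            accB += s.b * wgt; accW += wgt;
        }
    }
    Pixel out = centre;
    if (accW > 0.0f) {
        out.r = accR / accW; out.g = accG / accW; out.b = accB / accW;
    }
    return out;
}
[/code]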
to accurately do that with alpha you'd need to either:
- store the solid pixel and every alpha pixel on top of it in a list per pixel. then, while processing the dof blur, consider each list element separately and composite them, blending in z order for all pixels in the kernel and for all elements in the centre pixel's list (see the sketch after this list). so you go from processing n samples per pixel to n*m samples for m elements per pixel, where m is the number of oit samples, which varies per pixel and can be massive for particle systems etc, and you have to sort properly.
- actually just do scatter for everything.
ew.
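spelled out, the first option comes down to something like this. again just a cpu-side c++ sketch with invented names; it only illustrates the n*m sample blow-up and skips the proper z-ordered over-blending within each list to stay short:
[code]
#include <cmath>
#include <vector>

// hypothetical oit fragment: one alpha layer (or the solid base) at a depth.
struct Fragment { float r, g, b, a, depth; };

// one list per screen pixel, assumed kept sorted front-to-back.
using FragmentList = std::vector<Fragment>;

// placeholder dof weight: favour fragments near the centre pixel's depth.
static float dofWeight(float fragDepth, float centreDepth) {
    return 1.0f / (1.0f + std::abs(fragDepth - centreDepth));
}

// the n*m cost made explicit: for each of the n kernel samples we walk
// that pixel's entire fragment list (m elements, unbounded under heavy
// particle overdraw) instead of doing a single depth test per sample.
void compositeKernel(const FragmentList* lists, int w, int h,
                     int cx, int cy, int maxRadius,
                     float centreDepth, float out[4]) {
    out[0] = out[1] = out[2] = out[3] = 0.0f;
    float totalW = 0.0f;
    for (int dy = -maxRadius; dy <= maxRadius; ++dy) {
        for (int dx = -maxRadius; dx <= maxRadius; ++dx) {
            int sx = cx + dx, sy = cy + dy;
            if (sx < 0 || sx >= w || sy < 0 || sy >= h) continue;
            for (const Fragment& f : lists[sy * w + sx]) {
                float wgt = dofWeight(f.depth, centreDepth) * f.a;
                out[0] += f.r * wgt; out[1] += f.g * wgt; out[2] += f.b * wgt;
                totalW += wgt;
            }
        }
    }
    if (totalW > 0.0f) {
        out[0] /= totalW; out[1] /= totalW; out[2] /= totalW;
    }
    out[3] = 1.0f;
}
[/code]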
Smash - if you consider your per-pixel fragment-list a voxel-representation of the scene, you could trace through that... which likely would end up as a search through multiple fragment-lists for each dof-sample. Ew indeed.
So...at which stage do you add the motion blur on top of everything? :)
sagacity: you combine it with the dof, it just changes the kernel :)
If you're maintaining a list of all alpha pixels per screen pixel and sorting that list anyway that DOF/Motionblur algo also gives you draw order independent transparency for free! Go for it! :)
kb: yea, it's oit with dof on top really :)
(not that it is in any way practical btw)
can we haz programmable blending and voxel framebuffers plz?
smash, come on, building and sorting linked lists with potentially hundreds of entries per pixel? How can that NOT be practical? ;)
But yeah, I'd probably go for properly pre-blurred billboards and perhaps some CS/GS based occlusion culling for small particle sizes (as opposed to a per-pixel z test). How's the current crop of engines (UE4 et al) doing that anyway?
raer - here you go
kb: if you use stream compaction and build up the lists in 2 passes, at least you don't thrash memory accesses so badly when you're reading, and...
fuck it, it's so not practical
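for reference, the two-pass build smash is talking about looks roughly like this on the cpu (invented names; on the gpu both passes would be atomic ops while re-rendering the geometry, and the prefix sum a parallel scan):
[code]
#include <cstdint>
#include <numeric>
#include <vector>

// hypothetical incoming fragment stream: target pixel plus packed payload.
struct Frag { uint32_t pixel; uint64_t data; };

// two-pass compaction: instead of per-pixel linked lists scattered all
// over memory, count first, prefix-sum, then write each pixel's
// fragments contiguously so the read (resolve) pass stays coherent.
void buildCompactLists(const std::vector<Frag>& stream, size_t numPixels,
                       std::vector<uint32_t>& offsets,    // numPixels + 1
                       std::vector<uint64_t>& compacted)  // stream.size()
{
    // pass 1: count fragments per pixel (atomic increments on the gpu).
    std::vector<uint32_t> counts(numPixels, 0);
    for (const Frag& f : stream) counts[f.pixel]++;

    // exclusive prefix sum turns the counts into start offsets.
    offsets.assign(numPixels + 1, 0);
    std::partial_sum(counts.begin(), counts.end(), offsets.begin() + 1);

    // pass 2: replay the stream, writing each fragment into its pixel's
    // contiguous slot (again one atomic add per fragment on the gpu).
    std::vector<uint32_t> cursor(offsets.begin(), offsets.end() - 1);
    compacted.resize(stream.size());
    for (const Frag& f : stream) compacted[cursor[f.pixel]++] = f.data;
}
[/code]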
smash/kb: reusing the MSAA-buffers for order-independent transparency isn't as stupid as you try to make it out to be on modern hardware. You only keep a few samples (4 or 8) per pixel, and use screen-space dithering to reduce banding.
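the dithered-coverage trick reads roughly like this. purely an illustrative sketch: the bayer matrix and the 8-sample default are assumptions, not a statement about how any particular hardware or engine does it:
[code]
#include <cstdint>

// a 4x4 bayer dither matrix; values 0..15.
static const int kBayer4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 },
};

// alpha-to-coverage with screen-space dithering: each fragment lights up
// about alpha * numSamples of the msaa sample slots, with a per-pixel
// dither on both the rounding and the slot rotation so the quantization
// shows up as noise rather than banding.
uint32_t coverageMask(float alpha, int x, int y, int numSamples = 8) {
    float dither = (kBayer4[y & 3][x & 3] + 0.5f) / 16.0f;
    int n = int(alpha * numSamples + dither);
    if (n <= 0) return 0u;
    if (n >= numSamples) return (1u << numSamples) - 1u;
    uint32_t mask = (1u << n) - 1u;
    // rotate which slots get used so neighbouring pixels pick different ones.
    int rot = kBayer4[x & 3][y & 3] % numSamples;
    uint32_t all = (1u << numSamples) - 1u;
    return ((mask << rot) | (mask >> (numSamples - rot))) & all;
}
[/code]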
Actually neither of us was mentioning MSAA at all. Of course you can get around the whole alpha issue with coverage; it'd be interesting to see what happens if you try DOF on the multisampled buffer tho. Kind of like the abovementioned list approach only with a fixed sample # per pixel. Might be worth a try for relatively simple scenes in terms of alpha layers but I doubt it'd be good for particle effects where the per-pixel draw count can easily reach the hundreds.
how about radix depth sorting the particles, rendering front to back and writing an alpha "something" to z? does early z culling still work then?
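for the sorting half of that, a minimal cpu radix sort keyed on depth could look like this (names are illustrative; ascending keys give the front-to-back order). it doesn't answer the early-z question, it just shows why radix is the usual pick for per-frame particle sorting:
[code]
#include <cstdint>
#include <cstring>
#include <vector>

// hypothetical particle record, sorted by camera-space depth.
struct Particle { float depth; uint32_t index; };

// map an ieee float to a monotonically ordered unsigned key, so a plain
// byte-wise radix sort orders negative and positive depths correctly.
static uint32_t floatKey(float f) {
    uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return (u & 0x80000000u) ? ~u : (u | 0x80000000u);
}

// lsd radix sort, four 8-bit passes: stable, O(n), no comparisons.
// ascending keys leave the particles in front-to-back order.
void radixSortByDepth(std::vector<Particle>& p) {
    std::vector<Particle> tmp(p.size());
    for (int shift = 0; shift < 32; shift += 8) {
        uint32_t count[256] = {};
        for (const Particle& e : p)
            count[(floatKey(e.depth) >> shift) & 0xFF]++;
        uint32_t sum = 0;
        for (int i = 0; i < 256; ++i) {  // exclusive prefix sum
            uint32_t c = count[i]; count[i] = sum; sum += c;
        }
        for (const Particle& e : p)
            tmp[count[(floatKey(e.depth) >> shift) & 0xFF]++] = e;
        p.swap(tmp);
    }
}
[/code]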
guys, why bother. in 10 years all these issues will be solved anyway, we'll have dof/mblur/oit/aa done for us transparently in hw. just sit back, relax, make a few demos and wait, let the technology catch up :)
(now I'm just waiting for some oldschool wanker to rant about how IQ is killing the scene :)
kusma: no you're right, it's not. you're just limited by sample count then. 16 could cut it tho for some stuff
kb: no, you didn't, but it's a way of implementing what you described without being completely horrible.
iq: in 10 years some crazy scener will have DOF and mblur'd particles running on a c64. ;)
kusma: OIT using stream compaction isn't that bad either you know, it's just when you have a lot of levels..
smash: I don't think I'm aware of that approach, do you have any references?
kusma: no, not off hand :) i thought nvidia were doing it like that, but that might have come from a conversation with an nvidia guy, not a sample. :) ati went for linked lists (which sucks)
Aha, then I think I get it. Cool :)
Sounds like Marco Salvi's OIT technique?
What's with the "newer" depth peeling approaches like "Dual Depth Peeling" or "Robust Order-Independent Transparency via Reverse Depth Peeling in DirectX 10" (ShaderX6, renders back-to-front, uses a single buffer)?
This smells like the ATI solution mentioned: http://www.cescg.org/CESCG-2011/papers/TUBudapest-Barta-Pal.pdf