adaptive cpu usage throttling

category: general [glöplog]

I'm trying to make a demo for low-end platforms (which I have no access to right now). Think an integrated Intel GPU, for instance.

Right now I have a particle-based effect whose performance is correlated with the number of active particles x the number of forces/constraints.

Is there a good strategy for throttling the cpu usage of such an effect, if I'm willing to compromise on the look somewhat?

I've been thinking of monitoring the cpu usage + time to next frame over a given period and altering the number of active particles.

Anyway, does anyone have experience with this kind of effect throttling depending on the host?
added on the 2010-09-21 08:42:24 by _-_-__
Easiest would probably be to do a small benchmark at the beginning (rendering black on black or some such) and throttle based on that.
added on the 2010-09-21 09:21:09 by sol_hsa
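For what it's worth, a minimal sketch of that startup-benchmark idea: time a stand-in workload for a fixed wall-clock budget and map the result to a particle count. All names, constants, and the budget mapping below are made up for illustration, not a real implementation.

```cpp
#include <chrono>

// Count how many dummy "particle update" iterations fit into a fixed
// wall-clock budget. The body is a stand-in for one real update step.
long long benchmark_iterations(long long budget_ns)
{
    using clock = std::chrono::steady_clock;
    auto start = clock::now();
    long long iters = 0;
    volatile double sink = 0.0;  // keeps the loop from being optimized away
    while (std::chrono::duration_cast<std::chrono::nanoseconds>(
               clock::now() - start).count() < budget_ns) {
        sink = sink + (double)iters * 0.5;  // stand-in for one particle update
        ++iters;
    }
    return iters;
}

// Map the benchmark result to a particle count, clamped so the effect
// never drops below a visual minimum or above a designed maximum.
int particle_budget(long long iters, int min_p, int max_p)
{
    long long p = iters / 1000;  // arbitrary scale factor, tune per effect
    if (p < min_p) return min_p;
    if (p > max_p) return max_p;
    return (int)p;
}
```

Rendering black-on-black (or benchmarking behind the loading screen, as suggested below) keeps this invisible to the viewer; the clamp is what stops the effect from degenerating into "4 blobs" on the slowest machines.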
My experience is that throttling always looks horrible, and should be avoided like the plague.

Indeed, even if you manage to play with the number of particles, you're going to get additional trouble trying to figure out how to make it still blend nicely with different amounts of particles. i.e. your particle effect will look like 4 blobs on the lowest-end configuration and like a big white shit on the high-end hardware. sorry if i'm a bit rough but you get the idea. :)
added on the 2010-09-21 09:39:08 by nystep
I thought about this a while back, for an effect where I can vary the number of polys and the texture resolution for speed or quality. This kind of got abandoned before I wrote the throttling part, but my idea was to run a benchmark like sol suggested, but to do it on the loading screen as a background effect while it's doing fairly light work like loading textures. I figured a not-too-smooth loading bar effect was better than a plain loading bar :)

That should give a slightly low performance target, which would help keep the demo smooth if the OS starts arsing around in the background.
added on the 2010-09-21 10:18:47 by psonice
might as well just use the good old "Quality: good/medium/bad" setup option then... i suppose the main problem is the whole aesthetics tradeoff. at least i wouldn't want my effects to look worse on slightly less capable computers.
added on the 2010-09-21 11:58:02 by Gargaj
What some games are doing now is changing the size of the viewport dynamically depending on framerate and upscaling at the end. For that you'd probably best use just a smoothed framerate for the metric (averaged over the last second or two for example).
added on the 2010-09-21 12:18:57 by smash
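A sketch of that smoothed-framerate metric, with an exponential moving average standing in for "averaged over the last second or two", driving a render-target scale. The constants (alpha, the clamp range) are illustrative tuning values, not from any real engine.

```cpp
// Smooth the per-frame time with an exponential moving average and
// derive a viewport/render-target scale from it.
struct FrameStats {
    double avg_ms = 16.7;  // smoothed frame time, seeded at the 60 fps target
    double alpha  = 0.02;  // smoothing factor, roughly a second of history at 60 fps

    void push(double frame_ms) {
        avg_ms = alpha * frame_ms + (1.0 - alpha) * avg_ms;
    }

    // Scale the viewport down when we miss the target, back up when we beat it.
    double viewport_scale(double target_ms) const {
        double s = target_ms / avg_ms;
        if (s < 0.5) s = 0.5;  // never drop below half resolution
        if (s > 1.0) s = 1.0;  // never upscale beyond native
        return s;
    }
};
```

The smoothing matters: reacting to single-frame spikes (OS hiccups, shader compiles) makes the resolution visibly pump, while a one-to-two-second average only responds to sustained load.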
gargaj: so set a lower limit. Would you like your effects to look more awesome on future super high end computers? :)
added on the 2010-09-21 12:41:37 by psonice
you could also try to correlate the fps count with the number of particles, i.e. if you get awesomely high fps => increase the particle count and vice versa.

(btw. this is correct usage of i.e.)
Indeed :)
added on the 2010-09-21 14:19:01 by gloom
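That fps/particle-count correlation could be a simple feedback step run once per adjustment period. A hedged sketch; the thresholds, step sizes, and dead zone are arbitrary tuning values:

```cpp
// Nudge the active particle count up or down a few percent per
// adjustment period, based on the (smoothed) fps, clamped to a
// designed minimum and maximum.
int adjust_particles(int count, double fps, int min_p, int max_p)
{
    if (fps > 70.0)       count += count / 20;  // plenty of headroom: +5%
    else if (fps < 50.0)  count -= count / 10;  // struggling: -10%
    // fps between 50 and 70: dead zone, leave the count alone
    if (count < min_p) count = min_p;
    if (count > max_p) count = max_p;
    return count;
}
```

The dead zone is the important part: without it the count oscillates every period, which is exactly the "throttling looks horrible" failure mode mentioned above.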
Wonder what recent fairlight demos would look like on a netbook using this method? Probably 'abstract'.
added on the 2010-09-21 14:20:31 by psonice
Wonder what recent fairlight demos would look like on a netbook using this method? Probably 'abstract'.

Might be that 'create an empty black opengl window' or 'show a white dot in centre of the window' tutorials would have been a 100% visual match in such case :D
added on the 2010-09-21 14:39:06 by kbi
I don't get the point.

The particle system is running in software?! Why not just underclock the CPU, or let the thread sleep for a fixed amount of time? It's just like simulating old hardware. I don't get the adaptive part; it doesn't make sense. And the GPU will handle the shit in a different manner anyway. But fixed-function pipes, for example, are, well... fixed, so there shouldn't be any difference to recent hardware, except for the number of FPS you get, which you have to control or estimate for the old hardware.

I'm working on a similar case. All I did was read out the device caps and rough hardware data, and estimate performance and graphics issues based on running a demo from the time that hardware was current, which was around 2000. guess which 'demo' I used ;)
added on the 2010-09-21 15:27:46 by yumeji
The issue here is simple. He wants to parametrize the effect adaptively based on the client's CPU performance. No viewports involved, no GPU involved (as far as mentioned at least), none of that. But I'd go with nystep and ask you this: is this "feature" of any importance to the project you're working on? If so, carefully measure and find a heuristic that makes the effect look similar, just more or less detailed, on both ends of the spectrum. Therein lies the challenge, and it's completely tied to the effect/visual itself; there is no common solution.
added on the 2010-09-21 15:34:17 by superplek
I actually threw the question out to figure out whether this was a silly idea to pursue. It seems that experimenting with a controlling parameter, calibrated against the resources of various hardware, and tuning it would indeed work. I don't mind if it's effect-dependent, just more work.

And to be more specific, yeah, I'm also wondering about the GPU part of the equation, for instance how many polys are acceptable. That seems harder to measure too.

The issue is this: on a general-purpose (non-fixed) platform, how do you "degrade gracefully"?

(it's not such a big deal, but the demo isn't meant for a competition)

The viewport adaptation method seems rather attractive for pixel-shader based effects. Isn't Wipeout HD using this method on the PS3?
added on the 2010-09-21 15:57:55 by _-_-__
tesla by sunflower does this - if i remember correctly. when i got a new pc back then, it suddenly had more particles on screen.
added on the 2010-09-21 16:04:17 by Spin
All games fuck with buffer resolutions to accomodate speed.
added on the 2010-09-21 16:08:59 by superplek
depending on the system behavior, try to always show N particles onscreen, but compute just M particles based on available cpu/gpu, then approximate/interpolate the other N-M. might do the job
added on the 2010-09-21 16:28:32 by rmeht
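One possible reading of that suggestion: simulate only M "real" particles and fill the N displayed ones by interpolating between consecutive real ones. Purely illustrative; a real effect would interpolate along a meaningful neighbourhood (emission order, spatial proximity), not just array order.

```cpp
#include <vector>
#include <cstddef>

struct P { float x, y; };

// Expand a small set of simulated particles into the full displayed set
// by linear interpolation between consecutive "real" particles.
std::vector<P> expand(const std::vector<P>& real, std::size_t shown)
{
    std::vector<P> out;
    out.reserve(shown);
    if (real.size() < 2)
        return std::vector<P>(shown, real.empty() ? P{0.f, 0.f} : real[0]);
    std::size_t denom = (shown > 1) ? shown - 1 : 1;
    for (std::size_t i = 0; i < shown; ++i) {
        float t = (float)i * (float)(real.size() - 1) / (float)denom;
        std::size_t a = (std::size_t)t;           // index of the left neighbour
        if (a > real.size() - 2) a = real.size() - 2;
        float f = t - (float)a;                   // fractional position between a and a+1
        out.push_back({ real[a].x + f * (real[a + 1].x - real[a].x),
                        real[a].y + f * (real[a + 1].y - real[a].y) });
    }
    return out;
}
```

M stays tied to the measured cpu/gpu budget while N stays constant, so the effect keeps its visual density everywhere; what degrades is the variety of motion, not the look.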
I like your idea rmeht .. it sounds worth a good try :)
added on the 2010-09-21 20:44:11 by _-_-__
sounds alright yeah. hope it looks that way too, since individual interaction between particles (which is directly coupled to their amount) can make or break an effect.
added on the 2010-09-21 21:24:10 by superplek
demoscene design rule n.17: if you move them fast and glowing enough, no one will notice the difference *g*
added on the 2010-09-21 21:39:04 by rmeht
i doubt particle physics is the real problem here
added on the 2010-09-21 21:41:04 by the_Ye-Ti
very good point theyeti.. it turns out that after having a good look, some unexpected part of the system is taking way more cpu time than I expected (compared to the physics!)
added on the 2010-09-21 22:54:10 by _-_-__
@rmeht good point there. The simulation issue would be solved, but the 'oldies' might be a lil weak to actually draw the same amount of particles. The FPS would just go down, and reducing the amount would probably break the look.

Well. It depends on the actual effect how to solve the visual problem.

Smoke-type effects are kinda easy to degrade without losing too much detail. Complex particle systems, not so much.
added on the 2010-09-21 23:02:54 by yumeji
you could compute a different set of "real" particles each frame, so every shown particle gets correctly updated within a short timeslice. won't work on very dynamic systems though, yes
added on the 2010-09-22 00:58:54 by rmeht
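That round-robin update could look something like the following, with each slice integrated using a proportionally larger timestep so overall motion speed stays consistent. A sketch with made-up names, not a real integrator:

```cpp
#include <vector>
#include <cstddef>

struct Particle { float x, v; };

// Update only one slice of the particle array this frame, rotating so
// every particle is refreshed once every 'slices' frames.
void update_slice(std::vector<Particle>& ps, int frame, int slices, float dt)
{
    std::size_t s     = (std::size_t)(frame % slices);
    std::size_t begin = s * ps.size() / (std::size_t)slices;
    std::size_t end   = (s + 1) * ps.size() / (std::size_t)slices;
    float big_dt = dt * (float)slices;  // compensate for the frames this slice skips
    for (std::size_t i = begin; i < end; ++i)
        ps[i].x += ps[i].v * big_dt;    // trivial Euler step as a stand-in
}
```

Cost per frame drops by a factor of 'slices', and 'slices' itself can be what the fps feedback adjusts. The caveat from the post stands: the larger effective timestep breaks down for fast-changing forces.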
I'd really like to see a 'shot' of the actual effect. It'd help in finding a smart solution. mmh.
added on the 2010-09-22 01:26:05 by yumeji