The tool vs code debate
category: general [glöplog]
As discussed outside of the scene:
http://realtimecollisiondetection.net/blog/?p=73 -- graphical shader systems are bad
http://c0de517e.blogspot.com/2008/08/commenting-on-graphical-shader-systems.html -- commentary
Interesting to read, but it doesn't seem relevant to demoscene productions. The set of interdependent graphical assets is just too small. No one does fallbacks or multiple rendering strategies either - demos have a fixed optimum configuration; everything below it is just too slow, and anything leeter doesn't help either.
So for a demo, a shader generator is the SHIT. :)
It's not the tool (code is just a tool too), it's how you use it anyway..
eye, I quoted those discussions not really for their precise point, but rather for the general topic of tools. It seems what they are really up against is a form of tool that exposes a lot of "flat" parameters to the artist, who is then encouraged to achieve results by "stacking up" things/behaviours.
Maybe it's the "stacking up" which is the real culprit. All in all, it seems to me the problem lies in the too-limited ways for a tool user to interact.. or the lack of feedback when "something's up" (it can be performance-related, or weird inter-relationships that make the result "non-deterministic").
As I understand it, he doesn't like these kinds of systems because they allow the artists to stack a lot of effects and routines on top of each other to build advanced shaders that -- while they yield great-looking results -- are too heavy to run in game engines.
..which is sort of a non-issue, since that is what artists do. :) The job of the team leaders and programmers is then to go into the artist's cubicle and say: "Sorry, you have to reduce the number of steps in your shader, because it's too slow at runtime -- cut some corners, will you?" .. and four years later, GAME X is released, looking a tad uglier than it could have, seeing as there have been three generations of new GPUs since that day. :)
Eye nails it, pretty much: this is how many people work when making demos anyway, except there is no QA department, only "it runs at 60 FPS on _my_ computer, therefore it runs fine elsewhere" :)
in the past I always fooled myself with "I want to keep my stuff as general and versatile as possible" which always resulted in way too overcomplicated code and taking way too much time, so never getting something done.
After watching chaos' seminar on 3d engine coding I ditched my old stuff and started from scratch, this time first thinking about what I really need 90% of the time and implementing it in a way that makes it easy to use and non-complex.
As chaos stated (summarized): "Don't expose all the src/dst blend states, just expose the ones really needed in code and to the artists (in a tool)"
Tools definitely speed up work and give better results: Tweaking parameters is so much less work this way and you can really tune it until the thing looks correct. You don't need to write big tools - even a simple spline editor will save you hours of edit-compile-test cycles..
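Chaos' advice about not exposing every src/dst blend state might look something like this in practice. A minimal sketch, not from any real engine - the enum names and the stand-in blend-factor values are mine:

```cpp
#include <cassert>
#include <utility>

// Stand-in for the full D3D9-style blend factor zoo.
enum BlendFactor { BLEND_ONE, BLEND_ZERO, BLEND_SRCALPHA, BLEND_INVSRCALPHA };

// The small artist-facing choice: three modes instead of every src/dst pair.
enum class BlendMode { Opaque, Additive, AlphaBlend };

// Map the restricted enum onto the actual src/dst state pair internally.
std::pair<BlendFactor, BlendFactor> blendStates(BlendMode m) {
    switch (m) {
        case BlendMode::Additive:   return { BLEND_ONE,      BLEND_ONE };
        case BlendMode::AlphaBlend: return { BLEND_SRCALPHA, BLEND_INVSRCALPHA };
        default:                    return { BLEND_ONE,      BLEND_ZERO }; // Opaque
    }
}
```

The artist (or the tool UI) only ever sees the three-value enum; the engine keeps the freedom to change what each mode means underneath.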
url for the seminar btw?
Jar: When we made "Regus Ademordna", we first had a discussion about what kind of effects we wanted, and what kind of control we wanted over them in our sync-tool. Kusma and Garble then implemented about 90% of what we agreed upon, ditched the last 10% that I wanted, and implemented an additional 10% of what he thought I wanted. Worked out great :)
imho graphical shader systems are really needed today (if you are making a fairly advanced game), otherwise you end up with a zillion shader permutations.
art tools are getting more and more important as well, to get the extra graphical quality for visually high-end games. and just to say that artist "shaders" are slow is foolish; what usually happens after they have created all their shaders is that a programmer looks at them and optimizes them (either the shadergraph or the actual outputted code). one way to easily achieve this is to link a shadergraph with two versions of the code (the shadergraph's outputted code and the optimized code). you could also write nodes that contain a lot of shader code and have the artists just connect the params from other nodes (which gives very little overhead).
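The "zillion permutations" point is just combinatorics: with f independent on/off shader features, the number of possible variants doubles with every feature. A trivial illustration (the function name is mine; real engines of course only build the permutations actually referenced by content):

```cpp
#include <cassert>
#include <cstdint>

// With f independent boolean features (normal map on/off, env map on/off,
// fog on/off, ...) the number of distinct shader variants is 2^f.
constexpr uint32_t permutationCount(int booleanFeatures) {
    return 1u << booleanFeatures;
}
```

Ten toggles already means 1024 potential shaders - which is why nobody wants to write them by hand, whether the answer is a graph system or a macro-driven generator.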
guardian: the seminar is here
thx :)
epic fail. second try: seminar
gloom: yes, this would be a good approach. unfortunately most of the time my demomaking-process begins with an effect that I coded and no plan/ideas of what to do next, so this wouldn't work in this case :) But this is my fault, I really should think about ideas/theme first: Spares you the "inconsistent, sucks!" remarks on pouet :)
our recent demos/64ks have shaders with multiple rendering methods (ati/nvidia), multiple techniques for each number of lights and so on - all generated with the aid of the compiler and macros, not a graph. :) erm - almost exactly like what they talked about in that post about ubershaders, actually. that has a few downsides of its own, but it's been the best approach that i've found so far.
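That macro-driven ubershader approach boils down to one shader source and a loop that prepends #defines before each compile. A rough sketch of the idea - the shader snippet and function names here are illustrative, not from any particular engine:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// One shader source covering all cases, switched by preprocessor macros.
static const std::string kUberShader = R"(
#if NUM_LIGHTS > 0
    // ...per-light shading loop...
#endif
#ifdef VENDOR_ATI
    // ...ATI-specific rendering path...
#endif
// ...rest of the shader...
)";

// Build one permutation by prepending #defines to the shared source.
// The result would then go to the shader compiler (e.g. glShaderSource
// or the D3DX compile functions).
std::string makeVariant(int numLights, bool atiPath) {
    std::ostringstream src;
    src << "#define NUM_LIGHTS " << numLights << "\n";
    if (atiPath) src << "#define VENDOR_ATI 1\n";
    src << kUberShader;
    return src.str();
}
```

The downside smash alludes to is real: every #define combination is a separate compile, so the permutation count from the earlier posts comes right back at build time.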
i've never been a fan at all of graphical shader systems, because they usually fall into one of two camps:
- exposing every instruction to the user. so, making something relatively simple code-wise becomes a chore of dragging together muls and adds, and makes the job a lot harder than just typing it.
- exposing hardly anything, just some building-block boxes like "do lighting", which end up not giving you anything like enough control - why not just give the artist a couple of pre-arranged shader files with exposed parameters?
when i've seen artists use these shader editors, i've mainly seen them create something that blends a shitload of texture inputs together with adds and muls, to make something that doesn't look that cool and does run like crap.
the biggest problem is, to code something - using code or a graphical ui - you need to understand what's going on in the first place. if you don't know how to make shaders, it won't help whether you type it or drag boxes. once you get past the basic syntax issues, making shaders well needs a wealth of experience with maths and graphics - it's much harder to learn all that than it is to learn the syntax.
having the graphical interface is like a guy offering children some sweets to get into his car, rather than just dragging them in off the street - the sweets may make it look like a better prospect initially, but in the end it all comes down to the same nasty business.
the best shader tool for an artist is called "a shader programmer". :) and a good shader tool is imo something like fxcomposer - you still code it, but it helps you along by giving you performance info, compile results etc., and it lets you edit parameters on the fly so you can see immediately what results they have.
now if you were trying to use that to lead into a general debate about demo tools, you're way off the mark. the point of a demo tool is to accelerate the tasks which are stupidly annoying to do in code, like positioning cameras, choosing colours, managing resources, arranging to the music and so on - not to completely replace coding. it's just about picking the right equipment for the job, be it a graphical tool or code in one form or another. (when i do a demo on my own, i'd still rather use my own demotool for a lot of the tasks, even if i have the option to code it.)
In my current engine I have a C++ "shaderlib": basically a number of HLSL shader texts that get compiled at init, after which you have a list of shader handles you can request. There are some fixed shaders like unlit-color-only, unlit-tex-only and simple bump-env, and then I have functions that generate various permutations of phong, IBL and parallax shaders (flags for the types of textures used, number of colors, etc.)
This all provides the shaders I need 90% of the time. If I need special shaders for an effect, I hardcode them in my demoeffect cpp file :)
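The shaderlib idea above - permutations compiled once and handed out as handles - could be sketched like this. The flag names, the class, and the fake "compile" step are all hypothetical, just to show the shape of it:

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Hypothetical feature flags selecting a phong permutation.
enum PhongFlags : uint32_t {
    PHONG_DIFFUSE_MAP = 1 << 0,
    PHONG_NORMAL_MAP  = 1 << 1,
    PHONG_SPECULAR    = 1 << 2,
};

using ShaderHandle = int;

class ShaderLib {
public:
    // Return a handle for the phong permutation matching these flags,
    // building it on first request and caching it afterwards.
    ShaderHandle phong(uint32_t flags) {
        auto it = cache_.find(flags);
        if (it != cache_.end()) return it->second; // already compiled
        ShaderHandle h = nextHandle_++;            // stand-in for a real compile
        cache_[flags] = h;
        return h;
    }
private:
    std::map<uint32_t, ShaderHandle> cache_;
    ShaderHandle nextHandle_ = 0;
};
```

The nice property is that effect code just asks for "phong with normal map and specular" and never cares whether that permutation was compiled yet.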
and for my own demosystem i don't have a shadergraph, because i usually don't have that many shaders nor that many permutations of them.
Luckily this is a hobby. I LOVE coding shaders (despite the best efforts of Nvidia and ATi to make it as frustrating as possible) so I code shaders. I know, despite the efforts of people like H2O, I have the artistic eye of a blind warthog and so most of it turns out very average but - I love coding shaders.
The point I'm trying to make very badly is that this is a hobby; it's meant to be fun, so I think arguing about the best way (unless that's how you get your fun :-) doesn't help.
Should I post a picture of kittens now?
i don't know a damn thing, but I think tools are fine as long as you write and use your own. it's just that werkkzeug and stuff like that being used to make a demo in its entirety, by people who had no input in actually making the tool, puts me off a bit - a bit too close to just firing up 3DSMAX or whatever, you know
I don't get why people all the time act like artists can't be educated. They're actually usually quite clever people in my experience - you could give them some interesting challenges, and be amazed by the outcome. That being said, how you work isn't so important, the end-result is what matters. People have different ideas on what's the correct way forward. I personally hard-code shaders as much as possible, but that's cause I don't have the need for anything more sophisticated.
As kusma said - a lot of artists are into mel/maxscript and so on, and are quite capable of learning to code shaders.
the discussion became pointless once they introduced shader parameters and conditional compiling anyway :P
oh MEL, the horror
it's dangerous though, shaders do touch the gpu in a direct fashion. and it's not always as transparent as it seems, performance wise. this is where a coder might be a better idea than an artist.
as for the whole debate, i've seen a combination of datadriven/permutation-parametrized shadercompilers work well in games.
(and for our demo we just hack away without any clear guidance, but thats another thing)
(and by 'datadriven/permutation-parametrized' i meant that plugging the permutation based stuff with offline/online parameters into the data/materials with the right granularity will do the trick)
(this being without materials that actually allow composing shaders in another way than modifying material properties. having an artist shader composer works nicely to author some special materials if need be, I guess, but I wouldn't go for it as an overall authoring solution)
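One reading of "data-driven/permutation-parametrized": material files carry plain key/value properties, and the pipeline maps those onto the permutation flags of a fixed shader family. A hypothetical sketch - the property names and the key layout are made up for illustration:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// A material as authored data: just named properties, no shader graph.
using Material = std::map<std::string, std::string>;

// Derive the shader permutation key from which properties are present.
// Artists edit data; the set of possible shaders stays fixed and known.
uint32_t permutationKey(const Material& m) {
    uint32_t key = 0;
    if (m.count("normal_map"))   key |= 1 << 0;
    if (m.count("specular_map")) key |= 1 << 1;
    if (m.count("env_map"))      key |= 1 << 2;
    return key;
}
```

This keeps the granularity question explicit: the engine decides which properties actually spawn a new permutation, rather than letting arbitrary graphs multiply the shader count.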
but then there are 100,000 ways to put a pipeline together. whatever works best depends too much on context and team to say anything useful in general, imo.