WTF is it with NVidia-only demos?
category: general [glöplog]
Quote:
Scali : Raw Confessions is based on a DirectX engine and works fine on Radeon cards.
It didn't work on my R8500 anyway. Perhaps I should try it again. I know it's DirectX... that doesn't mean the coders can't screw up in a retarded way though.
Quote:
chaos, Scali .. arb_vertex_program, arb_fragment_program and arb_vertex_buffer_object should be what you're looking for. nothing vendor-specific, and supported on the cards that can handle it (and some more)
Don't tell us, tell the idiots that code vendor-specific stuff.
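For reference, checking for those ARB extensions before picking a codepath only takes a few lines. A minimal sketch (assuming a GL context is already up; the crude substring test is good enough for these particular names):

#include <GL/gl.h>
#include <string.h>

/* returns nonzero if 'name' appears in the driver's extension string */
int hasExtension(const char* name)
{
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, name) != 0;
}

/* usage, after context creation:
   if (hasExtension("GL_ARB_vertex_program") &&
       hasExtension("GL_ARB_fragment_program") &&
       hasExtension("GL_ARB_vertex_buffer_object"))
       take the shader path, else take a fallback path */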
Quote:
and nvidia's innovation.. i meant extension-wise - ati 22, and nvidia 40..
Since when does the number of extensions implemented in a driver say anything about innovation?
For example, if you have one extension, pixelshaders... you might accomplish more with that than with 10 other extensions put together.
Quote:
btw, i had more problems with dx demos than opengl ones.. but then again i have an nvidia gf3 in my computer.
Yes, nVidia has been having trouble sticking to the D3D specs lately. Their drivers seem to be having lots of trouble rendering 3dmark03, for example... Random black bars appear, and point sprites are not drawn at all...
I guess they try to keep up with ATi's hardware at the risk of bugs in their drivers... Not a very good strategy.
Quote:
"Many OpenGL extensions only exist on hardware of one vendor."
wrong. just check the extensions provided for a Nvidia card here:
Okay wiseguy... "Many OpenGL extensions with a vendor-name only exist on hardware of one vendor".
In fact, let me clarify that even further: "Many important and often-used OpenGL extensions (such as the previously mentioned vertexbuffer/indexbuffer/vertexshader/textureshader/pixelshader ones) with a vendor-name only exist on hardware of one vendor. Furthermore, there is not always an equivalent that is not vendor-specific. On top of that, even if there is such a non-vendor-specific equivalent, coders do not always use it."
Is it clear now?
You have problems interpreting English properly? I think in this context it was already very clear what I meant.
Quote:
But it isn't portable to other platforms than m$ ones
What's worse? Your demo not running on the 5% of computers that don't have Windows installed (and I think in the demoscene, linux is even less popular, for obvious reasons), or your demo not running on the 60% of computers that don't have an nVidia card? (or... the 90% of computers that don't have a GeForce3+ card, to get back to Kolor..)
Quote:
OpenGL 2.0 could be a solution to solve all these problems, but we don't know yet *which* hardware will support it.
So why wait for OpenGL 2.0 when you can get the same from Direct3D 9.0 *NOW*?
hum, you still like to make it into an ogl/dx war i see..
well, i don't like to change all my old code in order to get my hands on one new feature.. that is, between dx8 and dx9 it seems to be mostly a search-and-replace of some constants, but before that i heard about a lot of trouble to make things happen.
it's not hard to understand that nvidia had some trouble implementing for example ps1.4; it was really just improvements that exposed some ati-specific implementation details..
And i still say, it's not the API, it's the coder.. you can fuck up both just as easily.. so NO (my opinion) it's not easier to do it in opengl. like the intel integrated chipset.. i know a demo that used the 'store the buffer on the card' flag (don't know the name, but static something i guess) for vertexbuffers.. just because the dx help clearly stated that it was faster. then they changed the buffer each frame: worked fine on his computer (with shared memory) but the FPS dropped to a few per second on gfx cards with their own memory..
it's all about bad coding, and face it, we can't afford to test our stuff on so many different cards.
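to illustrate: a rough sketch in d3d8 terms (the flags are the real dx8 ones, the function and variable names are made up for the example).. the 'static' buffer may land in video memory, which is exactly why rewriting it every frame crawls on cards with their own memory while looking fine on shared-memory chipsets:

#include <d3d8.h>

void createBuffers(IDirect3DDevice8* device, UINT size, DWORD fvf)
{
    IDirect3DVertexBuffer8 *vbStatic = 0, *vbDynamic = 0;
    BYTE* ptr = 0;

    // the 'store it on the card' variant: fast to draw, terrible to
    // rewrite every frame on cards with local memory
    device->CreateVertexBuffer(size, D3DUSAGE_WRITEONLY, fvf,
                               D3DPOOL_DEFAULT, &vbStatic);

    // what per-frame updates want instead: a dynamic buffer...
    device->CreateVertexBuffer(size, D3DUSAGE_WRITEONLY | D3DUSAGE_DYNAMIC,
                               fvf, D3DPOOL_DEFAULT, &vbDynamic);

    // ...locked with D3DLOCK_DISCARD so the driver can hand back fresh
    // memory instead of stalling until the card is done with the old data
    if (SUCCEEDED(vbDynamic->Lock(0, size, &ptr, D3DLOCK_DISCARD)))
    {
        // write this frame's vertices through 'ptr' here
        vbDynamic->Unlock();
    }
}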
Quote:
hum, you still like to make it into an ogl/dx war i see..
I don't, Direct3D just seems the better option at this moment, given the facts... I already stated that I don't care if you use Direct3D or OpenGL, as long as your stuff works.
Quote:
it's not hard to understand that nvidia had some trouble implementing for example ps1.4; it was really just improvements that exposed some ati-specific implementation details..
Erm, excuse me... Are you trying to say that ps1.4 is just ps1.3 with some 'details'? ps1.4 is a completely different programming model, allowing two 'phases' in one shader... basically giving you two passes in one. It's closer to ps2.0 than to ps1.3, even though the version number might not indicate it. But if you inform yourself on the matter, you'll see.
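To show what I mean, here is roughly what the two-phase model looks like (an illustrative ps.1.4 fragment I made up for this post, not from any real demo):

// everything before 'phase' is pass one, everything after is pass two;
// arithmetic results from pass one can drive the texture reads of pass two
const char g_ps14[] =
    "ps.1.4\n"
    "texld r0, t0\n"   // phase 1: sample texture 0
    "mul r0, r0, r0\n" // phase 1 arithmetic
    "phase\n"          // switch to phase 2
    "texld r1, r0\n"   // dependent read: phase-1 result used as coordinates
    "mov r0, r1\n";    // r0 holds the final color

Try expressing that dependent read in ps1.3 and you'll see why it's not just 'details'.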
Quote:
it's all about bad coding, and face it, we can't afford to test our stuff on so many different cards.
You still don't get what I'm saying, do you? Bad coding is a problem, no matter what API you use. Testing is also a problem... However, I am talking about *DELIBERATELY* not supporting other vendors, that's a different story.
I can understand that it's not possible to test everything, but you can at least try to be compatible. And you can at least try to fix the bugs when they are reported (fr08 was terrible as well at first, the party version was very buggy... But the final runs on most hardware without a problem... And Planet Loop didn't work on ATi at all, but they released a patch. I really appreciate that). But if you just say "Okay I use this incompatible API because I think M$ sucks, C rules, and nVidia is gh0d, and I don't care about people wanting to watch the demo, I only care about winning the compo and getting the money", like Kolor does... no, that is definitely a problem. We do not appreciate this. If you don't want people to view your demos, don't make them!
A bit angry there, mate :)
I make demos for fun, and to see if I can get the result I want.. and if you want to see the demos, go to the party, don't sit at home whining that people want to use the full extent of their expensive cards.
Who cares if you can't see a demo or two? if it's not because of the 'extension hell', it's because some card has a buggy driver, or the audience doesn't have the expected HW.. If i want to make use of a specific extension to achieve an effect i will use it, and if i figure out a way to do it with standard extensions later i probably will change.. but the main goal is to get the effect I want. So if they deliberately bind themselves to a specific card it's probably because they knew what they wanted, managed to get it to work (vendor-specific) before the party, and were happy about it. Come on Scali.. haven't you done the same sometime? give me a link to all your demos and i will try them :)
Sorry, if you *make* demos for fun, why enter them in a party at all? If you want to share them, but don't want (or can't afford) testing and portability, why not enter them in the wild compo?
Oh boy, calm down everyone please...
I thought about switching from OpenGL to DX, too, for obvious reasons. I'm pretty confident with my OpenGL engine but I definitely want to use PixelShaders, which are a major pain in the a** with OpenGL right now.
On the other hand, PS are (as some of us might know) not supported on older gfx cards. Right now I'm more inclined to skip the PS stuff and go on with OpenGL.
The problem is that this whole thing puts a strong bias on compos.
Some groups stick to "standard" stuff that runs even on an old TNT1, and therefore cannot use sophisticated stuff like PS.
Other groups don't give a sh*t about older cards, they just go like "now let's get everything out of *MY* card". So they've got a clear advantage, if the compo machine features the right card.
But w/o these demos I guess we would still stick to VESA ;)
Unfortunately I don't have any perfect-for-all solution to this problem; maybe just stick to the most common extensions and skip all PS completely if it's not supported, instead of crashing.
to sum up, nvidia-only (or ati-only) demos suck ass, regardless of api.
Quote:
If i want to make use of a specific extension to achieve an effect i will use it, and if i figure out a way to do it with standard extensions later i probably will change.. but the main goal is to get the effect I want.
Sounds like a wild demo to me.
Quote:
Come on Scali.. haven't you done the same sometime? give me a link to all your demos and i will try them :)
I must admit that I released a thing which used dynamic cubemaps once. I didn't have a workaround, because well... there is none. Then again, it worked on ANY card which supported cubemaps. And I released it last year, when cubemaps were quite widespread. Besides, it was a tech-demo... with reflections and shadows and stuff... I didn't compete with it. I think competition should be fair, and everyone should follow the same rules... I think incompatible demos should enter the wild compo. It's unfair to have them compete with more compatible demos.
we all follow the same rules.. you know they usually put the rules on the party's homepage a couple of days before the party :)
noname : of course i make them for fun.. what other reasons are there for making demos? fame and fortune?
btw: the demo/wild discussion is in another thread, long dead and forgotten.. but to remind you, wild doesn't state that it should be realtime, that's a main difference.
Quote:
we all follow the same rules.. you know they usually put the rules on the party's homepage a couple of days before the party :)
Read above, I already suggested some changes to the rules to prevent vendor-specific demos. If the rules were fine as they are, we wouldn't be having this discussion.
Quote:
what other reasons are there for making demos? fame and fortune?
That appears to be Kolor's reason... They got their cash-prize at the party, now they don't care if anyone can ever watch their demo again.
Quote:
wild doesn't state that it should be realtime, that's a main difference.
They don't state that it shouldn't, either. PocketPC and mobile demos need to enter wild compos as well.
I think they are in console or mobile compos.
look, incompatible demos make great content for mindcandy vol 128. :)
excerpt from their page:
Quote:
You can't always just run the demo. For the thousands of demos written before 2000, the hardware they were written for is hard to find, and modern hardware doesn't play them properly (or at all).
just replace the date with something more recent :)
Quote:
I think they are in console or mobile compos.
Sometimes, if the party happens to have those... But otherwise they usually don't enter the regular democompo, but rather the wild compo.
Quote:
look, incompatible demos make great content for mindcandy vol 128. :)
Well, I would be happy if people released movies of their incompatible demos. Raw Confessions was released as a movie right away. That's very nice.
Many non-PC demos are also released as movies. Great solution.
So I would like to see that either demos become compatible, or that incompatible ones will be released as movies.
Quote:
- How about having compo rules against vendor-specific code?
- How about not disclosing the actual hardware in the compomachine? Instead, just state eg. 'Radeon8500/GeForce3 class accelerator'
1. Hum.. how are you going to check that? if they provide an alternative path (flatshade only, 1 fps :) for all other cards.. and you can still be very vendor-specific with ARB or EXT extensions.. just ask for 8 TMUs and you're there (gffx only shows 4 tmus unless you use vertex/fragment shaders or make your fragment program 1024 steps long..)
2. pretty good idea.. but still you're pretty sure it will be at least those two cards.. so what about all those with a gf2? or are the new rules to make only you happy? :)
I don't want to look down on the ideas, i just wanted to point out the difficulties in all the different sets of rules.
Quote:
1. Hum.. how are you going to check that? if they provide an alternative path (flatshade only, 1 fps :) for all other cards.. and you can still be very vendor-specific with ARB or EXT extensions.. just ask for 8 TMUs and you're there (gffx only shows 4 tmus unless you use vertex/fragment shaders or make your fragment program 1024 steps long..)
Compos often have 2 machines... Put two different brands in them, and you have a reasonably good test-base (they can borrow videocards from sceners if necessary... or heck, they can ask a few sceners to run the prods on their boxes at the party itself?).
Quote:
2. pretty good idea.. but still you're pretty sure it will be at least those two cards.. so what about all those with a gf2? or are the new rules to make only you happy? :)
GF2 is incompatible because it's outdated (from 1999 if I'm not mistaken), not because it happens to be from the wrong vendor. These people should just upgrade, which they will anyway. The thing is, they should be able to upgrade to the product of their choice, without having to give up demowatching, because some idiotic coder believes it's 'the wrong brand'.
by supporting Open GL you support terrorism!!!!!!!!!!!!!
some facts:
- i code demos for FUN.
- i LIKE (completely irrational if you will) gl more than dx
- i happen to HAVE an nvidia card
result:
- i just fail to see how people think they have any right to DEMAND a demo to run on their platform.
moreover:
- i fail to see how people think to improve anything by flaming about such an issue
- if somebody can show me he has the skills and wants to do so, i am willing to share code with him so he can port the demo to ati/dx/linux/amiga/c64 or whatever else he happens to have on his desk.
"- i code demos for FUN."
Great, more power to you!
"- i LIKE (completly irrational if you will) gl more than dx"
Fine, who cares?
It's not about gl vs dx, it's about compatibility.
"- i happen to HAVE an nvidia card"
So do I. This does not delude me into thinking that the entire world has, or should have, an nvidia card however.
"- i just fail to see how people think they have any right to DEMAND a demo to run on their platform."
I don't demand anything. I'm just expressing my disgust for your attitude towards compatible demos.
"- i fail to see how people think to improve anything by flaming about such an issue"
Agreed, you can't be reasoned with anyway. But perhaps some other coders will do their best to make more compatible code.
"- if somebody can show me he has the skills and wants to do so, i am willing to share code with him so he can port the demo to ati/dx/linux/amiga/c64 or whatever else he happens to have on his desk."
Why don't you do it yourself? I can lend you an ATi card. Or is democoding suddenly not fun anymore, if you can't use your nazi brand of hardware or your nazi 3d api?
As DirectX grew up, it appeared the API changed a lot, becoming simpler to use (in fact it's not really simpler, it just has helpers like D3DX coded on top of it, helpers that aren't included in the release runtime version and are therefore unusable for intros;) ). And you gotta recognise that making a simple but robust framework for DX needs much more work and lines of code than an OGL one.
The thing is that DX has also become very architectured and fixed in the way it has to be used; this is good for engine coders that like flybys and follow the generic (game) way of coding.
I mean, with OGL, every coder has his own way to code things, managing states and the pipeline through simple functions... DirectX is much more structured and now i feel it tends to influence the way we code. The problem is that i'm no gamecoder, and i don't do conventional 3d engines. My aim is not to find the better and faster way to render materials, but more to find a way to make it look unconventional. I'm not into unified code following a given pipeline;) It seems to me OGL gives more "way to code things" flexibility, it's C-only support enables us to create C++ apps that manage just everything we think about. The simple fact of grouping things into objects (from the API point) is restrictive for non conventionnal usages.
The extension mechanism is not that hard to handle and its behavior never changed;) let's compare it with the multiple useless renamings of some important DX functions, without forgetting the things that simply moved, forcing some of us to rewrite lots of methods (just take the viewport if you need an example).
Being vendor-specific shouldn't be a problem since there are lots of ppl coding on devices that lots of sceners don't have (let's name C64, GBA, linux, amiga with strange cards, ...). I do think it's even a way to be sure that the demo you do will look the way you made it. Don't tell me ATI rendering looks the same as nvidia's; if you think that, just try doing a simple OGL fog and render it using an ATI card older than the 8500, you'll see how funny the result will look;)))
So, instead of crying about compatibility, just ask for an AVI version.
PS: before the 9th version, DX gave no real argument for moving from OGL. But with the FX feature, it now becomes very attractive and has no GL equivalent. We're now able to manage state changes along with shaders outside the app's core code and THIS is a good reason to prefer DX;)
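To give an idea, a toy effect (hypothetical, D3DX9 .fx syntax, not from any real prod): render states and shaders live together in one technique, outside the app's core code, and the app just picks a technique at runtime (via ID3DXEffect, loaded with D3DXCreateEffect):

const char g_fx[] =
    "float4x4 worldViewProj;\n"
    "float4 VS(float4 pos : POSITION) : POSITION\n"
    "{\n"
    "    return mul(pos, worldViewProj);\n"
    "}\n"
    "technique Glow\n"
    "{\n"
    "    pass P0\n"
    "    {\n"
    "        // render states and the shader, bundled in the effect:\n"
    "        AlphaBlendEnable = true;\n"
    "        SrcBlend         = One;\n"
    "        DestBlend        = One;\n"
    "        VertexShader     = compile vs_1_1 VS();\n"
    "    }\n"
    "}\n";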
"PS: before the 9th version, DX gave no real argument for moving from OGL. But with the FX feature, it now becomes very attractive and has no GL equivalent. We're now able to manage states changes along with shaders outside the apps core code and THIS is a good reason to prefer DX;)"
Well, that is not a D3D9 thingy, it's in D3DX9 .. and no one uses D3DX anyways, simply since it sucks ass.
if you don't wanna code your own math and image library, use prophecy (www.twilight3d.com), it's really great for such things, and much better than d3dx..
anyways, it's not more code with d3d stuff, it's usually less. please paste your multitexturing code and rendertarget code, and i will paste mine, let's see who wins :)
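to get the ball rolling, here's roughly what the d3d side of that bet looks like: a hedged sketch of fixed-function two-stage multitexturing, D3D8-style ('device' and the two textures are assumed to exist already):

#include <d3d8.h>

// base texture * vertex color in stage 0, lightmap multiplied on top in stage 1
void setupMultitexture(IDirect3DDevice8* device,
                       IDirect3DTexture8* base,
                       IDirect3DTexture8* lightmap)
{
    device->SetTexture(0, base);
    device->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    device->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);

    device->SetTexture(1, lightmap);
    device->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    device->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    device->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);

    device->SetTextureStageState(2, D3DTSS_COLOROP, D3DTOP_DISABLE); // end of cascade
}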
"Why don't you do it yourself? I can lend you an ATi card. Or is democoding suddenly not fun anymore, if you can't use your nazi brand of hardware or your nazi 3d api?"
Coding for compatibility is probably the least fun aspect of coding.
pete: true but lack of compatibility in your code is the best way to lose respect. Especially when the results do not require it.
Alkama babbled:
Quote:
I mean, with OGL, every coder has his own way to code things, managing states and the pipeline through simple functions... DirectX is much more structured and now i feel it tends to influence the way we code. The problem is that i'm no gamecoder, and i don't do conventional 3d engines. My aim is not to find the better and faster way to render materials, but more to find a way to make it look unconventional. I'm not into unified code following a given pipeline;) It seems to me OGL gives more "way to code things" flexibility, it's C-only support enables us to create C++ apps that manage just everything we think about. The simple fact of grouping things into objects (from the API point) is restrictive for non conventionnal usages.
I am sad to say that this piece of text makes absolutely no sense at all. honestly. I mean:
- "it's C-only support enables us to create C++ apps that manage just everything we think about"? what?
- "The simple fact of grouping things into objects (from the API point) is restrictive for non conventionnal usages."? eh?!
etc
Quote:
As DirectX grew up, it appeared the API changed a lot, becoming simpler to use (in fact it's not really simpler, it just has helpers like D3DX coded on top of it, helpers that aren't included in the release runtime version and are therefore unusable for intros;) ). And you gotta recognise that making a simple but robust framework for DX needs much more work and lines of code than an OGL one.
The unsurpassed masters of the 64k, Farbrausch, do in fact use Direct3D, not OpenGL, so Direct3D can't be all that bad, can it?
Oh, and their stuff works pretty well on my last three cards... G450, GF2GTS and R8500.
"The compatibility in 64k!"
For ultimate compatibility the scene could use Flash or Java. 'Cept I usually avoid such demos like one might avoid the plague.