Maximum file size for demo (esp. @Assembly)
category: general [glöplog]
They're alright ya
Quote:
what xTr1m said .. if I want to watch expert-level animations I watch a Disney movie .. the discussion about limitations like file size first needs a definition of what a "demo" is and what makes the scene unique .. I think it's not just the audio-visual aesthetics of a prod, since that doesn't help to differentiate a demo from any other art ..
Yeah guys, I really feel you, but there is another way of looking at it.
We know what is what, but non-demosceners wonder: why the heck do games look so much better than demos?
I believe this example has already been used a couple of times. It's carefully captured data, but obviously not the greatest challenge to render (basic LOD + PBR). Unfortunately, it's really damn hard to compete with that using e.g. procedural generation (not impossible, but ridiculously hard).
If you want to show off real skills, intros are the right way, while big demos are supposed to show off artistic passion and quality that is at least on par with modern games - which is theoretically possible, just even harder to achieve with a size limit.
That said, I don't really care, because again, the cost of making such a demo would be too ridiculous for me compared to the benefits/glory.
And again, there is no issue IMHO as you can put your oversized stuff on Revision and probably Assembly will follow (judging from this discussion).
A nice demo has nothing to do with size; just look at the 100000 4ks that use the same raymarching techniques and look like shit. I'd rather watch a well-produced 256 MB PC demo that has good content :) (the 256 MB demo would probably require a lot more skill to produce as well).
But 256 MB isn't enough, that's what started this debate!
Quote:
big demos are supposed to show off artistic passion and quality that is at least on par with modern games
No, for me demos are not only about creating visuals that look better than / on par with modern games. Great demos have clever and tricky things going on that go a bit deeper than the usual "put some fancy shaders on static meshes and aim for the UE4 look".
Look at Andromeda demos for example. In Noumenon and Stargazer there are several parts where you ask yourself "how did they do that?" (at least back when they were released). Good Amiga and C64 demos accomplish the same.
Of course it helps to use high-quality textures and shaders, but for me the concept of the effects is what makes things special compared to plain 3d-scene-player demos.
Quote:
Ok, well, it seems I didn't get your point then. Because _I was_ referring to technical limits. That people try to aim for something appealing and interesting I take for granted when talking about making quality demos.
But maybe I just failed to understand what you mean. In that case I apologise.
I see, no problem! My admittedly sarcastic point was that in the demoscene, "interesting to watch" is usually conflated with technical considerations. If a production is criticised for being boring, it's easily excused by pointing out that it's impressive for 1 kB. In "artsy fartsy stuff" there's no such escape hatch.
In other words, just as 3 GB of content doesn't necessarily result in something that's interesting to watch, neither does imposing more or less random technical limits.
It seems some people think bigger filesizes mean we're going to just use giant uncompressed textures or videos downloaded from the net, and not bother compressing stuff.
Hint: any coder capable of doing a reasonably high end demo knows what compression is and where/how to use it.
Actual cases I can imagine needing bigger sizes:
- Volumetric stuff. A few people have mentioned this. I've been working on this a bit too, and in my case a typical model is ~50-100MB. 256MB would give me 2-3 models. I wouldn't enter a compo with that size limit with this stuff. (And no, I couldn't just use a standard mesh, because then I couldn't use nice new rendering techniques.)
- Video. No, not sticking a video of an animation with a replayer in an exe. There's a ton of very cool stuff you can do in realtime with a video, but you probably don't want 640x480 res if you're doing that.
- Machine learning. There's a ton of scope to use ML in demos! But the data sets can be huge, 256MB or less is going to hurt.
Quote:
But 256 MB isn't enough, that's what started this debate!
Yeah, all that point cloud data for unlimited detail won't fit in 256 MB.
Quote:
Actual cases I can imagine needing bigger sizes:
I'll add a fourth one: lack of time. Assuming you don't have unlimited time to work on a production due to work/family/other commitments, each moment you spend on trying to reduce the file size to satisfy people's desire for small is a moment not spent on making your production better in aspects that are relevant to it.
Gestalt by Quite is a good example of this. The production is excellent in every possible way, but there are people in the comments complaining about the file size, which is not relevant to this production in any way aside from technical fetishism. I'm sure it could be made smaller by reducing the fidelity and spending shitloads more time on optimizing it/using procedural generation/writing the whole thing in a custom engine instead of using Unreal, but is that relevant to the concept and the production itself? No, not really.
Quote:
but is that relevant to the concept and the production itself? No, not really
It wouldn't be relevant if this were a realtime demo outside the context of the demoscene. But entering such a production into a compo at a demoparty is a different matter imho.
By the same arguments, any machinima-type stuff (realtime or even recorded) should be allowed to enter regular demo compos - which, so far, I think would clearly have to be entered into the wild compo.
If we follow this route, then why have separate compos for intros/demos/wild anyway? The logical conclusion would be to just do one big hodge-podge compo, and good luck with that..
Is machinima even a thing still?
You don't necessarily have to do everything by yourself. I'm happily using Crinkler and 4klang. Is there some kind of proof that this point cloud data is inherently incompressible? I get a feeling that Navis may not have asked for help with his data.
:D
Sorry for this rant (tl;dr: make a demo about it), but...
Quote:
Once we are used to not thinking in 1k/4k/8k/64k categories but judging each prod's size independently, in relation to its technique and content, coders would have a lot more freedom of choice instead of aiming at one of those predefined limits.
It doesn't work that way, for several reasons:
- In practice this is not a continuum. A 64k is not a small demo, a 4k is not a small 64k, a 1k is not a small 4k, and a 256b is not a small 1k. These are made using widely different approaches, techniques and tools.
- The only basis we have for judging whether a production is "good for its size" is the hundreds of productions we have seen previously with the same size. And this tells us nothing about what can be achieved for intermediate sizes.
- We think in categories. 4k is a thing. You don't see many 5k intros in 64k compos, but you see lots of 4k intros (when there is no 4k compo), because the creators know that this is something the audience can relate to.
- It's much harder (and wildly more subjective) to compare entries against each other when there are multiple variables (instead of just "how well do I like this"). We have this situation to some extent already in multi-platform compos, where the contestants play the gamble that the audience has at least some idea of what the various platforms are capable of. But with size this just becomes wild speculation about the preferences of the audience.
- It is the target size, not the actual size, that matters. Making something as small as possible and making something fit to a predefined size are two entirely different processes. A 64k intro that is 20k is not better optimized than one that is 60k, since further size reduction was never an optimization parameter. Which leads to the final point:
- Irrelevant optimization parameters are just a waste of the creators' precious time. Preacher put it very well, in what I would say is the crux of all this:
Quote:
each moment you spend on trying to reduce the file size to satisfy people's desire for small is a moment not spent on making your production better in aspects that are relevant to it.
Quote:
You don't necessarily have to do everything by yourself. I'm happily using Crinkler and 4klang. Is there some kind of proof that this point cloud data is inherently incompressible? I get a feeling that Navis may not have asked for help with his data.
I know enough to tell whether something that compresses at about 1:3 can realistically be pushed to e.g. 1:10 to get under x megabytes, also as a balance of my effort vs the time given to the demo itself.
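As a rough illustration (the numbers below are hypothetical, purely to show the trade-off being described), the arithmetic looks something like this:

```python
# Hypothetical sizes, not the actual data.
raw_size_mb = 900.0        # raw point cloud data
size_limit_mb = 256.0      # compo size limit
achievable_ratio = 3.0     # what compression realistically gives here (1:3)

required_ratio = raw_size_mb / size_limit_mb      # ~3.5:1 needed just to fit
compressed_mb = raw_size_mb / achievable_ratio    # ~300 MB, still over the limit

print(f"need 1:{required_ratio:.1f}, get 1:{achievable_ratio:.0f} -> {compressed_mb:.0f} MB")
```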
If ski jumpers can do it, so can computer geeks.
https://en.wikipedia.org/wiki/Ski_jumping#Scoring_system
Balanced demo score = voting points / (log (prod size in bytes) * log (CPU clock frequency in Hz))
And you could also add other relevant things like newcomer bonus and demo group size (to counter inevitable self-voting).
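For fun, here's a tiny sketch of how such a normalization could be computed (all numbers are made up purely for illustration, and natural log is just one arbitrary choice of base):

```python
import math

def balanced_score(voting_points, prod_size_bytes, cpu_clock_hz):
    # Ski-jumping-style normalization: divide the raw vote score by the
    # logs of the prod size and of the CPU clock frequency.
    return voting_points / (math.log(prod_size_bytes) * math.log(cpu_clock_hz))

# Hypothetical: a 64k intro on a ~4 GHz PC vs. a 40k demo on a 1 MHz machine.
print(balanced_score(500, 64 * 1024, 4e9))  # ~2.0
print(balanced_score(500, 40 * 1024, 1e6))  # ~3.4
```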
Argh. I thought I could resist writing anything here before we have had a meeting with the Assembly compo crew, but nope...
My personal (not related to Assembly) opinion on this is that if you want size to be a thing, do a 1k/4k/64k/whatever size-restricted category intro.
Then again, when it comes to changing/not changing the Assembly rules, that is a discussion we will have with the compo team in a bit more than a week.
Quote:
And you could also add other relevant things like newcomer bonus and demo group size (to counter inevitable self-voting).
And the cumulative amount of blue in the demo pixels (calculated from a pixel-perfect lossless capture at 1280x720).
Quote:
Balanced demo score = voting points / (log (prod size in bytes) * log (CPU clock frequency in Hz))
I've always wanted to make an Amiga demo which doesn't use the CPU at all. This is the perfect occasion!
1 MHz is plenty to tell a GPU to draw 60x per second. I won't bother to write it, just send me the prize.
I had the same thought. But you only get about a 1.5x factor on your votes from that, so you still need to make a good demo. ;)
Abyss/FC sent me a message:
"Rimina, who is the head of compocrew, will be posting there in about a week as we've gone through rules for next year. But yes, it's a valid point and I don't think the filesize restriction makes sense anymore"
:-)
"Rimina, who is the head of compocrew, will be posting there in about a week as we've gone through rules for next year. But yes, it's a valid point and I don't think the filesize restriction makes sense anymore"
:-)
Great news! I hope 3kspert is already preparing an 8 GB entry!
That's good news. We will also try to reduce file size as much as possible (as an exercise).
Quote:
The production is excellent in every possible way, but there are people in the comments complaining about the file size, which is not relevant to this production in any way aside from technical fetishism. I'm sure it could be made smaller by reducing the fidelity and spending shitloads more time on optimizing it/using procedural generation/writing the whole thing in a custom engine instead of using Unreal, but is that relevant to the concept and the production itself? No, not really.
Hammer, meet nail.
The demo compo has always been there to allow for a degree of content/artistic freedom and a focus on the experience (or effects, give it a name) instead of having to worry about staying within strict limits. As time goes by, the allowance goes up, and as I've already said, organizers will just decide and you'll have to deal with that.
Reducing as an exercise? Great, good for you. But that's your personal call there and not something people should feel they have to do to compete in this category.
That thing I offered earlier, "show file size", doesn't do anything either, I realize, except perhaps keep T$ happy :-) And I completely forgo PR and brand building and shit like that, yet that really influences your ranking at the end of the day.