OpenGL or DirectX for 64k PC
category: code [glöplog]
Hello
For the moment all my 64k PC intros were done using simple GLSL shaders, but I'm planning to do something more sophisticated for my next release (generating some meshes and textures, and also using compute shaders).
I wanted to get feedback from other groups on which API is best suited to 64k. I saw that a lot of 64k demos are done with DirectX 11; the only exception I've found so far is the Ctrl-Alt-Test (https://github.com/laurentlb/Ctrl-Alt-Test) demos.
With OpenGL you need some extra data because you have to load the function pointers (which is not true on Linux).
My tools currently hot-compile C, which is why I'm very hesitant about changing the rendering API.
Has anyone made a comparison of this?
Thanks
With OpenGL you do indeed need to query the function pointers by their string identifiers, but it's a drop in the ocean compared to everything else. People write 4k intros just fine even though you need to do the same thing there.
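(For illustration, a minimal sketch of what that querying looks like on Windows; it uses the standard WGL call and the PFNGL* typedefs from glext.h, needs a current GL context, and skips all error handling:)
Code:
#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>   // PFNGL* typedefs

// Sketch: query one GL entry point by its string name.
PFNGLCREATESHADERPROC pglCreateShader = NULL;

void loadEntryPoints(void)
{
    pglCreateShader = (PFNGLCREATESHADERPROC)wglGetProcAddress("glCreateShader");
    // ...repeat for every GL 2.0+ function the intro actually uses.
}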
A bigger difference is that you need to ship your GLSL shaders as strings instead of compiled HLSL bytecode. In practice this isn't a problem since you can either a) minify the shaders, or b) let the compressor deal with them. When making Guberniya (writeup) we didn't even remove comments from shaders and everything fit in fine.
However, if you want your intro to run on both Nvidia and AMD cards, then DirectX 11 is a better choice. You can make compatible OpenGL intros too, but it's more work. Overall it sounds to me you should just stick with your current approach and focus on making a kickass demo :)
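(As a rough sketch of the shader-as-string approach mentioned above, assuming a context exists and the entry points are already loaded; real code would also check the compile log. The shader itself is just a placeholder:)
Code:
// Sketch: embed GLSL source as a plain string and hand it to the driver's compiler at runtime.
// The text sits in the binary as-is and compresses well under the packer.
static const char* fragSrc =
    "#version 330 core\n"
    "out vec4 c;\n"
    "void main() { c = vec4(1.0, 0.5, 0.2, 1.0); }\n";

GLuint compileFragShader(void)
{
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fragSrc, NULL);   // pass the raw text to the driver
    glCompileShader(fs);                     // the driver's GLSL compiler runs here
    return fs;
}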
I'd recommend going with the API you're most comfortable using; after all, 64k is plenty of space and the few function imports will be lost in packing.
For the record, The Jaderoom uses OpenGL, and since we were in a hurry and the pipeline wasn't fully set up we didn't even minify the shaders... and still had kilobytes to waste *ducks away in shame*. That said, I wrote the jaderoom with a "jacked-up 4k++" mentality; I still think it would be rather easy to make it fit into 32k.
logicoma stuff is GL too. What the other dudes said :)
Only tangentially related, but:
Quote:
(which is not true on linux)
This actually depends on the installed driver -- on Mesa, and new NVidia versions that use libglvnd, you can get away with it, but for older NVidia drivers (and maybe some other cases as well), you still have to use GetProcAddress
Either way, for 64ks I think this doesn't make much of a difference size-wise. If you put all those strings right next to one another in the binary they'll compress quite well anyway.
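(A hedged sketch of that trick: keep the names as one contiguous, NUL-separated blob and walk it in a loop, so the compressor sees a single regular run of text. The names and count here are just placeholders:)
Code:
#include <windows.h>   // wglGetProcAddress
#include <string.h>    // strlen

// Sketch: one packed blob of '\0'-separated GL function names, loaded in a loop.
static const char glNames[] =
    "glCreateShader\0glShaderSource\0glCompileShader\0glCreateProgram\0";
static void* glFuncs[4];   // filled in the same order as the names above

static void loadGL(void)
{
    const char* name = glNames;
    for (int i = 0; i < 4; ++i) {
        glFuncs[i] = (void*)wglGetProcAddress(name);
        name += strlen(name) + 1;   // skip past the terminating NUL to the next name
    }
}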
Quote:
I saw that a lot of 64k demos are done with DirectX 11
I honestly can't remember any recent 64k that was DX aside from ours...? (Not that I was checking thoroughly, admittedly.)
Atlas was DX11, honestly just because I wanted to try a different API. Personally I prefer it; you tend to have better luck with GPU vendor compatibility, and the API is a little less... boneheaded, since they don't try to maintain backwards compatibility within the same API.
But if you've already got something with OpenGL or that's what you're comfortable with, there's no need to change it. As others have said, in a 64k the function imports won't make a difference.
Thanks for all your answers!
the use of the api functions themselves isn't really an issue in 64k, so it's about the shaders and personal preference.
you'll probably be doing the same thing in either API: generating/stitching/storing a shader as source code, then compiling it using the API's shader compiler at runtime. it gives you a lot more flexibility than precompiled binaries - which are only an option on dx and are usually a lot larger anyway (when you start getting complicated).
in dx9 we had the super handy fx framework to make use of, but that's a static lib on dx11 :(
in my experience the dx shader compiler is quite a lot slower than the GLSL one but produces rather more efficient code - and the compiler isn't entirely shoved into the driver, which helps with vendor compatibility. the compile speed is a real issue in 64k though, because (if you're generating lots of big shaders with raymarching loops) that time hits your precalc.
i find the interop between compute and graphics pipes easier in dx, and the compiler to be less pedantic and more open to abuse, but that could all be personal preference.
on the plus side for GLSL, shadertoy is full of examples :)
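(For reference, a rough sketch of that runtime HLSL compile path, using D3DCompile on a source string; the shader is a placeholder and error checking is left out:)
Code:
#include <d3dcompiler.h>   // link with d3dcompiler.lib
#include <string.h>

// Sketch: compile an HLSL pixel shader from a source string at runtime.
static const char* psSrc =
    "float4 main() : SV_Target { return float4(1.0, 0.5, 0.2, 1.0); }";

ID3DBlob* compilePS(void)
{
    ID3DBlob* code = NULL;
    ID3DBlob* errors = NULL;
    D3DCompile(psSrc, strlen(psSrc), NULL, NULL, NULL,
               "main", "ps_5_0", 0, 0, &code, &errors);
    // the resulting bytecode blob is what CreatePixelShader() consumes
    return code;
}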
It's 2020, don't start new projects in GL. Choose Vulkan or D3D12.
And with those, I would easily choose Vulkan myself, because SPIR-V is much more reasonable to generate or compress than DXIL, at least in something like a 64k footprint.
kusma: Why would you pick infamously verbose APIs for a size-limited production though? That just means less space for content, for what gain?
—
Use the API that you're most comfortable with for actually getting working results out; that's ultimately the only thing that matters. D3D11 is probably better if you're only targeting Windows, due to its superior (as far as I understand) error handling model and debugging and tooling options. On the other hand you'll probably find more support for OpenGL issues from folks within the scene. Beyond that, what cce said should cover the other significant 64k-specific points.
cpdt: These APIs are only verbose in the scope of hello-world triangles. That kind of verbosity doesn't matter for 64k. In fact, even a slightly advanced renderer can often end up with less code, because the API allows for much more orthogonal code. And the gain is a lot. Access to new hardware abilities is a major one. More flexibility in how to use the existing hardware features is another one. Not investing time in dying technology is a third. Keeping up with the state-of-the-art is a fourth.
Want to use HW raytracing without vendor lock-in? Forget about GL or DX11.
noby: If the tongue-in-cheekness of that comment isn't obvious to you, then I don't know what to say. And no, the demoscene is already squeezing out plenty of mediocre raymarchers using 10 year old technology. Let's not encourage that, OK?
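(If anyone wants to check whether their driver actually exposes it, here's a minimal sketch that scans the device extension list for the KHR ray tracing pipeline extension; the VkPhysicalDevice handle is assumed to exist already, and the extension name is the one from current headers:)
Code:
#include <vulkan/vulkan.h>
#include <string.h>

// Sketch: check whether the device exposes VK_KHR_ray_tracing_pipeline.
int hasRayTracing(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, NULL, &count, NULL);
    static VkExtensionProperties props[1024];   // static to keep it off the stack
    if (count > 1024) count = 1024;
    vkEnumerateDeviceExtensionProperties(gpu, NULL, &count, props);
    for (uint32_t i = 0; i < count; ++i)
        if (strcmp(props[i].extensionName, "VK_KHR_ray_tracing_pipeline") == 0)
            return 1;
    return 0;
}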
kusma: I gave raytracing with Vulkan in a 64k a try, but the 32-bit driver doesn't support the extension :[
Maybe time to start the 64bit packer experiments then? :)
if you target Windows only, then DX11; it produces a smaller *.exe file
if you target Linux and Windows and want minimal exe size, then it's GLES 3.3
if you target full cross-platform with mobile phones, iPhone and Mac and Lin/Win, then only Vulkan.
Vulkan produces a ~2 times bigger *.exe than the GLES3 API, and shaders for Vulkan in SPIR-V format are 10x-100x larger than OpenGL's compressed GLSL text files (a vert and frag shader for a triangle with a static color is 5+5=10kb already).
Quote:
Either way, for 64ks I think this doesn't make much of a difference size-wise. If you put all those strings right next to one another in the binary they'll compress quite well anyway.
all my demos on pouet are in Vulkan, and just the Vulkan API by itself produces 10x larger *.exe code; a minimal single-triangle exe in GLES3 is ~2Kb, while in Vulkan it's ~20Kb (after compressing it, original size 60Kb)... and it's just a single triangle
the biggest problem in Vulkan is the SPIR-V shaders, which are of absolutely enormous size; the usual compiled SPIR-V size for shaders of 100-500 lines of code is 50Kb-250Kb (yes), which is even larger than the original GLSL source file...
and the Vulkan drivers that compile SPIR-V shaders are in an alpha/beta state; my own shaders that launch fine in OpenGL on the same hardware may even crash the Vulkan drivers, and I have already sent a few bug reports... as an example, the new Doom Eternal does not work on AMD hardware... and even Valve understands that and is developing its own SPIR-V compiler for AMD GPUs...
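(For context, a hedged sketch of why the Vulkan path ships binary: shader modules are created from raw SPIR-V words via vkCreateShaderModule rather than from compressible GLSL text. The spirvWords/spirvSize names are placeholders for a real compiled blob:)
Code:
#include <vulkan/vulkan.h>
#include <stddef.h>
#include <stdint.h>

// Sketch: a Vulkan shader module is built from a SPIR-V binary blob,
// so the whole compiled blob has to live in the exe.
extern const uint32_t spirvWords[];   // placeholder: output of an offline GLSL-to-SPIR-V compile
extern const size_t   spirvSize;      // size of the blob in bytes

VkShaderModule makeModule(VkDevice device)
{
    VkShaderModuleCreateInfo info = {};
    info.sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    info.codeSize = spirvSize;        // in bytes, must be a multiple of 4
    info.pCode    = spirvWords;
    VkShaderModule module = VK_NULL_HANDLE;
    vkCreateShaderModule(device, &info, NULL, &module);
    return module;
}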
While kusma has some valid points, consider that after all these years no one else in the demoscene comes even close to what smash (and maybe guille) are able to do with even dx11.
Why bother with vulkan/dx12 if all we manage to do with it is the same static meshes + glow + dof. Or raymarching.
Where's the creative use of geometry shaders? Unorthodox compute pipelines? Tessellation? We didn't even scratch the surface of dx11.
Danilw: did you try DX12, out of curiosity?
Quote:
Danilw: did you try DX12, out of curiosity?
no I did not
I know DX12 needs almost the same (a little fewer) calls to the API as Vulkan, so the resulting *.exe will be just a little smaller than with Vulkan.
I saw somewhere on github a "single triangle DX12" launcher with a ~4Kb exe size, but I know for sure that for a more complicated project the DX12 code size explodes just like Vulkan's, so... you can try
I do not see any reason to use DX12 when we have Vulkan
Quote:
We didn't even scratch the surface of dx11.
I think the point was that it's not necessary to squeeze every single drop out of the previous generation before moving on to the status quo:
Quote:
Not investing time in dying technology
Then let me rephrase: Which aspect of dx11 currently severely limits you in your demomaking / effects invention? What aspect of vulkan/dx12 do you think will help with that significantly in the coming months?
I get that it's fun to get to know/try new stuff and I'm not against that, mind you. If that's your goal and you are willing to invest your spare time (months) into that, very well.
Me personally, I invest time in higher-level code first, which gives benefit irrespective of the rendering API used. I had my fair share of doing rendering frameworks from scratch in the last 20 or so years (DX7 anyone?), and I am getting tired of it.
Quote:
I had my fair share of doing rendering frameworks from scratch in the last 20 or so years (DX7 anyone?), and I am getting tired of it.
using Open Source technology is always better than the MS corporation (especially for non-U.S. members)
and the Vulkan API does not get outdated like DX8/9/10 or OpenGL2
Quote:
time (months) into that
porting the Doom 3 engine, and learning Vulkan at the same time, cost 3 months of work (GDC youtube report)
porting an existing framework from OpenGL3+/DX10+ to Vulkan costs less than a month
Quote:
as an example, the new Doom Eternal does not work on AMD hardware...
Why didn't anyone tell me before I finished the whole game on an RX580?!
(HDR doesn't work, the game does.)