Starting with OpenGL
category: code [glöplog]
Thanks, I'm sure I can find my way around all the tutorials and whatnot! In the end I just want a shader-based workflow anyway, with as little fixed-function legacy as possible. But I just needed to know what to look for in the first place :)
From 2012 but should bring you from zero to pro in one read: Click Me!
But calling OpenGL cross-platform isn't really true. Modern OpenGL is Windows desktop. Apple is behind. Linux is behind. And let's face it, current ES 2.0 (+extensions) isn't really OpenGL, ES 3.0 is at a 40% adoption rate and ES 3.1 will take years...
Cross-platform today: abstract your renderer (at some high or low level), start with a D3D11 backend (UWP, Windows, Phone, XBox), then Metal (iOS, OSX), then GL for Linux and GLES for Android, in that order.
And for the GL version you have to fight against nVidia vs. AMD vs. Intel and different GL versions and extensions. And more or less crappy drivers.
Official State of Cross-Platform Video
Indeed. For commercial games/demos that'd have to be the case.
For simpler, less feature-oriented but GPU-intensive things (especially shader-driven ones) it might be more pragmatic to write an abstracted renderer based around ES 2.0/GL 2.1, which will run on almost-modern desktop GL / desktop ES / iOS / Android, and the ANGLE project can be used for Windows tablet / "modern UI" apps where GL simply isn't available. None of it is deprecated, and lots of code remains common across all platforms there.
Official State of OpenGL compatibility Video
and then there is Vulkan...
Yes, well... I'm not THAT concerned with full, across-the-board, 100% compatibility. But if my choice is "fairly supported" with OpenGL or "just abstract all your rendering and write all your shaders multiple times" I think I'll take my chances for now :)
I briefly looked at ANGLE and that kind of works I suppose, but as far as I can tell it doesn't have support for ES 3.0 yet, which limits it quite a bit. Confirm/deny?
Incidentally, what about just using OpenGL ES 3.x directly? If you use non-open drivers (e.g. no Mesa) would this work on Windows and Linux?
For bigger applications and OpenGL cross-platform (Win/Linux/Mac/Android) compatibility, take a look at Qt. It abstracts most of the stuff you'd need (windows/contexts/extensions + textures/buffers/shaders/... + images/fonts). As of version 5.4+ it can also automatically switch between an OpenGL and ANGLE backend on application startup for increased compatibility. It is rather big though, and only GPL/LGPL...
Don't use ANGLE. It introduces one more compile step for your shaders, which is bad for all kinds of reasons, even if ES had all the fun features, which it doesn't. Do yourself a favour and stay away from ES if you don't absolutely need to support web or mobile.
Nvidia drivers for Linux are decent and full-featured (disclaimer: it's been a while since I used OpenGL on Linux myself, but our 64ks usually just work with Wine, without us ever testing). Not sure about ATI on Linux, but I hear they have improved and they claim to support 4.4 now, same as the Windows version.
What EvilOne said is true, but "abstracting" a renderer is what you have to do when you write something like a commercial game engine, not a demo. Complexity explodes tenfold, and many features that make your life so much easier can't be used because only 85% of the target platforms support them. Making demos or doing research are rare chances to use the shiny bleeding-edge features (often meaning stuff that is reasonably stable and a few years old already) because you can just choose to drop e.g. Intel graphics. It's a privilege you have over commercial engine developers; don't waste it on increasing your reach by a few percent. I play both roles, and my day-job self envies my night-demoscener self for that reason.

And, most importantly, don't worry about the limits too much before you actually hit them: chances are you will be fine for a very long time. And by then, in case you hit some obstacle (which, again, might not happen at all), you will know what you actually need and there will be okay workarounds that cover much of that. Just focus on having fun with all the nice toys we got and don't limit yourself to the crappy stuff because Intel and Apple are behind. You're doing this for fun and learning and not for business (I guess).
ES2.0/GL2.1 is a bit limited, but OpenGL >= 3.1 should give you 99% of what you'd ever need and it is available on ALL relevant desktop platforms...
Also DO start with MODERN OpenGL (which is already pretty close to ES 2.0), do everything without fixed-function stuff, use shaders. DON'T follow the NeHe path. The learning phase might be a bit longer, but you're not learning deprecated stuff and you can do so much more with it...
Once you have a triangle (possibly colored and/or rotating) on the screen with modern OpenGL, you have basically learned all there is to it...
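For reference, here's roughly what the shader side of that "modern" triangle looks like, as a minimal sketch assuming a 3.3 core context; the attribute locations and the uAngle uniform are just placeholders that the host code has to set up:

Code:
// --- vertex shader ---
#version 330 core
layout(location = 0) in vec2 aPos;
layout(location = 1) in vec3 aColor;
uniform float uAngle;          // set from the host if you want the rotation
out vec3 vColor;

void main() {
    float c = cos(uAngle), s = sin(uAngle);
    gl_Position = vec4(mat2(c, s, -s, c) * aPos, 0.0, 1.0);
    vColor = aColor;
}

// --- fragment shader (separate source) ---
#version 330 core
in vec3 vColor;
out vec4 fragColor;

void main() {
    fragColor = vec4(vColor, 1.0);
}

The host side is then just: compile and link the two sources, put three positions and colors into a VBO attached to a VAO, glUseProgram, and glDrawArrays(GL_TRIANGLES, 0, 3) once per frame (plus a glUniform1f for the angle if it should spin).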
Raer: I'm actually building my main editor with Qt, but I want to avoid the overhead of having to distribute Qt just to run my demo (see: recent Adept demos) :)
Cupe: Yes, makes sense. No ES, then!
saga: You're right. 25MB for icu-stuff. wtf. Would be sweet if you could use only the OpenGL-DLL, but sadly that doesn't quite work out... :/
but yes. new 3state demos plz!
Apart from the fact that yes, having to ship Qt for a demo is a bit... weird... Regarding that bloated ICU DLL and other things: people have sat down and stripped those DLLs of features that are not really required (IIRC the biggest part of that ICU DLL was LUTs that you don't need anyway). Just have a look here: https://github.com/WPN-XM/qt-mini-deploy
raer, I disagree. 3.3 does not have what you need if you start fresh now; 4.3 is the bare minimum in my opinion. Example reasons: tessellation, image load/store, atomic operations, compute shaders. Plus tons of subtle little features like being able to query texture properties and all kinds of other things from within a shader, or more expressive layout qualifiers. 4.5 has direct state access (currently only available as an extension on ATI, so it might soon[tm] be officially supported as 4.5).
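To make a couple of those buzzwords concrete, here is a toy GLSL 4.30 fragment shader (not from the thread, all names invented) that touches image load/store, an atomic counter, an in-shader texture query and a newer layout qualifier:

Code:
#version 430 core
layout(early_fragment_tests) in;                    // GL 4.2 layout qualifier

layout(binding = 0) uniform sampler2D uTex;
layout(r32ui, binding = 0) coherent uniform uimage2D uHitCount;       // image load/store
layout(binding = 0, offset = 0) uniform atomic_uint uShadedFragments; // atomic counter

in vec2 vUV;
out vec4 fragColor;

void main() {
    atomicCounterIncrement(uShadedFragments);               // GL 4.2 atomic counter
    imageAtomicAdd(uHitCount, ivec2(gl_FragCoord.xy), 1u);  // scattered atomic write

    int mips = textureQueryLevels(uTex);                    // GL 4.3 in-shader texture query
    fragColor = texture(uTex, vUV) / float(mips);
}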
Saga: that's pretty cool :) but I already have my ui code separated fully from the rest so I'd really only be using Qt for the GL stuff which seems wasteful. It's quite nice, though.
Cupe: how widespread are compute shaders? Are people using them to, oh I don't know, blur framebuffers and so on, or is this still pixel shaders plus full screen quad?
Mostly fullscreen quads, although I guess a few 4k/8k use them instead of pixel shaders that would do more or less the same. They are more useful for simulation (particles, fluids, light transport) or generally for processing less evenly structured data than pixels or vertices (any kind of hierarchical data structure like BVHs for raytracing, stuff that needs sorting or compaction, FFT, things that need scattered writes (although ImageStore makes this possible for the other types)). It's one of those "you'll know it when you need it" things, but when you do, a short elegant compute shader might either save you awkward abuses of other shaders or make something possible that wasn't possible before. So far, none of our intros had to actually use them, but it will happen, and it's always nice to know they are there and our system supports them.
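As a concrete sketch of the simulation case (buffer layout and names are made up), a compute shader that integrates a particle buffer can be as small as this; the host dispatches glDispatchCompute((count + 255) / 256, 1, 1) and issues a glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT) before using the buffer again:

Code:
#version 430
layout(local_size_x = 256) in;

struct Particle {
    vec4 posLife;   // xyz = position, w = remaining lifetime
    vec4 vel;       // xyz = velocity
};

layout(std430, binding = 0) buffer Particles { Particle p[]; };

uniform float uDt;

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= uint(p.length())) return;                // guard the last, partial group

    p[i].vel.xyz     += vec3(0.0, -9.81, 0.0) * uDt;  // gravity
    p[i].posLife.xyz += p[i].vel.xyz * uDt;
    p[i].posLife.w   -= uDt;
}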
Details, even if I'm not asked ;) I prefer triangles/cs over quads. Why?
1. faster.
2. At least on the DX side of this, setting up a triangle was just smaller in DX9.
Another sidenote: compute shaders are also wonderful for well-structured data (totally scattered data is basically bad everywhere) - plus you save some pipeline overhead (never actually measured whether that really gives a win in the end).
Okay, now that cupe recommended you go the "use OpenGL for fun" route, i.e. ignoring platform compatibility... I can recommend using DX11.x with an NV desktop & VS2013 - the debug layer is just awesome and it's fun to program - way more fun than GL/GLSL imho. Stuff won't run in Wine, but at least the chances that some graphics card vendor breaks your intro with a future driver release are somewhat reduced when using DX/HLSL. To the best of my knowledge: all major graphics APIs and shader language compilers have their problems.
Maybe Vulkan will be awesome, I don't expect that to happen.
P.S.: I really enjoyed the "Official State of" part of this thread. Muhahah.
I use a fullscreen triangle. Clipping is free, and setting up a single triangle to cover the whole screen isn't very hard.
That way you avoid the problem of two triangles, where the seam is inefficient, because it is touched multiple times, especially with AA enabled.
Like that:
Code:
___
| /
|/
Which is what is mentioned in the linked blogpost. You can also avoid possible problems/seams with screen space derivatives this way.
I'm too lazy to setup any buffers and just use SV_VertexID to figure out where the three vertices go.
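The GLSL counterpart of that SV_VertexID trick is gl_VertexID; a minimal sketch of the vertex shader (just bind an empty VAO and call glDrawArrays(GL_TRIANGLES, 0, 3)):

Code:
#version 330 core
out vec2 vUV;

void main() {
    // gl_VertexID 0/1/2 -> (0,0), (2,0), (0,2) in UV space,
    // i.e. (-1,-1), (3,-1), (-1,3) in clip space: the oversized triangle sketched above.
    vec2 uv = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);
    vUV = uv;
    gl_Position = vec4(uv * 2.0 - 1.0, 0.0, 1.0);
}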
FWIW, you have no guarantee that the driver won't just clip that back into a quad and then two triangles. If you need it for size then fine, otherwise don't bother. (I measured it myself back in the day and found exactly no difference speed-wise, although GPU architectures have evolved since then.)
Quote:
FWIW, you have no guarantee that the driver won't just clip that back into a quad and then two triangles.
Erm, sure you do.
Drivers don't process geometry at that level. They can't, because they don't know what the geometry will look like in screenspace until after it's passed through the vertex shader.
By then it is already out of the driver's control really.
Quote:
If you need it for size then fine, otherwise don't bother. (I measured it myself back in the day and found exactly no difference speed-wise, although GPU architectures have evolved since then.)
This technique was actually recommended by nVidia in a presentation: ftp://download.nvidia.com/developer/presentations/2004/GPU_Jackpot/GeForce_6800_Performance.pdf
Pretty sure other GPUs have a very similar implementation of the pipeline, where this is the most efficient way to render fullscreen. It makes perfect sense.
Clipping to a quad and then rendering as two triangles does not.
6800 was one of the cards I benchmarked this on at the time, incidentally. :-) My rendertargets were relatively big for the time (2048x1024 and such), though, perhaps it mattered more if you did 256x256.
Quote:
My rendertargets were relatively big for the time (2048x1024 and such), though, perhaps it mattered more if you did 256x256.
Could be, you may have been limited by the texture access rather than the rasterizer.
I generally had my rendertargets the same size as the screen back then, so probably 1024x768 or such (yay non-pow2 support!).
Quote:
how widespread are compute shaders? Are people using them
compute shaders are teh future.
modern rendering tech is heavily based on them (forward+, light-indexed deferred rendering a la Battlefield, etc.). just look at what's going on on the PS4... esp. what statix is doing (100% compute-only pipeline).
smash will show us what's possible ;)
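For the curious, the "light-indexed" part of those techniques boils down to a binning pass along these lines. This is a heavily simplified sketch with made-up buffer and uniform names: one workgroup per 16x16 tile, and a coarse screen-space circle-vs-tile test instead of proper per-tile frustum and depth-range culling:

Code:
#version 430
layout(local_size_x = 16, local_size_y = 16) in;

struct Light { vec2 screenPos; float screenRadius; float pad0; };

layout(std430, binding = 0) readonly  buffer Lights    { Light uLights[]; };
layout(std430, binding = 1) writeonly buffer TileLists { uint  uTileList[]; };

uniform int uLightCount;
uniform int uTilesX;                      // number of 16x16 tiles horizontally

const uint MAX_LIGHTS_PER_TILE = 64u;     // per-tile slots; slot 0 holds the count

shared uint sCount;

void main() {
    if (gl_LocalInvocationIndex == 0u) sCount = 0u;
    memoryBarrierShared();
    barrier();

    // This tile's pixel bounds and its slice of the output list.
    vec2 tileMin = vec2(gl_WorkGroupID.xy) * 16.0;
    vec2 tileMax = tileMin + 16.0;
    uint base    = (gl_WorkGroupID.y * uint(uTilesX) + gl_WorkGroupID.x) * MAX_LIGHTS_PER_TILE;

    // 256 threads walk the light list in a strided loop.
    for (uint i = gl_LocalInvocationIndex; i < uint(uLightCount); i += 256u) {
        vec2 closest = clamp(uLights[i].screenPos, tileMin, tileMax);
        if (distance(closest, uLights[i].screenPos) <= uLights[i].screenRadius) {
            uint slot = atomicAdd(sCount, 1u);
            if (slot < MAX_LIGHTS_PER_TILE - 1u)
                uTileList[base + 1u + slot] = i;   // light indices start at slot 1
        }
    }

    memoryBarrierShared();
    barrier();
    if (gl_LocalInvocationIndex == 0u)
        uTileList[base] = min(sCount, MAX_LIGHTS_PER_TILE - 1u);
}

The shading pass (forward or compute) then reads its tile's count and indices from uTileList and only evaluates those lights.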