Mantle - a new low-level API for AMD GPUs
category: code [glöplog]
Understanding AMD’s Mantle: A Low-Level Graphics API For GCN
I am not a coder (nothing newer than 1995 technology, anyway :), but this could be interesting. It seems AMD is leveraging its presence in both the PS4 and XBO by introducing a new low-level API, presumably based on the XBO API, for current/next-gen Radeon cards. This could make porting games to the PC easier, but of course it couldn't replace traditional D3D or OpenGL versions for NVidia GPU users.
Will demo coders pick up Mantle? Will it mean going back to limited hardware, "GUS-only"-era compatibility (at least more so than today)? I guess we'll have to wait a bit longer to find out just how much of an improvement it can make.
I feel it's a bit too early to pass judgement, but I guess if it's open enough, then there's no reason why demos couldn't use it - apart from the whole ATI-only thing.
Sure, it should come in handy for the next-gen Rob is Jarig port.
it'll make a massive difference to something that's limited by the amount of cpu time spent in d3d - i.e. something that pushes a lot of draw calls. might enable, say, 9000 draw calls in a 60hz frame instead of 1000.
the problem is that (from personal experience) most demos probably don't need very many draw calls and are gpu limited, so it won't be much use to them.
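Back-of-the-envelope version of that claim, since the numbers are easy to check: at 60hz you have ~16.7ms of CPU budget per frame, and draws/frame is just budget divided by per-call cost. The per-call costs in this C++ sketch are invented placeholders picked to land near the 1000/9000 figures above, not measurements:
Code:
#include <cstdio>

int main() {
    const double frame_us = 1e6 / 60.0;     // 60hz frame budget: ~16667 us of CPU time

    // Hypothetical per-draw CPU costs (placeholders, not measured numbers):
    const double d3d_us_per_draw    = 16.0; // driver validation, state translation, etc.
    const double mantle_us_per_draw = 1.8;  // thin write into a command buffer

    printf("frame budget: %.0f us\n", frame_us);
    printf("d3d-style:    ~%.0f draws/frame\n", frame_us / d3d_us_per_draw);    // ~1042
    printf("mantle-style: ~%.0f draws/frame\n", frame_us / mantle_us_per_draw); // ~9259
    return 0;
}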
So perfect for those 3DS Max flyby demos!
From the company that brought you the Stream SDK!
Anyway, this is probably intended to entice xbox one/ps4 developers to port their stuff over to PC with Mantle and for that reason it might work (kinda) but do we really need to fragment the demoscene some more?
On the other hand, people already do this by writing single-vendor GLSL shaders I guess.
And Amiga demos that don't work on an Atari ST. Not a big deal I guess, if someone puts it to good use. Few of us can watch console or oldschool demos in real time either.
Mantle - A very risky move from AMD.
So that is what they did while not properly supporting high-level APIs...
It took them almost a year to get an OpenGL 4.3 beta driver ready - did anyone test that yet?
We have to wait for the specifications.
Plus: The rumors say that their new flagship hardware does not properly compete against NV hardware, so I will just wait for the numbers.
I suspect the following to happen:
Instead of going AMD only, I will go NV only.
As long as the NV hardware is superior for certain tasks, especially tasks involving control flow that is not completely non-divergent...
AMD is for gamers.
:P
Well played, yet another vendor-specific API. Yay!
We already have "GUS only". I mean, who in their right mind has an AMD card anyway, let alone tries to run demos on one. It's Nvidia all the way down, baby.
MaNtLe!
las: AMD themselves say that their new hardware will "Blow NV out of the water". They probably wouldn't have said this unless they have tests to back it up. :)
RIP NVIDIA.
Maybe Intel will buy their tech. Probably not. They've already advanced so much in the field on their own.
Sell your stock now (just not to me).
Quote:
AMD is for gamers.
That's a pretty good place to be at for a GPU manufacturer.
Quote:
I mean, who in their right mind has an AMD card anyway, let alone tries to run demos on one.
Looks like NVidia putting a tweaked Cg compiler in their drivers and letting everyone think it was proper GLSL was a wise anti-competition step. Now everybody writes broken shaders which NVidia happily accepts and which for some unknown reason don't work on other hardware with "obviously broken" drivers. ;)
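The classic trap is Cg/HLSL types like float3 leaking into "GLSL" because NV's compiler swallows them. A minimal C++ sketch of actually checking compile results - it assumes a current GL context (GLFW/SDL/whatever) and GLEW as the loader, both just stand-ins for what you really use:
Code:
// Never trust the most lenient compiler: check the compile status and read
// the info log. Assumes a current OpenGL context and GLEW already set up.
#include <GL/glew.h>
#include <cstdio>

// 'float3' is a Cg/HLSL type, not GLSL. NV's compiler has accepted it anyway;
// other vendors' compilers correctly reject it.
static const char* kBrokenFrag =
    "#version 120\n"
    "void main() {\n"
    "    float3 c = float3(1.0, 0.0, 0.0);\n"   // invalid GLSL
    "    gl_FragColor = vec4(c, 1.0);\n"
    "}\n";

bool compileAndCheck(GLenum type, const char* src) {
    GLuint sh = glCreateShader(type);
    glShaderSource(sh, 1, &src, NULL);
    glCompileShader(sh);

    GLint ok = GL_FALSE;
    glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);

    char log[4096];
    GLsizei len = 0;
    glGetShaderInfoLog(sh, sizeof(log), &len, log);
    if (len > 0) fprintf(stderr, "shader log:\n%s\n", log); // warnings matter too

    glDeleteShader(sh);
    return ok == GL_TRUE; // false on strict drivers for kBrokenFrag
}

If compileAndCheck(GL_FRAGMENT_SHADER, kBrokenFrag) returns true without a peep, you're on the lenient compiler.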
looks like a good thing. removes some overhead. more work for coders. but it's getting a lil mean. how many render backends do game developers have already? how many incompatible "efficient" ways to render some polygons, on how many platforms and apis?
oh, welcome back, BSODs \0/
yzi: me, and it's fucking awesome.
yzi: AMD bought gDEBugger and made it free. They try to be strict with their drivers. They introduced ARB_debug_output. Granted, they have a history of bad drivers, but that can be mitigated by a bit of care when choosing (e.g. Steam has recommendations). Also, they push for actual CPU/GPU shared memory addressing, which is very relevant for those of us not working in video games.
Meanwhile, NVIDIA voluntarily cripple OpenCL performance to promote CUDA.
Guess what, I prefer the former entity.
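Since ARB_debug_output came up: hooking it is only a few lines, and it makes driver strictness visible instead of silent. The sketch assumes a context created with the debug flag and GLEW again; the cast papers over const differences between loader headers:
Code:
// Route driver messages through ARB_debug_output so sloppy API usage shows
// up immediately. Assumes a debug GL context and GLEW as the loader.
#include <GL/glew.h>
#include <cstdio>

static void APIENTRY onGlDebug(GLenum source, GLenum type, GLuint id,
                               GLenum severity, GLsizei length,
                               const GLchar* message, const void* userParam) {
    (void)source; (void)type; (void)id; (void)length; (void)userParam;
    fprintf(stderr, "[GL debug 0x%x] %s\n", severity, message);
}

void installDebugOutput() {
    if (GLEW_ARB_debug_output) {
        // Synchronous mode: the callback fires inside the offending call,
        // so a breakpoint in onGlDebug lands right on the broken usage.
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS_ARB);
        glDebugMessageCallbackARB((GLDEBUGPROCARB)onGlDebug, NULL);
    }
}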
in my years of trying different ati/nv cards, the nv cards never caught fire, unlike certain cards that definitely need a mantle!
I think that's my cue to continue work on my PowerSGL code and make a demo with it.
Quote:
las: AMD themselves say that their new hardware will "Blow NV out of the water". They probably wouldn't have said this unless they have tests to back it up. :)
Prolly they just wrote the benchmarks with Mantle... As it targets their own internal architecture, it should be faster. Also, Apple said Touch ID uses subcutaneous layers - they probably wouldn't have said this unless they have tests to back it up. :) eh?
on topic: yay, hardware-specific api. i suspect that amd just wants to be able to point at the developers when their code doesn't perform as well as on nvidia (like it has up to now). i guess smash is one of the few guys in the world who manages to get his demos running better on amd cards than on nvidia ones ;) how do you do that?!
before a bunch of pouet smartarses chime in (oh wait, it's too late), can i just try and cut this thread off here: "if you work on the renderer of a (pc & modern console) game, you probably know why you might want this and can decide whether the extra work of supporting another platform is worth it for the cpu performance gain in your case. if you don't work on the renderer of a (pc & modern console) game, this is probably not for you. nothing to see here, move along."
:)
Yeah. The real gains from this will be in the high-end game rendering engines. And of course I support anything that competes with DX, really.
Especially a multiplatform approach.
Strangely, I'm more open to vendor-locked APIs (which, as some whispers on the Internet have suggested, isn't even strictly true here, because Mantle could reasonably be implemented on NV hw at some point) than to OS-locked APIs.
Hmm, factoring in everything, the only thing I can see that would be able to give them that much of an increase in latency would be that they are mmap-ing some kind of command buffer on the gpu. NVidia/Intel should potentially be able to support something similar by creating an emulation layer (might not be trivial, but not too bad). But if the GPU is on the same die as the CPU, then it might be hard for NVidia to keep up over a bus (compared to on-die).
My biggest question is actually how security is handled if they expose that "much"?
decrease in latency of course
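To make that mmap speculation concrete - everything in this sketch is invented (the packet format, the ring, the "doorbell" register), none of it is real Mantle API, it's just what a kernel-transition-free submit path could look like from user space:
Code:
// Hypothetical user-space command ring: the CPU writes packets into memory
// the GPU can see and pokes a mapped "doorbell" register. All types and
// layout here are invented for illustration - not actual Mantle.
#include <cstdint>
#include <atomic>

struct GpuCommand {               // invented packet format
    uint32_t opcode;
    uint32_t args[3];
};

struct CommandRing {
    GpuCommand*        ring;      // would come back memory-mapped from the driver
    uint32_t           capacity;  // slot count, power of two
    uint32_t           writeIndex;
    volatile uint32_t* doorbell;  // mapped register the GPU front-end watches
};

// The cheap path being speculated about: a draw becomes a plain store into
// shared memory - no kernel transition, no per-call driver validation.
inline void emit(CommandRing& r, const GpuCommand& cmd) {
    r.ring[r.writeIndex & (r.capacity - 1)] = cmd;
    r.writeIndex++;
}

inline void kick(CommandRing& r) {
    // Make the packet stores visible before the GPU sees the new cursor.
    std::atomic_thread_fence(std::memory_order_release);
    *r.doorbell = r.writeIndex;
}

Which is exactly why the security question above matters: if user space can scribble commands straight into a ring like this, something (the hardware's command processor, or the kernel driver setting up the mapping) still has to confine what those packets can touch.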