Let's talk about DirectX 10 WARP10 Software Rendering
category: general [glöplog]
So, the word on the street is that Microsoft are dipping one foot in the past and creating a software rasteriser for DirectX that is fully compliant with all Shader Models, and provides a high-quality, reference-quality render courtesy of your CPU and a super-optimised JIT compiler. Something about x86 doing shaders has me all up in a funk.
Temis need not worry though, because the performance is the cubic root of ass: Crysis at 800x600 on a cutting-edge 8-core Core i7 @ 3.0GHz gives 7.36 fps.
Still, I think it'd be awesome for doing super quality kkaptures of demos - and certainly it'll be a lot easier for anyone to make captures from any bit of hardware they have (although very slowly).
Even older demos should run reasonably. It'd be interesting to see how many 3DMarks (2001) this Software GPU gets.
More info:
http://msdn.microsoft.com/en-us/library/dd285359.aspx
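For the curious, here's a rough, untested sketch (just going from the MSDN docs above) of how you'd ask Direct3D 10.1 for the WARP device instead of your GPU:

// Rough sketch, untested - pick the WARP software rasterizer instead of the GPU.
// Everything after device creation works exactly as with a hardware device.
#include <d3d10_1.h>
#pragma comment(lib, "d3d10_1.lib")

ID3D10Device1* CreateWarpDevice()
{
    ID3D10Device1* device = NULL;
    HRESULT hr = D3D10CreateDevice1(
        NULL,                       // default adapter
        D3D10_DRIVER_TYPE_WARP,     // CPU rasterizer instead of the GPU
        NULL,                       // no external rasterizer DLL
        0,                          // no creation flags
        D3D10_FEATURE_LEVEL_10_1,   // WARP is supposed to do the full feature level
        D3D10_1_SDK_VERSION,
        &device);
    return SUCCEEDED(hr) ? device : NULL;
}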
Why would the capture be of better quality when you run it on the CPU instead of the GPU?
barrio: because the gpus vary, there are sometimes glitches, not everything gets supported, there's the good old 'doesn't work on x' thing etc...
In theory, you would be able to run any demo at 1080p, with maximum quality settings, and 16x AA. Slowly, yes.. but that's not a massive problem if you're capturing.
Then why not just go straight ahead and render everything from 3dsmax, blender, etc :)
I mean, you render everything and then it gets ugly by the compression of the video codec. I don't see the big advantage of this ultra high quality capture.
i think it's a good extra step for retro-compatibility and preservation, especially if we want to be able to run current demos on the hardware of years to come.
Nice thing. The old dx reference rasterizer was/is slow as hell. This could probably benefit a lot from dynamic compilation. I wonder what techniques they're using...
Capturing: I think it is a good idea, because I have seen some demo captures that had errors in them because the driver didn't support something, etc.
It would be really, really slow though...
barrio: if you have demos your pc won't run, it's the only way to watch them. If you have a slower card, you can watch them at higher quality too. Plus some of us use other OSes; it's often better to watch a good quality capture than mess about rebooting.
Also, if you capture at high quality and encode at high quality with high bitrate, the quality of the video is actually not far off the original. Perfectly watchable anyway. It just eats a lot of disk space :)
Quote:
because the gpus vary, there are sometimes glitches, not everything gets supported, there's the good old 'doesn't work on x' thing etc...
dx10 has no caps, just a defined featureset - therefore, if something works on dx10 and your card is dx10 compliant it works on your card - full stop. (*in theory.)
that said, there are still bound to be little differences between the output of different hw vendors' cards - but wait, if something was developed on nvidia or ati and never checked on anything else, what's to say it won't look different again on warp?
anyway, did they ever say it would support anything other than dx10+windows 7? where did the idea spawn from that it might work with dx8 or dx9, let alone opengl? that'd blow the whole "backwards compatibility" argument out of the window.
still, it's a useful solution for ms because it'll enable everyone to use windows aero, even if they don't have the necessary gpu. which means they can probably start deleting the legacy oldschool code path. wahey!
That's a great idea. IMO. But will it run in XP? xD or Wine :P
Quote:
*in theory
yeah. sure. ;)
Quote:
even if they don't have the necessary gpu
Running an OS at 2fps... that's my dream.
wouldn't running aero on the cpu murder battery life? (and slow down your system in a bad way most likely too).
Dunno if it'll support older stuff.. it'd be nice if it did. Opengl has a software renderer you can force an app to use in various ways on osx, so I guess there's ways to force it on windows too?
Quote:
Opengl has a software renderer you can force an app to use in various ways on osx, so I guess there's ways to force it on windows too?
And there's also mesa.
Quote:
Opengl has a software renderer you can force an app to use in various ways on osx, so I guess there's ways to force it on windows too?
XP and Vista don't include a reference rasterizer by default... it's card drivers which install it, so, not really.
You can use Mesa though, as xrl pointed out.
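Whichever software renderer you end up forcing, a quick sanity check (plain GL calls, assuming a context is already current) tells you what you actually got:

// Assumes a current GL context; prints which implementation is driving it
// (hardware ICD, Apple software renderer, Mesa, ...).
#include <stdio.h>
#ifdef _WIN32
#include <windows.h>
#endif
#ifdef __APPLE__
#include <OpenGL/gl.h>
#else
#include <GL/gl.h>
#endif

void print_renderer(void)
{
    printf("GL_VENDOR:   %s\n", (const char*)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));
}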
so the question is:
do you want your rendering to be
1. reference-correct
or
2. visually correct on all videocards?
:)
The problem is really the "realtime" assumption behind all of this. Having a reference rasterizer (even if it's slow) can help in developing plenty of software applications that don't really need realtime (or fast fps, for that matter) but still need 3D rendering.
-And- those applications could benefit from having a faster renderer in hardware, but they don't strictly need it. There are plenty of useful cards out there (with triple or even quadruple outputs, not in the range of nVidia or ATI cards... Matrox M-Series come to mind), used for medical applications and other stuff, which don't include a full OpenGL or DirectX hardware implementation and could really benefit from a software implementation.
Not to forget that a 100% correct software implementation might also help cards to "get correct results" by comparing against the reference software rasterizer (that's what software reference rasterizers should be for, anyway).
So I just can't help but see this whole WARP thing as a good idea, no matter how you look at it.
Jcl: There's already a (dead slow, but still) reference rasterizer in D3D.
kusma: I'm not a DirectX-savvy person (I'm not a 3D coder anymore, but I used OpenGL back in the day), but the last time I saw it (probably D3D 7 or something), the reference rasterizer was not only dead slow (and I mean it) but also pretty much incomplete. If this means having a full implementation, it basically means you don't really need different codepaths for software and hardware (which you really did with, at least, the older reference rasterizers).
So it's still a good thing :)
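As a rough, untested sketch of what a single codepath buys you (same D3D10.1 creation call for all three driver types): try hardware first and quietly fall back to WARP or even ref, with nothing else in the renderer having to change:

// Sketch: one codepath, three rasterizers. Try the GPU first, then WARP,
// then the reference rasterizer - the rest of the code never notices.
#include <d3d10_1.h>
#pragma comment(lib, "d3d10_1.lib")

ID3D10Device1* CreateAnyDevice()
{
    const D3D10_DRIVER_TYPE types[] = {
        D3D10_DRIVER_TYPE_HARDWARE,
        D3D10_DRIVER_TYPE_WARP,
        D3D10_DRIVER_TYPE_REFERENCE
    };

    for (int i = 0; i < 3; ++i)
    {
        ID3D10Device1* device = NULL;
        if (SUCCEEDED(D3D10CreateDevice1(NULL, types[i], NULL, 0,
                                         D3D10_FEATURE_LEVEL_10_1,
                                         D3D10_1_SDK_VERSION, &device)))
            return device;
    }
    return NULL;
}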
Isn't it just to be prepared for Intel's Larrabee?
Twinside: How does that make sense?
Because it can compile Larrabee code and use it as an "accelerator".
So that you can use spare CPU cycles for GPU stuff? I doubt that would be worth the trouble.
Lord Graga: Intel announced Larrabee as a GPU with a modified x86 instruction set (wikipedia)
Twinside: That's up to Intel to code the implementation. Not Microsoft.