pouët.net

which is dead? the GPU or the CPU

category: general [glöplog]
99% is way too high. a lot of clusters are used for large matrix computations, and GPU architectures are quite suitable for several important kernels (notably, iterative solvers for sparse linear systems map very well onto GPUs, and those are quite important in practice).

from what i've seen so far, the main problem with GPUs for scientific computing is the lack of IEEE-compliant rounding (or, in fact, any rounding or error-propagation guarantees) and the missing support for double-precision arithmetic. the first slows convergence of iterative solvers (and, naturally, degrades the quality of the results), and the second makes them unsuitable for very large matrices or high-precision applications.

but if there's sufficient demand, i doubt that it will take long for such things to appear.
added on the 2008-05-31 20:40:23 by ryg ryg
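
[editor's note: the sparse matrix-vector product is the inner loop of the iterative solvers ryg mentions (conjugate gradients and friends), and it shows why they map well onto GPUs: every matrix row can be handled by its own thread. a minimal sketch in CUDA, assuming a matrix stored in compressed sparse row (CSR) format; the kernel name and parameter layout are illustrative, not taken from any particular library. note that everything is single precision, which is exactly the limitation described above.]

    // spmv_csr: computes y = A*x with one thread per matrix row.
    // A is in CSR format: row_ptr has n_rows+1 entries; col_idx and
    // val have one entry per nonzero.
    __global__ void spmv_csr(int n_rows,
                             const int   *row_ptr,
                             const int   *col_idx,
                             const float *val,   // float only: no doubles on these GPUs
                             const float *x,
                             float       *y)
    {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < n_rows) {
            // single-precision accumulation; this is where the
            // rounding/convergence issue from the post bites
            float sum = 0.0f;
            for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
                sum += val[j] * x[col_idx[j]];
            y[row] = sum;
        }
    }
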
Amiga is awesome. GPUs are awesome. See the connection now? :)
added on the 2008-05-31 20:40:40 by NeARAZ NeARAZ
if you want large computing power, find someone who gives you access to Storm
added on the 2008-06-01 01:11:36 by Gargaj Gargaj
I hate x-men. What about bat-computer?
added on the 2008-06-01 01:14:04 by xernobyl xernobyl
http://www.youtube.com/watch?v=GAoLJc-BGN8
from cxnull's oneliner
added on the 2008-06-01 08:43:27 by LiraNuna LiraNuna
ryg, but then the scientific guys would have to recode all their programs in shader languages, no? or is there some cool compiler that can already target gpus with parallel stuff?
added on the 2008-06-01 10:26:09 by skrebbel skrebbel
skrebbel, there's CUDA (maybe the Wikipedia page is more useful to non-devs). it's not real C, it has some restrictions (no recursive functions for now) and some extensions, but it's got proper integration with normal programs, and you don't need to go through any graphics API.

plus you need to design stuff for parallelism explicitly or it won't help. so yeah, you have to rewrite your code (well, the solvers at least). but i'm fairly certain that as soon as someone starts porting core LAPACK routines to CUDA (bound to happen within the next few years), there's going to be a lot of interest in it :)
added on the 2008-06-01 20:40:27 by ryg ryg
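
[editor's note: to illustrate what ryg means by "proper integration with normal programs": a complete CUDA program is ordinary host C/C++ plus kernels marked __global__, launched with the <<<blocks, threads>>> extension, with no graphics API anywhere. a minimal, hypothetical sketch (the kernel name and sizes are made up for illustration):]

    #include <cuda_runtime.h>
    #include <stdio.h>

    // kernel: runs on the GPU, one thread per array element
    __global__ void scale(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;
    }

    int main(void)
    {
        const int n = 1024;
        float host[1024];
        for (int i = 0; i < n; ++i) host[i] = (float)i;

        float *dev;
        cudaMalloc((void **)&dev, n * sizeof(float));
        cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

        // the <<< >>> launch syntax is one of the language extensions
        // mentioned above: 4 blocks of 256 threads cover 1024 elements
        scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

        cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(dev);
        printf("host[10] = %f\n", host[10]);  // prints 20.0
        return 0;
    }
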
i meant rewriting already-parallel, intended-for-clusters code to run in shaders instead. so yeah, *wow*, that cuda thing looks pretty cool :)
added on the 2008-06-01 20:53:26 by skrebbel skrebbel
