Vector Units, and CUDA / GPGPU
#11
I'll say it here and now: CUDA is a marketing ploy developed by think tanks at nVIDIA who are desperately trying to think of new ways to make ever-more-expensive video cards. Seriously, if you're buying a video card with the expectation that it's going to be able to run your Photoshop filters and spreadsheet calculations in parallel with your CPU, then I have a bridge I'd like to sell you.

The whole premise of CUDA will continue to fail in the face of ever-more threading in the CPU cores themselves, which comes without any of the penalties of the CUDA architectural model (no bus latencies, less heat generation, better cooling options, and far fewer penalties for the complicated coding patterns needed by most modern and future 3D scene generators). And for that matter, many games are limited as much by complicated AI and physics processing as they are by graphics, and CUDA would do nothing for that. CUDA's just there because nVIDIA can't think of anything else new to add to their video cards anymore to merit hefty luxury price tags.
Jake Stine (Air) - Programmer - PCSX2 Dev Team

#12
(04-24-2009, 06:10 AM)Air Wrote: CUDA's just there because nVIDIA can't think of anything else new to add to their video cards anymore to merit hefty luxury price tags.

That's why I bought a $40 8400GS, lol
#13
(04-24-2009, 06:10 AM)Air Wrote: I'll say it here and now: CUDA is a marketing ploy developed by think tanks at nVIDIA who are desperately trying to think of new ways to make ever-more-expensive video cards. Seriously, if you're buying a video card with the expectation that it's going to be able to run your Photoshop filters and spreadsheet calculations in parallel with your CPU, then I have a bridge I'd like to sell you.

The whole premise of CUDA will continue to fail in the face of ever-more threading in the CPU cores themselves, which comes without any of the penalties of the CUDA architectural model (no bus latencies, less heat generation, better cooling options, and far fewer penalties for the complicated coding patterns needed by most modern and future 3D scene generators). And for that matter, many games are limited as much by complicated AI and physics processing as they are by graphics, and CUDA would do nothing for that. CUDA's just there because nVIDIA can't think of anything else new to add to their video cards anymore to merit hefty luxury price tags.

I didn't want to sound like a newb, but since you've said this much already, I'll just add. :)

CUDA so far has been tested/reviewed and has failed miserably. Only one game, Mirror's Edge, has actually made use of its capabilities. Overall, the project has been a huge failure in the industry so far.
CPU : i7 4930k @ 4.0GHz
RAM : Corsair Vengeance 8x8GB 1603MHz 1.60v
GPU : Asus R9 290X 4GB
MOBO: Asus Rampage Extreme IV
OSYS: Windows 10 Pro
#14
What other games "use" it?
#15
Fallout 3

Crysis Warhead

Far Cry 2

UT3

There are probably more, but those for sure. :)
CPU : i7 4930k @ 4.0GHz
RAM : Corsair Vengeance 8x8GB 1603MHz 1.60v
GPU : Asus R9 290X 4GB
MOBO: Asus Rampage Extreme IV
OSYS: Windows 10 Pro
#16
Ah... nothing I play, lmao. I hope Prototype uses it. I can't wait for it! :)
#17
Hmmm, CUDA has already been used to good effect for tasks like video encoding/transcoding, previously the domain of multi-core CPUs. In addition, physics processing is already intimately linked to CUDA, as that is how nVidia run the PhysX engine on their GPUs. Both those tasks are very parallel, though, requiring the same thing to be done to many items at once (SIMD). I'm guessing from what Air said that VU0 and VU1 are much more closely linked to the CPU, and that the latency of grouping similar VU requests, passing them to the GPU, and awaiting the results back from it would make offloading VU work to the GPU impossible. That pretty much answers my question: I'm assuming the emulation cannot tolerate the latency that would be required if VU0 and VU1 work were buffered and then sent in large similar chunks at once through the GPU using CUDA (or a DX11 GPGPU interface).
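
To put that latency point in concrete terms, here is a minimal, hypothetical CUDA micro-benchmark (nothing to do with PCSX2's or GSdx's actual code; the madd kernel and the batch sizes are invented purely for illustration). It times many tiny synchronous upload-run-readback round trips against one large batch; the fixed per-launch and PCIe transfer overhead dominates the small-batch case, which is exactly the problem with offloading individual VU micro-programs:

[code]
// Hypothetical micro-benchmark: the round-trip cost of offloading many small
// "VU-sized" batches to the GPU versus one large batch. Purely illustrative;
// not part of PCSX2 or GSdx.
#include <cstdio>
#include <chrono>
#include <cuda_runtime.h>

// Stand-in for a VU-style vector op: a simple multiply-accumulate.
__global__ void madd(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = data[i] * 1.5f + 0.25f;
}

static double roundTripMs(float* host, float* dev, int n, int batches)
{
    using Clock = std::chrono::steady_clock;
    auto t0 = Clock::now();

    int per = n / batches;
    for (int b = 0; b < batches; ++b)
    {
        float* h = host + b * per;
        float* d = dev  + b * per;
        // Upload, run, and read back each batch synchronously -- this mimics
        // an emulator that needs the VU results back before it can continue.
        cudaMemcpy(d, h, per * sizeof(float), cudaMemcpyHostToDevice);
        madd<<<(per + 255) / 256, 256>>>(d, per);
        cudaMemcpy(h, d, per * sizeof(float), cudaMemcpyDeviceToHost);
    }
    cudaDeviceSynchronize();

    return std::chrono::duration<double, std::milli>(Clock::now() - t0).count();
}

int main()
{
    const int n = 1 << 20;                 // ~1M floats of total work
    float* host = new float[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));

    // Many tiny round trips (fine-grained, VU-style offload) vs. one big one.
    printf("4096 small batches: %.2f ms\n", roundTripMs(host, dev, n, 4096));
    printf("1 large batch:      %.2f ms\n", roundTripMs(host, dev, n, 1));

    cudaFree(dev);
    delete[] host;
    return 0;
}
[/code]

And that is the optimistic case: in an emulator the CPU usually needs each VU result before it can issue the next piece of work, so the batches can't even be made large in the first place.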

Gabest, the PS2 plug-in developer (and a helluva lot of other good graphics stuff too, I see from a quick Googling)... hmmm, do those plug-ins use much CPU time? I know they must use some; is there a way to see how much CPU time is being spent running the various bits of PCSX2 (in particular the graphics plug-in)? I know there are lots of figures at the top of the window when using GSdx, and the fps and % are pretty obvious; I'm guessing most of the others are about the number of polygons/vertices/objects or something.

CUDA is almost certainly going to be short-lived, rather like the proprietary GPU interfaces of the late '90s before Direct3D and OpenGL replaced the mish-mash of competing interfaces like Glide, PowerVR, Riva, Matrox, etc., where every game had to ship with a driver for each type of card (in the early days of Windows 95, games could actually ship with as many as seven or eight different graphics drivers to try to support most 3D cards available; it was madness).

I seem to have lost track of what I was saying. I guess the question now is: how much CPU load do the graphics plug-ins actually incur, and is there an easy way to find this out? Obviously in software mode the answer will be "a helluva lot", but how efficiently are they able to pass everything sent to them on to the GPU?
CPU: Athlon 64 X2 4400+ (2.2GHz @ solid 2.53GHz)
GPU: nVidia GeForce 8800GTS 640MB (not currently O/C)
Memory: 2GB DDR400 (2x 1GB @ DDR422 2.5-3-2)
#18
(04-24-2009, 06:21 AM)unauthorizedlogin Wrote:
(04-24-2009, 06:10 AM)Air Wrote: I'll say it here and now: CUDA is a marketing ploy developed by think tanks at nVIDIA who are desperately trying to think of new ways to make ever-more-expensive video cards. Seriously, if you're buying a video card with the expectation that it's going to be able to run your Photoshop filters and spreadsheet calculations in parallel with your CPU, then I have a bridge I'd like to sell you.

The whole premise of CUDA will continue to fail in the face of ever-more threading in the CPU cores themselves, which comes without any of the penalties of the CUDA architectural model (no bus latencies, less heat generation, better cooling options, and far fewer penalties for the complicated coding patterns needed by most modern and future 3D scene generators). And for that matter, many games are limited as much by complicated AI and physics processing as they are by graphics, and CUDA would do nothing for that. CUDA's just there because nVIDIA can't think of anything else new to add to their video cards anymore to merit hefty luxury price tags.

I didn't want to sound like a newb, but since you've said this much already, I'll just add. :)

CUDA so far has been tested/reviewed and has failed miserably. Only one game, Mirror's Edge, has actually made use of its capabilities. Overall, the project has been a huge failure in the industry so far.

(04-24-2009, 06:28 AM)unauthorizedlogin Wrote: Fallout 3

Crysis Warhead

Far Cry 2

UT3

There are probably more, but those for sure. :)

You sure you didn't mix up CUDA with PhysX? ... 'cos they aren't the same, as far as I know. :P
CPU - [PHENOM 9950 BE 3.0 GHZ] | GFX - [Evga 260GTX]
OS - [Vista 64 Business] | RAM - [4GB Gskill 1066Mhz]
#19
PhysX was originally owned by Ageia, and was integrated into CUDA after Nvidia purchased the company in '08.

So PhysX is now dependent on CUDA. :)
CPU : i7 4930k @ 4.0GHz
RAM : Corsair Vengeance 8x8GB 1603MHz 1.60v
GPU : Asus R9 290X 4GB
MOBO: Asus Rampage Extreme IV
OSYS: Windows 10 Pro
#20
Quote:I seem to have lost track of what I was saying. I guess the question now is: how much CPU load do the graphics plug-ins actually incur, and is there an easy way to find this out? Obviously in software mode the answer will be "a helluva lot", but how efficiently are they able to pass everything sent to them on to the GPU?

The CPU % on the GSdx title bar tells you what percentage of the thread dedicated to it is being used.
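
For anyone wondering how a figure like that is produced, here is a minimal, hypothetical Windows sketch (not GSdx's actual code) of computing a per-thread CPU percentage: sample the thread's accumulated kernel + user time with GetThreadTimes() and divide the delta by the elapsed wall-clock time.

[code]
// Hypothetical sketch (not GSdx's actual code): computing a per-thread CPU
// percentage on Windows by sampling the thread's accumulated kernel + user
// time with GetThreadTimes() and dividing the delta by wall-clock time.
#include <windows.h>
#include <cstdio>

static unsigned long long toTicks(const FILETIME& ft)
{
    // FILETIME counts in 100-nanosecond units.
    return (static_cast<unsigned long long>(ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
}

static unsigned long long threadCpuTicks(HANDLE thread)
{
    FILETIME creation, exitTime, kernel, user;
    GetThreadTimes(thread, &creation, &exitTime, &kernel, &user);
    return toTicks(kernel) + toTicks(user);
}

int main()
{
    HANDLE self = GetCurrentThread();

    unsigned long long cpu0  = threadCpuTicks(self);
    ULONGLONG          wall0 = GetTickCount64();        // milliseconds

    // Busy-work standing in for roughly one second of "plugin" activity.
    volatile double x = 0.0;
    while (GetTickCount64() - wall0 < 1000) x += 0.5;

    unsigned long long cpuDelta  = threadCpuTicks(self) - cpu0;  // 100 ns units
    ULONGLONG          wallDelta = GetTickCount64() - wall0;     // milliseconds

    double percent = (cpuDelta / 10000.0) / (double)wallDelta * 100.0;
    printf("thread CPU usage over the last second: %.1f%%\n", percent);
    return 0;
}
[/code]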