ATI or Nvidia...Does it matter?
#11
Basically, Radeons have a more complex way of executing code, as opposed to nVidia's method, which is more direct (a sort of "brute force" method). nVidia's architecture is scalar, while AMD's architecture is superscalar. It's sort of like the way you can throw a 2-threaded app at a quad-core CPU and not see the full potential of that CPU. It all depends on the case, but real-world results generally don't differ much between comparable cards of each brand.
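The quad-core analogy above can be sketched numerically. This is only a toy illustration (the function and numbers are made up for the example, not a model of any real GPU or CPU):

```python
# Toy sketch of the analogy: hardware units sit idle when the workload
# doesn't expose enough parallelism to fill them.
def utilization(threads: int, cores: int) -> float:
    """Fraction of cores kept busy, assuming one thread per core."""
    return min(threads, cores) / cores

print(utilization(2, 4))  # 0.5 -> a 2-threaded app uses half of a quad core
print(utilization(8, 4))  # 1.0 -> more threads than cores keeps them all busy
```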

IMO - I see nVidia as a more compatible architecture. When dealing with emulation, I believe compatibility is key.

Honestly, the HD 6950 likely boasts a bit more power in the case of PCSX2, but I wouldn't be surprised to see the GTX 560 Ti not far behind.
Reply
#12
Actually, Radeons are more brute force, while Nvidia is more efficient per unit. ATI just benefits from its vector processors being more efficient on a per-area basis, even if they need slightly fancier scheduling in the driver.
Reply
#13
The stream processors in Nvidia's and AMD's respective architectures are slightly different, but there's a reason for that. Nvidia's architecture is scalar, while AMD's architecture is superscalar. Basically, with Nvidia's architecture, you can throw any piece of code at it and it will run on as many of the stream processors as it needs - they all have the same functionality (bar the special function unit - there's one of those per eight stream processors but it's not part of the count) and are pretty generalised. It's a brute force method - you just throw code at it and it works everything out itself.

On the other hand, AMD's architecture has blocks of five (well, technically six) stream processors that have differing functionality. Four of them can handle FP MAD, FP MUL, FP/INT ADD and dot product calculations, while the fifth unit can't handle dot products, INT ADD or double precision calculations, but can handle INT MUL, INT DIV, bit shifting and transcendental calculations (SIN, COS, LOG, etc). It's a bit more complex, but if the code is optimised well, it can deliver much higher performance. That's also why the FLOPS figures on the AMD chips are quite a bit higher: they only count FP MAD and FP MUL, and all of the units in the AMD chips can do those calculations (they're the most widely used).

I'd say the AMD architecture is a lot cleverer in many respects, but it does require a bit more work from the developer to achieve peak performance.
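The throughput difference described above can be sketched as a toy model: if the compiler/driver can pack independent operations into wide bundles, the VLIW-style design finishes in fewer issue cycles than a single scalar lane. The bundle width and function names here are assumptions for illustration only, not real hardware behaviour:

```python
# Illustrative sketch (not real hardware): issuing N independent ops
# in VLIW bundles of up to `width` slots vs one-at-a-time on a scalar lane.
def vliw_cycles(num_independent_ops: int, width: int = 5) -> int:
    """Issue cycles needed if up to `width` independent ops fill each bundle."""
    return -(-num_independent_ops // width)  # ceiling division

def scalar_cycles(num_independent_ops: int) -> int:
    """A single scalar lane issues one op per cycle, in order."""
    return num_independent_ops

ops = 20
print(vliw_cycles(ops))    # 4 cycles, if the compiler can fill every slot
print(scalar_cycles(ops))  # 20 cycles on one scalar lane
```

The gap only materialises when the code actually exposes enough independent work to fill the slots, which is the "requires a bit more work from the developer" caveat.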
Reply
#14
There is no difference in emulation between ATI and Nvidia; they are both effective, so you won't go wrong choosing one or the other, as long as they are high-end cards.
Reply
#15
Quote:Nvidia's architecture is scalar, while AMD's architecture is superscalar.
Actually, Nvidia is superscalar, and AMD is vectorial (VLIW).
scalar ~ works on a single value (e.g. one integer) at a time.
superscalar (let's say 4-way) ~ works on 4 values in parallel, with no constraints between the 4 pipelines. Easier to increase frequency (though limited by silicon nowadays).
vectorial/VLIW ~ works on vectors, with only one pipeline. In VLIW, each scalar element of the vector goes to a specialized unit that can handle only a limited set of operations.

AMD: less hardware, but you need a very good driver to sort instructions into the format of your vector (otherwise some units will sleep...).
Nvidia: more hardware (more cost, more power), but you don't need as good a driver: you can send your commands to any execution unit. In my opinion this architecture is also better for (the not-so-useful) GPGPU, because its instructions are not as regular as graphics computations.
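The scheduling point can be made concrete with a toy packer. This is an assumption-laden sketch, not driver code: a bundle can only hold operations whose dependencies are already resolved, so a dependent chain leaves slots idle no matter how clever the packing is:

```python
# Toy greedy VLIW packer: ops is a list of (name, deps) pairs, where deps
# is the set of op names that must complete first. Bundle width is assumed.
def pack_bundles(ops, width=4):
    """Pack independent ops into bundles of up to `width` slots."""
    bundles, done = [], set()
    remaining = list(ops)
    while remaining:
        bundle = []
        for op in list(remaining):
            name, deps = op
            if deps <= done and len(bundle) < width:
                bundle.append(name)
                remaining.remove(op)
        done |= set(bundle)
        bundles.append(bundle)
    return bundles

# A fully dependent chain: every op waits on the previous one.
chain = [("a", set()), ("b", {"a"}), ("c", {"b"}), ("d", {"c"})]
print(pack_bundles(chain))  # [['a'], ['b'], ['c'], ['d']] -> 3 of 4 slots sleep

# Fully independent ops pack into a single full bundle.
indep = [("a", set()), ("b", set()), ("c", set()), ("d", set())]
print(pack_bundles(indep))  # [['a', 'b', 'c', 'd']]
```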
Reply
#16
(06-12-2011, 03:56 AM)nosisab Ken Keleh Wrote: each one bets on a different approach, ATI on the sheer number of stream processors (which can pass a thousand in some cards) while Nvidia is based on those mentioned CUDA processors, which can pass half a thousand in some cards.

Not at all. AMD still has fewer physical shader units than Nvidia; the AMD parts can just execute multiple operations per shader unit, as they are multi-way units.
(06-12-2011, 10:00 AM)Rezard Wrote: Basically, Radeons have a more complex way of executing code, as opposed to nVidia's method, which is more direct (a sort of "brute force" method). nVidia's architecture is scalar, while AMD's architecture is superscalar. It's sort of like the way you can throw a 2-threaded app at a quad-core CPU and not see the full potential of that CPU. It all depends on the case, but real-world results generally don't differ much between comparable cards of each brand.

IMO - I see nVidia as a more compatible architecture. When dealing with emulation, I believe compatibility is key.

Honestly, the HD 6950 likely boasts a bit more power in the case of PCSX2, but I wouldn't be surprised to see the GTX 560 Ti not far behind.

No. AMD's is the brute force method, and Fermi, particularly GF104/GF114, are superscalar designs.
(06-12-2011, 11:56 AM)Game Wrote: There is no difference in emulation between ATI and Nvidia; they are both effective, so you won't go wrong choosing one or the other, as long as they are high-end cards.

There are actually differences: Nvidia is more likely to choke on the 8-bit texture shader, which is why you should only enable it when the game will take advantage of it.
(06-12-2011, 11:13 AM)Rezard Wrote: On the other hand, AMD's architecture has blocks of five (well, technically six) stream processors that have differing functionality. Four of them can handle FP MAD, FP MUL, FP/INT ADD and dot product calculations, while the fifth unit can't handle dot products, INT ADD or double precision calculations

I'd say the AMD architecture is a lot cleverer in many respects, but it does require a bit more work from the developer to achieve peak performance.

It's blocks of four now; it used to be five.
Reply
#17
I'd go with Nvidia regardless of all of this technobabble. In recent history both of them have had driver issues, but Nvidia seems to be moving ahead at a much faster rate performance-wise. There was a great post on GameFAQs about ATI vs Nvidia a while ago, and I think I'll share it with you.

[Image: 2ibn6ae.jpg]
Reply
#18
WOW, the post in the picture indeed explained everything in a way where I didn't understand anything, but I grasped the author's opinion fairly well :)

@Leonhart
CUDA processors are multipurpose processors able to run almost any C code you throw at them; they're clearly intended for general computing rather than just graphics processing. Nvidia actively pursues this purpose and is making its way into areas like supercomputing, biological research, theoretical physics, and almost everything that benefits from heavy parallel processing.

PS: Now, maybe, people will begin to understand why, more than once, I've said that AMD (in the microprocessor segment) seems more concerned with Nvidia than with Intel itself.

The Quadro line is meant for professional graphics processing (and big multi-monitor panels, like those seen in public presentations), the Tesla line for general computing, and the better-known (and more affordable) GT line is meant for the gaming segment.

Reply
#19
Or you can avoid all the technobabble and fanboyism by simply looking at benchmarks of your favorite games, then choosing the right card at the right price/performance ratio, like I do and have done for the last 10 years ;). Which brand is better for certain applications will fluctuate just as it always has, so trying to predict which will give the best overall performance for X will be fairly difficult... whereas the games that are out now can be tested and proven for sure.
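The price/performance approach above amounts to a one-line calculation. The card names, prices, and FPS numbers below are made-up placeholders, not real benchmark data:

```python
# Simple price/performance comparison: average benchmark FPS per dollar.
# All figures are hypothetical placeholders for illustration.
cards = {
    "Card A": {"price": 250.0, "avg_fps": 60.0},
    "Card B": {"price": 200.0, "avg_fps": 55.0},
}

def fps_per_dollar(card: dict) -> float:
    """Higher is better: benchmark FPS bought per dollar spent."""
    return card["avg_fps"] / card["price"]

best = max(cards, key=lambda name: fps_per_dollar(cards[name]))
print(best)  # Card B: 0.275 fps/$ beats Card A's 0.240 fps/$
```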

Just my two cents on the issue.
Reply
#20
Benchmarks cannot be trusted.
Reply



