(09-24-2011, 12:24 AM)refraction Wrote: I stumbled across something similar when learning DirectX and couldn't really explain it. As far as anybody could tell me, it's something to do with modern graphics cards handling larger textures better than smaller ones. So in my tests, going from 800x600 to 1280x1024 doubled my fps, was very strange ;p
Very strange, but extremely interesting.
This actually made sense to me, though I was having a little trouble putting the concept into words. My thinking was that modern graphics units are designed, or at least optimized, for a larger workload. They're so geared towards the modern standard that when handed a small enough task, they can end up running at a less efficient rate.
Make sense? Well, I'm just babbling about a concept I feel I understand but can't find the right words for...
... But I found this. It seems to represent the very idea I'm getting at.
Quote:Modern graphics cards are really good at pushing a lot of polygons, but they have quite a bit of overhead for every batch that you submit to the graphics card. So if you have a 100-triangle object it is going to be just as expensive to render as a 1500-triangle object. The "sweet spot" for optimal rendering performance is somewhere around 1500-4000 triangles per mesh.
This would be like what we're dealing with, right?
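Just to play with the numbers, here's a rough back-of-the-envelope sketch of that cost model. The per-batch overhead and per-triangle costs below are completely made-up figures, purely to show the shape of the curve the quote is describing, not actual measurements from any card:

Code:
#include <cstdio>

// Toy cost model: every draw call (batch) pays a fixed overhead,
// plus a tiny per-triangle cost. All numbers are invented, just to
// illustrate why small batches waste time on overhead.
int main() {
    const double batchOverheadUs = 50.0;   // fixed cost per draw call (made up)
    const double perTriangleUs   = 0.01;   // cost per triangle (made up)
    const int totalTriangles     = 600000; // triangles in the whole scene

    // Same scene, split into many small meshes vs. fewer large ones.
    const int meshSizes[] = { 100, 500, 1500, 4000, 10000 };

    for (int trisPerMesh : meshSizes) {
        int batches = totalTriangles / trisPerMesh;
        double frameUs = batches * batchOverheadUs + totalTriangles * perTriangleUs;
        std::printf("%6d tris/mesh -> %5d batches -> %8.0f us/frame\n",
                    trisPerMesh, batches, frameUs);
    }
    return 0;
}

With 100-triangle meshes, the fixed per-batch overhead completely swamps the actual triangle work; by 1500-4000 triangles per mesh it's almost negligible. That's exactly why a tiny object ends up costing about as much as a 1500-triangle one.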
And for all you fans of the "ATI vs. nVidia" debate: how about the way Livy's Radeon seemed to benefit more from this than my GeForce? Kinda seems like the "sweet spot" was more beneficial to the Radeon, which could indicate that the less efficient (or less optimized) situation was more detrimental to it.
(Lack of optimization: ATI's Achilles' heel)