#1
Question 
[deleted]

#2
Quote:I also figure it's going to put DirectX ahead of OpenGL without question now. Up until now you can see all kinds of arguments between OpenGL and DirectX for "which is better", and the verdict seems to vary on a case-by-case basis (in my experience, OpenGL seems to support more features and be more compatible, but DX is usually faster). I wonder if DX12 is going to change all of that.
In this case, the speed difference depends only on the driver. OpenGL supports many more features than DX, which is why GSdx-ogl emulates more effects than GSdx-dx. So honestly, it would already be good progress if DX could reduce the gap.

In my opinion, the change could be very big and might require lots of GSdx updates. However, the results won't be that nice:
1/ It doesn't make your GPU faster for big upscaling.
2/ It doesn't help emulate blending accurately, which is 70% of GSdx bugs. That requires new hardware, and again OpenGL will support it before DX.
3/ It won't help fix the texture cache, which is 25% of the remaining GSdx bugs.
4/ Even with DX12 you still need to emulate the GS (you know, the special effects that cause slowdowns).

To be honest, I think it could increase the speed a bit at low upscaling in normal gameplay, but it won't fix the big lag spikes.
#3
DX12 improves CPU-to-GPU communication by submitting command lists as a single package rather than one command at a time, which avoids frequent round trips between the CPU and GPU and effectively reduces the CPU overhead.

Though the benefit obviously depends on the specific game and drivers.
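Purely as an illustration of that "package of commands" idea, here is a tiny C++ sketch. It is not any real graphics API; CommandList and Queue are made-up names, and a real driver would translate the recorded commands into GPU packets instead of just running lambdas.

Code:
#include <functional>
#include <utility>
#include <vector>

// Hypothetical sketch, not a real graphics API: commands are recorded on the
// CPU side and handed over in a single submission instead of one call at a time.
struct CommandList {
    std::vector<std::function<void()>> commands;
    void record(std::function<void()> cmd) { commands.push_back(std::move(cmd)); }
};

struct Queue {
    // One submission for the whole batch, rather than one round trip per command.
    void submit(const CommandList& list) {
        for (const auto& cmd : list.commands)
            cmd(); // a real driver would build GPU packets here
    }
};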
#4
First, I don't think you can saturate the PCIe link with a bunch of commands. Besides, nothing prevents the driver from accumulating the state changes and sending them to the GPU all at once.

What reduces the CPU overhead is:
1/ Creating a single immutable state object (holding mostly all the state information instead of several separate objects).
2/ Doing the validation once at creation and converting it once to the GPU format (i.e. a single fat memory write, or multiple register writes); see the sketch below.

What increases the GPU overhead is:
1/ The full state is uploaded, so it needs a bigger flush.
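Here is a conceptual C++ sketch of points 1/ and 2/ of the CPU-overhead list above. PipelineDesc and ImmutablePipelineState are hypothetical names, not an actual driver API; the point is only that validation and conversion to the GPU format happen once, at creation, so a draw just binds the pre-built blob.

Code:
#include <cstdint>
#include <vector>

// Hypothetical "validate once at creation" state object, not a real API.
struct PipelineDesc {
    uint32_t blend_mode;
    uint32_t depth_func;
    uint32_t raster_flags;
    // ... every other piece of fixed-function state ...
};

class ImmutablePipelineState {
public:
    explicit ImmutablePipelineState(const PipelineDesc& desc)
    {
        validate(desc);                 // done once, at creation
        gpu_blob_ = pack_for_gpu(desc); // converted once to a GPU-friendly format
    }

    // At draw time: bind the pre-built blob (a single fat write), no revalidation.
    const std::vector<uint8_t>& gpu_blob() const { return gpu_blob_; }

private:
    static void validate(const PipelineDesc&) { /* reject invalid combinations */ }

    static std::vector<uint8_t> pack_for_gpu(const PipelineDesc& d)
    {
        // Placeholder packing: a real driver would emit register writes here.
        const uint8_t* p = reinterpret_cast<const uint8_t*>(&d);
        return std::vector<uint8_t>(p, p + sizeof(d));
    }

    std::vector<uint8_t> gpu_blob_;
};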

I think the main questions are:
1/ Is GSdx limited by the CPU? (If you reduce the upscaling and the fps increases, the answer is no.)
2/ If it is CPU limited, is that due to driver overhead or to the GS emulation itself?
3/ Is accuracy more important than speed? Accuracy requires a beefier GPU, not a faster CPU. For example, accurate rounding of colors might cost 10% of extra GPU power.
4/ I have a small test case of SotC that runs at around 180 fps on my (fast) CPU, so I'm not sure the driver overhead is that big.
#5
Actually, all this new-API hype is nothing but a tradeoff between RAM and CPU:
1/ Old API: revalidate the state each time, but only keep a limited number of state objects in RAM.
2/ New API: put all the state objects in RAM, but only validate each object once (sketched below).

I can tell you the new APIs will require more RAM. 32-bit systems will suffer.
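To make the tradeoff concrete, here is a small sketch of the "validate once, keep it in RAM" side referenced in point 2/ above. StateKey, ValidatedState and StateCache are hypothetical names, not any real API; each distinct state combination stays in memory for the lifetime of the cache, which is exactly where the extra RAM goes.

Code:
#include <cstddef>
#include <cstdint>
#include <memory>
#include <unordered_map>

// Hypothetical cache of pre-validated state objects: CPU validation is paid
// once per state combination, RAM is paid for every combination kept alive.
struct StateKey {
    uint64_t hash; // hash of the full state description
    bool operator==(const StateKey& o) const { return hash == o.hash; }
};

struct StateKeyHasher {
    std::size_t operator()(const StateKey& k) const { return static_cast<std::size_t>(k.hash); }
};

struct ValidatedState { /* pre-validated, GPU-ready state blob */ };

class StateCache {
public:
    ValidatedState* get_or_create(const StateKey& key)
    {
        auto it = cache_.find(key);
        if (it == cache_.end()) // first use: validate/convert and keep it forever
            it = cache_.emplace(key, std::make_unique<ValidatedState>()).first;
        return it->second.get();
    }

private:
    std::unordered_map<StateKey, std::unique_ptr<ValidatedState>, StateKeyHasher> cache_;
};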
#6
This is a bad transitional period to make any real decision. DirectX 12 is Windows only, so no OSX or Linux versions, and a transition to mobile (assuming someone will do it) would need even heavier rewrites. Vulkan, on the other hand, will support all of these platforms including mobile (assuming Apple doesn't make Metal the only supported API on theirs), but Khronos has made it clear that it will be bare bones and lacking many features at launch later this year.

So DX12 is Windows only and Vulkan will need a while to mature. No clear option yet from what I see, other than continuing with OGL till things clear up.
#7
Yes, it is clearly the wrong time to rush things. Honestly, I'm in favor of letting the hype go down and the dust settle.

AMD pays some devs to tune their game engines for Mantle. That means the gains are not automatic; just using the API is not enough, you need to use it the correct way. However, PS2 games won't have their game engines updated. By the way, I didn't see (or search for) any benchmark that shows a big improvement in performance.

Quote:https://developer.nvidia.com/sites/defau...erlock.txt
This kind of extension would be a real game changer for us. It would fix lots of bad color/bad blending issues, various texture cache issues, and various slowdowns in post-processing effects (it avoids copying the framebuffer and reduces memory consumption a bit).
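I can't read the full link above, so take this with a grain of salt: assuming it is an interlock-style extension in the spirit of GL_ARB_fragment_shader_interlock (the extension name, layout qualifiers and the blend equation below are my assumptions, not taken from that spec), software blending without copying the framebuffer could look roughly like this GLSL fragment shader, embedded as a C++ string the way GSdx might carry it:

Code:
// Hedged sketch only: read-modify-write blending on the render target in
// place, guarded by an interlock so overlapping fragments are serialized.
static const char* const sw_blend_fs = R"(
#version 430
#extension GL_ARB_fragment_shader_interlock : require

layout(pixel_interlock_ordered) in;                      // order overlapping fragments
layout(binding = 0, rgba8) coherent uniform image2D rt;  // render target bound as an image
in vec4 src_color;

void main()
{
    beginInvocationInterlockARB();
    ivec2 p   = ivec2(gl_FragCoord.xy);
    vec4  dst = imageLoad(rt, p);                         // read the destination, no copy
    // Example GS-style blend: Cs*As + Cd*(1-As)
    vec4  res = vec4(src_color.rgb * src_color.a + dst.rgb * (1.0 - src_color.a), dst.a);
    imageStore(rt, p, res);
    endInvocationInterlockARB();
}
)";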
#8
That's OGL 4.3, don't we already have that?
#9
No, it means the spec patch needs to be applied on top of the 4.3 spec.

However, you have it, not me; I only have a Kepler GPU. I hope they release their 14nm/16nm GPUs with HBM soon.
#10
(07-17-2015, 02:10 PM)ssakash Wrote: DX12 improves CPU-to-GPU communication by submitting command lists as a single package rather than one command at a time, which avoids frequent round trips between the CPU and GPU and effectively reduces the CPU overhead.

Though the benefit obviously depends on the specific game and drivers.

That's what I tried to explain above. Anyway, Direct3D 11 can already send multiple commands at once as well. You can read about this: Command List.
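For reference, a minimal sketch of that D3D11 path (deferred context + command list). Error handling is omitted, submit_batched is a made-up helper name, and the device and immediate context are assumed to already exist (e.g. from D3D11CreateDevice):

Code:
#include <d3d11.h>

// Record work on a deferred context, then hand the whole command list to the
// immediate context in one ExecuteCommandList call.
void submit_batched(ID3D11Device* device, ID3D11DeviceContext* immediate_ctx)
{
    ID3D11DeviceContext* deferred_ctx = nullptr;
    ID3D11CommandList*   cmd_list     = nullptr;

    device->CreateDeferredContext(0, &deferred_ctx);
    // ... record state changes and draws on deferred_ctx (no GPU work yet) ...

    deferred_ctx->FinishCommandList(FALSE, &cmd_list);   // package the recorded commands
    immediate_ctx->ExecuteCommandList(cmd_list, FALSE);  // submit the whole package at once

    cmd_list->Release();
    deferred_ctx->Release();
}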