03-06-2016, 11:06 PM
First topic: Drop old renderer
I would like to create a legacy GSdx plugin. It would be more or less the current state of GSdx, and would remain available for DX9 (very old GPUs on Windows & Mac OS X) and GL3.3 (less old GPUs on Linux).
The future GSdx will simply drop DX9 and drivers limited to GL3.3 (various GL4 features are supported on GL3 hardware class, even GL2 HW class). This means that on Windows, DX10/GL3 GPUs will only have access to the DX10/11 renderer. On Linux, DX10/GL3 GPUs will be supported through the free driver.
In order to keep the number of builds manageable, I propose to keep only SSE2 for it (if you have a recent CPU, it's time to upgrade the GPU).
The next step, in a more distant future, will be to introduce pure HW GL4 features (i.e. features that require a DX11/GL4 class GPU).
Second topic: Support integral texture coordinates
The GPU's hardware filtering unit uses float (or normalized integer) values as texture coordinates. It is fast, but it suffers from a couple of limitations. The idea would be to use pure integers (i.e. not normalized). It would help to:
* Fix texture sampling on some games such as Lupin 3rd
* Reduce memory usage (allows using a subset of the texture instead of a full 1024x1024 texture)
* Reduce texture cache downscaling
* Future: bypass the texture cache for some special draws (you know, the top-left corner bug)
* Make sampler management much easier (i.e. no more samplers at all)
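To illustrate the second point, here is a minimal sketch (illustrative Python, not GSdx code; all names are hypothetical) of the difference between normalized and integer texel addressing. With normalized coordinates the shader must scale by the full declared texture size, while with integer coordinates a small sub-rectangle can be uploaded and indexed directly:

```python
# Illustrative sketch: integer vs normalized texel addressing.
FULL_W = 1024            # width the GS texture claims to be
SUB_X, SUB_W = 256, 64   # the 64-texel-wide region the draw actually uses

# Upload only the used subset instead of the full 1024-wide texture.
# Fake texel data: each texel's value is its GS x coordinate.
subset = list(range(SUB_X, SUB_X + SUB_W))

def fetch_normalized(u_norm):
    # HW-style: normalized coordinate must be scaled by the full width.
    return subset[int(u_norm * FULL_W) - SUB_X]

def fetch_integer(u):
    # texelFetch-style: exact integer texel index, no normalization needed.
    return subset[u - SUB_X]

assert fetch_integer(300) == 300
assert fetch_normalized((300 + 0.5) / FULL_W) == 300
```

The integer path never needs to know the full 1024-texel width, which is what allows storing only the subset.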
But there is a catch: the HW unit only supports float, so integer filtering will be done in SW (read: in the shader).
* Bad for performance. Increases memory bandwidth requirements (in the case of linear filtering) and ALU computation (which might be hidden by the memory accesses).
* No more anisotropic filtering
* No more custom resolution (well, maybe it can be partially emulated with a float factor)
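The bandwidth/ALU cost above can be sketched as follows (illustrative Python, assuming a texelFetch-style helper; not actual shader code): in-shader bilinear filtering needs four texel reads plus three lerps per sample, where the HW unit does it in a single filtered read.

```python
import math

def texel_fetch(tex, x, y):
    # Clamp-to-edge addressing on a 2D list-of-lists "texture".
    h, w = len(tex), len(tex[0])
    return tex[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

def bilinear(tex, u, v):
    # u, v are unnormalized texel-space coordinates (floats).
    x0, y0 = math.floor(u - 0.5), math.floor(v - 0.5)
    fx, fy = (u - 0.5) - x0, (v - 0.5) - y0
    # Four memory fetches instead of one filtered HW fetch...
    t00 = texel_fetch(tex, x0,     y0)
    t10 = texel_fetch(tex, x0 + 1, y0)
    t01 = texel_fetch(tex, x0,     y0 + 1)
    t11 = texel_fetch(tex, x0 + 1, y0 + 1)
    # ...plus three lerps of ALU work.
    top = t00 * (1 - fx) + t10 * fx
    bot = t01 * (1 - fx) + t11 * fx
    return top * (1 - fy) + bot * fy

tex = [[0.0, 1.0],
       [2.0, 3.0]]
# Sampling exactly between the four texels averages them.
assert abs(bilinear(tex, 1.0, 1.0) - 1.5) < 1e-6
```

Nearest filtering would be a single fetch, so the extra cost only hits draws that actually request linear filtering.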