(05-06-2015, 11:36 PM)refraction Wrote: yeah, well, mipmaps are done by using smaller textures on further-away objects. The texture cache can't handle these maps because of how the HW texture cache works, so I was trying to force the game (Ratchet and Clank) to always use the largest texture. I got it to use no texture at all rather than garbage, but that's not what I was going for xD
GSdx always uses the biggest texture. However, I think the game doesn't always send all of the layers. I don't know what would be best:
Emulate 6 separate GL textures per GS texture (one for each layer),
or emulate a single 6-layer GL texture per GS texture (a sketch of this option is below).
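To illustrate that second option, here is a minimal, untested sketch assuming desktop OpenGL 4.2+ (glTexStorage2D): one GL texture whose 6 mipmap levels receive the 6 GS layers, with GL_TEXTURE_MAX_LEVEL clamped since the game may not send every layer. The function name and the gs_layer_data input are placeholders, not GSdx code.
Code:
#include <algorithm>
// assumes a GL loader already provides the GL 4.2 entry points

GLuint CreateMipmappedTexture(int width, int height, const void* gs_layer_data[6])
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    // Immutable storage: base level + 5 smaller levels in one GL object.
    glTexStorage2D(GL_TEXTURE_2D, 6, GL_RGBA8, width, height);

    for (int level = 0; level < 6; level++) {
        int w = std::max(width  >> level, 1);
        int h = std::max(height >> level, 1);
        if (gs_layer_data[level]) // the game may not have sent this layer
            glTexSubImage2D(GL_TEXTURE_2D, level, 0, 0, w, h,
                            GL_RGBA, GL_UNSIGNED_BYTE, gs_layer_data[level]);
    }

    // Don't sample past the allocated levels.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 5);
    return tex;
}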
Quote: Reminds me of a "hack" I did years ago: rendering to a render target while using that same target as a texture, a sort of direct "feedback loop". The GPU caches handled that; on old hardware it was easy.
Old HW doesn't have a texture cache (mobile?), so it is safe to write and read the same memory. Newer HW has caches, so you don't know what you will read back. There are 2 solutions:
1/ Either completely disable the cache (likely not possible on current HW).
2/ Allow the cache to be invalidated between draws, so later reads see earlier writes. That is the purpose of the GL_ARB_texture_barrier extension (see the sketch after this list).
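As a rough sketch of solution 2/, assuming the texture tex is bound both as the framebuffer color attachment and as a sampler input; fbo, tex, and the two draw functions are placeholders:
Code:
// Pass 1 writes texels of tex through the framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glBindTexture(GL_TEXTURE_2D, tex);
DrawPass1();

// GL_ARB_texture_barrier (core in GL 4.5): makes the writes of pass 1
// visible to texture fetches, instead of reading stale cached texels.
glTextureBarrier();

// Pass 2 can now sample the texels that pass 1 just wrote.
DrawPass2();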
As a side note, there are other extensions that will allow blending/DATE to be done efficiently in hardware:
1/ GL_NV_fragment_shader_interlock => requires a Maxwell GPU (crap, I only have a Kepler GPU); see the sketch below
2/ GL_INTEL_fragment_shader_ordering => for Intel hardware (but not on Linux)
3/ AMD => I don't know
In short, those problems will surely be solved in the future.
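For the NVIDIA case, a hedged GLSL sketch (shown here as a C++ string literal, untested, only meant to show the shape of the extension): the interlock brackets a critical section in which a fragment can read, blend, and write the framebuffer texel without racing other fragments on the same pixel.
Code:
const char* frag_src = R"(
#version 450
#extension GL_NV_fragment_shader_interlock : require
layout(pixel_interlock_ordered) in;
layout(binding = 0, rgba8) uniform image2D framebuffer_img;
in vec4 src_color;

void main()
{
    beginInvocationInterlockNV();
    // Critical section: fragments hitting the same pixel are serialized
    // here, so a custom blend equation can safely read-modify-write.
    ivec2 p = ivec2(gl_FragCoord.xy);
    vec4 dst = imageLoad(framebuffer_img, p);
    imageStore(framebuffer_img, p, src_color + dst * (1.0 - src_color.a));
    endInvocationInterlockNV();
}
)";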
@ssakash
Without a GS dump, I can't do anything. By the way, the best would be scenes that have bad rendering due to those errors.
Anyway, first I need to extend the debug capabilities of GSdx so I can easily detect errors and compare the output with the SW renderer.
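As a purely hypothetical illustration of that kind of debug tool (none of these names exist in GSdx), a frame diff between the HW and SW renderers could start as simple as this:
Code:
#include <algorithm>
#include <cstdint>
#include <cstdlib>

// Count RGBA8 pixels whose worst channel difference exceeds a tolerance,
// comparing a HW-renderer frame against the SW-renderer reference.
int CompareFrames(const uint8_t* hw, const uint8_t* sw,
                  int width, int height, int tolerance)
{
    int bad_pixels = 0;
    for (int i = 0; i < width * height * 4; i += 4) {
        int diff = 0;
        for (int c = 0; c < 4; c++)
            diff = std::max(diff, std::abs(hw[i + c] - sw[i + c]));
        if (diff > tolerance)
            bad_pixels++;
    }
    return bad_pixels;
}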