I'd love to test that 2x software rendering hypothesis. I have an i5-4670K, so if you ever get around to allowing upscaling in software mode, I'll be sure to try it out. But yeah, I definitely second that suggestion. That would make me a very happy camper.
BTW, thanks for explaining that, rama. FWIW, I did look around a bit and didn't see any mention of that in any of the stickied threads or the links provided therein.
(09-08-2014, 10:29 PM)Blyss Sarania Wrote: [ -> ]I think the best possible change to GSdx is allowing upscaling in software mode. I don't know how hard/trivial that is, but it should be at least feasible without a rewrite. And at least 2x should be possible in software on a modern PC.
That still bothers me, though.
In theory it's possible, but the texture/memory sampling is mean: at 2x you do 4x as much work. Think slow. You could optimize it to avoid taking too many random samples that miss the cache, but you'd need a damn good texture/memory-fetch algorithm that prefetches well and computes that "block and swizzle" layout fast. That's what kills the idea right at the start. This is what makes the GS so hard: computing the memory location, and doing it fast.
But... I believe there is some bit magic and numbers that could be found that'd do that nicely in software.
yes yes yes, upscaling in software mode would be fantastic omg...
Gabest is still updating on the Google SVN? What's that about? I just went there, and... o_O
Wow, Gabest did make an update on the old SVN a few hours ago...
Seems like he's testing a port of the renderer to OpenCL, though the code itself looks pretty much the same as the other renderer. So the new renderer isn't really coded to do anything differently than before; it looks like Gabest is just testing to see if the framework does anything smarter/better internally than the DX one.
(09-08-2014, 10:29 PM)Blyss Sarania Wrote: [ -> ]I think the best possible change to GSdx is allowing upscaling in software mode. I don't know how hard/trivial that is, but it should be at least feasible without a rewrite. And at least 2x should be possible in software on a modern PC.
The whole model of the software renderer/rasterizer seemed pretty different from the hardware renderers; it does things that only really work on the assumption that it doesn't need to upscale. Because the sizes of things don't change, it can pretty much slap the graphics directly onto the framebuffer and rasterize in scanlines.
Yay, I did not notice the move to GitHub. Could someone tell me what to do to get access there? I have never used it before. Merging the changes will be fun.
About OpenCL, I'm a bit disappointed. The enqueue call overhead is sometimes too much, and none of the debuggers work reliably. The upcoming 2.0 spec adds sub-kernel calls (device-side enqueue), which could be useful. They say CUDA already has this, so maybe only a driver update is needed, not new hardware.
Upscaling works with the PSX renderer only because texture coords aren't shifted by half a pixel. With the tile-based rendering I'm trying in OpenCL, I could at least do super-sampling and real anti-aliasing.
Gabest, I for one am glad to see your return. GSdx needs you.
Gabest, save us! Only you can do it!
Also, can't regular accounts open pull requests on GitHub, with no special access needed? Although whoever is in charge of the PCSX2 permissions there should give Gabest owner status so he can merge his own pull requests.