..:: PCSX2 Forums ::..

Full Version: ZZogl -- Zero GS KOSMOS fork
Really? Must have been fixed in zzogl-pg at some point, then, for that case but not Kingdom Hearts 1. I wonder which revision?

And I can go ahead and remove it from the list.
Guys, I need your opinion on the temporary buffer for the clut. I have 2 ideas (but I'm not really sure they are good ideas ^^).

The current implementation of the buffer is an array of 512 × 16-bit entries.

The layout is something like this: 256 0 257 1 ... 510 254 511 255
Either we read a full 32-bit word (256&0, 257&1, ...) or only one half of every word (i.e. 0, 1, ...).
It is fast for 32-bit access, slow and complex for 16-bit access.

1/ Why not create 2 buffers?
Advantage: easy and fast code for both; a nice template would work for the two formats. It also improves texture cache efficiency.
Drawback: compatibility issue if a game writes (reads) in 32-bit format then reads (writes) in 16-bit format. I do not think any game does that, it would be very awkward, but you never know...

2/ Change the array order to 0 1 ... 510 511
Advantage: 16-bit access is much easier and faster.
Drawback: 32-bit access is a little more difficult, but not a big deal (the increase in complexity is very small compared to the decrease in 16-bit code complexity). However it will reduce the speed a little. My game (1) seems to use mostly (if not exclusively) 32-bit clut colors.

(1) Well, they are not limited by the clut anyway, so I'm not sure there is a real impact.
32-bit cluts are widely used by most games we support, so we must not degrade 32-bit clut speed. And about mixed 16/32-bit reads and writes... I bet some games do that. It seems like a nasty but useful trick that a programmer could use. So this case is possible.
(08-30-2009, 06:52 PM)Zeydlitz Wrote: [ -> ]Melty Blood: Actress Again is working perfectly on my Debian, no troubles (except extra pixels per sprite).

Melty Blood doesn't work with ZZogl unless I make some code changes. I am using the Nvidia proprietary driver, so that could be one factor...
It's almost impossible to fix an unreproduced bug. At least, what console output did ZZogl produce? It's in GSlog.txt
There is some dead code in the NoHighlights function (primitive stuff):
Code:
u32 resultA = prim->iip + (2 * (prim->tme)) + (4 * (prim->fge)) + (8 * (prim->abe)) + (16 * (prim->aa1)) + (32 * (prim->fst)) + (64 * (prim->ctxt)) + (128 * (prim->fix));
Because each prim field is only 1 bit, resultA can never exceed 0xFF.
So the following does nothing (0x310A > 0xFF):
Code:
if ((resultA == 0x310a) && (result == 0x0)) return false; // Radiata Stories

Did I miss something?
It's an old attempt at making a skipdraw. Unsuccessful. Every line except the return should be removed. And that constant belongs to a different variable -- 0x310 is from:
Code:
u32 result = curtest.ate + ((curtest.atst) << 1) + ((curtest.afail) << 4) + ((curtest.date) << 6) + ((curtest.datm) << 7) + ((curtest.zte) << 8) + ((curtest.ztst) << 9);
Current primitive status

PS2 primitives => Render as opengl primitive
point => gl_point
line => gl_line
line_strip => gl_line
triangle => gl_triangle
triangle_strip => gl_triangle
triangle_fan => gl_triangle
quad/sprite => 2 * gl_triangle

Why not use other OpenGL primitives like gl_line_strip, gl_triangle_strip, gl_triangle_fan and gl_quad? As far as I understand, it would add extra complexity in the kick function, but it would save a lot of vertices. Maybe it could reduce graphics memory transfer and driver processing. However, I do not know how other parts of the code would be impacted; any opinions?
Is it possible to implement a software mode in ZZogl, like the one in GSdx?
Theoretically yes, but in reality the whole code would have to be rewritten. Right now there is not even the slightest possibility of changing the renderer.