ps1 runtime error
#11
As suspected. Do you know why they are fired? And it seems they are not caught... what did I do again?

#12
I've moved it back to where it was before:

Code:
    if(SUCCEEDED(hr))
    {
        t = new GSTexture11(texture);
        if (t == NULL)
            throw std::bad_alloc();
        ...
    }
It does not crash in this case.
#13
The null check is useless. Is it normal that the API doesn't report SUCCEEDED? Something smells fishy in the DX world.
#14
It seems that trying to create a render target texture with certain texture formats will fail (I don't know much about DX stuff, so I don't have specifics). It's not necessarily a memory allocation issue, so the else shouldn't have been added.

The std::bad_alloc exceptions are thrown on the MTGS thread and aren't actually caught by the core thread exception handlers - it's not an exception derived from std::runtime_error, which we do catch.
#15
Quote:It's not necessarily a memory allocation issue, so the else shouldn't have been added.
Sure, but the code expects to actually create a surface, so an exception must be fired.

However, it means the texture is created outside of the draw call, maybe during vsync. We need to catch it there too.

Do you know which texture format will "generate" the throw?
#16
DXGI_FORMAT_R16_UINT, set in GSTextureCache11::Read (didn't check DX9).

Code:
DXGI_FORMAT format = TEX0.PSM == PSM_PSMCT16 || TEX0.PSM == PSM_PSMCT16S ? DXGI_FORMAT_R16_UINT : DXGI_FORMAT_R8G8B8A8_UNORM;

if(GSTexture* offscreen = m_renderer->m_dev->CopyOffscreen(t->m_texture, src, w, h, format))
{
    ...
}
#17
Oh, very interesting. It could explain the bad shader.

Here is the full explanation. By default, GPU colors are normalized, so they range from 0 (color 0) to 1 (color 255). GS colors are not normalized. The read function reads the data from the GPU target and copies it into GS local memory.
* GS format is RGB5A1, not normalized (aka integral)
* GPU format is RGBA8, normalized

There are 2 solutions:
Dx9 => read the GPU data as RGBA and convert it on the CPU
Dx11/ogl (which support integral operations on the GPU) => convert the GPU data directly on the GPU to an integral 16-bit value, then read back the data

However, the shader code (ps_main1 in convert) that does the conversion feels wrong, but I kept it because I was sure that GSdx was good (initial GL port).

Here is the GLSL code https://github.com/PCSX2/pcsx2/blob/mast...t.glsl#L76
with both versions, the bad one and the good one.

However, if you tell me that Dx fails to create an integral frame buffer, then all the conversion is done on the CPU.

Edit: however, it doesn't explain why this format isn't a valid target in DX10 ;)
#18
Oops. I completely missed something - the width was set to 0, which was causing the failure.
#19
But not the height, I guess. So the bad condition doesn't work:

if (!t->m_dirty.empty() || (r.width() == 0 && r.height() == 0))
{
return;
}

I think I have corrected OpenGL to use at least 1 for the size. However, I'm not sure the read will be applied, i.e. m_mem.WritePixel* could be a nop. Maybe it would be better to skip the read when a size parameter is 0 (or we need to read a single pixel).
#20
Yeah, just the width.

Skipping the read if either parameter is 0 fixes the crash (I checked that WritePixel* does nothing in those cases).



