Some OpenGL driver/benchmark questions
On Wednesday, January 11, 2017 3:23:39 PM EST Gregory Hainaut wrote:
> You don't compare the same thing. This perf is GSdx alone (with pre recorded
> core emulation). And the 50 fps is without the draw call (so yes the
> overhead is 0 in this case but the screen remain black). I'm sure that
> Vulkan overhead is far from 0. I let you manage the conclusion.
> I have a good CPU, haswell at 4GHz. 50 fps is awfully low to render a black
> screen. With normal rendering (Nvidia driver) I'm around 160 fps on SotC
> By latest driver, you mean latest Mesa? Free driver is really slow but I
> need to benchmark it again. It ought to be better. If they merge gl
> threading, I'm pretty sure we will get a massive speed boost.
Ah thanks for the explanation,

Now I have two (real) questions:

1. What exactly do you mean by "prerecorded?" The core emulation is done and saved somewhere and used by GSdx?
2. You said "If they merge GL threading," - is there a way to install from source with it now? If so, I'd be very, very happy to benchmark.

Oh right.

3. How can I perform a benchmark of speed on my computer so I can check how much of an impact a PR or a driver update makes?

Thanks in advance.

Moved this to forum as it was off-topic


1/ Imagine a standard video game. Say you could record all GPU PCIe transactions in a dump file. To replay the rendering, you would just read the dump and regenerate the same transactions. You can do the same with the GS: record all the transactions from the core to the GS (this is called a gsdump). Then you can easily replay/debug/benchmark a scenario without running the core emulation.

2/ Marek did a branch to implement GL threading. I don't know its status, nor do I remember the branch name.

3/ It depends on what you want to benchmark. On the GS/GPU side, it is better to use the built-in GSdx benchmark mode (run automatically when you replay a gsdump).
You first need to generate a dump (search the forum), tweak the GSdx options (in particular linux_replay), then build the replayer to replay the dump.
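The replay step lends itself to a small loop, so the same dump can be timed repeatedly across driver versions or PRs. This is a sketch only: GSdx-replayer, sotc.gs, and bench.log are placeholder names, not the real binary or paths from the build.

```shell
#!/bin/sh
# Hypothetical benchmark loop: replay one gsdump several times and collect
# the replayer's output so runs can be compared. REPLAYER, DUMP, and LOG
# are placeholders for your own replayer build, dump file, and log file.
REPLAYER=${REPLAYER:-./GSdx-replayer}
DUMP=${DUMP:-sotc.gs}
LOG=${LOG:-bench.log}

: > "$LOG"                                  # truncate any previous results
for run in 1 2 3; do
    echo "== run $run ==" >> "$LOG"
    "$REPLAYER" "$DUMP" >> "$LOG" 2>&1 || echo "replay failed" >> "$LOG"
done
```

Averaging a few runs like this smooths out scheduler noise when comparing before/after numbers.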
Thanks. In case anyone looks at this later, is the branch you were referring to. Going to test it out, but I'm not expecting much as it primarily targets Radeon and I have Intel (i965 IIRC).

Edit: just took another look - my first assumption of primarily targeting Radeon seems false. Marek's work for Radeon is most recent, but there is some Intel stuff.

I want to benchmark every part - GS, EE, VU, SPU, etc. (preferably before installing the mesa branch so we can see how much of an impact it makes).

Quick noob question: in windowed mode, what do the percentages in the title bar mean? (If EE is at 100% and GS is at 45%, does that mean EE is the slowest part?)
IMHO it is harder to measure the speed impact there, as you have more parameters in the equation. Anyway, you can look at the various values in the title bar.

The % are the thread activity: 0 == sleeping, 90+ == eating CPU power.

You need to find an interesting scene to benchmark. Maybe you can increase blending accuracy to increase the driver's work.
By the way, there is likely an option to enable/disable GL threading.
There is. IIRC from last night, when I stayed up until two trying to get Marek's branch working, an env variable named mesa_threading should be set to "true" to enable it.
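Such variables can be scoped to a single command, which makes back-to-back threaded/unthreaded comparisons easy. A sketch: mesa_threading is the name reported above and may differ between branches, and pcsx2 stands in for whatever you actually launch.

```shell
# Prefixing the assignment scopes the variable to that one invocation;
# the surrounding shell environment is left untouched.
#   mesa_threading=true ./pcsx2     # threaded run (hypothetical launch command)
#   ./pcsx2                         # control run with threading off
# The mechanism itself can be checked with any command:
mesa_threading=true sh -c 'echo "mesa_threading=$mesa_threading"'
# prints "mesa_threading=true"
```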

I need to fix my computer; maybe you can help. When I installed the branch I compiled, it overwrote Debian's files (most importantly the DRI and GL libraries). Now multiple GL extensions are no longer supported, and glxinfo gives this:

name of display: :0
libGL error: unable to load driver:
libGL error: failed to load driver: swrast
X Error of failed request: GLXBadContext
Major opcode of failed request: 154 (GLX)
Minor opcode of failed request: 6 (X_GLXIsDirect)
Serial number of failed request: 48
Current serial number in output stream: 47

I'm going to try compiling the X11 Server as well and see if that helps, but in all honesty I have NO clue what I'm doing right now. I don't usually work this close to system libs, which is why I first tried installing into /usr/local/ instead of /usr. Yeah, nope.
Restore your Debian libs.
You can use this kind of command: "apt-get install --reinstall ....mesa_package_name...."

Tonight I will give you the instructions to use Mesa without any installation (it can be done with env variables).
Note: you need to compile a 32-bit version of Mesa.

The GS % contains the CPU time of the thread. Currently it includes the time spent in the driver. Once you enable the new option, GL driver commands will be dispatched to a separate thread, so the GS % should go down.
I know how to reinstall packages; one of the first things I did was reinstall GL and the DRI and try again. I didn't know that I could use Mesa without installing it. Oops. Need to recompile Mesa again.

I've noticed that a DX9 implementation exists in mesa. If I compile and install it, do you know if WINE could use it at higher speed? I only ask here because I spent over eight hours trying to figure it out before giving up and you clearly know more about mesa3d than I do.
Here is my compilation script for Mesa. You need to tune the options to select your GPU (I build for Nouveau/Nvidia) and the paths.
It is not the correct way to cross-compile, but it works on my system. There is a better way to do it.
#!/bin/zsh -x

export CFLAGS=" -m32 -Ofast "
export CXXFLAGS=" -m32 -Ofast  "
export PKG_CONFIG_PATH="/usr/lib/i386-linux-gnu/pkgconfig"

make clean

./autogen.sh \
    --enable-texture-float \
    --enable-glx-tls \
    --with-gallium-drivers='nouveau swrast' \
    --with-dri-drivers=  \
    --enable-gallium-llvm \
    --with-egl-platforms='drm x11' \
    --enable-shared-glapi --enable-gbm --enable-dri3 \
    --disable-gles1 --disable-gles2 \
    --disable-asm \

make -j 16

sed -i -e 's@/usr/lib/x86_64-linux-gnu@/usr/lib/i386-linux-gnu@' **/*.la

make -j 16

If you want to run an external Mesa you can set the following variables. Actually I'm not sure it will work for you: Intel isn't based on Gallium. But I'm pretty sure they have variables for you too. Devs can't install Mesa directly, otherwise it would break their computers.
    export LIBGL_DRIVERS_PATH=$mesa/lib/gallium
    export EGL_DRIVERS_PATH=$mesa/lib/egl
    export LD_LIBRARY_PATH="$mesa/lib"
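Those three exports can be wrapped into a tiny launcher so that only the wrapped program sees the local build. A sketch assuming the directory layout above; run-with-mesa.sh and MESA_BUILD are made-up names.

```shell
#!/bin/sh
# run-with-mesa.sh -- run one program against a locally built Mesa without
# installing it system-wide. Paths mirror the exports above; adjust to taste.
mesa=${MESA_BUILD:-$HOME/mesa/build}       # assumed location of the build tree

export LIBGL_DRIVERS_PATH="$mesa/lib/gallium"
export EGL_DRIVERS_PATH="$mesa/lib/egl"
export LD_LIBRARY_PATH="$mesa/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

exec "$@"                                  # hand off to the wrapped program
```

Something like `./run-with-mesa.sh glxinfo | grep "OpenGL renderer"` would then confirm which driver actually loaded, while the rest of the system keeps the distro libraries.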

I don't know if WINE could be faster. It could be, if WINE dispatches the GL commands. However I won't try it:
* WINE is far from bug-free
* OpenGL is light-years ahead of DX9
* I'm sure Marek would love some feedback that it can actually help some programs ;)
Question: keeping --with-dri-drivers= blank will prevent the installed drivers from being overwritten, right?

Also, it did break mine. Currently it will not load the Intel (i965 IIRC) driver; it's stuck on the llvmpipe one. I can't fix it, so I'm just going to reinstall in a few minutes.

Since it's not based on Gallium, will your script still work?
