got pcsx2 running very slowly on armv8!
#1
I managed to get PCSX2 compiled and running on my Nvidia Shield (ARMv8) today as a fun exercise. I plan on continuing work on this (very slowly; full-time job).

A few screenshots here: http://imgur.com/a/ZICaR

The '00083b44' in the standard output is where the EE is spinning during BIOS bootup... not sure if this is intended or not. You can see in the other image 'uname -p' returning aarch64 for some degree of proof. I'm X-forwarding from Ubuntu 16.04 running on my Shield console.

This was actually surprisingly easy. Some status:
 - Currently running on all interpreters (obviously)
 - Devel build, built with clang
 - x86-specific assembly / SSE usage / etc. stubbed out (see the sketch below)
 - x86 recompilers still compiled into the image, but not used
 - Error loading the CDVD plugin; required stubbing... not sure of the cause
 - Using all NULL plugins (haven't gotten GSdx building yet, and it wouldn't run even if it did)
 - Runs at ~1.8 FPS with 100% CPU usage
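
For anyone curious what "stubbed" means here, it is roughly this pattern: keep the existing x86/SSE path behind its guard and fall back to plain C++ on ARM so everything still compiles and links. This is only an illustration with made-up names, not the actual PCSX2 code:

Code:
#include <cstddef>
#include <cstring>
#if defined(__SSE2__)
#include <emmintrin.h>
#endif

// Illustrative only: keep the SSE2 fast path for x86 builds and fall back
// to a portable version elsewhere. The function name and shape are invented
// for this example; the real changes are scattered across the code base.
inline void ClearBlock128(void* dst, std::size_t bytes)  // dst assumed 16-byte aligned
{
#if defined(__SSE2__)
    const __m128i zero = _mm_setzero_si128();
    for (std::size_t i = 0; i < bytes; i += 16)
        _mm_store_si128(reinterpret_cast<__m128i*>(static_cast<char*>(dst) + i), zero);
#else
    std::memset(dst, 0, bytes);   // portable stub used on the aarch64 build
#endif
}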

Anyway, just wanted to share. Next step is to get a GS plugin working (ZZogl or ZeroGS probably) and get some sort of video output. After that, the sky is the limit! The (very) long-term plan is to steal, er, borrow Dolphin's ARMv8 emitter and work on an ARMv8 backend.


Main Rig: i5 4670k, 16GB RAM, Nvidia 770 GTX, Windows 8.1/Arch
Main Laptop: Toshiba Kirabook. i5 4200U, 8 GB RAM, windows 8.1/Arch


#2
Could you test the speed with output disabled instead? (Set the ini option DisableOutput to enabled.)

Quote: GS plugin working (zzogl or zeroGS probably)
Otherwise, search the history; old GSdx supported GLES.
#3
Damn!
#4
The Nvidia Shield supports GL 4.4 (or 4.5, I can't remember which), so I'm not worried about being limited to just GLES. I don't think disabling the output will help much; I put in those prints just to see whether the emulator was actually making progress, and they only print every 1 million instructions, so there's not a lot of overhead there.
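
For context, the prints are roughly this kind of thing, called once per interpreted instruction (a sketch of the idea, not the exact code):

Code:
#include <cstdio>
#include <cstdint>

// Sketch of the progress print: called once per interpreted EE instruction,
// it dumps the current PC every millionth instruction, so the steady-state
// cost is one increment and one compare per op.
static void TraceProgress(uint32_t pc)
{
    static uint64_t count = 0;
    if (++count % 1000000 == 0)
        std::printf("%08x\n", pc);   // produces output like '00083b44'
}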
#5
It isn't about the output; the option completely bypasses the GS side. Note: even a single extra check in the interpreter loop is costly.

Forgot to reply. GSdx only uses 3 extensions from 4.5, and I'm pretty sure they are supported.
Quote:status &= status_and_override(found_GL_ARB_clip_control, "GL_ARB_clip_control", true);
status &= status_and_override(found_GL_ARB_direct_state_access, "GL_ARB_direct_state_access", true);
status &= status_and_override(found_GL_ARB_texture_barrier, "GL_ARB_texture_barrier", true);
However, you might need to disable some SSE2 code. But I'm not even sure. FWIW, the hardware renderer works on 64-bit (x64).
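
To double-check those three extensions on the Shield, a quick query against a live context would tell you. A minimal sketch, assuming GLEW (or a similar loader) provides glGetStringi and that a GL 3.0+ context has already been created; this is a generic query, not GSdx's own check:

Code:
#include <cstdio>
#include <cstring>
#include <GL/glew.h>

// Returns true if the named extension is reported by the current context.
// Requires a current GL >= 3.0 context and an initialised loader (glewInit).
static bool has_extension(const char* name)
{
    GLint n = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &n);
    for (GLint i = 0; i < n; ++i)
    {
        const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, name) == 0)
            return true;
    }
    return false;
}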

The code supports the EGL API too, but I think some bugs remain with Nvidia. However, they could be fixed on a standard PC, and potentially debugged on Nouveau (the free Linux driver) too.
#6
With DisableOutput = enabled and a release build, the interpreter runs at ~5 FPS. I'm actually pretty impressed. Unfortunately, I don't really have a way of seeing whether the emulator is actually making progress. I'm having some trouble compiling a GS plugin: ZeroGS and ZZogl both require Cg, but the Cg toolkit isn't available for ARM platforms, at least not as a library I can install via apt-get. I *think* there are utilities that convert Cg to GLSL which may work, but I'm not a GFX programmer, so looking into that is outside my skill set. GSdx requires SSE2 and won't compile without it.

I know that a plugin called GSsoft existed long ago. Maybe it is advanced enough to at least show something in the GS output? I'm going to experiment.

Edit: didn't realize ZZogl has a GLSL backend. I actually got it to compile. I can't X-forward ZeroGS since creating a context fails. I messed up my X setup on my device, so I'll have to reformat, which is easy.

Edit: and it doesn't work. I hit an error in ZZCreate: ZZLog::Error_Log("ZZogl ERROR: could not fill blocks");

Will look into this some more later.
#7
The ZZ error could be an SSE2-related error.

GSdx is based on the GSVector wrapper. Maybe you just need to partially port it to an ARM equivalent. I don't know if you have any intrinsics available.
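
aarch64 does have NEON intrinsics (arm_neon.h), so in principle each SSE2 intrinsic behind the wrapper could get a NEON equivalent. A very rough sketch of the idea, with simplified names rather than the real GSVector interface:

Code:
#include <cstdint>
#if defined(__aarch64__)
#include <arm_neon.h>
#else
#include <emmintrin.h>
#endif

// Simplified stand-in for a GSVector-style wrapper: the same small value
// class, with each SSE2 intrinsic swapped for its NEON equivalent when
// building for aarch64. Not the real GSdx interface, just the idea.
struct Vec4i
{
#if defined(__aarch64__)
    int32x4_t v;
    static Vec4i load(const int32_t* p)   { return { vld1q_s32(p) }; }
    Vec4i operator+(const Vec4i& r) const { return { vaddq_s32(v, r.v) }; }
    void store(int32_t* p) const          { vst1q_s32(p, v); }
#else
    __m128i v;
    static Vec4i load(const int32_t* p)   { return { _mm_loadu_si128(reinterpret_cast<const __m128i*>(p)) }; }
    Vec4i operator+(const Vec4i& r) const { return { _mm_add_epi32(v, r.v) }; }
    void store(int32_t* p) const          { _mm_storeu_si128(reinterpret_cast<__m128i*>(p), v); }
#endif
};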
#8
The ZZogl error isn't related to SSE2. ZZogl actually has a #define switch to turn off all the SSE2 code, which I am assuming works correctly. The error is at this line, when trying to use some GL 3.0 (I think) features, which is strange, since ZZogl correctly detects and creates a GL 4.4 context with GLX. Not sure of the source of this error. Is there an IRC room or similar I can drop into to ask some questions?
#9
#pcsx2dev is the IRC channel on EFnet.
#10
I haven't messed with the GFX stuff since I don't really care about it at this point.

Instead, I've started doing some work on writing new aarch64 recompilers. Work is being done on my personal branch here if you want to follow the progress. The core MIPS instruction set isn't actually that hard: having 3-operand instructions makes it a lot easier, as does having plenty of registers. I won't actually have it running for a while, since I'm still missing all of the recompiler glue code. I've copied the emitter code from Dolphin and done some refactoring as well.
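
As a concrete example of why the core set maps nicely: a MIPS ADDU rd, rs, rt translates almost one-to-one onto an AArch64 ADD, since both are 3-operand register ops. A toy sketch of emitting that, assuming the guest registers have already been mapped to host registers by an allocator (which is exactly the glue code that's still missing):

Code:
#include <cstdint>
#include <vector>

// AArch64 instructions are fixed-width 32-bit words.
struct CodeBuffer
{
    std::vector<uint32_t> code;
    void emit32(uint32_t op) { code.push_back(op); }
};

// Encode "ADD Wd, Wn, Wm" (ADD shifted-register, 32-bit variant, shift 0).
static uint32_t a64_add_w(unsigned rd, unsigned rn, unsigned rm)
{
    return 0x0B000000u | (rm << 16) | (rn << 5) | rd;
}

// Toy recompile of MIPS "ADDU rd, rs, rt". The register numbers here are
// *host* registers after mapping; the mapping itself (register allocation,
// flushing, etc.) is the part that doesn't exist yet.
static void recompile_ADDU(CodeBuffer& buf, unsigned rd, unsigned rs, unsigned rt)
{
    buf.emit32(a64_add_w(rd, rs, rt));   // one guest op -> one host op
}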



