Poll: x64 build faster on x64 OS?
Yes: 5 votes (55.56%)
No: 4 votes (44.44%)
Total: 9 vote(s) (100%)

Could you explain in simple words why an x64 PCSX2 wouldn't be faster than an x86 build?
#21
It's not off topic; the point is that the reason isn't the actual code, it's the model.
Imagination is where we are truly real
Reply

#22
(08-02-2010, 01:02 AM)nosisab Ken Keleh Wrote: I fear not, Air. A great part of its kernel was 16-bit, and the whole thing was constructed over the MS-DOS base architecture. ME (Millennium Edition) was the first attempt at pure 32-bit code, and it was ill implemented. XP became the "de facto" 32-bit OS, and even so it became really stable only after many updates and the SPs.

No no no. The entire kernel was 32-bit code. It *had* to be. What Millennium tried to do was change the protection model on the drivers, which resulted in breaking lots of existing drivers if they didn't properly conform to Microsoft's driver rules (and as everyone found out, lots of drivers didn't, which explains why there were so many problems with Windows stability).

Seriously, the system would not be able to function if the kernel wasn't 32-bit.

The fact that it had a 16-bit MS-DOS underneath was irrelevant. Like any decent 32-bit app of the DOS days, Win9x completely supplanted the DOS layer with its own kernel and driver system (think about how Linux could be booted from a DOS prompt using LILO for DOS). By Win98 the only code that called the DOS layer was old 16-bit DOS apps and various poorly coded drivers. The DOS layer was only there to provide speedy and simple compatibility for legacy apps (which, btw, could destroy the system quickly and easily because the 16-bit subsystems had no memory protection model). Again, the problem wasn't really the 16/32-bit code...

the problem was that the old DOS was a system-wide resource, and could muck up and crash all processes and the Windows kernel. This has nothing to do with it being 32-bit code or not. It has to do with basic operating system design.

In Windows 2000, they removed the old 16-bit DOS and provided an emulated DOS implementation that ran as a protected process. Since it was process-specific, the emulated DOS, when abused, could only muck up its own process. Anything else would cause a GPF and close the app, leaving the rest of the system safe and secure. This was, of course, a moderate speed hit compared to the Win9x approach, and it required a lot more memory when running said DOS/16-bit apps. But otherwise it was an important step toward a secure OS.

Linux was lucky; it was pretty well developed using 32-bit protected mode from the ground up. This isn't saying that 32-bit allows for a more secure OS; what it means is that Linux never really had to worry about providing backwards compatibility to an older and highly insecure 16-bit operating system. What allows for a more secure OS is the memory protection features built into the CPU, and it just happens that such features were introduced into the Intel world along with the rollout of the 32-bit 80386 (the 80286 actually had some rudimentary memory protection and memory mapping features, but they are nearly unusable).


Bottom line: x32 and x64 mean almost nothing to system stability. System stability is about memory protection (GPF and DEP) and good management of processes, drivers, and threads. These are things that have been in continual development in both Windows and Linux for decades. They are extremely complicated sciences, and they depend a lot on hardware features built into the CPUs. Those features are, however, equally available regardless of the CPU being in x32 or x64 mode.
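For illustration, here's a minimal sketch of that protection at work (hypothetical demo code assuming a POSIX system, not anything from PCSX2): the CPU traps the illegal write, and the OS confines the fault to the one process that caused it.

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static sigjmp_buf recover;

static void on_segv(int) {
    // The CPU's page fault was delivered to this process only; the kernel
    // and every other process carry on untouched.
    siglongjmp(recover, 1);
}

int main() {
    signal(SIGSEGV, on_segv);

    // Map one page we are allowed to read but not write.
    const long page = sysconf(_SC_PAGESIZE);
    void* mem = mmap(nullptr, page, PROT_READ,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) return 1;

    if (sigsetjmp(recover, 1) == 0) {
        static_cast<char*>(mem)[0] = 1;   // illegal write -> hardware fault
        puts("unreachable: no memory protection?");
    } else {
        puts("fault trapped; only this process noticed");
    }
    munmap(mem, page);
    return 0;
}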
Jake Stine (Air) - Programmer - PCSX2 Dev Team
Reply
#23
(08-02-2010, 04:58 PM)nosisab Ken Keleh Wrote: It's not off topic; the point is that the reason isn't the actual code, it's the model.

I still don't quite get it. What model?

Reading that chaotic essay again doesn't show me what you're talking about.

The thing with sequential or parallel processing is interesting.

It needs a task scheduler running as the main thread that pushes tasks to whichever thread/core is free.
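Something like this minimal sketch, say (hypothetical C++11 code, not PCSX2's actual implementation; the class name and layout are mine): the main thread pushes tasks into a shared queue, and whichever worker thread/core is free pops the next one.

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class TaskScheduler {
public:
    explicit TaskScheduler(unsigned workers) {
        for (unsigned i = 0; i < workers; ++i)
            pool.emplace_back([this] { run(); });
    }
    ~TaskScheduler() {
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_all();
        for (auto& t : pool) t.join();
    }
    // Called from the main thread: hand a task to whichever core is free.
    void push(std::function<void()> task) {
        { std::lock_guard<std::mutex> lk(m); tasks.push(std::move(task)); }
        cv.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [this] { return done || !tasks.empty(); });
                if (done && tasks.empty()) return;
                task = std::move(tasks.front());
                tasks.pop();
            }
            task();   // the sync between dependent tasks is the hard part
        }
    }
    std::vector<std::thread> pool;
    std::queue<std::function<void()>> tasks;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
};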

The problem is the sync, or breaking the processing into portions that can be computed 'in time'. What if a task takes a little longer to complete? Execute out of order and you get bugs, because there might be data that depends on something computed by another component. Or just stall the whole system to wait for the task to finish. I think the PS2 and its programmers managed that for the specific platform, but it should be a little difficult to emulate because the cycles and such are completely different.

Example: how do you break a recompiled VU program with a loop into a portion that can be executed 'in time', keeping all the data, variables, loop counters and such so it can be finished at a later moment? I'm not even thinking about out-of-order processing of those parts on 2 cores, maybe because this might break the whole ***** to shreds.
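One conceivable shape for it (a purely hypothetical illustration, not how PCSX2's VU recompiler actually works; the struct and constants are made up): keep every register and the program counter in a context struct, charge each instruction against a per-slice cycle budget, and return with the state intact when the budget runs out.

#include <cstdint>

struct VUContext {
    float    vf[32][4];   // vector registers; values survive between slices
    uint16_t vi[16];      // integer registers, including loop counters
    uint32_t pc;          // where to resume inside the microprogram
    int64_t  cycles;      // budget remaining in the current slice
};

// Run until the program ends or the slice budget is spent.
// Returns true when there is still work left (call again later).
bool ExecuteSlice(VUContext& ctx) {
    while (ctx.cycles > 0) {
        // ... decode and execute the instruction pair at ctx.pc here,
        // updating ctx.vf / ctx.vi and charging its real cost ...
        ctx.cycles -= 1;        // placeholder: one cycle per instruction
        ctx.pc += 8;            // one 64-bit upper/lower instruction pair
        if (ctx.pc >= 0x4000)   // placeholder end-of-program check
            return false;       // finished; nothing to resume
    }
    return true;                // budget spent mid-loop; ctx keeps the
                                // counters and data for the next slice
}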

Is that what you meant?
Reply
#24
@Air
Actually, computing development is something "I was there" for: some decades on the system support and administration staff of a government agency (a CPD, i.e. a datacenter) that began with an ancient /370 4341 mainframe and a few dozen dumb terminals under CICS, in an environment of thousands of employees across many secretariats, departments and sectors.

That environment grew, eventually replacing those terminals with 8086-based MS-DOS machines and all the problems that brought in terms of data security. That by itself forced us to understand how the OS worked, so as to implement system and local policies meant to protect against external and internal security breaches.

From there, the various Windows flavors eventually replaced the former, and the hardware platform changed too, including trading the old mainframe for minicomputers based on the AS/400. By the Win9x era it was already an NT-based multiserver environment, still mainframe-centralized, over a platform of some hundreds of workstations spread all over the city. Through all these changes, the concern not only with security itself but with "production" continuity was taken seriously. In such an environment hangs and crashes can be a nasty thing, and knowing the OS "internals" is a must. Of course we had almost every manual MS produced above the user-manual level, and that helped a lot.

But no good can come from keeping a discussion in terms of proving one another wrong. We are both right to an extent: part of the kernel in Win9x was 32-bit and part was 16-bit... and that part was the one that made the BSOD so "popular" :)

It was meant as an informative bit and is irrelevant otherwise. I'm a Linuxer at heart too, since kernel 1.0 or so, that time before Caldera branched into Slackware and SCO (arghh), those "good times" when the kernel began being modularized and we struggled with winmodems, before the userspace methodology came to stay... and I'm still a Linux lover today.

I might even still code in a bunch of languages, from the ancient COBOL, FORTRAN and PL/I... passing through /370, Z80, 6502 and x86 assembly (with a big gap at ia64/amd64, sadly)... to some more current things, like the not-so-new Pascal, Perl and Python, and of course C and C++, although certainly more than a bit rusty, I'm sure. I'm retired today, in that comfortable numbness of leisure.

@Haxor
I mean the same to you; my posts seem to attract reactions when I only want to be informative.
You are not wrong there; those points are indeed the main restraints on trying to implement multithreading in a pipeline model, although some of the problems you pointed out are worsened in this very model, like one stalled task stalling the whole pipe.

The point being, there is no actual advantage in porting the code to 64 bits while keeping that pipeline model. The part about the task scheduler is moving toward the "tree" model. PCSX2 already "branched" into using two cores under the code's control, although the result is still piped. And neither you nor I disagree with that, from what I see.
Imagination is where we are truly real
Reply



