[Qemu-devel] [PATCH 0/4] tcg-hppa finish, v4


From: Richard Henderson
Subject: [Qemu-devel] [PATCH 0/4] tcg-hppa finish, v4
Date: Wed, 7 Apr 2010 16:29:11 -0700

On 04/07/2010 04:56 AM, Aurelien Jarno wrote:
> Sorry, I haven't found time to review it in detail. It would also be nice
> if someone could try it on an hppa machine and ack it.

I got an ack against v3 here:

http://lists.gnu.org/archive/html/qemu-devel/2010-03/msg01214.html

This isn't just written to a spec; I do have access to an hppa machine:

address@hidden:~$ cat /proc/cpuinfo 
processor       : 0
cpu family      : PA-RISC 2.0
cpu             : PA8600 (PCX-W+)
cpu MHz         : 552.000000
model           : 9000/785/J6000
model name      : Duet W+
hversion        : 0x00005d40
sversion        : 0x00000491
I-cache         : 512 KB
D-cache         : 1024 KB (WB, direct mapped)
ITLB entries    : 160
DTLB entries    : 160 - shared with ITLB
bogomips        : 1101.00
software id     : 2011956991

Test results for linux-user-test-0.3:

ok:     i386 arm armeb sparc ppc mips mipsel
bad:    sparc32plus ppc64abi32 sh4 sh4eb x86_64

Of course, sparc32plus doesn't work on an i386 host either -- I think
the implementation of fstat is wrong.

I don't have any system emulations set up on the hppa machine yet.

Changes v3->v4:
  * Rebase against HEAD.
  * Fix earlyclobber in add2/sub2 (see the sketch below).
  * Note problem with indirect calls, which we don't actually use.
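For anyone curious about the add2/sub2 item: TCG's double-word add2/sub2
ops produce two outputs that the register allocator may assign on top of
the inputs, so the backend must finish using the high-half inputs before
it writes the low-half result (or constrain the outputs as earlyclobber).
The C sketch below only illustrates that ordering concern; it is not the
actual tcg/hppa/tcg-target.c code, and the add2() helper is hypothetical.

/* Illustrative only: a 64-bit add done as two 32-bit halves, showing
   why the low half and its carry must be computed before anything is
   written back when an output may alias an input register. */
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: (*out_lo, *out_hi) = (a_hi:a_lo) + (b_hi:b_lo). */
static void add2(uint32_t *out_lo, uint32_t *out_hi,
                 uint32_t a_lo, uint32_t a_hi,
                 uint32_t b_lo, uint32_t b_hi)
{
    uint32_t lo = a_lo + b_lo;
    uint32_t carry = lo < a_lo;        /* unsigned wraparound => carry out */
    uint32_t hi = a_hi + b_hi + carry;

    /* Only now store the results; the analogous rule in the backend is
       "don't clobber a_hi/b_hi until the high half has been computed". */
    *out_lo = lo;
    *out_hi = hi;
}

int main(void)
{
    uint32_t lo, hi;
    add2(&lo, &hi, 0xffffffffu, 1, 1, 2);
    printf("0x%08x%08x\n", hi, lo);    /* prints 0x0000000400000000 */
    return 0;
}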


r~


Richard Henderson (4):
  tcg-hppa: Compute is_write in cpu_signal_handler.
  tcg-hppa: Finish the port.
  tcg-hppa: Fix in/out register overlap in add2/sub2.
  tcg-hppa: Don't try calls to non-constant addresses.

 configure             |    5 +-
 cpu-exec.c            |   38 +-
 tcg/hppa/tcg-target.c | 1790 ++++++++++++++++++++++++++++++++++---------------
 tcg/hppa/tcg-target.h |  143 +----
 4 files changed, 1323 insertions(+), 653 deletions(-)
