From: Artyom Tarasenko
Subject: [Qemu-devel] Leon3 is broken since 6281f7d11
Date: Wed, 25 Jan 2012 12:06:06 +0100

The Leon3 machine is broken in the current git master.

Bisecting shows the following:

6281f7d11fa6bfb6da3926359fbe70684e582cb1 is the first bad commit
commit 6281f7d11fa6bfb6da3926359fbe70684e582cb1
Author: Avi Kivity <address@hidden>
Date:   Mon Nov 14 13:10:13 2011 +0200

    grlib_apbuart: convert to memory API

    Signed-off-by: Avi Kivity <address@hidden>
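
For anyone not following the memory API work, here is a minimal sketch of
what such a conversion looks like, assuming the current MemoryRegion API
(the names below are illustrative, not the actual patch):

    /* handlers now receive the access size explicitly */
    static uint64_t apbuart_read(void *opaque, target_phys_addr_t addr,
                                 unsigned size)
    {
        /* a real device decodes 'addr' against its registers here */
        return 0;
    }

    static void apbuart_write(void *opaque, target_phys_addr_t addr,
                              uint64_t value, unsigned size)
    {
        /* likewise for register writes */
    }

    static const MemoryRegionOps apbuart_ops = {
        .read       = apbuart_read,
        .write      = apbuart_write,
        .endianness = DEVICE_NATIVE_ENDIAN,
    };

    /* in the device init function, replacing cpu_register_io_memory() */
    memory_region_init_io(&uart->iomem, &apbuart_ops, uart,
                          "uart", UART_REG_SIZE);
    sysbus_init_mmio(dev, &uart->iomem);

Accesses that fall outside any such region are what show up as
'Unassigned mem ...' below.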



I've asked the author's permission to publish the test program. Until he
responds, here is some preliminary analysis. Before the memory API
conversion, running the test program (with unassigned-access debugging
turned on) looked like this:
$ sparc-softmmu/qemu-system-sparc -M leon3_generic -kernel ravenscar-test

Unassigned mem read access of 4 bytes to 0000000000000108 from 400080d0
Unassigned mem write access of 1 byte to 0000000040032af5 asi 0x01 from 40003c4c
Unassigned mem read access of 2 bytes to 0000000000000212 from 40006ca8
...

Currently it looks like this:
$ sparc-softmmu/qemu-system-sparc -M leon3_generic -kernel ravenscar-test

Unassigned mem write access of 1 byte to 0000000040032af5 asi 0x01 from
<hang>

It looks like QEMU produces fewer 'unassigned mem read access' messages
than before.
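
If I read memory.c right, such accesses are now routed through the
unassigned_mem_* handlers, which forward to the per-target hook, roughly
like this (reconstructed from memory, so take the details with a grain of
salt):

    static uint64_t unassigned_mem_read(void *opaque,
                                        target_phys_addr_t addr,
                                        unsigned size)
    {
    #ifdef DEBUG_UNASSIGNED
        printf("Unassigned mem read " TARGET_FMT_plx "\n", addr);
    #endif
    #if defined(TARGET_SPARC)
        /* this is what produces the messages quoted above */
        cpu_unassigned_access(cpu_single_env, addr, 0, 0, 0, size);
    #endif
        return 0;
    }

So if the converted UART region now claims accesses that previously fell
through to this path, the corresponding messages disappear.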

Unfortunately, there also seems to be a bug in the logging: I don't see
what could have triggered the unassigned access at 400080d0, but there is
a good candidate at 400080d8:

IN: system__bb__peripherals__initialize_uart
0x400080d0:  sethi  %hi(0x80000000), %g1
0x400080d4:  or  %g1, 0x108, %g1        ! 0x80000108
0x400080d8:  ld  [ %g1 ], %g1
0x400080dc:  btst  1, %g1
0x400080e0:  bne  0x400081a4
0x400080e4:  btst  2, %g1
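
A possible explanation for the bogus PC: the SPARC hook behind these
messages prints env->pc directly, roughly like this (again reconstructed
from memory):

    #ifdef DEBUG_UNASSIGNED
        printf("Unassigned mem %s access of %d byte%s to " TARGET_FMT_plx
               " from " TARGET_FMT_lx "\n",
               is_exec ? "exec" : is_write ? "write" : "read",
               size, size == 1 ? "" : "s", addr, env->pc);
    #endif

If env->pc is not resynchronized from the translated code before the hook
runs, it can still point at an earlier instruction of the same translation
block, and 0x400080d0 is the first instruction of the block containing the
ld at 0x400080d8. Note also that the logged address 0000000000000108 has
lost the high bits set up by the sethi/or pair (0x80000108), so the 'to'
address looks truncated as well.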


Artyom
-- 
Regards,
Artyom Tarasenko

solaris/sparc under qemu blog: http://tyom.blogspot.com/search/label/qemu


