qemu-devel



From: mar.krzeminski
Subject: Re: [Qemu-devel] [PATCH] [M25P80] Make sure not to overrun the internal data buffer.
Date: Fri, 30 Dec 2016 16:39:47 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.5.1



On 27.12.2016 at 18:08, Jean-Christophe DUBOIS wrote:
You can find a more detailed procedure on how to run Xvisor on Qemu Sabrelite (with Linux guests if you wish) at the following URL.

https://github.com/avpatel/xvisor-next/blob/master/docs/arm/imx6-sabrelite.txt

You don't need to start the guest to see the crash. Just boot Xvisor ...

JC

On 24/12/2016 at 19:12, Jean-Christophe DUBOIS wrote:
On 24/12/2016 at 19:04, mar.krzeminski wrote:


On 24.12.2016 at 18:41, Jean-Christophe DUBOIS wrote:
On 24/12/2016 at 18:18, mar.krzeminski wrote:
Hello,

On 24.12.2016 at 16:11, Jean-Christophe Dubois wrote:
It did happen that the internal data buffer was overrun, leading to a Qemu
crash (in particular while emulating the i.MX6 sabrelite board).

This patch makes sure the data array is not overrun, allowing the
sabrelite emulation to run without crashing.

Signed-off-by: Jean-Christophe Dubois <address@hidden>
---
  hw/block/m25p80.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
index d29ff4c..a1c4e5d 100644
--- a/hw/block/m25p80.c
+++ b/hw/block/m25p80.c
@@ -1117,7 +1117,7 @@ static uint32_t m25p80_transfer8(SSISlave *ss, uint32_t tx)
         s->data[s->len] = (uint8_t)tx;
         s->len++;
 
-        if (s->len == s->needed_bytes) {
+        if ((s->len >= s->needed_bytes) || (s->len >= sizeof(s->data))) {
             complete_collecting_data(s);
         }
         break;
Do you have the exact scenario that caused the problem?

When booting Xvisor (http://xhypervisor.org/) on top of Qemu emulated Sabrelite.

During the boot Qemu would segfault while writing to the SPI flash.
Thanks, I'll try to take a look.

Once you have built Xvisor for "generic ARMv7" you can run the following command.

qemu-system-arm -M sabrelite -display none -serial null -serial stdio -kernel ./build/vmm.bin -initrd ./build/vmm.bin -dtb ./build/arch/arm/board/generic/dts/imx6/sabrelite-a9/one_guest_sabrelite-a9.dtb

I got some time and reproduced the problem. Here are some logs with m25p80 debugs:
: decode_new_cmd: decoded new command:9f
: decode_new_cmd: populated jedec code
: decode_new_cmd: decoded new command:0
: decode_new_cmd: decoded new command:0 //Getting the flash ID in the above 4 lines -> OK (but CS is missing)
Found sst25vf016b compatible flash device
: decode_new_cmd: decoded new command:6 //Write enable, a command without payload, so it is OK
: decode_new_cmd: decoded new command:1 //Write to status register, the guest sends data
INFO: spi0.0: sst25vf016b (2048 Kbytes)
INFO: spi0.0: mtd
  .name = spi0.0,
  .size = 0x200000 (2MiB)
  .erasesize = 0x00001000 (4KiB)
  .numeraseregions = 0
Segmentation fault (core dumped) //Here the guest probably tries to send some data

The root cause of m25p80 entering this strange state is that the CS line is never selected/deselected at all - the debug output from m25p80_cs is missing. In an SPI transfer, the CS line (here a qemu_irq) should be 0 before the beginning of every message, and set after the end of the transmission.
In the case of a simple WREN command you should see something like this:
: m25p80_cs: deselect
: decode_new_cmd: decoded new command:6
: m25p80_cs: select

Can you check spi controller model code?

Thanks,
Marcin

You can also run Qemu under Valgrind, which will pinpoint the problem.

JC


Generally it should not happen.

The fact is that there is no protection to make sure the data array is not overrun.
Yes. IMHO it would be nice to log an error here and reset the state machine instead
of going to the next state.

Maybe it should not happen, but it did happen in this case ...
Yep, but this means m25p80's state machine goes nuts. The overflow is just a symptom
that something wrong is going on.

Thanks,
Marcin

JC



Thanks,
Marcin