
Re: Question about (and problem with) pflash data access


From: Guenter Roeck
Subject: Re: Question about (and problem with) pflash data access
Date: Thu, 13 Feb 2020 06:26:47 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.4.1

On 2/13/20 1:51 AM, Paolo Bonzini wrote:
> On 13/02/20 08:40, Alexey Kardashevskiy wrote:
>>
>> memory-region: system
>>     0000000000000000-ffffffffffffffff (prio 0, i/o): system
>>       0000000000000000-0000000001ffffff (prio 0, romd): omap_sx1.flash0-1
>>       0000000000000000-0000000001ffffff (prio 0, rom): omap_sx1.flash0-0
>>
>> Eh, two memory regions with the same size and the same priority... Is this legal?
>
> I'd say yes if used with memory_region_set_enabled() to make sure only
> one is enabled. Having both enabled is weird and we should print a
> warning.

Yeah, it's undefined which one becomes visible.
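
For reference, my understanding of the pattern Paolo describes is roughly
the following (a sketch with made-up names, not actual board code):

/*
 * Sketch only: two regions mapped at the same address and priority,
 * with memory_region_set_enabled() ensuring that at most one of them
 * is visible in the flatview at any time.  "s->flash_romd",
 * "s->flash_rom", "sysmem" and FLASH_BASE are illustrative names.
 */
memory_region_add_subregion_overlap(sysmem, FLASH_BASE, &s->flash_romd, 0);
memory_region_add_subregion_overlap(sysmem, FLASH_BASE, &s->flash_rom, 0);

memory_region_set_enabled(&s->flash_romd, true);
memory_region_set_enabled(&s->flash_rom, false);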


I have a patch fixing that, resulting in

(qemu) info mtree -f
FlatView #0
 AS "I/O", root: io
 Root memory region: io
  0000000000000000-000000000000ffff (prio 0, i/o): io

FlatView #1
 AS "memory", root: system
 AS "cpu-memory-0", root: system
 Root memory region: system
  0000000000000000-0000000001ffffff (prio 0, romd): omap_sx1.flash0
  0000000002000000-0000000003ffffff (prio 0, i/o): sx1.cs0
  0000000004000000-0000000007ffffff (prio 0, i/o): sx1.cs1
  0000000008000000-000000000bffffff (prio 0, i/o): sx1.cs2
  000000000c000000-000000000fffffff (prio 0, i/o): sx1.cs3
  ...

but unfortunately that doesn't fix my problem. The data in the
omap_sx1.flash0 region is as wrong as before.
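
For what it's worth, the change boils down to making sure only the pflash
device's own romd region stays mapped, along these lines (a sketch, not the
actual patch; "s->flash_rom" and "sysmem" are illustrative names):

/* Drop the duplicate ROM mapping so only the romd region created by
 * the pflash device remains visible in the flatview. */
memory_region_del_subregion(sysmem, &s->flash_rom);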

What really puzzles me is that there is no trace output for
flash data accesses (trace_pflash_data_read and trace_pflash_data_write),
meaning the actual flash data access must be handled elsewhere.
Can someone give me a hint where that might be?
Clearly I am missing something about the inner workings of QEMU.
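
To illustrate what I expected: conceptually those trace points sit in the
device's MMIO callbacks, i.e. something like the simplified sketch below
(not the actual hw/block/pflash_cfi01.c code; the trace call is only
indicated as a comment since I am abbreviating its arguments):

/* Simplified sketch of the MMIO read path I expected to see hit. */
static uint64_t pflash_mem_read(void *opaque, hwaddr addr, unsigned size)
{
    PFlashCFI01 *pfl = opaque;
    uint64_t ret = 0;

    /* ... command/status state machine ... */

    /* trace_pflash_data_read(...) would fire here for plain data reads */
    return ret;
}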

Thanks,
Guenter


