
Re: Question about (and problem with) pflash data access


From: Philippe Mathieu-Daudé
Subject: Re: Question about (and problem with) pflash data access
Date: Thu, 13 Feb 2020 00:50:20 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.4.1

Cc'ing Paolo and Alexey.

On 2/13/20 12:09 AM, Guenter Roeck wrote:
On Wed, Feb 12, 2020 at 10:39:30PM +0100, Philippe Mathieu-Daudé wrote:
Cc'ing Jean-Christophe and Peter.

On 2/12/20 7:46 PM, Guenter Roeck wrote:
Hi,

I have been playing with pflash recently. For the most part it works,
but I do have an odd problem when trying to instantiate pflash on sx1.

My data file looks as follows.

0000000 0001 0000 aaaa aaaa 5555 5555 0000 0000
0000020 0000 0000 0000 0000 0000 0000 0000 0000
*
0002000 0002 0000 aaaa aaaa 5555 5555 0000 0000
0002020 0000 0000 0000 0000 0000 0000 0000 0000
*
0004000 0003 0000 aaaa aaaa 5555 5555 0000 0000
0004020 0000 0000 0000 0000 0000 0000 0000 0000
...

In the sx1 machine, this becomes:

0000000 6001 0000 aaaa aaaa 5555 5555 0000 0000
0000020 0000 0000 0000 0000 0000 0000 0000 0000
*
0002000 6002 0000 aaaa aaaa 5555 5555 0000 0000
0002020 0000 0000 0000 0000 0000 0000 0000 0000
*
0004000 6003 0000 aaaa aaaa 5555 5555 0000 0000
0004020 0000 0000 0000 0000 0000 0000 0000 0000
*
...
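
(For what it's worth, the original flash.32M.test image looks like it could
have been generated along these lines. This is only a sketch reconstructed
from the od dump above; the actual generator isn't shown in this thread,
and the 1024-byte block size is inferred from the octal od offsets:)

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* 0002000 (octal) == 1024 bytes between numbered blocks; 32 MiB total. */
    enum { BLOCK = 1024, TOTAL = 32 * 1024 * 1024 };
    FILE *f = fopen("flash.32M.test", "wb");
    uint8_t buf[BLOCK];

    if (!f) {
        perror("flash.32M.test");
        return 1;
    }
    for (uint32_t i = 1; i <= TOTAL / BLOCK; i++) {
        memset(buf, 0, sizeof(buf));
        memcpy(buf, &i, sizeof(i));      /* block number, assumes an LE host */
        memset(buf + 4, 0xaa, 4);
        memset(buf + 8, 0x55, 4);
        fwrite(buf, sizeof(buf), 1, f);
    }
    fclose(f);
    return 0;
}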

pflash is instantiated with "-drive file=flash.32M.test,format=raw,if=pflash".

I haven't had much success with pflash tracing; data accesses don't
show up there.

I did find a number of problems with the sx1 emulation, but I have no clue
what is going on with pflash. As far as I can see, pflash works fine on
other machines. Can someone give me a hint about what to look for?

This is specific to the SX1, introduced in commit 997641a84ff:

    static uint64_t static_read(void *opaque, hwaddr offset,
                                unsigned size)
    {
        uint32_t *val = (uint32_t *) opaque;
        uint32_t mask = (4 / size) - 1;

        return *val >> ((offset & mask) << 3);
    }
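
(For reference, a stand-alone sketch of what that handler returns for the
different access sizes; cs0val below is a made-up value rather than what
the board actually programs, and hwaddr is swapped for uint64_t so it
compiles outside QEMU:)

#include <stdint.h>
#include <stdio.h>

/* Same shift logic as the SX1 handler quoted above. */
static uint64_t static_read(void *opaque, uint64_t offset, unsigned size)
{
    uint32_t *val = (uint32_t *) opaque;
    uint32_t mask = (4 / size) - 1;

    return *val >> ((offset & mask) << 3);
}

int main(void)
{
    uint32_t cs0val = 0x00001234;   /* hypothetical chip-select value */

    for (unsigned size = 1; size <= 4; size *= 2) {
        for (uint64_t off = 0; off < 4; off += size) {
            printf("size=%u offset=%u -> 0x%x\n", size, (unsigned)off,
                   (unsigned)static_read(&cs0val, off, size));
        }
    }
    return 0;
}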

This is only a guess, but it looks like some hardware parity scheme; I
imagine you would need to write the parity bits into your flash.32M file
before starting QEMU, and then it would appear "normal" within the guest.

I thought this might be related, but that is not the case. I added log
messages, and even ran the code in gdb. static_read() and static_write()
are not executed.

Also,

     memory_region_init_io(&cs[0], NULL, &static_ops, &cs0val,
                           "sx1.cs0", OMAP_CS0_SIZE - flash_size);
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^
     memory_region_add_subregion(address_space,
                                 OMAP_CS0_BASE + flash_size, &cs[0]);
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^

suggests that the code is only executed for memory accesses _after_
the actual flash. The memory tree is:

memory-region: system
   0000000000000000-ffffffffffffffff (prio 0, i/o): system
     0000000000000000-0000000001ffffff (prio 0, romd): omap_sx1.flash0-1
     0000000000000000-0000000001ffffff (prio 0, rom): omap_sx1.flash0-0

Eh, two memory regions with the same size and the same priority... Is this legal?

(qemu) info mtree -f -d
FlatView #0
 AS "memory", root: system
 AS "cpu-memory-0", root: system
 Root memory region: system
  0000000000000000-0000000001ffffff (prio 0, romd): omap_sx1.flash0-1
  0000000002000000-0000000003ffffff (prio 0, i/o): sx1.cs0
  0000000004000000-0000000007ffffff (prio 0, i/o): sx1.cs1
  0000000008000000-000000000bffffff (prio 0, i/o): sx1.cs3
  0000000010000000-0000000011ffffff (prio 0, ram): omap1.dram
  0000000020000000-000000002002ffff (prio 0, ram): omap1.sram
  ...
  Dispatch
    Physical sections
      #0 @0000000000000000..ffffffffffffffff (noname) [unassigned]
      #1 @0000000000000000..0000000001ffffff omap_sx1.flash0-1 [not dirty]
      #2 @0000000002000000..0000000003ffffff sx1.cs0 [ROM]
      #3 @0000000004000000..0000000007ffffff sx1.cs1 [watch]
      #4 @0000000008000000..000000000bffffff sx1.cs3
      #5 @0000000010000000..0000000011ffffff omap1.dram
      #6 @0000000020000000..000000002002ffff omap1.sram
      ...
    Nodes (9 bits per level, 6 levels) ptr=[3] skip=4
      [0]
          0       skip=3  ptr=[3]
          1..511  skip=1  ptr=NIL
      [1]
          0       skip=2  ptr=[3]
          1..511  skip=1  ptr=NIL
      [2]
          0       skip=1  ptr=[3]
          1..511  skip=1  ptr=NIL
      [3]
          0       skip=1  ptr=[4]
          1       skip=1  ptr=[5]
          2       skip=2  ptr=[7]
          3..13   skip=1  ptr=NIL
         14       skip=2  ptr=[9]
         15       skip=2  ptr=[11]
         16..511  skip=1  ptr=NIL
      [4]
          0..63   skip=0  ptr=#1
         64..127  skip=0  ptr=#2
        128..255  skip=0  ptr=#3
        256..383  skip=0  ptr=#4
        384..511  skip=1  ptr=NIL

So the romd wins.

     0000000002000000-0000000003ffffff (prio 0, i/o): sx1.cs0
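
(Plugging in the numbers: assuming OMAP_CS0_BASE is 0x00000000 and
OMAP_CS0_SIZE is 64 MiB, values inferred from the flat view rather than
checked in the sources, the sx1.cs0 region indeed covers only the hole
after the 32 MiB flash:)

#include <stdio.h>

int main(void)
{
    unsigned long omap_cs0_base = 0x00000000;          /* assumed */
    unsigned long omap_cs0_size = 64ul * 1024 * 1024;  /* assumed, 0x04000000 */
    unsigned long flash_size    = 32ul * 1024 * 1024;  /* 0x02000000 */

    /* memory_region_init_io(..., OMAP_CS0_SIZE - flash_size) mapped at
     * OMAP_CS0_BASE + flash_size yields: */
    printf("sx1.cs0: 0x%08lx-0x%08lx\n",
           omap_cs0_base + flash_size,
           omap_cs0_base + omap_cs0_size - 1);
    /* -> sx1.cs0: 0x02000000-0x03ffffff, so static_read()/static_write()
     * never see accesses to the flash itself. */
    return 0;
}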

I thought that the dual memory assignment (omap_sx1.flash0-1 and
omap_sx1.flash0-0) might play a role, but removing that didn't make
a difference either (not that I have any idea what it is supposed
to be used for).

Thanks,
Guenter
