From: Markus Armbruster
Subject: Re: [Qemu-devel] [PATCH v7] pflash: Require backend size to match device, improve errors
Date: Sat, 09 Mar 2019 10:20:58 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/26.1 (gnu/linux)

Philippe Mathieu-Daudé <address@hidden> writes:

> On 3/8/19 4:40 PM, Kevin Wolf wrote:
>> Am 08.03.2019 um 15:29 hat Markus Armbruster geschrieben:
>>> Kevin Wolf <address@hidden> writes:
>>>
>>>> Am 08.03.2019 um 13:28 hat Markus Armbruster geschrieben:
>>>>> Laszlo Ersek <address@hidden> writes:
>>>>>> This one has got to be one of the longest bike-shedding sessions! :)
>>>>>>
>>>>>> I'm fine with this patch, but I could suggest two improvements.
>>>>>>
>>>>>> (1) When blk_getlength() fails, we could format the negative error code
>>>>>> returned by it into the error message.
>>>>>
>>>>> I can do that.
>>>>
>>>> By using error_setg_errno(), I assume. Not throwing away error details
>>>> is always good.
>>>>
>>>>>> (2) We could extract the common code to a new function in
>>>>>> "hw/block/block.c". (It says "Common code for block device models" on
>>>>>> the tin.)
>>>>>
>>>>> There's so much common code in these two files even before this patch...
>>>>
>>>> My understanding is that hw/block/block.c contains code that is
>>>> potentially useful to all kinds of block devices, not random code that
>>>> two specific similar devices happen to share.
>>>>
>>>> If we want to deduplicate some code in the flash devices, without any
>>>> expectation that other devices will use it at some point, I'd rather
>>>> create a new source file hw/block/pflash_common.c or something like
>>>> that.
>>>
>>> Yes.
>>>
>>> The helper I came up with (appended) isn't really specific to flash
>>> devices.  Would it be okay for hw/block/block.c even though only the two
>>> flash devices use it for now?
>> 
>> Hm, it feels more like a helper for devices that can't decide whether
>> they want to be a block device or not. Or that actually don't want to be
>> a block device, but use a BlockBackend anyway. Reading in the whole
>> image isn't something that a normal block device would do.
>> 
>> But yes, it doesn't have flash-specific knowledge, even though I hope
>> that it's functionality that will remain very specific to these two
>> devices.
>> 
>> So it's your call, I don't have a strong opinion either way.
>> 
>>>
>>> bool blk_check_size_and_read_all(BlockBackend *blk, void *buf, hwaddr size,
>>>                                  Error **errp)
>>> {
>>>     int64_t blk_len;
>>>     int ret;
>>>
>>>     blk_len = blk_getlength(blk);
>>>     if (blk_len < 0) {
>>>         error_setg_errno(errp, -blk_len,
>>>                          "can't get size of block backend '%s'",
>>>                          blk_name(blk));
>>>         return false;
>>>     }
>>>     if (blk_len != size) {
>>>         error_setg(errp, "device requires %" PRIu64 " bytes, "
>>>                    "block backend '%s' provides %" PRIu64 " bytes",
>>>                    size, blk_name(blk), blk_len);
>> 
>> Should size use HWADDR_PRIu?
>> 
>> I'm not sure if printing the BlockBackend name is a good idea because
>> hopefully one day the BlockBackend will be anonymous even for the flash
>> devices.
>> 
>>>         return false;
>>>     }
>>>
>>>     /* TODO for @size > BDRV_REQUEST_MAX_BYTES, we'd need to loop */
>>>     assert(size <= BDRV_REQUEST_MAX_BYTES);
>> 
>> I don't think we'd ever want to read more than 2 GB into a memory
>> buffer. Before we even get close to this point, the devices should be
>> reworked to be more like an actual block device and read only what is
>> actually accessed.
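
For illustration, the loop that the TODO hints at could look roughly
like this (a sketch only, assuming the same blk_pread() signature as in
the helper below; MIN comes from "qemu/osdep.h"):

    /* Sketch: read @size bytes in chunks of at most
     * BDRV_REQUEST_MAX_BYTES instead of asserting that @size fits. */
    for (hwaddr off = 0; off < size; off += BDRV_REQUEST_MAX_BYTES) {
        int64_t len = MIN(size - off, BDRV_REQUEST_MAX_BYTES);
        ret = blk_pread(blk, off, (uint8_t *)buf + off, len);
        if (ret < 0) {
            error_setg_errno(errp, -ret, "can't read block backend '%s'",
                             blk_name(blk));
            return false;
        }
    }
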
>
> The biggest NOR flash available on the market is 256 MiB (bigger sizes
> are barely chip-select MMIO-addressable).
>
> Maybe you can use:
>
> #define NOR_FLASH_MAX_BYTES (256 * MiB)
>
> and refuse bigger flashes.
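
For illustration, such a cap could be a sketch along these lines
(NOR_FLASH_MAX_BYTES is the name Philippe proposes above; MiB comes
from "qemu/units.h"):

    #define NOR_FLASH_MAX_BYTES (256 * MiB)

    if (size > NOR_FLASH_MAX_BYTES) {
        error_setg(errp, "flash size %" PRIu64 " bytes exceeds the"
                   " maximum of %d MiB", (uint64_t)size, 256);
        return false;
    }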

The comment next to the definition of property "width" in pflash_cfi01.c
suggests the device model can emulate a bunch of flash chips wired
together:

    /* width here is the overall width of this QEMU device in bytes.
     * The QEMU device may be emulating a number of flash devices
     * wired up in parallel; the width of each individual flash
     * device should be specified via device-width. If the individual
     * devices have a maximum width which is greater than the width
     * they are being used for, this maximum width should be set via
     * max-device-width (which otherwise defaults to device-width).
     * So for instance a 32-bit wide QEMU flash device made from four
     * 16-bit flash devices used in 8-bit wide mode would be configured
     * with width = 4, device-width = 1, max-device-width = 2.
     *
     * If device-width is not specified we default to backwards
     * compatible behaviour which is a bad emulation of two
     * 16 bit devices making up a 32 bit wide QEMU device. This
     * is deprecated for new uses of this device.
     */
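
As a concrete reading of that example, board code could wire up such a
device roughly like this (a sketch; the property names come from the
comment above, but the exact qdev helper calls may differ by tree):

    DeviceState *dev = qdev_create(NULL, TYPE_PFLASH_CFI01);
    /* 32-bit wide QEMU device built from four 16-bit chips
     * used in 8-bit wide mode, per the example above: */
    qdev_prop_set_uint8(dev, "width", 4);
    qdev_prop_set_uint8(dev, "device-width", 1);
    qdev_prop_set_uint8(dev, "max-device-width", 2);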

> We could also check the widest chip-select range addressable by all
> the supported architectures, but I don't think it's worth it.
>
>> 
>>>     ret = blk_pread(blk, 0, buf, size);
>
> OK, this function is named blk_check_size_and_read_all, and here we
> do read_all.  When refactoring this device, we should be able to read
> at most sizeof(the biggest sector) at a time.
>
> But this implies some serious work.

Here's another thing to consider: shadowing in RAM.  Attractive when you
have the RAM, since it's faster than (parallel) flash.  If you shadow
anyway, you might as well use serial flash, and throw in compression.
Now, emulated pflash can execute code just as fast as emulated RAM.  The
question is what kind of hardware the firmware expects.  Even if it
supports a variety of hardware, it may still have preferences.

>>>     if (ret < 0) {
>>>         error_setg_errno(errp, -ret, "can't read block backend '%s'",
>>>                          blk_name(blk));
>>>         return false;
>>>     }
>>>     return true;
>>> }
>> 
>> Kevin
>> 
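
For context, a flash device's realize() would presumably call the
helper above roughly like this (a sketch; the pfl-> field names are
hypothetical):

    /* After allocating/mapping the @storage buffer of @total_len bytes: */
    if (!blk_check_size_and_read_all(pfl->blk, pfl->storage,
                                     pfl->total_len, errp)) {
        return;
    }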


