From: Eric Blake
Subject: Re: [Qemu-devel] [PATCH] block: unify blocksize types
Date: Fri, 9 Feb 2018 14:38:52 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.5.2
On 02/09/2018 09:11 AM, Kevin Wolf wrote:
> > Yes, I do, thanks - I'll prepare patch v2 today. Also, I haven't found
> > any hidden dependencies on blocksize being <= 32768, so I assume changing
> > the new max value to 2^31 is safe. Could somebody more familiar with qemu
> > code confirm (or invalidate) my assumption?
>
> The IDE code has the following line in the emulation of the IDENTIFY
> DEVICE command:
>
>     put_le16(p + 106, 0x6000 | get_physical_block_exp(&dev->conf));
>
> That is, the result of get_physical_block_exp() is blindly ORed into the
> word. The IDE spec says that bits 0-3 contain the exponent; four bits
> mean a maximum of 15 (and 2^15 = 32768). Beyond that, we don't actually
> expose a larger block size, but start modifying reserved bits. I haven't
> checked other device models, but I wouldn't rule out that they make
> similar assumptions.
NBD documents that:

    The minimum block size represents the smallest addressable length and
    alignment within the export, although writing to an area that small may
    require the server to use a less-efficient read-modify-write action. If
    advertised, this value MUST be a power of 2, MUST NOT be larger than
    2^16 (65,536), and MAY be as small as 1 for an export backed by a
    regular file, although the values of 2^9 (512) or 2^12 (4,096) are more
    typical for an export backed by a block device. If a server advertises
    a minimum block size, the advertised export size SHOULD be an integer
    multiple of that block size, since otherwise, the client would be
    unable to access the final few bytes of the export.
We probably need to get that enlarged to a bigger minimum block size, if there really are devices that require more than a 64k minimum access size. But you do NOT want 2^31 as a permitted block size; that implies that any action smaller than the block size is performed as a read-modify-write, and reading 2G just to modify a subset and then writing back 2G is painfully slow. 1M might be a much more reasonable maximum block size (if 64k is indeed too small in practice for existing hardware).
--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org