Re: [Qemu-devel] [PATCH v3 3/5] qcow2: Introduce an option for sufficient L2 cache for the entire image


From: Leonid Bloch
Subject: Re: [Qemu-devel] [PATCH v3 3/5] qcow2: Introduce an option for sufficient L2 cache for the entire image
Date: Thu, 26 Jul 2018 17:50:15 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.9.1

On 07/26/2018 05:42 PM, Kevin Wolf wrote:
> On 26.07.2018 at 14:24, Leonid Bloch wrote:
>>>> You mean with QDict? I'll look into that now. But I already sent v5 before
>>>> reading this email.
>>>
>>> Yes, with reading it from the QDict. (Or whatever the simplest way is
>>> that results in the right external interface, but I suppose this is the
>>> one.)

>> Well, there is a problem with that: I can easily isolate l2-cache-size
>> from the QDict, check whether it is "full", and if it is, do whatever is
>> needed and delete the option before parsing. But what if it is "foo"? It
>> will not get deleted, and the regular QEMU_OPT_SIZE parsing error will
>> appear, stating that l2-cache-size "expects a non-negative number..." -
>> with no mention that it can accept "full" as well. Now, one could try to
>> modify local_err->msg for this particular option, but that would require
>> substantial additional logic. Considering this, I think it would be
>> easier to stick with a dedicated option, l2-cache-full.
>>
>> Do you think there is a smarter way to parse the l2-cache-size option, so
>> that it would accept both a size and "full", while still handling errors
>> correctly? A single option seems more elegant externally, but the internal
>> handling will be simpler with two mutually exclusive options.

> I think we can live with the suboptimal error message for a while. Once
> qcow2 is QAPIfied, it should become easy to improve it. Let's not choose
> a worse design (that stays forever) for a temporarily better error
> message.

OK. I'll add a TODO then.
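
For reference, a rough standalone sketch of the single-option control flow
discussed above, special-casing "full" before the generic size parsing runs,
could look like the following. In the real driver the raw value would be
peeked from the QDict (e.g. with qdict_get_try_str()) and dropped with
qdict_del() before the QEMU_OPT_SIZE parsing sees it; here plain strings and
strtoull() stand in for that machinery, the helper names are made up for
illustration, and the "full" size uses the disk_size * 8 / cluster_size
relation (one 8-byte L2 entry per cluster). This is only a sketch, not the
actual patch code.

#include <errno.h>
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define L2_ENTRY_SIZE 8 /* bytes per L2 table entry */

/* L2 cache size that covers the whole image: disk_size * 8 / cluster_size */
static uint64_t l2_cache_full_size(uint64_t disk_size, uint64_t cluster_size)
{
    return (disk_size / cluster_size) * L2_ENTRY_SIZE;
}

/*
 * Parse the value of a hypothetical "l2-cache-size" option that accepts
 * either a plain byte count or the keyword "full".  Returns false and sets
 * *err_msg on failure.
 */
static bool parse_l2_cache_size(const char *value, uint64_t disk_size,
                                uint64_t cluster_size, uint64_t *result,
                                const char **err_msg)
{
    char *end;
    unsigned long long n;

    if (!strcmp(value, "full")) {
        /* Handled before the generic size parser ever sees the value */
        *result = l2_cache_full_size(disk_size, cluster_size);
        return true;
    }

    errno = 0;
    n = strtoull(value, &end, 0);
    if (errno != 0 || end == value || *end != '\0') {
        /*
         * This is the case discussed above: a generic size parser only
         * knows about numbers, so its message would not mention that
         * "full" is accepted as well.
         */
        *err_msg = "l2-cache-size expects a non-negative number or 'full'";
        return false;
    }
    *result = n;
    return true;
}

int main(void)
{
    const char *err = NULL;
    uint64_t size;

    /* 100 GiB image, 64 KiB clusters -> 12.5 MiB of L2 cache for "full" */
    if (parse_l2_cache_size("full", 100ULL << 30, 64 * 1024, &size, &err)) {
        printf("l2-cache-size=full -> %" PRIu64 " bytes\n", size);
    }

    if (!parse_l2_cache_size("foo", 100ULL << 30, 64 * 1024, &size, &err)) {
        printf("error: %s\n", err);
    }
    return 0;
}

With the single-option design the external interface would be spelled along
the lines of "-drive file=test.qcow2,l2-cache-size=full", while the dedicated
option mentioned above would be something like "l2-cache-full=on"; both
spellings here are only meant to illustrate the two designs being weighed.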


>> By the way, the L2 cache resizes now on image resize. Will send the changes
>> in v6. Thanks for the suggestion!

> Sounds good!
>
> Kevin