
From: Peter Lieven
Subject: Re: [Qemu-devel] [PATCH 4/4] qemu-img: conditionally discard target on convert
Date: Thu, 18 Jul 2013 16:32:39 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130623 Thunderbird/17.0.7

On 18.07.2013 16:20, Paolo Bonzini wrote:
> On 18/07/2013 16:09, Peter Lieven wrote:
>> On 18.07.2013 15:52, Paolo Bonzini wrote:
>>> On 18/07/2013 15:29, Peter Lieven wrote:
>>>> If the driver had a better method of writing zeroes than discard,
>>>> it simply should not set bdi->write_zeroes_w_discard = 1.
>>> If the driver had a better method of writing zeroes than discard, it
>>> should simply ignore the BDRV_MAY_UNMAP (or BDRV_MAY_DISCARD) flag in
>>> its bdrv_write_zeroes implementation.
>> Ok, but this would require an individual patch in every driver, wouldn't
>> it? I am ok with that.
> Yes (making the drivers return the flag in the BDI would also require
> per-driver patches).
>> We still might need a hint for qemu-img convert that the driver does
>> its zero writing via unmap, because using write_zeroes in the main loop
>> might result in unaligned requests that the target is not able to unmap.
>> And to avoid writing several blocks twice (first writing all zeroes to
>> the target and then writing all data blocks again), I would need to
>> keep the loop at the beginning of qemu-img convert that writes zeroes
>> with correct alignment and granularity if the driver supports
>> write_zeroes_w_discard.
> (Mis)alignment and granularity can be handled later.  We can ignore them
> for now.  Later, if we decide the best way to support them is a flag,
> we'll add it.  Let's not put the cart before the horse.
>
> BTW, I expect alignment != 0 to be really, really rare.
To explain my concerns:

I know that my target has an internal page size of 15 MB. I will check what
happens if I deallocate this 15 MB in chunks of, let's say, 1 MB. If the
page gets deallocated after the last chunk is unmapped, it would be fine :-)
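The behaviour Peter wants to verify can be modelled as below: a target page that can only be freed once every chunk inside it has been unmapped. The 15-chunk layout mirrors his 15 MB page / 1 MB chunk example; everything else here is a toy model, not real storage-target behaviour.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: a 15 MB internal page made of 15 x 1 MB chunks. */
#define CHUNKS_PER_PAGE 15

typedef struct {
    bool unmapped[CHUNKS_PER_PAGE];
} page_t;

/* Unmap one chunk of the page; report whether the whole page can now
 * be deallocated (i.e. every chunk has been unmapped). */
static bool unmap_chunk(page_t *p, int chunk)
{
    p->unmapped[chunk] = true;
    for (int i = 0; i < CHUNKS_PER_PAGE; i++) {
        if (!p->unmapped[i]) {
            return false;   /* page still partially mapped */
        }
    }
    return true;            /* last chunk gone: page can be freed */
}
```

In this model the page is only freed after the final chunk is unmapped, which is the benign outcome; the test Peter proposes is exactly checking whether the real target behaves this way or instead keeps the page allocated forever.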

