From: Avi Kivity
Subject: Re: [Qemu-devel] QEMU interfaces for image streaming and post-copy block migration
Date: Sun, 12 Sep 2010 18:45:08 +0200
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.9) Gecko/20100907 Fedora/3.1.3-1.fc13 Lightning/1.0b3pre Thunderbird/3.1.3
On 09/12/2010 05:23 PM, Anthony Liguori wrote:
> On 09/12/2010 08:40 AM, Avi Kivity wrote:
>> Why would it serialize all I/O operations? It's just like another vcpu issuing reads.
>
> Because the block layer isn't re-entrant.
A threaded block layer is reentrant. Of course pushing the thing into a thread requires that.
>> What you basically do is:
>>
>> stream_step_three():
>>     complete()
>>
>> stream_step_two(offset, length):
>>     bdrv_aio_readv(offset, length, buffer, stream_step_three)
>>
>> bdrv_aio_stream():
>>     bdrv_aio_find_free_cluster(stream_step_two)
>>
>> Isn't there a write() missing somewhere?
>
> Streaming relies on copy-on-read to do the writing.
Ah. You can avoid the copy-on-read implementation in the block format driver and do it completely in generic code.
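Roughly, something like this (a sketch only; the types and image_* helpers below are made-up stand-ins for the real block layer API, not actual QEMU code). The point is just that copy-on-read can live in the generic read completion path: when data for a region that isn't allocated in the top image comes back from the backing file, write it into the top image at the same logical offset before completing the guest's read.

    /* Illustrative sketch -- simplified stand-ins, not the real block layer API. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <stdbool.h>

    typedef struct BlockImage BlockImage;
    typedef void CompletionFunc(void *opaque, int ret);

    /* Stand-ins for generic block layer services. */
    bool image_is_allocated(BlockImage *img, int64_t offset, int64_t len);
    void image_aio_write(BlockImage *img, int64_t offset, const void *buf,
                         int64_t len, CompletionFunc *cb, void *opaque);

    struct CorState {
        BlockImage *img;
        int64_t offset, len;
        void *buf;                 /* buffer the read was done into */
        CompletionFunc *guest_cb;  /* original completion for the guest read */
        void *guest_opaque;
    };

    /* The copy-on-read write finished; now complete the guest's read. */
    static void cor_write_done(void *opaque, int ret)
    {
        struct CorState *s = opaque;
        s->guest_cb(s->guest_opaque, ret);
        free(s);
    }

    /* Hooked into the generic read path: called when the read (possibly
     * satisfied from the backing file) completes. */
    static void generic_read_done(void *opaque, int ret)
    {
        struct CorState *s = opaque;

        if (ret < 0 || image_is_allocated(s->img, s->offset, s->len)) {
            /* Error, or already allocated in the top image: nothing to copy. */
            s->guest_cb(s->guest_opaque, ret);
            free(s);
            return;
        }
        /* Copy-on-read: persist the data at the same logical offset. */
        image_aio_write(s->img, s->offset, s->buf, s->len, cor_write_done, s);
    }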
> And that's exactly what the current code looks like. The only change to the patch that this does is make some of qed's internals be block layer interfaces.
>
>> Why do you need find_free_cluster()? That's a physical offset thing. Just write to the same logical offset.
>>
>> IOW:
>>
>> bdrv_aio_stream():
>>     bdrv_aio_read(offset, stream_2)
>
> It's an optimization. If you've got a fully missing L1 entry, then you're going to memset() 2GB worth of zeros. That's just wasted work. With a 1TB image with a 1GB allocation, it's a huge amount of wasted work.
Ok. And it's a logical offset, not physical as I thought, which confused me.
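To spell out what that optimization buys: instead of reading a region and then scanning the buffer for zeros, ask the allocation metadata up front and step over whole unallocated runs without touching the data at all. A tiny sketch, with made-up names (the real interface would be something in the spirit of an is-allocated query against the backing chain):

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct BlockImage BlockImage;

    /* Stand-in for an allocation query over the whole backing chain: returns
     * true if a read at 'offset' would produce real data, false if it would
     * only produce zeros; '*run_len' gets the length of the contiguous run. */
    bool image_chain_has_data(BlockImage *img, int64_t offset, int64_t *run_len);

    /* Find the next offset worth streaming, or -1 if we're done. */
    static int64_t next_offset_to_stream(BlockImage *img, int64_t offset,
                                         int64_t image_len)
    {
        while (offset < image_len) {
            int64_t run_len;
            if (image_chain_has_data(img, offset, &run_len)) {
                return offset;        /* real data here: stream it */
            }
            /* Nothing allocated anywhere in the chain: reading this run would
             * only memset() zeros into the buffer, so skip it entirely. */
            offset += run_len;
        }
        return -1;
    }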
>> stream_2():
>>     if all zeros:
>>         increment offset
>>         if more:
>>             bdrv_aio_stream()
>>     bdrv_aio_write(offset, stream_3)
>>
>> stream_3():
>>     bdrv_aio_write(offset, stream_4)
>
> I don't understand why stream_3() is needed.
This implementation doesn't rely on copy-on-read code in the block format driver. It is generic and uses existing block layer interfaces. It would need copy-on-read support in the generic block layer as well.
>> stream_4():
>>     increment offset
>>     if more:
>>         bdrv_aio_stream()
>>
>> Of course, need to serialize wrt guest writes, which adds a bit more complexity. I'll leave it to you to code the state machine for that.
>
> http://repo.or.cz/w/qemu/aliguori.git/commitdiff/d44ea43be084cc879cd1a33e1a04a105f4cb7637?hp=34ed425e7dd39c511bc247d1ab900e19b8c74a5d
Clever - it pushes all the synchronization into the copy-on-read implementation. But the serialization there hardly jumps out of the code.
Do I understand correctly that you can only have one allocating read or write running?
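For reference, one common way to do that kind of serialization is to track in-flight allocating requests and make anything that overlaps them wait. This is only an illustration of the idea, not the code behind the commit linked above; all names are invented:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct AllocatingReq {
        int64_t offset;
        int64_t len;
        struct AllocatingReq *next;
    };

    /* List of allocating requests currently in flight. */
    static struct AllocatingReq *inflight;

    static bool overlaps(const struct AllocatingReq *r, int64_t offset, int64_t len)
    {
        return offset < r->offset + r->len && r->offset < offset + len;
    }

    /* Returns true if a new request for [offset, offset+len) may start now;
     * false means the caller must queue it until the conflicting request
     * completes and retries the waiters. */
    static bool may_start_allocating_request(int64_t offset, int64_t len)
    {
        for (struct AllocatingReq *r = inflight; r; r = r->next) {
            if (overlaps(r, offset, len)) {
                return false;
            }
        }
        return true;
    }

Whether the real implementation allows several non-overlapping allocating requests in flight, or only one at a time, is exactly the question above.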
>> Parts of it are: commit. Of course, that's horribly synchronous.
>
> If you've got AIO internally, making commit work is pretty easy. Doing asynchronous commit at a generic layer is not easy though unless you expose lots of details.
I don't see why. Commit is a simple loop that copies all clusters. All it needs to know is if a cluster is allocated or not.
When commit is running you need additional serialization against guest writes, and you need to direct guest reads and writes that fall in the already-committed region to the backing file instead of the temporary image. But the block layer already knows about all guest writes.
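In other words, the loop itself is trivial. A minimal sketch of it (synchronous for clarity, and ignoring the serialization against guest writes described above; the image_* helpers are illustrative stand-ins, not the real API):

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct BlockImage BlockImage;

    /* Stand-ins: allocation query plus plain read/write. */
    bool    image_is_allocated(BlockImage *img, int64_t offset, int64_t *run_len);
    int64_t image_length(BlockImage *img);
    int     image_read(BlockImage *img, int64_t offset, void *buf, int64_t len);
    int     image_write(BlockImage *img, int64_t offset, const void *buf, int64_t len);

    static int generic_commit(BlockImage *top, BlockImage *backing,
                              void *buf, int64_t buf_len)
    {
        int64_t end = image_length(top);

        for (int64_t offset = 0; offset < end; ) {
            int64_t run_len;

            if (image_is_allocated(top, offset, &run_len)) {
                /* Copy the allocated run into the backing file, buf_len
                 * bytes at a time. */
                for (int64_t done = 0; done < run_len; ) {
                    int64_t n = run_len - done;
                    if (n > buf_len) {
                        n = buf_len;
                    }
                    int ret = image_read(top, offset + done, buf, n);
                    if (ret < 0) {
                        return ret;
                    }
                    ret = image_write(backing, offset + done, buf, n);
                    if (ret < 0) {
                        return ret;
                    }
                    done += n;
                }
            }
            /* Unallocated runs are skipped entirely. */
            offset += run_len;
        }
        return 0;
    }

The asynchronous version is the same loop driven by completion callbacks instead of blocking calls.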
> Generally, I think the block layer makes more sense if the interface to the formats is high level and code sharing is achieved not by mandating a world view but rather by making libraries of common functionality. This is more akin to how the FS layer works in Linux.
>
> So IMHO, we ought to add a bdrv_aio_commit function, turn the current code into a generic_aio_commit, implement a qed_aio_commit, then somehow do qcow2_aio_commit, and look at what we can refactor into common code.
What Linux does is have an equivalent of bdrv_generic_aio_commit() which most implementations call (or default to), and they only provide their own code if they want something special. Something like commit (or copy-on-read, or copy-on-write, or streaming) can be implemented 100% in terms of the generic functions (and indeed qcow2 backing files can be any format).
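As a sketch of that "default to the generic helper" pattern, in the spirit of Linux's generic_file_* helpers (the structures and names below are illustrative, not the actual BlockDriver definition):

    typedef struct BlockImage BlockImage;
    typedef void CommitCompletionFunc(void *opaque, int ret);

    typedef struct ImageFormat {
        const char *name;
        /* Optional: a format that can do something smarter provides this;
         * everyone else leaves it NULL and gets the generic version. */
        void (*aio_commit)(BlockImage *img, CommitCompletionFunc *cb, void *opaque);
    } ImageFormat;

    /* Generic commit implemented purely in terms of generic block functions
     * (allocation query + read + write), so it works for any format. */
    void generic_aio_commit(BlockImage *img, CommitCompletionFunc *cb, void *opaque);

    const ImageFormat *image_format(BlockImage *img);

    static void image_aio_commit(BlockImage *img, CommitCompletionFunc *cb,
                                 void *opaque)
    {
        const ImageFormat *fmt = image_format(img);

        if (fmt->aio_commit) {
            fmt->aio_commit(img, cb, opaque);    /* format-specific override */
        } else {
            generic_aio_commit(img, cb, opaque); /* shared default */
        }
    }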
--
error compiling committee.c: too many arguments to function