Re: [Qemu-devel] [RFC] qed: Add QEMU Enhanced Disk format


From: Anthony Liguori
Subject: Re: [Qemu-devel] [RFC] qed: Add QEMU Enhanced Disk format
Date: Sun, 12 Sep 2010 10:13:24 -0500
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.12) Gecko/20100826 Lightning/1.0b1 Thunderbird/3.0.7

On 09/12/2010 08:24 AM, Avi Kivity wrote:
Not atexit, just when we close the image.

Just a detail, but we need an atexit() handler to make sure block devices get closed because we have too many exit()s in the code today.


Right.

So when you click the 'X' on the qemu window, we get to wait a few seconds for it to actually disappear because it's flushing metadata to disk...

I've started something and will post it soon.

Excellent, thank you.

When considering development time, also consider the time it will take users to actually use qed (6 months for qemu release users, ~9 months on average for semiannual community distro releases, 12-18 months for enterprise distros). Consider also that we still have to support qcow2 since people do use the extra features, and since I don't see us forcing them to migrate.

I'm of the opinion that qcow2 is unfit for the type of production environments I care about. The amount of change needed to make qcow2 fit for production use puts it on at least the same timeline as you cite above.

Yes, there are people today for whom qcow2 is appropriate, and by the same token, it will continue to be appropriate for them in the future.

In my view, we don't have an image format fit for production use. You're arguing we should make qcow2 fit for production use, whereas I am arguing we should start from scratch. My reasoning for starting from scratch is that it simplifies the problem; your reasoning for improving qcow2 is that it simplifies the transition for non-production users of qcow2.

We have an existence proof that we can achieve good data integrity and good performance by simplifying the problem. The remaining burden is establishing that qcow2 can be improved with a reasonable amount of effort.

NB, you could use qcow2 today if you had all of the data integrity fixes or didn't care about data integrity in the event of power failure or didn't care about performance. I don't have any customers that fit that bill so from my perspective, qcow2 isn't production fit. That doesn't mean that it's not fit for someone else's production use.


I realize it's somewhat subjective though.

While qed looks like a good start, it has at least three flaws already (relying on physical image size, relying on fsck, and limited logical image size). Just fixing those will introduce complication. What about new features or newly discovered flaws?

Let's quantify fsck. My suspicion is that storage large enough to hold 1TB disk images is also fast enough that an fsck won't be so bad.

Keep in mind, we don't have to completely pause the guest while fsck'ing. We simply have to prevent cluster allocations. We can allow reads and we can allow writes to allocated clusters.

Consequently, if you had a 1TB disk image, it's extremely likely that the vast majority of I/O goes to already-allocated clusters, which means that fsck() is entirely a background task. The worst case scenario is actually a half-allocated disk.

But since you have to boot before you can run any serious workload, if an fsck() takes 5 seconds, it's highly likely that it's not even noticeable.
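
To make that concrete, here's a minimal sketch of the gating idea (hypothetical names, not QED's actual code): during a background fsck, only new cluster allocations stall; reads and writes to already-allocated clusters proceed.

    # Hypothetical sketch, not QED code: gate only allocations during
    # a background fsck; I/O to allocated clusters never blocks.
    import threading

    class AllocationGate:
        def __init__(self):
            self._no_fsck = threading.Event()
            self._no_fsck.set()              # no fsck running initially

        def begin_fsck(self):
            self._no_fsck.clear()            # new allocations now stall

        def end_fsck(self):
            self._no_fsck.set()              # wake any queued allocations

        def allocate_cluster(self, image):   # 'image' is a stand-in object
            self._no_fsck.wait()             # the only blocking point
            offset = image.file_size         # qed allocates by appending
            image.file_size += image.cluster_size
            return offset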

Maybe I'm broken with respect to how I think, but I find state machines very easy to rationalize.

Your father's state machine. Not as clumsy or random as a thread; an elegant weapon for a more civilized age.

I find your lack of faith in QED disturbing.

To me, the biggest burden in qcow2 is thinking through how you deal with shared resources. Because you can block for a long period of time during write operations, it's not enough to just carry a mutex during all metadata operations. You have to stage operations and commit them at very specific points in time.

The standard way of dealing with this is to have a hash table for metadata that contains a local mutex:

    from collections import defaultdict

    # L2 is assumed to be a table object carrying a mutex, a position,
    # valid/dirty flags, and read()/write() methods.
    l2cache = defaultdict(L2)

    def get_l2(pos):
        l2 = l2cache[pos]       # creates an empty entry on first access
        l2.mutex.lock()
        if not l2.valid:        # first use: populate from disk under the lock
            l2.pos = pos
            l2.read()
            l2.valid = True
        return l2

    def put_l2(l2):
        if l2.dirty:            # write back before releasing the lock
            l2.write()
            l2.dirty = False
        l2.mutex.unlock()

You're missing how you create entries.  That means you've got to do:

    def put_l2(l2):
        if l2.committed:
            if l2.dirty:
                l2.write()
                l2.dirty = False
            l2.mutex.unlock()
        else:
            # A freshly created entry: lock it and insert it into the cache.
            l2.mutex.lock()
            l2cache[l2.pos] = l2
            l2.mutex.unlock()

And this really illustrates my point: it's a harder problem than it seems. You're also keeping L2 reads from occurring while flushing a dirty L2 entry, which is less parallel than what qed achieves today.

This is part of why I prefer state machines. Acquiring a mutex is too easy, and that makes it easy not to think through everything that could be running concurrently. When you are explicit about where you allow concurrency, I think it's easier to be more aggressive.
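
As an illustration of what "explicit" means here, a minimal sketch in the callback style (aio_read/aio_write are hypothetical stand-ins, not QEMU's real API): the only places other requests can interleave are the completion boundaries you can see.

    # Hypothetical sketch of the state-machine style. Other requests
    # can only interleave at the visible callback boundaries.
    class L2UpdateOp:
        def __init__(self, bdrv, pos, update_fn, done_cb):
            self.bdrv = bdrv                # block driver state (assumed)
            self.pos = pos
            self.update_fn = update_fn      # mutates the table in memory
            self.done_cb = done_cb

        def start(self):
            # State 1: read in flight; other operations may run here.
            self.bdrv.aio_read(self.pos, cb=self.on_read)

        def on_read(self, table):
            # State 2: nothing else has touched this op since the callback.
            self.update_fn(table)
            self.bdrv.aio_write(self.pos, table, cb=self.on_write)

        def on_write(self, result):
            # State 3: the update is durable; hand the result back.
            self.done_cb(result)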

It's a personal preference really. You can find just as many folks on the intertubes that claim Threads are Evil as claim State Machines are Evil.

The only reason we're discussing this is that you've claimed QEMU's state machine model is the biggest inhibitor, and I think that's oversimplifying things. It's like saying QEMU's biggest problem is that too many of its developers use vi versus emacs. You may personally believe that vi is entirely superior to emacs, but by the same token, you should be able to recognize that some people are able to be productive with emacs.

If someone wants to rewrite qcow2 to be threaded, I'm all for it. I don't think it's really any simpler than making it a state machine. I find it hard to believe you think there's an order of magnitude difference in development work too.

It's far easier to just avoid internal snapshots altogether and this is exactly the thought process that led to QED. Once you drop support for internal snapshots, you can dramatically simplify.

The amount of metadata is O(nb_L2 * nb_snapshots). For qed, nb_snapshots = 1 but nb_L2 can still be quite large. If fsck is too long for one, it is too long for the other.

nb_L2 is very small. It's exactly n / 2GB + 1 where n is image size. Since image size is typically < 100GB, practically speaking it's less than 50.

OTOH, nb_snapshots in qcow2 can be very large. In fact, it's not unrealistic for nb_snapshots to be >> 50. What that means is that instead of metadata being O(n) as it is today, it's at least O(n^2).
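
Roughly, using the 2GB-per-L2 figure above (the snapshot count is an assumed illustration, not a measurement):

    # Back-of-the-envelope for the scaling claim (illustrative numbers).
    GB = 1024 ** 3
    image_size = 100 * GB
    l2_coverage = 2 * GB                      # one L2 table maps 2GB

    nb_l2 = image_size // l2_coverage + 1     # 51 for a 100GB image

    # qed: a single active table set.
    qed_l2_tables = nb_l2 * 1                 # 51

    # qcow2 with many internal snapshots: each snapshot can carry its
    # own L2 tables, so metadata grows with both factors.
    nb_snapshots = 100
    qcow2_l2_tables = nb_l2 * nb_snapshots    # 5100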

Doing internal snapshots right is far more complicated than what qcow2 does today.

How long does fsck take?

We'll find out soon. But remember, fsck() only blocks pending metadata writes, so the cost isn't entirely paid up-front.

Not doing qed-on-lvm is definitely a limitation. The one use case I've heard is qcow2 on top of clustered LVM as clustered LVM is simpler than a clustered filesystem. I don't know the space well enough so I need to think more about it.

I don't either. If this use case survives, and if qed isn't changed to accommodate it, it means that's another place where qed can't supplant qcow2.

I'm okay with that. An image file should require a file system. If I was going to design an image file to be used on top of raw storage, I would take an entirely different approach.

Refcount table. See the discussion above for my thoughts on the refcount table.

Ok. It boils down to "is fsck on startup acceptable". Without a freelist, you need fsck for both unclean shutdown and for UNMAP.

To rebuild the free list on unclean shutdown.

5) No support for qed-on-lvm

6) limited image resize

Not any more than qcow2, FWIW.

Again, with the default create parameters, we can resize up to 64TB without rewriting metadata. I wouldn't call that limited image resize.

I guess 64TB should last a bit. And if you relax the L1 size to be any number of clusters (or have three levels) you're unlimited.

btw, 256KB L2s are too large IMO. Reading them will slow down your random read throughput. Even 64K is a bit large, but there's no point making them smaller than a cluster.

These are just the defaults, and honestly, adding another level would be pretty trivial.
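
For reference, a quick sketch of how the defaults discussed here yield those numbers (assuming 64KB clusters, 256KB tables, and 8-byte table entries; these sizes are my reading of the defaults in this thread, not taken from the spec text):

    # How the default geometry gives 2GB per L2 table and a 64TB cap
    # (assumed sizes: 64KB clusters, 256KB tables, 8-byte entries).
    KB, GB, TB = 1024, 1024 ** 3, 1024 ** 4

    cluster_size = 64 * KB
    table_size = 256 * KB
    entries = table_size // 8             # 32768 entries per table

    l2_coverage = entries * cluster_size  # 2GB of data per L2 table
    l1_coverage = entries * l2_coverage   # 64TB maximum image size

    assert l2_coverage == 2 * GB
    assert l1_coverage == 64 * TB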

(an aside: with cache!=none we're bouncing in the kernel as well; we really need to make it work for cache=none, perhaps use O_DIRECT for data and writeback for metadata and shared backing images).

QED achieves zero-copy with cache=none today. In fact, our performance testing that we'll publish RSN is exclusively with cache=none.

Yes, you'll want to have that regardless. But adding new things to qcow2 has all the problems of introducing a new image format.

Just some of them. On mount, rewrite the image format as qcow3. On clean shutdown, write it back to qcow2. So now there's no risk of data corruption (but there is reduced usability).
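
Concretely, the flip might look like this (hypothetical helper names; a sketch of the scheme, not qcow2 code). Note that a crash before the clean-shutdown writeback leaves the header marked qcow3:

    # Hypothetical sketch of the mount-time version flip described above.
    def open_image(img):
        hdr = img.read_header()
        if hdr.version == 2:
            hdr.version = 3          # mark as qcow3 while in use
            img.write_header(hdr)
            img.fsync()

    def close_image(img):
        img.flush_all_metadata()     # the image is clean now
        hdr = img.read_header()
        hdr.version = 2              # readable by old tools again
        img.write_header(hdr)
        img.fsync()                  # crash before this: stays qcow3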

It means on unclean shutdown, you can't move images to older versions. That means a management tool can't rely on the mobility of images which means it's a new format for all practical purposes.

QED started its life as qcow3. You start with qcow3, remove the features that are poorly thought out and make correctness hard, add some future-proofing, and you're left with QED.

We're fully backwards compatible with qcow2 (by virtue that qcow2 is still in tree) but new images require new versions of QEMU. That said, we have a conversion tool to convert new images to the old format if mobility is truly required.

So it's the same story that you're telling above from an end-user perspective.

They are once you copy the image. And power loss is the same thing as unexpected exit, because you're not simply talking about delaying a sync, you're talking about staging future I/O operations purely within QEMU.

qed is susceptible to the same problem. If you have a 100MB write and qemu exits before it updates L2s, then those 100MB are leaked. You could alleviate the problem by writing L2 at intermediate points, but even then, a power loss can leak those 100MB.

qed trades off the freelist for the file size (anything beyond the file size is free), it doesn't eliminate it completely. So you still have some of its problems, but you don't get its benefits.

I think you've just established that qcow2 and qed both require an fsck. I don't disagree :-)

There's a difference between a background scrubber and a foreground fsck.

The difference between qcow2 and qed is that qed relies on the file size and qcow2 uses a bitmap.

The bitmap grows synchronously whereas in qed, we're not relying on synchronous file growth. If we did, there would be no need for an fsck.

If you attempt to grow the refcount table in qcow2 without doing a sync(), then you're still going to need an fsync to avoid corruption.

qcow2 doesn't have an advantage; it's just not trying to be as sophisticated as qed is.
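
To spell out the tradeoff being argued here (a hypothetical sketch of both strategies, not either implementation's real logic):

    # qed: the end of the file is the implicit free list. Allocation is
    # an in-memory append with no synchronous metadata update; a crash
    # before the L2 update leaks clusters, which fsck later reclaims.
    def qed_alloc(image):
        offset = image.file_size
        image.file_size += image.cluster_size
        return offset

    # qcow2: find a free cluster via the refcount table and persist the
    # refcount update synchronously; no fsck needed, at the cost of a
    # sync in the allocation path.
    def qcow2_alloc(image):
        offset = image.refcounts.find_free()
        image.refcounts.increment(offset)
        image.sync_refcounts()
        return offset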

Regards,

Anthony Liguori


