
From: MORITA Kazutaka
Subject: Re: [Qemu-devel] [Sheepdog] [PATCH] sheepdog: add data preallocation support
Date: Fri, 08 Jul 2011 20:06:23 +0900
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.9 (Gojō) APEL/10.8 Emacs/22.3 (x86_64-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Wed, 06 Jul 2011 09:53:32 +0200,
Kevin Wolf wrote:
> 
> Am 05.07.2011 20:21, schrieb MORITA Kazutaka:
> >>> +
> >>> +    max_idx = (vdi_size + SD_DATA_OBJ_SIZE - 1) / SD_DATA_OBJ_SIZE;
> >>> +
> >>> +    for (idx = 0; idx < max_idx; idx++) {
> >>> +        uint64_t oid;
> >>> +        oid = vid_to_data_oid(vid, idx);
> >>> +
> >>> +        if (inode->data_vdi_id[idx]) {
> >>> +            ret = read_object(fd, buf, vid_to_vdi_oid(inode->data_vdi_id[idx]),
> >>> +                              1, SD_DATA_OBJ_SIZE, 0);
> >>> +            if (ret)
> >>> +                goto out;
> >>
> >> Missing braces.
> >>
> >> Also, what is this if branch doing? Is it to ensure that we don't
> >> overwrite existing data? But then, isn't an image always empty when we
> >> preallocate it?
> > 
> > This branch is for handling a cloned image, which is created with the
> > -b option.  This branch reads data from the backing file (read_object
> > returns zero when it succeeds) instead of filling the buffer with zeroes.
> 
> Oh, I see. You support preallocation even with backing files. And
> suddenly it makes perfect sense. :-)
> 
> (Although after completing preallocation, you won't need the backing
> file any more as all of it has been copied into the image. Maybe we
> should drop the reference then?)

Though we could drop it, Sheepdog uses the reference to show the VM
image relationships in a tree format, as VMware does.  So as far as the
Sheepdog protocol is concerned, I think we should keep it.


Thanks,

Kazutaka


