From: Kevin Wolf
Subject: Re: [Qemu-devel] [PULL 34/54] commit: Support multiple roots above top node
Date: Fri, 15 Mar 2019 12:19:57 +0100
User-agent: Mutt/1.11.3 (2019-02-01)

On 14.03.2019 at 18:06, Peter Maydell wrote:
> On Fri, 6 Oct 2017 at 17:20, Kevin Wolf <address@hidden> wrote:
> >
> > This changes the commit block job to support operation in a graph where
> > there is more than a single active layer that references the top node.
> >
> > This involves inserting the commit filter node not only on the path
> > between the given active node and the top node, but between the top node
> > and all of its parents.
> >
> > On completion, bdrv_drop_intermediate() must consider all parents for
> > updating the backing file link. These parents may be backing files
> > themselves and as such read-only; reopen them temporarily if necessary.
> > Previously this was achieved by the bdrv_reopen() calls in the commit
> > block job that made overlay_bs read-write for the whole duration of the
> > block job, even though write access is only needed on completion.
> >
> > Now that we consider all parents, overlay_bs is meaningless. It is left
> > in place in this commit, but we'll remove it soon.
> >
> > Signed-off-by: Kevin Wolf <address@hidden>
> 
> Hi -- a recent change to the block layer has caused Coverity
> to flag up a possible issue with this older commit: CID 1399710:
> 
> > @@ -3492,42 +3504,42 @@ int bdrv_drop_intermediate(BlockDriverState *active, BlockDriverState *top,
> >          goto exit;
> >      }
> >
> > -    new_top_bs = bdrv_find_overlay(active, top);
> > -
> > -    if (new_top_bs == NULL) {
> > -        /* we could not find the image above 'top', this is an error */
> > -        goto exit;
> > -    }
> > -
> > -    /* special case of new_top_bs->backing->bs already pointing to base - nothing
> > -     * to do, no intermediate images */
> > -    if (backing_bs(new_top_bs) == base) {
> > -        ret = 0;
> > -        goto exit;
> > -    }
> > -
> >      /* Make sure that base is in the backing chain of top */
> >      if (!bdrv_chain_contains(top, base)) {
> >          goto exit;
> >      }
> >
> >      /* success - we can delete the intermediate states, and link top->base */
> > -    if (new_top_bs->backing->role->update_filename) {
> > -        backing_file_str = backing_file_str ? backing_file_str : base->filename;
> > -        ret = new_top_bs->backing->role->update_filename(new_top_bs->backing,
> > -                                                         base, backing_file_str,
> > -                                                         &local_err);
> > -        if (ret < 0) {
> > -            bdrv_set_backing_hd(new_top_bs, top, &error_abort);
> > +    backing_file_str = backing_file_str ? backing_file_str : base->filename;
> > +
> > +    QLIST_FOREACH_SAFE(c, &top->parents, next_parent, next) {
> > +        /* Check whether we are allowed to switch c from top to base */
> > +        GSList *ignore_children = g_slist_prepend(NULL, c);
> > +        bdrv_check_update_perm(base, NULL, c->perm, c->shared_perm,
> > +                               ignore_children, &local_err);
> 
> Here we don't check the return value from bdrv_check_update_perm(),
> which we do in all four other places where we call it. I think this
> is probably a false positive, in that the function will only
> return ret < 0 if it also reports a failure via local_err, but
> it might be worth being consistent just to placate Coverity.

I agree; this shouldn't be a problem in theory. I've sent a patch anyway
to make things more consistent.

Kevin

> > +        if (local_err) {
> > +            ret = -EPERM;
> > +            error_report_err(local_err);
> >              goto exit;
> >          }
> > -    }
> 
> Happy to just mark the issue as a false-positive in the Coverity
> UI if you think that's the best resolution.
> 
> thanks
> -- PMM
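
The invariant Peter describes (that bdrv_check_update_perm() only returns a negative value when it also sets the Error pointer) follows QEMU's general errp convention. A minimal, self-contained sketch of that convention, using hypothetical stand-ins rather than QEMU's real Error API, might look like:

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical miniature of an Error object; QEMU's real type lives in
 * util/error.c and is considerably more elaborate. */
typedef struct Error {
    const char *msg;
} Error;

static void error_setg(Error **errp, const char *msg)
{
    if (errp) {
        Error *err = malloc(sizeof(*err));
        err->msg = msg;
        *errp = err;
    }
}

static void error_report_err(Error *err)
{
    fprintf(stderr, "%s\n", err->msg);
    free(err);
}

/* Stand-in for bdrv_check_update_perm(): per the invariant discussed
 * above, failure means both a negative errno return *and* a set *errp,
 * so checking either one catches the error. */
static int check_update_perm(int allow, Error **errp)
{
    if (!allow) {
        error_setg(errp, "permission update not allowed");
        return -EPERM;
    }
    return 0;
}

/* The consistent pattern at the other call sites: act on the return
 * value rather than inferring failure from local_err alone. */
static int caller(int allow)
{
    Error *local_err = NULL;
    int ret = check_update_perm(allow, &local_err);
    if (ret < 0) {
        error_report_err(local_err);
        return ret;
    }
    return 0;
}
```

Checking ret instead of local_err changes no behaviour under this invariant, but it keeps every call site uniform and gives static analyzers like Coverity an explicit use of the return value.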
