From: Li Qiang
Subject: Re: [Qemu-devel] [PATCH for-2.9?] 9pfs: fix migration_block leak
Date: Fri, 31 Mar 2017 16:01:15 +0800

2017-03-31 15:07 GMT+08:00 Greg Kurz <address@hidden>:

> On Fri, 31 Mar 2017 09:26:35 +0800
> Li Qiang <address@hidden> wrote:
>
> > Hello,
> >
> > 2017-03-30 23:46 GMT+08:00 Greg Kurz <address@hidden>:
> >
> > > On Thu, 30 Mar 2017 08:25:25 -0500
> > > Eric Blake <address@hidden> wrote:
> > >
> > > > On 03/30/2017 07:27 AM, Li Qiang wrote:
> > > > > The guest can leave the pdu->s->migration_blocker exists by attach
> > > >
> > > > s/exists/in place/
> > > > s/attach/attaching/
> > > >
> > >
> >
> > Eric,
> > Thanks for pointing my mistakes!
> >
> >
> > > > > but not remove a fid. Then if we hot unplug the 9pfs device, the
> > > >
> > >
> > > In theory you're right, but the current 9p client in linux won't let
> > > you hot unplug the device unless you unmount the 9p share first,
> > > hence freeing the blocker.
> > >
> > >
> > I think we should consider every possible situation.
> >
> >
> > > > s/remove/removing/
> > > >
> > > > > v9fs_reset() just free the fids, but not free the migration_blocker.
> > > > > This will leak a memory leak. This patch avoid this.
> > >
> > > I had a similar issue sitting in my TODO list for quite some time:
> > > the blocker survives a system_reset. It doesn't cause a memory leak
> > > but it prevents migration until the guest mounts/unmounts the 9p
> > > share again.
> > >
> > > This boils down to virtfs_reset() calling free_fid() instead of
> > > put_fid(), IIRC.
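
(For context: the blocker under discussion is installed on the guest's
first Tattach. Below is a rough sketch of the 2.9-era code in
hw/9pfs/9p.c, paraphrased from memory, so the exact message text and
error handling may differ.)

static void coroutine_fn v9fs_attach(void *opaque)
{
    V9fsPDU *pdu = opaque;
    V9fsState *s = pdu->s;

    /* ... fid allocation and backend attach elided ... */

    /*
     * Disable migration while the export is mounted: delayed fid
     * clunking means there is host-side state the destination
     * could not reconstruct.
     */
    if (!s->migration_blocker) {
        error_setg(&s->migration_blocker,
                   "Migration is disabled when VirtFS export path '%s' "
                   "is mounted in the guest using mount_tag '%s'",
                   s->ctx.fs_root, s->tag);
        migrate_add_blocker(s->migration_blocker, NULL);
    }

    /* ... */
}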
> > >
> > > >
> > > > s/leak a/cause a/
> > > > s/avoid/avoids/
> > > >
> > > > >
> > > > > Signed-off-by: Li Qiang <address@hidden>
> > > > > ---
> > > > >  hw/9pfs/9p.c | 6 ++++++
> > > > >  1 file changed, 6 insertions(+)
> > > >
> > > > Probably worth including in 2.9 as a bug fix.
> > > >
> > > > Reviewed-by: Eric Blake <address@hidden>
> > > >
> > > > >
> > > > > diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
> > > > > index 48babce..b55c02d 100644
> > > > > --- a/hw/9pfs/9p.c
> > > > > +++ b/hw/9pfs/9p.c
> > > > > @@ -548,6 +548,12 @@ static void coroutine_fn virtfs_reset(V9fsPDU *pdu)
> > > > >              free_fid(pdu, fidp);
> > > > >          }
> > > > >      }
> > > > > +
> > > > > +    if (pdu->s->migration_blocker) {
> > > > > +        migrate_del_blocker(pdu->s->migration_blocker);
> > > > > +        error_free(pdu->s->migration_blocker);
> > > > > +        pdu->s->migration_blocker = NULL;
> > > > > +    }
> > >
> > > I'd prefer to drain all PDUs in virtfs_reset() and have the loop
> > > above call put_fid() instead of free_fid(). If this isn't doable
> > > for 2.9, I'll apply this patch with a comment.
> > >
> > >
> > Yes, I have considered using put_fid() to fix this. But I'm not sure
> > 'fidp->ref' is at most 1 in the virtfs_reset() function (I think it is).
> >
>
> And indeed, if the fid is involved in an I/O then its ref will be != 0.
>
> > IIUC, we could omit the 'else' branch and call put_fid() directly,
> > like this.
> >
>
> This won't work: we must ensure that fidp->ref reaches zero (ie, drain
> all PDUs), then we can fidp->ref++ (ie, get a ref on the fid) and call
> put_fid(), which will drop the last ref of the fid and clear the blocker
> if this is the root fid.
>
>
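For reference, the put_fid() logic being described above looks roughly
like this (a paraphrased sketch of the 2.9-era hw/9pfs/9p.c, details may
differ): dropping the last reference of a clunked fid frees it, and if
that fid happens to be the root fid, the migration blocker is cleared as
well.

static void coroutine_fn put_fid(V9fsPDU *pdu, V9fsFidState *fidp)
{
    BUG_ON(!fidp->ref);
    fidp->ref--;
    if (!fidp->ref && fidp->clunked) {
        if (fidp->fid == pdu->s->root_fid) {
            /*
             * A clunked root fid means the client has unmounted the
             * file system, so the migration blocker can go away.
             */
            migrate_del_blocker(pdu->s->migration_blocker);
            error_free(pdu->s->migration_blocker);
            pdu->s->migration_blocker = NULL;
        }
        free_fid(pdu, fidp);
    }
}
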
Right, but how can we ensure we have drained all PDUs? Any idea?

Thanks.
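
One conceivable answer (a sketch only, not tested): V9fsState keeps
in-flight requests on s->active_list, so the reset path could poll the
event loop until that list is empty before clunking the fids. The helper
name v9fs_drain_pdus below is made up for illustration.

/* Hypothetical helper: block until no PDU is in flight anymore.
 * Must be called outside coroutine context, e.g. from the device
 * reset handler, since it spins on aio_poll().
 */
static void v9fs_drain_pdus(V9fsState *s)
{
    while (!QLIST_EMPTY(&s->active_list)) {
        aio_poll(qemu_get_aio_context(), true);
    }
    /*
     * Every fidp->ref held by an in-flight PDU has been dropped at
     * this point, so virtfs_reset() can take its own ref on each fid
     * and put_fid() it, clearing the blocker for the root fid.
     */
}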



> > diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
> > index 48babce..ae97e79 100644
> > --- a/hw/9pfs/9p.c
> > +++ b/hw/9pfs/9p.c
> > @@ -544,9 +544,8 @@ static void coroutine_fn virtfs_reset(V9fsPDU *pdu)
> >
> >          if (fidp->ref) {
> >              fidp->clunked = 1;
> > -        } else {
> > -            free_fid(pdu, fidp);
> >          }
> > +        put_fid(pdu, fidp);
> >      }
> >  }
> >
> >
> > If you agree, I will send a formal patch.
> >
> >
> >
> > > > >  }
> > > > >
> > > > >  #define P9_QID_TYPE_DIR         0x80
> > > > >
> > > >
> > >
> > >
>
>

