Re: [Qemu-devel] [PATCH v2] file-posix: Skip effectiveless OFD lock operations


From: Fam Zheng
Subject: Re: [Qemu-devel] [PATCH v2] file-posix: Skip effectiveless OFD lock operations
Date: Tue, 14 Aug 2018 16:12:52 +0800
User-agent: Mutt/1.10.0 (2018-05-17)

On Mon, 08/13 15:42, Kevin Wolf wrote:
> On 13.08.2018 at 04:39, Fam Zheng wrote:
> > If we know we've already locked the bytes, don't do it again; similarly
> > don't unlock a byte if we haven't locked it. This doesn't change the
> > behavior, but fixes a corner case explained below.
> > 
> > Libvirt had an error handling bug (RHBZ 1584982) that could change an image's
> > permissions (ownership, file mode, SELinux label) by mistake behind QEMU's
> > back. Specifically, an image in use by a Libvirt VM has:
> > 
> >     $ ls -lhZ b.img
> >     -rw-r--r--. qemu qemu system_u:object_r:svirt_image_t:s0:c600,c690 b.img
> > 
> > Trying to attach it a second time won't work because of image locking.
> > And after the error, it becomes:
> > 
> >     $ ls -lhZ b.img
> >     -rw-r--r--. root root system_u:object_r:virt_image_t:s0 b.img
> > 
> > After that, we can no longer do OFD lock operations with the existing fd.
> > In other words, code such as this call in blk_detach_dev:
> > 
> >     blk_set_perm(blk, 0, BLK_PERM_ALL, &error_abort);
> > 
> > can abort() QEMU purely because of these environmental changes.
> > 
> > This patch is an easy fix for this, and the change is reasonable regardless,
> > so do it.
> > 
> > Signed-off-by: Fam Zheng <address@hidden>
> 
> Thanks, applied to the block branch.
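
(For illustration only, here is a minimal standalone sketch of the
skip-if-already-locked idea described in the quoted commit message. The
locked_bytes bitmap and the qemu_lock_byte() helper are invented for this
example and are not the real file-posix.c code, which tracks permission bits
via s->perm/s->shared_perm and locks well-known byte offsets.)

    /* skip_lock_sketch.c -- build with: gcc -Wall -o skip_lock_sketch skip_lock_sketch.c */
    #define _GNU_SOURCE             /* for F_OFD_SETLK on Linux */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static unsigned long locked_bytes;  /* which byte offsets we believe we hold */

    /* Take an OFD write lock on byte 'off', unless we already hold it. */
    static int qemu_lock_byte(int fd, int off)
    {
        struct flock fl = {
            .l_whence = SEEK_SET,
            .l_start  = off,
            .l_len    = 1,
            .l_type   = F_WRLCK,
        };

        if (locked_bytes & (1UL << off)) {
            return 0;                   /* already locked: skip the fcntl() */
        }
        if (fcntl(fd, F_OFD_SETLK, &fl) < 0) {
            return -errno;
        }
        locked_bytes |= 1UL << off;
        return 0;
    }

    int main(void)
    {
        char path[] = "/tmp/skip-lock-XXXXXX";
        int fd = mkstemp(path);

        if (fd < 0) {
            perror("mkstemp");
            return 1;
        }
        printf("first:  %d\n", qemu_lock_byte(fd, 3));  /* issues the fcntl() */
        printf("second: %d\n", qemu_lock_byte(fd, 3));  /* no-op, state unchanged */
        unlink(path);
        close(fd);
        return 0;
    }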

Self-NACK. This breaks raw_abort_perm_update(): the extra bytes locked by
raw_check_perm() are not tracked by s->perm (which is only updated in
raw_set_perm()), so they will never get released. This patch is "misusing"
s->perm and s->shared_perm.
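
(Again purely as illustration, a tiny self-contained model of that failure
mode. The bitmask representation and the check_perm()/unlock_bytes()/
abort_perm_update() helpers are simplifications invented for this sketch,
not the real raw_check_perm()/raw_abort_perm_update() code.)

    /* perm_skip_model.c -- toy model of the check/set/abort permission dance */
    #include <stdio.h>

    typedef struct {
        unsigned perm;      /* bytes we have committed to, like s->perm        */
        unsigned os_locks;  /* bytes actually locked at the OS level           */
    } BDRVRawState;

    /* Phase 1 of a permission update: tentatively lock the union of the old
     * and the new bytes.  perm is deliberately NOT updated here, which is
     * what the real raw_check_perm() does. */
    static void check_perm(BDRVRawState *s, unsigned new_perm)
    {
        s->os_locks |= s->perm | new_perm;
    }

    /* The patch's skip logic applied to unlocking: only unlock bytes that
     * s->perm says we hold.  Bytes locked by check_perm() alone are not in
     * s->perm, so for them the unlock silently becomes a no-op. */
    static void unlock_bytes(BDRVRawState *s, unsigned bytes)
    {
        s->os_locks &= ~(bytes & s->perm);
    }

    /* Abort path: should drop every byte we are not committed to keeping. */
    static void abort_perm_update(BDRVRawState *s)
    {
        unlock_bytes(s, s->os_locks & ~s->perm);
    }

    int main(void)
    {
        BDRVRawState s = { .perm = 0x1, .os_locks = 0x1 };

        check_perm(&s, 0x3);    /* the update wants one extra byte (0x2) */
        abort_perm_update(&s);  /* should release 0x2 again, but skips it */
        printf("os_locks after abort: 0x%x (expected 0x1, 0x2 is leaked)\n",
               s.os_locks);
        return 0;
    }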

I'll revise the implementation and send another version together with dropping
s->lock_fd.

Fam


