From: Markus Armbruster
Subject: Re: [Qemu-devel] [PATCH 07/55] block: Make BlockDriver method bdrv_set_locked() return void
Date: Thu, 21 Jul 2011 17:07:17 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/23.1 (gnu/linux)

Luiz Capitulino <address@hidden> writes:

> On Wed, 20 Jul 2011 18:23:41 +0200
> Markus Armbruster <address@hidden> wrote:
>
>> The only caller is bdrv_set_locked(), and it ignores the value.
>> 
>> Callees always return 0, except for FreeBSD's cdrom_set_locked(),
>> which returns -ENOTSUP when the device is in a terminally wedged
>> state.
>
> The fact that we're ignoring ioctl() failures caught my attention. Is it
> because the state we store in bs->locked is what really matters?
>
> Otherwise the right thing to do would be to propagate the error and
> change callers to handle it.

The only caller is bdrv_set_locked(), which can't handle it, so it
would have to pass it on.

The purpose of bdrv_set_locked() is to synchronize the physical lock (if
any) with the virtual lock.  The worst that can happen is that the
physical lock fails to track the virtual one.

bdrv_set_locked()'s callers are device models: ide-cd and scsi-cd.  Each
one calls it in four places (with all my patches applied):

* Device init: can return failure.  Failure makes QEMU refuse to start
  (-device cold plug) or makes hot plug fail (device_add).

* Device exit: can't return failure.

* Device post migration: can return failure, which makes migration
  fail.  Given the choice, I figure I'd pick an incorrect physical
  tray lock over a failed migration.

* Guest PREVENT ALLOW MEDIUM REMOVAL: can send an error reply to the
  guest (sketched after this list).
  Recommended errors according to MMC-5 are "unit attention errors" (not
  a good match), "CDB or parameter list validation errors" (worse) and
  "hardware failures" (no good choices there either).

I doubt adding the error handling is worth our while, but feel free to
send patches.


