
Re: [Qemu-devel] [RFC PATCH] implement vmware pvscsi device


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [RFC PATCH] implement vmware pvscsi device
Date: Fri, 15 Apr 2011 15:01:15 +0100

On Fri, Apr 15, 2011 at 2:42 PM, Paolo Bonzini <address@hidden> wrote:
> Lightly tested with Linux guests; at least it can successfully partition
> and format a disk.  scsi-generic also lightly tested.
>
> Doesn't do migration, doesn't do hotplug (the device would support that,
> but it is not 100% documented and the Linux driver in particular cannot
> initiate hot-unplug).  I did it as a quick one-day hack to study the SCSI
> subsystem and it is my first real foray into device model land, please
> be gentle. :)
>
> vmw_pvscsi.h is taken from Linux, so it doesn't fully respect coding
> standards.  I think that's fair.
>
> Size is curiously close to the recently added sPAPR adapter:
>
>  911  2354 25553 hw/vmw_pvscsi.c
>  988  3177 29628 hw/spapr_vscsi.c
>
> Sounds like that's just the amount of code it takes to implement a SCSI
> HBA in QEMU. :)

Interesting, thanks for posting this.  I've been playing with virtio
SCSI and it is still in the early stages.  Nicholas A. Bellinger and I
have been wiring the in-kernel SCSI target up to KVM using vhost.
Feel free to take a peek at the work-in-progress:

http://repo.or.cz/w/qemu/stefanha.git/shortlog/refs/heads/virtio-scsi
http://git.kernel.org/?p=linux/kernel/git/nab/lio-core-2.6.git;a=shortlog;h=refs/heads/tcm_vhost

I think SCSI brings many benefits.  Guests can deal with it better
than these alien vdX virtio-blk devices, which makes migration easier.
It becomes possible to attach many disks without burning through free
PCI slots.  We don't need to update guests to add cache control,
discard, and other commands because they are part of SCSI.  We can
pass through more exotic devices.  The list goes on...
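To illustrate the PCI-slot point: one SCSI HBA can front many disks through
bus/target/LUN addressing, so only the controller consumes a slot.  A rough
sketch using later QEMU -device/-drive syntax (the virtio-scsi-pci and
scsi-hd option names here are assumptions for illustration, not part of this
RFC patch):

  # one HBA occupies a single PCI slot; each scsi-hd sits on its bus
  # at its own scsi-id, so adding disks does not consume more slots
  qemu-system-x86_64 \
      -device virtio-scsi-pci,id=scsi0 \
      -drive file=disk0.img,if=none,id=d0 \
      -device scsi-hd,drive=d0,bus=scsi0.0,scsi-id=0 \
      -drive file=disk1.img,if=none,id=d1 \
      -device scsi-hd,drive=d1,bus=scsi0.0,scsi-id=1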

Stefan


