
Re: [Qemu-devel] virtio-scsi spec, first public draft


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] virtio-scsi spec, first public draft
Date: Fri, 6 May 2011 13:31:40 +0100

On Thu, May 5, 2011 at 3:50 PM, Paolo Bonzini <address@hidden> wrote:
> On 05/05/2011 04:29 PM, Hannes Reinecke wrote:
>>>
>>> I chose 1 requestq per target so that, with MSI-X support, each
>>> target can be associated to one MSI-X vector.
>>>
>>> If you want a large number of units, you can subdivide targets into
>>> logical units, or use multiple adapters if you prefer. We can have
>>> 20-odd SCSI adapters, each with 65534 targets. I think we're way
>>> beyond the practical limits even before LUN support is added to QEMU.
>>
>> But this will make queue full tracking harder.
>> If we have one queue per LUN the SCSI stack is able to track QUEUE FULL
>> states and will adjust the queue depth accordingly.
>> When we have only one queue per target we cannot track QUEUE FULL
>> anymore and have to rely on the static per-host 'can_queue' setting.
>> Which doesn't work as well, especially in a virtualized environment
>> where the queue full conditions might change at any time.
>
> So you want one virtqueue per LUN?  I had it in the first version, but then
> you had to associate a (target, 8-byte LUN) pair to each virtqueue manually.
>  That was very hairy, so I changed it to one target per queue.
>
>> But read on:
>>
>>> For comparison, Windows supports up to 1024 targets per adapter
>>> (split across 8 channels); IBM vSCSI provides up to 128; VMware
>>> supports a maximum of 15 SCSI targets per adapter and 4 adapters per
>>> VM.
>>>
>> We don't have to impose any hard limits here. The virtio scsi transport
>> would need to be able to detect the targets, and we would be using
>> whatever targets have been found.
>
> Yes, that's what I wrote above.  Right now "detect the targets" means "send
> INQUIRY for LUN0 and/or REPORT LUNS to each virtqueue", thanks to the 1:1
> relationship.  In my first version it would mean:
>
> - associate each target's LUN0 to a virtqueue
> - if needed, send INQUIRY for LUN0 and/or REPORT LUNS
> - if needed, deassociate the LUN0 and the virtqueue
>
> Really, it was ugly.  It also raises more questions, such as what
> to do if a virtqueue has pending requests at deassociation time.
>
>>> Yes, just add the first LUN to it (it will be LUN0 which must be
>>> there anyway). The target's existence will be reported on the
>>> control receiveq.
>>>
>> ?? How is this supposed to work?
>> How can I detect the existence of a virtqueue ?
>
> Config space tells you how many virtqueues exist.  That gives how many
> targets you can address at most.  If some of them are empty at the beginning
> of the guest's life, their LUN0 will fail to answer INQUIRY and REPORT LUNS.
>
> (It is the same for vmw_pvscsi by the way, except simpler: the maximum # of
> targets is not configurable, and there is just one queue + one interrupt).

Okay, this explains how you plan to handle targets appearing - you
want to set a maximum number of targets.  I was wondering how we would
add virtqueues dynamically (and why the control vqs are placed last at
n,n+1 instead of 0,1).  Like Hannes said, why introduce a limit here
if we don't have to?

I'm really not sure I understand the win of creating lots of
virtqueues.  I just want a pipe out onto the SCSI bus so I can talk to
all devices in the SCSI domain.  Creating separate virtqueues
increases complexity in the driver and emulation IMO.

What is the MSI-X win you mentioned?  I guess if an application on
vcpu0 is accessing target0 a lot, then interrupts for that target can
be handled on vcpu0 while other vcpus handle interrupts for other SCSI
targets?  I remember VMware pv scsi has a trick here: each request can
contain the vcpu number, which influences interrupt routing somehow.

Stefan


