From: Jan Kiszka
Subject: Re: [Qemu-devel] [RFC][PATCH 28/45] qemu-kvm: msix: Drop tracking of used vectors
Date: Tue, 18 Oct 2011 20:24:39 +0200

On 2011-10-18 19:06, Michael S. Tsirkin wrote:
> On Tue, Oct 18, 2011 at 05:55:54PM +0200, Jan Kiszka wrote:
>> On 2011-10-18 17:22, Jan Kiszka wrote:
>>> What KVM has to do is just mapping an arbitrary MSI message
>>> (theoretically 64+32 bits, in practice of course much less) to
>>
>> (There are 24 distinguishing bits in an MSI message on x86, but that's
>> only a current interpretation of one specific arch.)
> 
> Confused. The vector field is 8 bits; the rest is destination ID etc.

Right, but those additional bits, like the destination, make for distinct
messages. We have to encode those 24 bits into a unique GSI number and
restore them (via table lookup) on APIC injection inside the kernel. If
we only had to encode 256 different vectors, we would be done already.

> 
>>> a single GSI and vice versa. As there are less GSIs than possible MSI
>>> messages, we could run out of them when creating routes, statically or
>>> lazily.
>>>
>>> What would probably help us long-term out of your concerns regarding
>>> lazy routing is to bypass that redundant GSI translation for dynamic
>>> messages, i.e. those that are not associated with an irqfd number or an
>>> assigned device irq. Something like a KVM_DELIVER_MSI IOCTL that accepts
>>> address and data directly.
>>
>> This would be a trivial extension in fact. Given its beneficial impact
>> on our GSI limitation issue, I think I will hack up something like that.
>>
>> And maybe this makes a transparent cache more reasonable. Then only old
>> host kernels would force us to do searches for already cached messages.
>>
>> Jan
> 
> Hmm, I'm not all that sure. The existing design really allows
> caching the route in various smart ways. We currently do
> this for irqfd, but this can be extended to ioctls.
> If we just let the guest inject arbitrary messages,
> that becomes much more complex.

irqfd and kvm device assignment do not allow us to inject arbitrary
messages at arbitrary points. The new API offers kvm_msi_irqfd_set and
kvm_device_msix_set_vector (etc.) for those scenarios to set static
routes from an MSI message to a GSI number (+they configure the related
backends).

> 
> Another concern is mask bit emulation. We currently
> handle the mask bit in userspace, but patches
> to do it in the kernel for assigned devices were seen,
> and IMO we might want to do that for virtio as well.
> 
> For that to work, the mask bit needs to be tied to
> a specific GSI or specific device, which does not
> work if we just inject arbitrary writes.

Yes, but I do not see those valuable plans being negatively affected.

Jan
