From: Alexey Kardashevskiy
Subject: Re: [Qemu-devel] [RFC PATCH] qemu pci: pci_add_capability enhancement to prevent damaging config space
Date: Sat, 09 Jun 2012 00:00:47 +1000
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:13.0) Gecko/20120604 Thunderbird/13.0

08.06.2012 21:30, Jan Kiszka wrote:
> On 2012-06-08 13:16, Alexey Kardashevskiy wrote:
>> 08.06.2012 20:56, Jan Kiszka wrote:
>>> On 2012-06-08 10:47, Alexey Kardashevskiy wrote:
>>>> Yet another try :)
>>>>
>>>> Normally pci_add_capability is called on a device to add a new
>>>> capability. This is fine for emulated devices, whose capability
>>>> list is built by QEMU.
>>>>
>>>> In the case of VFIO the capability may already exist and adding new
>>>
>>> Why does it exist? VFIO should build the virtual capability list from
>>> scratch (just like classic device assignment does), recreating the
>>> layout of the physical device (except for masked out caps). In that
>>> case, this conflict should become impossible, no?
>>
>> Normally, capabilities in emulated devices are created by calling
>> msi_init or msix_init, just when the emulated device wants to advertise
>> them to the guest.
>>
>> In the case of VFIO, there are a lot of capabilities which QEMU does
>> not know about and does not want to know about. They are read from the
>> host kernel as is. And we definitely want to pass these capabilities to
>> the guest as is, i.e. at the same positions and in the same number.
>> Only for some of them do we call pci_add_capability (indirectly!), when
>> we want QEMU to support them somehow.
>>
>> If we invent some function which "re-adds" all the capabilities we got
>> from the host, to keep QEMU's internal PCIDevice data in sync, then
>> we'll need to change every piece of code which adds capabilities.
> 
> I can't follow. What is different in VFIO from device-assignment.c,
> assigned_device_pci_cap_init (except that it already uses msi[x]_init,
> something we need to fix in device-assignment.c)?

What are device-assignment.c and assigned_device_pci_cap_init? I cannot
find them in the QEMU tree.

Ah, anyway. The main difference is that QEMU does not emulate VFIO
devices; it is just a proxy to the host system. Or I do not understand
the question.
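
To illustrate what "proxy" means here: QEMU reads the physical config
space through the VFIO device fd and hands the bytes to the guest. A
minimal sketch (names as in <linux/vfio.h>, error handling mostly
omitted):

#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/vfio.h>

static ssize_t vfio_read_config(int device_fd, void *buf, size_t len)
{
    struct vfio_region_info reg = {
        .argsz = sizeof(reg),
        .index = VFIO_PCI_CONFIG_REGION_INDEX,
    };

    if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &reg) < 0) {
        return -1;
    }
    /* The physical config space is just another region: read it and
     * pass the bytes on, no capability is "created" by QEMU here. */
    return pread(device_fd, buf, len, reg.offset);
}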

>> I noticed it is a very common approach here to change a lot for a very
>> small thing or a rare case, but I'd like to avoid this :)
>>
>>> But if pci_*add*_capability should actually be used like this (I doubt
>>> this),
>>
>> MSI/MSIX use it. To enable MSI/MSIX on a VFIO PCIDevice, we call
>> msi_init/msix_init and they call pci_add_capability.
> 
> You can't blame msi_init/msix_init for the fact that VFIO creates a
> capability list with an existing MSI/MSI-X entry beforehand.

VFIO does not create any capabilities. It gets them all from the host
kernel and passes them to the guest as is. The only capability VFIO
needs enabled is MSIX.
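
For example, finding where the host put MSIX is just the standard
capability list walk over the config space bytes we read from the host
(the constants are the usual PCI ones):

#include <stdint.h>

#define PCI_STATUS          0x06    /* status register (low byte) */
#define PCI_STATUS_CAP_LIST 0x10    /* capability list present */
#define PCI_CAPABILITY_LIST 0x34    /* head of the capability chain */
#define PCI_CAP_ID_MSIX     0x11

/* Returns the MSIX capability offset, or 0 if the device has none. */
static uint8_t find_msix_cap(const uint8_t *config)
{
    uint8_t pos;

    if (!(config[PCI_STATUS] & PCI_STATUS_CAP_LIST)) {
        return 0;
    }
    for (pos = config[PCI_CAPABILITY_LIST]; pos; pos = config[pos + 1]) {
        if (config[pos] == PCI_CAP_ID_MSIX) {
            return pos;    /* same offset as on the host */
        }
    }
    return 0;
}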

>>> some renaming would be required. "Add" sounds like "append" to me,
>>> not "update".
>>
>> It is "add" for all the cases but VFIO. VFIO is the very special case
>> and I do not see another one doing the same soon.
> 
> PCI device assignment may have some special requirements. Then it is
> either required to generalize common services properly or to keep the
> specialty local. So far, this proposal does not fall into either of
> those two categories.

It is a common patch. It does not know about VFIO; it just lets
pci_add_capability handle one more situation, in which the capability
already exists (rough sketch below).
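
The idea boils down to something like this (a sketch only, assuming
QEMU's PCIDevice layout with its config[] bytes and used[] bitmap;
cap_already_present is an illustrative name, not the patch itself):

#include <string.h>

/* Called from pci_add_capability() before allocating space: if the
 * capability with this id is already sitting at @offset (e.g. copied
 * from the host by VFIO), reserve its bytes and report success. */
static int cap_already_present(PCIDevice *pdev, uint8_t cap_id,
                               uint8_t offset, uint8_t size)
{
    if (offset && pdev->config[offset] == cap_id) {
        memset(pdev->used + offset, 0xFF, size);
        return 1;
    }
    return 0;
}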

The only "common" solution I see here is
1) to add pci_fixup_capabilities() which would mark all the bytes of
existing capabilities as "used", we will call it once we fetched the
config space from the host kernel
2) to fix pci_add_capabilities not to fail and simply return (0?) if we
add a capability which already exists.
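
For 1), it could look roughly like this (again a sketch on top of the
assumed config[]/used[] layout; a real version would know the per-id
capability sizes, here only the two-byte header is claimed):

static void pci_fixup_capabilities(PCIDevice *pdev)
{
    uint8_t pos = pdev->config[PCI_CAPABILITY_LIST];

    while (pos) {
        uint8_t next = pdev->config[pos + 1];

        /* Claim at least the capability header so that a later
         * pci_add_capability() cannot allocate on top of it. */
        memset(pdev->used + pos, 0xFF, 2);
        pos = next;
    }
}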

Will it be ok?


-- 
With best regards

Alexey Kardashevskiy -- icq: 52150396




