Re: [PATCH RFC server 05/11] vfio-user: run vfio-user context
From: Jag Raman
Subject: Re: [PATCH RFC server 05/11] vfio-user: run vfio-user context
Date: Mon, 16 Aug 2021 14:10:00 +0000
> On Aug 16, 2021, at 8:52 AM, John Levon <john.levon@nutanix.com> wrote:
>
> On Fri, Aug 13, 2021 at 02:51:53PM +0000, Jag Raman wrote:
>
>> Thanks for the information about the blocking and non-blocking modes.
>>
>> I’d like to explain why we are presently using a separate thread, and
>> to check with you whether it’s possible to poll on multiple vfu contexts
>> at the same time (similar to select/poll on fds).
>>
>> To summarize my understanding of how devices are run in QEMU:
>> QEMU initializes the device instance, at which point the device registers
>> callbacks for BAR and config space accesses. The device is then
>> driven by these callbacks - whenever the vcpu thread tries to access
>> the BAR addresses or issues a config space access on the PCI bus, the
>> vcpu exits to QEMU, which handles these accesses. As such, the device
>> is driven by the vcpu thread. Since there are no vcpu threads in the
>> remote process, we created a separate thread as a replacement. As you
>> can see already, this thread blocks in vfu_run_ctx(), which I believe
>> polls the socket for messages from the client.
>>
>> If there were a way to run multiple vfu contexts at the same time, that
>> would help conserve threads on the host CPU. For example, there could be
>> a way to add vfu contexts to a list of contexts that expect messages
>> from the client. Alternatively, this QEMU server could implement a
>> similar mechanism itself, grouping all non-blocking vfu contexts onto a
>> single thread instead of having a separate thread for each context.
>
> You can use vfu_get_poll_fd() to retrieve the underlying socket fd (simplest
> would be to do this after vfu_attach_ctx(), but that might depend), then poll
> on the fd set, doing vfu_run_ctx() when the fd is ready. An async hangup on
> the socket would show up as ENOTCONN, in which case you'd remove the fd from
> the set.
OK, sounds good, will check this model out. Thank you!
--
Jag
>
> Note that we're not completely async yet (e.g. the actual socket read/writes
> are synchronous). In practice that's not typically an issue, but it could be
> if you wanted to support multiple VMs from a single server, etc.
>
>
> regards
> john