Re: [Qemu-devel] [PATCH] *** Vhost-pci RFC v2 ***
From: Marc-André Lureau
Subject: Re: [Qemu-devel] [PATCH] *** Vhost-pci RFC v2 ***
Date: Thu, 01 Sep 2016 08:49:20 +0000
Hi
On Thu, Sep 1, 2016 at 12:19 PM Wei Wang <address@hidden> wrote:
> On 08/31/2016 08:30 PM, Marc-André Lureau wrote:
>
> - If it could be made not PCI-specific, a better name for the device could
> be simply "driver": the driver of a virtio device. Or the "slave" in
> vhost-user terminology - the consumer of the virtqueues. I think you prefer
> to call it "backend" in general, but I find that more confusing.
>
>
> Not really. A virtio device has its own driver (e.g. a virtio-net driver
> for a virtio-net device). A vhost-pci device plays the role of a backend
> (just like vhost_net or vhost_user) for a virtio device. If we use the
> "device/driver" naming convention, the vhost-pci device is part of the
> "device". But I actually prefer to use "frontend/backend" :) If we check
> QEMU's docs/specs/vhost-user.txt, it also uses "backend" to describe this
> role.
>
>
Yes, but it uses "backend" loosely, without any definition, and to name
different things in different places. (At least "slave" is defined as the
consumer of the virtqueues, though I think some people don't like to use
that word.)

Have you thought about making the device not PCI-specific? I don't know
much about MMIO devices or s390, but if devices can hotplug their own
memory (I believe MMIO devices can), then it should be possible to define
a sufficiently generic device.
> - Regarding the socket protocol, why not reuse vhost-user? It seems to me
> it supports most of what you need and more (interrupts, migration,
> protocol features, starting/stopping queues). Some of the extensions, like
> the uuid, could be beneficial to vhost-user too.
>
>
> Right. We recently changed the plan - trying to make it (the vhost-pci
> protocol) an extension of the vhost-user protocol.
>
Great!
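
For concreteness, the vhost-user wire format is a fixed header followed by
a request-specific payload, so extending it is mostly a matter of claiming
new request codes. A rough sketch of that framing in C - the
VHOST_USER_SET_VHOST_PCI code below is made up purely for illustration and
is not part of any spec:

#include <stdint.h>

/* Framing as described in docs/specs/vhost-user.txt: a fixed header
 * (request, flags, payload size) followed by the payload itself;
 * the structure is packed on the wire. */
typedef struct VhostUserMsg {
    uint32_t request;   /* e.g. VHOST_USER_SET_MEM_TABLE */
    uint32_t flags;     /* version in the low bits, reply flags above */
    uint32_t size;      /* number of payload bytes that follow */
    union {
        uint64_t u64;
        /* ... vring states, memory region tables, etc. ... */
    } payload;
} VhostUserMsg;

enum {
    VHOST_USER_SET_MEM_TABLE = 5,   /* existing request; others elided */
    VHOST_USER_SET_VHOST_PCI = 100, /* hypothetical vhost-pci extension */
};
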
> - Why is it required or beneficial to support multiple "frontend" devices
> over the same "vhost-pci" device? It could simplify things if it were a
> single device. If necessary, that could also be interesting as a vhost-user
> extension.
>
>
> We call it "multiple backend functionalities" (e.g. vhost-pci-net,
> vhost-pci-scsi, ...). A vhost-pci driver contains multiple such backend
> functionalities, because that way they can reuse (share) the same memory
> mapping. To be more precise, a vhost-pci device supplies the memory of a
> frontend VM, and all the backend functionalities need to access the same
> frontend VM memory, so we consolidate them into one vhost-pci driver using
> one vhost-pci device.
>
>
That's what I imagined. Do you have a use case for that?
Given that it's in a VM (no caching issues?), how is it a problem to map
the same memory multiple times? Is there a memory limit?
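
To check that I read the sharing argument right: the frontend VM's memory
regions arrive once (e.g. as fds over the vhost-user socket), get mmap()ed
once, and every backend functionality translates guest-physical addresses
through the same table - roughly like the sketch below (all names here are
mine, for illustration only, not from the RFC):

#include <stdint.h>
#include <stddef.h>

/* One mmap()ed region of the frontend VM's memory. */
struct frontend_region {
    uint64_t gpa;    /* frontend guest-physical base address */
    uint64_t size;   /* region length in bytes */
    void    *va;     /* where the region is mapped on our side */
};

struct frontend_mem {
    size_t nregions;
    struct frontend_region regions[8];
};

/* The single translation shared by vhost-pci-net, vhost-pci-scsi, ...:
 * walk the region table and turn a frontend GPA into a local pointer. */
static void *frontend_gpa_to_va(const struct frontend_mem *mem, uint64_t gpa)
{
    for (size_t i = 0; i < mem->nregions; i++) {
        const struct frontend_region *r = &mem->regions[i];
        if (gpa >= r->gpa && gpa - r->gpa < r->size) {
            return (uint8_t *)r->va + (gpa - r->gpa);
        }
    }
    return NULL; /* not covered by any shared region */
}
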
> - No interrupt support; I suppose you have mainly looked at poll-based net
> devices.
>
>
> Yes. But I think it's also possible to add interrupt support. For
> example, we can use ioeventfd (or a hypercall) to inject interrupts into
> the frontend VM after transmitting packets.
>
I guess it would be a good idea to have this in the spec from the
beginning, not as an afterthought.
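
For the eventfd-based variant, I picture the backend side looking roughly
like this, with an irqfd-style binding in the hypervisor doing the actual
injection into the frontend VM (a sketch under those assumptions; none of
these names come from the RFC):

#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

/* Create the "call" fd the backend signals; binding it so that a signal
 * becomes an interrupt in the frontend VM (e.g. via KVM's irqfd) would
 * happen elsewhere, at device setup time. */
int create_call_fd(void)
{
    return eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
}

/* After transmitting packets, kick the frontend: writing a nonzero
 * 8-byte value is how an eventfd is signalled. */
void notify_frontend(int call_fd)
{
    uint64_t one = 1;
    (void)write(call_fd, &one, sizeof(one));
}
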
>
> - When do you expect to share a WIP/RFC implementation?
>
> Probably in October (next month). I think it also depends on the
> discussions here :)
>
thanks
--
Marc-André Lureau