Re: [PATCH v8] introduce vfio-user protocol specification
From: Stefan Hajnoczi
Subject: Re: [PATCH v8] introduce vfio-user protocol specification
Date: Wed, 5 May 2021 16:51:12 +0100
On Tue, May 04, 2021 at 02:31:45PM +0000, John Levon wrote:
> On Tue, May 04, 2021 at 02:51:45PM +0100, Stefan Hajnoczi wrote:
>
> > On Wed, Apr 14, 2021 at 04:41:22AM -0700, Thanos Makatos wrote:
> > By the way, this DMA mapping design is the eager mapping approach. Another
> > approach is the lazy mapping approach where the server requests translations
> > as necessary. The advantage is that the client does not have to send each
> > mapping to the server. In the case of VFIO_USER_DMA_READ/WRITE no mappings
> > need to be sent at all. Only mmaps need mapping messages.
>
> Are you arguing that we should implement this? It would non-trivially
> complicate the server-side implementations, where the library "owns" the
> mapping logic but an API user is responsible for doing the actual
> reads/writes.
It's up to you whether the lazy DMA mapping approach is worth
investigating. It might perform better than the eager approach.
The vhost/vDPA lazy DMA mapping message is struct vhost_iotlb_msg in
Linux if you want to take a look.
> > How do potentially large messages work around max_msg_size? It is hard
> > for the client/server to anticipate the maximum message size that will
> > be required ahead of time, so they can't really know if they will hit a
> > situation where max_msg_size is too low.
>
> Are there specific messages you're worried about? Would it help to add a
> stipulation to the specification of a minimum size that clients and servers
> must support?
>
> Ultimately the max msg size exists solely to ease implementation: with a
> reasonable fixed size, we can always consume the entire data in one go, rather
> than doing partial reads. Obviously that needs a limit to avoid unbounded
> allocations.
It came to mind when reading about the dirty bitmap messages. Memory
dirty bitmaps can become large: at one bit per 4 KiB page, an 8 GB
memory region has a 256 KB dirty bitmap.
> > > +VFIO_USER_DEVICE_GET_INFO
> > > +-------------------------
> > > +
> > > +Message format
> > > +^^^^^^^^^^^^^^
> > > +
> > > ++--------------+----------------------------+
> > > +| Name | Value |
> > > ++==============+============================+
> > > +| Message ID | <ID> |
> > > ++--------------+----------------------------+
> > > +| Command | 4 |
> > > ++--------------+----------------------------+
> > > +| Message size | 32 |
> > > ++--------------+----------------------------+
> > > +| Flags | Reply bit set in reply |
> > > ++--------------+----------------------------+
> > > +| Error | 0/errno |
> > > ++--------------+----------------------------+
> > > +| Device info | VFIO device info |
> > > ++--------------+----------------------------+
> > > +
> > > +This command message is sent by the client to the server to query for basic
> > > +information about the device. The VFIO device info structure is defined in
> > > +``<linux/vfio.h>`` (``struct vfio_device_info``).
> >
> > Wait, the "VFIO device info format" below is missing the cap_offset field,
> > so it's not exactly the same as <linux/vfio.h>?
>
> We had to move away from directly consuming struct vfio_device_info when
> cap_offset was added. Generally, trying to use vfio.h at all seems like a bad
> idea. That's an implementation thing, but this was a dangling reference we
> need to clean up.
Okay. Dropping "<linux/vfio.h>" from the spec would solve this.
Stefan