From: Viresh Kumar
Subject: Re: [virtio-dev] [RFC QEMU] docs: vhost-user: Add custom memory mapping support
Date: Fri, 3 Mar 2023 13:41:10 +0530

On 01-03-23, 10:47, Stefan Hajnoczi wrote:
> Resend - for some reason my email didn't make it out.

How about this (will send a formal patch later).

Author: Viresh Kumar <viresh.kumar@linaro.org>
Date:   Tue Feb 21 14:36:30 2023 +0530

    docs: vhost-user: Add Xen specific memory mapping support

    The current model of memory mapping at the back-end works fine where a
    standard call to mmap() (for the respective file descriptor) is enough
    before the back-end can start accessing the guest memory.

    There are more complex cases, though, where the back-end needs more
    information and a simple mmap() isn't enough. For example Xen, a type-1
    hypervisor, currently supports memory mapping via two different methods,
    foreign mapping (via /dev/privcmd) and grant mapping (via /dev/gntdev).
    In both of these cases, the back-end needs to call mmap() and ioctl(),
    and needs to pass extra information via the ioctl(), like the Xen
    domain-id of the guest whose memory we are trying to map.
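
    As a rough illustration (not part of this patch), a back-end could use
    the libxenforeignmemory and libxengnttab helper libraries, which wrap
    exactly these mmap()/ioctl() sequences on /dev/privcmd and /dev/gntdev.
    The sketch below shows only the happy path; the helper names and error
    handling are illustrative:

        #include <sys/mman.h>
        #include <xenforeignmemory.h>
        #include <xengnttab.h>

        /* Foreign mapping: map frames of guest 'domid' via /dev/privcmd. */
        static void *map_foreign(uint32_t domid, xen_pfn_t *pfns, size_t nr)
        {
            xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
            if (!fmem)
                return NULL;
            /* The library performs the mmap() + ioctl() pair internally,
             * passing 'domid' down to the hypervisor. */
            return xenforeignmemory_map(fmem, domid, PROT_READ | PROT_WRITE,
                                        nr, pfns, NULL);
        }

        /* Grant mapping: map a grant 'ref' offered by guest 'domid' via
         * /dev/gntdev. */
        static void *map_grant(uint32_t domid, uint32_t ref)
        {
            xengnttab_handle *xgt = xengnttab_open(NULL, 0);
            if (!xgt)
                return NULL;
            return xengnttab_map_grant_ref(xgt, domid, ref,
                                           PROT_READ | PROT_WRITE);
        }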

    Add a new protocol feature, 'VHOST_USER_PROTOCOL_F_XEN_MMAP', which lets
    the back-end know about the additional memory mapping requirements.
    When this feature is negotiated, the front-end can send the
    'VHOST_USER_SET_XEN_MMAP' message type to provide the additional
    information to the back-end.

    Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 docs/interop/vhost-user.rst | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index 3f18ab424eb0..8be5f5eae941 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -258,6 +258,24 @@ Inflight description

 :queue size: a 16-bit size of virtqueues

+Xen mmap description
+^^^^^^^^^^^^^^^^^^^^
+
++-------+-------+
+| flags | domid |
++-------+-------+
+
+:flags: 64-bit bit field
+
+- Bit 0 is set for Xen foreign memory mapping.
+- Bit 1 is set for Xen grant memory mapping.
+- Bit 2 is set if the back-end can directly map additional memory (like
+  descriptor buffers or indirect descriptors, which aren't part of already
+  shared memory regions) without the front-end having to send an additional
+  memory region first.
+
+:domid: a 64-bit Xen hypervisor-specific domain id.
+
 C structure
 -----------

@@ -867,6 +885,7 @@ Protocol features
   #define VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS 14
   #define VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS  15
   #define VHOST_USER_PROTOCOL_F_STATUS               16
+  #define VHOST_USER_PROTOCOL_F_XEN_MMAP             17

 Front-end message types
 -----------------------
@@ -1422,6 +1441,23 @@ Front-end message types
   query the back-end for its device status as defined in the Virtio
   specification.

+``VHOST_USER_SET_XEN_MMAP``
+  :id: 41
+  :equivalent ioctl: N/A
+  :request payload: Xen mmap description
+  :reply payload: N/A
+
+  When the ``VHOST_USER_PROTOCOL_F_XEN_MMAP`` protocol feature has been
+  successfully negotiated, this message is submitted by the front-end to set
+  the Xen hypervisor specific memory mapping configurations at the back-end.
+  These configurations should be used to mmap memory regions, virtqueues,
+  descriptors and descriptor buffers. The front-end must send this message
+  before any memory regions are sent to the back-end via the
+  ``VHOST_USER_SET_MEM_TABLE`` or ``VHOST_USER_ADD_MEM_REG`` message types.
+  The front-end can send this message multiple times, if different mmap
+  configurations are required for different memory regions, where the most
+  recent ``VHOST_USER_SET_XEN_MMAP`` must be used by the back-end to map any
+  newly shared memory regions.
+

 Back-end message types
 ----------------------

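
For illustration only (this is not part of the patch), the payload and the
back-end handling described above could look roughly like the following C
sketch; all the names here are hypothetical:

    #include <stdint.h>

    /* Hypothetical C view of the "Xen mmap description" payload:
     * two 64-bit fields, as documented above. */
    typedef struct VhostUserXenMmap {
        uint64_t flags;   /* bit field, see below */
        uint64_t domid;   /* Xen domain id of the guest */
    } VhostUserXenMmap;

    #define VHOST_USER_XEN_MMAP_FOREIGN    (1ULL << 0) /* foreign mapping */
    #define VHOST_USER_XEN_MMAP_GRANT      (1ULL << 1) /* grant mapping */
    #define VHOST_USER_XEN_MMAP_DIRECT_MAP (1ULL << 2) /* back-end may map
                                                          additional memory */

    /* The back-end caches the most recent message and consults it when
     * mapping any region shared later via VHOST_USER_SET_MEM_TABLE or
     * VHOST_USER_ADD_MEM_REG; a repeated message simply overwrites the
     * cached configuration. */
    static VhostUserXenMmap current_xen_mmap;

    static void handle_set_xen_mmap(const VhostUserXenMmap *msg)
    {
        current_xen_mmap = *msg;
    }
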
-- 
viresh


