qemu-devel
From: Jiang Wang .
Subject: Re: Re: [PATCH v4] virtio/vsock: add two more queues for datagram types
Date: Sun, 5 Sep 2021 11:08:34 -0700

On Mon, Aug 9, 2021 at 3:58 AM Stefano Garzarella <sgarzare@redhat.com> wrote:
>
> On Thu, Aug 05, 2021 at 12:07:02PM -0700, Jiang Wang . wrote:
> >On Wed, Aug 4, 2021 at 1:13 AM Stefano Garzarella <sgarzare@redhat.com> wrote:
> >>
> >> On Tue, Aug 03, 2021 at 11:41:32PM +0000, Jiang Wang wrote:
> >> >Datagram sockets are connectionless and unreliable.
> >> >The sender does not know the capacity of the receiver
> >> >and may send more packets than the receiver can handle.
> >> >
> >> >Add two more dedicate virtqueues for datagram sockets,
> >> >so that it will not unfairly steal resources from
> >> >stream and future connection-oriented sockets.
> >> >
> >> >Signed-off-by: Jiang Wang <jiang.wang@bytedance.com>
> >> >---
> >> >v1 -> v2: use qemu cmd option to control number of queues,
> >> >       removed configuration settings for dgram.
> >> >v2 -> v3: use ioctl to get features and decide number of
> >> >        virt queues, instead of qemu cmd option.
> >> >v3 -> v4: change DGRAM feature bit value to 2. Add an argument
> >> >       in vhost_vsock_common_realize to indicate dgram is supported or not.
> >> >
> >> > hw/virtio/vhost-user-vsock.c                  |  2 +-
> >> > hw/virtio/vhost-vsock-common.c                | 58 ++++++++++++++++++-
> >> > hw/virtio/vhost-vsock.c                       |  5 +-
> >> > include/hw/virtio/vhost-vsock-common.h        |  6 +-
> >> > include/hw/virtio/vhost-vsock.h               |  4 ++
> >> > include/standard-headers/linux/virtio_vsock.h |  1 +
> >> > 6 files changed, 69 insertions(+), 7 deletions(-)
> >> >
> >> >diff --git a/hw/virtio/vhost-user-vsock.c b/hw/virtio/vhost-user-vsock.c
> >> >index 6095ed7349..e9ec0e1c00 100644
> >> >--- a/hw/virtio/vhost-user-vsock.c
> >> >+++ b/hw/virtio/vhost-user-vsock.c
> >> >@@ -105,7 +105,7 @@ static void vuv_device_realize(DeviceState *dev, Error **errp)
> >> >         return;
> >> >     }
> >> >
> >> >-    vhost_vsock_common_realize(vdev, "vhost-user-vsock");
> >> >+    vhost_vsock_common_realize(vdev, "vhost-user-vsock", false);
> >> >
> >> >     vhost_dev_set_config_notifier(&vvc->vhost_dev, &vsock_ops);
> >> >
> >> >diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c
> >> >index 4ad6e234ad..c78536911a 100644
> >> >--- a/hw/virtio/vhost-vsock-common.c
> >> >+++ b/hw/virtio/vhost-vsock-common.c
> >> >@@ -17,6 +17,8 @@
> >> > #include "hw/virtio/vhost-vsock.h"
> >> > #include "qemu/iov.h"
> >> > #include "monitor/monitor.h"
> >> >+#include <sys/ioctl.h>
> >> >+#include <linux/vhost.h>
> >> >
> >> > int vhost_vsock_common_start(VirtIODevice *vdev)
> >> > {
> >> >@@ -196,9 +198,39 @@ int vhost_vsock_common_post_load(void *opaque, int version_id)
> >> >     return 0;
> >> > }
> >> >
> >> >-void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name)
> >> >+static int vhost_vsock_get_max_qps(bool enable_dgram)
> >> >+{
> >> >+    uint64_t features;
> >> >+    int ret;
> >> >+    int fd = -1;
> >> >+
> >> >+    if (!enable_dgram)
> >> >+        return MAX_VQS_WITHOUT_DGRAM;
> >> >+
> >> >+    fd = qemu_open_old("/dev/vhost-vsock", O_RDONLY);
> >>
> >>
> >> As I said in the previous version, we cannot directly open
> >> /dev/vhost-vsock, for two reasons:
> >>
> >>    1. this code is common with vhost-user-vsock which does not use
> >>    /dev/vhost-vsock.
> >>
> >>    2. the fd may have been passed from the management layer and qemu may
> >>    not be able to directly open /dev/vhost-vsock.
> >>
> >> I think it's better to move this function in hw/virtio/vhost-vsock.c,
> >> using the `vhostfd`, returning true or false if dgram is supported, then
> >> you can use it for `enable_dgram` param ...
> >>
> >
> >Yes, you are right. Now I remember you said that before, but I forgot
> >about it when I changed the code. I will fix it. Sorry about that.
>
> No problem :-)
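For what it's worth, the suggested move might look roughly like the sketch below. The helper names, the illustrative `VIRTIO_VSOCK_F_DGRAM` definition, and the split between the ioctl wrapper and the pure bit test are assumptions for illustration, not the actual v5 code:

```c
#include <stdbool.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Feature bit value from v4 of the patch (illustrative definition). */
#define VIRTIO_VSOCK_F_DGRAM 2

/* Pure bit test, kept separate so it is trivially checkable. */
static bool vhost_vsock_features_have_dgram(uint64_t features)
{
    return features & (1ULL << VIRTIO_VSOCK_F_DGRAM);
}

/* Hypothetical helper living in hw/virtio/vhost-vsock.c: it reuses the
 * already-open vhostfd instead of reopening /dev/vhost-vsock, which
 * would break vhost-user-vsock and fd passing from the management
 * layer. */
static bool vhost_vsock_dgram_supported(int vhostfd)
{
    uint64_t features;

    if (ioctl(vhostfd, VHOST_GET_FEATURES, &features) < 0) {
        return false;   /* be conservative on error: no dgram queues */
    }
    return vhost_vsock_features_have_dgram(features);
}
```

The boolean result could then feed the `enable_dgram` parameter of `vhost_vsock_common_realize()`.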
>
> >
> >> >+    if (fd == -1) {
> >> >+        error_report("vhost-vsock: failed to open device. %s", strerror(errno));
> >> >+        return -1;
> >> >+    }
> >> >+
> >> >+    ret = ioctl(fd, VHOST_GET_FEATURES, &features);
> >> >+    if (ret) {
> >> >+        error_report("vhost-vsock: failed to read device. %s", strerror(errno));
> >> >+        qemu_close(fd);
> >> >+        return ret;
> >> >+    }
> >> >+
> >> >+    qemu_close(fd);
> >> >+    if (features & (1 << VIRTIO_VSOCK_F_DGRAM))
> >> >+        return MAX_VQS_WITH_DGRAM;
> >> >+
> >> >+    return MAX_VQS_WITHOUT_DGRAM;
> >> >+}
> >> >+
> >> >+void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name, bool enable_dgram)
> >> > {
> >> >     VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
> >> >+    int nvqs = MAX_VQS_WITHOUT_DGRAM;
> >> >
> >> >     virtio_init(vdev, name, VIRTIO_ID_VSOCK,
> >> >                 sizeof(struct virtio_vsock_config));
> >> >@@ -209,12 +241,24 @@ void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name)
> >> >     vvc->trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
> >> >                                        vhost_vsock_common_handle_output);
> >> >
> >> >+    nvqs = vhost_vsock_get_max_qps(enable_dgram);
> >> >+
> >> >+    if (nvqs < 0)
> >> >+        nvqs = MAX_VQS_WITHOUT_DGRAM;
> >>
> >> ... and here, if `enable_dgram` is true, you can set `nvqs =
> >> MAX_VQS_WITH_DGRAM``
> >>
> >sure.
> >
> >> >+
> >> >+    if (nvqs == MAX_VQS_WITH_DGRAM) {
> >> >+        vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
> >> >+                                              vhost_vsock_common_handle_output);
> >> >+        vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
> >> >+                                               vhost_vsock_common_handle_output);
> >> >+    }
> >> >+
> >>
> >> I'm still concerned about compatibility with guests that don't support
> >> dgram, as I mentioned in the previous email.
> >>
> >> I need to do some testing, but my guess is it won't work if the host
> >> (QEMU and vhost-vsock) supports it and the guest doesn't.
> >>
> >> I still think that we should allocate an array of queues and then decide
> >> at runtime which one to use for events (third or fifth) after the guest
> >> has acked the features.
> >>
> >Agree. I will check where the guest acks the feature. If you have any
>
> I'm not sure we should delete them, I think we can allocate 5 queues and
> then use queue 3 or 5 for events in vhost_vsock_common_start(), when the
> guest has already acked the features.
>

I think I have now solved most of the compatibility issues during
migration. The previous error I saw was due to a bug in the vhost-vsock
kernel module. After fixing that, I did not change anything in QEMU
(i.e., it is still the same v4; by the way, I will fix the fd issue in
v5) and ran a few migration tests. Most of them passed.

Two test cases failed migration with a "Features 0x130000002
unsupported" error, which is caused by the SEQPACKET QEMU patch (my
dgram patch is based on the seqpacket patch). I am not sure whether we
need to fix that. I think the compatibility is good as of now. Let me
know if you have other concerns or more test cases to run. Otherwise, I
will submit v5 soon.

Test results:
The left three columns are the source setup; the right ones are the
destination setup. Host and Guest refer to the host and guest kernels
respectively. These tests are not exhaustive, and I kept the source and
destination kernels at mostly the same version. But I did test one case
where the source kernel has dgram support while the destination kernel
does not, and it worked. By the way, if the source and destination
kernels differ a lot (e.g., 5.14 vs 4.19), then QEMU shows some MSR
errors, which I guess is more or less expected.

Host        QEMU        Guest        --> Host        QEMU        result
dgram       no-dgram    no-dgram        dgram       no-dgram    Good
dgram       no-dgram    dgram           dgram       no-dgram    Good
dgram       dgram       no-dgram        dgram       dgram       Good
dgram       dgram       no-dgram        dgram       no-dgram    Good
dgram       dgram       dgram           dgram       no-dgram    load feature error *1

no-dgram    no-dgram    dgram           no-dgram    no-dgram    Good
no-dgram    dgram       dgram           no-dgram    dgram       Good
no-dgram    dgram       no-dgram        no-dgram    dgram       Good
no-dgram    dgram       no-dgram        no-dgram    no-dgram    Good
no-dgram    dgram       dgram           no-dgram    no-dgram    load feature error *1

dgram       dgram       no-dgram        no-dgram    no-dgram    Good

*1 QEMU shows the following error messages:
qemu-system-x86_64: Features 0x130000002 unsupported. Allowed features: 0x179000000
qemu-system-x86_64: Failed to load virtio-vhost_vsock:virtio
qemu-system-x86_64: error while loading state for instance 0x0 of device '0000:00:05.0/virtio-vhost_vsock'
qemu-system-x86_64: load of migration failed: Operation not permitted

This is due to SEQPACKET feature bit.
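The failure mode can be modeled with the numbers from the log: the destination rejects migration if the source acked any feature bit outside the destination's supported set. A rough sketch of that check (not the actual virtio_load code, just the mask logic):

```c
#include <stdint.h>

/* Destination-side check, roughly: migration fails when the source
 * acked a feature bit the destination does not support. */
static int features_ok(uint64_t acked, uint64_t supported)
{
    return (acked & ~supported) == 0;
}
```

With the values from the log, `0x130000002 & ~0x179000000` leaves only `0x2`, i.e. feature bit 1, which is the SEQPACKET bit.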


Stepping back and rethinking whether the event vq number should be 3 or
5, I now think it does not matter. The tx and rx queues (whether 2 or 4
of them) belong to vhost, but the event queue belongs to QEMU. The QEMU
code allocates the vhost_dev.vqs array only for the tx and rx queues, so
the event queue is never in that array. That means we don't need to
worry about whether the event queue number is 3 or 5, and my tests
confirmed that. For the virtio spec, I think we need to put the event
queue somewhere, and it looks like it has a position relative to the
tx/rx queues. But for the vhost kernel implementation, the event queue
is a special case and is not related to the tx or rx queues.

Regards,

Jiang


> >pointers, just let me know. Also, could we just remove the vq
> >allocation in common_realize and do it at a later time? Or do we need
> >to delete and add them again, as I mentioned in the previous email?
>
> Instead of having 'recv_vq', 'trans_vq', 'event_vq' fields, we can have
> an array of VirtQueue pointers and a field that indicates what the index
> of the event queue is. (or a boolean that indicates if we are enabling
> dgram or not).
>
> This should simplify the management.
>
> Thanks,
> Stefano
>
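Stefano's array-of-pointers suggestion above might look roughly like this; all names and sizes here are hypothetical, not the actual QEMU fields. The idea is to allocate the maximum number of data queues up front and compute the event-queue index only after the guest has acked its features:

```c
#include <stdbool.h>

#define MAX_VQS_WITHOUT_DGRAM 2     /* stream rx + tx */
#define MAX_VQS_WITH_DGRAM    4     /* plus dgram rx + tx */

/* Hypothetical replacement for the separate recv_vq/trans_vq/event_vq
 * fields: one array plus an index decided at start time. */
struct vsock_vqs {
    void *vqs[MAX_VQS_WITH_DGRAM + 1];  /* data queues + event queue */
    int event_idx;                      /* picked after feature ack  */
};

/* The event queue sits right after the data queues: virtio index 2
 * (third queue) without dgram, index 4 (fifth queue) with dgram. */
static int vsock_event_vq_index(bool guest_acked_dgram)
{
    return guest_acked_dgram ? MAX_VQS_WITH_DGRAM : MAX_VQS_WITHOUT_DGRAM;
}
```

This keeps the "third or fifth" decision out of realize time, matching the plan to choose the event queue in vhost_vsock_common_start().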


