Re: [Qemu-devel] [RFC v2 8/8] virtio: guest driver reload for vhost-net
From: Wei Xu
Subject: Re: [Qemu-devel] [RFC v2 8/8] virtio: guest driver reload for vhost-net
Date: Tue, 19 Jun 2018 15:53:55 +0800
User-agent: Mutt/1.5.24 (2015-08-30)
On Wed, Jun 06, 2018 at 11:48:19AM +0800, Jason Wang wrote:
>
>
> On 2018-06-06 03:08, address@hidden wrote:
> >From: Wei Xu <address@hidden>
> >
> >last_avail, avail_wrap_count, used_idx and used_wrap_count are
> >needed to support the vhost-net backend. All of these are 16-bit
> >or bool variables, and since state.num is 64 bits wide, it is
> >possible to pack them into 'num' without introducing a new case
> >in the ioctl handling.
> >
> >An unload/reload test has been run successfully with a patch applied to the vhost kernel module.
>
> You need a patch to enable vhost.
>
> And I think you can only do it for vhost-kernel now, since I believe
> the vhost-user protocol needs some extension.
OK.
>
> >
> >Signed-off-by: Wei Xu <address@hidden>
> >---
> > hw/virtio/virtio.c | 42 ++++++++++++++++++++++++++++++++++--------
> > 1 file changed, 34 insertions(+), 8 deletions(-)
> >
> >diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> >index 4543974..153f6d7 100644
> >--- a/hw/virtio/virtio.c
> >+++ b/hw/virtio/virtio.c
> >@@ -2862,33 +2862,59 @@ hwaddr virtio_queue_get_used_size(VirtIODevice
> >*vdev, int n)
> > }
> > }
> >-uint16_t virtio_queue_get_last_avail_idx(VirtIODevice *vdev, int n)
> >+uint64_t virtio_queue_get_last_avail_idx(VirtIODevice *vdev, int n)
> > {
> >- return vdev->vq[n].last_avail_idx;
> >+ uint64_t num;
> >+
> >+ num = vdev->vq[n].last_avail_idx;
> >+ if (virtio_vdev_has_feature(vdev, VIRTIO_F_RING_PACKED)) {
> >+ num |= ((uint64_t)vdev->vq[n].avail_wrap_counter) << 16;
> >+ num |= ((uint64_t)vdev->vq[n].used_idx) << 32;
> >+ num |= ((uint64_t)vdev->vq[n].used_wrap_counter) << 48;
>
> So s.num is 32bit, I don't think this can even work.
I mistakenly thought s.num was 64-bit; I will add a new case in the next version.
Wei
>
> Thanks
>
> >+ }
> >+
> >+ return num;
> > }
> >-void virtio_queue_set_last_avail_idx(VirtIODevice *vdev, int n, uint16_t idx)
> >+void virtio_queue_set_last_avail_idx(VirtIODevice *vdev, int n, uint64_t num)
> > {
> >- vdev->vq[n].last_avail_idx = idx;
> >- vdev->vq[n].shadow_avail_idx = idx;
> >+ vdev->vq[n].shadow_avail_idx = vdev->vq[n].last_avail_idx = (uint16_t)(num);
> >+
> >+ if (virtio_vdev_has_feature(vdev, VIRTIO_F_RING_PACKED)) {
> >+ vdev->vq[n].avail_wrap_counter = (uint16_t)(num >> 16);
> >+ vdev->vq[n].used_idx = (uint16_t)(num >> 32);
> >+ vdev->vq[n].used_wrap_counter = (uint16_t)(num >> 48);
> >+ }
> > }
> > void virtio_queue_restore_last_avail_idx(VirtIODevice *vdev, int n)
> > {
> > rcu_read_lock();
> >- if (vdev->vq[n].vring.desc) {
> >+ if (!vdev->vq[n].vring.desc) {
> >+ goto out;
> >+ }
> >+
> >+ if (!virtio_vdev_has_feature(vdev, VIRTIO_F_RING_PACKED)) {
> > vdev->vq[n].last_avail_idx = vring_used_idx(&vdev->vq[n]);
> >- vdev->vq[n].shadow_avail_idx = vdev->vq[n].last_avail_idx;
> > }
> >+ vdev->vq[n].shadow_avail_idx = vdev->vq[n].last_avail_idx;
> >+
> >+out:
> > rcu_read_unlock();
> > }
> > void virtio_queue_update_used_idx(VirtIODevice *vdev, int n)
> > {
> > rcu_read_lock();
> >- if (vdev->vq[n].vring.desc) {
> >+ if (!vdev->vq[n].vring.desc) {
> >+ goto out;
> >+ }
> >+
> >+ if (!virtio_vdev_has_feature(vdev, VIRTIO_F_RING_PACKED)) {
> > vdev->vq[n].used_idx = vring_used_idx(&vdev->vq[n]);
> > }
> >+
> >+out:
> > rcu_read_unlock();
> > }
>