From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] snabbswitch integration with QEMU for userspace ethernet I/O
Date: Tue, 28 May 2013 13:58:43 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Tue, May 28, 2013 at 12:10:50PM +0200, Luke Gorrie wrote:
> On 27 May 2013 11:34, Stefan Hajnoczi <address@hidden> wrote:
> 
> > vhost_net is about connecting a virtio-net speaking process to a
> > tun-like device.  The problem you are trying to solve is connecting a
> > virtio-net speaking process to Snabb Switch.
> >
> 
> Yep!
> 
> 
> > Either you need to replace vhost or you need a tun-like device
> > interface.
> >
> > Replacing vhost would mean that your switch implements virtio-net,
> > shares guest RAM with the guest, and shares the ioeventfd and irqfd
> > which are used to signal with the guest.
> 
> 
> This would be a great solution from my perspective. This is the design that
> I am now struggling to find a good implementation strategy for.

The switch needs 3 resources for direct virtio-net communication with
the guest:

1. Shared memory access to guest physical memory for guest physical to
   host userspace address translation.  vhost and data plane
   automatically get access to guest memory and they learn about the
   memory layout using the MemoryListener interface in QEMU (see
   hw/virtio/vhost.c:vhost_region_add() and friends).

2. Virtqueue kick notifier (ioeventfd) so the switch knows when the
   guest signals the host.  See virtio_queue_get_host_notifier(vq).

3. Guest interrupt notifier (irqfd) so the switch can signal the guest.
   See virtio_queue_get_guest_notifier(vq).
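
To make point 1 concrete: once the switch has guest memory mapped
somewhere in its own address space, translating a guest physical
address is just a table lookup.  A minimal sketch, assuming a
hypothetical region table (none of these names exist anywhere today):

/* Hypothetical switch-side helper: translate a guest physical address
 * to a host userspace pointer, given a table of regions the switch has
 * already mmap()ed.  The struct and names are made up for illustration. */
#include <stddef.h>
#include <stdint.h>

struct guest_region {
    uint64_t guest_phys_addr;   /* start of the region in guest physical space */
    uint64_t size;              /* length of the region in bytes */
    void    *host_base;         /* where the region is mapped in the switch */
};

static void *guest_to_host(struct guest_region *regions, size_t nregions,
                           uint64_t gpa, uint64_t len)
{
    for (size_t i = 0; i < nregions; i++) {
        struct guest_region *r = &regions[i];

        /* The whole access must fall inside a single region. */
        if (gpa >= r->guest_phys_addr &&
            gpa + len <= r->guest_phys_addr + r->size) {
            return (char *)r->host_base + (gpa - r->guest_phys_addr);
        }
    }
    return NULL;    /* not memory we know about */
}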

I don't have a detailed suggestion for how to interface the switch and
QEMU processes.  It may be necessary to communicate back and forth (to
handle the virtio device lifecycle), so a UNIX domain socket would be
appropriate for passing file descriptors.  Here is a rough idea:

$ switch --listen-path=/var/run/switch.sock
$ qemu --device virtio-net-pci,switch=/var/run/switch.sock
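
For actually handing the ioeventfd and irqfd over, SCM_RIGHTS
ancillary data on that control socket is the natural mechanism.  A
minimal sketch of the QEMU side, assuming a command string and framing
that are still to be defined:

/* Sketch of the QEMU side: send a command string plus the two file
 * descriptors as SCM_RIGHTS ancillary data on the connected control
 * socket.  The command text and framing are not defined anywhere yet. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int send_port_fds(int sock, const char *cmd, int ioeventfd, int irqfd)
{
    int fds[2] = { ioeventfd, irqfd };
    struct iovec iov = {
        .iov_base = (void *)cmd,
        .iov_len  = strlen(cmd),
    };
    char control[CMSG_SPACE(sizeof(fds))];
    struct msghdr msg = {
        .msg_iov        = &iov,
        .msg_iovlen     = 1,
        .msg_control    = control,
        .msg_controllen = sizeof(control),
    };
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;
    cmsg->cmsg_len   = CMSG_LEN(sizeof(fds));
    memcpy(CMSG_DATA(cmsg), fds, sizeof(fds));

    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}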

On QEMU startup:

(switch socket) add_port --id="qemu-$PID" --session-persistence

(Here --session-persistence means that the port will be automatically
destroyed if the switch socket session is terminated because the UNIX
domain socket is closed by QEMU.)
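
On the switch side, --session-persistence can be as simple as noticing
EOF/hangup on the control socket and tearing the port down when QEMU
goes away.  Something along these lines, where destroy_port() stands in
for the switch's cleanup code:

/* Sketch: tear the port down when QEMU closes its end of the control
 * socket.  destroy_port() is a placeholder for the switch's cleanup. */
#include <poll.h>
#include <unistd.h>

static void watch_session(int sock, void (*destroy_port)(void))
{
    struct pollfd pfd = { .fd = sock, .events = POLLIN };
    char buf[256];

    for (;;) {
        if (poll(&pfd, 1, -1) < 0) {
            continue;                   /* interrupted, retry */
        }
        ssize_t n = read(sock, buf, sizeof(buf));
        if (n <= 0) {
            destroy_port();             /* QEMU closed the socket (or error) */
            return;
        }
        /* otherwise parse the control message in buf[0..n) */
    }
}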

On virtio device status transition to DRIVER_OK:

(switch socket) configure_port --id="qemu-$PID"
                               --mem=/tmp/shm/qemu-$PID
                               --ioeventfd=2
                               --irqfd=3

On virtio device status transition from DRIVER_OK:

(switch socket) deconfigure_port --id="qemu-$PID"
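
To sketch what the switch does with the resources from configure_port:
map the memory file, block on the ioeventfd for virtqueue kicks, and
write to the irqfd to interrupt the guest.  The virtqueue processing
itself is omitted and the names are illustrative:

/* Sketch: use the resources received via configure_port.  mem_fd and
 * mem_size describe the shared memory file, kick_fd is the ioeventfd
 * and call_fd is the irqfd passed over the control socket. */
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

static void serve_port(int mem_fd, size_t mem_size, int kick_fd, int call_fd)
{
    /* Map guest memory so virtqueue rings and buffers can be accessed. */
    void *guest_mem = mmap(NULL, mem_size, PROT_READ | PROT_WRITE,
                           MAP_SHARED, mem_fd, 0);
    if (guest_mem == MAP_FAILED) {
        return;
    }

    for (;;) {
        uint64_t kicks, one = 1;

        /* Block until the guest kicks the virtqueue (ioeventfd). */
        if (read(kick_fd, &kicks, sizeof(kicks)) != sizeof(kicks)) {
            break;
        }

        /* ... process the virtqueue rings in guest_mem here ... */

        /* Signal the guest that buffers have been used (irqfd). */
        if (write(call_fd, &one, sizeof(one)) != sizeof(one)) {
            break;
        }
    }

    munmap(guest_mem, mem_size);
}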

I skipped a bunch of things:

1. virtio-net has several virtqueues, so you need multiple ioeventfds.

2. QEMU needs to communicate memory mapping information; this gets
   especially interesting with memory hotplug.  Memory is more
   complicated than a single shmem blob (a possible region table is
   sketched after this list).

3. Multiple NICs per guest should be supported.
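
For point 2, one way the control protocol could describe the layout is
a table of regions, each giving the guest physical start address, the
length, and the offset into the shared file.  Purely illustrative,
nothing here is an existing QEMU structure:

/* Illustrative wire format for describing guest memory regions. */
#include <stdint.h>

struct mem_region_msg {
    uint64_t guest_phys_addr;   /* region start in guest physical space */
    uint64_t size;              /* region length in bytes */
    uint64_t file_offset;       /* offset of the region in the shared file */
};

struct mem_table_msg {
    uint32_t nregions;                  /* number of entries that follow */
    struct mem_region_msg regions[];    /* flexible array member */
};

/* Memory hotplug then becomes add_region/remove_region messages instead
 * of a single static table. */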

Stefan


