Re: [Qemu-devel] IVSHMEM device performance


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] IVSHMEM device performance
Date: Mon, 11 Apr 2016 15:27:34 +0300

On Mon, Apr 11, 2016 at 10:56:54AM +0200, Markus Armbruster wrote:
> Cc: qemu-devel
> 
> Eli Britstein <address@hidden> writes:
> 
> > Hi
> >
> > In a VM, I add an IVSHMEM device on which the mbuf mempool and the
> > rings I create reside (I run a DPDK application in the VM).
> > I see a performance penalty when I use such a device instead of
> > hugepages (the VM's own hugepages). My VM's memory is *NOT* backed by
> > the host's hugepages.
> > The memory behind the IVSHMEM device is a host hugepage (I use a
> > patched version of QEMU, as provided by Intel).
> > I thought the reason might be that this memory is seen by the VM as a
> > mapped PCI memory region, so it is not cached, but I am not sure.
> > So my direction was to change the kernel (in the VM) so that it treats
> > this memory as regular (and thus cached) memory instead of a PCI
> > memory region.
> > However, I am not sure this direction is correct, and even if it is, I
> > am not sure how/where to change the kernel (my starting point was
> > mm/mmap.c, but I'm not sure it's the correct place to start).
> >
> > Any suggestion is welcome.
> > Thanks,
> > Eli.

A cleaner way is just to use virtio, keeping everything in the VM's
memory, with access either by data copies in the hypervisor or
directly via vhost-user (see the sketch below).
For example, there has been recent work on vhost-pci
(https://wiki.opnfv.org/vm2vm_mst); see slides 12-14 in
http://schd.ws/hosted_files/ons2016/36/Nakajima_and_Ergin_PreSwitch_final.pdf

This is very much work in progress, but if you are interested
you should probably get in touch with Nakajima et al.
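
In the meantime, the plain vhost-user path already works today. Here is
a minimal sketch of the setup (the socket path, sizes and hugepage mount
below are placeholders, and the host side needs some vhost-user backend,
e.g. DPDK testpmd with the vhost PMD, or OVS-DPDK):

  # host: a vhost-user backend serving the socket
  testpmd -c 0x3 -n 4 --vdev 'eth_vhost0,iface=/tmp/vhost-user0' -- -i

  # guest: file-backed, shared guest RAM plus a vhost-user netdev
  # behind a virtio-net device
  qemu-system-x86_64 -enable-kvm -cpu host -m 2048 \
    -object memory-backend-file,id=mem0,size=2048M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem0 \
    -chardev socket,id=chr0,path=/tmp/vhost-user0 \
    -netdev type=vhost-user,id=net0,chardev=chr0,vhostforce \
    -device virtio-net-pci,netdev=net0

Note that share=on on the memory backend is what lets the vhost-user
backend map the guest RAM directly; the data path then lives in ordinary
cached guest memory rather than in an uncached PCI BAR.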

-- 
MST


