qemu-devel

Re: [Qemu-devel] IVSHMEM device performance


From: Eli Britstein
Subject: Re: [Qemu-devel] IVSHMEM device performance
Date: Mon, 11 Apr 2016 13:18:03 +0000


> -----Original Message-----
> From: Michael S. Tsirkin [mailto:address@hidden]
> Sent: Monday, 11 April, 2016 3:28 PM
> To: Markus Armbruster
> Cc: Eli Britstein; address@hidden; address@hidden
> Subject: Re: IVSHMEM device performance
>
> On Mon, Apr 11, 2016 at 10:56:54AM +0200, Markus Armbruster wrote:
> > Cc: qemu-devel
> >
> > Eli Britstein <address@hidden> writes:
> >
> > > Hi
> > >
> > > In a VM, I add an IVSHMEM device, on which the mbuf mempool resides,
> > > as well as rings that I create (I run a DPDK application in the VM).
> > > I saw that there is a performance penalty when I use such a device
> > > instead of hugepages (the VM's hugepages). My VM's memory is *NOT*
> > > backed by the host's hugepages.
> > > The memory behind the IVSHMEM device is a host hugepage (I use a
> > > patched version of QEMU, as provided by Intel).
> > > I thought maybe the reason is that this memory is seen by the VM as a
> > > mapped PCI memory region, so it is not cached, but I am not sure.
> > > So, my direction was to change the kernel (in the VM) so that it
> > > considers this memory regular memory (and thus cached), instead of a
> > > PCI memory region.
> > > However, I am not sure this direction is correct, and even if it is,
> > > I am not sure how/where to change the kernel (my starting point was
> > > mm/mmap.c, but I'm not sure it's the correct place to start).
> > >
> > > Any suggestion is welcome.
> > > Thanks,
> > > Eli.
>
> A cleaner way is just to use virtio, keeping everything in the VM's memory,
> with access either by data copies in the hypervisor, or directly using
> vhost-user. For example, with vhost-pci: https://wiki.opnfv.org/vm2vm_mst
> There has been recent work on this; see slides 12-14 in
> http://schd.ws/hosted_files/ons2016/36/Nakajima_and_Ergin_PreSwitch_final.pdf
>
> This is very much work in progress, but if you are interested you should
> probably get in touch with Nakajima et al.
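(For reference, a minimal sketch of what a vhost-user port looks like on the
QEMU command line; the socket path, size and IDs are placeholders, not taken
from this thread. Note that vhost-user needs the guest memory to be shared
with the host through a file-backed, share=on memory backend:

  -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 \
  -chardev socket,id=chr0,path=/tmp/vhost-user.sock \
  -netdev type=vhost-user,id=net0,chardev=chr0 \
  -device virtio-net-pci,netdev=net0

The other end of the socket is typically a DPDK vhost-user backend on the
host, e.g. a vswitch.)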
[Eli Britstein] This is indeed very interesting and I will look into it
further.
However, if I'm not mistaken, this requires some support from the host,
which I would like to avoid.
My only requirement from the host is to provide an IVSHMEM device shared by
several VMs, and my applications run only in the VMs. So I think vhost-pci
is not applicable in my case. Am I wrong?
Can you think of a reason why accessing that PCI memory-mapped memory
(which is really a host hugepage) is more expensive than accessing the
VM's hugepages (even though they are not really backed by host hugepages)?
Do you think my suspicion is correct that, as PCI-mapped memory, it is not
cached? If so, do you think I can change that (either through some
configuration or by changing the VM's kernel)?
Any other direction?

Thanks, Eli
>
> --
> MST
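On the question of making the guest treat the BAR as regular, cached memory:
below is a minimal, untested sketch of that direction as a guest kernel
module, rather than a change in mm/mmap.c. It assumes the standard ivshmem
PCI IDs (1af4:1110) and BAR2 for the shared memory; error handling is
trimmed, and the effective memory type still depends on the guest's
PAT/MTRR configuration.

  /* Sketch: map the ivshmem shared-memory BAR (BAR2) cacheable in the guest. */
  #include <linux/module.h>
  #include <linux/pci.h>
  #include <linux/io.h>

  #define IVSHMEM_VENDOR_ID 0x1af4   /* Red Hat, Inc. */
  #define IVSHMEM_DEVICE_ID 0x1110   /* Inter-VM shared memory */

  static struct pci_dev *pdev;
  static void __iomem *shm;

  static int __init ivshmem_cached_init(void)
  {
          resource_size_t start, len;

          pdev = pci_get_device(IVSHMEM_VENDOR_ID, IVSHMEM_DEVICE_ID, NULL);
          if (!pdev)
                  return -ENODEV;

          if (pci_enable_device(pdev)) {
                  pci_dev_put(pdev);
                  return -EIO;
          }

          /* BAR2 of ivshmem is the shared-memory region backed by the host. */
          start = pci_resource_start(pdev, 2);
          len   = pci_resource_len(pdev, 2);

          /*
           * ioremap_cache() requests a cacheable mapping instead of the
           * uncached one that a plain ioremap() (or an mmap of the sysfs
           * resource2 file) would give.  This is only reasonable because
           * the "device" memory is ordinary host RAM, with no real MMIO
           * side effects.
           */
          shm = ioremap_cache(start, len);
          if (!shm) {
                  pci_dev_put(pdev);
                  return -ENOMEM;
          }

          pr_info("ivshmem: BAR2 at %pa, %llu bytes, mapped cacheable\n",
                  &start, (unsigned long long)len);
          return 0;
  }

  static void __exit ivshmem_cached_exit(void)
  {
          if (shm)
                  iounmap(shm);
          if (pdev)
                  pci_dev_put(pdev);
  }

  module_init(ivshmem_cached_init);
  module_exit(ivshmem_cached_exit);
  MODULE_LICENSE("GPL");

Exposing that mapping to a userspace DPDK process in the VM would still need
a char device (or a uio driver) on top; the sketch only shows where the
caching attribute of the mapping is decided.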