

From: Meganathan Raja
Subject: [Qemu-discuss] Inter VM dependency on two socket hypervisor
Date: Thu, 8 Sep 2016 18:24:27 -0700

Hi All,

We are seeing an interesting scenario during the throughput tests.


System Configuration:

Two sockets with 10 cores each.

VM-0 vCPUs are pinned to physical cores 2-9 on socket 0; the emulator thread is pinned to core 1.

VM-1 vCPUs are pinned to physical cores 12-19 on socket 1; the emulator thread is pinned to core 11.

Each VM has 8 cores.
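For reference, the pinning described above would look roughly like this in the libvirt domain XML for VM-0 (a sketch only; the exact vCPU-to-core mapping and the strict memory mode are assumptions, adjust to your actual configuration):

```xml
<domain type='kvm'>
  <!-- ... -->
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <!-- Pin each vCPU of VM-0 to a dedicated physical core on socket 0 -->
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
    <vcpupin vcpu='4' cpuset='6'/>
    <vcpupin vcpu='5' cpuset='7'/>
    <vcpupin vcpu='6' cpuset='8'/>
    <vcpupin vcpu='7' cpuset='9'/>
    <!-- Keep the emulator thread off the vCPU cores -->
    <emulatorpin cpuset='1'/>
  </cputune>
  <numatune>
    <!-- Allocate all guest memory from the local NUMA node only -->
    <memory mode='strict' nodeset='0'/>
  </numatune>
</domain>
```

VM-1's XML would be the mirror image: cpuset 12-19, emulatorpin 11, nodeset 1.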

CPU: E5-2687W v3 @ 3.1 GHz

Memory: 65 GB per socket

All SR-IOV ports are Intel 40G

CentOS 7.0, kernel 3.10.0-123.9.3.el7.x86_64

QEMU version 2.4

Using library: libvirt 1.2.18

Using API: QEMU 1.2.18

Running hypervisor: QEMU 2.4.0


The issue we are seeing: VM-1, running on CPU socket 1, somehow affects the throughput of VM-0, running on CPU socket 0, even though all the isolation is configured in the libvirt XML (more detail in Note 1 at the end).


Specifically, VM-0 drops traffic on the port connected to the traffic generator, but only while VM-1 is running on socket 1. If we remove VM-1, or put very little load on it, we see no drops.
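One thing worth double-checking when cross-socket interference like this shows up is whether each SR-IOV NIC's PCI device really sits on the expected socket. A minimal sketch (the interface names below are hypothetical; replace them with your SR-IOV PF/VF names):

```shell
# Print the NUMA node a NIC's PCI device is attached to, or "unknown"
# if the interface (or its sysfs entry) does not exist on this host.
nic_numa_node() {
    f="/sys/class/net/$1/device/numa_node"
    if [ -r "$f" ]; then
        cat "$f"
    else
        echo "unknown"
    fi
}

# Hypothetical interface names -- replace with your actual SR-IOV ports.
nic_numa_node enp5s0f0
nic_numa_node enp129s0f0
```

A NIC used by VM-0 should report node 0, and one used by VM-1 should report node 1; a value of -1 means the kernel does not know the locality.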


Note 1:

-          Cores for each VM are pinned to their respective sockets

-          Emulator threads are pinned to a core on the respective socket

-          Memory for each VM comes from its local NUMA node; e.g., for the VM running on socket 1, huge pages are allocated from NUMA node 1

-          SR-IOV interfaces are on PCI slots attached to the correct CPU: each VM's I/O interfaces use PCI slots connected to its own socket

-          Tried with and without hyper-threading

-          Enabled the following BIOS settings:

Intel VT, SR-IOV support, Energy Efficient
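To confirm that the huge pages backing each VM really come from the local node, the per-node reservations can be read from sysfs. A sketch (the 1 GiB page size, 1048576 kB, is an assumption; pass 2048 instead for 2 MiB pages):

```shell
# Print how many huge pages of a given size (kB, default 1 GiB) are
# reserved on a NUMA node; prints "unknown" if the node or page size
# does not exist on this machine.
node_hugepages() {
    f="/sys/devices/system/node/node$1/hugepages/hugepages-${2:-1048576}kB/nr_hugepages"
    if [ -r "$f" ]; then
        cat "$f"
    else
        echo "unknown"
    fi
}

node_hugepages 0   # socket 0 / NUMA node 0
node_hugepages 1   # socket 1 / NUMA node 1
```

If the node-1 count is zero while the VM on socket 1 is supposed to be backed by huge pages, its memory is being served remotely, which would explain cross-socket throughput effects.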




