Re: Unpredictable performance degradation in QEMU KVMs

From: Parnell Springmeyer
Subject: Re: Unpredictable performance degradation in QEMU KVMs
Date: Wed, 6 Oct 2021 10:50:52 -0500

Hi Ken, thanks for replying.

1. We use x86-64 linux hosts and x86-64 linux guests.
2. We do run multiple guest instances; what's frustrating is that we have a hard time reproducing the degradation (even when running 4-5 guests at the same time).
3. We do have enough cores and we're very careful to allocate cores correctly.
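For reference, this is roughly how our pinning works (a sketch only: the libvirt domain name `ci-guest`, the vCPU count, and the core offsets are made up for illustration; adjust them to your topology):

```shell
# Sketch: pin each of 4 vCPUs of a hypothetical libvirt domain
# "ci-guest" to its own dedicated host core, offset past the cores
# reserved for the host itself.
HOST_RESERVED=4                     # first host core handed to guests
for vcpu in 0 1 2 3; do
    virsh vcpupin ci-guest "$vcpu" "$((vcpu + HOST_RESERVED))"
done

# Inside a guest, nonzero steal time suggests the host is
# oversubscribed; field 9 of the aggregate "cpu" line in /proc/stat
# is the steal counter (in jiffies).
awk '/^cpu / { print "steal jiffies:", $9 }' /proc/stat
```

Checking steal time inside a degraded guest is a cheap first test of whether the slowdown comes from host-side CPU contention rather than something in the guest itself.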

On Tue, Oct 5, 2021 at 7:31 PM Ken Moffat <zarniwhoop@ntlworld.com> wrote:
On Tue, Oct 05, 2021 at 06:58:51PM -0500, Parnell Springmeyer wrote:
> Hi, we use QEMU VMs for running our integration testing infrastructure
> and have run into a very difficult-to-debug problem: occasionally we
> see a severe performance degradation in some of our QEMU VMs.
> We tried tuning our QEMU VMs (a long time ago) but we still have this
> issue. Is this something anyone has experience with and I'm just not
> finding it via Google? Or could someone recommend some troubleshooting
> steps?

I can't answer the question directly (my qemu experience is limited
to x86_64 linux running x86_64 linux guests, and in my experience
relative runtimes in the guests are usually degraded by the overhead
of running in qemu). And yes, finding the right search terms can be
difficult.

But some questions which might help elicit more information:

1. What guest OS and architecture, and what host OS and architecture?
'linux' is good enough for OS if that is what you are using; I doubt
distro variations make much difference. But running different OSes
on guests, and particularly emulating incompatible architectures
such as aarch64 on x86_64, will add overheads which probably vary
according to exactly what the VM is running.

2. Are you running multiple guest instances on the same host(s) ?

3. If you are running multiple guests on each host, do your hosts
have enough cores ?

I have read that specifying more cores for the guest than are
actually available is possible, but that doesn't sound like a recipe
for good performance.  Of course, if your guests are
externally-connected servers running real (but different) loads,
then I guess overspecifying the number of cores for each will often
be beneficial if their peak loads do not coincide.  For CI tests
with random build jobs, that might not be such a good idea if ninja
or cargo are used (from memory, those will try to use N+2 jobs on
machines with 4 or more cores).
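If that overshoot is a concern, one mitigation is to cap build
parallelism at the CPU count the guest actually sees rather than
relying on the tools' defaults. A sketch (the `-j`/`--jobs` flags
are standard ninja and cargo options; whether this matters for your
workload is an assumption):

```shell
# Cap parallel build jobs at the number of CPUs visible in the guest,
# instead of ninja's default of (cores + 2) on multi-core machines.
JOBS="$(nproc)"
ninja -j "$JOBS"            # explicit job count for ninja
cargo build --jobs "$JOBS"  # same cap for Rust builds
```

With several guests building at once, keeping each guest's job count
at or below its vCPU count stops the builds from collectively
oversubscribing the host.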

We had to carve each 0 and 1 on separate granite wheelbarrows and
then carry them on our backs, neck-deep in the snow uphill both ways
and with a wolf nailed to our skull to keep us warm!
                                  -- 'JockTroll' on slashdot

Parnell Springmeyer
