Re: [Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt


From: Itamar Heim
Subject: Re: [Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt
Date: Mon, 12 Mar 2012 21:12:40 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:10.0) Gecko/20120131 Thunderbird/10.0

On 03/12/2012 09:01 PM, Anthony Liguori wrote:
On 03/12/2012 01:53 PM, Itamar Heim wrote:
On 03/11/2012 05:33 PM, Anthony Liguori wrote:
On 03/11/2012 09:56 AM, Gleb Natapov wrote:
On Sun, Mar 11, 2012 at 09:12:58AM -0500, Anthony Liguori wrote:
-cpu best wouldn't solve this. You need a read/write configuration
file where QEMU probes the available CPU and records it to be used
for the lifetime of the VM.
That's what I thought too, but this shouldn't be the case (Avi's idea).
We need two things: 1) the CPU model config should be per machine type,
and 2) QEMU should refuse to start if it cannot create a CPU exactly as
specified by the model config.
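For reference, requirement 2 roughly matches QEMU's existing "enforce"
CPU option, which makes QEMU fail instead of silently dropping features
the host cannot provide. A minimal sketch; the model and disk image are
illustrative only:

    # With "enforce", QEMU exits with an error if the host/KVM cannot
    # supply every feature of the requested model, rather than silently
    # masking the missing bits.
    $ qemu-system-x86_64 -enable-kvm -cpu Westmere,enforce -m 1024 disk.img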

This would either mean:

A. pc-1.1 uses -cpu best with a fixed mask for 1.1

B. pc-1.1 hardcodes Westmere or some other family

(A) would imply a different CPU if you moved the machine from one system
to another. I would think this would be very problematic from a user's
perspective.

(B) would imply that we had to choose the least common denominator which
is essentially what we do today with qemu64. If you want to just switch
qemu64 to Conroe, I don't think that's a huge difference from what we
have today.
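Concretely, under (B) the machine type alone would pin the guest CPU
model; today the model has to be spelled out next to the machine type by
whoever starts the guest. A sketch, assuming a QEMU new enough to have
the pc-1.1 machine type, with the model chosen purely for illustration:

    # Option (B) would make pc-1.1 imply a fixed model (e.g. Westmere);
    # today the equivalent has to be requested explicitly.
    $ qemu-system-x86_64 -enable-kvm -M pc-1.1 -cpu Westmere -m 1024 disk.img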

It's a discussion about how we handle this up and down the stack.

The question is who should define and manage CPU compatibility.
Right now QEMU does to a certain degree, libvirt discards this and
does its own thing, and VDSM/ovirt-engine assumes that we're
providing something and has built a UI around it.
If we want QEMU to be usable without a management layer, then QEMU should
provide stable CPU models. Stable in the sense that a QEMU, kernel, or CPU
upgrade does not change what the guest sees.

We do this today by exposing -cpu qemu64 by default. If all you're
advocating is doing -cpu Conroe by default, that's fine.
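For context, the set of built-in models (qemu64, Conroe, Westmere, and
friends), and therefore what changing the default would mean, can be
inspected directly from the binary; a sketch, exact output varies by
QEMU version:

    # List the CPU models this QEMU binary knows about; with no -cpu
    # argument, qemu-system-x86_64 currently defaults to qemu64.
    $ qemu-system-x86_64 -cpu help        # or "-cpu ?" on older QEMU

    # Explicitly choosing a named model instead of the default:
    $ qemu-system-x86_64 -enable-kvm -cpu Conroe -m 1024 disk.img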

But I fail to see where this fits into the larger discussion here. The
problem to solve is: I want to use the largest possible subset of CPU
features available uniformly throughout my datacenter.

QEMU and libvirt have single node views so they cannot solve this
problem on their own. Whether that subset is a generic Westmere-like
processor that never existed IRL or a specific Westmere processor seems
like a decision that should be made by the datacenter level manager with
the node level view.
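For what it's worth, libvirt already has a building block for exactly
this computation: given the <cpu> element from each host's capabilities,
"virsh cpu-baseline" returns a guest CPU supported by all of them; the
datacenter-level manager still has to gather the per-node views and apply
the result. A sketch, host names and file name made up:

    # Collect each node's host CPU description (part of the capabilities
    # XML) and let libvirt compute the common-subset guest CPU.
    $ for h in node1 node2 node3; do
    >     virsh -c qemu+ssh://$h/system capabilities
    > done > all-hosts.xml
    $ virsh cpu-baseline all-hosts.xml    # output abridged below
    <cpu match='exact'>
      <model>Westmere</model>
      ...
    </cpu>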

If I have a homogeneous environment of Xeon 7540s, I would probably like
to see a Xeon 7540 in my guest. Doesn't it make sense to enable the
management tool to make this decision?

literally, or in capabilities?
literally meaning you want to allow passing the host cpu name through so
it is exposed to the guest?

Yes, literally.

Xen exposes the host CPUID to the guest for PV. Both PHYP (IBM System P)
and z/VM (IBM System Z) do the same.

What does VMware expose to guests by default?
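For comparison, QEMU can already do the literal thing as well: "-cpu host"
passes the host CPUID through as closely as KVM allows, and newer libvirt
exposes the same idea (plus a softer "closest named model" variant) in the
domain XML. A sketch, not a recommendation:

    # QEMU: expose the host CPU model to the guest.
    $ qemu-system-x86_64 -enable-kvm -cpu host -m 1024 disk.img

    # libvirt domain XML equivalents (newer libvirt versions):
    <cpu mode='host-passthrough'/>   <!-- raw host CPUID -->
    <cpu mode='host-model'/>         <!-- closest supported named model -->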

if in capabilities, how would it differ from choosing the correct "cpu
family"?
it wouldn't really be identical (say, number of cores/sockets, and no
VT for the time being)

It's a trade off. From a RAS perspective, it's helpful to have
information about the host available in the guest.

If you're already exposing a compatible family, exposing the actual
processor seems to be worth the extra effort.

only if the entire cluster is (and will be?) identical cpus.
or if you don't care about live migration i guess, which could be the case for clouds. then again, i'm not sure a cloud provider would want to expose the physical cpu to the tenant.


ovirt allows setting a "cpu family" per cluster; assume tomorrow it could
do it in an even more granular way.
it could also do it automatically based on the subset of flags common to
all hosts - but would it really make sense to expose a set of capabilities
which doesn't exist in the real world (which, iiuc, is pretty much aligned
with the cpu families?) and that users understand?

No, I think the lesson we've learned in QEMU (the hard way) is that
exposing a CPU that never existed will cause something to break.
Oftentimes, that something is glibc or GCC, which tends to be rather
epic in terms of failure.

good to hear - I think this is the important part.
so from that perspective, cpu families sound like the right abstraction for the general use case to me. for ovirt, we could improve with smaller/dynamic subsets of migration domains rather than the current clusters, and it sounds like you would want "expose the host cpu for non-migratable guests, or for identical clusters".
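In libvirt domain XML terms, that per-guest policy could look roughly like
the following; a sketch, assuming a libvirt with the host-passthrough mode:

    <!-- non-migratable guest, or a cluster of identical hosts:
         expose the host CPU itself -->
    <cpu mode='host-passthrough'/>

    <!-- mixed cluster / migration domain: pin the guest to the
         cluster-wide family so it can migrate to any host -->
    <cpu match='exact'>
      <model>Westmere</model>
    </cpu>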


