From: Alexander Graf
Subject: Re: [Qemu-devel] [PATCH] powerpc iommu: enable multiple TCE requests
Date: Tue, 20 Aug 2013 07:55:08 +0100

On 20.08.2013, at 02:36, Alexey Kardashevskiy wrote:

> On 08/19/2013 07:47 PM, Paolo Bonzini wrote:
>> On 19/08/2013 10:44, Alexey Kardashevskiy wrote:
>>>>> It means that if you use the same QEMU version with the same command
>>>>> line on a different kernel version, your guest looks different because
>>>>> we generate the dtb differently.
>>> Oh. Sorry for my ignorance again, I am not playing dumb or anything like
>>> that - I do not understand how the device tree (which we cook in QEMU) on
>>> the destination can possibly survive migration and not be overwritten by
>>> the one from the source. What was in the destination RAM before migration
>>> does not matter at all (including the dt); the QEMU device tree is what
>>> matters, and this does not change. Since it is "the same QEMU version", the
>>> hypercalls are supported either way; the only difference is where they will
>>> be handled - in the host kernel or in QEMU. What am I missing?
>> 
>> Nothing, I just asked to test that handling the hypercall in QEMU works.
> 
> Well, I was rather asking Alex :)
> 
>> On x86 we have a similar problem, though with cpuid bits instead of the
>> device tree.  An older kernel might not support some cpuid bits, thus
>> "-cpu SandyBridge" might have different cpuid bits depending on the host
>> processor and kernel version.  This is handled by having an "enforce"
>> mode where "-cpu SandyBridge,enforce" will fail to start if the host
>> processor or the kernel is not new enough.
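(Conceptually, "enforce" boils down to a check like the sketch below: refuse to
start rather than silently drop feature bits the host cannot provide. A minimal
illustration only, not QEMU's actual code; the function name and the flat
bitmask representation are assumptions:

  #include <stdint.h>
  #include <stdio.h>

  /* Compare the CPUID feature bits the user requested against what the
   * host CPU and kernel can actually provide. */
  static int enforce_check(uint32_t requested, uint32_t host_supported)
  {
      uint32_t missing = requested & ~host_supported;

      if (missing) {
          fprintf(stderr, "unavailable CPUID features: 0x%08x\n", missing);
          return -1;   /* with ",enforce": fail startup */
      }
      return 0;        /* everything requested is available */
  }
)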
> 
> Hm. Here we might have a problem like this if we decide to migrate from
> QEMU with this patch running on a modern kernel to QEMU without this patch
> running on an old kernel - for that we might want to be able to disable
> "multi-tce" via machine options on newer kernels. Do we care enough to add
> such a parameter, or do we just disable migration and that's it?

The problem is not confined to migration. Even if you just start up your
VM with a different host kernel you get a different guest environment, so you
potentially have changes lurking in there that can kill reproducibility. Imagine
you're Amazon EC2: you don't want people to get any idea what host they're
running on.
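(The host dependence here is just a capability probe at machine-setup time. A
minimal sketch, assuming KVM's KVM_CAP_SPAPR_MULTITCE capability and QEMU's
kvm_check_extension() helper; the wrapper name is made up:

  #include <stdbool.h>
  #include <linux/kvm.h>    /* KVM_CAP_SPAPR_MULTITCE */
  #include "sysemu/kvm.h"   /* kvm_enabled(), kvm_state, kvm_check_extension() */

  /* Ask the host kernel whether it can handle H_PUT_TCE_INDIRECT and
   * H_STUFF_TCE in-kernel. The answer feeds into the generated device
   * tree, which is why the guest environment varies with the host. */
  static bool host_supports_multitce(void)
  {
      return kvm_enabled() &&
             kvm_check_extension(kvm_state, KVM_CAP_SPAPR_MULTITCE) > 0;
  }
)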

If pseries were a mature, well-established and widely used target, I'd add a
machine option "multi-tce" with three possibilities (see the sketch below):

  on - force multi-tce exposing on
  off - force multi-tce exposing off
  unset - use your current detection code
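
As a rough sketch of that tristate (names and plumbing are assumptions, not
actual QEMU code):

  #include <stdbool.h>

  typedef enum {
      MULTI_TCE_UNSET = 0,   /* default: fall back to detection */
      MULTI_TCE_ON,          /* force multi-tce exposing on */
      MULTI_TCE_OFF,         /* force multi-tce exposing off */
  } MultiTceOpt;

  /* Stand-in for the existing host-capability detection. */
  static bool host_supports_multitce(void);

  /* Decide whether the guest device tree advertises multi-TCE support
   * (the "hcall-multi-tce" entry in ibm,hypertas-functions). */
  static bool expose_multitce(MultiTceOpt opt)
  {
      switch (opt) {
      case MULTI_TCE_ON:
          return true;
      case MULTI_TCE_OFF:
          return false;
      case MULTI_TCE_UNSET:
      default:
          return host_supports_multitce();
      }
  }

With something like "-machine pseries,multi-tce=off", the newer-kernel side of
a migration could then be made to match an older destination.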

That way libvirt, for example, can decide that it wants to nail down TCE support 
throughout a cluster. It's really the same as the cpu,enforce mode, just at the 
machine level rather than for cpuid bits.

However, considering the current user base of KVM on pseries I think it's fine 
to just declare newer QEMU on older KVM as slower because it doesn't use the 
in-kernel multi-tce support and call it a day. It makes everyone's life a _lot_ 
easier.

Or are you aware of any products using older kernels that are going to run QEMU 
1.7 and above, but won't update the kernel when they bump up the QEMU version?

> This SandyBridge,enforce - what if the destination QEMU, running on an old
> kernel, was started without this option - will the migration fail? What is the

Migration "requires" you to use the same command line on both ends of the 
migration. Unfortunately in only enforces it implicitly - one of the protocol's 
shortcomings - but that's the idea.


Alex

> mechanism? Do machine options migrate? I looked at target-i386/cpu.c but
> did not see a quick answer.
> 
> 
>> But in this case, you do not need this because the hypercall works if
>> emulated by QEMU.  I like Alex's solution of making it universally
>> available in the dtb.
> 
> The solution would be good if we did not already have H_PUT_TCE accelerated
> for emulated devices in the host kernel, but we do.
> 
> 
> -- 
> Alexey



