From: Jan Kiszka
Subject: Re: [Qemu-devel] qemu-kvm upstreaming: Do we need -no-kvm-pit and -no-kvm-pit-reinjection semantics?
Date: Fri, 20 Jan 2012 13:51:20 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12) Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666

On 2012-01-20 13:42, Daniel P. Berrange wrote:
> On Fri, Jan 20, 2012 at 01:00:06PM +0100, Jan Kiszka wrote:
>> On 2012-01-20 12:45, Daniel P. Berrange wrote:
>>> On Fri, Jan 20, 2012 at 12:13:48PM +0100, Jan Kiszka wrote:
>>>> On 2012-01-20 11:25, Daniel P. Berrange wrote:
>>>>> On Fri, Jan 20, 2012 at 11:22:27AM +0100, Jan Kiszka wrote:
>>>>>> On 2012-01-20 11:14, Marcelo Tosatti wrote:
>>>>>>> On Thu, Jan 19, 2012 at 07:01:44PM +0100, Jan Kiszka wrote:
>>>>>>>> On 2012-01-19 18:53, Marcelo Tosatti wrote:
>>>>>>>>>> What problems does it cause, and in which scenarios? Can't they be
>>>>>>>>>> fixed?
>>>>>>>>>
>>>>>>>>> If the guest compensates for lost ticks, and KVM reinjects them, guest
>>>>>>>>> time advances faster than it should, to the extent where NTP fails to
>>>>>>>>> correct it. This is the case with RHEL4.
>>>>>>>>>
>>>>>>>>> But, for example, a 2.4 kernel (or Windows with a non-ACPI HAL) does not
>>>>>>>>> compensate. In that case you want KVM to reinject.
>>>>>>>>>
>>>>>>>>> I don't know of any other way to fix this.
>>>>>>>>
>>>>>>>> OK, I see. The old unsolved problem of guessing what is being executed.
>>>>>>>>
>>>>>>>> Then the next question is how and where to control this. Conceptually,
>>>>>>>> there should rather be a global switch, say "compensate for lost ticks of
>>>>>>>> periodic timers: yes/no", instead of a per-timer knob. Didn't we
>>>>>>>> discuss something like this before?
>>>>>>>
>>>>>>> I don't see the advantage of a global control versus per device
>>>>>>> control (in fact it lowers flexibility).
>>>>>>
>>>>>> Usability. Users should not have to care about individual tick-based
>>>>>> clocks. They care about "my OS requires lost-tick compensation: yes or
>>>>>> no".
>>>>>
>>>>> FYI, at the libvirt level we model policy against individual timers, for
>>>>> example:
>>>>>
>>>>>   <clock offset="localtime">
>>>>>     <timer name="rtc" tickpolicy="catchup" track="guest"/>
>>>>>     <timer name="pit" tickpolicy="delay"/>
>>>>>   </clock>
>>>>
>>>> Are the various modes of tickpolicy fully specified somewhere?
>>>
>>> There are some (not all that great) docs here:
>>>
>>>   http://libvirt.org/formatdomain.html#elementsTime
>>>
>>> The meaning of the 4 policies are:
>>>
>>>       delay: continue to deliver at normal rate
>>
>> What does this mean? The timer stops ticking until the guest accepts its
>> ticks again?
> 
> It means that the hypervisor will not attempt to do any compensation,
> so the guest will see delays in its ticks being delivered & gradually
> drift over time.

Still, is the logic as I described? Or how does it differ from "discard"?

> 
>>>     catchup: deliver at higher rate to catchup
>>>       merge: ticks merged into 1 single tick
>>>     discard: all missed ticks are discarded
>>
>> But those interpretations aren't stated in the docs. That makes it hard
>> to map them onto individual hypervisors - or to model proper new hypervisor
>> interfaces accordingly.
> 
> That's not a real problem; now that I notice they are missing from the
> docs, I can just add them in.

TIA, but please make them more verbose. The above descriptions only help if
you take real hypervisor implementations as a reference.
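
For concreteness, a rough sketch of how such semantics could look on the
command line. The -no-kvm-pit-reinjection switch is the existing qemu-kvm
one from this thread's subject; the per-device lost_tick_policy property
below is only an assumed shape for an upstream interface, not a settled
name or syntax:

  # qemu-kvm today: do not reinject lost PIT ticks, for guests that
  # compensate on their own (e.g. RHEL4)
  qemu-kvm ... -no-kvm-pit-reinjection

  # assumed upstream shape: the same policy as a per-device property
  qemu-system-x86_64 ... -global kvm-pit.lost_tick_policy=discard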

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux


