bug-gnu-utils

Re: time: Should real time usage account for discontinuous jumps?


From: Charles Swiger
Subject: Re: time: Should real time usage account for discontinuous jumps?
Date: Thu, 26 Sep 2013 08:09:16 -0700

Hi--

On Sep 25, 2013, at 11:37 PM, Petr Pisar <address@hidden> wrote:
> On 2013-09-26, Bob Proulx <address@hidden> wrote:
>> Petr Pisar wrote:
>>> GNU time, as well as the bash built-in, computes real time process usage
>>> as a simple difference between two real-time points. If there was a time
>>> adjustment in between (by NTP or manually), the measured value would be
>>> affected.
>> 
>> NTP will never step the clock.  NTP will adjust the length of each clock
>> tick to keep the clock on time while ensuring that every tick is present.
>> If the clock is being stepped, it is due to some other cause, such as a
>> manual change.
> 
> NTP does not step the clock; it slows or accelerates it. But the effect is
> the same---the difference between two time points no longer matches the
> physical duration.

NTP calls adjtime() or similar to adjust the rate at which the system clock [1]
increments its notion of time, to match the "real time" obtained from the NTP
time source, which is either a lower-stratum ntpd server reached via the
Internet, or a primary time reference like a GPS receiver or atomic clock.

>> But why would there be a time adjustment in between?  Stepping the clock
>> is an abnormal condition.  It isn't something that should ever happen
>> during normal system operation.  If your clock is being stepped then
>> that is a bug and needs to be fixed.
> 
> NTP by definition refuses to adjust the clock if the difference is too big.
> Thus distributions usually step the clock on first NTP contact, and then
> keep adjusting. With mobile hosts losing and regaining network
> connectivity on the fly, it's quite possible the system will experience
> time steps.

Oh, agreed.

For the normal case, ntpd won't change the time faster than 0.5 ms per second,
but if the measured interval is long enough to contain a network dropout and
re-acquisition, then ntpd might be restarted and perform an initial step of
the time rather than slewing it.
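
For scale: at 0.5 ms per second (500 ppm), slewing away even a 2 s offset
would take 2 / 0.0005 = 4000 s, well over an hour.  That is why the
reference ntpd steps rather than slews when the initial offset exceeds its
step threshold (128 ms by default).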

However, on a good day, ntpd will have already figured out the intrinsic
first-order drift of the local hardware clock versus "real time", and as a
result the device will keep better time even through a network outage than
it would otherwise.

>>> I have found no hint anywhere as to whether this is intended behavior or
>>> whether one should measure some kind of monotonic timeline.
>> 
>> Since stepping the clock is not a normal condition I don't think it
>> matters.  It certainly isn't a problem if the system is running NTP
>> and the clock is running normally.
> 
> I agree one can consider an NTP-adjusted clock as `running normally',
> because the reason for the adjustment is that the local real-time clock is
> not accurate enough. In this light, CLOCK_MONOTONIC seems good enough.

Yes, if you want to compute "how long something took" via the delta between
start and finish, then CLOCK_MONOTONIC is likely the best choice.
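
A minimal sketch of that pattern in C (the "sleep 2" child is just a
stand-in workload; note that on Linux CLOCK_MONOTONIC is still subject to
adjtime()-style slewing, but never to steps):

    /* Minimal sketch: measuring elapsed real time with CLOCK_MONOTONIC,
     * which is immune to clock steps (settimeofday() etc.).  The child
     * command here is just an example workload. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        system("sleep 2");                       /* the work being timed */
        clock_gettime(CLOCK_MONOTONIC, &end);

        double elapsed = (end.tv_sec - start.tv_sec)
                       + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("real\t%.3fs\n", elapsed);
        return 0;
    }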

Regards,
-- 
-Chuck

[1]: These days, probably the TSC or maybe ACPI or HPET timers.



