On Tue, 20 Oct 2020 at 17:06, Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
On 10/17/20 1:47 PM, Philippe Mathieu-Daudé wrote:
Hi Damien, Peter,
+/*
+ * macro helpers to convert to hertz / nanosecond
+ */
+#define CLOCK_PERIOD_FROM_NS(ns) ((ns) * (CLOCK_SECOND / 1000000000llu))
+#define CLOCK_PERIOD_TO_NS(per) ((per) / (CLOCK_SECOND / 1000000000llu))
+#define CLOCK_PERIOD_FROM_HZ(hz) (((hz) != 0) ? CLOCK_SECOND / (hz) : 0u)
I'm getting a Floating Point Exception when using a frequency of 1GHz.
For frequencies >= 1GHz we have CLOCK_PERIOD_FROM_HZ(hz) <= 0x100000000,
and CLOCK_PERIOD_TO_NS() truncates any period below 0x100000000 to 0.
So for frequencies above 1GHz clock_get_ns() returns 0.
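To make the truncation concrete, here is a tiny standalone program
reproducing the arithmetic (the macros are copied from the patch; the
value of CLOCK_SECOND, 1000000000llu << 32, i.e. periods stored in
units of 2^-32 ns, is my assumption of what the header defines):

    #include <stdio.h>
    #include <inttypes.h>

    #define CLOCK_SECOND (1000000000llu << 32)   /* assumed definition */
    #define CLOCK_PERIOD_FROM_HZ(hz) (((hz) != 0) ? CLOCK_SECOND / (hz) : 0u)
    #define CLOCK_PERIOD_TO_NS(per) ((per) / (CLOCK_SECOND / 1000000000llu))

    int main(void)
    {
        uint64_t freqs[] = { 500000000llu, 1000000000llu, 2000000000llu };
        for (size_t i = 0; i < 3; i++) {
            uint64_t per = CLOCK_PERIOD_FROM_HZ(freqs[i]);
            /* 500MHz -> 2 ns, 1GHz -> 1 ns, 2GHz -> 0 ns (truncated) */
            printf("%" PRIu64 " Hz: period=0x%" PRIx64 " -> %" PRIu64 " ns\n",
                   freqs[i], per, CLOCK_PERIOD_TO_NS(per));
        }
        return 0;
    }

Anything whose period is not a whole number of nanoseconds loses the
fractional part as soon as it goes through CLOCK_PERIOD_TO_NS().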
So Peter suggested on IRC to rewrite the code consuming this API
so that it avoids hitting this limit :)
Still, an assert would help other developers who trigger the same
issue to figure out more quickly how to work around the problem.
The fundamental problem here is that if you have a 2GHz
clock then it is just not possible to have an API which
says "give me the period of this clock in nanoseconds
as an integer".
Even for clocks which have lower frequencies, there is
still a rounding/accuracy problem when converting to
a nanoseconds count. I think these macros and the
functions that wrap them are in retrospect a mistake,
and we should replace them with other APIs that allow
calculations which can avoid the rounding problem
(eg if what you need is "how many nanoseconds is it
until 5000 ticks have expired" we would need an API
for that, rather than trying to calculate it as
5000 * nanoseconds_per_tick).
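As a rough sketch of what such an API could look like -- the name
clock_ticks_to_ns() and the 2^-32 ns period representation are
assumptions here, this is not an existing function:

    static uint64_t clock_ticks_to_ns(const Clock *clk, uint64_t ticks)
    {
        /*
         * Multiply first and convert to nanoseconds last, so the
         * fractional-ns part of the period is only thrown away once
         * rather than once per tick; the 128-bit intermediate keeps
         * ticks * period from overflowing 64 bits.
         */
        unsigned __int128 ns = (unsigned __int128)clock_get(clk) * ticks;
        return (uint64_t)(ns >> 32);
    }

That keeps "5000 ticks of a 2GHz clock" exact (2500 ns), whereas
5000 * clock_get_ns() would give 5000 * 0 = 0.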
It looks like currently the only uses of CLOCK_PERIOD_TO_NS()
and clock_get_ns() are:
* some tracepoints in the clock code itself
* mips_cp0_period_set(), which does:
      env->cp0_count_ns = cpu->cp0_count_rate *
                          clock_get_ns(MIPS_CPU(cpu)->clock);
so I think it is trying to calculate "nanoseconds for
X ticks of the clock".
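With a helper along those lines this could presumably be written as
something like (still only a sketch, reusing the hypothetical
clock_ticks_to_ns() above):

    env->cp0_count_ns = clock_ticks_to_ns(MIPS_CPU(cpu)->clock,
                                          cpu->cp0_count_rate);

i.e. ask directly for "nanoseconds per cp0_count_rate clock ticks"
instead of scaling an already-rounded per-tick value.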
CLOCK_PERIOD_TO_HZ() and clock_get_hz() are used by:
* the qdev_print() code that prints a human-readable
description of the clock
* hw/char/cadence_uart.c and hw/char/ibex_uart.c code
that calculates a baud rate given the input clock
CLOCK_PERIOD_FROM_HZ and CLOCK_PERIOD_FROM_NS are
used only in the clock_update*() and clock_set*()
functions. I think those should all be OK, because
they're just setting the period of the clock (possibly
propagating it to connected clocks), and we can
assume the caller uses whatever unit they naturally
have available as the accurate way to set the clock.
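(For instance -- purely illustrative, the device and field names are
invented -- a board model that knows its oscillator frequency in Hz
just does

    clock_set_hz(s->xtal_clk, 24 * 1000 * 1000);    /* 24 MHz crystal */

and the period is then carried around at full precision internally.)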
So that suggests to me that we should look at designing
APIs for:
* "give me the time in nanoseconds for X ticks of this clock"
* "give me a human-readable string describing this clock"
for the qdev_print() monitor output
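The human-readable part is probably just a thin wrapper; a rough
sketch, where the helper name is invented and I'm assuming a
freq_to_str()-style utility that renders Hz with an SI prefix:

    static char *clock_display_freq(Clock *clk)
    {
        /* e.g. "33.3 MHz"; rounding is fine for display purposes */
        return freq_to_str(clock_get_hz(clk));
    }

with the caller freeing the returned string.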