
From: Martin Guy
Subject: Re: Re[2]: [Gnash-commit] gnash ChangeLog gui/NullGui.cpp
Date: Sat, 7 Jul 2007 12:21:18 +0100

2007/7/7, Udo Giacomozzi <address@hidden>:
MG>   In general I agree that tu_* should be dropped wherever possible,

What's the problem with tu_timer? I interpreted it as a general Gnash
tool for timing. Anyway, wouldn't it be best to have a dedicated .h
just for timing purposes?

You are right, and that's what it is.
The problem is the tu_ prefix, which stands for Thatcher Ulrich. I guess
there should have been a smiley there.
Imagine if I went round writing functions called martin_this and
martin_that... how would that look?
It's a problem of egotism, which is generally a damaging thing in
programming since it tends to make people defend their territories
rather than concentrate on technical merits in a detached way.
There is egotism in the current battle of the timers too - look how no
one can bear to be seen to have made a mistake.

The infinite loop design was chosen to avoid calls to usleep() when the
delay is 1.

if (delay > 1) usleep(delay); would have done that for you.

An infinite loop with breaks is not a design, it is a mess. If you
understand your problem before you try to code it, you can always find
an elegant and clear way to express the solution.
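
For illustration, here is a minimal sketch of the loop shape being argued
for - one clear loop condition plus the conditional sleep, instead of an
infinite loop with breaks. The names should_quit(), advance_movie() and
delay_us are hypothetical placeholders, not Gnash's actual API:

    #include <unistd.h>   // usleep()

    static int frames_left = 100;   // stand-in for "movie not yet finished"
    static bool should_quit()   { return frames_left-- <= 0; }
    static void advance_movie() { /* run one frame's worth of ActionScript etc. */ }

    void run_loop(unsigned long delay_us)
    {
        while (!should_quit()) {
            advance_movie();
            if (delay_us > 1)     // skip the sleep when it would be pointless
                usleep(delay_us);
        }
    }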

usleep() just sleeps _at least_ the specified time (unless interrupted
by a signal), which means the timing would be inaccurate in that case.

usleep(1) does not really sleep a microsecond. It just
causes a task switch, which on most architectures corresponds to a 10
millisecond sleep (100 Hz).

Thanks, I didn't know that. It turns out that 1000 usleep(1) calls take
7.4 seconds of real time here on a fast laptop in X with an open browser,
and 4 seconds on an idle machine with exactly the same OS. CPU idle time
was 98% in both cases.
That's pretty unreliable.
I've seen the clock interrupt vary from 18 Hz (the MSDOS RTC interrupt) to
1000 Hz (Linux configured for low latency). I guess your own Linux kernel
was configured for 100 Hz. Other OSs may do anything.
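
For reference, a measurement like that can be reproduced with a few lines
of C++; this sketch (not necessarily the exact test run here) just times
1000 usleep(1) calls against gettimeofday():

    #include <cstdio>
    #include <sys/time.h>
    #include <unistd.h>

    int main()
    {
        struct timeval start, end;
        gettimeofday(&start, 0);
        for (int i = 0; i < 1000; ++i)
            usleep(1);                  // ask for 1 microsecond each time
        gettimeofday(&end, 0);

        double elapsed = (end.tv_sec - start.tv_sec)
                       + (end.tv_usec - start.tv_usec) / 1e6;
        printf("1000 usleep(1) calls took %.2f seconds of real time\n", elapsed);
        return 0;
    }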

However I suspect I've missed the current purpose of timing in NullGui here.

I was assuming it was an attempt to get the timing code correct so as
to be able to use the same algorithm in the functional GUIs (at
present each one does its timing in its own different way).
It seems instead that it's being used for brute-force execution
profiling of everything except rendering, by running the CPU at full
speed and seeing how many real-time FPS (or whatever) you get while
little changes are made here and there.

A more accurate way is to perform a fixed-size task and use the "time"
command to see how much CPU it uses, as measured by the kernel.
You could then use an efficient and accurate timekeeping design and
your results would be valid for the non-null GUIs too.
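
By way of illustration, the same numbers the shell's "time" command prints
can also be read from inside the program with getrusage(), which reports
the user and system CPU time accounted by the kernel. do_fixed_size_task()
here is a hypothetical placeholder for, say, advancing a test movie a fixed
number of frames with rendering turned off:

    #include <cstdio>
    #include <sys/resource.h>

    static void do_fixed_size_task()
    {
        // placeholder workload: e.g. advance a test movie 1000 frames
        volatile unsigned long n = 0;
        for (unsigned long i = 0; i < 100000000UL; ++i) n += i;
    }

    int main()
    {
        do_fixed_size_task();

        struct rusage usage;
        getrusage(RUSAGE_SELF, &usage);
        printf("user CPU %ld.%06lds, system CPU %ld.%06lds\n",
               (long) usage.ru_utime.tv_sec, (long) usage.ru_utime.tv_usec,
               (long) usage.ru_stime.tv_sec, (long) usage.ru_stime.tv_usec);
        return 0;
    }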

Better yet would be to use frame dropping: run the SWF/AS code for every
frame but only render the graphics if the previous rendering has
completed - the same technique that mplayer uses to keep audio/video
in sync on slow CPUs. That would pretty much guarantee that movies run
in real time regardless of the CPU power or rendering speed you have
available.
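
A rough sketch of that idea (my own placeholders, not mplayer's or Gnash's
actual code): the movie logic is advanced on the nominal frame schedule
every time, and the rendering is simply skipped whenever we have already
fallen behind that schedule:

    #include <sys/time.h>
    #include <unistd.h>

    static void advance_movie() { /* run ActionScript, update the display list */ }
    static void render_frame()  { /* draw the current frame */ }

    static long long now_us()                   // wall-clock time in microseconds
    {
        struct timeval tv;
        gettimeofday(&tv, 0);
        return (long long) tv.tv_sec * 1000000LL + tv.tv_usec;
    }

    void play(long long frame_period_us, int total_frames)
    {
        long long next_due = now_us();
        for (int frame = 0; frame < total_frames; ++frame) {
            advance_movie();                    // never skipped: keeps the movie in sync
            next_due += frame_period_us;

            if (now_us() <= next_due) {
                render_frame();                 // we have time to draw this frame
                long long remaining = next_due - now_us();
                if (remaining > 0)
                    usleep(remaining);          // wait until the next frame is due
            }
            // else: we're behind schedule, so drop this frame's rendering
        }
    }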

Still. Whatever.

   M



