Re: gpsd autobaud and cpu usage

From: sean d'epagnier
Subject: Re: gpsd autobaud and cpu usage
Date: Fri, 25 Sep 2020 22:27:12 -0400

yo gary!

> I use 19200 a lot.  28800 is not in the hunt loop.  The autobaud starts
> at the last used speed, so that depends on the state of the port before
> gpsd starts.

ah, it should probably try the current speed first, then start at the
higher speeds and work its way down, since those tests would finish
faster anyway.

>> So it seems the autobaud does work but is slow.
> Yup.  Remember autobaud not only needs to check different speeds, but also
> different framing.  That is a lot of combinations to check.
>> I'm pretty sure with
>> two passes it could find the right baud in 20 seconds or less for the
>> most common cases, so maybe I'll attempt a patch.
> 8 speeds, times 2 seconds per test, times 3 parities (E, N, O), times
> 3 stop-bit settings (0, 1, 2).  So that is 144 seconds.  Plus flush()
> time, settling time, and time to see the start and end of a complete
> message.

Hmm.   I've never heard of any GPS that uses something besides 8N1,
so perhaps it could try 1 stop bit and no parity across all speeds first.
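
A reduced hunt loop along those lines might look like this (a sketch
only: hunt_speeds and set_8n1 are illustrative names, not gpsd's actual
API), trying raw 8N1 at each speed, highest first:

```c
#include <stdbool.h>
#include <termios.h>
#include <unistd.h>

/* Speeds to hunt, highest first so the fast cases finish sooner. */
static const speed_t hunt_speeds[] = {
    B115200, B57600, B38400, B19200, B9600, B4800
};

/* Configure fd for raw 8N1 at the given speed; returns false on error. */
static bool set_8n1(int fd, speed_t speed)
{
    struct termios tty;

    if (tcgetattr(fd, &tty) != 0)
        return false;
    tty.c_iflag = 0;                        /* raw input */
    tty.c_oflag = 0;                        /* raw output */
    tty.c_lflag = 0;                        /* no line editing, no echo */
    tty.c_cflag &= ~(PARENB | CSTOPB | CSIZE);
    tty.c_cflag |= CS8 | CREAD | CLOCAL;    /* 8 data bits, no parity, 1 stop */
    cfsetispeed(&tty, speed);
    cfsetospeed(&tty, speed);
    if (tcsetattr(fd, TCSANOW, &tty) != 0)
        return false;
    tcflush(fd, TCIOFLUSH);     /* drop bytes read at the old speed */
    return true;
}
```

The caller would walk hunt_speeds (current speed first), call set_8n1,
then wait for a decodable message before moving on.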

> As noted in the code, USB adapters take a long time to change their
> speed and start returning good data.
> With DOS this was easy, but POSIX and Linux do not report framing
> errors or buffer overruns.  Thus the need to get an entire message
> decoded to know you have the right speed and framing.

I didn't realize this either, but it makes sense.
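
For NMEA devices, "decoding an entire message" mostly comes down to
finding one sentence whose checksum verifies. A minimal sketch
(nmea_valid is a hypothetical helper, not gpsd's actual checker):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Accepts "$GPGGA,...*hh" style lines; returns true only if the XOR
 * checksum of the bytes between '$' and '*' matches the trailing hex. */
static bool nmea_valid(const char *line)
{
    if (line[0] != '$')
        return false;
    const char *star = strrchr(line, '*');
    if (star == NULL || strlen(star) < 3)
        return false;
    unsigned char sum = 0;
    for (const char *p = line + 1; p < star; p++)
        sum ^= (unsigned char)*p;
    unsigned int want;
    if (sscanf(star + 1, "%2x", &want) != 1)
        return false;
    return sum == (unsigned char)want;
}
```

A wrong speed or framing turns the stream into garbage that essentially
never passes this check, which is why one verified sentence is enough
to confirm the settings.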

>> I would like to switch from select to poll also but not sure I'm ready
>> to deal with all the changes needed so I'm keeping my nanosleep hack
>> in place to greatly reduce cpu.
> As I previously quoted from the man page pselect() is select() is
> poll().  The major difference is the order of arguments.  They all wait
> the same way.  There is no point changing between them.

This is not strictly true.   When select returns, you have to iterate
over the fd_set up to maxfds, testing each descriptor's bit to see
whether its flag is set.  That is a large number of tests.  The gpsd
code does this in at least 3 different places: one loop for client
sockets, one for devices, and another for control sockets.   This is
quite a lot of overhead for what are usually single-byte reads, and it
is the reason the cpu usage is 3% (or 5%).

Poll, on the other hand, only scans the array of descriptors you
actually registered, which eliminates the sweep up to maxfds and
reduces the per-wakeup overhead.

So while the time spent waiting in select vs poll is the same, and the
efficiency there is the same, what gpsd has to do to deal with the
result is very different.

from gpsd source:

        /* always be open to new client connections */
        for (i = 0; i < AFCOUNT; i++) {
            if (msocks[i] >= 0 && FD_ISSET(msocks[i], &rfds)) {

        for (cfd = 0; cfd < (int)FD_SETSIZE; cfd++)
            if (FD_ISSET(cfd, &control_fds)) {

On my system FD_SETSIZE is defined to be 1024, so this ends up being a
lot of iterations for what is almost always a 0- or 1-byte read from a
single descriptor.
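
For contrast, a poll-based wait only touches the descriptors that were
registered. A sketch (wait_for_input and the commented-out dispatch are
hypothetical, not gpsd code):

```c
#include <poll.h>

/* Waits up to timeout_ms for input on the registered descriptors.
 * Returns poll()'s count of ready fds; ready ones have POLLIN set in
 * revents, so the scan is bounded by nfds, not FD_SETSIZE. */
static int wait_for_input(struct pollfd *fds, int nfds, int timeout_ms)
{
    int ready = poll(fds, nfds, timeout_ms);
    if (ready <= 0)
        return ready;               /* timeout (0) or error (-1) */
    for (int i = 0; i < nfds; i++) {
        if (fds[i].revents & POLLIN) {
            /* handle_input(fds[i].fd);   -- dispatch goes here */
        }
    }
    return ready;
}
```

Here nfds is the handful of sockets and devices gpsd actually has open,
so the loop runs a few times per wakeup instead of 1024.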

