
bug#10580: 24.0.92; gdb initialization takes more than one minute at 100


From: Jean-Philippe Gravel
Subject: bug#10580: 24.0.92; gdb initialization takes more than one minute at 100
Date: Mon, 17 Dec 2012 23:45:49 -0500

Here is my patch.

As stated previously, I only rewrote gud-gdbmi-marker-filter.  This
function now parses the GDB/MI records in order of arrival.  Only the
signature of each record is read by the new parser.  The actual
content of the record (i.e., the result strings) is still parsed by
the original code (which I will refer to as the record handlers
below).

The new parser is based on the GDB/MI output BNF grammar available at:
ftp://ftp.gnu.org/pub/old-gnu/Manuals/gdb/html_node/gdb_214.html#SEC221
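
For reference, these are (roughly, from the manual) the productions
that matter here; the one-character prefixes are the record
"signatures" the new parser looks at:

  output                ==> ( out-of-band-record )* [ result-record ] "(gdb)" nl
  result-record         ==> [ token ] "^" result-class ( "," result )* nl
  exec-async-output     ==> [ token ] "*" async-output nl
  status-async-output   ==> [ token ] "+" async-output nl
  notify-async-output   ==> [ token ] "=" async-output nl
  console-stream-output ==> "~" c-string nl
  target-stream-output  ==> "@" c-string nl
  log-stream-output     ==> "&" c-string nl
  result-class          ==> "done" | "running" | "connected" | "error" | "exit"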

Records that are too large to be received in one data chunk can be
parsed progressively by the record handlers, if they support it.  The
global configuration alist “gdbmi-bnf-result-state-configs” defines
the mapping between record types and record handlers.  This structure
flags all the handlers as either progressive or atomic.
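
To give an idea, the shape of that alist is something like the
following (an illustration only, not the structure from the patch;
handler names other than gdb-done-or-error are placeholders):

;; Illustration only: the real gdbmi-bnf-result-state-configs almost
;; certainly has a different shape and more entries.  The point is
;; just the mapping record-type -> handler + parsing mode.
(defconst gdbmi-bnf-result-state-configs
  '((result-record . (:handler gdb-done-or-error :parsing progressive))
    (async-record  . (:handler my-async-handler  :parsing atomic))
    (stream-record . (:handler my-stream-handler :parsing atomic)))
  "Map each record type to its record handler and parsing mode.")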

Progressive handlers are invoked as soon as partial data chunks are
received.  Atomic handlers, on the other hand, are not invoked until
the whole record has been received.  This design allowed me to attack
the optimization problem incrementally: since the ^done / ^error
messages were the biggest bottleneck (the reply to
-file-list-exec-source-files), I started by converting only those to
progressive parsing.  If we find that other messages cause
performance issues, we can always convert them to progressive parsing
as well.
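
In code, the distinction between the two modes comes down to
something like this (again only a sketch, using the alist sketched
above; every "my-" name is made up):

(defvar my-gdbmi-atomic-acc ""
  "Chunks of an in-flight record whose handler is atomic.")

(defun my-gdbmi-dispatch (record-type chunk complete)
  "Feed CHUNK of a RECORD-TYPE record to the configured handler.
A progressive handler sees every chunk as it arrives; an atomic
handler only sees the whole record, once COMPLETE is non-nil."
  (let* ((config  (cdr (assq record-type gdbmi-bnf-result-state-configs)))
         (handler (plist-get config :handler))
         (mode    (plist-get config :parsing)))
    (if (eq mode 'progressive)
        (funcall handler chunk complete)
      ;; Atomic: hold on to the chunks until the record is complete.
      (setq my-gdbmi-atomic-acc (concat my-gdbmi-atomic-acc chunk))
      (when complete
        (funcall handler my-gdbmi-atomic-acc t)
        (setq my-gdbmi-atomic-acc "")))))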

That being said, while the handler for ^done and ^error
(gdb-done-or-error) does receive the data progressively, it doesn't
parse it on the fly.  Instead, it accumulates the data chunks in a
temporary buffer and parses their content only once the record is
complete.  This is sub-optimal, but my tests showed that optimizing
this part would have only a minimal effect compared to fixing
gud-gdbmi-marker-filter.  I decided to keep this as a separate
optimization task, to be done later.
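
In other words, the current accumulate-then-parse behavior boils down
to something like this (a simplified illustration, not the patch's
code; the buffer name and the COMPLETE flag are just for the sketch):

(defvar my-gdbmi-pending-record-buffer " *gdbmi pending record*"
  "Temporary buffer holding the chunks of an in-flight ^done/^error record.")

(defun my-gdb-done-or-error (chunk complete)
  "Stash CHUNK; hand back the full ^done/^error record once COMPLETE."
  (with-current-buffer (get-buffer-create my-gdbmi-pending-record-buffer)
    (goto-char (point-max))
    (insert chunk)
    (when complete
      (prog1 (buffer-string)
        ;; Only at this point does the original result-string parsing
        ;; run on the complete record; here we just return it.
        (erase-buffer)))))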

For performance reasons, I tried to keep processing and data copying
to a minimum.  I therefore work in place, directly in the
gud-marker-acc string, instead of copying the string to a temporary
buffer.  The parser walks through the string using an offset stored
in gdbmi-bnf-offset.  By looking at the character at this offset, I
can quickly detect the type of record we received, BEFORE trying to
parse it with string-match.
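
The type detection itself is just a one-character peek, along these
lines (simplified; gud-marker-acc comes from gud.el, gdbmi-bnf-offset
from the patch, and the function name here is made up):

(require 'gud)                  ; for `gud-marker-acc'
(defvar gdbmi-bnf-offset 0)     ; defined by the patch; declared here
                                ; only to keep the sketch self-contained

(defun my-gdbmi-bnf-peek-record-type ()
  "Classify the record starting at `gdbmi-bnf-offset' in `gud-marker-acc'."
  (when (< gdbmi-bnf-offset (length gud-marker-acc))
    (pcase (aref gud-marker-acc gdbmi-bnf-offset)
      (?^            'result-record)   ; ^done, ^error, ^running, ...
      ((or ?* ?+ ?=) 'async-record)    ; exec / status / notify async output
      ((or ?~ ?@ ?&) 'stream-record)   ; console / target / log stream output
      (_             nil))))           ; token digits, "(gdb)" prompt, etc.

Only after this cheap test does the relevant string-match run.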

Note: I am a little confused about when the parser states should be
(re)initialized.  I would have expected the states to be initialized
before the GDB process is started (before gud-common-init), because
once the process starts, the marker filter can be invoked at any
time.  Instead, I find that the gdb-mi variables are all initialized
in gdb-init-1, which runs after the process is started.  I added a
new function, gdbmi-bnf-init, that is invoked from within gdb-init-1,
but that doesn't seem right.  Does anyone have an opinion on this?  I
certainly do think there is a problem here, because if the GDB/MI
initialization is interrupted unexpectedly (for instance because of
an error), it seems to restart in a very bad state the next time (I
think that's true even before my fix)…
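
For reference, gdbmi-bnf-init is just the parser-state reset
discussed above, roughly:

(defun gdbmi-bnf-init ()
  "(Re)initialize the state of the GDB/MI BNF parser."
  ;; Simplified sketch: the actual patch may reset more state than this.
  (setq gdbmi-bnf-offset 0))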

Jean-Philippe

Attachment: gdb-mi-optimization-1.patch
Description: Binary data

