#!/usr/bin/env lua
--- Untangle -- Script to enable refactoring dfa.c (>4000 lines)
--[[

Copyright (C) 2013-2014 Grouse Software.

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 3, or (at your option)
any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street - Fifth Floor, Boston, MA
02110-1301, USA

--]]

-- Initial design and implementation by behoffski of Grouse Software,
-- 2013-2014.

--[[

dfa.c is a particularly big and complex piece of software, and serves
a massive set of use cases and machine environments.  However, to a
limited extent, it has become prisoner of its own complexity: While
improvements in one area may be feasible, the intervening layers may
re-code the pattern in a fashion that makes the improvement hard to
implement.

So, this script tries to untangle dfa.c a little: To break the
lexical analysis (token definition, char-class and multibyte-class
storage management, pattern interpretation options) into separate
module(s) (fsalex.[ch], fsatokens.h, fsautils.[ch], ... ? not sure).

Initially, it is likely that the "untangled" version will have the
same implementation as the current version, so that the integrity of
the split sources can be checked.

Down the track, my hope is to change some information that is
currently communicated by #ifdefs and/or separate variables into a
larger, more explicit set of lexical tokens, so that the parser, and
the following DFA compilation and analysis, are simplified by the
improved token set, and the extra information captured in the tokens
enables more sophisticated analysis of the pattern, leading to better
search engine choices.

At the time of writing (17 Feb 2014), my current guess is that tokens
will be guaranteed to be at least 32 bits wide, the lowest bits (??
probably 8 bits) will always contain the token type (opcode), and the
higher bits can optionally contain information relating to the token
(?? 24-bit index pointer for character sets/multibyte charsets;
24-bit value for a multibyte character; an 8-bit value for an ASCII
character with an extra bit (C locale)).  Existing (sometimes
implied) opcodes will be expanded into families to explicitly name
parameter interpretation, e.g. CHAR_EXACT, CHAR_CASE_INSENSITIVE,
CHAR_CASE_DONT_CARE, MBCHAR_EXACT, MBCHAR_CASE_INSENSITIVE,
MBCHAR_CASE_DONT_CARE, etc.
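As a concreteness check, here is a quick sketch of how such a token
might be packed and unpacked.  This is an illustration only: the
names and the 8-bit opcode split are merely the current guess noted
above, and the eventual implementation will be in C, not the Lua
5.1-compatible arithmetic (no bitwise operators) shown here:

    local OPCODE_BITS = 8

    local function token_pack (opcode, param)
      return opcode + param * 2^OPCODE_BITS
    end

    local function token_unpack (token)
      local param = math.floor (token / 2^OPCODE_BITS)
      local opcode = token % 2^OPCODE_BITS
      return opcode, param
    end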
I'm not sure if CLASS and CLASS_INVERT opcodes are warranted, but
it's possible: The end-of-line character processing is perhaps best
deferred to the parse stage in dfa.c.

(?? Memo to self: Perhaps later on, split out parser to
fsaparse.[ch].)

Incidentally, I use "fsa" rather than "dfa" in the names above, as
the deterministic/nondeterministic implementation of the search
engine isn't directly relevant during lexing and parsing.  There are
indirect influences, such as choosing character-by-character
representation of simple strings in the input pattern, and then
working later to extract these strings (the "musts") since skip-based
string searches are very efficient.

The chosen representation allows for a simple, uniform way of naming
states that are alive during the search, and this fits well with the
DFA's character-by-character search strategy.

The script is written in Lua (http://www.lua.org), and was developed
using Lua 5.1.5.  I've also run it successfully with Lua 5.2.3.  The
reasons for performing this refactoring as a Lua script are:

  (a) The refactoring will take a *long* time to get to a coherent
      level, and therefore keeping track of code changes in the
      original is critical.  The script lets me split up the code
      into snippets (segments) in an automated fashion, and verify
      that the original can be fully recreated from the sliced-up
      version.  Each snippet gets a unique name, and this is used
      when specifying modified versions; thus, the output has some
      protection from changes to the input;

  (b) Lua is a small, powerful scripting language influenced by
      Scheme, CLU, Modula-2 and others; its speed is impressive;
      writing readable code is fairly easy, and its reputation and
      sphere of influence is growing;

  (c) Lua has a compact, but very powerful and flexible, string
      manipulation library, which will be useful when wanting to
      perform tricky code edits in an automated fashion; and

  (d) Lua's licence (from 5.0 onwards; I'm using 5.1) is the MIT
      license, which is Open Source-compatible, so, while it isn't
      Free Software, it is compatible for a once-off refactoring
      project.  (This script is licenced as GPL3, but that's an
      independent choice.)

--]]

local io = require("io")
local os = require("os")
local string = require("string")
local table = require("table")

-- Avoid luarocks/luarocks.loader as I'm not sophisticated about
-- handling dual-lua-5.1/5.2 modules.  I use "strictness" to turn
-- unintended globals into errors, as these are almost always typos.
-- The strictness script (version 0.1.0, obtained from the luarocks
-- main repository) is compatible with both Lua 5.1 and 5.2.
-- require("luarocks.loader")
require("strictness")

--------------------------------------------------------------------------

-- Pander to "make syntax-check"'s expectation that error messages be
-- marked for translation by having _(...) wrapping.  The C sources
-- are already marked correctly; the only problem is that syntax-check
-- also demands that this script conform.  I can't be bothered trying
-- to fine-tune the sources, so just dummy up a lua function to mimic
-- in a trivial fashion the layout that syntax-check demands.
local function _(...)
  return ...
end

--------------------------------------------------------------------------

-- We refer to line numbers as offsets from a section boundary, so that
-- if insertions/deletions occur, we mostly only have to redefine the
-- local line numbers plus the following section numbers, not every
-- line reference throughout the rest of the file.
local SectionInfo
do
  local InitialSource = {Filename = "-none-", PrevLine = -1, }
  SectionInfo = {Title = "(untitled)",
                 Source = InitialSource,
                 LineNr = 0}
end

local function Section(Title, Source, LineNr)
  if type(Source) ~= "table" then
    error(_(":Source must be a file-as-lines table object"))
  end

  -- Append a new section entry to the list of sections
  SectionInfo[#SectionInfo + 1] = {Title = Title,
                                   Source = Source,
                                   LineNr = LineNr}

  -- Perhaps we should check that sections do not overlap and/or
  -- skip lines, but most of our needs are met by the segment
  -- tagging facility anyway, so it's not a priority at the
  -- moment.
end
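--[[ Illustration only (hypothetical section title and line numbers,
     using the Segs:Tag function defined below): a section based at
     absolute line 100 lets nearby references be written as small
     offsets:

         Section("example-section", dfac, 100)
         Segs:Tag("example-one-liner", 0)     -- absolute line 100
         Segs:Tag("example-paragraph", 2, 5)  -- absolute lines 102..105

     If an upstream edit shifts this code, only the Section() base
     line number needs updating, not every line reference.  --]]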
--------------------------------------------------------------------------

-- NewFileTable reads a file, line-by-line, into a table, and returns
-- the table.  Array (sequence 1..n) references are file text entries;
-- Name/Value pairs hold other values (e.g. keys "Filename", "PrevLine")
local function NewFileTable(Filename)
  local f = assert(io.open(Filename))
  local Table = {Filename = Filename, PrevLine = 1}
  for l in f:lines() do
    Table[#Table + 1] = l
  end
  assert(f:close())
  return Table
end

--------------------------------------------------------------------------

-- Read dfa.c, dfa.h and Makefile.am, line-by-line, into tables
local dfac = NewFileTable("dfa.c")
local dfah = NewFileTable("dfa.h")
local makefileam = NewFileTable("Makefile.am")

--------------------------------------------------------------------------

local function printf(Format, ...)
  io.write(string.format(Format, ...))
end

--------------------------------------------------------------------------

-- Function to indent text with embedded newlines
local function Indent(Text, Level)
  local Spacing = string.rep(" ", Level)
  return Spacing .. Text:gsub("\n", "\n" .. Spacing)
end

--------------------------------------------------------------------------

-- Helper function to simplify extracting lines (paragraph(s)) as a
-- newline-terminated string.
local function Lines(Table, Start, End)
  -- End = End or Start
  return table.concat(Table, "\n", Start, End) .. "\n"
end

--------------------------------------------------------------------------

local Segs = {SegList = {}, SegSeq = {}}

--------------------------------------------------------------------------

-- Function to create a tagged entry with a copy of lines from a
-- source file
function Segs:Tag(Name, Start, End)
  if self.SegList[Name] ~= nil then
    error(_("duplicate segment tag: " .. Name))
  end
  End = End or Start

  -- Convert section-offset line numbers into absolute values
  local CurrentSection = SectionInfo[#SectionInfo]
  local Source = CurrentSection.Source
  Start = Start + CurrentSection.LineNr
  End = End + CurrentSection.LineNr

  local Entry = {Name = Name,
                 Source = Source,
                 Start = Start,
                 End = End,
                 RawText = Lines(Source, Start, End),
                 Section = CurrentSection,
                }

  -- Assume that tags are issued in ascending-line-number sequence,
  -- and so keep track of intervening text between tagged sections.
  -- (These should *only* be blank lines; anything else is an error.)
  if Source.PrevLine < Start then
    Entry.PrevText = Lines(Source, Source.PrevLine, Start - 1)
  end
  Source.PrevLine = End + 1

  -- Record new entry, both by name and by sequence
  self.SegList[Name] = Entry
  self.SegSeq[#self.SegSeq + 1] = Entry

  --[[ Warning turned off for the moment: It's not top of the
       priority list.

  -- Report horizontal tabs in the source
  if Entry.RawText:match("\t") then
    print("Warning: Segment " .. Entry.Name .. " contains tabs.")
  end
  --]]
end

--------------------------------------------------------------------------

--[[
-- DEBUG: Omitted for now, but it's very handy to have when
-- merging git master changes with local branch changes.

-- Print out all the segments extracted (or manufactured) so far
for _, Seg in ipairs(Segs.SegSeq) do
  printf("%s:\n%s\n", Seg.Name, Indent(Seg.RawText, 8))
end
--]]

-- [[ ]] -- Emacs pretty-printing gets confused if this line isn't present.
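--[[ Illustration only (hypothetical tags): Tag assumes ascending line
     order, and records any intervening lines in PrevText; these are
     expected to be blank:

         Section("example-gap", dfac, 200)
         Segs:Tag("example-first", 0, 3)   -- absolute lines 200..203
         Segs:Tag("example-second", 5, 9)  -- line 204 kept as PrevText

     Any non-blank PrevText is reported as an error by the check
     below.  --]]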
--------------------------------------------------------------------------

-- Check that no important text (anything other than newlines) has
-- been left untagged (it is captured by Tag's PrevText facility).
for _, Seg in ipairs(Segs.SegSeq) do
  local PrevText = Seg.PrevText
  if PrevText and not PrevText:match("^\n*$") then
    -- We've found some untagged text
    printf("ERROR: %s:%d:%q has preceding omitted text: \n%s\n",
           Seg.Source.Filename, Seg.Start, Seg.Name,
           Indent(PrevText, 8))
  end
end

--------------------------------------------------------------------------

--- ReconstructFile -- Reconstruct file from tagged fragments
-- This function is a sanity check that we can faithfully re-assemble
-- the original source file from the broken-down pieces.  If not, any
-- further processing has a higher probability of having defects.
local function ReconstructFile(Source)
  -- Form a simple filename for the reconstruction
  local ReconsFile = "reconstructed-" .. Source.Filename
  local f = assert(io.open(ReconsFile, "w"))

  -- Search for each snippet from the specified source, and write
  -- it (plus any preceding text) to the file.
  for _, Seg in ipairs(Segs.SegSeq) do
    if Seg.Source == Source then
      if Seg.PrevText then
        assert(f:write(Seg.PrevText))
      end
      assert(f:write(Seg.RawText))
    end
  end

  assert(f:close())
end

--------------------------------------------------------------------------

--- WriteFile -- Replace file with segments stored internally
-- This is a placeholder function to use when only minor edits to a
-- file are made, and the segment list can still be used in order.
local function WriteFile(Source)
  -- Form a simple filename for the reconstruction
  local f = assert(io.open(Source.Filename, "w"))

  -- Search for each snippet from the specified source, and write
  -- it (plus any preceding text) to the file.
  for _, Seg in ipairs(Segs.SegSeq) do
    if Seg.Source == Source then
      if Seg.PrevText then
        assert(f:write(Seg.PrevText))
      end
      assert(f:write(Seg.RawText))
    end
  end

  assert(f:close())
end

--------------------------------------------------------------------------

--- RawText -- Return raw text associated with named segment
-- @param Tag -- Tag given to segment upon creation.
-- Note that this function does not report any whitespace that may
-- precede the segment.
local function RawText(Tag)
  local Entry = Segs.SegList[Tag]
  if not Entry then
    error(_(":RawText: Nonexistent tag requested: " .. Tag))
  end
  return Entry.RawText
end

--------------------------------------------------------------------------

--- EditedText -- Obtain raw text body of tag, with applied edits
-- @param TagName -- Tag name of code snippet
-- @param ... -- Pattern/Subst pairs, e.g. "charclass c", "charclass *c"
-- @return Modified function declaration.
-- This function simplifies the code needed to make global
-- search-and-replace changes on source files.  Some care and precision
-- is needed in the source selection to avoid making unintended changes.
local function EditedText(TagName, ...)
  local h = RawText(TagName)

  -- If there are optional parameters, treat them as pattern/subst
  -- pairs, and apply them to the header.
  local args = {...}
  if #args % 2 ~= 0 then
    error(
      ":EditedText: Pattern supplied without a matching subst item")
  end
  for i = 1, #args, 2 do
    h = h:gsub(args[i], args[i + 1])
  end
  return h
end
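--[[ Illustration only (hypothetical call; the real edit appears later,
     when Decls["lex"] is built): given a tagged declaration reading
     "static token\nlex (void)", pattern/subst pairs can retarget it
     for a standalone, reentrant lexer module:

         local NewDecl = EditedText("lex-decl",
                                    "lex %(void%)",
                                    "fsalex_lex (fsalex_ctxt_t *lexer)",
                                    "^static%s+token", "fsatoken_token_t")

     For literal text full of pattern-magic characters, the TextSubst
     helper below is the safer tool.  --]]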
--------------------------------------------------------------------------

--- TextSubst -- Like gsub(), but with no regexp pattern matching
-- This function is intended for rewriting blocks of code, where the
-- original has lots of pattern-magic characters such as "-", "(", "%"
-- etc.  We use string.find() as it has a "plain text" search option.
local function TextSubst(Original, Search, Replacement)
  local Pos = 1
  local Text = Original
  local Found = false
  while true do
    -- Find the start and ending indices of Search in the file
    local Start, End = Text:find(Search, Pos, true);
    -- print("Start, End, Pos, Search: ", Start, End, Pos, Search)
    if not Start then
      break
    end
    Found = true

    -- Splice the replacement in, in place of the search text
    Text = Text:sub(1, Start - 1) .. Replacement .. Text:sub(End + 1)
    Pos = Start + #Replacement
  end
  assert(Found, "TextSubst expected at least 1 match: " .. Search)
  return Text
end

--------------------------------------------------------------------------

--- WriteExternDecl -- Write function declaration to given file.
-- @param File File (stream) object to receive edited declaration.
-- @param Declaration Function declaration (with no preceding lifetime
-- specifier, and also without a trailing semicolon).
-- Declarations for globally-visible (interface) functions need to be
-- emitted at least twice: Once in the header file, and again in the
-- body of the implementing module.  This function edits the text of
-- the declaration to conform to the requirements of the header file,
-- namely, a preceding "extern" keyword, plus a trailing semicolon
-- before the final newline.  The edited text is then written to the
-- given file, along with the usual assert error checking.
local function WriteExternDecl(File, Declaration)
  local Comment, Decl
  Comment, Decl = Declaration:match("(/%*.-%*/\n+)(.*)")
  if not Decl then
    Comment = ""
    Decl = Declaration
  end

  -- Add "extern" before the text of the declaration
  Decl = Decl:gsub("^", "extern ")

  -- Insert a semicolon before the final newline of the declaration.
  Decl = Decl:gsub("\n%s*$", ";\n")

  -- Finally, write the result to the file, with error checking
  assert(File:write(Comment, Decl, "\n"))
end

--------------------------------------------------------------------------

-- Process dfa.h
Section("dfa.h header", dfah, 0)
Segs:Tag("Description.dfah", 1)
Segs:Tag("Copyright.dfah", 2)
Segs:Tag("LicenseWarranty.dfah", 4, 17)
Segs:Tag("Authors.dfah", 19, 19)

Section("dfa.h:includes", dfah, 21)
Segs:Tag("regex.h", 0)
Segs:Tag("stdbool.h", 1)
Segs:Tag("stddef.h", 2)

Section("struct dfamust defn", dfah, 25)
Segs:Tag("dfamust-struct description", 0, 1)
Segs:Tag("dfamust-struct declaration", 2, 9)

Section("dfa opaque struct", dfah, 36)
Segs:Tag("struct dfa", 0, 1)

Section("dfa.h:Entry Points", dfah, 39)
Segs:Tag("Entry Points header", 0)
Segs:Tag("dfaalloc description", 2, 4)
Segs:Tag("dfaalloc declaration", 5)
Segs:Tag("dfamusts description", 7)
Segs:Tag("dfamusts declaration", 8)
Segs:Tag("dfasyntax declaration", 10, 13)
Segs:Tag("dfacomp declaration", 15, 18)

Section("dfaexec entry point", dfah, 59)
Segs:Tag("dfaexec description", 0, 11)
Segs:Tag("dfaexec declaration", 12, 13)

Section("Remaining entry points", dfah, 74)
Segs:Tag("dfasuperset declaration", 0, 5)
Segs:Tag("dfaisfast declaration", 6, 7)
Segs:Tag("dfafree declaration", 9, 10)

Section("Specialist entry points", dfah, 86)
Segs:Tag("Entry points for people who know", 0)
Segs:Tag("dfainit declaration", 2, 3)
Segs:Tag("dfaparse declaration", 5, 6)
Segs:Tag("dfaanalyze declaration", 8, 10)
Segs:Tag("dfastate declaration", 12, 14)

Section("dfah Error handling", dfah, 102)
Segs:Tag("Error handling introduction", 0)
Segs:Tag("dfawarn declaration", 2, 6)
Segs:Tag("dfaerror declaration", 8, 11)

Section("expose-using-utf8", dfah, 115)
Segs:Tag("using-utf8-extern", 0)
--------------------------------------------------------------------------

-- Process dfa.c
Section("dfa.c header", dfac, 0)
Segs:Tag("Description.dfac", 1)
Segs:Tag("Copyright.dfac", 2, 3)
Segs:Tag("LicenseWarranty.dfac", 5, 18)
Segs:Tag("Authors.dfac", 20, 21)
Segs:Tag("ConfigEnv", 23)
Segs:Tag("OwnDefs.dfac", 25)
Segs:Tag("GlobalIncludes", 27, 33)

Section("various-macros-and-helper-fns", dfac, 35)
Segs:Tag("STREQ", 0)
Segs:Tag("ISASCIIDIGIT", 2, 10)
Segs:Tag("NaturalLangSupport", 12, 13)
Segs:Tag("dfa-wchar-wctype include", 15, 16)
Segs:Tag("xalloc", 18)  -- malloc+realloc+OOM check fns

-- *Initial part* of code to model character classes as bit vectors in ints
Section("Initial-charclass-setup-code", dfac, 55)
Segs:Tag("NoHPUXBitOps", 0, 6)
Segs:Tag("NOTCHAR", 8, 9)
Segs:Tag("charclass_word typedef", 11, 13)
-- xxRemoved Segs:Tag("INTBITS", 16, 19)
Segs:Tag("CHARCLASS_WORD_BITS", 15, 17)
Segs:Tag("CHARCLASS_WORD_MASK", 19, 21)
Segs:Tag("CHARCLASS_WORDS enum", 23, 27)
Segs:Tag("charclass_typedef", 29, 30)
Segs:Tag("to_uchar_typecheck", 32, 39)

Section("Contexts", dfac, 96)
-- Somewhat hairy context machinery: Initial char classification bits
Segs:Tag("Context_Bitmasks", 0, 15)
-- Context machinery: Constraints to match by (Criteria, PrevCh, CurrCh)
-- specifications.
Segs:Tag("SUCCEEDS_IN_CONTEXT", 17, 39)
-- Context machinery: Macros to define what a constraint depends upon
Segs:Tag("PREV_*_DEPENDENT", 41, 49)
-- Context machinery: Finally, the constraint magic numbers themselves:
--     Bit 8-11: Valid contexts when next is CTX_NEWLINE;
--     Bit 4-7:  Valid contexts when next is CTX_LETTER; and
--     Bit 0-3:  Valid contexts when next is CTX_NONE.
-- Note that "CTX_NONE" probably should be called "CTX_OTHERS", and
-- defined after CTX_NEWLINE and CTX_LETTER, but before CTX_ALL
Segs:Tag("Context_MAGIC_NUMS", 51, 61)

Section("Lexical Tokens", dfac, 159)
-- ?? does token need to be ptrdiff_t?  Should we separate comment from
-- code?
Segs:Tag("RegexpTokenType", 0, 4)
-- List of all predefined tokens.  Initially, we merely just grab all
-- the definitions in one slab.  However, the comment states that some
-- values are not returned by the lexical analyzer, so later on, we
-- may move these into a separate block to limit their visibility/scope.
Segs:Tag("PredefinedTokens", 6, 92)
-- Recognizer (token position + constraint) position type (atom), plus
-- the position_set structure to dynamically manage arrays of position atoms.
Segs:Tag("PositionAtom+DynSet", 95, 111)
-- ?? Section defn?
-- The leaf_set type seems out-of-place here: The comment refers to
-- position_set, but the element pointer does not (size_t), and the only use
-- of this type is in dfastate (near line 2500), after lex and parse.
Segs:Tag("leaf_set_typedef", 112, 118)

Section("DFA state", dfac, 279)
-- Define dfa_state type: While the lexer/parser are not DFA/NFA-specific,
-- this type features prominently in producing a dfa-usable description of
-- the search expression.  See especially field "states" of "struct dfa".
Segs:Tag("dfa_state_typedef", 0, 15)
Segs:Tag("state_num_typedef", 16, 19)

Section("classes, alloc, misc. bits", dfac, 300)
-- Struct to handle both char-based and wide-char-based character classes
-- "e.g., [a-c], [[:alpha:]], etc."
-- CHAR_BIT is at minimum 8, but can be more (e.g. 32 on some DSPs).
-- ?? CHAR_BIT versus CHARBITS?
Segs:Tag("mb_char_classes_struct", 0, 20)
-- Main dfa structure -- (Oct 2013 first impression:) Bit of a dog's
-- breakfast with everything thrown in.  Initially, merely capture the
-- whole structure in a single segment, then, as time and
-- understanding advances, break into more focussed smaller segs.
Segs:Tag("dfa-struct intro-header", 22, 24)
Segs:Tag("dfa-struct scanner", 25, 28)
Segs:Tag("dfa-struct parser", 30, 43)
Segs:Tag("dfa-struct parser-multibyte", 44, 64)
Segs:Tag("dfa-struct mbrtowc_cache", 66, 69)

Section("dfa struct: bracket/state/parse", dfac, 371)
Segs:Tag("dfa-struct bracket-expressions-array", 0, 3)
Segs:Tag("dfa-struct superset", 5, 6)
Segs:Tag("dfa-struct state-builder", 8, 11)
Segs:Tag("dfa-struct parse->nfa", 13, 27)
Segs:Tag("dfa-struct dfaexec", 29, 51)
Segs:Tag("dfa-struct musts list", 52, 54)
Segs:Tag("dfa-struct mb_follows", 55, 56)
Segs:Tag("dfa-struct mb_match_lens", 57, 59)
Segs:Tag("dfa-struct closing syntax", 60)

-- "Some macros for user access to dfa internals. "
Section("dfa helper macros and decls", dfac, 433)
Segs:Tag("dfa-access-macros-cmt", 0)
Segs:Tag("ACCEPTING", 2, 3)
Segs:Tag("ACCEPTS_IN_CONTEXT", 5, 7)
-- ?? Not sure why forward declarations of dfamust and regexp are here.
-- Regexp answer: Regexp is needed in atom (LPAREN regexp RPAREN).
Segs:Tag("dfamust-forward-def", 9)
Segs:Tag("regexp-forward-def", 10)

-- Code to minimise expensive calls to mbrtowc by FETCH_WC.  This is
-- done by observing how the function reacts to single-octet inputs,
-- and splitting the results into "return octet" and "probe further"
-- sets (with WEOF the sentinel for "probe further", and octet values
-- meaning "return this value").
Section("dfambcache and mbs_to_wchar", dfac, 445)
Segs:Tag("dfambcache", 0, 12)
Segs:Tag("mbs_to_wchar", 14, 51)

-- Debug utility prtok, with preprocessor directives separated out
Section("prtok debug", dfac, 498)
Segs:Tag("prtok-ifdef-DEBUG-start", 0)
Segs:Tag("prtok-fn-decl", 2, 3)
Segs:Tag("prtok-fn-body", 4, 75)
Segs:Tag("prtok-ifdef-DEBUG-end", 76)

-- Character class facilities: single-bit bit test/set/clear
Section("Character classes-intro and bitops", dfac, 576)
Segs:Tag("charclass-section-introduction", 0)
Segs:Tag("chclass-tstbit-decl", 2, 3)
Segs:Tag("chclass-tstbit-body", 4, 6)
Segs:Tag("chclass-setbit-decl", 8, 9)
Segs:Tag("chclass-setbit-body", 10, 12)
Segs:Tag("chclass-clrbit-decl", 14, 15)
Segs:Tag("chclass-clrbit-body", 16, 19)

-- ... whole-set copy, clear, invert, compare
Section("Character classes-set operations", dfac, 597)
Segs:Tag("chclass-copyset-decl", 0, 1)
Segs:Tag("chclass-copyset-body", 2, 4)
Segs:Tag("chclass-zeroset-decl", 6, 7)
Segs:Tag("chclass-zeroset-body", 8, 10)
Segs:Tag("chclass-notset-decl", 12, 13)
Segs:Tag("chclass-notset-body", 14, 19)
Segs:Tag("chclass-equal-decl", 21, 22)
Segs:Tag("chclass-equal-body", 23, 25)

-- maybe_realloc -- Replace REALLOC_IF_NECESSARY macro with function
Section("Maybe-realloc fn", dfac, 624)
Segs:Tag("maybe-realloc", 0, 15)
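--[[ Illustration only: segments tagged in this area are re-used later
     with mechanical edits applied; for example, the charclass.c
     generation below renames a symbol in a tagged enum with a
     plain-text (non-pattern) substitution:

         TextSubst(RawText("CHARCLASS_WORDS enum"),
                   "NOTCHAR", "CHARCLASS_NOTCHAR")
--]]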
Section("Charclass storage allocation, in-dfa", dfac, 641) Segs:Tag("charclass-index-plus-dfa", 0, 14) Segs:Tag("Relocated dfa forward decl", 16, 17) Segs:Tag("redefined charclass_index", 19, 24) -- Variables holding FSA options specified by dfasyntax() Section("FSA syntax options supplied by dfasyntax()", dfac, 667) Segs:Tag("misc: syntax_bits_set, syntax_bits, case_fold, eol_byte", 0, 7) -- Suddenly swith back into character context handling (we now have the -- definition of eolbyte to use for CTX_NEWLINE) Section("More character context handling stuff", dfac, 676) Segs:Tag("context: sbit, letters, newline", 0, 7) Segs:Tag("IS_WORD_CONSTITUENT", 9, 20) Segs:Tag("char_context + wchar_ctxt", 22, 40) -- User function that globally sets DFA options Section("dfasyntax", dfac, 718) Segs:Tag("dfasyntax", 0, 24) Section("wide-char-setbit fns", dfac, 744) Segs:Tag("setbit::wchar_t comment", 0, 4) Segs:Tag("MBS_SUPPORT::setbit_wc", 5, 14) Segs:Tag("setbit_case_fold_c", 16, 26) Section("UTF-8 encoding utils", dfac, 774) Segs:Tag("using_utf8", 0, 13) -- Single-byte + ASCII optimisation test fn added (Mar 2014) Section("Using-simple-locale", dfac, 789) Segs:Tag("using-simple-locale", 0, 38) -- Using the term "dross" here comes from the sources -- and it's a good -- name to wrap up the assortment of static variables defined here. Section("Lexical Analyzer Dross", dfac, 829) -- Start by having all vars in a single clump; probably break them apart -- later when we know more about what we're doing. Segs:Tag("lex-many-static-vars", 0, 19) Segs:Tag("FETCH_WC", 22, 48) Section("MIN (and previously MAX)", dfac, 879) Segs:Tag("MIN", 0, 2) -- Segs:Tag("MAX", 3, 5) -- Handle hairy Unicode non-linear case folding cases Section("Hairy Unicode case_folded_counterparts", dfac, 883) Segs:Tag("unicode-lonesome-lower-table", 0, 15) Segs:Tag("def-CASE_FOLDED_BUFSIZE", 17, 22) Segs:Tag("case_folded_counterparts-decl", 24, 28) Segs:Tag("case_folded_counterparts-body", 29, 45) -- "alpha"->isalpha(), "punct"->ispunct(), "digit"->isdigit() etc. mapping Section("find-Posix-named-predicate", dfac, 930) Segs:Tag("predicate-typedef", 0) Segs:Tag("name-pred-isbyte-atom", 2, 11) Segs:Tag("prednames-list", 13, 27) Segs:Tag("find-pred", 29, 38) -- NOTE: The comment preceding parse_bracket_exp is misleading: The -- function handles *both* multibyte-char (produces a struct mb_char_class) -- and single-byte-char classes (produces a charclass), not just multibyte. -- For starters, merely copy massive functions verbatim Section("charclass-parser-and-lexer", dfac, 970) Segs:Tag("parse_bracket_exp-decl", 0, 3) Segs:Tag("parse_bracket_exp-body", 4, 269) Segs:Tag("lex-decl", 271, 272) Segs:Tag("lex-body", 273, 589) Section("Recursive-Descent Parser", dfac, 1561) Segs:Tag("recursive-descent parser intro", 0) Segs:Tag("lookahead token", 2) Segs:Tag("deferred-prod-stack-depth", 3, 7) Segs:Tag("addtok_mb", 9, 48) Segs:Tag("addtok_wc fwd decl", 50) Segs:Tag("addtok", 52, 106) Segs:Tag("addtok_wc", 108, 138) -- Body is void if MBS_SUPPORT isn't true; this is a simple transformation, -- and so isn't broken out by our segment tagging at present. 
Section("add_utf8_anychar", dfac, 1701) Segs:Tag("add_utf8_anychar-decl", 0, 1) Segs:Tag("add_utf8_anychar-body-start", 2) Segs:Tag("add_utf8_anychar-classes-array", 3, 21) Segs:Tag("add_utf8_anychar-class-tailor", 23, 37) Segs:Tag("add_utf8_anychar-description", 39, 48) Segs:Tag("add_utf8_anychar-add-tokens", 49, 56) Segs:Tag("add_utf8_anychar-body-end", 57) Section("Parser", dfac, 1760) Segs:Tag("Grammar summary comment", 0, 33) Segs:Tag("atom", 35, 90) Segs:Tag("nsubtoks", 92, 111) Segs:Tag("copytoks", 113, 125) Section("Parser, Part 2", dfac, 1887) Segs:Tag("closure", 0, 40) Segs:Tag("branch", 42, 51) Segs:Tag("regexp", 53, 63) -- dfaparse: External user's main entry point for the parser -- Suddenly, the dfa struct is seen more in parameter lists (as "d"), -- although it's copied to the file-global-scope variable "dfa" here. Segs:Tag("dfaparse-decl", 65, 69) Segs:Tag("dfaparse-body", 70, 102) -- ?? Out of FSA territory, and into DFA territory... yet? Section("dfa: position_set operations", dfac, 1990) Segs:Tag("position_set intro", 0) -- Function names "copy", "insert", "merge" and "delete" are too generic -- for my liking. However, they give hints to the history of the code: -- These were likely key parts of the early DFA implementation, and so -- didn't need highly-qualified names -- later additions needed longer -- names in order to avoid namespace clashes. There's also the problem -- that early compilers only used the first few chars of a name (6?) for -- externally-visible identifiers, so very-terse names were needed to -- survive in this environment. Segs:Tag("copy", 2, 14) Segs:Tag("alloc_position_set", 16, 22) Segs:Tag("insert", 24, 54) Segs:Tag("merge", 56, 84) Segs:Tag("delete", 86, 98) Segs:Tag("state_index", 100, 160) -- find and/or create item Segs:Tag("epsclosure", 162, 226) -- epsilon closure -- Some more functions to manipulate contexts Section("Context-fns", dfac, 2218) Segs:Tag("charclass_context", 0, 21) Segs:Tag("state_separate_contexts", 23, 44) Section("dfaanalyze", dfac, 2265) Segs:Tag("dfaanalyse summary comment", 0, 51) Segs:Tag("dfaanalyse", 52, 258) Section("dfastate", dfac, 2526) Segs:Tag("dfastate summary comment", 0, 29) Segs:Tag("dfastate", 30, 284) Segs:Tag("realloc_trans_if_necessary", 286, 309) Segs:Tag("build_state", 311, 372) Section("Multibyte fns for dfaexec", dfac, 2900) Segs:Tag("Multibyte fns section comment", 0) -- Segs:Tag("SKIP_REMAINS_MB_IF_INITIAL_STATE", 2, 21) -- ?? New section: state transition support/execution? Segs:Tag("status_transit_state typedef", 2, 9) Segs:Tag("transit_state_singlebyte", 11, 47) Segs:Tag("match_anychar", 49, 76) Segs:Tag("match_mb_charset", 78, 177) Section("Multibyte fns for dfaexec - part 2", dfac, 3079) Segs:Tag("check_matching_with_multibyte_ops", 0, 29) Segs:Tag("transit_state_consume_1char", 31, 77) Segs:Tag("transit_state", 79, 157) Section("dfaexec/init/opt/comp/free", dfac, 3238) Segs:Tag("dfaexec", 0, 177) Segs:Tag("dfasuperset", 179, 183) Segs:Tag("ddaisfast", 185, 189) Segs:Tag("free_mbdata", 191, 219) Segs:Tag("dfainit", 221, 229) Segs:Tag("dfaoptimize", 231, 268) Section("superset-build, comp, free", dfac, 3508) Segs:Tag("dfassbuild", 0, 77) Segs:Tag("dfacomp", 79, 95) Segs:Tag("dfafree", 97, 147) -- dfaalloc (approx. line 4106) probably should be here... -- Knowing must-have strings is highly valuable, as we can use very fast -- search algorithms (e.g. Boyer-Moore) instead of the slow, grinding -- character-by-character work of the DFA search engine. 
-- The idea is that the fast search will hopefully eliminate nearly all
-- the lines in the file (buffer) (e.g. possibly 99%), so we can have our
-- cake and eat it: A combination of a very fast search program, together
-- with an expressive search engine that can handle sophisticated
-- constructs such as {n,m} repeat constructs, multibyte characters
-- (including collation classes) in multiple locales, and backreferences.
Section("musthave-strings: docs-support", dfac, 3657)
Segs:Tag("'musts' explanation", 0, 82)
Segs:Tag("icatalloc", 84, 96)
Segs:Tag("istrstr", 98, 109)
Segs:Tag("freelist", 111, 116)
Segs:Tag("enlist", 118, 149)
Segs:Tag("comsubs", 151, 176)
Segs:Tag("addlists", 178, 184)
Segs:Tag("inboth", 186, 205)

Section("musthave-strings: Higher fns", dfac, 3864)
Segs:Tag("must typedef", 0, 11)
Segs:Tag("allocmust", 13, 25)
Segs:Tag("resetmust", 27, 35)
Segs:Tag("freemust", 37, 46)
Segs:Tag("dfamust declaration", 48, 49)
Segs:Tag("dfamust definition", 50, 250)
-- dfaalloc should be near dfafree (approx. line 3550), as they are a pair
Segs:Tag("dfaalloc", 252, 256)
Segs:Tag("dfamusts", 258, 262)

Section("end-configure-vim-attributes", dfac, 4128)
Segs:Tag("vim: set shiftwidth=2", 0)

--------------------------------------------------------------------------

-- Process Makefile.am
Section("Automake file header", makefileam, 0)
Segs:Tag("automake-process-hint", 1)
Segs:Tag("Copyright.makefileam", 2)
Segs:Tag("Comment-block-spacer-1", 3)
Segs:Tag("LicenseWarranty.makefileam", 4, 15)

-- Define "automake-persistent" shadow versions of build-time macros
Section("am-persistent-macros", makefileam, 17)
Segs:Tag("am:LN", 0)
Segs:Tag("am:AM_CFLAGS", 2)
Segs:Tag("am:AM_LDFLAGS", 4, 5)

Section("am-programs-generated", makefileam, 24)
Segs:Tag("am:bin_PROGRAMS", 0)
Segs:Tag("am:bin_SCRIPTS", 1)

Section("sources-and-headers", makefileam, 26)
Segs:Tag("grep_SOURCES", 0, 3)
Segs:Tag("noinst_HEADERS", 4)

Section("additional link libs", makefileam, 32)
Segs:Tag("LIBINTL documentation", 0, 4)
Segs:Tag("LDADD defn", 5, 7)
Segs:Tag("grep_LDADD", 9)
Segs:Tag("am:localedir", 10)
Segs:Tag("am:AM_CPPFLAGS", 11)

-- Perhaps CPPFLAGS should be grouped with other Automake overrides (ln 17)?
Section("am:EXTRA_DIST", makefileam, 45)
Segs:Tag("am:dosbuf.c", 0)

Section("am:egrep fgrep", makefileam, 47)
Segs:Tag("egrep-fgrep scripts", 0, 14)

Section("am: clean files", makefileam, 63)
Segs:Tag("am:CLEANFILES", 0)

--------------------------------------------------------------------------

--[[
-- DEBUG: Omitted for now

-- Print out all the segments we've extracted (or manufactured) so far
for _, Seg in ipairs(Segs.SegSeq) do
  printf("%s:\n%s\n", Seg.Name, Indent(Seg.RawText, 8))
end
--]]

--------------------------------------------------------------------------

-- Check that no important text (anything other than newlines) has been
-- left untagged (it is captured by Tag's PrevText facility).
for _, Seg in ipairs(Segs.SegSeq) do
  local PrevText = Seg.PrevText
  if PrevText and not PrevText:match("^\n*$") then
    -- We've found some untagged text
    printf("ERROR: %s:%d:%q has preceding omitted text: \n%s\n",
           Seg.Source.Filename, Seg.Start, Seg.Name,
           Indent(PrevText, 8))
  end
end
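--[[ Illustration only: the reconstructed-* files emitted below can be
     verified against the pristine sources with standard tools, e.g.:

         os.execute("cmp dfa.c reconstructed-dfa.c")
         os.execute("cmp dfa.h reconstructed-dfa.h")
         os.execute("cmp Makefile.am reconstructed-Makefile.am")

     Silence from cmp confirms that the tagging covers every byte.  --]]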
--------------------------------------------------------------------------

-- Check integrity of our deconstruction of each file by reconstituting
-- it from the individual pieces.
ReconstructFile(dfac)
ReconstructFile(dfah)
ReconstructFile(makefileam)

--------------------------------------------------------------------------

-- Time for the rubber to hit the road: Create new files with existing content
-- re-ordered into (hopefully) more coherent groups/modules, and also modify
-- Makefile.am to know about the new sources.  At present, this script doesn't
-- re-run automake after editing Makefile.am; maybe later?

-- Edit function headers/declarations before working on individual files,
-- as we use these public versions both in the headers and in the
-- implementations.

local Decls = {}

-- Functions from charclass.[ch]

Decls["charclass-initialise"] = [[
void
charclass_initialise (size_t initial_pool_size)
]]

Decls["charclass-destroy-module"] = [[
void
charclass_destroy_module (void)
]]

Decls["tstbit"] = [[
bool _GL_ATTRIBUTE_PURE
charclass_tstbit (int b, charclass_t const *ccl)
]]

Decls["setbit"] = [[
void
charclass_setbit (int b, charclass_t *ccl)
]]

Decls["clrbit"] = [[
void
charclass_clrbit (int b, charclass_t *ccl)
]]

Decls["setbit_range"] = [[
void
charclass_setbit_range (int start, int end, charclass_t *ccl)
]]

Decls["clrbit_range"] = [[
void
charclass_clrbit_range (int start, int end, charclass_t *ccl)
]]

Decls["copyset"] = [[
void
charclass_copyset (charclass_t const *src, charclass_t *dst)
]]

Decls["zeroset"] = [[
void
charclass_zeroset (charclass_t *ccl)
]]

Decls["notset"] = [[
void
charclass_notset (charclass_t *ccl)
]]

Decls["equal"] = [[
int _GL_ATTRIBUTE_PURE
charclass_equal (charclass_t const *ccl1, charclass_t const *ccl2)
]]

Decls["unionset"] = [[
void
charclass_unionset (charclass_t const *src, charclass_t *dst)
]]

Decls["intersectset"] = [[
void
charclass_intersectset (charclass_t const *src, charclass_t *dst)
]]

-- Functions from fsatoken.[ch]

Decls["prtok"] = EditedText("prtok-fn-decl",
                            "^static%s+", "",
                            "prtok %(token t%)",
                            "fsatoken_prtok (fsatoken_token_t t)")

-- Functions from mbcsets.[ch]

Decls["mbcsets-initialise"] = [[
/* Prepare module for operation.  */
void
mbcsets_initialise (void)
]]

Decls["mbcsets-destroy-module"] = [[
/* Destroy all classes, plus any associated resources owned by the
   module.  */
void
mbcsets_destroy_module (void)
]]

Decls["mbcsets-new"] = [[
/* Generate a new instance of a multibyte-character set descriptor.  */
mbcsets_set_t *
mbcsets_new (void)
]]

Decls["mbcsets-add-wchar"] = [[
/* Individual wide characters.  */
void
mbcsets_add_wchar (mbcsets_set_t *mbc, wint_t wc)
]]

Decls["mbcsets-add-wchar-list"] = [[
/* Add a list of wide characters (note: not wide integers).  */
void
mbcsets_add_wchar_list (mbcsets_set_t *mbc, size_t len, wchar_t *wc_list)
]]

Decls["mbcsets-add-class"] = [[
/* Common character classes, e.g. alpha, digit, punct etc.  */
void
mbcsets_add_class (mbcsets_set_t *mbc, wctype_t wchar_class)
]]

Decls["mbcsets-set-match-sense"] = [[
/* By default, classes match the specified characters.  Regular
   expressions allow this sense to be inverted, usually by the
   convention of "^" being the first character of a bracketed class.
   By default, positive sense is selected; this function lets the
   user specify the sense, probably to specify inverted matching.  */
void
mbcsets_set_match_sense (mbcsets_set_t *mbc, bool invert)
]]
Decls["mbcsets-add-range"] = [[
/* Explicit character ranges.  */
void
mbcsets_add_range (mbcsets_set_t *mbc, wint_t beg, wint_t end)
]]

Decls["mbcsets-add-equivs"] = [[
void
mbcsets_add_equivs (mbcsets_set_t *mbc, wint_t wc)
]]

Decls["mbcsets-add-collation"] = [[
void
mbcsets_add_collation (mbcsets_set_t *mbc, wint_t wc)
]]

Decls["mbcsets-receive-incomplete-charclass"] = [[
/* Receive an "in-work" character class, which may or may not have
   members.  Mbcset takes ownership of this set, and, depending on
   the circumstances, either maintains it internally, or else copies
   its contents (if any) to its internals, and releases (abandons)
   the supplied set.  This function must not be applied to a set that
   has been completed.  */
void
mbcsets_receive_incomplete_charclass (mbcsets_set_t *mbc,
                                      charclass_t *ccl)
]]

Decls["mbcsets-get-chars"] = [[
/* Copy wide char list to caller's work area.  */
void
mbcsets_get_chars (mbcsets_set_t *mbc, wchar_t *char_list)
]]

Decls["mbcsets-completed"] = [[
/* Mark a set as completed; the implementation may also analyse and
   optimise the set at this point (e.g. use charclasses to represent
   unibyte characters; merge overlapping ranges; remove the
   individual listing of a character if it is covered by a range,
   etc.)  In addition, note that no further changes (e.g. receive
   another incomplete charclass) are allowed for this set, once
   "completed" is called.  */
void
mbcsets_completed (mbcsets_set_t *mbc)
]]

Decls["mbcsets-get-characteristics"] = [[
/* Retrieve high-level information about the class, which is useful
   (in fsaparse) for deciding on how to deal with it.  We are forced
   to provide significant query resources since we demand that the
   type internals remain opaque (even though the initial
   implementation may do a poor job of this effort).  */
void
mbcsets_get_characteristics (mbcsets_set_t *mbc,
                             bool *p_invert,
                             charclass_t **pp_charclass,
                             size_t *p_nchars,
                             size_t *p_nch_classes,
                             size_t *p_nranges,
                             size_t *p_nequivs,
                             size_t *p_ncoll_elems)
]]

-- Functions from fsalex.[ch]

Decls["lex-initialise"] = [[
/* Prepare module for operation.  */
void
fsalex_initialise (void)
]]

Decls["lex-destroy-module"] = [[
/* Destroy all lexer instances, plus any associated resources owned
   by the module.  */
void
fsalex_destroy_module (void)
]]

Decls["lex-new"] = [[
/* Generate a new instance of an FSA lexer.  */
fsalex_ctxt_t *
fsalex_new (void)
]]

-- Redefine "lex" as "fsalex_lex", so we can run it in parallel with the
-- original code in dfa.c.  Also add a (currently-unused) context pointer,
-- (void*, sigh), so that we can have a fully-reentrant lexer sometime.
Decls["lex"] = EditedText("lex-decl",
                          "lex %(void%)",
                          "fsalex_lex (fsalex_ctxt_t *lexer)",
                          "^static%s+token", "fsatoken_token_t")

-- As per dfa_syntax in dfa.c, create a function to receive directives on
-- how to interpret REs.
Decls["lex-syntax"] = [[
/* Receive syntax directives, and other pattern interpretation
   instructions such as case folding and end-of-line character.  In
   addition, this function configures various internal structures
   based on the locale in force.  */
void
fsalex_syntax (fsalex_ctxt_t *lexer,
               reg_syntax_t bits, int fold, unsigned char eol)
]]

Decls["lex-pattern"] = [[
/* Receive the pattern, and reset the lexical analyser state.  The
   interpretation of the chars (octets?) in the pattern (ASCII chars?
   variable-length UTF-8 sequences?  Simplified Chinese?  etc.)
   depends on the locale that was in force when fsalex_syntax () was
   called.  NULs may be present amongst the codes, which is why the
   length is given explicitly, rather than relying on strlen(3).  */
void
fsalex_pattern (fsalex_ctxt_t *lexer,
                char const *pattern, size_t const pattern_len)
]]

Decls["lex-exchange"] = [[
/* Define external function to do non-core data exchanges between the
   lexer and the parser.  This function must conform to
   proto_lexparse_exchange_fn_t.  This interface lets two instances
   communicate without requiring help from an outside party.  */
int
fsalex_exchange (fsalex_ctxt_t *lexer,
                 proto_lexparse_opcode_t opcode,
                 void *param)
]]

Decls["lex-exception-fns"] = [[
/* Receive functions to deal with exceptions detected by the lexer:
   Warnings and errors.  Internally, we add the _Noreturn attribute
   to the error callback, to help the compiler with code flow
   analysis.  */
void
fsalex_exception_fns (fsalex_ctxt_t *lexer,
                      fsalex_warn_callback_fn *warningfn,
                      fsalex_error_callback_fn *errorfn)
]]

-- Functions from fsaparse.[ch]

Decls["parse-initialise"] = [[
/* Prepare module for operation.  */
void
fsaparse_initialise (void)
]]

Decls["parse-destroy-module"] = [[
/* Destroy all parser instances, plus any associated resources owned
   by the module.  */
void
fsaparse_destroy_module (void)
]]

Decls["parse-new"] = [[
/* Generate a new instance of an FSA parser.  */
fsaparse_ctxt_t *
fsaparse_new (void)
]]

Decls["parse-lexer"] = [[
/* Receive a lexer function, plus lexer instance context pointer, for
   use by the parser.  Although not needed initially, this plug-in
   architecture may be useful in the future, and it breaks up some of
   the intricate connections that made the original dfa.c code so
   daunting.  */
void
fsaparse_lexer (fsaparse_ctxt_t *parser,
                void *lexer_context,
                proto_lexparse_lex_fn_t *lex_fn,
                proto_lexparse_exchange_fn_t *lex_exchange_fn)
]]

-- Rewrite header of fsaparse_parse completely here, as splitting the lexer
-- into a separate entity makes much of the header irrelevant.
Decls["parse"] = [[
/* Main entry point for the parser.  Parser is a pointer to a parser
   context struct created by fsaparse_new.  Before calling this
   function, the parser instance must be supplied with a lexer
   (fsaparse_lexer), and also with callback functions to receive
   warning and error reports (fsaparse_exception_fns).  */
void
fsaparse_parse (fsaparse_ctxt_t *parser)
]]

Decls["parse-get-token-list"] = [[
/* After parsing, report a list of tokens describing the pattern.
   Complex structures such as alternation, backreferences, and
   locale-induced complexity such as variable-length utf8 sequences
   are described here by appending operators that apply to the
   preceding item(s) (postfix notation).  */
void
fsaparse_get_token_list (fsaparse_ctxt_t *parser,
                         size_t *nr_tokens,
                         fsatoken_token_t **token_list)
]]

-- Functions from fsamusts.[ch]

Decls["must"] = [[
/* Receive an existing list (possibly empty) of must-have strings,
   together with a list of the tokens for the current FSA (postfix
   tree order), and if there are any more must-have strings in the
   token list, add them to the must-have list.  Returns the
   possibly-modified list to the caller.  Locale and syntax items are
   partially covered here by the case_fold and unibyte_locale flags,
   but this is incomplete, and should be addressed by Stage 2
   (improving the expressiveness of tokens).  */
fsamusts_list_element_t *
fsamusts_must (fsamusts_list_element_t *must_list,
               size_t nr_tokens, fsatoken_token_t *token_list,
               bool case_fold, bool unibyte_locale)
]]
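--[[ Illustration only, recapping how the Decls[] table is consumed
     below: each declaration is written twice -- once into a header
     via WriteExternDecl, which prepends "extern" and appends ";",
     and once verbatim ahead of the function body in the generated
     implementation file.  For example, Decls["setbit"], which reads

         void
         charclass_setbit (int b, charclass_t *ccl)

     is emitted into charclass.h as

         extern void
         charclass_setbit (int b, charclass_t *ccl);
--]]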
-- Functions from high-level-dfa.[ch]

Decls["hldfa-initialise"] = [[
/* Prepare module for operation.  */
void
hldfa_initialise (void)
]]

Decls["hldfa-destroy-module"] = [[
/* Destroy all high-level dfa instances, plus any associated
   resources owned by the module.  */
void
hldfa_destroy_module (void)
]]

----------------******** charclass.h ********----------------

print("Creating charclass.h...")
local f = assert(io.open("charclass.h", "w"))
assert(f:write([[
/* charclass -- Tools to create and manipulate sets of characters (octets)

]]))
assert(f:write(RawText("Copyright.dfac"), "\n"))
assert(f:write(RawText("LicenseWarranty.dfac"), "\n"))
assert(f:write(RawText("Authors.dfac")))
assert(f:write([[
/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* This module provides services to allocate, manipulate, consolidate
   and discard 256-bit vectors, used to describe 8-bit (octet) sets.

   Octet is used as the member name here, as "byte" or "char" can
   sometimes refer to different bit sizes (e.g. char -> 6 bits on
   some IBM/Cyber architectures; char -> 32 bits on some DSP
   architectures; in C, sizeof (char) == 1 by definition on all
   architectures).

   The connection between these "charclass" sets and set expression
   by RE tools can be non-trivial: Many Unicode characters cannot fit
   into 8 bits, and even where octet-based code pages are used,
   nontrivial cases can appear (e.g. Code page 857, MS-DOS Turkish,
   which has both a dotted and a dotless lowercase and uppercase "I").

   On the storage side, things are slightly tricky and perhaps even
   murky at times.  The client starts by allocating a charclass,
   working on it, and then either completing it (usually) or
   abandoning it.  The working class (pun intended) is represented by
   a pointer.  If not abandoned, this pointer is guaranteed to remain
   valid for the lifetime of the module.

   The module tries aggressively to eliminate duplicates; this is
   perhaps the main function of the completion step.  So, the pointer
   that represents the class after completion may not be the working
   pointer.

   In addition to the pointer method of referring to a class, the
   classes can be viewed as an array, with the first class receiving
   index 0, the second receiving index 1, and so on.  Functions are
   provided to map pointers to indexes, and vice versa.  The index
   representation is handy as it is very compact (typically much
   fewer than 24 bits), whereas pointers are architecture and
   OS-specific, and may be 64 bits or more.

   Index 0 is special; it will always represent the zero-class (no
   members set).  Users wanting to store a set of non-zeroclass
   classes (e.g. utf8) can use this property as a sentinel (a value
   of 0 for a static variable can mean "not initialised").

   Finally, there are some "gutter" bits, at least 3 on each end of
   the class, so that, to a limited extent (and especially for the
   common case of EOF == -1), bits can be set and cleared without
   causing problems, and the code does not need to include the
   overhead of checks for out-of-bound bit numbers.  These gutter
   bits are cleared when the class is completed, so EOF (for
   instance) should never be a member of a class.  */

]]))

-- Add preprocessor lines to make this header file idempotent.
assert(f:write([[
#ifndef CHARCLASS_H
#define CHARCLASS_H 1

/* Always import environment-specific configuration items first.  */
#include <config.h>

#include <limits.h>
#include <stdbool.h>
#include <stddef.h>

/* Define charclass as an opaque type.  */
typedef struct charclass_struct charclass_t;

/* First integer value that is greater than any unibyte char.  Each
   class can model [0 .. CHARCLASS_NOTCHAR - 1] members.
   The user may reference a few members above or below this range,
   thanks to the explicit gutters, but the margins provided by this
   facility are small.  */
enum { CHARCLASS_NOTCHAR = 1 << CHAR_BIT };

/* Indices to valid charclasses are always non-negative; zero is
   reserved as an index to the zeroclass, and all other class
   entities have positive indices.  We use ptrdiff_t rather than
   size_t here as -1 can be used as a sentinel in some places.  */
typedef ptrdiff_t charclass_index_t;

/* Index zero is special: It is the zero-class (no bits set) class.  */
enum { CHARCLASS_ZEROCLASS_INDEX = 0 };

/* Entire-module initialisation and destruction functions.  The
   client specifies starting size for the class pool.  Destroy_module
   releases all resources acquired by this module.  */

]]))
WriteExternDecl(f, Decls["charclass-initialise"])
WriteExternDecl(f, Decls["charclass-destroy-module"])

assert(f:write([[
/* Single-bit operations (test, set, clear).  */

]]))

-- Write declaration with "extern" preceding and ";" following text.
WriteExternDecl(f, Decls["tstbit"])
WriteExternDecl(f, Decls["setbit"])
WriteExternDecl(f, Decls["clrbit"])

assert(f:write([[
/* Range-of-bits set and clear operations.  These are easier to read,
   and also more efficient, than multiple single-bit calls.  */

]]))
WriteExternDecl(f, Decls["setbit_range"])
WriteExternDecl(f, Decls["clrbit_range"])

assert(f:write([[
/* Whole-of-set operations (copy, zero, invert, compare-equal).  */

]]))
WriteExternDecl(f, Decls["copyset"])
WriteExternDecl(f, Decls["zeroset"])
WriteExternDecl(f, Decls["notset"])
WriteExternDecl(f, Decls["equal"])

assert(f:write([[
/* Add "unionset" and "intersectset" functions since whole-of-class
   operations tend to be reasonably expressive and self-documenting.
   In both cases, the source modifies the destination; ORed in, in
   the case of unionset; ANDed in, in the case of intersectset.  */

]]))
WriteExternDecl(f, Decls["unionset"])
WriteExternDecl(f, Decls["intersectset"])

assert(f:write([[
/* Functions to allocate, complete and abandon charclasses.  Note
   that the module aggressively tries to reuse existing completed
   classes rather than create new ones.  The module returns a unique
   index that can be used to reference the module; this index
   supersedes the pointer used during the work phase, e.g.:

      work_class_pointer = charclass_alloc ();
      ...(Set/clear members as required to construct a class in
          work_class_pointer.)...
      class_index = charclass_completed (work_class_pointer);
      completed_class_pointer = charclass_get_pointer (class_index);
      ...(completed_class_pointer might not == work_class_pointer.)...

   The aggressive-reuse policy also means that completed classes must
   not undergo further modification.  Another piece of coding hygiene
   is that the pointer value used to construct the class should not
   be used once the class is either completed or abandoned.

   Allocating and then abandoning classes is useful where an
   operation requires temporary classes for a while, but these do not
   need to be maintained once the work is complete.

   2 May 2014: Note to self and others: I was using the term
   "finalise" instead of "completed" for the operation where a class
   has been constructed and is now ready to be used; I've decided to
   change the terminology after reading in various places how the
   term "finalise" is usually strongly connected with end-of-life
   operations on an object and/or class, not a ready-for-use
   operation.
   This note is a reminder to myself and a hint to others about this
   change, in case vestiges of the earlier naming scheme slip through
   my edits and/or appear in the code.  */

extern charclass_t *
charclass_alloc (void);

extern charclass_index_t
charclass_completed (charclass_t *ccl);

extern void
charclass_abandon (charclass_t *ccl);

/* Functions to map between pointer references and index references
   for a charclass.  As explained above, the index is convenient as
   it is typically an array reference, and is usually not much larger
   than the number of classes that have been allocated.  */

extern charclass_t * _GL_ATTRIBUTE_PURE
charclass_get_pointer (charclass_index_t const index);

extern charclass_index_t _GL_ATTRIBUTE_PURE
charclass_get_index (charclass_t const *ccl);

/* Return a static string describing a class (Note: not reentrant).  */
extern char *
charclass_describe (charclass_t const *ccl);

]]))

-- Finally, add trailer lines (idempotency, vim)
assert(f:write([[
#endif /* CHARCLASS_H */
/* vim:set shiftwidth=2: */
]]))

assert(f:close())

----------------******** charclass.c ********----------------

print("Creating charclass.c...")
local f = assert(io.open("charclass.c", "w"))
assert(f:write([[
/* charclass -- Tools to create and manipulate sets of C "char"s

   This module provides tools to create, modify, store and retrieve
   character classes, and provides tools tuned to the needs of RE
   lexical analysers.

   The class itself is an opaque type, referenced by a pointer while
   under construction, and later by a unique index when completed.
   The module tries aggressively to reuse existing completed classes,
   rather than create duplicates.  Functions are provided to map
   between indexes and pointers.  Because of the deduplication
   effort, the index reported for a class upon completion may map to
   a different pointer than the one supplied by new ().

   Classes may be shared between different lexer instances, although,
   at the time of writing (10 April 2014) it is not thread-safe.

   In many cases, there might only be one class under construction at
   any time, with the effort either completed or abandoned quickly.
   However, this module recognises that sometimes multiple classes
   might be worked on in parallel, and so explicitly marks each
   allocated class area as one of "unused", "work" or "completed".
   This marking is done by an array of state bytes dynamically
   allocated when the pool is created.

]]))
assert(f:write(RawText("Copyright.dfac"), "\n"))
assert(f:write(RawText("LicenseWarranty.dfac"), "\n"))
assert(f:write(RawText("Authors.dfac")))
assert(f:write([[
/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* Always import environment-specific configuration items first.  */
#include <config.h>

#include <assert.h>
#include "charclass.h"
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <stdio.h>   /* for EOF assert test. */
#include <string.h>
#include <wchar.h>   /* for WEOF assert test. */
#include "xalloc.h"

/* Lower bound for size of first pool in the list.  */
/* ?? Set to 2 for pool debug; Use 10 in production?  */
#define POOL_MINIMUM_INITIAL_SIZE 10

#ifndef MAX
# define MAX(a,b) ((a) > (b) ? (a) : (b))
#endif
#ifndef MIN
# define MIN(a,b) ((a) < (b) ? (a) : (b))
#endif

]]))
assert(f:write([[
/* We maintain a list-of-pools here, choosing to malloc a new slab of
   memory each time we run out, instead of a realloc strategy.  This
   is so that we can provide a guarantee to the user that any class
   pointer issued remains valid for the lifetime of the module.  */

typedef size_t pool_list_index_t;

/* Designator for each charclass in each pool.
   Note that enums are ints by default, but we use a single unsigned
   char per class in our explicit memory allocation.  */
typedef enum
{
  STATE_UNUSED    = 0,
  STATE_WORKING   = 1,
  STATE_COMPLETED = 2
} charclass_state_t;

typedef struct pool_info_struct {
  charclass_index_t first_index;
  size_t alloc;   /* ?? Use pool_list_index_t type for these? */
  size_t used;
  charclass_t *classes;

  /* Charclass designator byte array, one per item, allocated
     dynamically.  */
  unsigned char *class_state;
} pool_t;

static pool_list_index_t pool_list_used  = 0;
static pool_list_index_t pool_list_alloc = 0;
static pool_t *pool_list = NULL;

/* While the header only guarantees a 3-bit gutter at each end of
   each class, we use an entire word (typically 32 bits) for the
   gutter, with at least 1 word placed at the start of each pool, 1
   word as a shared gutter between each class, and 1 word after the
   last class.  */

]]))

-- Define CHARBITS, INTBITS, and CHARCLASS_WORDS used to scope out
-- size (in integers) of bit-vector array.
assert(f:write(RawText("NoHPUXBitOps"), "\n"))
-- xxRemoved: assert(f:write(RawText("CHARBITS-octets"), "\n"))
-- xxRemoved: assert(f:write(RawText("INTBITS"), "\n"))
assert(f:write(RawText("charclass_word typedef"), "\n"))
assert(f:write(RawText("CHARCLASS_WORD_BITS"), "\n"))
assert(f:write(RawText("CHARCLASS_WORD_MASK"), "\n"))
assert(f:write(TextSubst(RawText("CHARCLASS_WORDS enum"),
                         "NOTCHAR", "CHARCLASS_NOTCHAR"),
               "\n"))
--]]

assert(f:write([[
/* Flesh out opaque charclass type given in the header.  */
/* The gutter element following the class member storage also serves
   as the gutter element preceding the next class in the list.  Note
   that since the "gutter" notion explicitly permits a restricted set
   of negative indices, members need to be signed, not unsigned, so
   that arithmetic shift right can be used where possible
   (e.g. -8 >> 8 == -1, not -8 / 256 == 0).  */
struct charclass_struct {
  charclass_word members[CHARCLASS_WORDS];
  charclass_word gutter_following;
};

]]))

-- Define public functions.  We've made charclass an opaque class, so
-- need to edit each body to change parameter "c" to "*c", and in body
-- change "c" to "c->members".
assert(f:write([[
/* Define class bit operations: test, set and clear a bit.

   Grrrr.  I wanted to exploit arithmetic right shift to come up with
   a really cheap and neat way of reducing small negative bit values,
   especially if b == EOF == -1, to an index of -1 that falls neatly
   into the gutter, but strict C conformance does not guarantee this.
   The code below handles the two most likely scenarios, but, as with
   anything that is undefined, this is playing with fire.  */
#if INT_MAX == 2147483647
# define INT_BITS_LOG2 5  /* log2 (sizeof (int)) + log2 (CHAR_BIT) */
#else
# error "Not implemented: Architectures with ints other than 32 bits"
#endif

#if ((~0 >> 1) < 0)
  /* Arithmetic shift right: Both signed and unsigned cases are ok.  */
# define ARITH_SHIFT_R_INT(b)  ((b) >> INT_BITS_LOG2)
#else
  /* Avoid using right shift if b is negative.  The macro may
     evaluate b twice in some circumstances.  */
# define ARITH_SHIFT_R_INT(b) \
    (((b) < 0) ? -1 : ((b) >> INT_BITS_LOG2))
#endif

]]))
-1 : ((b) >> INT_BITS_LOG2)) #endif ]])) assert(f:write(Decls["tstbit"])) local body = EditedText("chclass-tstbit-body", "c%[b / CHARCLASS_WORD_BITS%]", "ccl->members[ARITH_SHIFT_R_INT(b)]") assert(f:write(body, "\n")) assert(f:write(Decls["setbit"])) local body = EditedText("chclass-setbit-body", "c%[b / CHARCLASS_WORD_BITS%]", "ccl->members[ARITH_SHIFT_R_INT(b)]") assert(f:write(body, "\n")) assert(f:write(Decls["clrbit"])) local body = EditedText("chclass-clrbit-body", "c%[b / CHARCLASS_WORD_BITS%]", "ccl->members[ARITH_SHIFT_R_INT(b)]") assert(f:write(body, "\n")) -- Add "setbit_range" and "clrbit_range", mainly as it allows the client -- to write significantly cleaner code in some cases (utf8), but also because -- this code is (initially) modestly more efficient. (I could implement -- bitmasks here to improve efficiency much more, but there are so many -- things to do that I'm skipping it for now.) assert(f:write(Decls["setbit_range"], "\n")) assert(f:write([[ { int bit; /* Do nothing if the range doesn't make sense. */ if (end < start) return; if (start >= CHARCLASS_NOTCHAR) return; /* Clip the range to be in the interval [-1..NOTCHAR - 1] */ start = MAX(start, -1); end = MAX(end, -1); /* We know start is < CHARCLASS_NOTCHAR from the test above. */ end = MIN(end, CHARCLASS_NOTCHAR - 1); /* ?? Could check that ccl is a valid class, but not at present. */ /* Okay, loop through the range, bit-by-bit, setting members. */ for (bit = start; bit <= end; bit++) ccl->members[ARITH_SHIFT_R_INT(bit)] |= 1U << bit % CHARCLASS_WORD_BITS; } ]])) assert(f:write(Decls["clrbit_range"], "\n")) assert(f:write([[ { int bit; /* Do nothing if the range doesn't make sense. */ if (end < start) return; if (start >= CHARCLASS_NOTCHAR) return; /* Clip the range to be in the interval [-1..NOTCHAR - 1] */ start = MAX(start, -1); end = MAX(end, -1); /* We know start is < CHARCLASS_NOTCHAR from the test above. */ end = MIN(end, CHARCLASS_NOTCHAR - 1); /* ?? Could check that ccl is a valid class, but not at present. */ /* Okay, loop through the range, bit-by-bit, clearing members. */ for (bit = start; bit <= end; bit++) ccl->members[ARITH_SHIFT_R_INT(bit)] &= ~(1U << bit % CHARCLASS_WORD_BITS); } ]])) assert(f:write([[ /* Define whole-set operations: Copy, clear, invert, compare and union */ ]])) assert(f:write(Decls["copyset"])) local body = EditedText("chclass-copyset-body", "\n memcpy .-\n", "\n memcpy (dst->members, src->members, sizeof(src->members));\n") assert(f:write(body, "\n")) assert(f:write(Decls["zeroset"])) local body = EditedText("chclass-zeroset-body", "\n memset .-\n", "\n memset (ccl->members, 0, sizeof(ccl->members));\n") assert(f:write(body, "\n")) assert(f:write(Decls["notset"])) local body = EditedText("chclass-notset-body", " s%[i%] = CHARCLASS_WORD_MASK & ~s%[i%]", " ccl->members[i] = CHARCLASS_WORD_MASK & ~ccl->members[i]") assert(f:write(body, "\n")) assert(f:write(Decls["equal"])) local body = EditedText("chclass-equal-body", "\n return .-\n", "\n return memcmp (ccl1->members, ccl2->members,\n" .. " sizeof(ccl1->members)) == 0;\n") assert(f:write(body, "\n")) -- Add "unionset" and "intersectset" functions so we can use classes more -- expressively and directly. assert(f:write(Decls["unionset"])) assert(f:write([[ { int i; for (i = 0; i < CHARCLASS_WORDS; ++i) dst->members[i] |= src->members[i]; } ]])) assert(f:write(Decls["intersectset"])) assert(f:write([[ { int i; for (i = 0; i < CHARCLASS_WORDS; ++i) dst->members[i] &= src->members[i]; } ]])) -- Rewrite charclass storage handling. 
The original code relating to this -- starts at about line 595 in dfa.c, but this code is sufficiently -- different that I'm writing it from scratch. assert(f:write([[ /* #ifdef DEBUG */ /* Nybble (4bit)-to-char conversion array for little-bit-endian nybbles. */ static const char *disp_nybble = "084c2a6e195d3b7f"; /* Return a static string describing a class (Note: not reentrant). */ char * charclass_describe (charclass_t const *ccl) { /* The string should probably be less than 90 chars, but overcompensate for limited uncertainty introduced by the %p formatting operator. */ static char buf[256]; char *p_buf = buf; int i; p_buf += sprintf (p_buf, "0x%08lx:", (unsigned long) ccl); for (i = 0; i < CHARCLASS_WORDS; i += 2) { int j = ccl->members[i]; *p_buf++ = ' '; *p_buf++ = disp_nybble[(j >> 0) & 0x0f]; *p_buf++ = disp_nybble[(j >> 4) & 0x0f]; *p_buf++ = disp_nybble[(j >> 8) & 0x0f]; *p_buf++ = disp_nybble[(j >> 12) & 0x0f]; *p_buf++ = disp_nybble[(j >> 16) & 0x0f]; *p_buf++ = disp_nybble[(j >> 20) & 0x0f]; *p_buf++ = disp_nybble[(j >> 24) & 0x0f]; *p_buf++ = disp_nybble[(j >> 28) & 0x0f]; j = ccl->members[i + 1]; *p_buf++ = disp_nybble[(j >> 0) & 0x0f]; *p_buf++ = disp_nybble[(j >> 4) & 0x0f]; *p_buf++ = disp_nybble[(j >> 8) & 0x0f]; *p_buf++ = disp_nybble[(j >> 12) & 0x0f]; *p_buf++ = disp_nybble[(j >> 16) & 0x0f]; *p_buf++ = disp_nybble[(j >> 20) & 0x0f]; *p_buf++ = disp_nybble[(j >> 24) & 0x0f]; *p_buf++ = disp_nybble[(j >> 28) & 0x0f]; } *p_buf++ = '\0'; return buf; } /* static */ void debug_pools (const char *label, bool class_contents) { pool_list_index_t pool_nr; size_t class_nr; printf ("\nPool %p debug(%s): [alloc, used: %ld %ld]\n", pool_list, label, pool_list_alloc, pool_list_used); for (pool_nr = 0; pool_nr < pool_list_used; pool_nr++) { pool_t *pool = &pool_list[pool_nr]; printf ( " %3ld: first_index, alloc, used, classes: %4ld %3lu %3lu %p\n", pool_nr, pool->first_index, pool->alloc, pool->used, pool->classes); printf (" class_states: "); for (class_nr = 0; class_nr < pool->alloc; class_nr++) switch (pool->class_state[class_nr]) { case STATE_UNUSED: putchar ('.'); break; case STATE_WORKING: putchar ('w'); break; case STATE_COMPLETED: putchar ('C'); break; default: printf ("?%02x", pool->class_state[class_nr]); } putchar ('\n'); } /* If class contents requested, print them out as well. */ if (class_contents) for (pool_nr = 0; pool_nr < pool_list_used; pool_nr++) { pool_t *pool = &pool_list[pool_nr]; for (class_nr = 0; class_nr < pool->used; class_nr++) printf ("%s\n", charclass_describe (&pool->classes[class_nr])); } } /* #endif * DEBUG */ static pool_t * add_new_pool (void) { pool_t *prev, *pool; size_t pool_class_alloc; charclass_t *alloc_mem; /* If the pools list is full, use x2nrealloc to expand its size. */ if (pool_list_used == pool_list_alloc) pool_list = x2nrealloc (pool_list, &pool_list_alloc, sizeof (pool_t)); /* Find the size of the last charclass pool in the (old) list. Scale up the size so that malloc activity will decrease as the number of pools increases. Also, add 1 here as we knock off 1 to use as a gutter later. */ prev = &pool_list[pool_list_used - 1]; pool_class_alloc = (prev->alloc * 5 / 2) + 1; alloc_mem = XNMALLOC (pool_class_alloc, charclass_t); /* Set up the new pool, shifting the alloc pointer to create the gutter preceding the first class of the pool. 
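(As a worked illustration of the scaling above: assuming the first pool was sized at POOL_MINIMUM_INITIAL_SIZE (10), successive pools have alloc values of 10, 25, 62, 155, and so on.)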
*/ pool = &pool_list[pool_list_used++]; pool->classes = alloc_mem + 1; pool->first_index = prev->first_index + prev->alloc; pool->alloc = pool_class_alloc - 1; pool->used = 0; pool->class_state = xzalloc (pool->alloc); return pool; } charclass_t * charclass_alloc (void) { pool_list_index_t pool_nr; charclass_t *class; pool_t *pool = NULL; size_t class_nr; size_t class_last_nr; charclass_word *gutter_preceding; /* Locate a pool with unused entries (if any). */ for (pool_nr = 0; pool_nr < pool_list_used; pool_nr++) { pool = &pool_list[pool_nr]; /* Try to use the earliest pool possible, first by filling in a hole from a withdrawn class, or by grabbing an unused class from the end of the list. */ class_last_nr = MIN(pool->used + 1, pool->alloc); for (class_nr = 0; class_nr < class_last_nr; class_nr++) { if (pool->class_state[class_nr] == STATE_UNUSED) goto found_pool_and_class; } } /* No space found, so prepare a new pool and make this class its first element. */ pool = add_new_pool (); class_nr = 0; /* FALLTHROUGH */ found_pool_and_class: /* Mark the found class state as working, zero its elements, and return class pointer to caller. Zeroing is needed as this class may have been previously worked on, but then abandoned or withdrawn. */ pool->class_state[class_nr] = STATE_WORKING; if (class_nr >= pool->used) pool->used = class_nr + 1; class = &pool->classes[class_nr]; /* Zero out the class' members, and also the gutters on each side. */ memset (class, 0, sizeof (*class)); gutter_preceding = ((charclass_word *) class) - 1; *gutter_preceding = 0; return class; } pool_t * _GL_ATTRIBUTE_PURE find_class_pool (charclass_t const *ccl) { pool_list_index_t pool_nr; pool_t *pool = NULL; ptrdiff_t class_ptr_offset; /* Locate the pool whose memory address space covers this class. */ /* ?? Perhaps check &pool->classes[pool->alloc] in this first loop, and then check that the index is in the "used" portion later, so we can diagnose malformed pointers more exactly. */ for (pool_nr = 0; pool_nr < pool_list_used; pool_nr++) { pool = &pool_list[pool_nr]; if ((pool->classes <= ccl) && (ccl < &pool->classes[pool->alloc])) goto found_pool; } /* No credible pool candidate was found. */ assert (!"find_class_pool: no pool found"); return NULL; found_pool: /* Make sure the class clearly lies on an array boundary within the pool's memory allocation. */ class_ptr_offset = (char *) ccl - (char *) pool->classes; if ((class_ptr_offset % sizeof (charclass_t)) != 0) { /* Pointer does not lie at the start of a pool member. */ assert (!"find_class_pool: pointer not aligned."); return NULL; } return pool; } static void withdraw_class (charclass_t *ccl, pool_t *class_pool) { pool_t *pool; size_t class_nr; charclass_word *gutter_preceding; /* Use pool reference if given, otherwise work back from the class pointer to find the associated pool. */ pool = (class_pool != NULL) ? class_pool : find_class_pool (ccl); if (pool == NULL) assert (!"Could not locate a pool for this charclass"); /* Zero out the gutters each side of the class. */ ccl->gutter_following = 0; gutter_preceding = ((charclass_word *) ccl) - 1; *gutter_preceding = 0; /* Work out the class index in the pool. */ class_nr = ccl - pool->classes; pool->class_state[class_nr] = STATE_UNUSED; /* Is this the last item within the pool's class list? */ if (class_nr == pool->used - 1) { /* Yes, reduce the pool member count by 1. */ pool->used--; return; } } /* Finish off creating a class, and report an index that can be used to reference the class.
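If an identical completed class already exists in any pool, that class is used instead: The work class is withdrawn, and the index of the existing class is returned. Callers should therefore treat the pointer obtained from charclass_alloc () as dead after this call, and re-derive any pointer from the returned index; a typical client sequence (a sketch only) is: ccl = charclass_alloc (); charclass_setbit ('x', ccl); index = charclass_completed (ccl); ccl = charclass_get_pointer (index);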
*/ charclass_index_t charclass_completed (charclass_t *ccl) { charclass_word *gutter_preceding; pool_list_index_t pool_nr; pool_t *pool; charclass_t *found = NULL; size_t class_nr; pool_t *my_pool = NULL; size_t my_class_nr = 0; /* Search all pools for a completed class matching this class, and, if found, use it in preference to the new one. While searching, also record where the work class is located. If we can't find ourselves, the pointer is invalid, and we fail an assertion. */ for (pool_nr = 0; pool_nr < pool_list_used; pool_nr++) { pool = &pool_list[pool_nr]; for (class_nr = 0; class_nr < pool->used; class_nr++) { charclass_t *search = &pool->classes[class_nr]; /* Have we found ourselves in the list? */ if (search == ccl) { /* Yes, remember this place in case no duplicate is found. */ my_pool = pool; my_class_nr = class_nr; } if (pool->class_state[class_nr] != STATE_COMPLETED) continue; if (charclass_equal (search, ccl)) { /* Another completed class matches: Use it in preference to potentially creating a duplicate. */ withdraw_class (ccl, my_pool); found = search; goto found_matching_class; } } } /* No duplicate found... but make sure the search pointer is known. */ assert (my_pool != NULL); assert (my_pool->class_state[my_class_nr] == STATE_WORKING); /* Prepare to convert the search (work) class into a completed class. */ pool = my_pool; class_nr = my_class_nr; found = &pool->classes[class_nr]; /* FALLTHROUGH */ found_matching_class: /* Clear out the gutter integers each side of the class entry. */ gutter_preceding = found->members - 1; *gutter_preceding = 0; found->gutter_following = 0; pool->class_state[class_nr] = STATE_COMPLETED; /* Return the index of the class. */ return pool->first_index + class_nr; } void charclass_abandon (charclass_t *ccl) { withdraw_class (ccl, NULL); } /* Additional functions to help clients work with classes. */ charclass_t * _GL_ATTRIBUTE_PURE charclass_get_pointer (charclass_index_t const index) { pool_list_index_t pool_nr; pool_t *pool; /* Locate the pool whose index range covers this index. */ for (pool_nr = 0; pool_nr < pool_list_used; pool_nr++) { /* Is the index inside this pool? */ pool = &pool_list[pool_nr]; if (pool->first_index <= index && index < (pool->first_index + pool->used)) { /* Yes, find the pointer within the pool and return it. */ return &pool->classes[index - pool->first_index]; } } /* The mapping above should never fail; we could return NULL, but we choose to abort instead. */ assert (!"index-to-charclass mapping failed"); return NULL; } charclass_index_t _GL_ATTRIBUTE_PURE charclass_get_index (charclass_t const *ccl) { pool_t *pool; /* This code is similar to charclass_completed... perhaps merge? */ pool = find_class_pool (ccl); if (pool == NULL) return -1; /* Report the index to the caller. */ return pool->first_index + (ccl - pool->classes); } /* Functions to initialise module on startup, and to shut down and release acquired resources at exit. */ ]])) assert(f:write(Decls["charclass-destroy-module"], [[ { pool_t *local_list; pool_list_index_t i; charclass_t *alloc_mem; /* Move the global list head into a local variable, and immediately clear the global. This is a half-hearted attempt to avoid race conditions; to do things properly, a system-wide atomic operation (locked, including multi-CPU cache coherency) should be used. */ local_list = pool_list; pool_list = NULL; /* If the list is already empty, finish now. */ if (!
local_list) return; /* First, discard the charclass memory associated with each pool, including catering for the offset used upon creation. */ for (i = 0; i < pool_list_used; i++) { /* We skipped an initial class after allocation, in order to give the first class in the pool a gutter, so undo the offset here when freeing. */ alloc_mem = local_list[i].classes; free (alloc_mem - 1); /* Also free up the class_state memory. */ free (local_list[i].class_state); } /* Second, free up the pool list itself. */ free (local_list); pool_list_used = 0; } ]])) assert(f:write(Decls["charclass-initialise"])) assert(f:write([[ { size_t initial_alloc; charclass_t *alloc_mem; pool_t *pool; charclass_t *ccl; charclass_index_t zeroclass_index; /* Usually (a gnulib assumption?) EOF = WEOF = -1, but the standard merely states that EOF must be a negative integer, not necessarily -1; furthermore, the definition for (ISO C90) WEOF states that it need not be negative at all, let alone guaranteed to be -1. In this demonstration/prototype code, we demand that both EOF and WEOF be -1, as this value makes the gutters worthwhile: The user can be relieved of some edge case processing if -1 is thrown neatly into the gutter. */ assert (EOF == -1); assert (WEOF == -1); /* First, set up the list-of-pools structure with initial storage. */ pool_list_alloc = 4; pool_list = (pool_t *) xnmalloc (pool_list_alloc, sizeof (pool_t)); /* If initial pool size is small, inflate it here as we prefer to waste a little memory, rather than issue many calls to xmalloc (). This minimum also ensures that our pool size scaling strategy has a sane (perhaps overly generous?) starting point. */ initial_alloc = MAX(initial_pool_size, POOL_MINIMUM_INITIAL_SIZE); /* Set up the first pool using our chosen first alloc size. Allocate an extra class, and offset the pool by this amount, in order to accommodate the initial gutter integer. (Note for the future: If charclass alignment becomes significant, then sizeof (charclass) and this offset may need to be changed, perhaps for SIMD instructions.) */ pool_list_used = 1; pool = &pool_list[0]; pool->first_index = 0; pool->alloc = initial_alloc; pool->used = 0; alloc_mem = XNMALLOC (pool->alloc + 1, charclass_t); pool->classes = alloc_mem + 1; pool->class_state = xzalloc (pool->alloc); /* Enforce the all-zeroes class to be the first class. This is needed as "abandon" may leave a hole in a pool in some cases, and in these cases we need to ensure that no-one else picks it up by accident (as this would invalidate the guarantee that the module eliminates all duplicates, from the point of view of the user). So, we set the first class to all-zeroes, and also zero out abandoned classes where a hole is unavoidable. */ ccl = charclass_alloc (); /* Alloc delivers an all-zeroes class. */ zeroclass_index = charclass_completed (ccl); assert (zeroclass_index == CHARCLASS_ZEROCLASS_INDEX); /* Finally, use atexit () to arrange for module cleanup on exit. */ atexit (charclass_destroy_module); } ]])) -- Finally, add trailer line (vim) assert(f:write([[ /* vim:set shiftwidth=2: */ ]])) assert(f:close()) -------------------------------------------------------------------------- --- FSATokenSubst -- Substitute bare token names with module-prefixed names -- Knowledge of the original names, and also of the all-caps versus -- all-lowercase variants, is collected here, as multiple parties want to convert -- tokens in their text.
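-- As an illustrative example (using names from the lists below), a -- fragment such as "case CSET: c = NOTCHAR;" is rewritten by this -- function as "case FSATOKEN_TK_CSET: c = FSATOKEN_NOTCHAR;".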
-- @param Original -- Source code to be modified -- @return Code with keywords/names qualified with a prefix as appropriate local function FSATokenSubst(Original) local Modified = Original -- Handle macro/uppercase names that are not tokens (prefix FSATOKEN_) for _, UppercaseKeyword in ipairs{"NOTCHAR"} do local Search = "%f[_%w]" .. UppercaseKeyword .. "%f[%W]" local Replace = "FSATOKEN_" .. UppercaseKeyword Modified = Modified:gsub(Search, Replace) end -- Finally, handle token values (prefix with FSATOKEN_TK_) for _, UppercaseToken in ipairs{ "END", "EMPTY", "BACKREF", "BEGLINE", "ENDLINE", "BEGWORD", "ENDWORD", "LIMWORD", "NOTLIMWORD", "QMARK", "STAR", "PLUS", "REPMN", "CAT", "OR", "LPAREN", "RPAREN", "ANYCHAR", "MBCSET", "WCHAR", "CSET"} do local Search = "%f[%w_]" .. UppercaseToken .. "%f[%W]" local Replace = "FSATOKEN_TK_" .. UppercaseToken Modified = Modified:gsub(Search, Replace) end -- Note: We do not try to maintain indentation of any comments after -- the substituted text. This is because this prefixing is (at present) -- a temporary hack to keep the original code's and FSAToken's code -- namespaces entirely separate, so that both can be run side-by-side -- and the outputs compared. Later, if/when the untangled code is -- chosen to replace the original, the prefixing can be eliminated, -- and/or this function can do more to maintain indentation. return Modified end -------------------------------------------------------------------------- ----------------******** fsatoken.h ********---------------- print("Creating fsatoken.h...") local f = assert(io.open("fsatoken.h", "w")) assert(f:write([[ /* fsatoken - Create tokens for a compact, coherent regular expression language ]])) assert(f:write(RawText("Copyright.dfac"), "\n")) assert(f:write(RawText("LicenseWarranty.dfac"), "\n")) assert(f:write(RawText("Authors.dfac"))) assert(f:write([[ /* 2014: Repackaged by "untangle" script, written by behoffski. */ /* Regular expression patterns are presented as text, possibly ASCII; the format is very expressive, but this comes at the cost of being somewhat expensive to interpret (including identifying invalid patterns). By tokenising the pattern, we make life much easier for the parser and other search machinery that follows. This file defines the tokens that we use, both for the benefit of the lexer/parser/dfa analyser that share this information, and for other machinery (such as the C compiler) that may need to store and/or manipulate these items. */ ]])) -- Add preprocessor lines to make this header file idempotent. assert(f:write([[ #ifndef FSATOKEN_H #define FSATOKEN_H 1 /* Always import environment-specific configuration items first. */ #include <config.h> /* Obtain defn. of ptrdiff_t from stddef.h, and CHAR_BIT from limits.h */ #include <stddef.h> #include <limits.h> /* C stream octets, and non-stream EOF, are self-representing tokens. We need to include stdio.h to obtain the definition of EOF. */ #include <stdio.h> ]])) -- Write out token-specific code extracted from dfa.c -- xxRemoved: assert(f:write(RawText("CHARBITS-octets"), "\n")) assert(f:write(FSATokenSubst(RawText("NOTCHAR")), "\n")) assert(f:write(EditedText("RegexpTokenType", "typedef ptrdiff_t token", "typedef ptrdiff_t fsatoken_token_t"), "\n")) assert(f:write(FSATokenSubst(RawText("PredefinedTokens")), "\n")) -- Define prototypes for functions provided by module body.
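-- (Usage sketch, with assumed details: if Decls["prtok"] keeps dfa.c's -- name and takes an fsatoken_token_t, a DEBUG build can trace the token -- stream by calling "prtok (tok);" after each token is fetched.)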
assert(f:write([[ /* prtok - Display token name (for debugging) */ #ifdef DEBUG ]])) WriteExternDecl(f, Decls["prtok"]) assert(f:write([[ #endif /* DEBUG */ ]])) -- Finally, add trailer lines (idempotency, vim) assert(f:write([[ #endif /* FSATOKEN_H */ /* vim:set shiftwidth=2: */ ]])) assert(f:close()) ----------------******** fsatoken.c ********---------------- print("Creating fsatoken.c...") local f = assert(io.open("fsatoken.c", "w")) assert(f:write([[ /* fsatoken - Support routines specific to token definitions ]])) assert(f:write(RawText("Copyright.dfac"), "\n")) assert(f:write(RawText("LicenseWarranty.dfac"), "\n")) assert(f:write(RawText("Authors.dfac"))) assert(f:write([[ /* 2014: Repackaged by "untangle" script, written by behoffski. */ /* The majority of the fsatoken[ch] module is in fsatoken.h, as it is shared by other modules. This file provides token-specific support functions, such as functions to print tokens (for debugging). Although there is a relationship between some generic constructs such as character classes and the CSET token defined here, the generic items are defined in a separate support library, not in this module. This is because these tokens are very FSA/grep-specific, whereas the generic constructs are potentially widely usable, and may even be amenable to hardware-specific optimisations (such as superscalar opcodes: and/or/set/clear/test-and-set/test-and-clear; and/or bit-counting operations). */ /* Always import environment-specific configuration items first. */ #include <config.h> #include "fsatoken.h" #include <stdio.h> ]])) assert(f:write(RawText("prtok-ifdef-DEBUG-start"), "\n")) assert(f:write(Decls["prtok"])) assert(f:write(FSATokenSubst(RawText("prtok-fn-body")), "\n")) assert(f:write(RawText("prtok-ifdef-DEBUG-end"))) -- Finally, add trailer line (vim) assert(f:write([[ /* vim:set shiftwidth=2: */ ]])) assert(f:close()) -------------------------------------------------------------------------- ----------------******** mbcsets.h ********---------------- print("Creating mbcsets.h...") local f = assert(io.open("mbcsets.h", "w")) assert(f:write([[ /* mbcsets -- Handle multi-byte and/or locale-dependent sets of chars. ]])) assert(f:write(RawText("Copyright.dfac"), "\n")) assert(f:write(RawText("LicenseWarranty.dfac"), "\n")) assert(f:write(RawText("Authors.dfac"))) assert(f:write([[ /* 2014: Created by "untangle" script, written by behoffski. ?? more stuff here */ ]])) -- Add preprocessor lines to make this header file idempotent. assert(f:write([[ #ifndef MBCSETS_H #define MBCSETS_H 1 /* Always import environment-specific configuration items first. */ #include <config.h> #include "charclass.h" #include <stdbool.h> #include <wchar.h> /* Define the multibyte-set descriptor as an opaque type. */ typedef struct mbcsets_set_struct mbcsets_set_t; ]])) WriteExternDecl(f, Decls["mbcsets-initialise"]) WriteExternDecl(f, Decls["mbcsets-destroy-module"]) WriteExternDecl(f, Decls["mbcsets-new"]) WriteExternDecl(f, Decls["mbcsets-set-match-sense"]) WriteExternDecl(f, Decls["mbcsets-add-wchar"]) WriteExternDecl(f, Decls["mbcsets-add-wchar-list"]) WriteExternDecl(f, Decls["mbcsets-add-class"]) WriteExternDecl(f, Decls["mbcsets-add-range"]) WriteExternDecl(f, Decls["mbcsets-receive-incomplete-charclass"]) WriteExternDecl(f, Decls["mbcsets-get-characteristics"]) WriteExternDecl(f, Decls["mbcsets-get-chars"]) -- add_equivs and add_collation not implemented yet... remove them from -- the module, so that gcc doesn't complain about such passive code.
--[=[ WriteExternDecl(f, Decls["mbcsets-add-equivs"]) WriteExternDecl(f, Decls["mbcsets-add-collation"]) --]=] WriteExternDecl(f, Decls["mbcsets-completed"]) -- Finally, add trailer lines (idempotency, vim) assert(f:write([[ #endif /* MBCSETS_H */ /* vim:set shiftwidth=2: */ ]])) assert(f:close()) ----------------******** mbcsets.c ********---------------- print("Creating mbcsets.c...") local f = assert(io.open("mbcsets.c", "w")) assert(f:write([[ /* mbcsets -- Handle multi-byte and/or locale-dependent sets of chars. ]])) assert(f:write(RawText("Copyright.dfac"), "\n")) assert(f:write(RawText("LicenseWarranty.dfac"), "\n")) assert(f:write(RawText("Authors.dfac"))) assert(f:write([[ /* 2014: Repackaged by "untangle" script, written by behoffski. */ /* ?? Need to document things here... */ /* Always import environment-specific configuration items first. */ #include #include "charclass.h" #include "mbcsets.h" #include #include #include #include #include #include "xalloc.h" /* Flesh out opaque type given in the header. */ struct mbcsets_set_struct { /* Singly-linked list of all instances, so destroy_module can release all resources by traversing the list. */ mbcsets_set_t *next_instance; charclass_t *charclass; bool invert; wchar_t *chars; /* Normal characters. */ size_t nchars; size_t chars_alloc; wctype_t *ch_classes; /* Character classes. */ size_t nch_classes; size_t ch_classes_alloc; struct /* Range characters. */ { wchar_t beg; /* Range start. */ wchar_t end; /* Range end. */ } *ranges; size_t nranges; size_t ranges_alloc; char **equivs; /* Equivalence classes. */ size_t nequivs; size_t equivs_alloc; char **coll_elems; size_t ncoll_elems; /* Collating elements. */ size_t coll_alloc; }; /* Linked list of all instances created by this module. */ static mbcsets_set_t *mbcsets_instances_list_head = NULL; ]])) assert(f:write(RawText("maybe-realloc"), "\n")) assert(f:write(Decls["mbcsets-set-match-sense"], [[ { mbc->invert = invert; } ]])) assert(f:write(Decls["mbcsets-add-wchar"], [[ { /* ?? Quietly ignore WEOF... is this reasonable? */ if (wc == WEOF) return; /* Could this character fit into a charclass set? */ /* ?? We don't know directly the signedness of wint_t; in gnulib, it is unsigned, and so testing for "wc >= 0" is flagged as an error as it is redundant. Have removed the test here for now, but not confident that this is the best treatment. */ if (/* wc >= 0 && */ wc < CHARCLASS_NOTCHAR) { /* Yes, does this wide char have a valid unichar representation? */ /* ?? The documentation states that wctob should not be used; perhaps the unibyte cache mbrtowc_cache in fsalex, built by initialise_uchar_to_wc_cache (), might be relevant here? */ int b = wctob (wc); if (b != EOF) { /* Yes, add the char (byte?) (octet?) to the charclass set. */ charclass_setbit (b, mbc->charclass); return; } } /* Ensure we have space to store another character. */ mbc->chars = maybe_realloc(mbc->chars, mbc->nchars, &mbc->chars_alloc, sizeof *mbc->chars); /* Add the character to the list. */ mbc->chars[mbc->nchars++] = wc; } ]])) assert(f:write(Decls["mbcsets-add-wchar-list"], [[ { size_t i; /* Ensure we have space to store the incoming list element(s). */ mbc->chars = maybe_realloc(mbc->chars, mbc->nchars + len, &mbc->chars_alloc, sizeof *mbc->chars); /* Add all the characters to the list. (?? use memcpy here?) */ for (i = 0; i < len; i++) mbc->chars[mbc->nchars++] = wc_list[i]; } ]])) assert(f:write(Decls["mbcsets-add-class"], [[ { /* ?? We don't check that the descriptor is valid. 
*/ /* Ensure we have space to store another class descriptor. */ mbc->ch_classes = maybe_realloc(mbc->ch_classes, mbc->nch_classes, &mbc->ch_classes_alloc, sizeof *mbc->ch_classes); /* Add the class descriptor to the list. */ mbc->ch_classes[mbc->nch_classes++] = wchar_class; } ]])) assert(f:write(Decls["mbcsets-add-range"], [[ { /* ?? We don't check that the begin/end chars are valid. */ /* Ensure we have space to store another begin/end char pair. */ mbc->ranges = maybe_realloc(mbc->ranges, mbc->nranges + 1, &mbc->ranges_alloc, sizeof *mbc->ranges); /* Add the range to the list. */ mbc->ranges[mbc->nranges].beg = beg; mbc->ranges[mbc->nranges++].end = end; } ]])) -- add_equivs and add_collation not implemented yet... remove them from -- the module, so that gcc doesn't complain about passive code. --[=[ assert(f:write(Decls["mbcsets-add-equivs"], [[ { } ]])) assert(f:write(Decls["mbcsets-add-collation"], [[ { } ]])) --]=] assert(f:write(Decls["mbcsets-receive-incomplete-charclass"], [[ { charclass_unionset (ccl, mbc->charclass); charclass_abandon (ccl); } ]])) assert(f:write(Decls["mbcsets-completed"], [[ { charclass_t *zeroset; /* Did we end up putting anything into the charclass? */ zeroset = charclass_get_pointer (0); if (charclass_equal (mbc->charclass, zeroset)) { /* No, abandon the class, and use NULL as our sentinel. */ charclass_abandon (mbc->charclass); mbc->charclass = NULL; } else { /* Yes, complete the class, and obtain a persistent pointer. */ charclass_index_t index; index = charclass_completed (mbc->charclass); mbc->charclass = charclass_get_pointer (index); } } ]])) assert(f:write(Decls["mbcsets-get-characteristics"], [[ { *p_invert = mbc->invert; *pp_charclass = mbc->charclass; *p_nchars = mbc->nchars; *p_nch_classes = mbc->nch_classes; *p_nranges = mbc->nranges; *p_nequivs = mbc->nequivs; *p_ncoll_elems = mbc->ncoll_elems; } ]])) assert(f:write(Decls["mbcsets-get-chars"], [[ { memcpy (char_list, mbc->chars, mbc->nchars * sizeof(*char_list)); } ]])) assert(f:write(Decls["mbcsets-initialise"], [[ { /* Initialise the linked list of instances created by this module. */ mbcsets_instances_list_head = NULL; atexit (mbcsets_destroy_module); } ]])) assert(f:write([[ /* Internal function to free all resources directly or indirectly used by an instance. The pointer is no longer valid after this call. */ static void free_instance (mbcsets_set_t *mbc) { size_t i; free (mbc->chars); free (mbc->ch_classes); free (mbc->ranges); for (i = 0; i < mbc->nequivs; ++i) free (mbc->equivs[i]); free (mbc->equivs); for (i = 0; i < mbc->ncoll_elems; ++i) free (mbc->coll_elems[i]); free (mbc->coll_elems); free (mbc); } ]])) assert(f:write(Decls["mbcsets-destroy-module"], [[ { mbcsets_set_t *p_list; mbcsets_set_t *p_next; /* Move the global list head into a local variable, and immediately clear the global. This is a half-hearted attempt to avoid race conditions; to do things properly, a system-wide atomic operation (locked, including multi-CPU cache coherency) should be used. */ p_list = mbcsets_instances_list_head; mbcsets_instances_list_head = NULL; /* Traverse the list of instances, releasing all resources associated with each one. */ while (p_list) { p_next = p_list->next_instance; free_instance (p_list); p_list = p_next; } } ]])) assert(f:write(Decls["mbcsets-new"], [[ { mbcsets_set_t *new_set; /* Allocate memory for new instance. */ new_set = xzalloc (sizeof (*new_set)); /* Link new instance into list of instances created by this module.
*/ new_set->next_instance = mbcsets_instances_list_head; mbcsets_instances_list_head = new_set; /* Allocate a charclass set to the instance, so we can easily push codes into that representation if possible. */ new_set->charclass = charclass_alloc (); /* Report created instance to the caller. */ return new_set; } ]])) -- Finally, add trailer line (vim) assert(f:write([[ /* vim:set shiftwidth=2: */ ]])) assert(f:close()) -------------------------------------------------------------------------- ----------------******** proto-lexparse.h ********---------------- print("Creating proto-lexparse.h...") local f = assert(io.open("proto-lexparse.h", "w")) assert(f:write([[ /* proto-lexparse -- Define how lexer and parser can interact. ]])) assert(f:write(RawText("Copyright.dfac"), "\n")) assert(f:write(RawText("LicenseWarranty.dfac"), "\n")) assert(f:write(RawText("Authors.dfac"))) assert(f:write([[ /* 2014: Created by "untangle" script, written by behoffski. (EVENTUALLYREWRITEME: A very lengthy editorial/discussion of this module follows, partially interspersed with a description of its intended function. Comments and criticism are welcome; if adopted, I expect that this comment block will be rewritten to be much more focussed and much shorter.) This file has been added very, very late in the non-linear process of developing this code. It addresses a feature that was not anticipated early on, but which I believe is worth exploring in the future: The presence of pluggable lexers to supply to the parser, or, equivalently, breaking up the master/slave relationship between the parser and the lexer, and replacing it with a peer-to-peer conversation (perhaps the software equivalent of a shared bus). Quite a bit of the need for this protocol/interface module is that I've demanded that the individual modules strictly hide their internals, and only share information and/or control of resources (such as the parser calling lex ()) in an explicit fashion. Early on, the nature of connections between the lexer and the parser appeared with the need to communicate {min,max} values as part of the REPMN token; then the need to communicate the wide character implied by the WCHAR token. The crisis arose from an early decision that at the time seemed fairly mild, but whose implications gradually grew over time: when I decided to extend the meaning of the fsalex_syntax () call to not only include the parameters named in the call (reg_syntax_t, case folding, eol char), but to also capture the locale in force at that point, and make the lexer's behaviour obey that locale, even if the locale was subsequently changed. Another example of data interchange is that the lexer builds structures to model multibyte character sets, but (at the time of writing) does not provide an interface for clients to access this information. The need for many and varied interface functions has been steadily growing, and this is just another example. The breaking point came when I saw that the parser needed to know if it was in a unibyte or multibyte locale (currently done by testing MB_CUR_MAX > 1). Because of the pluggable-lexer architecture I'd created, the lexer was already the authority on the locale, so the parser needed to query the lexer, and the lexer needed to share its information with others. The number of specialised get/set functions needed was out of control, and, if a pluggable architecture was to be maintained, the number had to be drastically reduced.
Hence this module, which defines the interface/protocol that any lexer must serve, and which allows a lexer and its client to exchange information and negotiate in a flexible fashion. This comes at the expense of type safety in many cases, but makes data exchange more formal. This module only models essential exchanges between the parser and lexer instances, and encourages the user to negotiate directly with the modules for unrelated topics. For example, the parser does not need to know what the original plain-text pattern was; it receives a full description of the pattern via the lexer tokens and associated parameters/structures. So, there is no "pattern" interchange described here: The client can work directly with the lexer. The design is to split interactions between modules into two sets: Where code is so central to the design of the modules on each side that an indirect approach would tend to significantly complicate the exchange, and/or where efficiency is a central concern, a direct function is provided to facilitate each such possible exchange. The other set consists of the majority of cases (by type, but usually not by runtime volume): For these cases, a generic "exchange" function is provided by the lexer, and is plugged into the parser. This function has parameters and return values that permit the parties to exchange information, although some strict type safety is sacrificed where memory pointers can cross the boundary. A slightly simplified definition of the exchange function is: int lexer_provided_exchange (void *lexer_instance_context, exchange_opcode_enum opcode, void *generic_parameter); The opcode defines what meaning (if any) is assigned to the parameter and/or to the return value. The exchange function, these opcodes, and the required meanings of the values involved are formally defined by this module, and both the lexer and the parser must conform to this protocol/interface. Having this module as an independent entity reinforces the independence of each party in the arrangement, and can ease the creation of alternatives (e.g. perhaps use a much simpler lexer for "grep -F" in a unibyte locale?). On the downside, some of the initial opcodes/exchanges I'm writing here are obviously directly lifted from the existing dfa.c lexer/parser interactions (e.g. the multibyte sets struct(s)). I'm hoping that this will not be a fatal flaw, but in the final analysis this module is a side-effect of requiring a pluggable lexer architecture, and this may not be acceptable to others. */ ]])) -- Add preprocessor lines to make this header file idempotent. assert(f:write([[ #ifndef PROTO_LEXPARSE_H #define PROTO_LEXPARSE_H 1 /* Always import environment-specific configuration items first. */ #include <config.h> /* The lexer returns a token, defined by fsatoken. */ #include "fsatoken.h" ]])) assert(f:write([[ /* Define opcodes for lexer/parser exchanges. */ typedef enum proto_lexparse_opcode_enum { /* Possible future opcode: PROTO_LEXPARSE_OP_GET_LOCALE, entire locale from uselocale/duplocale */ PROTO_LEXPARSE_OP_GET_IS_MULTIBYTE_LOCALE, PROTO_LEXPARSE_OP_GET_REPMN_MIN, PROTO_LEXPARSE_OP_GET_REPMN_MAX, PROTO_LEXPARSE_OP_GET_WIDE_CHAR_LIST_MAX, PROTO_LEXPARSE_OP_GET_WIDE_CHARS, PROTO_LEXPARSE_OP_GET_DOTCLASS, PROTO_LEXPARSE_OP_GET_MBCSET, } proto_lexparse_opcode_t; /* Most opcodes above are self-documenting; the "WIDE_CHAR(S)" items require a little more explanation: 1.
When the lexer returns a WCHAR token, there is at least one explicit character (specified in the pattern), plus a list of zero or more alternate characters that also match (usually because case folding is selected). The parser should use GET_WIDE_CHARS in response to a WCHAR token to fetch this list (including the explicit character as the first element of the list); and 2. The GET_WIDE_CHAR_LIST_MAX opcode is provided so that the parser can allocate storage for the longest possible list, either as a once-off allocation during startup, or on a case-by-case basis. */ ]])) assert(f:write([[ /* Declare prototypes for main lex function (lex (), fetch a token), and the exchange function. */ typedef fsatoken_token_t proto_lexparse_lex_fn_t (void *lexer_context); typedef int proto_lexparse_exchange_fn_t (void *lexer_context, proto_lexparse_opcode_t opcode, void *parameter); ]])) -- Finally, add trailer lines (idempotency, vim) assert(f:write([[ #endif /* PROTO_LEXPARSE_H */ /* vim:set shiftwidth=2: */ ]])) assert(f:close()) -------------------------------------------------------------------------- ----------------******** fsalex.h ********---------------- print("Creating fsalex.h...") local f = assert(io.open("fsalex.h", "w")) assert(f:write([[ /* fsalex - Repackage pattern text as compact, expressive tokens ]])) assert(f:write(RawText("Copyright.dfac"), "\n")) assert(f:write(RawText("LicenseWarranty.dfac"), "\n")) assert(f:write(RawText("Authors.dfac"))) assert(f:write([[ /* 2014: Repackaged by "untangle" script, written by behoffski. */ ]])) -- Add preprocessor lines to make this header file idempotent. assert(f:write([[ #ifndef FSALEX_H #define FSALEX_H 1 /* Always import environment-specific configuration items first. */ #include <config.h> #include "fsatoken.h" #include "proto-lexparse.h" #include <regex.h> /* Multiple lexer instances can exist in parallel, so define an opaque type to collect together all the context relating to each instance. */ typedef struct fsalex_ctxt_struct fsalex_ctxt_t; ]])) -- Functions to deal with entire module and *all* created instances. WriteExternDecl(f, Decls["lex-initialise"]) WriteExternDecl(f, Decls["lex-destroy-module"]) -- Declare a function to create a new lexer state. WriteExternDecl(f, Decls["lex-new"]) assert(f:write([[ /* ?? Reserve "fsalex_discard ()" for getting rid of a lexer instance. */ ]])) -- Add external decls for fsalex_syntax and fsalex_pattern. WriteExternDecl(f, Decls["lex-syntax"]) WriteExternDecl(f, Decls["lex-pattern"]) -- While dfa.h declares dfawarn() and dfaerror(), and demands that the -- client supply functions at link time, we instead provide an -- interface function so that the functions can be handed over -- explicitly. This style may be useful in the future if we want to -- move from a single lexer instance to multiple instances (objects?) assert(f:write([[ /* Define function prototypes for warning and error callbacks. */ typedef void fsalex_warn_callback_fn (const char *); typedef void /* ?? _Noreturn? */ fsalex_error_callback_fn (const char *); /* Receive functions to deal with exceptions detected by the lexer: Warnings and errors. Internally, we add the _Noreturn attribute to the error callback, to help the compiler with code flow analysis. */ extern void fsalex_exception_fns (fsalex_ctxt_t *lexer, fsalex_warn_callback_fn *warningfn, fsalex_error_callback_fn *errorfn); ]])) -- Add interface to lex() for use by the parser.
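-- The sketch below is illustrative only: The real declarations come from -- Decls["lex"] and Decls["lex-exchange"], and the function names -- "fsalex_lex"/"fsalex_exchange" are assumptions on my part. A parser -- might drive the lexer roughly as follows: -- fsatoken_token_t tok; -- do { -- tok = fsalex_lex (lexer); -- /* ... process tok; e.g. on FSATOKEN_TK_REPMN, fetch {min,max}: -- min = fsalex_exchange (lexer, PROTO_LEXPARSE_OP_GET_REPMN_MIN, NULL); -- max = fsalex_exchange (lexer, PROTO_LEXPARSE_OP_GET_REPMN_MAX, NULL); */ -- } while (tok != FSATOKEN_TK_END);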
assert(f:write([[ /* Main function to incrementally consume and interpret the pattern text, and return a token describing a single lexical element as a token, perhaps with implied parameters such as character classes for CSET tokens, and {min,max} values for each REPMN token. The user should call this function repeatedly, receiving one token each time, until the lexer detects a fatal error, or returns the END token. */ /* This function must conform to proto_lexparse_lex_fn_t. */ ]])) WriteExternDecl(f, Decls["lex"]) -- Non-core interactions between lexer and parser done by a plug-in function. WriteExternDecl(f, Decls["lex-exchange"]) -- Finally, add trailer lines (idempotency, vim) assert(f:write([[ #endif /* FSALEX_H */ /* vim:set shiftwidth=2: */ ]])) assert(f:close()) ----------------******** fsalex.c ********---------------- print("Creating fsalex.c...") local f = assert(io.open("fsalex.c", "w")) assert(f:write([[ /* fsalex - Repackage pattern text as compact, expressive tokens ]])) assert(f:write(RawText("Copyright.dfac"), "\n")) assert(f:write(RawText("LicenseWarranty.dfac"), "\n")) assert(f:write(RawText("Authors.dfac"))) assert(f:write([[ /* 2014: Repackaged by "untangle" script, written by behoffski. */ /* Always import environment-specific configuration items first. */ #include <config.h> /* define _GNU_SOURCE for regex extensions. */ #include <assert.h> #include "charclass.h" #include <ctype.h> #include "fsalex.h" #include "fsatoken.h" #include <limits.h> #include <locale.h> #include "mbcsets.h" #include "proto-lexparse.h" #include <regex.h> #include <stdbool.h> #include <stdlib.h> #include <string.h> #include <wchar.h> #include <wctype.h> #include "xalloc.h" ]])) -- Include gettext locale support for strings: _("message") assert(f:write(RawText("NaturalLangSupport"), "\n")) -- General helper macros assert(f:write(RawText("ISASCIIDIGIT"), "\n")) assert(f:write(RawText("STREQ"), "\n")) assert(f:write(RawText("MIN"), "\n")) -- Define the predicate *template* list before fleshing out the main -- lexer struct, as we need to copy the template, verbatim, into the -- lexer context when a new lexer is initialised (there may be -- reasons, such as locale, that the same predicate might map to -- different elements in different lexers). assert(f:write([[ /* The following list maps the names of the Posix named character classes to predicate functions that determine whether a given character is in the class. The leading [ has already been eaten by the lexical analyzer. Additional objects are provided to assist the client: wchar_desc for multibyte matching, and class for octet matching. Lazy evaluation and caching are used to minimise processing costs, so these additional items are only valid after a class has been located using find_pred (). */ typedef int predicate_t (wint_t, wctype_t); typedef struct predicate_entry_struct { const char *name; wctype_t wchar_desc; charclass_t *class; bool may_be_multibyte; } predicate_entry_t; /* This list is a template, copied into each lexer's state, and interrogated and updated from there. The membership of a class can vary due to locale and other settings, so each lexer must maintain its own list. Duplicate class sharing across different lexer instances is facilitated by checks in charclass_completed. */ /* Locale portability note: We may need to record the entire explicit locale when syntax () is called, and then use isalpha_l () etc. here, as otherwise the wchar_desc may be interpreted at runtime in the context of the current locale. If a complete copy of the locale needs to be stored locally, see uselocale(3) and duplocale(3) (and free the copy with freelocale(3)).
*/ static predicate_entry_t template_predicate_list[] = { {"alpha", 0, NULL, true}, {"alnum", 0, NULL, true}, {"blank", 0, NULL, true}, {"cntrl", 0, NULL, true}, {"digit", 0, NULL, false}, {"graph", 0, NULL, true}, {"lower", 0, NULL, true}, {"print", 0, NULL, true}, {"punct", 0, NULL, true}, {"space", 0, NULL, true}, {"upper", 0, NULL, true}, {"xdigit", 0, NULL, true}, {NULL, 0, NULL, false} }; #define PREDICATE_TEMPLATE_ITEMS \ (sizeof template_predicate_list / sizeof *template_predicate_list) ]])) assert(f:write([[ /* Flesh out the opaque instance context type given in the header. */ struct fsalex_ctxt_struct { /* Singly-linked list of all lexer instances, so destroy_module can release all resources by traversing the list. */ fsalex_ctxt_t *next_instance; /* Using the lexer without setting the syntax is a fatal error, so use a flag so we can report such errors in a direct fashion. */ bool syntax_initialised; /* Syntax flags/characters directing how to interpret the pattern. */ reg_syntax_t syntax_bits; bool case_fold; unsigned char eolbyte; /* Exception handling is done by explicit callbacks. */ fsalex_warn_callback_fn *warn_client; fsalex_error_callback_fn *abandon_with_error; /* Pattern pointer/length, updated as pattern is consumed. */ char const *lexptr; size_t lexleft; /* Break out some regex syntax bits into boolean vars. Do this for the ones that are heavily used, and/or where the nature of the bitmask flag test tends to clutter the lexer code. */ bool re_gnu_ops; /* GNU regex operators are allowed. */ /* Carry dotclass here, as it's easier for clients (UTF-8) to perform class operations with this class, rather than to know intimate details of the regex syntax configuration bits and items such as eolbyte. */ charclass_t *dotclass; /* ".": All chars except eolbyte and/or NUL, depending on syntax flags. */ charclass_index_t dotclass_index; /* Work variables to help organise lexer operation. */ fsatoken_token_t lasttok; bool laststart; size_t parens; /* Character class predicate mapping/caching table. */ predicate_entry_t predicates[PREDICATE_TEMPLATE_ITEMS]; /* Minrep and maxrep are actually associated with the REPMN token, and need to be accessible outside this module (by the parser), perhaps by an explicit interface call. In the far, far future, a completely-reworked token list may see these values properly become integrated into the token stream (perhaps by a pair of "Parameter" tokens? Perhaps by a MINREP token with 1 parameter, followed by a MAXREP token with a corresponding parameter?) */ int minrep, maxrep; /* Booleans to simplify unibyte/multibyte code selection paths. In addition, other flags are placed here to summarise properties of the locale in a concise fashion that can be useful when implementing optimisations. There may be some overlap and/or redundancy here; flag names are chosen to let the user be direct and concise, but we only provide names for the more popular cases, not all. */ char *locale_name; bool unibyte_locale; bool multibyte_locale; bool ascii_7bit_encoding; bool using_simple_locale; /* REVIEWME: Wide-character support variables. */ int cur_mb_len; /* Length (in bytes) of the last character fetched; this is needed when backing up during lexing. In a non-multibyte situation (locale?), this variable remains at 1; otherwise, it is updated as required by FETCH_WC. */ /* These variables are used only if in a multibyte locale. */
wchar_t wctok; /* Storage for a single multibyte character, used both during lexing, and as the implied parameter of a WCHAR token returned by the lexer. */ mbstate_t mbrtowc_state; /* State management area for mbrtowc to use. */ ]])) assert(f:write(EditedText("dfa-struct mbrtowc_cache", "NOTCHAR", "FSATOKEN_NOTCHAR"), "\n")) -- Use mbcsets to handle multibyte sets, instead of doing these -- ourselves. assert(f:write([[ mbcsets_set_t **mbcsets; size_t nmbcsets; size_t mbcsets_alloc; ]])) assert(f:write([[ }; /* Linked list of all instances created by this module. */ static fsalex_ctxt_t *fsalex_instance_list_head = NULL; ]])) -- Add utility function "maybe-realloc", previously -- "REALLOC_IF_NECESSARY", here, as a generic helper function placed -- immediately after the type/structure definitions, but before -- interlinked sets of helper functions such as "setbit_case_fold_c". assert(f:write(RawText("maybe-realloc"), "\n")) assert(f:write(EditedText("setbit::wchar_t comment", "Even for MB_CUR_MAX == 1,", "Even in unibyte locales,"))) -- Up until today (9 Mar 2014), setbit_case_fold_c from v. 2.17 was a -- bit of a mess, and well within my sights to clean up. However, -- last night, I resynced with the latest git master, and it has been -- recently cleaned up into a version that's probably better than what -- I was contemplating. (My mail feed screwed up just after the 2.17 -- release, and so I missed a week or two's worth of messages.) assert(f:write(FSATokenSubst(EditedText("setbit_case_fold_c", "charclass c", "charclass_t *c", "\n setbit %(i, c%)", "\n charclass_setbit (i, c)", " MB_CUR_MAX must be 1.", "We must be in a unibyte locale.")), "\n")) -- to_uchar converts a char to unsigned char, using a function call -- rather than merely typecasting as it catches some type errors that -- the cast doesn't. This fn is used in FETCH_SINGLE_CHAR. assert(f:write(RawText("to_uchar_typecheck"), "\n")) -- Single-byte and multibyte character fetching, rearranged by hand. -- Merely write dfambcache verbatim at first; we will need to -- modify this to work properly. --assert(f:write(RawText("dfambcache-and-mbs_to_wchar"), "\n")) assert(f:write(EditedText("dfambcache", "dfambcache %(struct dfa %*d", "initialise_uchar_to_wc_cache (fsalex_ctxt_t *lexer", "d%->mbrtowc_cache", "lexer->mbrtowc_cache"), "\n")) --]] assert(f:write([[ /* This function is intimately connected with multibyte (wide-char) handling in the macro FETCH_WC below, in the case where FETCH_SINGLE_CHAR has run but the result has been found to be inconclusive. It works by unwinding the FETCH_SINGLE_CHAR side-effects (lexptr/lexleft), then calling mbrtowc on the pattern space, and communicates mbrtowc's understanding of the octet stream back to the caller: - If a valid multibyte octet sequence is next, then the wide character associated with this sequence is written back to *p_wchar, and the number of octets consumed is returned; or - If the sequence is invalid for any reason, the mbrtowc working state is reset (zeroed), *p_wchar is not modified, and 1 is returned. Lexer state variables, including cur_mb_len, mbrtowc_state, lexleft and lexptr, are updated as appropriate by this function (mainly if mbrtowc succeeds). The wide char NUL is unusual: It is a 1-octet sequence for which mbrtowc returns a length of 0; we report it as length 1, but write the converted wide character in temp_wchar back to the caller. */ /* Additional notes: This code, in partnership with the macro FETCH_WC, is closely related to mbs_to_wchar in dfa.c. There is documentation there (e.g.
pattern must end in a sentinel, shift encodings not supported, plus other comments/guarantees) that is important, but I'm deferring writing up anything at present until I see how this code is received. */ static size_t fetch_offset_wide_char (fsalex_ctxt_t *lexer, wchar_t *p_wchar) { size_t nbytes; wchar_t temp_wchar; nbytes = mbrtowc (&temp_wchar, lexer->lexptr - 1, lexer->lexleft + 1, &lexer->mbrtowc_state); switch (nbytes) { case (size_t) -2: case (size_t) -1: /* Conversion failed: Incomplete (-2) or invalid (-1) sequence. */ memset (&lexer->mbrtowc_state, 0, sizeof (lexer->mbrtowc_state)); return 1; case (size_t) 0: /* This is the wide NUL character, actually 1 byte long. */ nbytes = 1; break; default: /* Converted character is in temp_wchar, and nbytes is a byte count. */ break; } /* We converted 1 or more bytes, tell result to caller. */ *p_wchar = temp_wchar; /* Update the number of bytes consumed (offset by 1 since FETCH_SINGLE_CHAR grabbed one earlier). */ lexer->lexptr += nbytes - 1; lexer->lexleft -= nbytes - 1; return nbytes; } ]])) assert(f:write([[ /* Single-character input fetch, with EOF/error handling. Note that characters become unsigned here. If no characters are available, the macro either returns END or reports an error, depending on eoferr. Otherwise, one character is consumed (lexptr/lexleft), the char is converted into an unsigned char, and is written into the parameter c. */ #define FETCH_SINGLE_CHAR(lexer, c, eoferr) \ do { \ if (! (lexer)->lexleft) \ { \ if ((eoferr) != 0) \ (lexer)->abandon_with_error (eoferr); \ else \ return FSATOKEN_TK_END; \ } \ (c) = to_uchar (*(lexer)->lexptr++); \ (lexer)->lexleft--; \ } while (0) /* Do the fetch in stages: Single char, octet+multibyte cache check, and possible wide char fetch if the cache result indicates that the input sequence is longer than a single octet. The first fetch handles end-of-input cases (if this happens, control never reaches the rest of the macro); otherwise, it returns temp_uchar which is used in the cache lookup, and may be the single-octet result. A cache result of WEOF means that the octet is not a complete sequence by itself, so a second fetch tweaks lexptr/lexleft to undo the single-char-fetch side-effects, and, depending on mbrtowc valid/invalid result, propagates either the multichar fetch or the single-char fetch back to the caller. */ # define FETCH_WC(lexer, c, wc, eoferr) \ do { \ wchar_t temp_wc; \ unsigned char temp_uchar; \ (lexer)->cur_mb_len = 1; \ FETCH_SINGLE_CHAR ((lexer), temp_uchar, (eoferr)); \ temp_wc = (lexer)->mbrtowc_cache[temp_uchar]; \ if (temp_wc != WEOF) \ { \ (c) = temp_uchar; \ (wc) = temp_wc; \ } \ else \ { \ size_t nbytes; \ temp_wc = temp_uchar; \ nbytes = fetch_offset_wide_char ((lexer), &temp_wc); \ (wc) = temp_wc; \ (c) = nbytes == 1 ? temp_uchar : EOF; \ (lexer)->cur_mb_len = nbytes; \ } \ } while (0) ]])) assert(f:write([[ /* Given a predicate name, find it in a list, and report the list entry to the caller. If the name is not recognised, the function returns NULL. The list entry includes a charclass set and (if relevant) a wide-char descriptor for testing for the predicate. Lazy evaluation and caching are used to keep processing costs down. */ static predicate_entry_t * find_pred (fsalex_ctxt_t *lexer, const char *str) { predicate_entry_t *p_entry; charclass_t *work_class; for (p_entry = lexer->predicates; p_entry->name; p_entry++) { if (STREQ (str, p_entry->name)) break; } /* If there was no matching predicate name found, return NULL. */ if (! 
p_entry->name) return NULL; /* Is the charclass pointer NULL for this entry? */ if (p_entry->class == NULL) { /* Yes, allocate, set up and cache a charclass for this predicate. Note that the wchar_desc entries were set up in fsalex_syntax (). */ int i; charclass_index_t index; wctype_t wctype_desc; wctype_desc = p_entry->wchar_desc; work_class = charclass_alloc (); for (i = 0; i < FSATOKEN_NOTCHAR; i++) { wchar_t wc; /* Try integer->unsigned char->wide char using lexer's mbrtowc_cache array, and, if successful, test for class membership, and set the bit in the class if the value is a member. */ /* FIXME: iswctype depends on the *current* locale (LC_CTYPE); this breaks the long-term goal of having each lexer instance use only the locale in force when fsalex_syntax was called. We can fix this by using isalnum_l () etc here, but carrying around a full copy of the locale in the lexer instance may be expensive or possibly non-portable, so it's being avoided while this code is still in an experimental/proof-of-concept form. */ wc = lexer->mbrtowc_cache[i]; if (iswctype (wc, wctype_desc)) charclass_setbit (i, work_class); } /* Mark the class as completed, and obtain a persistent class pointer. */ index = charclass_completed (work_class); p_entry->class = charclass_get_pointer (index); } /* Return predicate entry to the caller. */ return p_entry; } ]])) -- Recently-added multibyte code: lonesome_lower, case_folded_counterparts assert(f:write(RawText("unicode-lonesome-lower-table"), "\n")) -- Add in CASE_FOLDED_BUFSIZE and case_folded_counterparts code -- added Feb/Mar 2014; revised May 2014 to fit within PROTO_LEXPARSE -- infrastructure. assert(f:write([[ /* Maximum number of wide characters that can be the case-folded counterparts of a single wide character, not counting the character itself. This is 1 for towupper, 1 for towlower, and 1 for each entry in lonesome_lower[]. */ enum { CASE_FOLDED_BUFSIZE = 1 + 1 + 19 }; ]])) assert(f:write([[ /* Given a wide character c, and working to a search specification that demands that case-folded variants of the character must match wherever that character could match, generate a list of all possible match counterparts to the specified character, and return that list, the characters via a parameter pointer, and the list length (possibly 0) via the function return value. */ /* ?? CHECKME: Should parameter "c" in, and/or "folded" out, be wint_t? */ static int case_folded_counterparts (fsalex_ctxt_t *lexer, wchar_t c, wchar_t folded[CASE_FOLDED_BUFSIZE]) ]])) assert(f:write(RawText("case_folded_counterparts-body"), "\n")) -- Write out parse_bracket_exp declaration, modified to return an -- fsatoken value, and to include a lexer context a single input -- parameter. assert(f:write(EditedText("parse_bracket_exp-decl", "static token", "static fsatoken_token_t", "%(void%)", "(fsalex_ctxt_t *lexer)"))) -- Remove zeroclass definition from a nested place inside -- parse_bracket_exp as we now depend on charclass's guarantee that -- class index 0 is the zeroset. local ParseBracketExp = RawText("parse_bracket_exp-body") -- Classes are now opaque. ParseBracketExp = TextSubst(ParseBracketExp, [[ { bool invert; int c, c1, c2; charclass ccl; ]], [[ { bool invert; int c, c1, c2; charclass_t *ccl; mbcsets_set_t *work_mbc = NULL; ]]) -- Replace dfaerror and dfawarn with explicitly-provided functions. ParseBracketExp = TextSubst(ParseBracketExp, " dfawarn ", " lexer->warn_client ", " dfaerror ", " lexer->abandon_with_error ") -- ?? 
TRIAL: Remove "FIXME: multibyte code 'wrong in multiple ways, but -- never used in practice.'" ParseBracketExp = TextSubst(ParseBracketExp, [[ /* When case folding map a range, say [m-z] (or even [M-z]) to the pair of ranges, [m-z] [M-Z]. Although this code is wrong in multiple ways, it's never used in practice. FIXME: Remove this (and related) unused code. */ if (wc != WEOF && wc2 != WEOF) { work_mbc->ranges = maybe_realloc (work_mbc->ranges, work_mbc->nranges + 2, &ranges_al, sizeof *work_mbc->ranges); work_mbc->ranges[work_mbc->nranges].beg = case_fold ? towlower (wc) : wc; work_mbc->ranges[work_mbc->nranges++].end = case_fold ? towlower (wc2) : wc2; if (case_fold && (iswalpha (wc) || iswalpha (wc2))) { work_mbc->ranges[work_mbc->nranges].beg = towupper (wc); work_mbc->ranges[work_mbc->nranges++].end = towupper (wc2); } } ]], [[ /* Merely record range as given; don't try to handle the case-folding variants here for now. */ if (wc != WEOF && wc2 != WEOF) { mbcsets_add_range(work_mbc, wc, wc2); /* FIXME, part 2: Since the canonical measure of case equivalence is uppercase(A) == uppercase(B), perhaps we could do this transformation above if case_fold is true... but need to be careful not to throw away information. */ } ]]) -- MBCSETS: Change multibyte initialisation to use mbcsets module. -- Change "work" charclass (ccl) from local variable to explicitly-allocated -- dynamic class from charclass. ParseBracketExp = TextSubst(ParseBracketExp, [[ /* Work area to build a mb_char_classes. */ struct mb_char_classes *work_mbc; size_t chars_al, ranges_al, ch_classes_al, equivs_al, coll_elems_al; chars_al = ranges_al = ch_classes_al = equivs_al = coll_elems_al = 0; if (dfa->multibyte) { dfa->mbcsets = maybe_realloc (dfa->mbcsets, dfa->nmbcsets, &dfa->mbcsets_alloc, sizeof *dfa->mbcsets); /* dfa->multibyte_prop[] hold the index of dfa->mbcsets. We will update dfa->multibyte_prop[] in addtok, because we can't decide the index in dfa->tokens[]. */ /* Initialize work area. */ work_mbc = &(dfa->mbcsets[dfa->nmbcsets++]); memset (work_mbc, 0, sizeof *work_mbc); } else work_mbc = NULL; memset (ccl, 0, sizeof ccl); FETCH_WC (c, wc, _("unbalanced [")); if (c == '^') { FETCH_WC (c, wc, _("unbalanced [")); invert = true; known_bracket_exp = using_simple_locale (); } else invert = false; ]], [[ if (lexer->multibyte_locale) { /* Ensure we have space to store the new set. */ lexer->mbcsets = maybe_realloc (lexer->mbcsets, lexer->nmbcsets, &lexer->mbcsets_alloc, sizeof work_mbc); /* Initialize work area. */ work_mbc = mbcsets_new (); lexer->mbcsets[lexer->nmbcsets++] = work_mbc; } ccl = charclass_alloc (); FETCH_WC (c, wc, _("unbalanced [")); if (c == '^') { FETCH_WC (c, wc, _("unbalanced [")); invert = true; known_bracket_exp = lexer->using_simple_locale; } else invert = false; ]]) -- Change find_pred code to use the typedef, and to merge the found -- predicate static class into the function's dynamic class. Also -- replace dfaerror with internal abandon_with_error. ParseBracketExp = TextSubst(ParseBracketExp, [[ if (c1 == ':') /* Build character class. POSIX allows character classes to match multicharacter collating elements, but the regex code does not support that, so do not worry about that possibility. */ { char const *class = (case_fold && (STREQ (str, "upper") || STREQ (str, "lower")) ? "alpha" : str); const struct dfa_ctype *pred = find_pred (class); if (!pred) dfaerror (_("invalid character class")); if (dfa->multibyte && !pred->single_byte_only) { /* Store the character class as wctype_t. 
*/ wctype_t wt = wctype (class); work_mbc->ch_classes = maybe_realloc (work_mbc->ch_classes, work_mbc->nch_classes, &ch_classes_al, sizeof *work_mbc->ch_classes); work_mbc->ch_classes[work_mbc->nch_classes++] = wt; } for (c2 = 0; c2 < NOTCHAR; ++c2) if (pred->func (c2)) setbit (c2, ccl); } ]], [[ if (c1 == ':') /* Find and merge named character class. POSIX allows character classes to match multicharacter collating elements, but the regex code does not support that, so do not worry about that possibility. */ { char const *class; predicate_entry_t *pred; class = str; if (case_fold && (STREQ (class, "upper") || STREQ (class, "lower"))) class = "alpha"; pred = find_pred (lexer, class); if (! pred) lexer->abandon_with_error (_("invalid character class")); charclass_unionset (pred->class, ccl); /* Does this class have a wide-char type descriptor? */ if (lexer->multibyte_locale && pred->may_be_multibyte) { mbcsets_add_class(work_mbc, pred->wchar_desc); } } ]]) -- Now have a flag, "using_simple_locale"; also use setbit_range. ParseBracketExp = TextSubst(ParseBracketExp, [[ else if (using_simple_locale ()) { for (c1 = c; c1 <= c2; c1++) setbit (c1, ccl); if (case_fold) { int uc = toupper (c); int uc2 = toupper (c2); for (c1 = 0; c1 < NOTCHAR; c1++) { int uc1 = toupper (c1); if (uc <= uc1 && uc1 <= uc2) setbit (c1, ccl); } } ]], [[ else if (lexer->using_simple_locale) { charclass_setbit_range (c, c2, ccl); if (case_fold) { int uc = toupper (c); int uc2 = toupper (c2); for (c1 = 0; c1 < NOTCHAR; c1++) { int uc1 = toupper (c1); if (uc <= uc1 && uc1 <= uc2) charclass_setbit (c1, ccl); } } ]]) -- Keep cracking down on MB_CUR_MAX, case-by-case ParseBracketExp = TextSubst(ParseBracketExp, [[ if (!dfa->multibyte) { if (case_fold) setbit_case_fold_c (c, ccl); else setbit (c, ccl); continue; } ]], [[ if (lexer->unibyte_locale) { if (case_fold) setbit_case_fold_c (c, ccl); else charclass_setbit (c, ccl); continue; } ]]) ParseBracketExp = TextSubst(ParseBracketExp, [[ if (c2 != ']') { if (c2 == '\\' && (syntax_bits & RE_BACKSLASH_ESCAPE_IN_LISTS)) FETCH_WC (c2, wc2, _("unbalanced [")); if (dfa->multibyte) { ]], [[ if (c2 != ']') { if (c2 == '\\' && (syntax_bits & RE_BACKSLASH_ESCAPE_IN_LISTS)) FETCH_WC (c2, wc2, _("unbalanced [")); if (lexer->multibyte_locale) { ]]) -- Hack around with CASE_FOLDED_BUFSIZE code by hand ParseBracketExp = TextSubst(ParseBracketExp, [[ wchar_t folded[CASE_FOLDED_BUFSIZE + 1]; int i; int n = (case_fold ? case_folded_counterparts (wc, folded + 1) + 1 : 1); folded[0] = wc; for (i = 0; i < n; i++) if (!setbit_wc (folded[i], ccl)) { work_mbc->chars = maybe_realloc (work_mbc->chars, work_mbc->nchars, &chars_al, sizeof *work_mbc->chars); work_mbc->chars[work_mbc->nchars++] = folded[i]; } ]], [[ wchar_t folded[CASE_FOLDED_BUFSIZE + 1]; int n = 1; folded[0] = wc; if (case_fold) n += case_folded_counterparts (lexer, wc, &folded[1]); mbcsets_add_wchar_list (work_mbc, n, folded); ]]) -- Convert module-global variable references to instance-local refs. ParseBracketExp = TextSubst(ParseBracketExp, [[ FETCH_WC (c, wc, _("unbalanced [")); if ((c == c1 && *lexptr == ']') || lexleft == 0) ]], [[ FETCH_WC (c, wc, _("unbalanced [")); if ((c == c1 && *lexer->lexptr == ']') || lexer->lexleft == 0) ]]) ParseBracketExp = TextSubst(ParseBracketExp, [[ /* In the case [x-], the - is an ordinary hyphen, which is left in c1, the lookahead character. */ lexptr -= cur_mb_len; lexleft += cur_mb_len; ]], [[ /* In the case [x-], the - is an ordinary hyphen, which is left in c1, the lookahead character. 
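(Note: this pushback is exact because FETCH_WC records, in lexer->cur_mb_len, how many bytes the most recent fetch consumed; stepping lexptr back by that amount and crediting lexleft undoes that fetch completely.)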
*/ lexer->lexptr -= lexer->cur_mb_len; lexer->lexleft += lexer->cur_mb_len; ]]) ParseBracketExp = TextSubst(ParseBracketExp, [=[ /* A bracket expression like [a-[.aa.]] matches an unknown set. Treat it like [-a[.aa.]] while parsing it, and remember that the set is unknown. */ if (c2 == '[' && *lexptr == '.') ]=], [=[ /* A bracket expression like [a-[.aa.]] matches an unknown set. Treat it like [-a[.aa.]] while parsing it, and remember that the set is unknown. */ if (c2 == '[' && *lexer->lexptr == '.') ]=]) ParseBracketExp = TextSubst(ParseBracketExp, [[ if (dfa->multibyte) { static charclass zeroclass; work_mbc->invert = invert; work_mbc->cset = equal (ccl, zeroclass) ? -1 : charclass_index (ccl); return MBCSET; } if (invert) { assert (!dfa->multibyte); notset (ccl); if (syntax_bits & RE_HAT_LISTS_NOT_NEWLINE) clrbit (eolbyte, ccl); } return CSET + charclass_index (ccl); ]], [[ if (lexer->multibyte_locale) { mbcsets_set_match_sense (work_mbc, invert); mbcsets_receive_incomplete_charclass (work_mbc, ccl); mbcsets_completed (work_mbc); return MBCSET; } if (invert) { charclass_notset (ccl); if (syntax_bits & RE_HAT_LISTS_NOT_NEWLINE) charclass_clrbit (eolbyte, ccl); } return CSET + charclass_completed (ccl); ]]) -- Rewrite token references to have explicit FSATOKEN_ prefixes. ParseBracketExp = FSATokenSubst(ParseBracketExp) -- Rewrite references to variables moved into lexer context. -- Try pattern search/replace for these variables; this may cause -- collateral damage, but the alternative is tedious. for _, Keyword in ipairs{"case_fold", "syntax_bits", "eolbyte", "dotclass"} do ParseBracketExp = ParseBracketExp:gsub("([^%w_])" .. Keyword .. "([^%w_])", "%1lexer->" .. Keyword .. "%2") end ParseBracketExp = ParseBracketExp:gsub(" FETCH_WC %(", " FETCH_WC (lexer, ") assert(f:write(ParseBracketExp, "\n")) -- Lex decl and body; lots more lexer-> edits in the body... assert(f:write(Decls["lex"])) local LexBody = RawText("lex-body") -- We set up static "letters" and "notletters" classes (and associated -- indices) in fsalex_syntax, so the code here can be much, much simpler. -- Note (again) that "letters" is a rotten name for IS_WORD_CONSTITUENT -- characters: isalnum() plus '_', but that's how letters/notletters is -- set up. We also remove variables c2 and ccl from LexBody. Finally, -- we test syntax_initialised first, as leaving this unset is a fatal -- error. LexBody = TextSubst(LexBody, [[ { int c, c2; bool backslash = false; charclass ccl; int i; ]], [[ { int c; bool backslash = false; int i; predicate_entry_t *predicate; charclass_t *work_class; charclass_index_t class_index; mbcsets_set_t *work_mbc; bool invert; /* Ensure that syntax () has been called on this lexer instance; many things will fail if this isn't done. */ assert (lexer->syntax_initialised); ]]) -- Replace dfaerror and dfawarn with explicitly-provided functions. LexBody = LexBody:gsub(" dfawarn ", " lexer->warn_client ") LexBody = LexBody:gsub(" dfaerror ", " lexer->abandon_with_error ") -- Deal with some early cases where static vars are now in lexer state LexBody = TextSubst(LexBody, [[ for (i = 0; i < 2; ++i) { FETCH_WC (c, wctok, NULL); switch (c) { case '\\': if (backslash) goto normal_char; if (lexleft == 0) ]], [[ for (i = 0; i < 2; ++i) { FETCH_WC (c, lexer->wctok, NULL); switch (c) { case '\\': if (backslash) goto normal_char; if (lexer->lexleft == 0) ]]) -- Make "not-RE_NO_GNU_OPS" more readable by using a direct flag variable.
LexBody = TextSubst(LexBody, "!(syntax_bits & RE_NO_GNU_OPS)", "lexer->re_gnu_ops") -- Handle "\s" and "\S" in the same fashion as "\w" and "\W". However, -- multibyte issues arise more directly in this code. LexBody = TextSubst(LexBody, [==[ case 's': case 'S': if (!backslash || (syntax_bits & RE_NO_GNU_OPS)) goto normal_char; if (!dfa->multibyte) { zeroset (ccl); for (c2 = 0; c2 < NOTCHAR; ++c2) if (isspace (c2)) setbit (c2, ccl); if (c == 'S') notset (ccl); laststart = false; return lasttok = CSET + charclass_index (ccl); } #define PUSH_LEX_STATE(s) \ do \ { \ char const *lexptr_saved = lexptr; \ size_t lexleft_saved = lexleft; \ lexptr = (s); \ lexleft = strlen (lexptr) #define POP_LEX_STATE() \ lexptr = lexptr_saved; \ lexleft = lexleft_saved; \ } \ while (0) /* FIXME: see if optimizing this, as is done with ANYCHAR and add_utf8_anychar, makes sense. */ /* \s and \S are documented to be equivalent to [[:space:]] and [^[:space:]] respectively, so tell the lexer to process those strings, each minus its "already processed" '['. */ PUSH_LEX_STATE (c == 's' ? "[:space:]]" : "^[:space:]]"); lasttok = parse_bracket_exp (); POP_LEX_STATE (); laststart = false; return lasttok; ]==], [==[ case 's': case 'S': /* "\s" == "[[:space:]]"; "\S" == "[^[:space:]]". */ if (! (backslash && lexer->re_gnu_ops)) goto normal_char; lexer->laststart = false; invert = c == 'S'; predicate = find_pred (lexer, "space"); if (c == 's') class_index = charclass_get_index (predicate->class); else { /* Invert the predicate class in a work class. */ work_class = charclass_alloc (); charclass_copyset (predicate->class, work_class); charclass_notset (work_class); class_index = charclass_completed (work_class); } if (lexer->unibyte_locale) return lexer->lasttok = FSATOKEN_TK_CSET + class_index; /* FIXME: see if optimizing this, as is done with ANYCHAR and add_utf8_anychar, makes sense. */ /* Multibyte locale: Fill out an entire set description. */ lexer->mbcsets = maybe_realloc (lexer->mbcsets, lexer->nmbcsets, &lexer->mbcsets_alloc, sizeof work_mbc); work_mbc = mbcsets_new (); lexer->mbcsets[lexer->nmbcsets++] = work_mbc; mbcsets_set_match_sense (work_mbc, invert); mbcsets_add_class (work_mbc, predicate->wchar_desc); /* ?? REVIEWME: "invert" in parse_bracket_exp leads to BACKREF (== "this is too hard for me") token via known_bracket_exp flag. */ if ((c == 'S') && !lexer->using_simple_locale) return lexer->lasttok = FSATOKEN_TK_BACKREF; else return lexer->lasttok = FSATOKEN_TK_MBCSET; ]==]) LexBody = TextSubst(LexBody, [[ case 'w': case 'W': if (!backslash || (syntax_bits & RE_NO_GNU_OPS)) goto normal_char; zeroset (ccl); for (c2 = 0; c2 < NOTCHAR; ++c2) if (IS_WORD_CONSTITUENT (c2)) setbit (c2, ccl); if (c == 'W') notset (ccl); laststart = false; return lasttok = CSET + charclass_index (ccl); ]], [==[ case 'w': case 'W': /* Can mean "[_[:alnum:]]" (\w) or its inverse (\W). */ if (! (backslash && lexer->re_gnu_ops)) goto normal_char; lexer->laststart = false; predicate = find_pred (lexer, "alnum"); work_class = charclass_alloc (); charclass_copyset (predicate->class, work_class); charclass_setbit ('_', work_class); if (c == 'W') charclass_notset (work_class); return lasttok = FSATOKEN_TK_CSET + charclass_completed (work_class); ]==]) -- Edit match-any (".") to use a static class, in a similar fashion to -- the substitution above. The work has been shifted into fsalex_syntax.
LexBody = TextSubst(LexBody, [[ case '.': if (backslash) goto normal_char; if (dfa->multibyte) { /* In multibyte environment period must match with a single character not a byte. So we use ANYCHAR. */ laststart = false; return lasttok = ANYCHAR; } zeroset (ccl); notset (ccl); if (!(syntax_bits & RE_DOT_NEWLINE)) clrbit (eolbyte, ccl); if (syntax_bits & RE_DOT_NOT_NULL) clrbit ('\0', ccl); laststart = false; return lasttok = CSET + charclass_index (ccl); ]], [[ case '.': if (backslash) goto normal_char; lexer->laststart = false; if (lexer->multibyte_locale) { /* In multibyte environment period must match with a single character not a byte. So we use ANYCHAR. */ return lasttok = FSATOKEN_TK_ANYCHAR; } return lasttok = FSATOKEN_TK_CSET + lexer->dotclass_index; ]]) -- Finally, edit lexbody where alphanumeric characters are hacked into -- classes if case folding is selected. This is the key place that I -- would like to attack in Stage 2: Recast the token set so that -- high-level information is modelled explicitly for longer (especially -- up to where dfamusts can contemplate forming "case-insensitive fixed -- strings" as a search option). LexBody = TextSubst(LexBody, [[ default: normal_char: laststart = false; /* For multibyte character sets, folding is done in atom. Always return WCHAR. */ if (dfa->multibyte) return lasttok = WCHAR; if (case_fold && isalpha (c)) { zeroset (ccl); setbit_case_fold_c (c, ccl); return lasttok = CSET + charclass_index (ccl); } return lasttok = c; ]], [[ default: normal_char: lexer->laststart = false; /* For multibyte character sets, folding is done in atom, so always return WCHAR. */ if (lexer->multibyte_locale) return lasttok = FSATOKEN_TK_WCHAR; if (case_fold && isalpha (c)) { charclass_t *ccl = charclass_alloc (); setbit_case_fold_c (c, ccl); return lasttok = FSATOKEN_TK_CSET + charclass_completed (ccl); } return lasttok = c; ]]) LexBody = TextSubst(LexBody, [[ case '$': if (backslash) goto normal_char; if (syntax_bits & RE_CONTEXT_INDEP_ANCHORS || lexleft == 0 || (syntax_bits & RE_NO_BK_PARENS ? lexleft > 0 && *lexptr == ')' : lexleft > 1 && lexptr[0] == '\\' && lexptr[1] == ')') || (syntax_bits & RE_NO_BK_VBAR ? lexleft > 0 && *lexptr == '|' : lexleft > 1 && lexptr[0] == '\\' && lexptr[1] == '|') || ((syntax_bits & RE_NEWLINE_ALT) && lexleft > 0 && *lexptr == '\n')) return lasttok = ENDLINE; goto normal_char; ]], [[ case '$': if (backslash) goto normal_char; if (syntax_bits & RE_CONTEXT_INDEP_ANCHORS || lexer->lexleft == 0 || (syntax_bits & RE_NO_BK_PARENS ? lexer->lexleft > 0 && *lexer->lexptr == ')' : lexer->lexleft > 1 && lexer->lexptr[0] == '\\' && lexer->lexptr[1] == ')') || (syntax_bits & RE_NO_BK_VBAR ? lexer->lexleft > 0 && *lexer->lexptr == '|' : lexer->lexleft > 1 && lexer->lexptr[0] == '\\' && lexer->lexptr[1] == '|') || ((syntax_bits & RE_NEWLINE_ALT) && lexer->lexleft > 0 && *lexer->lexptr == '\n')) return lasttok = ENDLINE; goto normal_char; ]]) -- Use temporary minrep/maxrep variables in lexer "{}" processing, and copy -- these values to lexer->minrep and lexer->maxrep when complete. 
LexBody = TextSubst(LexBody, [[ /* Cases: {M} - exact count {M,} - minimum count, maximum is infinity {,N} - 0 through N {,} - 0 to infinity (same as '*') {M,N} - M through N */ { char const *p = lexptr; char const *lim = p + lexleft; minrep = maxrep = -1; ]], [[ /* Cases: {M} - exact count {M,} - minimum count, maximum is infinity {,N} - 0 through N {,} - 0 to infinity (same as '*') {M,N} - M through N */ { char const *p = lexer->lexptr; char const *lim = p + lexer->lexleft; int minrep = -1; int maxrep = -1; ]]) LexBody = TextSubst(LexBody, [[ if (RE_DUP_MAX < maxrep) lexer->abandon_with_error (_("regular expression too big")); lexptr = p; lexleft = lim - p; } laststart = false; return lasttok = REPMN; ]], [[ if (RE_DUP_MAX < maxrep) lexer->abandon_with_error (_("regular expression too big")); lexer->lexptr = p; lexer->lexleft = lim - p; lexer->minrep = minrep; lexer->maxrep = maxrep; } lexer->laststart = false; return lasttok = REPMN; ]]) LexBody = TextSubst(LexBody, [[ case '^': if (backslash) goto normal_char; if (syntax_bits & RE_CONTEXT_INDEP_ANCHORS || lasttok == END || lasttok == LPAREN || lasttok == OR) return lasttok = BEGLINE; goto normal_char; ]], [[ case '^': if (backslash) goto normal_char; if (syntax_bits & RE_CONTEXT_INDEP_ANCHORS || lexer->lasttok == END || lexer->lasttok == LPAREN || lexer->lasttok == OR) return lasttok = BEGLINE; goto normal_char; ]]) -- Replace all "return lasttok = " cases with "return lexer->lasttok = " LexBody = TextSubst(LexBody, " return lasttok = ", " return lexer->lasttok = ") -- There are a few remaining " laststart" cases in the code LexBody = TextSubst(LexBody, " laststart", " lexer->laststart") -- Deal with "parens" variable, moved into lexer state LexBody = TextSubst(LexBody, " ++parens;", " ++lexer->parens;") LexBody = TextSubst(LexBody, " --parens;", " --lexer->parens;") LexBody = TextSubst(LexBody, [[ if (parens == 0 && syntax_bits & RE_UNMATCHED_RIGHT_PAREN_ORD) ]], [[ if (lexer->parens == 0 && syntax_bits & RE_UNMATCHED_RIGHT_PAREN_ORD) ]]) LexBody = LexBody:gsub(" FETCH_WC %(", " FETCH_WC (lexer, ") LexBody = LexBody:gsub(" parse_bracket_exp %(%)", " parse_bracket_exp (lexer)") -- Add fsatoken prefixes after major edits have been done. LexBody = FSATokenSubst(LexBody) -- Rewrite references to variables moved into lexer. -- Try pattern search/replace for these variables; this may cause -- collateral damage, but the alternative is tedious. for _, Keyword in ipairs{"case_fold", "syntax_bits", "eolbyte", "dotclass"} do LexBody = LexBody:gsub("([^%w_])" .. Keyword .. "([^%w_])", "%1lexer->" .. Keyword .. "%2") end assert(f:write(LexBody, "\n")) assert(f:write(Decls["lex-pattern"], [[ { /* Copy parameters to internal state variables. */ lexer->lexptr = pattern; lexer->lexleft = pattern_len; /* Reset lexical scanner state. */ lexer->lasttok = FSATOKEN_TK_END; lexer->laststart = 1; lexer->parens = 0; /* Reset multibyte parsing state. */ lexer->cur_mb_len = 1; memset (&lexer->mbrtowc_state, 0, sizeof (lexer->mbrtowc_state)); } ]])) assert(f:write([[ /* Report whether codes 0-127 conform to ASCII encoding. This is handy for optimisation, particularly for lower-case and upper-case characters, as the codes are contiguous, unlike EBCDIC. ASCII is also a common denominator in many other unibyte code pages (locales), and also in the multibyte UTF-8 locale. */ static bool _GL_ATTRIBUTE_PURE char_encoding_7bits_is_ascii (void) { /* True if the native character set is known to be compatible with the C locale.
The following test isn't perfect, but it's good enough in practice, as only ASCII and EBCDIC are in common use and this test correctly accepts ASCII and rejects EBCDIC. */ const bool is_ascii = '\b' == 8 && '\t' == 9 && '\n' == 10 && '\v' == 11 && '\f' == 12 && '\r' == 13 && ' ' == 32 && '!' == 33 && '"' == 34 && '#' == 35 && '%' == 37 && '&' == 38 && '\'' == 39 && '(' == 40 && ')' == 41 && '*' == 42 && '+' == 43 && ',' == 44 && '-' == 45 && '.' == 46 && '/' == 47 && '0' == 48 && '9' == 57 && ':' == 58 && ';' == 59 && '<' == 60 && '=' == 61 && '>' == 62 && '?' == 63 && 'A' == 65 && 'Z' == 90 && '[' == 91 && '\\' == 92 && ']' == 93 && '^' == 94 && '_' == 95 && 'a' == 97 && 'z' == 122 && '{' == 123 && '|' == 124 && '}' == 125 && '~' == 126; return is_ascii; } ]])) assert(f:write(Decls["lex-syntax"], [[ { charclass_t *work_class; predicate_entry_t *pred; char const *locale_name; /* Set a flag noting that this lexer has had its syntax params set. */ lexer->syntax_initialised = true; /* Record the function parameters in our local context. */ lexer->syntax_bits = bits; lexer->case_fold = fold; lexer->eolbyte = eol; /* Set up unibyte/multibyte flags, based on MB_CUR_MAX, which depends on the current locale. We capture this information here as the locale may change later. At present, we don't capture MB_CUR_MAX itself. */ if (MB_CUR_MAX > 1) { /* Multibyte locale: Prepare booleans to make code easier to read */ lexer->unibyte_locale = false; lexer->multibyte_locale = true; /* Discard any earlier storage we may have acquired. */ free (lexer->mbcsets); lexer->mbcsets = NULL; /* Set up an array of structures to hold multibyte character sets. */ lexer->nmbcsets = 0; lexer->mbcsets_alloc = 2; lexer->mbcsets = xzalloc (sizeof (*lexer->mbcsets) * lexer->mbcsets_alloc); } else { /* Unibyte locale: Prepare booleans to make code easier to read */ lexer->unibyte_locale = true; lexer->multibyte_locale = false; } /* Charclass guarantees that class index 0 is zeroclass, so we don't need to set it up here. */ /* Set up a character class to match anychar ('.'), tailored to accommodate options from the regex syntax. */ work_class = charclass_alloc (); charclass_notset (work_class); if (! (lexer->syntax_bits & RE_DOT_NEWLINE)) { charclass_clrbit (lexer->eolbyte, work_class); } if (lexer->syntax_bits & RE_DOT_NOT_NULL) { charclass_clrbit (0, work_class); } lexer->dotclass_index = charclass_completed (work_class); lexer->dotclass = charclass_get_pointer (lexer->dotclass_index); /* Testing for the absence of RE_NO_GNU_OPS in syntax_bits happens often, so set a direct flag variable: This makes code more readable. */ lexer->re_gnu_ops = ! (lexer->syntax_bits & RE_NO_GNU_OPS); /* Initialise cache and other tables that have syntax and/or locale influences. */ /* Set up the wchar_desc fields of the predicate table. */ for (pred = lexer->predicates; pred->name != NULL; pred++) pred->wchar_desc = wctype (pred->name); /* Record the name of the current locale. Note that setlocale can return NULL for the name. */ lexer->locale_name = NULL; locale_name = setlocale (LC_ALL, NULL); if (locale_name != NULL) lexer->locale_name = xstrdup (locale_name); /* Determine if the locale's treatment of codes 0-127 matches ASCII, as some operations (e.g. "[c-v]") become easier to handle. */ lexer->ascii_7bit_encoding = char_encoding_7bits_is_ascii (); /* Further optimisations are possible if the locale is a unibyte ASCII 7-bit C/POSIX environment, and this is a common case, so spend some effort looking for this situation.
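For example, in such a locale a bracket range like [c-v] can be expanded by a plain loop over contiguous byte codes (a sketch): for (c1 = c; c1 <= c2; c1++) charclass_setbit (c1, ccl); with no collation order or multibyte sequences to worry about; the using_simple_locale paths in parse_bracket_exp rely on exactly this property.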
*/ lexer->using_simple_locale = false; if (lexer->unibyte_locale && lexer->ascii_7bit_encoding) { lexer->using_simple_locale = ! lexer->locale_name || STREQ (lexer->locale_name, "C") || STREQ (lexer->locale_name, "POSIX"); } /* Initialise cache that distinguishes between unibyte and multibyte (wide) characters based on the leading octet. */ initialise_uchar_to_wc_cache (lexer); /* ?? If a complete copy of the locale needs to be stored within this instance, see uselocale(3) and duplocale(3) (and free the copy with freelocale(3)). */ } ]])) -- 12 Apr 2014: Support the newly-introduced "exchange" interface assert(f:write(Decls["lex-exchange"], [[ { switch (opcode) { case PROTO_LEXPARSE_OP_GET_IS_MULTIBYTE_LOCALE: return (int) lexer->multibyte_locale; case PROTO_LEXPARSE_OP_GET_REPMN_MIN: return lexer->minrep; case PROTO_LEXPARSE_OP_GET_REPMN_MAX: return lexer->maxrep; case PROTO_LEXPARSE_OP_GET_WIDE_CHAR_LIST_MAX: /* Original token, plus upper/lowercase, plus extended variants. */ return 1 + CASE_FOLDED_BUFSIZE; case PROTO_LEXPARSE_OP_GET_WIDE_CHARS: { wchar_t *p_wc = (wchar_t *) param; /* Record the original token (wctok) implicitly associated with WCHAR. */ *p_wc++ = lexer->wctok; /* Add in direct and/or indirect case equivalents, as appropriate. */ if (lexer->case_fold) p_wc += case_folded_counterparts (lexer, lexer->wctok, p_wc); /* Return how many wide chars (at least 1) are in the list. */ return p_wc - (wchar_t *) param; } break; case PROTO_LEXPARSE_OP_GET_DOTCLASS: *((charclass_t **) param) = lexer->dotclass; break; case PROTO_LEXPARSE_OP_GET_MBCSET: /* Return the set associated with the most recent MBCSET token. */ *((mbcsets_set_t **) param) = lexer->mbcsets[lexer->nmbcsets - 1]; break; default: /* ?? Not sure if we should complain/assert or merely ignore an opcode that we don't recognise here. */ assert (!"unrecognised PROTO_LEXPARSE opcode"); /* NOTREACHED */ break; } /* If we reach here, return value is unimportant, so just say 0. */ return 0; } ]])) assert(f:write(Decls["lex-exception-fns"], [[ { /* Record the provided functions in the lexer's context. */ lexer->warn_client = warningfn; lexer->abandon_with_error = errorfn; } ]])) assert(f:write([[ /* Add "not provided!" stub function that gets called if the client fails to provide proper resources. This is a hack, merely to get the module started; better treatment needs to be added later. */ static void no_function_provided (void *unused) { assert (!"fsalex: Plug-in function required, but not provided."); } ]])) -- Define a "new" function to generate an initial lexer context. assert(f:write(Decls["lex-new"], [[ { fsalex_ctxt_t *new_context; /* Acquire zeroed memory for new lexer context. */ new_context = XZALLOC (fsalex_ctxt_t); /* ?? Point warning and error functions to a "you need to tell me these first!" function? */ new_context->warn_client = (fsalex_warn_callback_fn *) no_function_provided; new_context->abandon_with_error = (fsalex_error_callback_fn *) no_function_provided; /* Default to working in a non-multibyte locale. In some cases, FETCH_WC never sets this variable (as it's assumed to be 1), so fulfil this expectation here. */ new_context->cur_mb_len = 1; /* Copy the template predicate list into this context, so that we can have lexer-specific named predicate classes. */ memcpy (new_context->predicates, template_predicate_list, sizeof (new_context->predicates)); /* Default to unibyte locale at first; the final locale setting is made according to what's in force when fsalex_syntax () is called.
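A typical client sequence is therefore (a sketch; the authoritative prototypes are in fsalex.h): lexer = fsalex_new (); fsalex_exception_fns (lexer, warn_fn, error_fn); fsalex_syntax (lexer, bits, fold, eol); fsalex_pattern (lexer, pattern, pattern_len); with the locale in force at the fsalex_syntax () call being the one that this instance captures.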
*/ new_context->unibyte_locale = true; new_context->multibyte_locale = false; /* Many things depend on decisions made in fsalex_syntax (), so note here that it hasn't been called yet, and fail gracefully later if the client hasn't called the function before commencing work. */ new_context->syntax_initialised = false; /* Add this instance at the head of the module's list. */ new_context->next_instance = fsalex_instance_list_head; fsalex_instance_list_head = new_context; return new_context; } ]])) assert(f:write([[ /* Internal function to free all resources directly or indirectly used by an instance. The pointer is no longer valid after this call. */ static void free_instance (fsalex_ctxt_t *lexer) { /* Free top-level structures. */ free (lexer->locale_name); free (lexer->mbcsets); free (lexer); /* Note that multibyte sets are no longer stored explicitly in the instance; apart from the pointer list (freed above), the set contents are freed separately when mbcsets module is destroyed. */ } ]])) assert(f:write(Decls["lex-destroy-module"], [[ { fsalex_ctxt_t *p_list; fsalex_ctxt_t *p_next; /* Move the global list head into a local variable, and immediately clear the global. This is a half-hearted attempt to avoid race conditions; to do things properly, a system-wide atomic (locked, including multi-CPU cache coherency) operation should be used. */ p_list = fsalex_instance_list_head; fsalex_instance_list_head = NULL; /* Traverse the list of instances, releasing all resources associated with each one. */ while (p_list) { p_next = p_list->next_instance; free_instance (p_list); p_list = p_next; } } ]])) assert(f:write(Decls["lex-initialise"], [[ { /* Initialise the linked list of instances created by this module. */ fsalex_instance_list_head = NULL; atexit (fsalex_destroy_module); } ]])) -- Finally, add trailer line (vim) assert(f:write([[ /* vim:set shiftwidth=2: */ ]])) assert(f:close()) -------------------------------------------------------------------------- ----------------******** fsaparse.h ********---------------- print("Creating fsaparse.h...") local f = assert(io.open("fsaparse.h", "w")) assert(f:write([[ /* fsaparse -- Build a structure naming relationships (sequences, alternatives, options and precedence) of tokens ]])) assert(f:write(RawText("Copyright.dfac"), "\n")) assert(f:write(RawText("LicenseWarranty.dfac"), "\n")) assert(f:write(RawText("Authors.dfac"))) assert(f:write([[ /* 2014: Repackaged by "untangle" script, written by behoffski. */ /* This function receives a stream of tokens from fsalex, and processes them to impose precedence rules and to describe complex pattern elements that are beyond the capability of the simple lexer. In addition to the cases explicit in the syntax (e.g. "(ab|c)"), variable-length multibyte encodings (UTF-8; codesets including modifiers and/or shift items) also require these enhanced facilities. */ ]])) -- Add preprocessor lines to make this header file idempotent. assert(f:write([[ #ifndef FSAPARSE_H #define FSAPARSE_H 1 /* Always import environment-specific configuration items first. */ #include <config.h> #include "fsatoken.h" #include "proto-lexparse.h" /* Multiple parser instances can exist in parallel, so define an opaque type to collect together all the context relating to each instance. */ typedef struct fsaparse_ctxt_struct fsaparse_ctxt_t; /* Allow configurable parser/lexer combinations by using a plugin interface for lexer invocation.
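For example (a sketch; fsalex_lex is assumed to match this prototype, and fsalex_exchange the proto-lexparse exchange prototype): fsaparse_lexer (parser, lexer, (fsaparse_lexer_fn_t *) fsalex_lex, (proto_lexparse_exchange_fn_t *) fsalex_exchange); Any token source that honours the proto-lexparse protocol can be plugged in the same way.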
*/ typedef fsatoken_token_t fsaparse_lexer_fn_t (void *lexer_context); /* Functions to initialise and tear down entire module and all associated instances. */ ]])) WriteExternDecl(f, Decls["parse-initialise"]) WriteExternDecl(f, Decls["parse-destroy-module"]) assert(f:write([[ /* Functions to create a new parser instance, and to connect it with a compatible lexer instance. */ ]])) WriteExternDecl(f, Decls["parse-new"]) WriteExternDecl(f, Decls["parse-lexer"]) -- FIXME: The following is a copy-paste-and-rename-edit from the fsalex -- code. Is there a less redundant way of doing this? -- While dfa.h declares dfawarn() and dfaerror(), and demands that the -- client supply functions at link time, we instead provide an -- interface function so that the functions can be handed over -- explicitly. This style may be useful in the future if we want to -- move from a single parser instance to multiple instances (objects?) assert(f:write([[ /* Define function prototypes for warning and error callbacks. */ typedef void fsaparse_warn_callback_fn (const char *); typedef void /* ?? _Noreturn? */ fsaparse_error_callback_fn (const char *); /* Receive functions to deal with exceptions detected by the parser: Warnings and errors. Internally, we add the _Noreturn attribute to the error callback, to help the compiler with code flow analysis. */ extern void fsaparse_exception_fns (fsaparse_ctxt_t *parser, fsaparse_warn_callback_fn *warningfn, fsaparse_error_callback_fn *errorfn); ]])) WriteExternDecl(f, Decls["parse"]) WriteExternDecl(f, Decls["parse-get-token-list"]) -- Finally, add trailer lines (idempotency, vim) assert(f:write([[ #endif /* FSAPARSE_H */ /* vim:set shiftwidth=2: */ ]])) assert(f:close()) ----------------******** fsaparse.c ********---------------- print("Creating fsaparse.c...") local f = assert(io.open("fsaparse.c", "w")) assert(f:write([[ /* fsaparse -- Build a structure naming relationships (sequences, alternatives, backreferences, options and precedence) of tokens ]])) assert(f:write(RawText("Copyright.dfac"), "\n")) assert(f:write(RawText("LicenseWarranty.dfac"), "\n")) assert(f:write(RawText("Authors.dfac"))) assert(f:write([[ /* 2014: Repackaged by "untangle" script, written by behoffski. */ /* This function receives a stream of tokens from fsalex, and processes them to impose precedence rules and to describe complex pattern elements that are beyond the capability of the simple lexer. In addition to the cases explicit in the syntax (e.g. "(ab|c)"), variable-length multibyte encodings (UTF-8; codesets including modifiers and/or shift items) also require these enhanced facilities. */ ]])) assert(f:write([[ /* Always import environment-specific configuration items first. */ #include <config.h> #include <assert.h> #include "charclass.h" #include "fsaparse.h" #include "fsalex.h" #include "fsatoken.h" #include "mbcsets.h" #include "proto-lexparse.h" #include <stddef.h> #include <stdlib.h> #include "xalloc.h" /* gettext.h ensures that we don't use gettext if ENABLE_NLS is not defined */ #include "gettext.h" #define _(str) gettext (str) #include <wchar.h> #include <wctype.h> ]])) -- General helper macros assert(f:write([[ #if HAVE_LANGINFO_CODESET # include <langinfo.h> #endif ]])) assert(f:write([[ /* fsaparse_ctxt: Gather all the context to do with the parser into a single struct. We do this mainly because it makes it easier to contemplate having multiple instances of this module running in parallel, but also because it makes translating from "dfa->" easier. This definition fleshes out the opaque type given in the module header.
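For example (sketch), two fully independent parsers can coexist: p1 = fsaparse_new (); p2 = fsaparse_new (); each is bound to its own lexer, and nothing below is shared between them except the module's instance list.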
*/ struct fsaparse_ctxt_struct { /* Singly-linked list of all parser instances, so destroy_module can release all resources by traversing the list. */ fsaparse_ctxt_t *next_instance; /* Warning and abort functions provided by client. */ fsalex_warn_callback_fn *warn_client; fsalex_error_callback_fn *abandon_with_error; /* Plug-in functions and context to deal with lexer at arm's length. */ proto_lexparse_lex_fn_t *lexer; proto_lexparse_exchange_fn_t *lex_exchange; void *lex_context; /* Information about locale (needs to sync with lexer...?) */ bool multibyte_locale; bool unibyte_locale; /* Variable to store a dynamically-allocated list of wide characters. The parser needs to fetch this list whenever a WCHAR token is received. */ int wide_chars_max; wchar_t *wide_char_list; fsatoken_token_t lookahead_token; size_t current_depth; /* Current depth of a hypothetical stack holding deferred productions. This is used to determine the depth that will be required of the real stack later on in dfaanalyze. */ ]])) -- Ugh... chuck in a slab of the dfa struct here, sigh. assert(f:write(EditedText("dfa-struct parser", " token %*tokens", " fsatoken_token_t *tokens", " token utf8_anychar_", " fsatoken_token_t utf8_anychar_"))) assert(f:write(RawText("dfa-struct parser-multibyte"), "\n")) -- Use mbcsets to handle multibyte sets, instead of doing these ourselves assert(f:write([[ mbcsets_set_t **mbcsets; size_t nmbcsets; size_t mbcsets_alloc; ]])) assert(f:write([[ }; /* Linked list of all instances created by this module. */ static fsaparse_ctxt_t *fsaparse_instance_list_head = NULL; ]])) -- Add utility function "maybe-realloc", previously "REALLOC_IF_NECESSARY", -- here, as a generic helper function. assert(f:write(RawText("maybe-realloc"), "\n")) assert(f:write(EditedText("using_utf8", "\nint\nusing_utf8 ", "\nstatic int\nusing_utf8 "), "\n")) assert(f:write(RawText("recursive-descent parser intro"), "\n")) local Addtok_mbBody = EditedText("addtok_mb", "addtok_mb %(token t,", "addtok_mb (fsaparse_ctxt_t *parser, fsatoken_token_t t,", "dfa%->", "parser->", "MB_CUR_MAX > 1", "parser->multibyte_locale", "%-%-depth", "--parser->current_depth", "%+%+depth", "++parser->current_depth") Addtok_mbBody = TextSubst(Addtok_mbBody, [[ if (depth > parser->depth) parser->depth = depth; ]], [[ if (parser->depth < parser->current_depth) parser->depth = parser->current_depth; ]]) assert(f:write(FSATokenSubst(Addtok_mbBody), "\n")) assert(f:write(EditedText("addtok_wc fwd decl", "addtok_wc %(", "addtok_wc (fsaparse_ctxt_t *parser, "), "\n")) --[[ -- MBCSETS local AddtokBody = EditedText("addtok", "\naddtok %(token t%)", "\naddtok (fsaparse_ctxt_t *parser, fsatoken_token_t t)", "dfa%->", "parser->", "MB_CUR_MAX > 1", "parser->multibyte_locale", " addtok_wc %(", " addtok_wc (parser, ", " addtok_mb %(", " addtok_mb (parser, ", " addtok %(", " addtok (parser, ") assert(f:write(FSATokenSubst(AddtokBody))) --]] assert(f:write([[ /* Sigh. Mbcsets is so disruptive (intimate details hidden, by design), that we are writing addtok here from scratch.
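In outline: an MBCSET whose members are just explicit characters is lowered into a postfix alternation of those characters, e.g. a two-character set {a, b} is emitted as the token sequence a b OR; anything harder (inverted sets, named classes, ranges, collating elements) is re-issued as an MBCSET token for the machinery above, OR-ed with any characters already emitted.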
*/ static void addtok (fsaparse_ctxt_t *parser, fsatoken_token_t t) { bool need_or = false; mbcsets_set_t *work_mbc; bool invert; charclass_t *charclass; size_t nchars; size_t nch_classes; size_t nranges; size_t nequivs; size_t ncoll_elems; if (t != FSATOKEN_TK_MBCSET || parser->unibyte_locale) { addtok_mb (parser, t, 3); return; } work_mbc = parser->mbcsets[parser->nmbcsets - 1]; mbcsets_get_characteristics (work_mbc, &invert, &charclass, &nchars, &nch_classes, &nranges, &nequivs, &ncoll_elems); if (nchars && !invert) { size_t i; wchar_t *char_list; char_list = xnmalloc (nchars, sizeof *char_list); mbcsets_get_chars (work_mbc, char_list); addtok_wc (parser, char_list[0]); for (i = 1; i < nchars; i++) { addtok_wc (parser, char_list[i]); addtok (parser, FSATOKEN_TK_OR); } free (char_list); need_or = true; } /* If the set contains non-trivial components, it's too hard to deal with here, so hand it on to higher-up machinery. */ if (invert || nch_classes || nranges || ncoll_elems) { addtok_mb (parser, FSATOKEN_TK_MBCSET, ((parser->nmbcsets - 1) << 2) + 3); if (need_or) addtok (parser, FSATOKEN_TK_OR); } /* Individual characters have been handled above, so the only case remaining is where codes have been simplified into charclass members. */ else if (charclass) { addtok (parser, FSATOKEN_TK_CSET + charclass_get_index (charclass)); if (need_or) addtok (parser, FSATOKEN_TK_OR); } } ]])) -- CHANGE, possibly buggy: cur_mb_len appears in clearly-connected -- places in the lexer (FETCH_WC and parse_bracket_exp); however, it -- also appears in addtok_wc, in a way that seems to be local-only. -- I'm choosing to add a local cur_mb_len variable in addtok_wc, -- as my belief (after reviewing the code) is that the two uses of -- the same variable are not related. Without this addition, the -- variable is flagged as undeclared, as the other definition is in -- the lexer module. local Addtok_wcMBSBody = EditedText("addtok_wc", "\naddtok_wc %(", "\naddtok_wc (fsaparse_ctxt_t *parser, ", " addtok_mb %(", " addtok_mb (parser, ", " addtok %(", " addtok (parser, ") Addtok_wcMBSBody = TextSubst(Addtok_wcMBSBody, [[ unsigned char buf[MB_LEN_MAX]; mbstate_t s = { 0 }; int i; size_t stored_bytes = wcrtomb ((char *) buf, wc, &s); if (stored_bytes != (size_t) -1) ]], [[ unsigned char buf[MB_LEN_MAX]; mbstate_t s = { 0 }; int i; int cur_mb_len; size_t stored_bytes = wcrtomb ((char *) buf, wc, &s); if (stored_bytes != (size_t) -1) ]]) assert(f:write(FSATokenSubst(Addtok_wcMBSBody))) -- Use beefed-up charclass (setbit_range) and dotclass (from lexer) to -- construct charclasses for UTF-8 sequences without knowing charclass -- internals and also without knowing details of regex -- syntax/configuration regarding end-of-line. We also depend on -- charclass's permission to use 0 as a "not-initialised" sentinel -- here. assert(f:write(EditedText("add_utf8_anychar-decl", "\nadd_utf8_anychar %(void%)", "\nadd_utf8_anychar (fsaparse_ctxt_t *parser)"))) assert(f:write(RawText("add_utf8_anychar-body-start"))) assert(f:write([[ unsigned int i; /* Have we set up the classes for the 1-byte to 4-byte sequence types? */ if (parser->utf8_anychar_classes[0] == 0) { /* No, first time we've been called, so set them up now. */ charclass_t *ccl; const charclass_t *dotclass; /* Index 0: 80-bf -- Non-leading bytes. */ ccl = charclass_alloc (); charclass_setbit_range (0x80, 0xbf, ccl); parser->utf8_anychar_classes[0] = charclass_completed (ccl); /* Index 1: 00-7f -- 1-byte leading seq, minus dotclass exceptions.
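(The 1-byte alternative is the only place where the syntax-driven dot exclusions can apply -- eolbyte without RE_DOT_NEWLINE, NUL with RE_DOT_NOT_NULL -- hence the intersection with the lexer's dotclass below.)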
*/ ccl = charclass_alloc (); charclass_setbit_range (0x00, 0x7f, ccl); fsalex_exchange (parser->lex_context, PROTO_LEXPARSE_OP_GET_DOTCLASS, &dotclass); charclass_intersectset (dotclass, ccl); parser->utf8_anychar_classes[1] = charclass_completed (ccl); /* Index 2: c2-df -- 2-byte sequence. */ ccl = charclass_alloc (); charclass_setbit_range (0xc2, 0xdf, ccl); parser->utf8_anychar_classes[2] = charclass_completed (ccl); /* Index 3: e0-ef -- 3-byte sequence. */ ccl = charclass_alloc (); charclass_setbit_range (0xe0, 0xef, ccl); parser->utf8_anychar_classes[3] = charclass_completed (ccl); /* Index 4: f0-f7 -- 4-byte sequence. */ ccl = charclass_alloc (); charclass_setbit_range (0xf0, 0xf7, ccl); parser->utf8_anychar_classes[4] = charclass_completed (ccl); } ]])) -- Write out utf8 sequence documentation; fix some doc typos. assert(f:write(EditedText("add_utf8_anychar-description", "|[0xe0-0xef[0x80-0xbf][0x80-0xbf]", "|[0xe0-0xef][0x80-0xbf][0x80-0xbf]", "|[0xf0-f7][0x80-0xbf][0x80-0xbf][0x80-0xbf]", "|[0xf0-0xf7][0x80-0xbf][0x80-0xbf][0x80-0xbf]" ))) assert(f:write([[ /* Write out leaf tokens for each of the four possible starting bytes. */ for (i = 1; i < 5; i++) addtok (parser, FSATOKEN_TK_CSET + parser->utf8_anychar_classes[i]); /* Add follow-on classes, plus tokens to build a postfix tree covering all four alternatives of valid UTF-8 sequences. */ for (i = 1; i <= 3; i++) { addtok (parser, FSATOKEN_TK_CSET + parser->utf8_anychar_classes[0]); addtok (parser, FSATOKEN_TK_CAT); addtok (parser, FSATOKEN_TK_OR); } ]])) assert(f:write(RawText("add_utf8_anychar-body-end"), "\n")) assert(f:write(FSATokenSubst(RawText("Grammar summary comment")), "\n")) assert(f:write([[ /* Provide a forward declaration for regexp, as it is at the top of the parse tree, but is referenced by atom, at the bottom of the tree. */ static void regexp (fsaparse_ctxt_t *parser); ]])) --[[]] assert(f:write([[ /* 3 June 2014: I must be a sucker for punishment, and/or going crazy; whatever. I've looked at the partially-edited code for "atom", looking at it in the (incomplete) light of introducing the mbcsets change... and have decided, just as I previously did with closure () below, to rewrite it from scratch. So, here goes... */ static void atom (fsaparse_ctxt_t *parser) { fsatoken_token_t tok = parser->lookahead_token; /* For a unibyte character, it is its own token. */ if (tok >= 0 && tok < FSATOKEN_NOTCHAR) { addtok (parser, tok); parser->lookahead_token = parser->lexer (parser->lex_context); return; } /* For a unibyte character set (CSET + index), it is its own token. */ if (tok >= FSATOKEN_TK_CSET) { addtok (parser, tok); parser->lookahead_token = parser->lexer (parser->lex_context); return; } /* For ANYCHAR in a UTF-8 setting, use a hierarchy of CSETs. */ if (tok == FSATOKEN_TK_ANYCHAR && using_utf8 ()) { /* For UTF-8, expand the period to a series of CSETs that define a valid UTF-8 character. This avoids using the slow multibyte path. I'm pretty sure it would be both profitable and correct to do it for any encoding; however, the optimization must be done manually as it is done above in add_utf8_anychar. So, let's start with UTF-8: it is the most used, and the structure of the encoding makes the correctness more obvious. */ add_utf8_anychar (parser); parser->lookahead_token = parser->lexer (parser->lex_context); return; } /* Use switch to carve up the remaining cases into groups.
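Every case follows the same lookahead discipline -- emit the token(s) for the current lookahead, then advance -- i.e. (sketch): addtok (parser, tok); parser->lookahead_token = parser->lexer (parser->lex_context);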
*/ switch (tok) { case FSATOKEN_TK_WCHAR: { /* Wide character, possibly including equivalent characters (probably due to selection of case folding). */ int nr_wide_chars; nr_wide_chars = fsalex_exchange (parser->lex_context, PROTO_LEXPARSE_OP_GET_WIDE_CHARS, parser->wide_char_list); assert(nr_wide_chars >= 1); if (parser->wide_char_list[0] == WEOF) addtok (parser, FSATOKEN_TK_BACKREF); else { int i; addtok_wc (parser, parser->wide_char_list[0]); for (i = 1; i < nr_wide_chars; i++) { addtok_wc (parser, parser->wide_char_list[i]); addtok (parser, FSATOKEN_TK_OR); } } parser->lookahead_token = parser->lexer (parser->lex_context); return; } case FSATOKEN_TK_MBCSET: { mbcsets_set_t *work_mbc; /* Acquire space to store set pointer. Sadly (?), fsaparse maintains a duplicate list to fsalex at present. */ parser->mbcsets = maybe_realloc (parser->mbcsets, parser->nmbcsets, &parser->mbcsets_alloc, sizeof work_mbc); /* Get set pointer from fsalex, and add it to the list. */ fsalex_exchange (parser->lex_context, PROTO_LEXPARSE_OP_GET_MBCSET, &work_mbc); parser->mbcsets[parser->nmbcsets++] = work_mbc; /* We could fall-through case labels to the addtok code below, but am keeping code sections separate due to an overabundance of caution. */ addtok (parser, tok); parser->lookahead_token = parser->lexer (parser->lex_context); return; } case FSATOKEN_TK_ANYCHAR: case FSATOKEN_TK_BACKREF: case FSATOKEN_TK_BEGLINE: case FSATOKEN_TK_ENDLINE: case FSATOKEN_TK_BEGWORD: case FSATOKEN_TK_ENDWORD: case FSATOKEN_TK_LIMWORD: case FSATOKEN_TK_NOTLIMWORD: addtok (parser, tok); parser->lookahead_token = parser->lexer (parser->lex_context); return; case FSATOKEN_TK_LPAREN: parser->lookahead_token = parser->lexer (parser->lex_context); regexp (parser); if (parser->lookahead_token != FSATOKEN_TK_RPAREN) parser->abandon_with_error (_("unbalanced (")); parser->lookahead_token = parser->lexer (parser->lex_context); return; default: addtok (parser, FSATOKEN_TK_EMPTY); return; } /* NOTREACHED */ } ]])) --[===[ local AtomFn = RawText("atom") AtomFn = TextSubst(AtomFn, [[ static void atom (void) { if (tok == WCHAR) { if (wctok == WEOF) addtok (BACKREF); else { addtok_wc (wctok); if (case_fold) { wchar_t folded[CASE_FOLDED_BUFSIZE]; int i, n = case_folded_counterparts (wctok, folded); for (i = 0; i < n; i++) { addtok_wc (folded[i]); addtok (OR); } } } ]], [[ static void atom (fsaparse_ctxt_t *parser) { fsatoken_token_t tok = parser->lookahead_token; if (tok == WCHAR) { int nr_wide_chars; nr_wide_chars = fsalex_exchange (parser->lex_context, PROTO_LEXPARSE_OP_GET_WIDE_CHARS, parser->wide_char_list); assert (nr_wide_chars >= 1); if (parser->wide_char_list[0] == WEOF) addtok (BACKREF); else { int i; addtok_wc (parser->wide_char_list[0]); for (i = 1; i < nr_wide_chars; i++) { addtok_wc (parser->wide_char_list[i]); addtok (FSATOKEN_TK_OR); } } ]]) AtomFn = TextSubst(AtomFn, [[ else if (tok == LPAREN) { tok = lex (); regexp (); if (tok != RPAREN) ]], [[ else if (tok == LPAREN) { parser->lookahead_token = parser->lexer (parser->lex_context); regexp (parser); tok = parser->lookahead_token; if (tok != RPAREN) ]]) AtomFn = TextSubst(AtomFn, " tok = lex ()", " parser->lookahead_token = parser->lexer (parser->lex_context)") AtomFn = TextSubst(AtomFn, " dfaerror (_(", " parser->abandon_with_error (_(") AtomFn = AtomFn:gsub(" addtok %(", " addtok (parser, ") AtomFn = AtomFn:gsub(" addtok_wc %(", " addtok_wc (parser, ") AtomFn = AtomFn:gsub(" add_utf8_anychar %(%)", " add_utf8_anychar (parser)") assert(f:write(FSATokenSubst(AtomFn), "\n")) 
--]===] assert(f:write(FSATokenSubst(EditedText("nsubtoks", "\nnsubtoks %(", "\nnsubtoks (fsaparse_ctxt_t *parser, ", " nsubtoks %(", " nsubtoks (parser, ", "dfa%->", "parser->")), "\n")) assert(f:write(FSATokenSubst(EditedText("copytoks", "\ncopytoks %(", "\ncopytoks (fsaparse_ctxt_t *parser, ", " addtok_mb %(", " addtok_mb (parser, ", "dfa%->", "parser->", "MB_CUR_MAX > 1", "parser->multibyte_locale")), "\n")) assert(f:write([[ /* Rewriting fsaparse:closure () from scratch; original is clever but a little tricky to follow, so I'm trying to break up a while + compound-if loop into a simpler construct (more like a finite-state machine). Also, edits such as replacing "dfa->" with "parser->" are done here, adding "parser" as a parameter in lots of places, as well as the long-winded "FSATOKEN_TK_" prefix. I'm not sure if this version is an improvement over the original; the need to use "parser->lookahead_token" instead of "tok" influenced my decision to try this... but the jury is still out. */ static void closure (fsaparse_ctxt_t *parser) { restart_closure: atom (parser); for (;;) { switch (parser->lookahead_token) { case FSATOKEN_TK_QMARK: case FSATOKEN_TK_STAR: case FSATOKEN_TK_PLUS: addtok (parser, parser->lookahead_token); parser->lookahead_token = parser->lexer (parser->lex_context); continue; case FSATOKEN_TK_REPMN: /* REPMN needs extra work; move outside the switch statement. */ break; default: /* Merely let the initial atom call stand as our return result. */ return; } /* Deal with REPMN{min, max} cases in a separate block. */ { int i; size_t prev_sub_index, ntokens; int minrep, maxrep; /* Get the {min, max} pair decoded by the lexer. */ minrep = parser->lex_exchange (parser->lex_context, PROTO_LEXPARSE_OP_GET_REPMN_MIN, NULL); maxrep = parser->lex_exchange (parser->lex_context, PROTO_LEXPARSE_OP_GET_REPMN_MAX, NULL); /* Find out how many tokens are in the preceding token list that are covered by this REPMN directive. This involves carefully working backwards through the linear, postfix token ordering. */ ntokens = nsubtoks (parser, parser->tindex); /* If min and max are both zero, merely remove preceding subexpression, get a new token, and restart the atom/closure processing from the top of the function. Not sure if people will like this goto statement, but we'll give it a whirl. */ if (minrep == 0 && maxrep == 0) { parser->tindex -= ntokens; parser->lookahead_token = parser->lexer (parser->lex_context); goto restart_closure; } /* Non-zero min or max, defined as follows: {n} The preceding item is matched exactly n times. {n,} The preceding item is matched n or more times. {,m} The preceding item is matched at most m times (GNU ext.) {n,m} The preceding item is matched at least n, but not more than m times. For {n,} and {,m} cases, the omitted parameter is reported here as a negative value. */ prev_sub_index = parser->tindex - ntokens; if (maxrep < 0) addtok (parser, FSATOKEN_TK_PLUS); if (minrep == 0) addtok (parser, FSATOKEN_TK_QMARK); for (i = 1; i < minrep; ++i) { copytoks (parser, prev_sub_index, ntokens); addtok (parser, FSATOKEN_TK_CAT); } for (; i < maxrep; ++i) { copytoks (parser, prev_sub_index, ntokens); addtok (parser, FSATOKEN_TK_QMARK); addtok (parser, FSATOKEN_TK_CAT); } /* Prime the parser with the next token after REPMN and loop.
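(Worked example of the expansion above: for "x{2,4}", ntokens == 1, so the emitted postfix stream is x x CAT x QMARK CAT x QMARK CAT, i.e. two mandatory copies followed by two optional concatenated copies.)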
*/ parser->lookahead_token = parser->lexer (parser->lex_context); } } } ]])) local BranchBody = EditedText("branch", "\nbranch %(void%)", "\nbranch (fsaparse_ctxt_t *parser)", " addtok %(", " addtok (parser, ", " closure %(%)", " closure (parser)") BranchBody = TextSubst(BranchBody, [[ { closure (parser); while (tok != RPAREN && tok != OR && tok >= 0) { closure (parser); addtok (parser, CAT); } ]], [[ { fsatoken_token_t tok; closure (parser); tok = parser->lookahead_token; while (tok != RPAREN && tok != OR && tok >= 0) { closure (parser); tok = parser->lookahead_token; addtok (parser, CAT); } ]]) assert(f:write(FSATokenSubst(BranchBody), "\n")) local RegexpBody = EditedText("regexp", "\nregexp %(void%)", "\nregexp (fsaparse_ctxt_t *parser)", "dfa%->", "parser->", " addtok %(", " addtok (parser, ", " atom %(%)", " atom (parser)", " branch %(%)", " branch (parser)", " closure %(%)", " closure (parser)") RegexpBody = TextSubst(RegexpBody, " tok = lex ()", " parser->lookahead_token = parser->lexer (parser->lex_context)") RegexpBody = TextSubst(RegexpBody, [[ while (tok == OR) { ]], [[ while (parser->lookahead_token == OR) { ]]) assert(f:write(FSATokenSubst(RegexpBody), "\n")) -- Rewrite body of fsaparse_parse (dfaparse) without substitutions, as much -- of the initialisation code here has been made redundant as the client can -- instantiate and configure the lexer independently. assert(f:write(Decls["parse"], [[ { /* Obtain an initial token for lookahead, and keep tracking tree depth. */ parser->lookahead_token = parser->lexer (parser->lex_context); parser->current_depth = parser->depth; /* Run regexp to manage the next level of parsing. */ regexp (parser); if (parser->lookahead_token != FSATOKEN_TK_END) parser->abandon_with_error (_("unbalanced )")); /* If multiple expressions are parsed, second and subsequent patterns are presented as alternatives to preceding patterns. */ addtok (parser, FSATOKEN_TK_END - parser->nregexps); addtok (parser, FSATOKEN_TK_CAT); if (parser->nregexps) addtok (parser, FSATOKEN_TK_OR); ++parser->nregexps; } ]])) assert(f:write([[ /* Receive functions to deal with exceptions detected by the parser: Warnings and errors. Internally, we add the _Noreturn attribute to the error callback, to help the compiler with code flow analysis. */ extern void fsaparse_exception_fns (fsaparse_ctxt_t *parser, fsaparse_warn_callback_fn *warningfn, fsaparse_error_callback_fn *errorfn) { /* Exception handling is done by explicit callbacks. */ parser->warn_client = warningfn; parser->abandon_with_error = errorfn; } /* Add "not provided!" stub function that gets called if the client fails to provide proper resources. This is a hack, merely to get the module started; better treatment needs to be added later. */ static void no_function_provided (void *unused) { assert (!"fsaparse: Plug-in function required, but not provided."); } ]])) assert(f:write(Decls["parse-lexer"], [[ { bool is_multibyte; int wchars_max; /* Record supplied lexer function and context for use later. */ parser->lex_context = lexer_context; parser->lexer = lex_fn; parser->lex_exchange = lex_exchange_fn; /* Query lexer to get multibyte nature of this locale. */ is_multibyte = lex_exchange_fn (lexer_context, PROTO_LEXPARSE_OP_GET_IS_MULTIBYTE_LOCALE, NULL); parser->multibyte_locale = is_multibyte; parser->unibyte_locale = ! is_multibyte; /* Set up WCHAR token infrastructure: Wide-char list/array/whatever.
We free the wide_char_list first, in case someone calls this function multiple times, so that earlier allocations are correctly freed up, and we use the value advised by the most recent lexer presented to us. */ wchars_max = lex_exchange_fn (lexer_context, PROTO_LEXPARSE_OP_GET_WIDE_CHAR_LIST_MAX, NULL); parser->wide_chars_max = wchars_max; free (parser->wide_char_list); parser->wide_char_list = NULL; if (wchars_max) parser->wide_char_list = xnmalloc (wchars_max, sizeof (wchar_t)); } ]])) -- Define a "new" function to generate an initial parser context. assert(f:write(Decls["parse-new"], [[ { fsaparse_ctxt_t *new_context; /* Acquire zeroed memory for new parser context. */ new_context = XZALLOC (fsaparse_ctxt_t); /* ?? Point warning, error and lexer functions to a "you need to tell me these first!" function? */ new_context->warn_client = (fsaparse_warn_callback_fn *) no_function_provided; new_context->abandon_with_error = (fsaparse_error_callback_fn *) no_function_provided; new_context->lexer = (fsaparse_lexer_fn_t *) no_function_provided; /* Default to unibyte locale... but we synchronise with the lexer later. */ new_context->multibyte_locale = false; new_context->unibyte_locale = true; /* Add this instance at the head of the module's list. */ new_context->next_instance = fsaparse_instance_list_head; fsaparse_instance_list_head = new_context; return new_context; } ]])) -- FSAMusts, and also debugging users, use the final token list generated -- by the parser, so provide an interface for them to access the list. assert(f:write(Decls["parse-get-token-list"], [[ { *nr_tokens = parser->tindex; *token_list = parser->tokens; } ]])) assert(f:write([[ /* Internal function to free all resources directly or indirectly used by an instance. The pointer is no longer valid after this call. */ static void free_instance (fsaparse_ctxt_t *parser) { free (parser->wide_char_list); free (parser->tokens); free (parser->multibyte_prop); free (parser); } ]])) assert(f:write(Decls["parse-destroy-module"], [[ { fsaparse_ctxt_t *p_list; fsaparse_ctxt_t *p_next; /* Move the global list head into a local variable, and immediately clear the global. This is a half-hearted attempt to avoid race conditions; to do things properly, a system-wide atomic (locked, including multi-CPU cache coherency) operation should be used. */ p_list = fsaparse_instance_list_head; fsaparse_instance_list_head = NULL; /* Traverse the list of instances, releasing all resources associated with each one. */ while (p_list) { p_next = p_list->next_instance; free_instance (p_list); p_list = p_next; } } ]])) assert(f:write(Decls["parse-initialise"], [[ { /* Initialise the linked list of instances created by this module. */ fsaparse_instance_list_head = NULL; /* Clean up resources upon exit. */ atexit (fsaparse_destroy_module); } ]])) -- Finally, add trailer lines (vim) assert(f:write([[ /* vim:set shiftwidth=2: */ ]])) assert(f:close()) -------------------------------------------------------------------------- ----------------******** fsamusts.h ********---------------- print("Creating fsamusts.h...") local f = assert(io.open("fsamusts.h", "w")) assert(f:write([[ /* fsamusts -- Report a list of must-have simple strings in the pattern ]])) assert(f:write(RawText("Copyright.dfac"), "\n")) assert(f:write(RawText("LicenseWarranty.dfac"), "\n")) assert(f:write(RawText("Authors.dfac"))) assert(f:write([[ /* 2014: Repackaged by "untangle" script, written by behoffski. */ /* ?? Insert long description/discussion here.

--------------------------------------------------------------------------

----------------******** fsamusts.h ********----------------

print("Creating fsamusts.h...")
local f = assert(io.open("fsamusts.h", "w"))

assert(f:write([[
/* fsamusts -- Report a list of must-have simple strings in the pattern
]]))
assert(f:write(RawText("Copyright.dfac"), "\n"))
assert(f:write(RawText("LicenseWarranty.dfac"), "\n"))
assert(f:write(RawText("Authors.dfac")))
assert(f:write([[
/* 2014: Repackaged by "untangle" script, written by behoffski. */

/* ?? Insert long description/discussion here.
   */
]]))

-- Add preprocessor lines to make this header file idempotent.
assert(f:write([[
#ifndef FSAMUSTS_H
#define FSAMUSTS_H 1

/* Always import environment-specific configuration items first. */
#include <config.h>

#include "fsatoken.h"
]]))

-- Rather than accumulating "must" strings inside the internals, and
-- obtaining a pointer to the list at the end, we explicitly name the
-- list in the "must" (note: *not* "musts") call, and receive an
-- updated pointer as the return value.  Because the structure is
-- involved more heavily in the interface, change it to a typedef.
assert(f:write(RawText("dfamust-struct description")))
assert(f:write([[
typedef struct fsamusts_list_element
{
  bool exact;
  bool begline;
  bool endline;
  char *must;
  struct fsamusts_list_element *next;
} fsamusts_list_element_t;

]]))

WriteExternDecl(f, Decls["must"])

-- Finally, add trailer lines (idempotency, vim)
assert(f:write([[
#endif /* FSAMUSTS_H */

/* vim:set shiftwidth=2: */
]]))
assert(f:close())

----------------******** fsamusts.c ********----------------

print("Creating fsamusts.c...")
local f = assert(io.open("fsamusts.c", "w"))

assert(f:write([[
/* fsamusts -- Report a list of must-have simple strings in the pattern
]]))
assert(f:write(RawText("Copyright.dfac"), "\n"))
assert(f:write(RawText("LicenseWarranty.dfac"), "\n"))
assert(f:write(RawText("Authors.dfac")))
assert(f:write([[
/* 2014: Repackaged by "untangle" script, written by behoffski. */

/* (?? Long description/discussion goes here...) */
]]))

assert(f:write([[
/* Always import environment-specific configuration items first. */
#include <config.h>

#include <assert.h>
#include "charclass.h"
#include <ctype.h>
#include "fsamusts.h"
#include "fsatoken.h"
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "xalloc.h"

#if DEBUG
#include <stdio.h>
#endif /* DEBUG */
]]))

assert(f:write(RawText("STREQ"), "\n"))
assert(f:write(RawText("'musts' explanation"), "\n"))
assert(f:write(RawText("icatalloc"), "\n"))
assert(f:write(RawText("istrstr"), "\n"))
assert(f:write(RawText("freelist"), "\n"))
assert(f:write(RawText("enlist"), "\n"))
assert(f:write(RawText("comsubs"), "\n"))
assert(f:write(RawText("addlists"), "\n"))
assert(f:write(RawText("inboth"), "\n"))
assert(f:write(RawText("must typedef"), "\n"))
assert(f:write(RawText("allocmust"), "\n"))
assert(f:write(RawText("resetmust"), "\n"))
assert(f:write(RawText("freemust"), "\n"))

-- Change dfamust to fsamusts_must, remove dfa struct, and rename tokens
assert(f:write(Decls["must"]))
local MustBody = EditedText("dfamust definition",
        " token t;", " fsatoken_token_t t;",
        " token t = ", " fsatoken_token_t t = ",
        " prtok ", " fsatoken_prtok ")
MustBody = TextSubst(MustBody, [[
  struct dfamust *dm;
]], "")
MustBody = TextSubst(MustBody, "d->tindex", "nr_tokens")
MustBody = TextSubst(MustBody, "d->tokens", "token_list")
MustBody = TextSubst(MustBody, "d->musts", "must_list")
MustBody = TextSubst(MustBody, "!d->multibyte", "unibyte_locale")
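
-- Map the direct charclass-bitmask accesses in dfamust onto the opaque
-- charclass module interface; roughly:
--      &d->charclasses[t - CSET]   becomes  charclass_get_pointer (t - CSET)
--      tstbit (j, *ccl)            becomes  charclass_tstbit (j, ccl)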
MustBody = TextSubst(MustBody, [[
      charclass *ccl = &d->charclasses[t - CSET];
      int j;
      for (j = 0; j < NOTCHAR; j++)
        if (tstbit (j, *ccl))
          break;
      if (! (j < NOTCHAR))
        break;
      t = j;
      while (++j < NOTCHAR)
        if (tstbit (j, *ccl)
]], [[
      charclass_t *ccl = charclass_get_pointer (t - CSET);
      int j;
      for (j = 0; j < NOTCHAR; j++)
        if (charclass_tstbit (j, ccl))
          break;
      if (! (j < NOTCHAR))
        break;
      t = j;
      while (++j < NOTCHAR)
        if (charclass_tstbit (j, ccl)
]])
MustBody = TextSubst(MustBody, [[
 done:
  if (*result)
    {
      dm = xmalloc (sizeof *dm);
      dm->exact = exact;
      dm->begline = begline;
      dm->endline = endline;
      dm->must = xstrdup (result);
      dm->next = must_list;
      must_list = dm;
    }
  while (mp)
    {
      must *prev = mp->prev;
      freemust (mp);
      mp = prev;
    }
]], [[
 done:
  if (strlen (result))
    {
      fsamusts_list_element_t *dm;

      dm = xmalloc (sizeof *dm);
      dm->exact = exact;
      dm->begline = begline;
      dm->endline = endline;
      dm->must = xstrdup (result);
      dm->next = must_list;
      must_list = dm;
    }
  while (mp)
    {
      must *prev = mp->prev;
      freemust (mp);
      mp = prev;
    }
  return must_list;
]])
assert(f:write(FSATokenSubst(MustBody), "\n"))

-- Not needed: assert(f:write(RawText("dfamusts"), "\n"))

-- Finally, add trailer lines (vim)
assert(f:write([[
/* vim:set shiftwidth=2: */
]]))
assert(f:close())
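
-- The high-level-dfa.[ch] experiment below is parked inside a Lua block
-- comment (--[=[ ... --]=]), so those files are not generated at present;
-- it sketches how the DFA construction/execution layers might be split
-- out once the lexer/parser/musts modules have settled down.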

--------------------------------------------------------------------------
--[=[
----------------******** high-level-dfa.h ********----------------

print("Creating high-level-dfa.h...")
local f = assert(io.open("high-level-dfa.h", "w"))

assert(f:write([[
/* high_level_dfa -- High-level dfa functions that process lower-level info
]]))
assert(f:write(RawText("Copyright.dfac"), "\n"))
assert(f:write(RawText("LicenseWarranty.dfac"), "\n"))
assert(f:write(RawText("Authors.dfac")))
assert(f:write([[
/* 2014: Repackaged by "untangle" script, written by behoffski. */

/* (?? Long description/discussion goes here...) */
]]))

-- Add preprocessor lines to make this header file idempotent.
assert(f:write([[
#ifndef HIGH_LEVEL_DFA_H
#define HIGH_LEVEL_DFA_H 1

/* Always import environment-specific configuration items first. */
#include <config.h>

#include "fsatoken.h"
]]))

assert(f:write([[
/* Functions to initialise and tear down entire module and all
   associated instances. */
]]))
WriteExternDecl(f, Decls["hldfa-initialise"])
WriteExternDecl(f, Decls["hldfa-destroy-module"])

assert(f:write([[
/* Functions to create a new parser instance, and to connect it with a
   compatible lexer instance. */
]]))
-- WriteExternDecl(f, Decls["hldfa-new"])

-- FIXME: The following is a copy-paste-and-rename-edit from the fsalex
-- code.  Is there a less redundant way of doing this?
-- While dfa.h declares dfawarn() and dfaerror(), and demands that the
-- client supply functions at link time, we instead provide an interface
-- function so that the functions can be handed over explicitly.  This
-- style may be useful in the future if we want to move from a single
-- hldfa instance to multiple instances (objects?)
assert(f:write([[
/* Define function prototypes for warning and error callbacks. */
typedef void hldfa_warn_callback_fn (const char *);
typedef void /* ?? _Noreturn? */ hldfa_error_callback_fn (const char *);
]]))

-- Finally, add trailer lines (idempotency, vim)
assert(f:write([[
#endif /* HIGH_LEVEL_DFA_H */

/* vim:set shiftwidth=2: */
]]))

--------------------------------------------------------------------------

----------------******** high-level-dfa.c ********----------------

print("Creating high-level-dfa.c...")
local f = assert(io.open("high-level-dfa.c", "w"))

assert(f:write([[
/* high_level_dfa -- High-level dfa functions that process lower-level info
]]))
assert(f:write(RawText("Copyright.dfac"), "\n"))
assert(f:write(RawText("LicenseWarranty.dfac"), "\n"))
assert(f:write(RawText("Authors.dfac")))
assert(f:write([[
/* 2014: Repackaged by "untangle" script, written by behoffski. */

/* (?? Long description/discussion goes here...)
   */
]]))

assert(f:write([[
/* Always import environment-specific configuration items first. */
#include <config.h>

#include <assert.h>
#include "charclass.h"
#include <ctype.h>
#include "high-level-dfa.h"
#include "fsatoken.h"
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "xalloc.h"

#if DEBUG
#include <stdio.h>
#endif /* DEBUG */
]]))

assert(f:write(RawText("PositionAtom+DynSet"), "\n"))
assert(f:write(RawText("leaf_set_typedef"), "\n"))
assert(f:write(RawText("dfa_state_typedef"), "\n"))
assert(f:write(RawText("state_num_typedef"), "\n"))
assert(f:write(RawText("copy"), "\n"))
assert(f:write(RawText("alloc_position_set"), "\n"))
assert(f:write(RawText("insert"), "\n"))
assert(f:write(RawText("merge"), "\n"))
assert(f:write(RawText("delete"), "\n"))
assert(f:write(RawText("state_index"), "\n"))
assert(f:write(RawText("epsclosure"), "\n"))
assert(f:write(RawText("charclass_context"), "\n"))
assert(f:write(RawText("state_separate_contexts"), "\n"))
assert(f:write(RawText("dfaanalyse summary comment"), "\n"))
assert(f:write(RawText("dfaanalyse"), "\n"))
assert(f:write(RawText("dfastate summary comment"), "\n"))
assert(f:write(RawText("dfastate"), "\n"))
assert(f:write(RawText("realloc_trans_if_necessary"), "\n"))
assert(f:write(RawText("build_state"), "\n"))
assert(f:write(RawText("Multibyte fns section comment"), "\n"))
assert(f:write(RawText("status_transit_state typedef"), "\n"))
assert(f:write(RawText("transit_state_singlebyte"), "\n"))
assert(f:write(RawText("match_anychar"), "\n"))
assert(f:write(RawText("match_mb_charset"), "\n"))
assert(f:write(RawText("check_matching_with_multibyte_ops"), "\n"))
assert(f:write(RawText("transit_state_consume_1char"), "\n"))
assert(f:write(RawText("transit_state"), "\n"))
assert(f:write(RawText("dfaexec"), "\n"))
assert(f:write(RawText("dfasuperset"), "\n"))
assert(f:write(RawText("ddaisfast"), "\n"))
assert(f:write(RawText("free_mbdata"), "\n"))
assert(f:write(RawText("dfainit"), "\n"))
assert(f:write(RawText("dfaoptimize"), "\n"))
assert(f:write(RawText("dfassbuild"), "\n"))
assert(f:write(RawText("dfacomp"), "\n"))
assert(f:write(RawText("dfafree"), "\n"))
assert(f:write(RawText("dfaalloc"), "\n"))
assert(f:close())
--]=]

--------------------------------------------------------------------------

----------------******** dfa-prl.c ********----------------

-- dfa-prl.c -- "Parallel" version of dfa.c, which is used to check the
-- behaviour of the new code by running it in parallel with the existing
-- code, and verifying that the outputs are identical.

print("Generating dfa-prl.c by copying dfa.c and then applying edits")
os.execute("cp -fp dfa.c dfa-prl.c")

-- Read the entire file into a single segment
local dfaprlc = NewFileTable("dfa-prl.c")
Section("dfa-prl", dfaprlc, 0)
Segs:Tag("dfa-prl.c", 1, #dfaprlc)

-- Apply edits rather crudely, sigh.

-- dfasyntax() needs to be modified to initialise fsalex_syntax();
-- this is also perhaps the best point to add once-off initialisation
-- calls for the new modules.

-- We change dfa-prl.c's lex() function into OriginalLexFunction() (or
-- whatever), introduce a new lex() function that calls the new code and
-- the existing code side-by-side, and compares the result.

local body = RawText("dfa-prl.c")
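
-- The first edit keys on dfa.c's run of system includes, and splices in
-- the new modules' headers immediately afterwards, together with a log
-- file handle and static lexer/parser/musts contexts for the hooks that
-- follow.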
body = TextSubst(body, [[
#include <assert.h>
#include <ctype.h>
#include <stdio.h>
#include <sys/types.h>
#include <stdlib.h>
#include <limits.h>
#include <string.h>
]], [[
#include <assert.h>
#include <ctype.h>
#include <stdio.h>
#include <sys/types.h>
#include <stdlib.h>
#include <limits.h>
#include <string.h>

/* HOOK: Hack in interfaces to new charclass and fsa* modules.
   */
#include "charclass.h"
#include "fsatoken.h"
#include "fsalex.h"
#include "fsamusts.h"
#include "fsaparse.h"
#include "mbcsets.h"
#include "proto-lexparse.h"

/* HOOK: File handle for hook/parallel lex/parse debug/log messages */
FILE *pll_log = NULL;
#define HOOK_LOG_FILENAME "/tmp/parallel.log"

/* HOOK: Static variables to hold opaque parser and lexer contexts. */
static fsaparse_ctxt_t *parser = NULL;
static fsalex_ctxt_t *lexer = NULL;

/* HOOK: Store musts list as a static variable so we can free it
   correctly. */
static fsamusts_list_element_t *musts = NULL;
]])

-- When dfasyntax receives syntax options, also tell fsalex_syntax.
body = TextSubst(body, [[
/* Entry point to set syntax options.  */
void
dfasyntax (reg_syntax_t bits, int fold, unsigned char eol)
{
  unsigned int i;
  syntax_bits_set = 1;
  syntax_bits = bits;
  case_fold = fold != 0;
  eolbyte = eol;
]], [[
typedef struct regex_name_mapping_struct
{
  reg_syntax_t flag;
  const char *name;
} regex_name_mapping_t;

static regex_name_mapping_t regex_names[] = {
  {RE_BACKSLASH_ESCAPE_IN_LISTS, "backslash_escape_in_lists"},
  {RE_BK_PLUS_QM, "bk_plus_qm"},
  {RE_CHAR_CLASSES, "char_classes"},
  {RE_CONTEXT_INDEP_ANCHORS, "context_indep_anchors"},
  {RE_CONTEXT_INDEP_OPS, "context_indep_ops"},
  {RE_CONTEXT_INVALID_OPS, "context_invalid_ops"},
  {RE_DOT_NEWLINE, "dot_newline"},
  {RE_DOT_NOT_NULL, "dot_not_null"},
  {RE_HAT_LISTS_NOT_NEWLINE, "hat_lists_not_newline"},
  {RE_INTERVALS, "intervals"},
  {RE_LIMITED_OPS, "limited_ops"},
  {RE_NEWLINE_ALT, "newline_alt"},
  {RE_NO_BK_BRACES, "no_bk_braces"},
  {RE_NO_BK_PARENS, "no_bk_parens"},
  {RE_NO_BK_REFS, "no_bk_refs"},
  {RE_NO_BK_VBAR, "no_bk_vbar"},
  {RE_NO_EMPTY_RANGES, "no_empty_ranges"},
  {RE_UNMATCHED_RIGHT_PAREN_ORD, "unmatched_right_paren_ord"},
  {RE_NO_POSIX_BACKTRACKING, "no_posix_backtracking"},
  {RE_NO_GNU_OPS, "no_gnu_ops"},
  {RE_DEBUG, "debug"},
  {RE_INVALID_INTERVAL_ORD, "invalid_interval_ord"},
  {RE_ICASE, "icase"},
  {RE_CARET_ANCHORS_HERE, "caret_anchors_here"},
  {RE_CONTEXT_INVALID_DUP, "context_invalid_dup"},
  {RE_NO_SUB, "no_sub"},
  {0, NULL}
};

/* Entry point to set syntax options.  */
void
dfasyntax (reg_syntax_t bits, int fold, unsigned char eol)
{
  unsigned int i;

  /* Hook: Debug buffer to record search syntax specifications. */
  static char buf[256];
  char *p_buf;
  char *locale;

  syntax_bits_set = 1;
  syntax_bits = bits;
  case_fold = fold != 0;
  eolbyte = eol;

  HOOK_set_up_fsa_stuff_if_not_done_already ();

  /* HOOK: Tell fsalex module about syntax selections. */
  fsalex_syntax (lexer, bits, fold, eol);

  /* HOOK: Record syntax selections in debug logfile. */
  if (! pll_log)
    pll_log = fopen (HOOK_LOG_FILENAME, "a");
  locale = setlocale (LC_ALL, NULL);
  fprintf (pll_log, "\nSyntax: Case fold: %d; eol char: %02x; locale: %s",
           fold, (int) eol, locale);
  p_buf = buf;
  *p_buf++ = '\n';
  *p_buf++ = ' ';
  *p_buf = '\0';
  for (i = 0; regex_names[i].name; i++)
    {
      char flag_char = (bits & regex_names[i].flag) ? '+' : '-';
      p_buf += sprintf (p_buf, " %c%s", flag_char, regex_names[i].name);
      if (strlen (buf) >= 82)
        {
          fprintf (pll_log, "%s", buf);
          p_buf = &buf[2];
          *p_buf = '\0';
        }
    }
  fprintf (pll_log, "%s\n", buf);

  /* Okay, now the parser can talk to a lexer that knows locale and
     syntax configuration details. */
  fsaparse_lexer (parser, lexer,
                  (proto_lexparse_lex_fn_t *) fsalex_lex,
                  (proto_lexparse_exchange_fn_t *) fsalex_exchange);
]])
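
-- With the edit above in place, each dfasyntax() call also appends a
-- human-readable syntax summary to the logfile, one "+name"/"-name"
-- entry per regex_names[] flag; the output looks roughly like this
-- (illustrative, not captured from a real run):
--
--   Syntax: Case fold: 0; eol char: 0a; locale: C
--     +char_classes +intervals -bk_plus_qm ...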

body = TextSubst (body, [[
/* Entry point to set syntax options.  */
void
dfasyntax (reg_syntax_t bits, int fold, unsigned char eol)
]], [[
/* HOOK: When we abort, get rid of resources that we've acquired, before
   finally handing control to the abort mechanism.  We do this to
   minimise resource allocation errors reported by Valgrind; I believe
   that it is not vital to do so otherwise (but I may be wrong). */
static void
HOOK_tear_down_additional_infrastructure (void)
{
  /* Modules destroy themselves via a module-wide atexit ()
     registration. */
#if 0
  fsalex_destroy_module ();
  fsaparse_destroy_module ();
  charclass_destroy_module ();
#endif /* 0 */
  parser = NULL;
  lexer = NULL;

  /* fsatoken, proto-lexparse and dfa-prl itself do not need attention
     (yet?). */

  /* Release the "musts" list we compiled earlier. */
  {
    fsamusts_list_element_t *mlist;
    fsamusts_list_element_t *mnext;

    for (mlist = musts; mlist; mlist = mnext)
      {
        mnext = mlist->next;
        free (mlist->must);
        free (mlist);
      }
    musts = NULL;
  }
}

/* HOOK: Provide error/warning functions that include logging. */
static void
lexer_warn (const char *warning)
{
  if (! pll_log)
    pll_log = fopen (HOOK_LOG_FILENAME, "a");
  fprintf (pll_log, "lexer warning: %s\n", warning);
  fflush (NULL);
  dfawarn (warning);
}

static _Noreturn void
lexer_error (const char *reason)
{
  if (! pll_log)
    pll_log = fopen (HOOK_LOG_FILENAME, "a");
  fprintf (pll_log, "lexer error: %s\n", reason);
  HOOK_tear_down_additional_infrastructure ();
  fflush (NULL);
  dfaerror (reason);
}

static void
parser_warn (const char *warning)
{
  if (! pll_log)
    pll_log = fopen (HOOK_LOG_FILENAME, "a");
  fprintf (pll_log, "parser warning: %s\n", warning);
  fflush (NULL);
  dfawarn (warning);
}

static _Noreturn void
parser_error (const char *reason)
{
  if (! pll_log)
    pll_log = fopen (HOOK_LOG_FILENAME, "a");
  fprintf (pll_log, "parser error: %s\n", reason);
  HOOK_tear_down_additional_infrastructure ();
  fflush (NULL);
  dfaerror (reason);
}

static void
HOOK_set_up_fsa_stuff_if_not_done_already (void)
{
  /* If lexer context is present, this function has been run
     previously. */
  if (lexer != NULL)
    return;

  /* Initialise modules before creating instances. */
  fsalex_initialise ();
  fsaparse_initialise ();

  /* Create a new lexer instance, and give it error/warning fns */
  lexer = fsalex_new ();
  fsalex_exception_fns (lexer, lexer_warn, lexer_error);

  /* Start with a pool of 10 charclasses. */
  charclass_initialise (10);

  /* Initialise multibyte-char-sets module. */
  mbcsets_initialise ();

  /* Create a new parser instance, and give it error/warning functions.
     Defer connecting up the lexer until lexer syntax has been set up. */
  parser = fsaparse_new ();
  fsaparse_exception_fns (parser, parser_warn, parser_error);
}

/* Entry point to set syntax options.  */
void
dfasyntax (reg_syntax_t bits, int fold, unsigned char eol)
]])

-- Whenever lex is called to get a token, call the original lex and also
-- fsalex_lex in parallel, and compare the results.  Two changes to do
-- this: 1. Rename "lex" to "original_lex".  2. Add a new "lex" function
-- that calls both "original_lex" and then "fsalex_lex", and compares
-- the result.
body = TextSubst(body, [[
static token
lex (void)
{
  int c, c2;
  bool backslash = false;
]], [[
static token
original_lex (void)
{
  int c, c2;
  bool backslash = false;
]])
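
-- In the comparison wrapper below, tokens below CSET must match exactly
-- between the two streams; CSET tokens are instead compared member by
-- member (via charclass_equal), since the two sides may hand out
-- different class indices for equivalent sets.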
body = TextSubst(body, [[
/* Recursive descent parser for regular expressions.  */
static token tok;       /* Lookahead token.  */
static size_t depth;    /* Current depth of a hypothetical stack
]], [[
static token
lex (void)
{
  token original_token;
  fsatoken_token_t fsalex_token;
  const char *alarm;

  original_token = original_lex ();
  fsalex_token = fsalex_lex (lexer);
  alarm = "";
  if ((original_token != fsalex_token) && (original_token < CSET))
    alarm = " *** ERROR: Token mismatch";
  fprintf (pll_log, "Token debug: Original, fsalex: %08lx %08lx %s\n",
           original_token, fsalex_token, alarm);
  if (fsalex_token == FSATOKEN_TK_REPMN)
    {
      int x_minrep, x_maxrep;
      x_minrep = fsalex_exchange (lexer, PROTO_LEXPARSE_OP_GET_REPMN_MIN,
                                  NULL);
      x_maxrep = fsalex_exchange (lexer, PROTO_LEXPARSE_OP_GET_REPMN_MAX,
                                  NULL);
      fprintf (pll_log, " Original REPMN{%d,%d}; ", minrep, maxrep);
      fprintf (pll_log, " FSATOKEN_TK_REPMN{%d,%d}\n", x_minrep, x_maxrep);
      if ((x_minrep != minrep) || (x_maxrep != maxrep))
        fprintf (pll_log, " *** ERROR: Min, max rep doesn't match\n");
    }
  else if (fsalex_token >= FSATOKEN_TK_CSET)
    {
      size_t index;
      unsigned int *orig_ccl;
      int i;
      charclass_t *charset;
      char *description;
      static char buf[256];
      char *p_buf;

      /* Nybble (4bit)-to-char conversion array for little-bit-endian
         nybbles. */
      static const char *disp_nybble = "084c2a6e195d3b7f";

      /* Report details of the original charclass produced by dfa.c. */
      index = original_token - CSET;
      p_buf = buf;
      orig_ccl = dfa->charclasses[index];
      for (i = 0; i < CHARCLASS_WORDS; i += 2)
        {
          charclass_word j = orig_ccl[i];
          *p_buf++ = ' ';
          *p_buf++ = disp_nybble[(j >>  0) & 0x0f];
          *p_buf++ = disp_nybble[(j >>  4) & 0x0f];
          *p_buf++ = disp_nybble[(j >>  8) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 12) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 16) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 20) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 24) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 28) & 0x0f];
          j = orig_ccl[i + 1];
          *p_buf++ = disp_nybble[(j >>  0) & 0x0f];
          *p_buf++ = disp_nybble[(j >>  4) & 0x0f];
          *p_buf++ = disp_nybble[(j >>  8) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 12) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 16) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 20) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 24) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 28) & 0x0f];
        }
      *p_buf++ = '\0';
      fprintf (pll_log, " original [%3lu]:%s\n", index, buf);

      /* Also report the charclass member details from fsalex etc. */
      index = fsalex_token - FSATOKEN_TK_CSET;
      charset = charclass_get_pointer (index);
      description = charclass_describe (charset);
      index = charclass_get_index (charset);
      fprintf (pll_log, " fsalex:  [%3lu] %s\n", index, description);
      if (! charclass_equal (charset, (charclass_t const *) orig_ccl))
        fprintf (pll_log, " *** ERROR: Charclass mismatch\n");
    }
  return original_token;
}

static void
show_musts (const char *title, fsamusts_list_element_t *list)
{
  fsamusts_list_element_t *elem;
  static char buf[256];
  char *p_buf;

  fprintf (pll_log, "\n%s:\n", title);
  p_buf = buf;
  *p_buf = '\0';
  for (elem = list; elem != NULL; elem = elem->next)
    {
      if (((p_buf - buf) + 4 + strlen (elem->must)) > 72)
        {
          fprintf (pll_log, " %s\n", buf);
          p_buf = buf;
        }
      p_buf += sprintf (p_buf, " (%s) >%s<",
                        elem->exact ? "Entire" : "partial",
                        elem->must);
    }
  fprintf (pll_log, "%s\n", buf);
}

/* Recursive descent parser for regular expressions.  */
static token tok;       /* Lookahead token.  */
static size_t depth;    /* Current depth of a hypothetical stack
]])
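
-- dfacomp() is the outermost hook: it runs the fsalex/fsaparse pipeline
-- first, re-primes the lexer so that the original dfaparse() path
-- exercises the comparing lex() above, and finally diffs the two
-- parsers' token lists and "musts" lists in the logfile.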
body = TextSubst(body, [[
/* Parse and analyze a single string of the given length.  */
void
dfacomp (char const *s, size_t len, struct dfa *d, int searchflag)
{
  dfainit (d);
  dfambcache (d);
  dfaparse (s, len, d);
  dfamust (d);
]], [[
/* Run the fsa* etc modules (parsing/lexing/tokens/charclass) first,
   then run the original dfa code, with a hook added so that the
   parser/lexer interface in the original code calls the new code in
   parallel, and the veracity of the lexical token streams can be
   inspected (they should be identical).  After these runs, the parser
   token list from each side is extracted and compared, and should also
   be identical. */

/* Parse and analyze a single string of the given length. */
void
dfacomp (char const *s, size_t len, struct dfa *d, int searchflag)
{
  size_t i;

  HOOK_set_up_fsa_stuff_if_not_done_already ();

  /* HOOK: Write the search pattern to the logfile. */
  if (! pll_log)
    pll_log = fopen (HOOK_LOG_FILENAME, "a");
  fprintf (pll_log, "Pattern:");
  for (i = 0; i < len; i++)
    fprintf (pll_log, " %c", isprint ((unsigned char) s[i]) ? s[i] : ' ');
  fprintf (pll_log, "\n ");
  for (i = 0; i < len; i++)
    fprintf (pll_log, " %02x", ((unsigned) s[i]) & 0xff);
  fprintf (pll_log, "\n");

  /* HOOK: Run the new code first so any dependencies on old code are
     more likely to bring up errors. */
  fsalex_pattern (lexer, s, len);
  fsaparse_parse (parser);

  /* Re-prime fsalex () to analyse the pattern a second time, so that
     lexical tokens from both systems can be compared side-by-side.
     This lexical-token-check watches the activity at the parser/lexer
     boundary; the code below compares the outputs of the original
     parser and fsaparse, and they should also be functionally identical
     (some differences can occur in the indices added to CSET tokens,
     but the referenced set should be identical). */
  fsalex_pattern (lexer, s, len);

  dfainit (d);
  dfambcache (d);
  dfaparse (s, len, d);
  dfamust (d);

  /* YET ANOTHER HACK, 16 April 2014 (was it related to the lunar
     eclipse last night?? !!?? )  Compare, side-by-side, the list of
     tokens generated by dfa.c and by fsaparse, and write these to the
     debug log file.  As elsewhere, these should be identical, as the
     modularised code starts as a functional clone of the original code.
     (Later, if/when tokens are reworked to maintain abstractions at a
     higher level, the token lists will differ.) */
  {
    size_t nr_tokens;
    fsatoken_token_t *token_list;
    size_t j;
    size_t max_index;

    fsaparse_get_token_list (parser, &nr_tokens, &token_list);
    fprintf (pll_log, "\ntokens: original fsaparse\n");
    max_index = d->tindex > nr_tokens ? d->tindex : nr_tokens;
    for (j = 0; j < max_index; ++j)
      {
        static char buf[256];
        if (j < d->tindex)
          {
            sprintf (buf, "%02lx", d->tokens[j]);
            fprintf (pll_log, "%17s ", buf);
          }
        else
          fprintf (pll_log, "%17s", "");
        if (j < nr_tokens)
          {
            sprintf (buf, "%02lx", token_list[j]);
            fprintf (pll_log, "%9s", buf);
          }
        fprintf (pll_log, "\n");
      }

    /* And finally, see how extracting musts from dfa.c compares to
       extracting musts via the fsa/charclass family of functions;
       again, these should be identical. */
    musts = (fsamusts_list_element_t *) d->musts;
    show_musts ("original dfa.c", musts);

    /* ANOTHER UGLY HACK: Rely on dfa.c's case_fold and unibyte locale
       when instructing dfamust how to operate; an "Exchange" function
       might be more appropriate in the short-to-mid-term, but in the
       longer term, the token vocabulary should get more expressive, so
       that information can be conveyed directly.
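
       (The returned list head is kept in the file-static "musts"
       variable, so that HOOK_tear_down_additional_infrastructure ()
       can free it at teardown time.)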
     */
    musts = fsamusts_must (NULL, nr_tokens, token_list,
                           /* dfa.c copy: */ case_fold,
                           /* current (dfa.c) locale: */ MB_CUR_MAX == 1);
    show_musts ("fsa* et al functions", musts);
  }
]])

-- Now, add calls at the end of dfafree () to destroy the entire modules
-- and associated instances set up by earlier hooks.
body = TextSubst(body, [[
  for (dm = d->musts; dm; dm = ndm)
    {
      ndm = dm->next;
      free (dm->must);
      free (dm);
    }
  if (d->superset)
    dfafree (d->superset);
]], [[
  for (dm = d->musts; dm; dm = ndm)
    {
      ndm = dm->next;
      free (dm->must);
      free (dm);
    }
  if (d->superset)
    dfafree (d->superset);

  /* HOOK: Tear down all the infrastructure added for parallel
     operation. */
  HOOK_tear_down_additional_infrastructure ();
]])

-- Finally, write the modified dfa code to a separate C file.
local f = assert(io.open("dfa-prl.c", "w"))
assert(f:write(body))
assert(f:close())

----------------******** Makefile.am ********----------------

print("Modifying (if needed) Makefile.am; you may need to re-run automake...")
local Seg = Segs.SegList["grep_SOURCES"]
local t = Seg.RawText

-- Note that "." and "-" are magic characters in Lua patterns, so they
-- are escaped with "%" below; this keeps the match honest, and makes
-- the edit idempotent if the script is run more than once.
if not t:match(" charclass%.c fsatoken%.c fsalex%.c "
               .. "fsaparse%.c fsamusts%.c mbcsets%.c dfa%-prl%.c ") then
  t = t:gsub("dfa%.c ", "charclass.c fsatoken.c fsalex.c "
             .. "fsaparse.c fsamusts.c mbcsets.c dfa-prl.c ")

  -- It is very Bad Form to modify the original raw segment, but we're
  -- tired at this point.
  Seg.RawText = t

  -- Write out the modified file; assume backup made separately (Git?)
  WriteFile(makefileam)
end
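
-- After a successful substitution, the grep_SOURCES line in Makefile.am
-- carries the new modules in place of dfa.c, roughly like this
-- (illustrative; the surrounding entries depend on the grep version):
--
--   grep_SOURCES = ... charclass.c fsatoken.c fsalex.c fsaparse.c \
--                  fsamusts.c mbcsets.c dfa-prl.c ...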