Fri, 28 Sep 2012 05:21:14 -0400
On Fri, Sep 28, 2012 at 01:38:18AM +0200, Ákos Sülyi wrote:
> It's well known that it would be much hard work for little gain.
> But does any kind of design even exist?
no - basically because no one's working on it. The relevance of the DOM-model
comments is that the scripts assume one can set/get values based on the
document; some of the comments are technically valid, although lynx does not
go that far (I'm not sure of the scope in elinks - it's more ambitious).
In lynx, the content of a script element is simply kept as a comment.
Technically those could be reprocessed as they are found and (like links)
values gleaned for later use and substitution into later parts of the page.
After SGML.c there's HTML.c, which handles the HTML level, yielding a
linked list of fragments of text which are displayed in GridText.c.
The document at that point has been digested into little parts aimed
at doing just what lynx needs (no scripts...). Things like mouseovers
would require substantial work to integrate.
> What are the requirements, besides interpreting the scripts?
> I don't think it could, nor should, do anything with the bulk of
> most scripts.
links' approach was to identify a useful (but very small) subset.
With only about 30kb of source code, it wasn't that useful though.
elinks integrates a copy of a fairly large library for this.
> Would it be possible to use a Rhino-like engine to interpret the
> code after filtering the scripts?
> The tricky part is handling the returned data, isn't it?
> I've just apt-got the source. Where should I look for the part that
> handles regexps? There is one, right?
no - there's no regexp module. Regexp support is an occasional request
for searches (within a page), but lynx provides only case-independent
matching of
exact strings. To see this, I'd follow the code down from the case
in LYMainLoop.c which provides the search feature.
Thomas E. Dickey <address@hidden>