lexer, tokens, and content-types
Matthias Andree
matthias.andree at gmx.de
Sun Dec 8 15:20:29 CET 2002
David Relson <relson at osagesoftware.com> writes:
> At the present time, bogofilter gets tokens by calling get_token() which
> is in lexer.l. As bogofilter becomes more sophisticated in its parsing,
> the C code in lexer.l is growing and will continue to do so. I'm
> planning to take the get_token() routine out of lexer.l and make it the
> basis of a new module, token.c. Some other routines may move with it -
> I won't know for sure until I do the partitioning. I _do_ intend to
> leave lexer_fgets() and its passthrough handling in lexer.l, as I think
> they belong there. With the addition of a content-type module, we'll
> have a structure better suited for handling mime, base64, uuencode, etc,
> etc.
lexer_fgets might also be provided separately as an fgetsl() function; it
is useful when you need a NUL-aware fgets().
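For illustration, here is a minimal sketch of what such a fgetsl() could
look like (a hypothetical implementation, not bogofilter's actual code):
it returns the number of bytes read rather than a string pointer, so
lines containing embedded NUL bytes are not silently truncated the way
they would be with plain fgets().

```c
#include <stdio.h>

/* Hypothetical NUL-aware fgets(): read up to size-1 bytes, stopping
 * after the first newline. Returns the byte count (which may span
 * embedded NULs), or -1 on EOF with nothing read. The buffer is also
 * NUL-terminated for convenience, but the return value is the
 * authoritative length. */
static int fgetsl(char *buf, int size, FILE *in)
{
    int c = EOF;
    int count = 0;

    while (count < size - 1 && (c = getc(in)) != EOF) {
        buf[count++] = (char)c;
        if (c == '\n')
            break;              /* keep the newline, like fgets() */
    }
    buf[count] = '\0';
    return (count == 0 && c == EOF) ? -1 : count;
}
```

A caller would loop on the return value instead of checking for NULL,
treating the buffer as length-counted data rather than a C string.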
Other than that, if it's just splitting code to avoid unnecessarily
recompiling the whole lexer core, then that's fine with me. I consider
myself warned.
--
Matthias Andree