lexer, tokens, and content-types

Matthias Andree matthias.andree at gmx.de
Mon Dec 9 02:45:12 CET 2002


David Relson <relson at osagesoftware.com> writes:

> Matthias & Gyepi,
>
> Looking at eps, I notice several things.
>
> In content.h are #defined lots of content-types,
> content-transfer-encoding types, and content-disposition types.  In
> content.c are tables to convert types (as strings) to the defined names.
> I also see files for converting mime and base64.  What I don't see are
> connections between the #defines and the code.  For example eps doesn't
> give us the connection from ENC_BASE64 to base64.c.  It seems we must
> write our own "glue" code.  I don't see a #define for UUENCODE or code
> for processing other types they recognize, e.g. 7BIT, 8BIT, QP, or RAW.
>
> There is a lot there, but eps isn't complete.  If we want completeness,
> we need to extend their package (perhaps a whole lot) or use another
> package.

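The glue itself would be small: a dispatch table mapping the ENC_*
constants to decoder routines. A minimal sketch of the idea -- the
ENC_* values below only mirror what content.h declares, and the
decode_* helpers are hypothetical stand-ins, not real EPS code:

#include <stddef.h>
#include <string.h>

/* Stand-ins for the constants in EPS's content.h; real values differ. */
enum { ENC_7BIT, ENC_8BIT, ENC_QP, ENC_BASE64, ENC_UUENCODE };

/* A decoder consumes len bytes at src, writes the decoded text to dst
 * and returns the number of bytes written; dst is assumed big enough
 * (none of these decodings grow the text). */
typedef size_t (*decoder_fn)(const char *src, size_t len, char *dst);

/* 7bit and 8bit bodies pass through unchanged. */
static size_t decode_identity(const char *src, size_t len, char *dst)
{
    memcpy(dst, src, len);
    return len;
}

/* Hypothetical prototypes; the bodies would come from base64.c,
 * a new qp.c and a new uudecode.c respectively. */
size_t decode_qp(const char *src, size_t len, char *dst);
size_t decode_base64(const char *src, size_t len, char *dst);
size_t decode_uu(const char *src, size_t len, char *dst);

static const struct {
    int        enc;
    decoder_fn decode;
} codecs[] = {
    { ENC_7BIT,     decode_identity },
    { ENC_8BIT,     decode_identity },
    { ENC_QP,       decode_qp       },
    { ENC_BASE64,   decode_base64   },
    { ENC_UUENCODE, decode_uu       },
};

/* Map a Content-Transfer-Encoding constant to its decoder. */
decoder_fn lookup_decoder(int enc)
{
    size_t i;
    for (i = 0; i < sizeof codecs / sizeof codecs[0]; i++)
        if (codecs[i].enc == enc)
            return codecs[i].decode;
    return decode_identity;     /* unknown encoding: treat as raw */
}

The table is the trivial part; the real gap is that several of the
decoders (uuencode above all) don't exist in EPS at all.
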
So we'd keep searching and drop EPS. Can we use C++?

I know Postfix has a good parser, but its license (the IBM Public
License) is incompatible with the GNU GPL:
http://www.gnu.org/licenses/license-list.html

Sucks, but that's what it is.

UUDeview ships with a library, is GPL software, and looks more
complete: it claims to support uuencode, xxencode, the MIME encodings
(quoted-printable, base64), yEnc, and more.
http://www.fpx.de/fp/Software/UUDeview/
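
If memory serves, driving uulib goes roughly like this; the calls are
as I remember them from uudeview.h, so treat the exact signatures and
return codes as unchecked assumptions:

#include <stdio.h>
#include <uudeview.h>       /* uulib's public header */

int main(int argc, char **argv)
{
    uulist *item;
    int i, rc;

    UUInitialize();

    /* Scan each input file for encoded parts (uuencode, MIME, ...). */
    for (i = 1; i < argc; i++)
        UULoadFile(argv[i], NULL, 0);

    /* Walk the list of parts uulib found and decode each one;
     * a NULL destination means "use the name from the encoding". */
    for (i = 0; (item = UUGetFileListItem(i)) != NULL; i++) {
        rc = UUDecodeFile(item, NULL);
        if (rc != UURET_OK)
            fprintf(stderr, "decode failed: %s\n", UUstrerror(rc));
    }

    UUCleanUp();
    return 0;
}

Note how file-oriented that is: everything is load-a-file,
decode-to-a-file.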

However, it was originally aimed at newsreaders, so we'd need to
figure out whether it's good enough for bogofilter.

-- 
Matthias Andree
