THE ALGORITHM

Marpa is essentially the parser described in John Aycock and R. Nigel Horspool's "Practical Earley Parsing", The Computer Journal, Vol. 45, No. 6, 2002, pp. 620-630. Their algorithm combined LR(0) precomputation with Jay Earley's parsing algorithm. I've made some improvements.

First, Aycock and Horspool's algorithm rewrites the original grammar into NNF (Nihilist Normal Form). Earley's original algorithm had serious issues with nullable symbols and productions, and NNF fixes most of them. (A nullable symbol or production is one which could eventually parse out to the empty string.) Importantly, NNF also allows complete and easy mapping of the semantics of the original grammar to its NNF rewrite, so that NNF and the whole rewrite process can be made invisible to the user.
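To make the notion of nullability concrete, here is a minimal sketch (not Marpa's code) of how the nullable symbols of a grammar can be found. A symbol is nullable if some production for it has a right-hand side consisting entirely of nullable symbols; an empty right-hand side counts trivially. The dict-based grammar format is an illustrative assumption.

```python
def nullable_symbols(grammar):
    """Fixed-point computation of the set of nullable nonterminals.

    `grammar` maps each nonterminal to a list of right-hand sides,
    each a tuple of symbols.  Terminals are any symbols that do not
    appear as keys.
    """
    nullable = set()
    changed = True
    while changed:
        changed = False
        for lhs, productions in grammar.items():
            if lhs in nullable:
                continue
            for rhs in productions:
                # An empty RHS, or one made entirely of nullable
                # symbols, makes the LHS nullable.
                if all(sym in nullable for sym in rhs):
                    nullable.add(lhs)
                    changed = True
                    break
    return nullable

# Example: A derives the empty string directly, so S can parse
# out to the empty string via its "A A" production.
grammar = {
    "S": [("A", "A"), ("b",)],
    "A": [(), ("a",)],
}
print(nullable_symbols(grammar))  # → {'A', 'S'} (set order may vary)
```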

My problem with NNF is that the rewritten grammar is exponentially larger than the original in the theoretical worst case, and I just don't like exponential explosion, even as a theoretical possibility in pre-processing. Furthermore, I think that in some cases likely to arise in practice (Perl 6 "rules" with significant whitespace, for example), the size explosion, while not exponential, is linear with a very large multiplier.

My solution is Chomsky-Horspool-Aycock Form (CHAF). This is Horspool and Aycock's NNF, but with the further restriction that no more than two nullable symbols may appear in any production. (In the literature, the discovery that any context-free grammar can be rewritten into productions of at most a small fixed size is credited to Noam Chomsky.) The shortened CHAF productions map back to the original grammar, so that, like NNF, the CHAF rewrite can be made invisible to the user. With CHAF, the theoretical worst-case behavior is linear, and in those difficult cases likely to arise in practice the multiplier is smaller.
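The chaining idea behind the CHAF restriction can be sketched as follows. This is a simplified illustration, not Marpa's implementation: a long right-hand side is broken into a chain of shorter productions linked by synthetic continuation symbols, so that each new production carries at most two nullable symbols. (In the real rewrite, synthetic symbols are kept non-nullable by splitting productions into null and non-null variants; that refinement is omitted here.)

```python
def chaf_factor(lhs, rhs, nullable, max_nullables=2):
    """Break one production into a chain of productions, each with
    at most `max_nullables` nullable symbols from the original RHS.

    Returns a list of (lhs, rhs) pairs; synthetic symbols are named
    "LHS[n]" purely for illustration.
    """
    productions = []
    head = lhs
    rest = list(rhs)
    n = 0
    while rest:
        piece = []
        count = 0
        while rest and count < max_nullables:
            sym = rest.pop(0)
            piece.append(sym)
            if sym in nullable:
                count += 1
        if rest:
            # More symbols remain: chain to a synthetic symbol.
            n += 1
            new_sym = f"{lhs}[{n}]"
            productions.append((head, tuple(piece) + (new_sym,)))
            head = new_sym
        else:
            productions.append((head, tuple(piece)))
    return productions

# A production with four nullable symbols becomes two productions,
# each holding only two of them.
print(chaf_factor("S", ("A", "B", "C", "D"), {"A", "B", "C", "D"}))
# → [('S', ('A', 'B', 'S[1]')), ('S[1]', ('C', 'D'))]
```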

Second, I've extended the scanning step of Earley's algorithm, and introduced the "earleme" (named after Jay Earley). Previous implementations required the Earley grammar's input to be broken up into tokens, presumably by lexical analysis of the input using DFA's (deterministic finite automata, which are the equivalent of regular expressions). Requiring that the first level of analysis be performed by a DFA hobbles a general parser like Earley's.

Marpa loosens the restriction by allowing the scanning phase of Earley's algorithm to add items not just to the current Earley set and the next one, but to any later Earley set. Since items can be scanned onto several different Earley sets, the input to the Earley scanning step no longer has to be deterministic. Several alternative scans of the input can be put into the Earley sets, and the power of Earley's algorithm harnessed to deal with the nondeterminism.

In the new Marpa scanner, each scanned item has a length in "earlemes", call it l. If the current Earley set is i, a newly scanned Earley item is added to Earley set i+l. The earleme is a distance measured in Earley sets, and an implementation can sync earlemes up with any measure that's convenient. For example, the distance in earlemes may be the length of a string, as measured either in ASCII characters or in Unicode graphemes. Another implementation may define the earleme length as the distance in a token stream, measured in tokens.
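The bookkeeping this requires is small, and a minimal sketch may make it clearer. This is an illustrative toy, not Marpa's internals: Earley sets are keyed by earleme, and a scan of length l at earleme i lands its item in set i+l, so alternative tokenizations of the same input region can coexist.

```python
from collections import defaultdict

# Earley sets keyed by earleme; each item here is just
# (token_name, start_earleme) for illustration.
earley_sets = defaultdict(list)

def scan(current, token, length):
    """Add a scanned item of `length` earlemes to set current + length."""
    earley_sets[current + length].append((token, current))

# Two alternative scans starting at earleme 0: a one-earleme token
# and a three-earleme token covering the same stretch of input.
scan(0, "word", 1)
scan(0, "phrase", 3)

print(earley_sets[1])  # → [('word', 0)]
print(earley_sets[3])  # → [('phrase', 0)]
```

Both alternatives simply sit in their respective Earley sets; the rest of Earley's algorithm then decides which of them lead to a complete parse.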