Tag Archives: Finite State Automaton

Essays regarding finite state automata.

Palling around.

Pall (n.): pawl.

I couldn't write last week, and my upgrade to QL has progressed no further.
For reference, I stalled before comparing the efficiency of nested Objects to
that of nested Arrays, which I must test before experimenting further with the
prototype compiler or even refining the design. I intend to do that this month.
In the meantime, here's a snapshot of MLPTK with new experiments included.

http://www.mediafire.com/download/566ln3t1bc5jujp/mlptk-p9k-08apr2016.zip

And a correction to my brief about the grammar ("Saddlebread"): actually, the
InchoateConjugation sequence does not cause differentiation, because the OP_CAT
prevents the original from reducing. Other parts may be inaccurate. I'll revise
the grammar brief and post a new one as soon as I have fixed the QL speed bug.

I took some time out from writing Quadrare Lexema to write some code I've been
meaning to write for a very long time: pal9000, the dissociated companion.
This software design is remarkably similar to the venerable "Eggdrop," whose C
source code is available for download at various locations on the Internets.
Obviously, my code is free and within the Public Domain (as open as open source
can be); you can find pal9000 bundled with today's edition of MLPTK, beneath the
/reference/ directory.

The chatbot is a hardy perennial computer program.
People sometimes say chatbots are artificial intelligence. They aren't, exactly;
or at least this one isn't, because it doesn't know where it is or what it's
doing (actually it makes some assumptions about itself that are perfectly
wrong), and it doesn't apply the compiler-like technique of categorical
learning, because I half-baked the project. Soon, though, I hope...

Nevertheless, mathematics allows us to simulate natural language.
Even a simplistic algorithm like Dissociated Press (see "Internet Jargon File,"
maintained somewhere on the World Wide Web, possibly at Thyrsus Enterprises by
Eric Steven Raymond) can produce humanoid phrases that are like real writing.
Where DisPress fails, naturally, is in paragraphs and coherence: as you'll see
when you experiment with it, it loses track of what it was saying after a few
words.

Of course, that can be alleviated with any number of clever tricks; such as:
	1. Use a compiler.
	2. Use a compiler.
	3. Use a compiler.
I haven't done that with p9k, yet, but you can if you want.

Of meaningful significance to chat robots is the Markov chain.
That is a mathematical model, used to describe some physical processes (such as
diffusion), describing a state machine in which the probability of any given
state occurring depends only on the current state of the system, without regard
to how that state was reached.
Natural language, especially that language which occurs during a dream state or
drugged rhapsody (frequently and too often with malicious intent, these are
misinterpreted as the ravings of madmen), can also be modeled with something
like a Markov chain because of the diffusive nature of tangential thought.
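To make the idea concrete, here's a minimal word-level Markov chain in
Javascript (a sketch of my own devising, not pal9000's actual code): build a
table of observed successors, then walk it, choosing each next word with regard
only to the current one.

```javascript
// Build a table mapping each word to the words observed after it.
function buildChain(words) {
  const chain = {};
  for (let i = 0; i < words.length - 1; i++) {
    (chain[words[i]] = chain[words[i]] || []).push(words[i + 1]);
  }
  return chain;
}

// Walk the table: each step depends only on the current state.
function walk(chain, start, maxLen) {
  const out = [start];
  let state = start;
  while (out.length < maxLen && chain[state]) {
    const next = chain[state][Math.floor(Math.random() * chain[state].length)];
    out.push(next);
    state = next; // the Markov property: no memory of how we got here
  }
  return out;
}
```

Note that `walk` stops either at the length bound or when it reaches a word
that was never observed with a successor.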

The Markov-chain chat robot applies the principle that the state of a finite
automaton can be described in terms of a set of states foregoing the present;
that is, the state of the machine is a sliding window, in which is recorded some
number of states that were encountered before the state existent at the moment.
Each such state is a word (or phrase / sentence / paragraph if you fancy a more
precise approach to artificial intelligence), and the words are strung together
one after another with respect to the few words that fit in the sliding window.
So, it's sort of like a compression algorithm in reverse, and similar to the way
we memorize concepts by relating them to other concepts. "It's a brain. Sorta."

One problem with Markov robots, and another reason why compilers are of import
in the scientific examination of artificial intelligence, is that of bananas.
The Banana Problem describes the fact that, when a Markov chain is traversed, it
"forgets" what state it occupied before the sliding window moved.
Therefore, for any window of width W < 6, the input B A N A N A first produces
state B, then alternates between states A and N forever.
Obviously, the Banana Problem can be solved by widening the window; however, if
you do that, the automaton's memory consumption increases proportionately.
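The Banana Problem is easy to demonstrate. In this sketch (a hypothetical
helper of mine, not pal9000's code), a window of width 1 traps the chain in an
A/N cycle, while a window of width 5 lets it terminate:

```javascript
// Map each window of `width` tokens to the tokens observed after it.
function successors(tokens, width) {
  const table = {};
  for (let i = 0; i + width < tokens.length; i++) {
    const key = tokens.slice(i, i + width).join(" ");
    (table[key] = table[key] || []).push(tokens[i + width]);
  }
  return table;
}

const t1 = successors(["B", "A", "N", "A", "N", "A"], 1);
// t1["A"] -> ["N","N"] and t1["N"] -> ["A","A"]: an inescapable A/N cycle.
const t5 = successors(["B", "A", "N", "A", "N", "A"], 5);
// t5["B A N A N"] -> ["A"], and "A N A N A" has no successor: the chain ends.
```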

Additionally, very long inputs tend to throw a Markov-'bot for a loop.
You can sorta fix this by increasing the width of the sliding window signifying
which state the automaton presently occupies, but then you run into problems
when the sliding window is too big and it can't think of any suitable phrase
because no known windows (phrases corresponding to the decision tree's depth)
fit the trailing portion of the input.
It's a sticky problem, which is why I mentioned compilers; they're of import to
artificial intelligence, which is news to absolutely no one, because compilers
(and grammar, generally) describe everything we know about the learning process
of everyone on Earth: namely, that intelligent beings construct semantic meaning
by observing their environments and deducing progressively more abstract ideas
via synthesis of observations with abstractions already deduced.
Nevertheless, you'd be hard-pressed to find even a simple random-walk chatbot
that isn't at least amusing.
(See the "dp" module in MLPTK, which implements the vanilla DisPress algorithm.)

My chatbot, pal9000, is inspired by the Dissociated Press & Eggdrop algorithms;
the copyrights to which are held by their authors, who aren't me.
Although p9k was crafted with regard only to the mathematics and not the code,
if my work is an infringement, I'd be happy to expunge it if you want.

Dissociated Press works like this:
	1. Print the first N words (letters? phonemes?) of a body of text.
	2. Then, search for a random occurrence of a word in the corpus
	   which follows the most recently printed N words, and print it.
	3. Ad potentially infinitum, where "last N words" are round-robin.
It is random: therefore, humorously disjointed.
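The three steps above can be sketched as follows (my own toy rendition, not the
Jargon File's implementation):

```javascript
// Dissociated Press: emit the first n words, then repeatedly find a random
// occurrence of the last n emitted words in the corpus and emit the word
// that follows it there.
function dispress(corpus, n, maxWords) {
  const out = corpus.slice(0, n);
  while (out.length < maxWords) {
    const window = out.slice(-n).join(" ");
    const matches = [];
    for (let i = 0; i + n < corpus.length; i++) {
      if (corpus.slice(i, i + n).join(" ") === window) matches.push(i + n);
    }
    if (!matches.length) break; // the window never recurs: give up
    out.push(corpus[matches[Math.floor(Math.random() * matches.length)]]);
  }
  return out;
}
```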

And Eggdrop works like this (AFAICR):
	1. For a given coherence factor, N:
	2. Build a decision tree of depth N from a body of text.
	3. Then, for a given input text:
	4. Feed the input to the decision tree (mmm, roots), and then
	5. Print the least likely response to follow the last N words
	   by applying the Dissociated Press algorithm non-randomly.
	6. Terminate response after its length exceeds some threshold;
	   the exact computation of which I can't recall at the moment.
It is not random: therefore, eerily humanoid. (Cue theremin riff, thundercrash.)
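As I recalled the algorithm above, the non-random step picks the least likely
successor. A sketch of that selection (my reconstruction in Javascript, not
Eggdrop's actual C code):

```javascript
// Map each window of n tokens to the tokens observed after it.
function followerTable(tokens, n) {
  const table = {};
  for (let i = 0; i + n < tokens.length; i++) {
    const key = tokens.slice(i, i + n).join(" ");
    (table[key] = table[key] || []).push(tokens[i + n]);
  }
  return table;
}

// Choose the LEAST frequent follower of the given window, or null.
function leastLikelyNext(table, key) {
  const followers = table[key];
  if (!followers) return null;
  const counts = {};
  for (const w of followers) counts[w] = (counts[w] || 0) + 1;
  return Object.keys(counts).reduce((a, b) => (counts[b] < counts[a] ? b : a));
}
```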

A compiler, such as I imagined above, could probably employ sliding windows (of
width N) to isolate recurring phrases or sentences. Thereby it may automatically
learn how to construct meaningful language without human interaction.
Although I think you'll agree that the simplistic method is pretty effective on
its own; notwithstanding, I'll experiment with a learning design once I've done
QL's code generation method sufficiently that it can translate itself to Python.

Or possibly I'll nick one of the Python compiler compilers that already exists.
(Although that would take all the fun out of it.)

I'll parsimoniously describe how pal9000 blends the two:

First of all, it doesn't (not exactly), but it's close.
Pal9000 learns the exact words you input, then generates a response within some
extinction threshold, with a sliding window whose width is variable and bounded.
Its response is bounded by a maximum length (to solve the Banana Problem).
Because it must by some means know when a response ends "properly," it also
counts the newline character as a word.
The foregoing are departures from Eggdrop.
It also learns from itself (to avoid saying something twice), as does Eggdrop.

In addition, p9k's response isn't necessarily random.
If you use the database I included, or choose the experimental "generator"
response method, p9k produces a response that is simply the most surprising word
it encountered subsequent to the preceding state chain.
This produces responses more often, and they are closer to something you said
before, but of course this is far less surprising and therefore less amusing.
The classical Eggdrop method takes a bit longer to generate any reply; but, when
it does, it drinks Dos Equis.
... Uh, I mean... when it does, the reply is more likely to be worth reading.
After I have experimented to my satisfaction, I'll switch the response method
back to the classic Eggdrop algorithm. Until then, if you'd prefer the Eggdrop
experience, you must delete the included database and regenerate it with the
default values and input a screenplay or something. I think Eggdrop's Web site
has the script for Alien, if you want to use that. Game over, man; game over!

In case you're curious, the algorithmic time complexity of PAL 9000 is somewhere
in the ballpark of O(((1 + MAX_COHERENCE - MIN_COHERENCE) * N) ^ X) per reply,
where N is every unique word ever learnt and X is the extinction threshold.
"It's _SLOW._" It asymptotically approaches O(1) in the best case.

For additional detail, consult /mlptk/reference/PAL9000/readme.txt.

Pal9000 is a prototypical design that implements some strange ideas about how,
exactly, a Markov-'bot should work. As such, some parts are nonfunctional (or,
indeed, actually malfunctioning) and vestigial. "Oops... I broke the algorithm."
While designing, I altered multiple good ideas that Eggdrop and DisPress got
right the first time, and made the whole thing worse overall. For
a more classical computer science dish, try downloading & compiling Eggdrop.

I’m happy to relate that puns about stacks & state have become out of date.

More of QL, courtesy of MediaFire, with the MLPTK framework included as usual:

https://www.mediafire.com/?xaqp7cq9ziqjkdz (http://www.mediafire.com/download/xaqp7cq9ziqjkdz/mlptk-operafix-21mar2016.zip)

I have, at last, made time to fix the input bug in Opera!
I'm sorry for the delay and inconvenience. I didn't think it'd be that tricky.

I fixed an infinite loop bug in "rep" (where I didn't notice that someone might
type in a syntax error, because I didn't have a syntax error in my test cases),
and added to the bibliography the names of all my teachers that I could recall.
I added a quip to the bootstrap sequence. More frills.

My work on QL has been painfully slow, due to pain, but it'll all be over soon.
I see zero fatal flaws in ::hattenArDin. I guess debugging'll be about a month,
and then it's back to the Command Line Interpretation Translator. Sorry for the
delay, but I have to rewrite it right before I can right the written writing.
The improved QL (and, hopefully, the CLIT) is my goal for second quarter, 2016.
After those, I'll take one of two options: either a windowing operating system
for EmotoSys or overhaul of QL. Naturally, I favor the prior, but I must first
review QL to see if the API needs to change again. (Doubtful.)

MLPTK, and QL, are free of charge within the Public Domain.

Last time I said I needed to make the input stream object simpler so that it can
behave like a buffer rather than a memory hog. I have achieved this by returning
to my original design for the input stream. Although that design is ironically
less memory-efficient in action than the virtual file allocation table, it leans
upon the Javascript interpreter's garbage collector so that I mustn't spend as
much time managing the fragmented free space. (That would've been more tedious
than I should afford at this point in the software design, which still awaits an
overhaul to improve its efficiency and concision of code.) And, like I thought,
it'll be too strenuous for me to obviate the precedence heuristic by generating
static code. In fact: unless tokens are identified by enumeration, dropping that
heuristic actually reduces efficiency, because string comparisons are slow;
also, I was
wrong about the additional states needed to obviate it: there're none, but a new
"switch" statement (these, like "if" and "while," are heuristics) is needed, and
it's still slower (even best-case) than "if"ing the precedence integer per node.

Here's an illustration of the "old" input stream object (whither I now return):

	 _________________________________
	| ISNode: inherits from Array     | -> [ ISNode in thread 0 ]
	|    ::value = data (i.e., token) |
	|    ::[N] = next node (thread N) | -> [ ISNode in thread 1 ]
	|    (Methods...)                 |
	|    (State....)                  | -> [ etc ... ]
	-----------------------------------

As you see, each token on input is represented by a node in a linked list.
Each node requires memory to store the token, plus the list's state & methods.
Nodes in the istream link forward. I wanted to add back-links, but nightmare.
For further illustration, see my foregoing briefs.
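A toy model of that node, with hypothetical names (QL's actual ISNode carries
more state & methods than this sketch):

```javascript
// Each node carries a token in ::value and links forward to the next node
// in each thread by numeric index, courtesy of Array inheritance.
class ISNode extends Array {
  constructor(value) {
    super();
    this.value = value; // the token this node represents
  }
  link(thread, node) { this[thread] = node; return node; }
}

// Walk thread t from a node, collecting token values along the way.
function thread(node, t) {
  const out = [];
  for (let n = node; n; n = n[t]) out.push(n.value);
  return out;
}

// Build thread 0 as a forward-linked list of three tokens:
const head = new ISNode("let");
head.link(0, new ISNode("x")).link(0, new ISNode("="));
```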

Technically the input stream can be processed in mid-parse by using the "thread"
metaphor (see "Stacking states on dated stacks"); however, by the time you need
a parser that complex, you are probably trying to work a problem that requires a
nested Turing machine instead of using only one. Don't let my software design
lure you into the Turing tar-pit: before using the non-Bison-like parts, check &
see if you can accomplish your goal by doing something similar to what you'd do
in Yacc or Bison. (At least then your grammar & algorithm may be portable.)
QL uses only thread zero by default... I hope.

GNU Bison is among the tools available from the Free Software Foundation.
Oh, yeah, and Unix was made by Bell Laboratories, not Sun Microsystems. (Oops.)

Here's an illustration of the "new" input stream object (whence I departed):

	Vector table 0: file of length 2. Contents: "HI".
		       (table index in first column, data position in second column)
		0 -> 0 (first datum in vector 0 is at data position 0)
		1 -> 1 (second datum, vector 0, is at data position 1)
	Vector table 1: file of length 3. Contents: "AYE".
		0 -> 3
		1 -> 4
		2 -> 7
	Vector table 2: file of length 3. Contents: "H2O".
		0 -> 0
		1 -> 5
		2 -> 10
	Vector table 3: file of length 6. Contents: "HOHOHO".
		0 -> 0
		1 -> 10
		2 -> 0
		3 -> 10
		4 -> 0
		5 -> 10
	Data table: (data position: first row, token: second row)
		0     1     2     3     4     5     6     7     8     9    10
		H     I     W     A     Y     2     H     E     L     L    O

In that deprecated scheme, tokens were stored in a virtual file.
The file was partitioned by a route vector table, similar to vectabs everywhere.
Nodes required negligible additional memory, but fragmentation was troublesome.
Observe that, if data repeats itself, the data table need not increase in size:
lossless data compression, like a zip file or PNG image, operates with a similar
principle whereby sequences of numbers that repeat themselves are squeezed into
smaller sequences by creating a table of sequences that repeat themselves.
GIF89a employs LZW compression, which is a similar algorithm.
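The scheme boils down to a shared data table plus per-file index vectors. A
sketch using the tables illustrated above (hypothetical names; the real thing
was a virtual file allocation table with free-space management):

```javascript
// One shared data table of tokens; each "file" is a vector of indices
// into it. Repeated tokens cost no additional data-table space.
const data = ["H", "I", "W", "A", "Y", "2", "H", "E", "L", "L", "O"];
const files = {
  "HI":     [0, 1],
  "AYE":    [3, 4, 7],
  "H2O":    [0, 5, 10],
  "HOHOHO": [0, 10, 0, 10, 0, 10],
};

// Reassemble a file by following its vector through the data table.
function readFile(name) {
  return files[name].map(i => data[i]).join("");
}
```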

There is again some space remaining, and not much left to say about the update.
I'll spend a few paragraphs to walk you through how Quadrare Lexema parses.

QL is based upon a simple theory (actually, axiom) of how to make sense of data:
    1. Shift a token from input onto lookahead; the former LA becomes active.
    2. If the lookahead's operator precedence is higher, block the sequence.
    3. If the active token proceeds in sequence, shift it onto the sequence.
    4. Otherwise, if the sequence reduces, pop it and unshift the reduction.
    5. Otherwise, block the sequence and wait for the expected token to reduce.
And, indeed: when writing a parser in a high-level language such as Javascript,
everything really is exactly that simple. The thousands of lines of code are
merely
a deceptive veil of illusion, hiding the truth from your third eye.

Whaddayamean, "you don't got a third eye"?
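Third eye or no, the axiom shrinks to a loop. Here it is reduced (ha) to a toy
that evaluates sums like 1 + 2 + 3 — my sketch, which omits precedence
blocking and deferred reductions entirely:

```javascript
// Toy shift/reduce loop: shift each token onto the stack (step 1), then
// reduce "E + n" to a value whenever the top of the stack completes the
// sequence (step 4). Anything left unreduced at the end is a syntax error.
function parseSum(tokens) {
  const stack = [];
  for (const tok of tokens) {
    stack.push(tok); // shift
    while (stack.length >= 3 &&
           typeof stack[stack.length - 3] === "number" &&
           stack[stack.length - 2] === "+" &&
           typeof stack[stack.length - 1] === "number") {
      const n = stack.pop();
      stack.pop(); // discard the "+"
      stack.push(stack.pop() + n); // reduce: unshift the reduction
    }
  }
  if (stack.length !== 1) throw new Error("syntax error");
  return stack[0];
}
```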

QL doesn't itself do the tokenization. I wrote a separate scanner for that. It's
trivial to accomplish. Essentially the scanner reads a string of text to find a
portion that starts at the beginning of the string and matches a pattern (like a
regular expression, which can be done with my ReGhexp library, but I used native
Javascript reg exps to save time & memory and make the code less nonstandard);
then the longest matching pattern (or, if tied, the first or last or
polka-dotted one: whichever suits your fancy) is chopped off the beginning of
the string and stored in a token object that contains an identifier and a
semantic value. The value of a token is computed by the scanner in similar
fashion to the parser's reductions.
The scanner doesn't need to do any semantic analysis, but some is OK in HLLs.
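A minimal longest-match scanner in the spirit described above (hypothetical
rule format; QL's bundled scanner differs in detail):

```javascript
// Try each anchored pattern at the start of the string, keep the longest
// match, emit a token, chop it off the front, and repeat until empty.
function scan(text, rules) {
  // rules: [{ id, re }] where every `re` is anchored with ^
  const tokens = [];
  while (text.length) {
    let best = null;
    for (const { id, re } of rules) {
      const m = re.exec(text);
      if (m && m.index === 0 && (!best || m[0].length > best.value.length))
        best = { id, value: m[0] };
    }
    if (!best) throw new Error("scan error near: " + text.slice(0, 10));
    if (best.id !== "ws") tokens.push(best); // discard whitespace tokens
    text = text.slice(best.value.length);
  }
  return tokens;
}
```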

Now that your input has ascended from the scanning chakra, let's manifest
another technical modal analysis. I wrote all my code while facing the upstream
direction of a flowing river, sitting under a waterfall, and meditating on the
prismatic refraction of a magical crystal when exposed to a candle dipped at the
eostre of the dawn on the vernal equinox -- which caused a population explosion
among the nearby forest rabbits, enraged a Zen monk who had reserved a parking
space, divested a Wiccan coven, & reanimated the zombie spirits of my ancestors;
although did not result in an appreciable decrease in algorithmic complexity --
but, holy cow, thou may of course ruminate your datum howsoever pleases thee.

The parser begins with one extremum, situated at the root node. The root is that
node whence every possible complete sequence begins; IOW, it's a sentinel value.
Thence, symbols are read from input as described above. The parse stack -- that
is, the extrema -- builds on itself, branching anywhen more than one reduction
results from a complete sequence. As soon as a symbol can't go on the stack: if
the sequence is complete, the stack is popped to gather the symbols comprising
the completed sequence; that occurs by traversing the stack backward til arrival
at the root node, which is also popped. (I say "pop()" although, as written, the
stack doesn't implement proper pushing and popping. Actually, it isn't even a
stack: it's a LIFO queue. Yes, yes: I know queues are FIFO; but I could not FIFO
the FIFO because the parser doesn't know where the sequence began until it walks
backwards through the LIFO. So now it's a queue and a stack at the same time;
which incited such anger in the national programming community that California
seceded from the union and floated across the Pacific to hang out with Nippon,
prompting Donald Trump to reconsider his business model. WTF, m8?) Afterward,
the symbols (now rearranged in the proper, forward, order) are trundled along to
the reduction constructor, which is an object class I generated with GenProtoTok
(a function that assembles heritable classes). That constructor makes an object
which represents _a reduction that's about to occur;_ id est: a deferment, which
contains the symbols that await computation. The deferment is sent to the parse
stack (in the implementation via ::toTheLimit) or the input stream (::hatten);
when only one differential parse branch remains, computation is executed and the
deferred reduction becomes a reduction proper. Sequences compound as described.

Stacking states on dated stacks to reinstate Of Stack & State: first rate?

I have accomplished the first major portion of the code I set out to write last
Christmas. My milestone for Quadrare Lexema, first quarter 2016, is available
at MediaFire here:

http://www.mediafire.com/download/26qa4xnidxaoq46/mlptk-2016q1-typeset-full.zip

It's a bit larger than my other snapshots this year, because I included another
full PDF typesetting of my software (which, today, runs into hundreds of pages)
and two audio recordings evoking my most beautiful childhood memories. These
added a few Mbytes. I'll remove them in the next edition.
(No, really; Tuna Loaf truly does encode smaller as a PCM .WAV than an MP3.)

Don't bother downloading this one unless you really want to see how QL's shaping
up or need a preformatted typesetting of progress since the prospectus. There is
nothing much new: all my work in the first quarter has been in Quadrare Lexema &
these updates, and the command line interpreter upgrade isn't yet operational.

My source code is, as usual, free of charge and Public Domain.
Some of the graphics and frills don't belong to me, and I plan to remove them
(either at some unspecified future time or upon any such notification/demand),
but please restrict yourself to my signed code if copying & pasting for profit.

Today's edition of MLPTK includes an upgraded QL. I haven't had time to overhaul
yet, but I did manage to shoehorn in a somewhat improved parser mechanism. This
should make things less of a nightmare for those of you presently experimenting
with the beta version of QL, which is at least a _stable_ beta by now.

Of particular interest are MLPTK's text-mode console, which I employed to debug
Quadrare Lexema in alpha, and which is now stable/maintenance (doesn't crash);
and QL itself, now a stable Beta (doesn't crash much).
My tentative additions to the Bison parser's "left/right" associativity model &
their demonstration in the Command Line Interpretation Translator's library file
are of most interest to experienced programmers. When I've finished writing the
manuscript in its native language, I'll write an additional reference sheet in
English for the novitiate. (See the old AFURACC supplemental schematic reference
for an idea of what this shall look like.)

I've altered QL's schematic slightly, to permit the programmer (that's you!) to
encode a minor degree of context in your grammar, and my code is so easy to read
& alter that you can really have a field day with this thing if you've the mind.

Several of my foregoing briefs cover some of these already; consider also the
Blargument rule, which is what makes the CLIT so fascinating. Utilizing QL's
mechanism to unshift multiple tokens back onto the input stream when executing a
reduction (which is really remarkably clever; because, IIRC, Bison can't unshift
multiple tokens during a reduction deferment, so multiple-symbol productions are
very difficult to achieve by using the industry-standard technology), Blargument
consumes one less token than it "actually" does by unshifting the trailing
token.
Because of how I wrote QL, the parser differentiates its input stream (virtually
or actually) each time a reduction produces symbols; so, even when a deferred
reduction pushes symbols back onto the input stream, these don't interfere with
any of the other deferred reductions (which exist as though the input stream was
not modified by anything that didn't happen during their "timeline").

In addition, my upgrade to QL's programming interface permits differentiation at
any point within the input stream. Although somewhat wobbly, this serves to show
how it's possible for even a deferred computation to polymorph the code - before
the parser has even arrived there, & without affecting any of the other quanta.
Or at least it will, if I can figure out how to apply it before the next time I
overhaul the code; otherwise, I think I'll drop it from the schematic, because
this input stream metaphor should probably be optimized due to large inputs.

The old Capsule Parser generator is still there, via the ::toTheLimit() method,
and the upgraded Ettercap Parser is generated by ::hattenArDin().

Here is an abridged comparison & contrast:
1. Stack nodes are generic; their place taken by hard-coded transition methods.
   Some alterations to stack node constructor argument vector, algorithm.
   There is more hard code and less heuristics. Actually, the whole parser could
   be generated as static code, and I hope to implement that sometime this year.
2. Input is now a linked list; previously an Array.
   Arrays of tokens are preprocessed to generate this list, which functions as
   a stream buffer with a sequential-access read head.
3. The input stream links forward and the parse stack links backward.
4. Stack extrema now interface with the input stream differentially; that is,
   the input stream itself is differentiated (obviating recursion), instead of
   the recursive virtual differentiation that occurred before.
5. Differentiation of the input stream occurs actually, not virtually, and can
   be programmed at any point in the input stream (although it's such a pain).

Here's an illustration of the extremum-to-read-head interface juncture:

                    -> [ EXTREMUM A ] -> [ READ HEAD A ] -> [ DIFFERENTIAL A ]
                   /                                               v
 -> [ PARSE STACK ] -> [ EXTREMUM B ] -> [ READ HEAD B ] -> [ INPUT BUFFER ] ->
                   \                                               ^
                    -> [ EXTREMUM C ] -> [ READ HEAD C ] -> [ DIFFERENTIAL C ]

Actually it is a little more complicated than that, but that is the basic idea.
As you can see, my parser models a Turing machine that sort of unravels itself
at the interface between what it has already seen and what it presently sees.
Then it zips itself back up when reduce/reduce conflicts resolve themselves.
I imagine it like a strand of DNA that is reconstituting itself in karyokinesis.

The extrema A through C are, actually, probably exactly the same extremum.
However, when the next parse iteration steps forward to the location within the
input stream that is pointed-to by the read head, the parse stack branches; so,
it's easiest to imagine that the extrema have already branched at the point of
interface, because if they haven't branched yet then they are about to.

Here's an illustration of timeline differentiation within the input stream:

                        -> [ DIFFERENTIAL A ] -.       -> [ DIFFERENTIAL C ]
                       /                   v---¯     /                       \
    -> [ READ HEAD(S) ] -> [ UNMODIFIED INPUT ] -> [ TOKEN W ] -> [ TOKEN Z ] ->
                       \                   ^-----------------------------------.
                        -> [ DIFFERENTIAL B ] -> [ DIFFERENTIAL B CONTINUES ] -¯

The parser selects differentials based on either of two criteria: the read head,
which points at the "beginning" of the input stream buffer as it is perceived by
the extremum in that timeline; or a timeline identifier ("thread ID") that tells
the algorithm when it should, for example, jump from token W to differential C
and therefore skip token Z. Thus, multiple inputs are "the same program."
Again: parser state becomes unzipped at differential points, then re-zips itself
after the differentiated timelines have resolved themselves by eliminating those
who encounter a parse error until only one possible timeline remains.

So your code can polymorph itself. In fact, it can polymorph itself as much as
you want it to, because the parser returns ambiguous parses as long as they have
produced the origin symbol during its finalization phase. However, I haven't yet
tested this and it probably causes more difficulty than it alleviates.

However, if you've experimented with AFURACC, you might be disappointed to see
that I have failed to write an interface for you to polymorph the grammar at run
time using QL. Probably that's possible to do, but the scaffold (as it is at the
moment) is rigid and inflexible. I want to work it into a future revision; but,
because a heuristic parser is more-or-less sine qua non to the whole idea of a
polymorphic parser, I'm pretty sure I'll stick to just polymorphing the code.

As I work to reduce the time complexity of the parser algorithm, similarity to
the Free Software Foundation's compiler-compiler, GNU Bison, becomes pronounced.
I'm trying to add some new ideas, but Bison is pretty much perfect as it is, so
I haven't had much luck - things that aren't Bison seem to break or complicate.
I still haven't heard anything from the FSF about this, so I guess it's OK?

However, the scaffold seems stable, so I'll work on the command-line interpreter
for a while and formalize its grammar. After that, probably yet another improved
static code generator, and if that doesn't malfunction I'll try video games IDK.

I intend to complete the formal commandline grammar within second quarter 2016.
Goals: command substitution, javascript substitution (a la 'csh'), environment
variables and shell parameter expansion, and shell arithmetic.
Additionally, if there's time, better loops and a shell scripting language.

Om. Wherefore ore? Om. Om. Ore… art thou ore?

Dynamic recompilers: gold? Maybe. But what is the significance of recompilation
in the fast-paced, high-tech world of today? Does, indeed, my idea present any
useful information -- or am I merely setting forth in search of lost CPU time?
What am I doing? What am I saying? Where am I going? What am I computing? I've
built the swing, but what about the swang? After the swang, whither the swung?
(Paraphrased from sketches performed in "Monty Python's Flying Circus.")

This lecture provides, in abstract, my concept design of a specialized Turing
machine: the compound compiler/disassembler. It is feasible via my published
work, Quadrare Lexema, which is in the public domain as is my essay.

(Actually, I think these are more appropriately called "dynamic recompilers.")

If you're just joining me, Quadrare Lexema is a Bison-like parser generator
written in the Javascript programming language for use in Web browsers (FF3.6).
It is portable to any browser employing a modern, standards-compliant Javascript
interpreter; such as Chromium, Chrome, Opera, and (hypothetically) Konqueror.
I have not yet had an opportunity to acquire a copy of Microsoft's Web browser,
Internet Explorer, because I've had no opportunity to acquire a copy of Windows
(and couldn't pay for it, even if I did). QL produces a parser like GNU Bison;
which is to say, an LALR parser for Backus-Naur Format context-free grammars.
For more information, visit the Free Software Foundation's Web site; and/or the
Wikipedia articles treating Backus-Naur Format, regular grammars, LR parsers,
and Turing machines.

Because I have already described my work with the hundreds of pages of code and
the few pages of abstract condensed lectures, this is curtate. However, observe
that my foregoing proof constructs the basis upon which this concept rests.
Besides my work, bachelors of computer science worldwide have demonstrated in
their own dissertations proofs similar to the one I have presented you by now.

Again: concept, not implementation. Toward that end lies the specification of a
Turing-complete virtual machine, wherein there be dragons. I will write you one
soon, but I am pending some subsequent blueprints on my acquisition of better
laboratory equipment. In the meantime, I'm pretty sure you can write one w/ QL.

Obviously: a compiler translates the syntax of a human-interface language into
machine instructions; parser generators can be employed to construct compilers;
and such generated parsers can be attached to other parsers in a daisy-chain.

So, make a compiler that converts the abstract instructions it reads into some
progressively simpler set of other instructions, by unshifting multiple tokens
back onto the input stream during the reduction step. You could either read them
again and thereby convert them to yet other computations, or simply skip-ahead
within the input stream until it has been entirely disassembled.

Consider: you're writing an artificial intelligence script, somewhat like this:
	FOR EACH actor IN village:
		WAKE actor,
		actor ACTS SCRIPT FOR TIME OF DAY IN GAME WORLD.
(This syntax is very similar to the A.I. language specified for the Official
 Hamster Republic Role-Playing Game Creation Engine (O.H.R.RPG.C.E.).)
Your context-free grammar is walking along der strasse minding its own business;
when, suddenly, to your surprise, it encounters the ambiguous TIME OF DAY symbol
and the parser does the mash. It does the monster mash. It does the mash, and it
is indeed a graveyard smash in the sense that your carefully-crafted artificial
intelligence -- the very stuff and pith of whose being you have painstakingly
specified as a set of atomic operations in an excruciating mathematical grammar
-- has now encountered a fatal syntax error and you would like nothing better
than to percuss the computer until it does what you say because you've used each
!@#$ free minute learning how to program this thing for the past fifteen !@#$
years and God Forfend(TM) that it should deny you the fruit of your labor.

Well, actually, the OHRRPGCE does solve that problem for you, which is nice; and
maybe it even solves it in exactly this way, which is cool; but I'll continue...
Let's say your parser lets you unshift multiple symbols upon reduction, like QL
(or like any parser that does a similar thing, of which there are surely many);
then "TIME OF DAY" could reduce to an algorithm that does nothing more than find
out what time of day it is and then unshift a symbol corresponding to the script
that most closely matches what time of day it is.
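As a hedged sketch (the schedule and every name here are invented; this is not
the OHRRPGCE's actual mechanism, merely the shape of the idea):

```javascript
// Invented schedule: map the current hour to a script symbol.
function timeOfDaySymbol(hour) {
    if (hour >= 6 && hour < 12) { return "MORNING_SCRIPT"; }
    if (hour >= 12 && hour < 18) { return "AFTERNOON_SCRIPT"; }
    return "NIGHT_SCRIPT";
}

// Reducing "TIME OF DAY" computes nothing itself: it merely unshifts the
// matching script's symbol back onto the input stream for the parser.
function reduceTimeOfDay(inputStream, hour) {
    inputStream.unshift(timeOfDaySymbol(hour));
    return inputStream;
}
```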

In other words, you've specified the possible computations that the language can
execute as a set of atomic operations, then decomposed the abstract instructions
into those atomic operations before executing them. The functionality is similar
to that of the C preprocessor's #include directive, which directs the CPP to add
the source code from the #included header verbatim like a copy-paste operation.
That's the most straightforward application of this thought, anyway: copy-and-
pasting artificial intelligence scripts into other AI scripts.

Another thought: let's say your compiler is, in addition to its named capacity,
also a virtual machine that's keeping track of state in some artificial world.
So, it's two virtual machines in one. Rather silly, I know, but why not follow
me a bit deeper into the Turing tar-pit where everything is possible and nothing
is NP-complete? After all, "What if?" makes everything meaningless, like magic.

So, again, it's trucking along and hits some instruction like "SLAY THE DRAGON;"
but the dragon is Pete, who cleverly hides from the unworthy. Now, you could say
just ignore this instruction and carry on with the rest of the script, and maybe
even try to slay the dragon again later after you have slung Chalupas out the TB
drive-through window for a few hours. You could even say travel back in time and
slay the dragon while there's yet a chance -- because wholesale violation of the
law of causality is okay when you can't afford to lose your heroic street cred.
But why not check the part of the script you've yet to execute, to see if maybe
there's something else you need to do while you're here, and then try doing it
while you keep an eye out in case Pete shows up?
I mean, you could write that check as part of the "SLAY" atom; or as part of the
next action in your script that happens after the "SLAY" atom; but why not write
it as part of what happens before the "SLAY" atom even executes?

Also, what if you wanted the compiler to try two entirely different sequences of
symbols upon computation of a symbol reduction? Or one reduction modifies the
input stream somehow, while another does not? Or to reorganize/polymorph itself?

All this and more is certainly possible. QL does a big chunk of it the wrong way
by gluing-together all sorts of stuff that could have been done by making your
symbols each a parser instead. Oh, and iterators in Icon also do it better. And
let us not forget the honorable Haskell, whose compiler hails from Scotland and
is nae so much a man as a blancmange. "Learn you a Haskell for great good!" --
this is both the title of a book and an amusing pearl of wisdom.

The only thing that bothers me about this idea is that the procedure necessary
to polymorph the code at runtime seems to be (a) something that I ought to be
doing by nesting context-free parsers, rather than writing a contextual one; and
(b) probably requires a heuristic algorithm, which would be awfully slow.

I haven't developed this idea very well and there's some space left.
Here is a verbatim excerpt with an example Haskell algorithm, for no reason:

  From The Free On-line Dictionary of Computing (30 January 2010) [foldoc]:

  Quicksort
  
     A sorting {algorithm} with O(n log n) average time
     {complexity}.
  
     One element, x of the list to be sorted is chosen and the
     other elements are split into those elements less than x and
     those greater than or equal to x.  These two lists are then
     sorted {recursive}ly using the same algorithm until there is
     only one element in each list, at which point the sublists are
     recursively recombined in order yielding the sorted list.
  
     This can be written in {Haskell}:
  
        qsort               :: Ord a => [a] -> [a]
        qsort []             = []
        qsort (x:xs)         = qsort [ u | u<-xs, u<x ] ++
                               [ x ] ++
                               qsort [ u | u<-xs, u>=x ]

     [Mark Jones, Gofer prelude.]

Now that I've burst my metaphorical payload, in regard to theory, I think future
lectures shall be of a mind to walk novices through some of the smaller bits and
pieces of the algorithms I've already written. There aren't many, but they're a
real pain to translate from source code to English (which is why I haven't yet).
I might also try to explain QL in further detail, but IDK if I can make it much
more straightforward than it is (no questions == no idea if you comprehend)...
and, besides, I've already written more than ten pages about it & it's tedious.

Depending on how much Pokemon I must play to clear my head after overhauling the
parser this year, and how much time is left after that to finish the shell's new
scripting language, I'll aim to write some of those simplified lectures by July.

States stacked on Of Stack & State to date.

New MLPTK development snapshot is available here, courtesy of Mediafire:
https://www.mediafire.com/?a71rpd8lp0o7st7 ( http://www.mediafire.com/download/a71rpd8lp0o7st7/mlptk-qlprospectusmilestone-1q2016.zip )
It's the best thing since the last edition. QL remains somewhat broken, however.
Parental discretion is advised, because some people get very upset when they see
illustrations of mature females whose mammaries are exposed. I'm not sure what
Happy the Smiling Cow thinks about the issue, though I imagine she doesn't mind.
Public domain source code and lectures. Free of charge.

(Spoilers: try reading the first word of each line. Solution near the middle.)

This installment of my spellbinding WordPress saga, like that foregoing, is a
brief treatment of part of my library, Quadrare Lexema. I'm sorry these are so
dull, and I'll try to deliver a lecture on binary trees or something between now
and when I expound upon some other uninteresting piece of software anatomy.

Boring further into the workings of this monstrous library; which, you'd agree,
is fit for nothing less than overhaul; the next significant algorithm is the
parser, which executes computations in response to input. QL employs numerous
acrobatics, of truly astounding profligacy, but the function of the parser is
largely similar to that of Bison: in fact, if you disregard all the useless
frippery that does nothing more than wrap a data structure around the 'menergy'
function, it is the same idea at its heart:
    1. Read a token.
    2. Consider whether the sequence in progress proceeds, blocks, or reduces.
    3. Compute the semantic value of the input, dependent on that consideration.
Simple, right?
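Those three steps, stripped of all the structural junk, look something like
this toy shift-reduce loop for sums of numbers (an invented grammar, not QL's):

```javascript
// Toy shift-reduce loop for the invented grammar: expr -> NUM | expr '+' NUM.
function parseSum(tokens) {
    const stack = [];
    for (const token of tokens) {            // 1. Read a token.
        stack.push(token);                   // 2. The sequence proceeds...
        // 3. ...and reduces whenever "NUM + NUM" sits atop the stack,
        //    computing the semantic value of that phrase.
        while (stack.length >= 3 &&
               typeof stack[stack.length - 1] === "number" &&
               stack[stack.length - 2] === "+" &&
               typeof stack[stack.length - 3] === "number") {
            const b = stack.pop();
            stack.pop();                     // discard the '+'
            const a = stack.pop();
            stack.push(a + b);               // reduction's semantic value
        }
    }
    return stack[0];
}
```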

Except for the structural junk, and manual recursion stacks because Javascript.

QL's parser is an unholy menagerie of object-oriented design patterns, arranged
haphazardly. The parse stack is a linked list, the reduction tokens are manually
constructed Array-like objects that point to themselves in the stack (until they
don't), and static code generation is still only a possibility. It's easy to
confuse the reduction routines, which aren't part of the parser, with the parse
step when reading them as written; also, I'm going to overhaul this library soon
with a simplified algorithm; so, for now, if you'd kindly take me at "it does
run in Firefox 3.6.13," I'll spend this year rewriting the library to be less
foolishly circumspect.

Like AFURACC, QL encodes the reduction routines as object constructors. (Forgive
me; it'll be less unlike Bison soon, the FSF's mighty bootloader permitting.)
The constructors create the "reduction deferral objects" I wrote of previously.
Parse proceeds as I wrote yet previously, in LR fashion, but: during reduction
phase, instead of unshifting the reduced token back onto the input stream, QL
describes a (broken) recursion that shifts the reduced token right away. It's
Hell on metaphorical wheels; I'm going to revise it out and sorry for the crap.

Abandon that thought a moment. Before reduction, a sequence must agglutinate. I
hope not to have differed much from Bison in that particular with this edition.
Succinctly:
    1. Agglutinate a new token onto each stack branch.
    2. Divergence due to syntax ambiguity is handled by deferring reductions.
    3. Eliminate divergent branches by discarding syntax errors.
       (This occurs if a sequence can transition or reduce in more than 1 way.)
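A minimal sketch of steps 1 and 3 together, with invented names: each divergent
branch agglutinates the new token, or is discarded as a syntax error.

```javascript
// Each "branch" is just an array of symbols; canStep is an invented
// oracle standing in for the state map's transition test.
function stepBranches(branches, token, canStep) {
    const survivors = [];
    for (const branch of branches) {
        if (canStep(branch, token)) {
            survivors.push(branch.concat(token)); // agglutinate onto branch
        }                                         // else: discard the branch
    }
    return survivors;
}
```

Run until one branch survives, then its deferred reductions may be computed.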

When menergy (the name of this function is derived from the mock commercial,
"PowerThirst," which received millions of hits at YouTube) is called by step, it
strikes each divergent stack extremum, to be replaced with a shift/reduce/block.

There's not much more to menergy; the parser theory lecture suffices. Spare me
laughter, and I'll write at some length about the parse stack and deferrals.



Here is the solution to the riddle:

"Next, it's parental illustrations; happy, public? Spoilers: this brief dull and
boring is. Parser acrobatics largely frippery; function: read, consider, compute
-- simple, except QL's haphazardly constructed. Don't confuse step with run
foolishly, like me. The parse phase describes Hell: abandon hope. Succinctly:
agglutinate; divergence: eliminate this. When PowerThirst strikes, there's
laughter."



And here is a condensed treatment of where my practice has differed from theory:

The parse stack is a branching backwards-linked list. Imagine this as a treelike
structure that is traversed from the extrema toward the root, rather than from
root to extrema as in "vanilla" trees. Nearly everything relevant to the parser,
and distinct from the state mapper, is described in the constructor returned by
method named "toTheLimit" (lines 1,308 through 2,038 of quadrare-lexema.js); the
parse node constructor is specified in pieces at lines 1,895-1,918 & 1,315-1,389
(boilerplate data) and 1,985-2,012 (instantiation). Parse nodes are elements of
the parse stack; translated from map nodes, they encode both the symbol on the
stack and the map paths thither and thence. Observe that there's not much to a
node besides the backward link, the symbol, and the precedence level within any
parser stack node: all the rest of that stuff is the parser's state data.
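A minimal sketch of that backward linkage, with invented names: nodes know only
their parent, so two extrema can share a spine toward the root.

```javascript
// A node of the branching backwards-linked parse stack: only the symbol
// and the backward link matter here (QL proper adds precedence & state).
function makeNode(symbol, back) {
    return { symbol: symbol, back: back };
}

// Traverse extremum-to-root, collecting the symbols along the spine.
function spineOf(extremum) {
    const symbols = [];
    for (let node = extremum; node != null; node = node.back) {
        symbols.unshift(node.symbol);
    }
    return symbols;
}
```

Two divergent branches are simply two extrema whose backward links meet at a
shared ancestor; nothing about the shared part is ever copied.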

Implementation of the parse stack as a tree is one means to permit the parser
to automatically solve some kinds of syntax conflicts by deferring reductions on
the parse stack, then waiting until only one extremum remains before traversing
it to execute the computations specified by the deferred reduction. To "defer" a
reduction is to withhold from computing its semantic value.

The deferral is represented on the parse stack by a placeholder token (I call
these "dummies", from a colloquialism that refers to mannequins) while divergent
parse branches are agglutinating; then, reduction is executed when all branches
except one have been eliminated by syntax errors.
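A minimal sketch of such a deferral, with invented names: the dummy records its
constituent tokens, and its semantic value is computed only when it's filled in.

```javascript
// A deferred reduction ("dummy"): a placeholder whose semantic value is
// withheld until only one parse branch survives, then filled in.
function makeDummy(constituents, compute) {
    return {
        constituents: constituents,
        value: undefined,
        fillIn: function () {      // called once a single extremum remains
            this.value = compute(this.constituents);
            return this.value;
        }
    };
}
```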

If you can't recall the first lecture, an LR parser behaves like this...
    1. Shift a token from input onto lookahead; the former LA becomes active.
    2. If the lookahead's operator precedence is higher, block the sequence.
    3. If the active token proceeds in sequence, shift it onto the sequence.
    4. Otherwise, if the sequence reduces, pop it and unshift the reduction.
    5. Otherwise, block the sequence and wait for the expected token to reduce.
... but, even though these steps appear to (and usually do) happen one after
another in exactly one order, mathematicians who'd like a challenge may consider
syntactical ambiguity too. For instance, there might be more than one way: for
the working token to proceed from the sequence in step 3 (called a "shift/shift
conflict"); for the sequence to reduce to one or more tokens in step 4 ("reduce/
reduce conflict"); or even for operator precedence/associativity to cause the
sequence to block or not in step 2. In addition, a sequence might block (step 5)
and not block simultaneously. These last are also shift/shift conflicts; a type
of syntactical ambiguity that tends to result in an infinite loop. "Shift/reduce
conflict" describes an impasse where the parser can't tell whether to shift a
symbol onto a sequence in progress or reduce the sequence further (because of
reasons a competent mathematician could explain to you, but I cannot). Finally,
if a blocking sequence produces multiple possible reductions, a shift/{shift,
reduce} conflict could result even when the reduce/reduce could be auto-solved.

For more information, consult the FSF's Bison Parser Algorithm (©). Again, if my
algorithm is at all an infringement, I'll delete it. Please let me know.
The Free Software Foundation has verified their algorithm precisely; where their
mathematics or descriptions conflict with mine, I recommend you favor theirs.

QL attempts to automatically solve some syntactical ambiguity by looking for a
contextual clue in the sequence that follows next. To do so, it defers reduction
of all symbols during the strike-replace loop: reduction objects are constructed
as deferment "dummies," which are filled-in with the semantic value of reduction
when only one stack-branch exists. Any of the above conflicts could branch stack
(and, by the way, this has nought to do with the parse stack's treelike format),
but at the moment QL simply chokes on everything except a reduce/reduce conflict
-- and I have not tested even this, except the math looks kinda-sorta passable.
Thus, at the present time, a sequence can (hypothetically) produce different
semantic values upon reduction, depending on which possible reduction fits into
the sequence that was encountered beforehand. Any such divergence is handled by
putting-off the computation of the reduction until all the branches except for
one have been eliminated by encountering a token that doesn't fit.

Footnote: I added non-standard associativity specifiers to the parser logic.
These ideas were so wrong-headed that I can't even explain how broken they are,
but here is what I intended them to do:
    mate    Mating tokens with the same precedence always cleave to each other.
            (Except not really, because that required too many special cases!)
    stop    Stopping tokens with lower precedence than the extremum prevent any
            precedence/associativity block when the working token is a stopper.
            (Except not really, because special cases!)
    wrong   Wrong-associative lookaheads don't cause a precedence block when the
            sequence preceding the working token was a right-associative block;
            and don't block on r-assoc when the lookahead is wrong-associative.
All three specifiers shall be removed from an upcoming revision; which
will be more similar to Bison than ever, and therefore more correct.

On Loggin’.

The post title, "On Loggin'," is a pun on the algorithmic time-complexity of
sorting with an ordered binary tree: O(n * log(n)), since each of the n
insertions is a seek of O(log(n)) average time.

This lecture provides my advice in regard to sorting data by using binary trees.
(From Bouvier's Law Dictionary, Revised 6th Ed (1856) [bouvier]:
 ADVICE, com. law. A letter [...] intended to give notice of [facts contained].)
My lectures are in the Public Domain, and are free of charge.
"By order of the Author." - Mark Twain, _Huckleberry Finn._

A tree is a directed acyclic graph. ("Tree" (1998-11-12) via FOLDOC 30jan2010.)
Other sort methods besides tree sort include quicksort, bubble sort, heap sort,
and insertion sort. ("Sort" (1997-02-12) via FOLDOC, 30th January 2010.)

This pattern can be seen almost everywhere that data are represented as discrete
notional objects: for example, the directory trees contained within the file
systems of your personal computer, the technology/skill trees that are likely to
present themselves in your favorite video game, and in any computer program that
maintains the logical order of a data set as new data are added. These aren't
necessarily treelike data structures; but, because the perceived "shape" of the
data is a tree, they're probably similar if coded in object-oriented language.
I imagine a file system as "what an object even is;" although a computer program
is volatile, whereas files are nonvolatile, the idea is much the same.
See also: object-oriented programming, procedural vs. functional languages, file
systems, {non,}volatile memory such as RAM and EEPROM, sort algorithms.

Of course, treelike structures aren't always the optimum schematic to represent
a hierarchical assortment of data; definitively, like all other object-oriented
designs, they're mere abstractions employed to delineate pieces of a contiguous
unidimensional memory space (the Turing model of a logic machine considers RAM
to be just a big empty pair of set brackets; data are populated & operated-upon
arbitrarily, but the container is merely a set) and to signify correlation.
An object is a collection of related data, which together comprise a single
compound datum.
See also: pointers, heap memory, set theory, discrete math, and Boolean logic.

¶ The following (til the next pilcrow) is nearly a verbatim excerpt from FOLDOC.
The data stored in a tree is processed by traversal. The algorithm begins at
root node, transforms or collects or computes against the data, and then repeats
itself for the root's child-nodes ("branches") in some specific order. Three
common traversal orders are pre-order, post-order, and in-order traversal.
For the binary tree illustrated below:
        T
       / \
      I   S
     / \
    D   E
A pre-order traversal visits the nodes in the order T I D E S.
A post-order traversal visits them in the order D E I S T.
An in-order traversal visits them in the order D I E T S.
¶ "Traversal" (2001-10-01) via FOLDOC, ed. 30th January 2010.
See also: recursion, the call stack (subroutine invocation), digraph traversal.
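Those three orders are easy to verify mechanically; here's a minimal sketch
(the node shape is invented) that walks the illustrated tree:

```javascript
// The illustrated tree: T has children I and S; I has children D and E.
const tree = {
    v: "T",
    l: { v: "I", l: { v: "D" }, r: { v: "E" } },
    r: { v: "S" }
};

// Missing children are undefined, which the guard turns into [].
function preOrder(n)  { return n ? [n.v].concat(preOrder(n.l), preOrder(n.r)) : []; }
function inOrder(n)   { return n ? inOrder(n.l).concat([n.v], inOrder(n.r)) : []; }
function postOrder(n) { return n ? postOrder(n.l).concat(postOrder(n.r), [n.v]) : []; }
```

The three walks spell out "TIDES", "DIETS", and "DEIST", respectively, as in
the FOLDOC excerpt above.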

To encode a tree walk via recursive algorithm is straightforward, and is how the
concept of data set traversal is introduced to novitiate computer programmers.
(Well, after they've learned how to loop over an array with "for.")

Succinctly:

    function put (pNode, value) {
        if (pNode.isEmpty()) { pNode.datum = value; }
        else if (value < pNode.datum) { put(pNode.bLeft, value); }
        else if (value > pNode.datum) { put(pNode.bRight, value); }
    } // Ordered insertion.

    function tell (pNode, value) {
        if (pNode.isEmpty()) { return null; } // test before comparing datum
        else if (value < pNode.datum) { return tell(pNode.bLeft, value); }
        else if (value > pNode.datum) { return tell(pNode.bRight, value); }
        return pNode;
    } // Seek pointer to inserted.
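Those fragments presume every empty child node already exists, so the recursion
never dereferences an undefined branch. A self-contained variant, with invented
names, that creates children on demand:

```javascript
// Self-contained put/tell: child nodes are created only when needed.
function makeTree() { return { datum: undefined, bLeft: null, bRight: null }; }

function put(pNode, value) {
    if (pNode.datum === undefined) { pNode.datum = value; }
    else if (value < pNode.datum) {
        if (pNode.bLeft === null) { pNode.bLeft = makeTree(); }
        put(pNode.bLeft, value);
    } else if (value > pNode.datum) {
        if (pNode.bRight === null) { pNode.bRight = makeTree(); }
        put(pNode.bRight, value);
    } // equal values are ignored: ordered insertion of unique data
}

function tell(pNode, value) {
    if (pNode === null || pNode.datum === undefined) { return null; }
    if (value < pNode.datum) { return tell(pNode.bLeft, value); }
    if (value > pNode.datum) { return tell(pNode.bRight, value); }
    return pNode; // pointer to the node holding the sought datum
}
```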

_Any_ recursive algorithm can be reduced to an iterative algorithm, provided you
can use variable-length arrays to simulate the function of the call stack, which
stores the arguments to the subroutine you invoked and the address to restore to
the instruction pointer when the subroutine returns. That is, the call stack is
like bookmarks or tabbed indices to tell the routine where it was before a jump.
Replace the "instruction pointer and arguments" model of a stack frame with any
constant-width data set that remembers what you were doing before you jumped to
the next node in the digraph, and, naturally, you can remember the same stuff as
if you had utilized the convenient paradigm that is recursivity. (Stack frames
don't really all have to be the same size, but a definite frame width spares you
from doing yet another algorithm to parse frames. See: network protocol stacks.)

The "stuff you remember" is the state of the finite state automaton who tends to
the mechanism whereby the machine knows which instruction to execute next.
Recursion provides this automaton "for free," because it crops up so frequently;
but, for whatever reason (such as the 3000-call recursion limit in Firefox 3.6),
you might want to write a tree sort without using recursion at all.

Gentlemen, behold...
    corm = GenProtoTok(
        Object,
        function () {
            this.tree = new this.Hop();
            return this;
        },
        "Hopper",
        undefined,

        "locus", undefined,

        "seek", function (u_val) {
            while (this.locus[2] != undefined) {
                if (u_val > this.locus[2]) { this.locus = this.hop(1); }
                else if (u_val == this.locus[2]) { return false; }
                else { this.locus = this.hop(0); } // u < loc[2] || u == undef
            }
            return true;
        }, // end function Hopper::seek

        "tellSorted", function () {
            if (this.tree == undefined) { return undefined; }
            var evaluation = new Array();
            for (var orca = new Array(this.tree), did = new Array(false);
                orca.length>0;
            ) {
                this.locus = orca.pop();
                if (did.pop()) {
                    evaluation.push(this.locus[2]);
                    if (this.locus[1] != undefined) {
                        orca.push(this.locus[1]);
                        did.push(false);
                    }
                } else {
                    orca.push(this.locus);
                    did.push(true);
                    if (this.locus[0] != undefined) {
                        orca.push(this.locus[0]);
                        did.push(false);
                    }
                }
            }
            return evaluation;
        }, // end function Hopper::tellSorted

        "hop", function (index) {
            if (this.locus[index]==undefined) {
                this.locus[index] = new this.Hop();
            }
            return this.locus[index];
        }, // end function Hopper::hop

        "addUnique", function (value) {
            this.locus = this.tree;
            if (this.seek(value)) { this.locus[2] = value; return true; }
            return false;
        }, // end function Hopper::addUnique

        "Hop", GenPrototype(Array, new Array())
    );
... corm!

Here is how corm works: it's an Array retrofitted with a binary tree algorithm.

Each node of the tree is an Array whose members, at these indices, signify:
    0. Left-hand ("less-than") branch pointer.
    1. Right-hand ("greater-than") branch pointer.
    2. Value stored in the node.

Values are added via the corm object's addUnique method, which resets a pointer
to the algorithm's location in the tree and then seeks a place to put the datum.
Seeking causes a node to be created, if no node yet contains the datum; so, the
corm::seek method's name is slightly misleading: it seeks, but it also builds
the path it seeks along.

In-order traversal is executed by corm::tellSorted which returns the sorted set.
Recursion is simulated using a state stack whose frames each contain a pointer
to the "previous" node and a boolean value that signifies whether its left child
has been visited. Here is the main loop, step-by-step, in natural English:
    for (var orca = new Array(this.tree), did = new Array(false);
        orca.length>0;
    ) {
"Given the stack, which I consider until it is exhausted..."
        this.locus = orca.pop();
"I am here." -^
        if (did.pop()) {
"If I did the left branch already,"
            evaluation.push(this.locus[2]);
"then this value occurs at this location in the ordered set;"
            if (this.locus[1] != undefined) {
"if the right branch exists,"
                orca.push(this.locus[1]);
"visit the right child on the next loop iteration"
                did.push(false);
"(and, naturally, I haven't visited the right child's left child yet)"
            }
"."
        } else {
"If I haven't done the left branch already,"
            orca.push(this.locus);
"then I have to visit this node again when I come back"
            did.push(true);
"(except I shall have visited the left child by then);"
            if (this.locus[0] != undefined) {
"if the left branch exists,"
                orca.push(this.locus[0]);
"visit the left child on the next loop iteration"
                did.push(false);
"(and, naturally, I haven't visited the left child's left child yet)"
            }
"."
        }
"... I consume pointers and bools, doing all that there stuff"
    }
"."

And that is how to do everything I did with the Hopper data structure
(which is here renamed corm, because I like corm) in quadrare-lexema.js.
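Stripped of the GenProtoTok machinery, the same two-stack in-order walk can be
written standalone (node format as above: index 0 = left, 1 = right, 2 = value):

```javascript
// Standalone two-stack in-order traversal, after Hopper::tellSorted.
function tellSorted(tree) {
    const evaluation = [];
    const orca = [tree];    // node stack
    const did = [false];    // parallel stack: "left branch done?"
    while (orca.length > 0) {
        const locus = orca.pop();
        if (did.pop()) {
            evaluation.push(locus[2]);             // visit the node itself
            if (locus[1] !== undefined) {          // then its right child
                orca.push(locus[1]); did.push(false);
            }
        } else {
            orca.push(locus); did.push(true);      // revisit after left child
            if (locus[0] !== undefined) {
                orca.push(locus[0]); did.push(false);
            }
        }
    }
    return evaluation;
}
```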



Full disclosure: whereas; although I do put on women's clothing; and although
indeed I sleep all night and work all day; and, in addition, despite that I have
been known, of an occasion: to eat my lunch, go to the lavatory, pick wild
flowers, hang around in bars, and want to be a girlie; my Wednesdays have not
historically been known to involve shopping or buttered scones, I don't know how
to press a flower, I have neither suspenders nor heels nor a bra (but thanks for
offering), and am not, in fact, a lumberjack.

Of Stack & State.

Next MLPTK development snapshot is available here, courtesy of Mediafire:
http://www.mediafire.com/download/476ynd7ee7cla07/mlptk-qlds16feb2016.zip
My manuscript describes some fascinating new ideas worth their weight in paper.
Parental discretion is advised, until I can excerpt the swearing and easter-egg.

This discourse follows the one I issued previously, which explained the theory
behind LALR parsers. Now I shall write of my library, Quadrare Lexema, which is
the work I've been engineering so that I can teach myself about that theory.

Beware that this algorithm is inefficient and a poor teaching example.
Also, because I have described it so briefly, this post is not yet a lecture.
As with the foregoing, it's public domain. So is my software.

A general-format parser (i.e.: a parser written to parse any grammar, as opposed
to parsers that are written by hand and whose grammar is fixed; or, in yet other
words, a soft-coded parser as opposed to a hard-coded one) must be capable of
reading a formal grammar. A formal grammar specifies how the input sequence is
to be agglutinated to produce semantic meaning via iterative computation of the
"phrases" encountered as the input is consumed. In an LR parser, this could be a
one-dimensional linked list or tree; or, if you're better at math than I, simply
a recursive function that generates the parser's static code as it reads rules.

I designed QL to generate a circularly-linked tree, in the hope I could rewrite
it to be more efficient later. The tree encodes, essentially, the conditional
expressions necessary for the parser to know which state it enters next when it
encounters a new input datum. In other words, "where do I go from here?" Later,
the tree is spidered by the parser, but that's another lecture. BTW, a spider is
a program that walks a path through {hyper,}linked data; i.e., spider = fractal.

QL reads a grammar that is very similar to Backus-Naur Form, or BNF, which is
the format read by GNU Bison. (For more information, consult the FSF or Wikipe.)
There are some non-deterministic extensions and other such frippery in my script
but the format of a grammar rule is mostly the same as BNF:
    - Symbol identifiers match a symbol w/ that ID at that position in sequence.
    - The "|", or OR operator, signifies either one symbol or another etc.
    - Parentheses are used to nest mutually exclusive ("OR"ed) subsequences.
    - Operator precedence and associativity is specified globally. (Non-BNF???)
The nonstandard extensions permit precedence and associativity to be specified
per-rule; arbitrary-length look{ahead,behind} assertions; & new specifiers for
operator precedence. Of these, only the new precedences are operational -- and
they both tend to break if not handled with nitroglycerine-quality caution.
There are a few other knobs and levers that change the behavior of the parser in
subtle ways, but these have naught to do with the state mapping phase.

Each grammar rule is specified as a duplet comprising a string (sequence of IDs)
and an object constructor (the reduction deferral object). The former element of
the duplet is significant to the state mapper; the latter, to the parser itself.

The rules are tokenized with a naive scanner; it changes non-operator, non-space
substrings to objects that represent nodes in the state map, passing the stream
of mixed tokens (nodes) & character literals (operators) to another method that
implements a "recursive" algorithm to attach each node to those succeeding it.
(It's a manual-transmission stack, but tantamount to recursion.) Succinctly:
    Beginning at locus zero, the root state whence all other states proceed,
    Either handle the operators:
    1. If this item of the token/operator stream is '(', increase stack depth.
    2. Else, if it's '|', the mediate (present) loci are stored @ terminal loci
       (that is, the "split ends" of the subsequence), & mediate loci cleared.
    3. Else if it's ')', decrease stack depth & the loci termini are catenated
       onto the mediate loci: i.e., the working loci now point at the nodes who
       signify each end of each branch in the foregoing parenthetical sequence.
    Otherwise,
    4. The identifier at this location in the rule string, which is a map node,
       is entered into the map at some index; this index is keyed (by the ID) in
       to the hash tables of all nodes corresponding to present loci.
    5. The new map entry becomes the new mediate locus. Or, possibly, an entry
       was entered separately for each present loci; in which case the new map
       entries become the new mediate loci.
    6. Proceed to next token in map path specification, or return mediate loci.
Note that the operators are less problematic when implemented w/ recursion; but,
because Javascript's maximum recursion in Firefox 3 is 3,000 calls, I didn't.

Here's an illustration of what the state map looks like:
                                | ROOT |
                               /   |   \
                     | NUMBER | | ABC | | XYZ |
                    /    |     \
               | + |   | - |  | * |  ... etc ...
              /     \    |      |
    | NUMBER | |STRING| |NUM| |NUM| ("NUM" is == NUMBER, but I ran out of space)
At each node, the sequence that's agglutinated by then can either reduce or not;
that, again, is the parser; I haven't space in this essay to explain it, so
let the foregoing description suffice.

I glossed-over the details to make the procedure more obvious. Again succinctly,
    - When the scanner produces map nodes, it also encodes some data in regard
      to lookahead/lookbehind predicates; and certain special-format identifiers
      such as "¿<anonymous Javascript function returning true or false>?" & ".",
      which I don't presently use and are either broken or unstable.
    - In transition between steps 4 and 5 above, if a node at some locus already
      has the map rule's identifier keyed in, the new path could be merged with
      the old one; in fact, it is, for now, but I'll change it later...
    - A backward link to the previous node, as well as the identifier of the
      eventual reduction of the sequence, is added in step 4.
    - Assorted other state data is added to the map retroactively, such as
      which tokens can block the parser and when; where an illegal reduction @
      state 0 can instead be made a node-skip elsewhere in the map; & so much
      other frippy stuff that I can't even remember and hope to revise out of
      the schematic.
      ("Frippy," cf. "Frippery," which is garish or cast-off finery.)
As you can see, the state mapper tries to do far too much.

Observe that the "manual-transmission stack," which I wrote by hand, encodes a
recursive algorithm that looks something like this in pseudocode:
    recursive function computeMapExpansion (Token_Stream) {
        declare variable Radices, to encode mutually exclusive subsets.
        // ("Radix" means "root;" here, the beginning of the tree subset.)

        declare variable Brooms, an array containing the "split ends" of the map
            subsequence that began at Radix.
        // (From "Witch's Broom," a crotch near the end of a tree branch.)

        for each token in Token_Stream {
            if the token is '(' {
                gather the parenthetical subsequence in some variable, eg Subseq

                duplet (SubRadices, SubBrooms) = computeMapExpansion(Subseq);

                attach each SubRadix as a next node after each current Broom.

                update Brooms (Brooms[current_brooms] = SubBrooms).
            } else, if the token is '|' {
                begin a new Radix and Brooms pair.
            } else {
                attach map node as next node following each current Broom,
                and update Brooms.
            }// end if-elif-else (parens operator? Mutex operator? Or map node?)
        } // end for (over token stream)

        return duplet containing Radices and Brooms;
    } // end function computeMapExpansion
Obviously that does the same thing, via the function invocation stack.
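For the curious, here is a runnable approximation of that pseudocode. It is my
own reconstruction under stated assumptions, not the hand-written stack in QL:
tokens arrive as an array of strings, map nodes are plain { id, next } objects,
and parentheses in the rule are balanced.

```javascript
// Recursive sketch of computeMapExpansion; assumes balanced parentheses.
function computeMapExpansion(tokens) {
    var radices = [];   // first node of each '|'-separated alternative
    var allBrooms = []; // "split ends" gathered across all alternatives
    var brooms = [];    // split ends of the alternative in progress
    var i = 0;
    function attach(nodes) {
        // Attach nodes after every current Broom, or root new Radices.
        if (brooms.length === 0) radices = radices.concat(nodes);
        else brooms.forEach(function (b) {
            nodes.forEach(function (n) { b.next[n.id] = n; });
        });
    }
    while (i < tokens.length) {
        var tok = tokens[i];
        if (tok === "(") {
            // Gather the parenthetical subsequence, balancing nested parens.
            var depth = 1, subseq = [];
            for (i++; depth > 0; i++) {
                if (tokens[i] === "(") depth++;
                else if (tokens[i] === ")") depth--;
                if (depth > 0) subseq.push(tokens[i]);
            }
            var sub = computeMapExpansion(subseq);
            attach(sub.radices);
            brooms = sub.brooms;
        } else if (tok === "|") {
            // Begin a new Radix and Brooms pair.
            allBrooms = allBrooms.concat(brooms);
            brooms = [];
            i++;
        } else {
            var node = { id: tok, next: {} };
            attach([node]);
            brooms = [node];
            i++;
        }
    }
    return { radices: radices, brooms: allBrooms.concat(brooms) };
}
```

Expanding a rule like "NUMBER ( + | - ) NUMBER" with this sketch yields one
radix (the first NUMBER) whose "+" and "-" branches share a single trailing
NUMBER node, which is the path-merging behavior noted above.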

After the map is prepared, the parser uses it as a state configuration while it
reads input tokens. I plan to brief you on the parser in my next post.

Because some space is left, here are the relevant parts of quadrare-lexema.js:

    "addPath", function (atomized_rule, reduction_id) {
        // exactly the algorithm described above.
    },

    "_whither", function (present_state, next_symbol) {
        // essentially, "can the map node at <state> proceed to <next_symbol>?"
    },

    "reRefactor", function () {
        // call <refactor> for each illegal reduction removed from the root node
    },

    "refactor", function (reduction_id) {
        // essentially exactly the same algorithm as addPath; walks the tree to
        // factor the specified identifier out of any rules in which it appears.
    },

    "deriveTerminalsAndNon", function () {
        // separate which symbols are reductions, and which are from tokenizer.
    },

    "tellReductionTracks", function (initial_symbol_id) {
        // "if <initial_id> proceeds from state 0, to what does it reduce?"
    },

    "derivePotentialReductions", function (init_id) {
        // "what reductions could happen via compound reduction of sequence?"
    },

    "deriveAllPotentialReductions", function () {
        // "do that one up there a bunch of times."
    },

    "attachBlockingPredicates", function () {
        // "at any node in the map; if the parser must block here, and given the
        //  symbol that shall appear first in the sequence that begins when the
        //  parser blocks, can any potential reduction thence be shifted onto
        //  this sequence in progress which is blocking at this node?"
    },

    "StateNode", GenProtoTok(
        // metric tons of crap
    ),