States stacked on Of Stack & State to date.

New MLPTK development snapshot is available here, courtesy of Mediafire: ( )
It's the best thing since the last edition. QL remains somewhat broken, however.
Parental discretion is advised, because some people get very upset when they see
illustrations of mature females whose mammaries are exposed. I'm not sure what
Happy the Smiling Cow thinks about the issue, though I imagine she doesn't mind.
Public domain source code and lectures. Free of charge.

(Spoilers: try reading the first word of each line. Solution near the middle.)

This installment of my spellbinding WordPress saga, like that foregoing, is a
brief treatment of part of my library, Quadrare Lexema. I'm sorry these are so
dull, and I'll try to deliver a lecture on binary trees or something between now
and when I expound upon some other uninteresting piece of software anatomy.

Boring further into the workings of this monstrous library; which, you'd agree,
is fit for nothing less than overhaul; the next significant algorithm is the
parser, which executes computations in response to input. QL employs numerous
acrobatics, of truly astounding profligacy, but the function of the parser is
largely similar to that of Bison: in fact, if you disregard all the useless
frippery that does nothing more than wrap a data structure around the 'menergy'
function, it is the same idea at its heart:
    1. Read a token.
    2. Consider whether the sequence in progress proceeds, blocks, or reduces.
    3. Compute the semantic value of the input, depending on that consideration.
Simple, right?

Except for the structural junk, and manual recursion stacks because JavaScript.
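For the skeptical, those three steps really are the whole idea. Here's a toy shift-reduce loop of my own devising -- not QL's code; the grammar (sums of numbers) and every name in it are illustrative assumptions:

```javascript
// A minimal shift-reduce sketch of the read/consider/compute cycle above.
// Grammar: E -> E "+" n | n, with the running sum as the semantic value.
function parseSum(tokens) {
  const stack = [];               // parse stack of { sym, val } pairs
  const input = tokens.slice();
  while (input.length || stack.length > 1) {
    // 1. Read a token (the lookahead), if any remain.
    const look = input[0];
    // 2. Consider: does the sequence proceed, block, or reduce?
    const top = stack[stack.length - 1];
    if (top && top.sym === "n") {
      // 3. Reduce n -> E, computing the semantic value.
      stack.pop();
      stack.push({ sym: "E", val: top.val });
    } else if (stack.length >= 3 && stack[stack.length - 1].sym === "E"
               && stack[stack.length - 2].sym === "+") {
      // 3. Reduce E "+" E -> E: sum the operands.
      const right = stack.pop();
      stack.pop();                // discard the "+"
      const left = stack.pop();
      stack.push({ sym: "E", val: left.val + right.val });
    } else if (look !== undefined) {
      // Shift the lookahead onto the sequence.
      input.shift();
      stack.push(typeof look === "number" ? { sym: "n", val: look }
                                          : { sym: "+", val: null });
    } else {
      break;                      // blocked with no input left: done
    }
  }
  return stack[0].val;
}
```

Calling `parseSum([1, "+", 2, "+", 3])` walks exactly that cycle and yields 6.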

QL's parser is an unholy menagerie of object-oriented design patterns, arranged
haphazardly. The parse stack is a linked list, the reduction tokens are manually
constructed Array-like objects that point to themselves in the stack (until they
don't), and static code generation is still only a possibility. It's easy to
confuse the reduction routines, which aren't part of the parser, with the parse
step when reading them as written; also, I'm going to overhaul this library soon
with a simplified algorithm; so, for now, if you'd kindly take me at "it does
run in Firefox 3.6.13," I'll spend this year rewriting the library to be less
foolishly circumspect.

Like AFURACC, QL encodes the reduction routines as object constructors. (Forgive
me; it'll be less unlike Bison soon, the FSF's mighty bootloader permitting.)
The constructors create the "reduction deferral objects" I wrote of previously.
Parse proceeds as I wrote yet previously, in LR fashion, but: during reduction
phase, instead of unshifting the reduced token back onto the input stream, QL
describes a (broken) recursion that shifts the reduced token right away. It's
Hell on metaphorical wheels; I'm going to revise it out and sorry for the crap.

Abandon that thought a moment. Before reduction, a sequence must agglutinate. I
hope not to have differed much from Bison in that particular with this edition.
    1. Agglutinate a new token onto each stack branch.
    2. Divergence due to syntax ambiguity is handled by deferring reductions.
    3. Eliminate divergent branches by discarding syntax errors.
       (This occurs if a sequence can transition or reduce in more than 1 way.)
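Here is a hypothetical miniature of those three rules -- a flat-array stand-in for QL's linked-list branches, with an invented transition table; none of these names are quadrare-lexema.js's:

```javascript
// Each branch is an array of symbols. step() agglutinates the token onto
// every branch in each legal way (divergence), and discards branches where
// the token fits no way (syntax-error pruning).
function step(branches, token, transitions) {
  const next = [];
  for (const branch of branches) {
    const tip = branch[branch.length - 1] || "start";
    // A token may be legal in more than one way: the branch diverges.
    for (const state of transitions(tip, token)) {
      next.push(branch.concat(state));   // agglutinate onto a copy
    }
    // ...or legal in no way at all: the branch is silently discarded.
  }
  if (next.length === 0) throw new SyntaxError("token fits no branch: " + token);
  return next;
}
```

An ambiguous token multiplies branches; a later token that fits only one reading prunes the rest; if nothing survives, that's a genuine syntax error.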

When menergy (the name of this function is derived from the mock commercial
"PowerThirst," which received millions of hits on YouTube) is called, it
strikes each divergent stack extremum, replacing it with a shift, a reduce, or a block.

There's not much more to menergy; the parser theory lecture suffices. Spare me
laughter, and I'll write at some length about the parse stack and deferrals.

Here is the solution to the riddle:

"Next, it's parental illustrations; happy, public? Spoilers: this brief dull and
boring is. Parser acrobatics largely frippery; function: read, consider, compute
-- simple, except QL's haphazardly constructed. Don't confuse step with run
foolishly, like me. The parse phase describes Hell: abandon hope. Succinctly:
agglutinate; divergence: eliminate this. When PowerThirst strikes, there's laughter."

And here is a condensed treatment of where my practice has differed from theory:

The parse stack is a branching backwards-linked list. Imagine this as a treelike
structure that is traversed from the extrema toward the root, rather than from
root to extrema as in "vanilla" trees. Nearly everything relevant to the parser,
and distinct from the state mapper, is described in the constructor returned by
the method named "toTheLimit" (lines 1,308 through 2,038 of quadrare-lexema.js); the
parse node constructor is specified in pieces at lines 1,895-1,918 & 1,315-1,389
(boilerplate data) and 1,985-2,012 (instantiation). Parse nodes are elements of
the parse stack; translated from map nodes, they encode both the symbol on the
stack and the map paths thither and thence. Observe that there's not much to a
node besides the backward link, the symbol, and the precedence level within any
parser stack node: all the rest of that stuff is the parser's state data.
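To make the backwards-linked picture concrete, here is a minimal sketch of a parse node and an extremum-to-root traversal. The field and function names are mine, not those of quadrare-lexema.js:

```javascript
// A parse node knows only its backward link (toward the root), its symbol,
// and its precedence level -- nothing else, per the description above.
function makeNode(back, symbol, precedence) {
  return { back: back, symbol: symbol, precedence: precedence };
}

// Traversal runs from an extremum toward the root -- the opposite of a
// "vanilla" tree walk -- then reverses, to report symbols root-first.
function symbolsFromExtremum(node) {
  const out = [];
  for (let n = node; n !== null; n = n.back) out.push(n.symbol);
  return out.reverse();
}
```

Two extrema built on the same parent share their entire tail back to the root, which is what lets divergent branches coexist cheaply.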

Implementation of the parse stack as a tree is one means to permit the parser
to automatically solve some kinds of syntax conflicts by deferring reductions on
the parse stack, then waiting until only one extremum remains before traversing
it to execute the computations specified by the deferred reduction. To "defer" a
reduction is to withhold from computing its semantic value.

The deferral is represented on the parse stack by a placeholder token (I call
these "dummies", from a colloquialism that refers to mannequins) while divergent
parse branches are agglutinating; then, reduction is executed when all branches
except one have been eliminated by syntax errors.
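A sketch of such a placeholder, assuming an invented API (QL's real dummies live on the stack as Array-like objects; this is only the deferral idea in miniature):

```javascript
// A deferral "dummy": it sits on the stack while branches diverge, and
// computes its semantic value only when asked -- i.e., after pruning has
// left a single extremum.
function makeDummy(compute) {
  return {
    resolved: false,
    value: undefined,
    resolve: function () {
      if (!this.resolved) {
        this.value = compute();   // the withheld semantic computation
        this.resolved = true;
      }
      return this.value;
    }
  };
}
```

Until resolve() is called, the dummy occupies a stack slot without committing to any semantic value.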

If you can't recall the first lecture, an LR parser behaves like this...
    1. Shift a token from input onto lookahead; the former LA becomes active.
    2. If the lookahead's operator precedence is higher, block the sequence.
    3. If the active token proceeds in sequence, shift it onto the sequence.
    4. Otherwise, if the sequence reduces, pop it and unshift the reduction.
    5. Otherwise, block the sequence and wait for the expected token to reduce.
... but, even though these steps appear to (and usually do) happen one after
another in exactly one order, mathematicians who'd like a challenge may consider
syntactical ambiguity too. For instance, there might be more than one way: for
the working token to proceed from the sequence in step 3 (called a "shift/shift
conflict"); for the sequence to reduce to one or more tokens in step 4 ("reduce/
reduce conflict"); or even for operator precedence/associativity to cause the
sequence to block or not in step 2. In addition, a sequence might block (step 5)
and not block simultaneously. These last are also shift/shift conflicts; a type
of syntactical ambiguity that tends to result in an infinite loop. "Shift/reduce
conflict" describes an impasse where the parser can't tell whether to shift a
symbol onto a sequence in progress or reduce the sequence further (because of
reasons a competent mathematician could explain to you, but I cannot). Finally,
if a blocking sequence produces multiple possible reductions, a shift/{shift,
reduce} conflict could result even when the reduce/reduce could be auto-solved.
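Step 2 of the numbered list above -- the precedence/associativity block -- can be sketched as a single decision function. The table and names here are illustrative assumptions in the Bison style, not QL's internals:

```javascript
// Decide whether the lookahead operator blocks the working sequence
// (i.e., the lookahead's subsequence must reduce first) or the sequence
// should reduce now.
const PREC  = { "+": 1, "-": 1, "*": 2, "/": 2 };
const ASSOC = { "+": "left", "-": "left", "*": "left", "/": "left" };

function shouldBlock(topOp, lookaheadOp) {
  if (PREC[lookaheadOp] > PREC[topOp]) return true;      // lookahead binds tighter
  if (PREC[lookaheadOp] === PREC[topOp]
      && ASSOC[lookaheadOp] === "right") return true;    // right-assoc defers
  return false;                                          // otherwise reduce first
}
```

With "1 + 2 * 3", the "+" sequence blocks when "*" arrives, so the multiplication reduces first; with "1 + 2 + 3", left associativity means no block, and the first sum reduces immediately.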

For more information, consult the FSF's Bison Parser Algorithm (©). Again, if my
algorithm is at all an infringement, I'll delete it. Please let me know.
The Free Software Foundation has verified their algorithm precisely; where their
mathematics or descriptions conflict with mine, I recommend you favor theirs.

QL attempts to automatically solve some syntactical ambiguity by looking for a
contextual clue in the sequence that follows next. To do so, it defers reduction
of all symbols during the strike-replace loop: reduction objects are constructed
as deferment "dummies," which are filled in with the semantic value of the
reduction when only one stack-branch exists. Any of the above conflicts could branch the stack
(and, by the way, this has nought to do with the parse stack's treelike format),
but at the moment QL simply chokes on everything except a reduce/reduce conflict
-- and I have not tested even this, except the math looks kinda-sorta passable.
Thus, at the present time, a sequence can (hypothetically) produce different
semantic values upon reduction, depending on which possible reduction fits into
the sequence that was encountered beforehand. Any such divergence is handled by
putting off the computation of the reduction until all the branches except for
one have been eliminated by encountering a token that doesn't fit.

Footnote: I added non-standard associativity specifiers to the parser logic.
These ideas were so wrong-headed that I can't even explain how broken they are,
but here is what I intended them to do:
    mate    Mating tokens with the same precedence always cleave to each other.
            (Except not really, because that required too many special cases!)
    stop    Stopping tokens with lower precedence than the extremum prevent any
            precedence/associativity block when the working token is a stopper.
            (Except not really, because special cases!)
    wrong   Wrong-associative lookaheads don't cause a precedence block when the
            sequence preceding the working token was a right-associative block;
            and don't block on r-assoc when the lookahead is wrong-associative.
All three associativity specifiers shall be removed from an upcoming revision,
which will be more similar to Bison than ever, and therefore more correct.

Be careful what you say. The U.S.A. is not as free a country as you have been led to believe.
