Tag Archives: Methodology

Essays regarding computer programming in practice.

Cheroot Privileges: a Potpourri of Pointlessness.

Cheroot (Tamil "shuruttu" meaning "a roll"): a cigar. Reputed to be pungent.
chroot (GNU coreutils, manual section 8): run command in special root directory.
Potpourri: a compost heap, montage, medley, or ragout. NB: never compost meat.
Root privileges: to have these is to be the super-user, operator, admin, etc.
Root: a dental nerve, et c.



My foregoing post touched on socket programming, when I mentioned TFTP. (BTW, MS
Windows has a TFTP client built in: in the Programs & Features control panel
applet, open "Turn Windows features on or off.")

Sockets are a hardware abstraction layer that deals with computer networking.
As usual, gritty details are beyond me and I gloss them over. (Tee hee. That's a
pun about oyster pearls.) Suffice to say that sockets are ports of call for data
transmitted between computers: hardware and protocol notwithstanding, bytes fly
out one socket and land in another. We built this Internet on socket calls.
(A pun on Jefferson {Airplane,Starship}'s "We Built This City.")

For more information, consult the RFCs, and the IEEE's 802.* network specs.
Perhaps ftp.rfc-editor.org, www.faqs.org/rfcs, or www.ietf.org/rfc are of use?
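
If you'd like to see the "bytes fly out one socket and land in another" trick in
miniature, here's a sketch using Node.js's dgram module (standard Node, nothing
to do with MLPTK; the port number and message text are arbitrary choices):

    // Minimal UDP echo pair in Node.js. Port 41234 and the text are arbitrary.
    var dgram = require("dgram");

    var server = dgram.createSocket("udp4");
    server.on("message", function (msg, rinfo) {
        console.log("server got '" + msg + "' from " + rinfo.address + ":" + rinfo.port);
        server.send(msg, rinfo.port, rinfo.address);   // echo the bytes back
    });
    server.bind(41234);                                // bytes land in this socket...

    var client = dgram.createSocket("udp4");
    client.on("message", function (msg) {
        console.log("client got '" + msg + "' back");
        client.close(); server.close();
    });
    client.send("hello, socket", 41234, "127.0.0.1");  // ...after flying out of this one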

And an update to my Javascript snippet in the remedial lecture...
    function initnary (ctr) { for (var i=0; i < ctr.length; ctr[i++] = 0); }
    function incnary (counter) { // Faster, but rollover instead of sentry.
        for (var L = counter.length, i = L - 1;
             (counter[i] = ((counter[i--] + 1) % L)) == 0 && i > -1;
        ) ; // Rolls over to all zeroes instead of signaling exhaustion.
    } // end incnary(): Increments N-ary counter (length >= 1), by reference
    // ...
    var nary = new Array(items.length);
    initnary(nary); // nary's state: 0 0 0 ... 0
    incnary(nary); // nary's state: 0 0 0 ... 1
    // ...
... which is possibly a bit faster than the other one, although neither will be
optimized by an optimizing compiler (due to the complicated loop initializer), &
therefore both are of marginal utility.



It's 2017. To begin my new year on the right foot, I began it on the wrong foot.

My first hint that I'd need to effect some impromptu renovations to my skeleton
came to me when I noticed that I had begun to experience an unpleasant taste of
musty dust after picking clean my right anterior maxillary tricuspid. (The reason
why shattered teeth taste of moist chalk is probably because dentine & chalk are
both calcareous substances. I'd guess chalk rots too, if infected.) Another way
I could describe the taste of a rotten tooth is "like hard-boiled eggs that were
rotten before they were boiled," because they smell and taste alike. The dentine
(the material composing the interior of teeth) also feels distinctly like chalk,
or like gritty soil, when I palpate it with my tongue.

Anyway, my left anterior mandibular tricuspid has also been a goner since auld
lang syne, and the bone fragments left over inside my gums have really begun to
bug me, so a taste of fetor was the last straw.

Luckily, I had a small piece of surgical gauze left over from when I foolishly
had my wisdom teeth removed. (If you're considering removal of yours, then I am
here to tell you: DON'T! It's a waste of money, and, unless your teeth are truly
rotten or a source of pain, there is simply _no reason_ to remove even one.) If
you haven't tried to get a grip on one of your teeth before, you wouldn't know,
but even a tooth you've wiped dry is difficult to grasp without gauze.

I'm also the lucky owner of a pair of surgical forceps. These handy little tools
look like a long and delicate pair of pliers with the fulcrum very close to the
gripping side of the levers. ("They really pinch.")

In case you were curious, forceps are usually employed to grasp small objects in
surgical procedures. They can also be used as roach clips. (For avoiding burns &
stains of the fingers while smoking. Wide pipe stems containing packed cotton
accomplish the same end: you can make one from a hollow ballpoint pen and cotton
balls sold at any general store. Nevertheless a forcep is more generally utile.)

Those teeth's days had long been numbered. Their time had come!

So it was that I spent tedious hours doubled over with my fingers crammed in my
mouth, wiggling that thrice-damned curse of a bone to try and work it loose.
I quite unwisely, and disregarding the risk of breaking my jaw, channeled thirty
years of pent aggression into what remained of my tricuspid molar, as malodorous
flakes of rotten enamel & dentine fell upon my tongue like evil snow.

I knew I had effected some kind of progress when I heard a muffled click inside
of my head -- bones have eerie acoustic properties, like an unsettling resonance
and a tendency to produce a crunching sound (rather than a snap) when fractured
-- and felt a stabbing pain travel up the side of my head. Thankfully the pain I
felt due to prolonged migraine headache rendered this somewhat less intolerable.

I repeated this procedure until I lost consciousness.
Well, that's how I had hoped that this would end, but it didn't.
I could not bear the pain, and had to stop trying to pull my tooth.

Unfortunately for me, although I did manage to work the molar somewhat further
out of my jaw than it had loosened already (my dental hygiene, in case the memo
hasn't reached you, is worse than Austin Powers'), I didn't completely extract it.
All I managed to do was cause a hairline fracture of my maxilla, which will un-
doubtedly be a source of major difficulty and pain to me in the decades to come.

Worse yet, my application of too much pressure via the forceps caused additional
shattering of the tooth; further attempts at extrication are contraindicated.
That's just as well, because the kind of general-purpose forceps I had available
aren't for dental extraction: that requires a special kind of forceps I didn't have.

I suppose it's just as well: considering the fact that some dentine remained in
the shell of the tooth, its nerve was probably still alive and well. The nerves
connecting teeth to the root canal are extremely sensitive, and interconnected;
what's worse, I could easily have broken my jaw by violently levering the tooth;
therefore, extracting my tooth myself would very likely have been suicide.

So, as far as sockets go, my teeth will be rotting in theirs for some time yet.



Other noteworthy pratfalls during January:
1. Accidentally locked myself out of Windows by attempting to install Ubuntu 16
   alongside, which occurred after it prompted me to designate a BIOS boot partition
   (prior installs didn't manifest the prompt and gave me no trouble).
2. Locked myself out of Ubuntu too by trying to unbrick Windows.
3. Flashed in the backup EFI system partition and boot sector from a disk image,
   reset the partition table with fdisk, thanked lucky stars, began again at 1.
4. Broke shiny new laptop's fragile keyboard connector. Cursed fate.

Incidentally, I had some luck using this procedure to regain access to a Lenovo
IdeaPad 100-151BD 80QQ's UEFI Firmware Configurator after I had set my boot mode
to Legacy Support before installing Ubuntu, which locked me out of the config:
    1. At GRUB operating system selection screen key 'c' for a command line.
    2. normal_exit
    3. initrd
    (initrd fails because you didn't load the kernel, but then Windows Boot
     Manager tries to load in UEFI mode for some reason & presents a screen
     politely offering to give you the FW Config if you give it the ESC key,
     which it doesn't usually when your boot mode is Legacy Support instead
     of UEFI with or without secure boot.)
I ought to note: Ubuntu 16 boots the configurator automagically in UEFI boot mode:
the option reappeared when I `sudo update-grub`ed while in UEFI mode.

Speaking of GRUB, here's a boot procedure (in case you've never driven stick):
    1. set root=(hd0,gpt8)
       (Linux is at sda8 on my system)
    2. linux /vmlinuz
    3. initrd /initrd.img
    4. boot
Or, to shift gears into Windows:
    1. set root=(hd0,gpt1)
    2. chainloader /EFI/Microsoft/Boot/bootmgfw.efi
    3. boot

While I'm on the topic, here's how to play a tune at boot time using GRUB:
    A.1. @ boot menu (operating system selection), key 'c' for a GRUB shell.
    A.2. play TEMPO PITCH1 DURATION1 PITCH2 DURATION2 P3 D3 ... ad infinitum
         Pitches are frequencies in Hertz; duration is a fraction of tempo.
or
    B.1. In Ubuntu, Control + Alt + T to open a terminal emulator window.
    B.2. sudo gedit /etc/default/grub
    B.3. Feed the recordable piano by editing the line at the bottom:
         GRUB_INIT_TUNE="325 900 6 1000 1 900 2 800 2 750 2 800 1 900 2 600
5 0 1 500 1 600 1 800 1 750 2 600 2 675 2 750 4"
         # ^- The Amazing Water (NiGHTS)
         GRUB_INIT_TUNE="1024 600 2 650 2 700 2 950 10 900 20 0 10 600 2 650
2 700 2 950 20 1050 10 1100 5"
         # ^- Batman, the Animated Series.
         GRUB_INIT_TUNE="2048 600 5 0 1 600 5 0 1 575 5 0 1 575 5 0 1 550 5
0 1 550 5 0 1 575 5 0 1 575 5 0 1 600 5 0 1 600 5 0 1 575 5 0 1 575 5 0 1
550 5 0 1 550 5 0 1 575 5 0 1 575 5 0 1 900 8 0 4 900 24"
         # ^- classic Batman.
    B.4. Save the file, and then sudo update-grub && sudo reboot
Musical notes within the 500-1500 Hz range tend to be within 100Hz of each other
(therefore ± 50 Hz for flats & sharps) typically, but act strange around 600 Hz.



GNU/Linux is dandy for computer programming, especially data processing, because
it is now (thanks to Ubuntu) easier to use than ever; but it changes so quickly
that I've barely skimmed over the repository before the next long-term support
version has been finalized. The installer wizard also sometimes makes mistakes.
The software repository is slowly morphing into a dime-store, any software worth
using requires considerable technical expertise cultivated @ your great expense,
and if anything breaks then you have to be the fastest teletype gun in the west.

And, because my comments re: Linux may mislead, I'm thrilled about Windows 10.
Have you played Microsoft Flight Simulator recently? Great game.

Automaton Empyreum: the Key to Pygnition. (Trivial File Transfer Protocol edition.)

(I have implemented the Trivial File Transfer Protocol, revision 2, in this milestone snapshot. If you have dealt with reprogramming your home router, you may have encountered TFTP. Although other clients presently exist on Linux and elsewhere, I have implemented the protocol with a pair of Python scripts. You’ll need a Python interpreter, and possibly Administrator privileges (if the server requires them to open port 69), to run them. They can transfer files of size up to 32 Megabytes between any two computers communicating via UDP/IP. Warning: you may need to pull out your metaphorical monkey wrench and tweak the network timeout, or other parameters, in both the client and server before they work to your specification. You can also use TFTP to copy files on your local machine, if for whatever reason you need some replacement for the cp command. Links, courtesy of MediaFire, follow:

Executable source code (the programs themselves, ready to run on your computer): http://www.mediafire.com/file/rh5fmfq8xcmb54r/mlptk-2017-01-07.zip

Candy-colored source code (the pretty colors help me read, maybe they’ll help you too?): http://www.mediafire.com/file/llfacv6t61z67iz/mlptk-src-hilite-2017-01-07.zip

My life in a book (this is what YOUR book can look like, if you learn to use my automatic typesetter and tweak it to make it your own!): http://www.mediafire.com/file/ju972na22uljbtw/mlptk-book-2017-01-07.zip

)
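
While I'm at it: the wire format is charmingly simple. Per RFC 1350, a read
request (RRQ) is a two-byte opcode, the filename, and the transfer mode, each
string followed by a zero byte. The scripts above are Python; the following is
only an illustration of the packet layout, in Node.js, with a made-up filename
and host:

    // Sketch of a TFTP read request (RRQ) per RFC 1350. Opcode 1 = RRQ.
    // The filename, mode, and host below are example values only.
    var dgram = require("dgram");

    function buildRRQ (filename, mode) {
        return Buffer.concat([
            Buffer.from([0, 1]),                               // 2-byte opcode: 1 = RRQ
            Buffer.from(filename, "ascii"), Buffer.from([0]),  // filename + NUL
            Buffer.from(mode, "ascii"),     Buffer.from([0])   // mode + NUL
        ]);
    }

    var sock = dgram.createSocket("udp4");
    sock.send(buildRRQ("firmware.bin", "octet"), 69, "192.168.1.1"); // port 69 = TFTP
    // A real client then receives numbered 512-byte DATA packets and ACKs each one.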

Title is a tediously long pun on "Pan-Seared Programming" from the last lecture.
Key: mechanism to operate an electric circuit, as in a keyboard.
Emporium: a marketplace; or, perhaps, the brain.
Empyreuma: the smell/taste of organic matter burnt in a close vessel (as, pans).
Lignite: intermediate between peat & bituminous coal. Empyreumatic odor.
Pignite: Pokémon from Black/White. Related to Emboar & Tepig (ember & tepid).
Pygmalion (Greek myth): a king; sculptor of Galatea, who Aphrodite animated.

A few more ideas that pop up often in the study of computer programming: which,
by the way, is not computer science. (Science isn't as much artifice as record-
keeping, and the records themselves are the artifact.)

MODULARITY
As Eric Steven Raymond of Thyrsus Enterprises writes in "The Art of Unix
Programming," "keep it simple, stupid." If you can take your programs apart, and
then put them back together like Lego(TM) blocks, you can craft reusable parts.

CLASSES
A kind of object with methods (functions) attached. These are an idiom that lets
you lump together all your program's logic with all of its data: then you can
take the class out of the program it's in, to put it in another one. _However,_
I have been writing occasionally for nearly twenty years (since I was thirteen)
and here's my advice: don't bother with classes unless you're preparing somewhat
for a team effort (in which case you're a "class" actor: the other programmers
are working on other classes, or methods you aren't), think your code would gain
from the encapsulation (perhaps you find it easier to read?), or figure there's
a burning need for a standardized interface to whatever you've written (unlikely
because you've probably written something to suit one of your immediate needs:
standards rarely evolve on their own from individual effort; they're written to
the specifications of consortia because one alone doesn't see what others need).
Just write your code however works, and save the labels and diagrams for some
time when you have time to doodle pictures in the margins of your notebook, or
when you _absolutely cannot_ comprehend the whole at once.

UNIONS
This is a kind of data structure in C. I bet you're thinking "oh, those fuddy-
duddy old C dinosaurs, they don't know what progress is really about!" Ah, but
you'll see this ancient relic time and again. Even if your language doesn't let
you handle the bytes themselves, you've got some sort of interface to them, and
even if you don't need to convert between an integer and four ASCII characters
with zero processing time, you'll still need to convert various data of course.
Classes then arise which simulate the behavior of unions, storing the same datum
in multiple different formats or converting back and forth between them.
(Cue the scene from _Jurassic Park,_ the film based on Michael Crichton's book,
 where the velociraptor peeks its head through the curtains at a half-scaffolded
 tourist resort. Those damn dinosaurs just don't know when to quit!)
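
To put a face on that last idea, here's a toy Javascript class that imitates a
union: one four-byte buffer, readable as an integer or as four characters. (A
sketch invented for this lecture; the names are mine, not any library's.)

    // A union-ish class: the same 4 bytes, read back in two formats.
    function Union32 () { this.bytes = new DataView(new ArrayBuffer(4)); }
    Union32.prototype.setInt = function (n) { this.bytes.setUint32(0, n); return this; };
    Union32.prototype.asInt = function () { return this.bytes.getUint32(0); };
    Union32.prototype.asChars = function () {
        var s = "";
        for (var i = 0; i < 4; i++) s += String.fromCharCode(this.bytes.getUint8(i));
        return s;
    };

    var u = new Union32().setInt(0x48454c4f);
    console.log(u.asInt());   // 1212501071
    console.log(u.asChars()); // "HELO" -- same bytes, two formats, no copying about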

ACTUALLY, VOID POINTERS WERE WHAT I WAS THINKING OF HERE
The most amusing use of void*s I've imagined is to implement the type definition
for parser tokens in a LALR parser. Suppose the parser is from a BNF grammar:
then the productions are functions receiving tokens as arguments and returning a
token. Of course nothing's stopping you from knowing their return types already,
but what if you want to (slow the algorithm down) add a layer of indirection to
wrap the subroutines, perhaps by routing everything via a vector table, and now
for whatever reason you actually _can't_ know the return types ahead of time?
Then of course you cast the return value of the function as whatever type fits.

ATOMICITY, OPERATOR OVERLOADING, TYPEDEF, AND WRAPPERS
Washing brights vs darks, convenience, convenience, & convenience, respectively.
Don't forget: convenience helps you later, _when_ you review your code.

LINKED LISTS
These are a treelike structure, or should I say a grasslike structure.
I covered binary trees at some length in my fourth post, titled "On Loggin'."

RECURSION
The reason why you need recursion is to execute depth-first searches, basically.
You want to get partway through the breadth of whatever you're doing at this
level of recursion, then set that stuff aside until you've dealt with something
immensely more important that you encountered partway through the breadth. Don't
confuse this with realtime operating systems (different than realtime priority)
or with interrupt handling, because depth-first searching is far different than
those other three topics (which each deserve lectures I don't plan to write).
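
For instance, walking a tree of nested Arrays (an assumed shape; this sketch is
not anything from MLPTK) shows the "set the breadth aside, descend, then resume"
motion in about five lines:

    // Depth-first search over nested Arrays: pause the breadth, descend, resume.
    function dfs (node, visit) {
        for (var i = 0; i < node.length; i++) {
            if (node[i] instanceof Array) dfs(node[i], visit); // the important detour
            else visit(node[i]);                               // breadth resumes afterward
        }
    }

    dfs([1, [2, [3, 4], 5], 6], function (leaf) { console.log(leaf); }); // 1 2 3 4 5 6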

REALTIME OPERATING SYSTEMS, REALTIME PRIORITY, INTERRUPT HANDLING
Jet airplanes, video games versus file indexing, & how not to save your sanity.

GENERATORS
A paradigm appearing in such pleasant languages as Python and Icon.
Generators are functions that yield, instead of return: they act "pause-able,"
and that is plausible because sometimes you really don't want to copy-and-paste
a block of code to compute intermediate values without losing execution context.
Generators are the breadth-first search to recursion's depth-first search, but
of course search algorithms aren't all these idioms are good for.
Suppose you wanted to iterate an N-ary counter over its permutations. (This is
similar to how you configure anagrams of a word, although those are permutations
-- for which, see itertools.permutations in the Python documentation, or any of
the texts on discrete mathematics that deal with combinatorics.) Now, an N-ary
counter looks a lot like this, but you probably don't want a bunch of these...
    var items = new Array(A, B, C, D, ...);       // ... tedious ...
    var L = items.length;                         // ... lines ...
    var nary = new Array(L);                      // ... of code ...
    for (var i = 0; i < L; nary[i++] = 0) ;       // ... cluttering ...
    for (var i = L - 1; i >= 0 && ++nary[i] == L; // ... all ...
        nary[i--] = ((i < 0) ? undefined : 0)     // ... your other ...
    ) ; // end for (incrementation)               // ... computations ...
... in the middle of some other code that's doing somewhat tangentially related.
So, you write a generator: it takes the N-ary counter by reference, then runs an
incrementation loop to update it as desired. The counter is incremented, where-
upon control returns to whatever you were doing in the first place. Voila!
(This might not seem important, but it is when your screen size is 80 by 24.)
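
Javascript grew generators of its own in ES6 (function* and yield), so here's a
sketch of that N-ary counter as a generator; the yield-per-increment protocol is
just one choice among many, not a standard of any kind:

    // An N-ary counter as an ES6 generator: one yield per increment, and the
    // consuming loop stays uncluttered. Stops after rolling all the way over.
    function* naryCounter (digits, base) {
        var nary = new Array(digits);
        for (var i = 0; i < digits; i++) nary[i] = 0;
        yield nary;                               // initial state: 0 0 ... 0
        for (;;) {
            var i = digits - 1;
            while (i >= 0 && ++nary[i] == base) nary[i--] = 0;
            if (i < 0) return;                    // rolled all the way over: done
            yield nary;                           // pause here; the caller resumes us
        }
    }

    for (var state of naryCounter(3, 2)) console.log(state.join(" ")); // 000 001 ... 111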



NOODLES AND DOODLES, POMS ON YOUR POODLES, OODLES AND OODLES OF KITS & CABOODLES
(Boodle (v.t.): swindle, con, deceive. Boodle (n.): gimmick, device, strategy.)
Because this lecture consumed only about a half of the available ten thousand
characters permissible in a WordPress article, here's a PowerPoint-like summary
that I was doodling in the margins because I couldn't concentrate on real work.
Modularity: perhaps w/ especial ref to The Art of Unix Programming. "K.I.S.S."
Why modularity is important: take programs apart, put them together like legos.
Data structures: unions, classes.
Why structures are important: atomicity, op overloading, typedefs, wrappers.
linked lists: single, double, circular. Trees. Binary trees covered in wp04??
recursion: tree traversal, data aggregation, regular expressions -- "bookmarks"
Generators. Perhaps illustrate by reference to an N-ary counter?

AFTER-CLASS DISCUSSION WITH ONE HELL OF A GROUCHY ETHICS PROFESSOR
Suppose someone is in a coma and their standing directive requests you to play
some music for them at a certain time of day. How can you be sure the music is
not what is keeping them in a coma, or that they even like it at all? Having
experienced death firsthand, when I cut myself & bled with comical inefficiency,
I can tell you that only the dying was worth it. The pain was not, and I assure
you that my entire sensorium was painful for a while there -- even though I had
only a few small lacerations. Death was less unpleasant with less sensory input.
I even got sick of the lightbulb -- imagine that! I dragged myself out of the
lukewarm bathtub to switch the thing off, and then realized that I was probably
not going to die of exsanguination any time soon and went for a snack instead.

AFTER-CLASS DISCUSSION WITH ONE HELL OF A GROUCH
"You need help! You are insane!"
My 1,000 pages of analytical logic versus your plaintive bleat.

Palling around.

Pall (n.): pawl.

I couldn't write last week, and my upgrade to QL has progressed no further.
For reference, I stalled before comparing the efficiency of nested Objects to
that of nested Arrays, which I must test before experimenting further with the
prototype compiler or even refining the design. I intend to do that this month.
In the meantime, here's a snapshot of MLPTK with new experiments included.

http://www.mediafire.com/download/566ln3t1bc5jujp/mlptk-p9k-08apr2016.zip

And a correction to my brief about the grammar ("Saddlebread"): actually, the
InchoateConjugation sequence does not cause differentiation, because the OP_CAT
prevents the original from reducing. Other parts may be inaccurate. I'll revise
the grammar brief and post a new one as soon as I have fixed the QL speed bug.

I took some time out from writing Quadrare Lexema to write some code I've been
meaning to write for a very long time: pal9000, the dissociated companion.
This software design is remarkably similar to the venerable "Eggdrop," whose C
source code is available for download at various locations on the Internets.
Obviously, my code is free and within the Public Domain (as open as open source
can be); you can find pal9000 bundled with today's edition of MLPTK, beneath the
/reference/ directory.

The chatbot is a hardy perennial computer program.
People sometimes say chatbots are artificial intelligence; although they aren't,
exactly, or at least this one isn't, because it doesn't know where it is or what
it's doing (actually it makes some assumptions about itself that are perfectly
wrong) and it doesn't apply the compiler-like technique of categorical learning
because I half-baked the project. Soon, though, I hope...

Nevertheless, mathematics allows us to simulate natural language.
Even a simplistic algorithm like Dissociated Press (see "Internet Jargon File,"
maintained somewhere on the World Wide Web, possibly at Thyrsus Enterprises by
Eric Steven Raymond) can produce humanoid phrases that are like real writing.
Where DisPress fails, naturally, is paragraphs and coherence: as you'll see when
you've researched, it loses track of what it was saying after a few words.

Of course, that can be alleviated with any number of clever tricks; such as:
	1. Use a compiler.
	2. Use a compiler.
	3. Use a compiler.
I haven't done that with p9k, yet, but you can if you want.

Of meaningful significance to chat robots is the Markov chain.
That is a mathematical model, used to describe some physical processes (such as
diffusion), describing a state machine in which the probability of transitioning
to any given state depends only on the current state of the system, without
regard to how that state was reached.
Natural language, especially that language which occurs during a dream state or
drugged rhapsody (frequently and too often with malicious intent, these are
misinterpreted as the ravings of madmen), can also be modeled with something
like a Markov chain because of the diffusive nature of tangential thought.

The Markov-chain chat robot applies the principle that the state of a finite
automaton can be described in terms of a set of states foregoing the present;
that is, the state of the machine is a sliding window, in which is recorded some
number of states that were encountered before the state existent at the moment.
Each such state is a word (or phrase / sentence / paragraph if you fancy a more
precise approach to artificial intelligence), and the words are strung together
one after another with respect to the few words that fit in the sliding window.
So, it's sort of like a compression algorithm in reverse, and similar to the way
we memorize concepts by relating them to other concepts. "It's a brain. Sorta."
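
Concretely, the sliding window can be nothing fancier than a table keyed on the
last W words. Here's a sketch (not pal9000's actual code; the corpus and the
plain-object table shape are examples):

    // Build a Markov table: each window of W words maps to the words observed
    // immediately after it somewhere in the corpus.
    function buildChain (words, W) {
        var chain = {};
        for (var i = 0; i + W < words.length; i++) {
            var window = words.slice(i, i + W).join(" ");   // the sliding window's state
            (chain[window] = chain[window] || []).push(words[i + W]);
        }
        return chain;
    }

    buildChain("the cat sat on the cat and the cat purred".split(" "), 2);
    // { "the cat": ["sat", "and", "purred"], "cat sat": ["on"], ... }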

One problem with Markov robots, and another reason why compilers are of import
in the scientific examination of artificial intelligence, is that of bananas.
The Banana Problem describes the fact that, when a Markov chain is traversed, it
"forgets" what state it occupied before the sliding window moved.
Therefore, for any window of width W < 6, the input B A N A N A first produces
state B, then states A and N sequentially forever.
Obviously, the Banana Problem can be solved by widening the window; however, if
you do that, the automaton's memory consumption increases proportionately.

Additionally, very long inputs tend to throw a Markov-'bot for a loop.
You can sorta fix this by increasing the width of the sliding window signifying
which state the automaton presently occupies, but then you run into problems
when the sliding window is too big and it can't think of any suitable phrase
because no known windows (phrases corresponding to the decision tree's depth)
fit the trailing portion of the input.
It's a sticky problem, which is why I mentioned compilers; they're of import to
artificial intelligence, which is news to absolutely no one, because compilers
(and grammar, generally) describe everything we know about the learning process
of everyone on Earth: namely, that intelligent beings construct semantic meaning
by observing their environments and deducing progressively more abstract ideas
via synthesis of observations with abstractions already deduced.
Nevertheless, you'd be hard-pressed to find even a simple random-walk chatbot
that isn't at least amusing.
(See the "dp" module in MLPTK, which implements the vanilla DisPress algorithm.)

My chatbot, pal9000, is inspired by the Dissociated Press & Eggdrop algorithms;
the copy rights of which are held by their authors, who aren't me.
Although p9k was crafted with regard only to the mathematics and not the code,
if my work is an infringement, I'd be happy to expunge it if you want.

Dissociated Press works like this:
	1. Print the first N words (letters? phonemes?) of a body of text.
	2. Then, search for a random occurrence of a word in the corpus
	   which follows the most recently printed N words, and print it.
	3. Ad potentially infinitum, where "last N words" are round-robin.
It is random: therefore, humorously disjointed.
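
In code, the round-robin "last N words" loop is about this long. (A sketch of
the idea, not the dp module's source; the chain literal below stands in for a
table like the one sketched earlier.)

    // Dissociated Press, roughly: keep appending a random continuation of the
    // last N printed words until the length cap or a dead end is reached.
    function dispress (chain, seed, N, maxWords) {
        var out = seed.slice(0, N);                     // 1. print the first N words
        while (out.length < maxWords) {
            var next = chain[out.slice(out.length - N).join(" ")];
            if (!next) break;                           // no known continuation: stop
            out.push(next[Math.floor(Math.random() * next.length)]); // 2. random pick
        }                                               // 3. ad (potentially) infinitum
        return out.join(" ");
    }

    var chain = { "the cat": ["sat", "and", "purred"], "cat sat": ["on"],
                  "sat on": ["the"], "on the": ["cat"] };
    console.log(dispress(chain, ["the", "cat"], 2, 12));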

And Eggdrop works like this (AFAICR):
	1. For a given coherence factor, N:
	2. Build a decision tree of depth N from a body of text.
	3. Then, for a given input text:
	4. Feed the input to the decision tree (mmm, roots), and then
	5. Print the least likely response to follow the last N words
	   by applying the Dissociated Press algorithm non-randomly.
	6. Terminate response after its length exceeds some threshold;
	   the exact computation of which I can't recall at the moment.
It is not random: therefore, eerily humanoid. (Cue theremin riff, thundercrash.)
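
The "least likely" twist can be isolated in a few lines (again a paraphrase of
the idea, not Eggdrop's C source): count how often each candidate followed the
window, then take the rarest.

    // Pick the least likely continuation instead of a random one.
    function leastLikely (candidates) {
        var counts = {};
        candidates.forEach(function (w) { counts[w] = (counts[w] || 0) + 1; });
        return candidates.slice().sort(function (a, b) { return counts[a] - counts[b]; })[0];
    }

    console.log(leastLikely(["sat", "sat", "sat", "purred"])); // "purred"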

A compiler, such as I imagined above, could probably employ sliding windows (of
width N) to isolate recurring phrases or sentences. Thereby it may automatically
learn how to construct meaningful language without human interaction.
Although I think you'll agree that the simplistic method is pretty effective on
its own; notwithstanding, I'll experiment with a learning design once I've done
QL's code generation method sufficiently that it can translate itself to Python.

Or possibly I'll nick one of the Python compiler compilers that already exists.
(Although that would take all the fun out of it.)

I'll parsimoniously describe how pal9000 blends the two:

First of all, it doesn't (not exactly), but it's close.
Pal9000 learns the exact words you input, then generates a response within some
extinction threshold, with a sliding window whose width is variable and bounded.
Its response is bounded by a maximum length (to solve the Banana Problem).
Because it must by some means know when a response ends "properly," it also
counts the newline character as a word.
The foregoing are departures from Eggdrop.
It also learns from itself (to avoid saying something twice), as does Eggdrop.

In addition, p9k's response isn't necessarily random.
If you use the database I included, or choose the experimental "generator"
response method, p9k produces a response that is simply the most surprising word
it encountered subsequent to the preceding state chain.
This produces responses more often, and they are closer to something you said
before, but of course this is far less surprising and therefore less amusing.
The classical Eggdrop method takes a bit longer to generate any reply; but, when
it does, it drinks Dos Equis.
... Uh, I mean... when it does, the reply is more likely to be worth reading.
After I have experimented to my satisfaction, I'll switch the response method
back to the classic Eggdrop algorithm. Until then, if you'd prefer the Eggdrop
experience, you must delete the included database and regenerate it with the
default values and input a screenplay or something. I think Eggdrop's Web site
has the script for Alien, if you want to use that. Game over, man; game over!

In case you're curious, the algorithmic time complexity of PAL 9000 is somewhere
in the ballpark of O(((1 + MAX_COHERENCE - MIN_COHERENCE) * N) ^ X) per reply,
where N is every unique word ever learnt and X is the extinction threshold.
"It's _SLOW._" It asymptotically approaches O(1) in the best case.

For additional detail, consult /mlptk/reference/PAL9000/readme.txt.

Pal9000 is a prototypical design that implements some strange ideas about how,
exactly, a Markov-'bot should work. As such, some parts are nonfunctional (or,
indeed, malfunction actually) and vestigial. "Oops... I broke the algorithm."
While designing, I altered multiple good ideas that Eggdrop and DisPress did
right the first time, and actually made the whole thing worse on the whole. For
a more classical computer science dish, try downloading & compiling Eggdrop.

I will gladly pay you Tuesday for a Hnakkerbröd today.

Beware: when speaking to Trolls, listen carefully. It could save your ice hole.
        (Trolls are known for their skill in wintertime fishing.)

My software, courtesy of MediaFire:

https://www.mediafire.com/?ebbyj47e35mmytg (http://www.mediafire.com/download/ebbyj47e35mmytg/mlptk-qlupgradecomplete-awaitingspeedfix-27mar2016.zip)

This update (unlike the foregoing, which fixed the Opera bug) regards only QL.
Again: Quadrare Lexema is similar to GNU Bison. If an infringement, I'll delete
it as soon as I hear or see from you. BTW, Bison is time-tested; QL isn't.

Oh, and a correction to "First Rate?": Bison can indeed unshift multiple tokens
back onto its input stream, even though productions can't be multiple symbols in
length, by unshifting directly onto its input during reduction (which is how QL
does it too, during deferment, which amounts to the same exact thing because no
reason exists to compute anything during deferment -- otherwise, there'd be more
race conditions than the Kentucky Derby, which is very silly).

QL is now "kinda-sorta" debugged and functioning to my specification AFAICT. Now
the API has changed considerably from how it was last month (the argument vector
to reduction deferment class constructors has been modified, some new faculties
now exist, and some were removed); this necessitated additional instantiations
of "new Array()," and interolably reduces efficiency when operating on very long
inputs, but I wanted to hurry-up this design iteration. (That was one sentence.)

The token agglutination mechanism of the parser logic is the same as before.
Code to determine precedence & blocking has been abridged; toddling steps toward
a method to generate the parser as in-line code. (As you see, that isn't yet.)

I'm tempted to redo the infrastructure to reduce the number of new Array()s that
are instantiated during the parse phase, but I'm pretty sure I can do that by
rewriting the underlying code without changing the API. The interface between
the parser's stack extremum and its input stream is passed to reductions as an
Array(), but that doesn't mean it always has to be allocated anew.

Remember: the old Command Line Interpretation Translator scaffold isn't decided;
I left ::toTheLimit() where it was, pending a ::hatten() that shall suit you; if
you'd like to use the horrifying monstrosity that is my software architecture,
you can see Mr. Skeleton awaiting you in clit.js -- asking where is his flesh, &
rapidly becoming impatient with my poking and prodding him all day. Soon, Mr.
Skeleton; soon shall be the day when the hat is yours at last, & your calcareous
projection from within my library becomes a fully fledged automaton unto itself.

For the meantime I'm satisfied with the half-measure. I think the API is okay to
start building upon, so I'll start building. Overhaul of the back-end late this
year or early in the next, & it's looking good for me to furnish the CLIT before
the third quarter. Therefore I'd say: expect full CLIT functionality in 2016.

Before I apprise you of my progress so far, let's take a moment for a thoroughly
detailed technical analysis of Mr. Skeleton's bony protrusion.

Phoneme    <=    (EscapeSequence | AnyCharacter | Number | String)
                 (EscapeSequence | AnyCharacter | Number | String | Phoneme | )
	Concatenate a "word" that is one argument in an argument vector.

ISLDQ    <=    '\"'
	Open a <String>.

InchoateString    <=    (ISLDQ | InchoateString)
                        (OP_CAT | OP_CONJ | EscapeSequence | AnyCharacter
                         | Number | Space)
	Make strings out of any symbol following an open string.
	(As you can see, this rule must be rewritten...)

String    <=    InchoateString '\"'
	Close a <String>.

Argument    <=    Phoneme (Space | ) | Argument Argument
	Concatenate the argument vector comprising an executable MLPTK command.
	That bit with "(Space | )" should probably be just "Space".

Catenation    <=    (Argument | Group | Conjugation) OP_CAT
	Concatenate the output of commands.

MalformedGroupCohesion    <=    (Argument | Group | Conjugation) OP_CLPAR
	Automatically correct the user's malformed syntax where the last
	command in a parenthetical sub-grouping was not followed by a ";".

ExecutableInchoateConjugation    <=    Argument OP_CONJ | Blargument
	Signify that a command can be executed as part of a <Conjugation>.

InchoateConjugation    <=    Group OP_CONJ | Conjugation
	Convert a conjugated <Group>, or the output of a <Conjugation>,
	to an <InchoateConjugation> token that can form the left-hand
	part of a further <Conjugation>.
	This reduction causes parser stack differentiation, because it
	conflicts with "Catenation <= Conjugation OP_CAT".
	In that circumstance, the sequence "<Conjugation> <OP_CAT> ..."
	is both a "<Catenation> ..." and a "<InchoateConjugation> <OP_CAT> ...".
	Observe that the latter always produces a syntax error.
	I'm pretty sure I could rewrite the grammar of the <Conjugation> rule to
	fix this; IDK why I didn't. (Maybe a bug elsewhere makes it impossible.)

Conjugation    <=    (ExecutableInchoateConjugation | InchoateConjugation)
                     ExecutableInchoateConjugation
	Execute the command in the <ExecutableInchoateConjugation> at right,
	supplying on its standard input the standard output of that at left.

InchoateGroup    <=    (OP_OPPAR | InchoateGroup) Catenation
	Concatenate the contents of a parenthesized command sub-grouping.

Group    <=    InchoateGroup (OP_CLPAR | MalformedGroupCohesion)
	Close an <InchoateGroup>. Concatenate the contents of a
	<MalformedGroupCohesion> if it trailed.

CommandLine    <=    (CommandLine | ) Catenation
	Concatenate the output of <Catenation>s into one Array.
	This one actually doesn't differentiate. Either a <CommandLine> waits at
	left to consume a Catenation when it reduces, or something else does, &
	<Catenations> in mid-parse never reduce to <CommandLine>s except when
	fatal syntax errors occur, in which case the parser belches brimstone.
	
Blargument    <=    Argument (OP_CAT | OP_CLPAR)
    Duplicate the trailing concatenation operator or close parenthesis following
    an <Argument>, so that a <Conjugation> doesn't conflict with a <Catenation>
    or a <MalformedGroupCohesion>. I think this can be specified formally in a
    proper grammar, without the multiple-symbol unshift, but IDK how just yet --
    because (without lookahead) the parser can't know when the argument vector
    ends without seeing a trailing operator, so execution of the last command in
    the conjugation sequence <InchoateConjugation> <Argument> <OP_CAT> would
    occur when <Argument> <OP_CAT> reduces & executes, disregarding its standard
    input (the contents of the foregoing <InchoateConjugation>).
    "Blargument <= Argument" can never happen and "ExecutableInchoateConjugation
    <= Argument" would grab the <Argument> before it could concatenate with the
    next <Argument>, so I'm at a loss for how I should accomplish this formally.
    BTW, <Blargument> is the <WalkingBassline>, with trivial alterations.
    The <Blargument> reduction causes parser stack differentiation, because it
    conflicts with both <Catenation> and <MalformedGroupCohesion>. In either
    case, the <Blargument> branch encounters a syntax error & disappears
    when <Blargument> didn't immediately follow an inchoate conjugation; the
    other branch disappears in the inverse circumstance.

Token identifier   Operator precedence  Associativity
"Space", - - - - - 2, - - - - - - - - - "wrong",
"OP_CAT",  - - - - 0, - - - - - - - - - "wrong",
"EscapeSequence",  0, - - - - - - - - - "right",
"AnyCharacter",  - 0, - - - - - - - - - "right", (the sequence to the right of
"Number",  - - - - 0, - - - - - - - - - "right",  a right-associative token is
"String",  - - - - 0, - - - - - - - - - "right",  reduced first, except...)
"OP_OPPAR",  - - - 0, - - - - - - - - - "left",
"OP_CLPAR",  - - - 0, - - - - - - - - - "wrong", (... when wrong-associativity
"OP_CONJ", - - - - 0, - - - - - - - - - "wrong",  forces QL to reduce the right-
"QL_FINALIZE", - - 0, - - - - - - - - - "wrong"   associative sequence.)

The avid reader shall observe that my "wrong-associativity" specifier, when used
to define runs of right-associative tokens that stick to one another, is similar
to the lexical comparison & matching algorithm of a lexical analyzer (scanner).
In fact, as written, it _is_ a scanner. For an amusing diversion, try excerpting
the portions of the semantical analyzer (parser) that can be made into lexical
analyzer rules, then put them into the scanner; or wait a few months and I will.

But that's enough bony baloney.
If you preferred Mr. Skeleton as he was, see mlptk/old/clit.js.21mar2016.

As of probably a few days before I posted this brief, my upgrade to QL is now
sufficient to function vice its predecessor. I spruced-up Mr. Skeleton, so that
the test scaffold in clit.js now functions with ::hattenArDin() in QL, and now
everything looks all ready to go for shell arithmetic & such frills as that.

Ironically, it seems that I've made it slower by attempting to make it faster. I
should have known better. I'll try to fix the speed problem soon; however,
until then, I've abandoned work on the Translator due to intolerable slowness.
I'm sorry for the inconvenience. If it's any help, I think the problem is due to
too many new Array() invocations or too many nested Arrays, one of the two.
Either way, I intend to fix it this year by rewriting the whole parser (again)
as a static code generator.

I was also thinking of writing a windowing operating system for EmotoSys, but I
am uncertain how or whether to put clickable windows in the text console. I mean
-- it'd be simple now that QL's rolling, but maybe a more Spartan design? I will
post some flowcharts here when I've exhausted my ministrations to the CLIT.

I’m happy to relate that puns about stacks & state have become out of date.

More of QL, courtesy of MediaFire, with the MLPTK framework included as usual:

https://www.mediafire.com/?xaqp7cq9ziqjkdz (http://www.mediafire.com/download/xaqp7cq9ziqjkdz/mlptk-operafix-21mar2016.zip)

I have, at last, made time to fix the input bug in Opera!
I'm sorry for the delay and inconvenience. I didn't think it'd be that tricky.

I fixed an infinite loop bug in "rep" (where I didn't notice that someone might
type in a syntax error, because I didn't have a syntax error in my test cases),
and added to the bibliography the names of all my teachers that I could recall.
I added a quip to the bootstrap sequence. More frills.

My work on QL has been painfully slow, due to pain, but it'll all be over soon.
I see zero fatal flaws in ::hattenArDin. I guess debugging'll be about a month,
and then it's back to the Command Line Interpretation Translator. Sorry for the
delay, but I have to rewrite it right before I can right the written writing.
The improved QL (and, hopefully, the CLIT) is my goal for second quarter, 2016.
After those, I'll take one of two options: either a windowing operating system
for EmotoSys or overhaul of QL. Naturally, I favor the prior, but I must first
review QL to see if the API needs to change again. (Doubtful.)

MLPTK, and QL, are free of charge within the Public Domain.

Last time I said I needed to make the input stream object simpler so that it can
behave like a buffer rather than a memory hog. I have achieved this by returning
to my original design for the input stream. Although that design is ironically
less memory-efficient in action than the virtual file allocation table, it leans
upon the Javascript interpreter's garbage collector so that I mustn't spend as
much time managing the fragmented free space. (That would've been more tedious
than I should afford at this point in the software design, which still awaits an
overhaul to improve its efficiency and concision of code.) And, like I thought,
it'll be too strenuous for me to obviate the precedence heuristic by generating
static code. In fact: unless tokens are identified by enumeration, dropping that
heuristic actually reduces efficiency, because string comparisons are slow; also, I was
wrong about the additional states needed to obviate it: there're none, but a new
"switch" statement (these, like "if" and "while," are heuristics) is needed, and
it's still slower (even best-case) than "if"ing the precedence integer per node.

Here's an illustration of the "old" input stream object (whither I now return):

	 _________________________________
	| ISNode: inherits from Array     | -> [ ISNode in thread 0 ]
	|    ::value = data (i.e., token) |
	|    ::[N] = next node (thread N) | -> [ ISNode in thread 1 ]
	|    (Methods...)                 |
	|    (State....)                  | -> [ etc ... ]
	-----------------------------------

As you see, each token on input is represented by a node in a linked list.
Each node requires memory to store the token, plus the list's state & methods.
Nodes in the istream link forward. I wanted to add back-links, but nightmare.
For further illustration, see my foregoing briefs.

Technically the input stream can be processed in mid-parse by using the "thread"
metaphor (see "Stacking states on dated stacks"); however, by the time you need
a parser that complex, you are probably trying to work a problem that requires a
nested Turing machine instead of using only one. Don't let my software design
lure you into the Turing tar-pit: before using the non-Bison-like parts, check &
see if you can accomplish your goal by doing something similar to what you'd do
in Yacc or Bison. (At least then your grammar & algorithm may be portable.)
QL uses only thread zero by default... I hope.

GNU Bison is among the tools available from the Free Software Foundation.
Oh, yeah, and Unix was made by Bell Laboratories, not Sun Microsystems. (Oops.)

Here's an illustration of the "new" input stream object (whence I departed):

	Vector table 0: file of length 2. Contents: "HI".
		       (table index in first column, data position in second column)
		0 -> 0 (first datum in vector 0 is at data position 0)
		1 -> 1 (second datum, vector 0, is at data position 1)
	Vector table 1: file of length 3. Contents: "AYE".
		0 -> 3
		1 -> 4
		2 -> 7
	Vector table 2: file of length 3. Contents: "H2O".
		0 -> 0
		1 -> 5
		2 -> 10
	Vector table 3: file of length 6. Contents: "HOHOHO".
		0 -> 0
		1 -> 10
		2 -> 0
		3 -> 10
		4 -> 0
		5 -> 10
	Data table: (data position: first row, token: second row)
		0     1     2     3     4     5     6     7     8     9    10
		H     I     W     A     Y     2     H     E     L     L    O

In that deprecated scheme, tokens were stored in a virtual file.
The file was partitioned by a route vector table, similar to vectabs everywhere.
Nodes required negligible additional memory, but fragmentation was troublesome.
Observe that, if data repeats itself, the data table need not increase in size:
lossless data compression, like a zip file or PNG image, operates with a similar
principle whereby sequences of numbers that repeat themselves are squeezed into
smaller sequences by creating a table of sequences that repeat themselves. GIF89
employs LZW compression, which is a similar dictionary-based algorithm.

There is again some space remaining, and not much left to say about the update.
I'll spend a few paragraphs to walk you through how Quadrare Lexema parses.

QL is based upon a simple theory (actually, axiom) of how to make sense of data:
    1. Shift a token from input onto lookahead; the former LA becomes active.
    2. If the lookahead's operator precedence is higher, block the sequence.
    3. If the active token proceeds in sequence, shift it onto the sequence.
    4. Otherwise, if the sequence reduces, pop it and unshift the reduction.
    5. Otherwise, block the sequence and wait for the expected token to reduce.
And, indeed: when writing a parser in a high-level language such as Javascript,
everything really is exactly that simple. The thousands of lines of code are merely
a deceptive veil of illusion, hiding the truth from your third eye.

Whaddayamean, "you don't got a third eye"?
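
If you'd like those five steps in miniature, here's a toy with exactly one
production, NUM <= NUM + NUM (a sum collapses back into a number). It's a
paraphrase of the idea and not Quadrare Lexema's code; it also elides precedence
and blocking (steps 2 and 5), keeping only shift, reduce, and the trick of
unshifting the reduction back onto the input:

    // Toy shift-reduce loop for the single production "NUM <= NUM + NUM".
    // Reductions are unshifted back onto the input, as in step 4 above.
    function toyParse (tokens) {
        var stack = [];
        while (tokens.length) {
            stack.push(tokens.shift());                       // 1/3. shift onto the sequence
            var n = stack.length;
            if (n >= 3 && stack[n - 3].id == "NUM" &&         // 4. the sequence reduces:
                stack[n - 2].id == "+" && stack[n - 1].id == "NUM") {
                var rhs = stack.splice(n - 3, 3);             //    pop it, and unshift
                tokens.unshift({ id: "NUM", value: rhs[0].value + rhs[2].value });
            }
        }
        return stack;
    }

    toyParse([ { id: "NUM", value: 1 }, { id: "+" }, { id: "NUM", value: 2 },
               { id: "+" }, { id: "NUM", value: 3 } ]);       // [ { id: "NUM", value: 6 } ]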

QL doesn't itself do the tokenization. I wrote a separate scanner for that. It's
trivial to accomplish. Essentially the scanner reads a string of text to find a
portion that starts at the beginning of the string and matches a pattern (like a
regular expression, which can be done with my ReGhexp library, but I used native
Javascript reg exps to save time & memory and make the code less nonstandard);
then the longest matching pattern (or, if tied, the first or last or polka-dotted
one: whichever suits your fancy) is chopped off the beginning of the string and
stored in a token object who contains an identifier and a semantic value. Value
of a token is computed by the scanner in similar fashion as parser's reductions.
The scanner doesn't need to do any semantic analysis, but some is OK in HLLs.
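
A longest-match scanner really is that small. Here's a sketch (the token names
and patterns are arbitrary examples; ties go to the earlier rule):

    // Longest-match-wins scanner: chop the longest matching prefix off the text,
    // wrap it in a token, repeat until the text is consumed.
    function scan (text) {
        var rules = [ { id: "NUMBER", pat: /^[0-9]+/ },
                      { id: "WORD",   pat: /^[A-Za-z]+/ },
                      { id: "SPACE",  pat: /^\s+/ } ];
        var tokens = [];
        while (text.length) {
            var best = null;
            for (var i = 0; i < rules.length; i++) {
                var m = rules[i].pat.exec(text);                 // anchored at the start
                if (m && (!best || m[0].length > best.value.length))
                    best = { id: rules[i].id, value: m[0] };     // longest match wins
            }
            if (!best) throw new Error("scan error at: " + text.slice(0, 10));
            tokens.push(best);
            text = text.slice(best.value.length);                // chop it off the front
        }
        return tokens;
    }

    scan("grep 42 foo"); // [ WORD "grep", SPACE " ", NUMBER "42", SPACE " ", WORD "foo" ]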

Now that your input has ascended from the scanning chakra, let's manifest
another technical modal analysis. I wrote all my code while facing the upstream
direction of a flowing river, sitting under a waterfall, and meditating on the
prismatic refraction of a magical crystal when exposed to a candle dipped at the
eostre of the dawn on the vernal solstice -- which caused a population explosion
among the nearby forest rabbits, enraged a Zen monk who had reserved a parking
space, divested a Wiccan coven, & reanimated the zombie spirits of my ancestors;
although did not result in an appreciable decrease in algorithmic complexity --
but, holy cow, thou may of course ruminate your datum howsoever pleases thee.

The parser begins with one extremum, situated at the root node. The root is that
node whence every possible complete sequence begins; IOW, it's a sentry value.
Thence, symbols are read from input as described above. The parse stack -- that
is, the extrema -- builds on itself, branching anywhen more than one reduction
results from a complete sequence. As soon as a symbol can't go on the stack: if
the sequence is complete, the stack is popped to gather the symbols comprising
the completed sequence; that occurs by traversing the stack backward til arrival
at the root node, which is also popped. (I say "pop()" although, as written, the
stack doesn't implement proper pushing and popping. Actually, it isn't even a
stack: it's a LIFO queue. Yes, yes: I know queues are FIFO; but I could not FIFO
the FIFO because the parser doesn't know where the sequence began until it walks
backwards through the LIFO. So now it's a queue and a stack at the same time;
which incited such anger in the national programming community that California
seceded from the union and floated across the Pacific to hang out with Nippon,
prompting Donald Trump to reconsider his business model. WTF, m8?) Afterward,
the symbols (now rearranged in the proper, forward, order) are trundled along to
the reduction constructor, which is an object class I generated with GenProtoTok
(a function that assembles heritable classes). That constructor makes an object
which represents _a reduction that's about to occur;_ id est: a deferment, which
contains the symbols that await computation. The deferment is sent to the parse
stack (in the implementation via ::toTheLimit) or the input stream (::hatten);
when only one differential parse branch remains, computation is executed and the
deferred reduction becomes a reduction proper. Sequences compound as described.

Stacking states to sublimate or cultivate an emotrait leads me to etiolate.

Emotrait: a portmanteau from emotion and trait (L. tractus, trahere, to draw),
          signifying what is thought of an individual when judging by emotion.
Etiolate: to blanch, or become white; as, by cloistering away from the sun.

IDK whether there is precisely twenty percent more QL by now, but I've tightened
the parser code and QL is now better than before. Nothing much new in MLPTK: I
fixed a few typos; that's about all. Today's installment is, courtesy MediaFire:

https://www.mediafire.com/?5my42sl41rywzsg (http://www.mediafire.com/download/5my42sl41rywzsg/mlptk-qlds14mar2016.zip)

As usual, my work is free of charge within the Public Domain.

The parser's state logic is exactly the same as it was before; but, in addition
to the changes I described (extremum-input interface differentiation) two posts
ago in the foregoing brief, I've altered the format of sequence reductions such
that the argument vector is now two args wide, comprising the interface triplet
(a parser stack node, a thread index, and a file handle into the thread) and a
reference to the parser object. Oh, and I deprecated the "mate" and "stop"
associativity specifiers, because making them work the way I wanted would have
been a nightmare.

And, even though Luna feels better after eating forty Snickers, that's terrible.

Anyway, let me bend your ear (eye?) for a while about the interface and handles.

The parse stack is more slender, but mostly the same, except that nodes are now
Arrays whose members point backward and forward in the stack. The parser's input
is stored (although I call this sometimes, erroneously, "buffering;" buffers do
not work that way) in a data structure named the input stream, which is a random
access virtual file that stores each token as an array element in itself. Again,
the input stream is not a buffer; it is a file that grows in size as tokens are
unshifted. This makes it unsuitable for very large inputs. I'll fix it soon. For
now, you'll have to put up with the memory leak. Maybe you can find some use for
it as a volatile file system or something, but it's useless as a buffer; what it
should have been in the first place. To fix the problem shall require only that
the object is made simpler, so I expect to have it improved by next update. In
the meantime, it functions by mapping an Array to a vector table whose indices
correspond to the Nth item in the data and whose values are the indices of that
item in the Array (which has certainly become fragmented).

That's about all that's developed since last time. As you can see @ hattenArDin,
I'm crafting QL as a quasi static code generator, with fewer heuristics. Instead
of storing within the parser stack nodes the information necessary to transition
the parser's state, Hatten walks the state map to determine exactly what symbols
transition, block, or reduce and whether they are right-associative. I could
also have done this for the precedence decision, but that would require that the
parser generator creates a significant number of additional states: like about
the number of symbols whose precedence is specified multiplied by the number of
symbols that can possibly occur anywhere in any sequence. In other words, such a
computational speed gain (one if-statement) would be about the square of present
memory consumption, or vaguely proximal to that. So that design decision was due
to my failure to ponder the math before writing, and not due to impossibility.
I'll work that problem, too, before I think I'm done improving the blueprint.

I could excerpt some code and walk you through it or something, but it is plain
language to me, and I have no idea how to make it any plainer in English unless
someone asks a question. I think WordPress is set to email me if you comment on
one of my posts, so I should see your letters if they ever arrive.

And, sadly, the tightened code is yet again "sorta functional." If you require a
testbed for your grammar, refer to the test sequence in clit.js & ::toTheLimit()
in quadrare-lexema.js; both of which work well enough, except for recursion.

Tentatively: expect the New & Improved Quadrare Lexema sometime around June.

Stacking states on dated stacks to reinstate Of Stack & State: first rate?

I have accomplished the first major portion of the code I set out to write last
Christmas. My milestone for Quadrare Lexema, first quarter 2016, is available
at MediaFire here:

http://www.mediafire.com/download/26qa4xnidxaoq46/mlptk-2016q1-typeset-full.zip

It's a bit larger than my other snapshots this year, because I included another
full PDF typesetting of my software (which, today, runs into hundreds of pages)
and two audio recordings evoking my most beautiful childhood memories. These
added a few Mbytes. I'll remove them in the next edition.
(No, really; Tuna Loaf truly does encode smaller as a PCM .WAV than an MP3.)

Don't bother downloading this one unless you really want to see how QL's shaping
up or need a preformatted typesetting of progress since the prospectus. There is
nothing much new: all my work in the first quarter has been in Quadrare Lexema &
these updates, and the command line interpreter upgrade isn't yet operational.

My source code is, as usual, free of charge and Public Domain.
Some of the graphics and frills don't belong to me, and I plan to remove them
(either at some unspecified future time or upon any such notification/demand),
but please restrict yourself to my signed code if copying & pasting for profit.

Today's edition of MLPTK includes an upgraded QL. I haven't had time to overhaul
yet, but I did manage to shoehorn in a somewhat improved parser mechanism. This
should make things less of a nightmare for those of you presently experimenting
with the beta version of QL, which is at least a _stable_ beta by now.

Of particular interest are MLPTK's text-mode console, which I employed to debug
Quadrare Lexema in alpha, and which is now stable/maintenance (doesn't crash);
and QL itself, now a stable Beta (doesn't crash much).
My tentative additions to the Bison parser's "left/right" associativity model &
their demonstration in the Command Line Interpretation Translator's library file
are of most interest to experienced programmers. When I've finished writing the
manuscript in its native language, I'll write an additional reference sheet in
English for the novitiate. (See the old AFURACC supplemental schematic reference
for an idea of what this shall look like.)

I've altered QL's schematic slightly, to permit the programmer (that's you!) to
encode a minor degree of context in your grammar, and my code is so easy to read
& alter that you can really have a field day with this thing if you've the mind.

Several of my foregoing briefs cover some of these already; consider also
the Blargument rule, which is what makes the CLIT so fascinating. Utilizing QL's
mechanism to unshift multiple tokens back onto the input stream when executing a
reduction (which is really remarkably clever; because, IIRC, Bison can't unshift
multiple tokens during a reduction deferment, so multiple-symbol productions are
very difficult to achieve by using the industry-standard technology), Blargument
consumes 1 less token than it "actually" does by unshifting the trailing token.
Because of how I wrote QL, the parser differentiates its input stream (virtually
or actually) each time a reduction produces symbols; so, even when a deferred
reduction pushes symbols back onto the input stream, these don't interfere with
any of the other deferred reductions (which exist as though the input stream was
not modified by anything that didn't happen during their "timeline").

In addition, my upgrade to QL's programming interface permits differentiation at
any point within the input stream. Although somewhat wobbly, this serves to show
how it's possible for even a deferred computation to polymorph the code - before
the parser has even arrived there, & without affecting any of the other quanta.
Or at least it will, if I can figure out how to apply it before the next time I
overhaul the code; otherwise, I think I'll drop it from the schematic, because
this input stream metaphor should probably be optimized due to large inputs.

The old Capsule Parser generator is still there, via the ::toTheLimit() method,
and the upgraded Ettercap Parser is generated by ::hattenArDin().

Here is an abridged comparison & contrast:
1. Stack nodes are generic; their place taken by hard-coded transition methods.
   Some alterations to stack node constructor argument vector, algorithm.
   There is more hard code and less heuristics. Actually, the whole parser could
   be generated as static code, and I hope to implement that sometime this year.
2. Input is now a linked list; previously an Array.
   Arrays of tokens are preprocessed to generate this list, which functions as
   a stream buffer with a sequential-access read head.
3. The input stream links forward and the parse stack links backward.
4. Stack extrema now interface with the input stream differentially; that is,
   the input stream itself is differentiated (obviating recursion), instead of
   the recursive virtual differentiation that occurred before.
5. Differentiation of the input stream occurs actually, not virtually, and can
   be programmed at any point in the input stream (although it's such a pain).

Here's an illustration of the extremum-to-read-head interface juncture:

                    -> [ EXTREMUM A ] -> [ READ HEAD A ] -> [ DIFFERENTIAL A ]
                   /                                               v
 -> [ PARSE STACK ] -> [ EXTREMUM B ] -> [ READ HEAD B ] -> [ INPUT BUFFER ] ->
                   \                                               ^
                    -> [ EXTREMUM C ] -> [ READ HEAD C ] -> [ DIFFERENTIAL C ]

Actually it is a little more complicated than that, but that is the basic idea.
As you can see, my parser models a Turing machine that sort of unravels itself
at the interface between what it has already seen and what it presently sees.
Then it zips itself back up when reduce/reduce conflicts resolve themselves.
I imagine it like a strand of DNA that is reconstituting itself in karyokinesis.

The extrema A through C are, actually, probably exactly the same extremum.
However, when the next parse iteration steps forward to the location within the
input stream that is pointed-to by the read head, the parse stack branches; so,
it's easiest to imagine that the extrema have already branched at the point of
interface, because if they haven't branched yet then they are about to.

Here's an illustration of timeline differentiation within the input stream:

                        -> [ DIFFERENTIAL A ] -.       -> [ DIFFERENTIAL C ]
                       /                   v---¯     /                       \
    -> [ READ HEAD(S) ] -> [ UNMODIFIED INPUT ] -> [ TOKEN W ] -> [ TOKEN Z ] ->
                       \                   ^-----------------------------------.
                        -> [ DIFFERENTIAL B ] -> [ DIFFERENTIAL B CONTINUES ] -¯

The parser selects differentials based on either of two criteria: the read head,
which points at the "beginning" of the input stream buffer as it is perceived by
the extremum in that timeline; or a timeline identifier ("thread ID") that tells
the algorithm when it should, for example, jump from token W to differential C
and therefore skip token Z. Thus, multiple inputs are "the same program."
Again: parser state becomes unzipped at differential points, then re-zips itself
after the differentiated timelines have resolved themselves by eliminating those
who encounter a parse error until only one possible timeline remains.

So your code can polymorph itself. In fact, it can polymorph itself as much as
you want it to, because the parser returns ambiguous parses as long as they have
produced the origin symbol during its finalization phase. However, I haven't yet
tested this and it probably causes more difficulty than it alleviates.

However, if you've experimented with AFURACC, you might be disappointed to see
that I have failed to write an interface for you to polymorph the grammar at run
time using QL. Probably that's possible to do, but the scaffold (as it is at the
moment) is rigid and inflexible. I want to work it into a future revision; but,
because a heuristic parser is more-or-less sine qua non to the whole idea of a
polymorphic parser, I'm pretty sure I'll stick to just polymorphing the code.

As I work to reduce the time complexity of the parser algorithm, similarity to
the Free Software Foundation's compiler-compiler, GNU Bison, becomes pronounced.
I'm trying to add some new ideas, but Bison is pretty much perfect as it is, so
I haven't had much luck - things that aren't Bison seem to break or complicate.
I still haven't heard anything from the FSF about this, so I guess it's OK?

However, the scaffold seems stable, so I'll work on the command-line interpreter
for a while and formalize its grammar. After that, probably yet another improved
static code generator, and if that doesn't malfunction I'll try video games IDK.

I intend to complete the formal commandline grammar within second quarter 2016.
Goals: command substitution, javascript substitution (a la 'csh'), environment
variables and shell parameter expansion, and shell arithmetic.
Additionally, if there's time, better loops and a shell scripting language.