Tag Archives: Data Structure

Palling around.

Pall (n.): pawl.

I couldn't write last week, and my upgrade to QL has progressed no further.
For reference, I stalled before comparing the efficiency of nested Objects to
that of nested Arrays, which I must test before experimenting further with the
prototype compiler or even refining the design. I intend to do that this month.
In the meantime, here's a snapshot of MLPTK with new experiments included.

http://www.mediafire.com/download/566ln3t1bc5jujp/mlptk-p9k-08apr2016.zip

And a correction to my brief about the grammar ("Saddlebread"): actually, the
InchoateConjugation sequence does not cause differentiation, because the OP_CAT
prevents the original from reducing. Other parts may be inaccurate. I'll revise
the grammar brief and post a new one as soon as I have fixed the QL speed bug.

I took some time out from writing Quadrare Lexema to write some code I've been
meaning to write for a very long time: pal9000, the dissociated companion.
This software design is remarkably similar to the venerable "Eggdrop," whose C
source code is available for download at various locations on the Internets.
Obviously, my code is free and within the Public Domain (as open as open source
can be); you can find pal9000 bundled with today's edition of MLPTK, beneath the
/reference/ directory.

The chatbot is a hardy perennial computer program.
People sometimes say chatbots are artificial intelligence; although they aren't,
exactly, or at least this one isn't, because it doesn't know where it is or what
it's doing (actually it makes some assumptions about itself that are perfectly
wrong) and it doesn't apply the compiler-like technique of categorical learning
because I half-baked the project. Soon, though, I hope...

Nevertheless, mathematics allows us to simulate natural language.
Even a simplistic algorithm like Dissociated Press (see "Internet Jargon File,"
maintained somewhere on the World Wide Web, possibly at Thyrsus Enterprises by
Eric Steven Raymond) can produce humanoid phrases that are like real writing.
Where DisPress fails, naturally, is paragraphs and coherence: as you'll see when
you've researched, it loses track of what it was saying after a few words.

Of course, that can be alleviated with any number of clever tricks; such as:
	1. Use a compiler.
	2. Use a compiler.
	3. Use a compiler.
I haven't done that with p9k, yet, but you can if you want.

Of meaningful significance to chat robots is the Markov chain.
That is a mathematical model, used to describe some physical processes (such as
diffusion): a state machine in which the probability of the next state depends
only on the current state of the system, without regard to how that state was
encountered.
Natural language, especially that language which occurs during a dream state or
drugged rhapsody (frequently and too often with malicious intent, these are
misinterpreted as the ravings of madmen), can also be modeled with something
like a Markov chain because of the diffusive nature of tangential thought.

The Markov-chain chat robot applies the principle that the state of a finite
automaton can be described in terms of a set of states foregoing the present;
that is, the state of the machine is a sliding window, in which is recorded some
number of states that were encountered before the state existent at the moment.
Each such state is a word (or phrase / sentence / paragraph if you fancy a more
precise approach to artificial intelligence), and the words are strung together
one after another with respect to the few words that fit in the sliding window.
So, it's sort of like a compression algorithm in reverse, and similar to the way
we memorize concepts by relating them to other concepts. "It's a brain. Sorta."
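
To make the sliding-window idea concrete, here's a minimal sketch in Javascript
(the names and the window width are mine, not pal9000's): it records, for every
window of W consecutive words, which words were observed to follow that window.

    // A sliding-window Markov table: the "state" is the last W words, in order.
    function buildChain (corpus, W) {
        var words = corpus.split(/\s+/), chain = {};
        for (var i = 0; i + W < words.length; i++) {
            var state = words.slice(i, i + W).join(" ");  // the sliding window
            if (!chain[state]) { chain[state] = []; }
            chain[state].push(words[i + W]);              // a word seen after that state
        }
        return chain;
    }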

One problem with Markov robots, and another reason why compilers are of import
in the scientific examination of artificial intelligence, is that of bananas.
The Banana Problem describes the fact that, when a Markov chain is traversed, it
"forgets" what state it occupied before the sliding window moved.
Therefore, for any window of width W < 6, the input B A N A N A first produces
state B, then states A and N alternately forever.
Obviously, the Banana Problem can be solved by widening the window; however, if
you do that, the automaton's memory consumption increases proportionately.

Additionally, very long inputs tend to throw a Markov-'bot for a loop.
You can sorta fix this by increasing the width of the sliding window signifying
which state the automaton presently occupies, but then you run into problems
when the sliding window is too big and it can't think of any suitable phrase
because no known windows (phrases corresponding to the decision tree's depth)
fit the trailing portion of the input.
It's a sticky problem, which is why I mentioned compilers; they're of import to
artificial intelligence, which is news to absolutely no one, because compilers
(and grammar, generally) describe everything we know about the learning process
of everyone on Earth: namely, that intelligent beings construct semantic meaning
by observing their environments and deducing progressively more abstract ideas
via synthesis of observations with abstractions already deduced.
Nevertheless, you'd be hard-pressed to find even a simple random-walk chatbot
that isn't at least amusing.
(See the "dp" module in MLPTK, which implements the vanilla DisPress algorithm.)

My chatbot, pal9000, is inspired by the Dissociated Press & Eggdrop algorithms;
the copyrights to which are held by their authors, who aren't me.
Although p9k was crafted with regard only to the mathematics and not the code,
if my work is an infringement, I'd be happy to expunge it if you want.

Dissociated Press works like this:
	1. Print the first N words (letters? phonemes?) of a body of text.
	2. Then, search for a random occurrence of a word in the corpus
	   which follows the most recently printed N words, and print it.
	3. Ad potentially infinitum, where "last N words" are round-robin.
It is random: therefore, humorously disjointed.
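
Atop the chain table sketched a few paragraphs ago, that random walk is only a
few lines (again, illustrative code, not the dp module itself):

    // Dissociated Press, more or less: a random walk over the chain.
    function dissociate (chain, seed, W, maxWords) {
        var out = seed.slice();                           // seed: an array of W words
        for (var n = 0; n < maxWords; n++) {
            var state = out.slice(out.length - W).join(" ");
            var nexts = chain[state];                     // words seen after this state
            if (!nexts) { break; }                        // nothing follows: fall silent
            out.push(nexts[Math.floor(Math.random() * nexts.length)]);
        }
        return out.join(" ");
    }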

And Eggdrop works like this (AFAICR):
	1. For a given coherence factor, N:
	2. Build a decision tree of depth N from a body of text.
	3. Then, for a given input text:
	4. Feed the input to the decision tree (mmm, roots), and then
	5. Print the least likely response to follow the last N words
	   by applying the Dissociated Press algorithm non-randomly.
	6. Terminate response after its length exceeds some threshold;
	   the exact computation of which I can't recall at the moment.
It is not random: therefore, eerily humanoid. (Cue theremin riff, thundercrash.)
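
As far as step 5 goes, a non-random selection might look like the following
sketch (my guess at the idea, not Eggdrop's C): pick the continuation that was
observed least often after the current window.

    // "Least likely" continuation: the rarest word observed after this state.
    function leastLikelyNext (chain, state) {
        var nexts = chain[state] || [], counts = {};
        for (var i = 0; i < nexts.length; i++) {
            counts[nexts[i]] = (counts[nexts[i]] || 0) + 1;
        }
        var rarest = null;
        for (var word in counts) {
            if (rarest === null || counts[word] < counts[rarest]) { rarest = word; }
        }
        return rarest;  // null when the state was never seen
    }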

A compiler, such as I imagined above, could probably employ sliding windows (of
width N) to isolate recurring phrases or sentences. Thereby it may automatically
learn how to construct meaningful language without human interaction.
I think you'll agree that the simplistic method is pretty effective on its own;
notwithstanding, I'll experiment with a learning design once I've developed QL's
code generation far enough that it can translate itself to Python.

Or possibly I'll nick one of the Python compiler compilers that already exists.
(Although that would take all the fun out of it.)

I'll parsimoniously describe how pal9000 blends the two:

First of all, it doesn't (not exactly), but it's close.
Pal9000 learns the exact words you input, then generates a response within some
extinction threshold, with a sliding window whose width is variable and bounded.
Its response is bounded by a maximum length (to solve the Banana Problem).
Because it must by some means know when a response ends "properly," it also
counts the newline character as a word.
The foregoing are departures from Eggdrop.
It also learns from itself (to avoid saying something twice), as does Eggdrop.

In addition, p9k's response isn't necessarily random.
If you use the database I included, or choose the experimental "generator"
response method, p9k produces a response that is simply the most surprising word
it encountered subsequent to the preceding state chain.
This produces responses more often, and they are closer to something you said
before, but of course this is far less surprising and therefore less amusing.
The classical Eggdrop method takes a bit longer to generate any reply; but, when
it does, it drinks Dos Equis.
... Uh, I mean... when it does, the reply is more likely to be worth reading.
After I have experimented to my satisfaction, I'll switch the response method
back to the classic Eggdrop algorithm. Until then, if you'd prefer the Eggdrop
experience, you must delete the included database and regenerate it with the
default values and input a screenplay or something. I think Eggdrop's Web site
has the script for Alien, if you want to use that. Game over, man; game over!

In case you're curious, the algorithmic time complexity of PAL 9000 is somewhere
in the ballpark of O(((1 + MAX_COHERENCE - MIN_COHERENCE) * N) ^ X) per reply,
where N is every unique word ever learnt and X is the extinction threshold.
"It's _SLOW._" It asymptotically approaches O(1) in the best case.

For additional detail, consult /mlptk/reference/PAL9000/readme.txt.

Pal9000 is a prototypical design that implements some strange ideas about how,
exactly, a Markov-'bot should work. As such, some parts are nonfunctional (or,
indeed, actively malfunctioning) and vestigial. "Oops... I broke the algorithm."
While designing, I altered several good ideas that Eggdrop and DisPress got
right the first time, and made the whole thing worse overall. For
a more classical computer science dish, try downloading & compiling Eggdrop.

I’m happy to relate that puns about stacks & state have become out of date.

More of QL, courtesy of MediaFire, with the MLPTK framework included as usual:

https://www.mediafire.com/?xaqp7cq9ziqjkdz (http://www.mediafire.com/download/xaqp7cq9ziqjkdz/mlptk-operafix-21mar2016.zip)

I have, at last, made time to fix the input bug in Opera!
I'm sorry for the delay and inconvenience. I didn't think it'd be that tricky.

I fixed an infinite loop bug in "rep" (where I didn't notice that someone might
type in a syntax error, because I didn't have a syntax error in my test cases),
and added to the bibliography the names of all my teachers that I could recall.
I added a quip to the bootstrap sequence. More frills.

My work on QL has been painfully slow, due to pain, but it'll all be over soon.
I see zero fatal flaws in ::hattenArDin. I guess debugging'll be about a month,
and then it's back to the Command Line Interpretation Translator. Sorry for the
delay, but I have to rewrite it right before I can right the written writing.
The improved QL (and, hopefully, the CLIT) is my goal for second quarter, 2016.
After those, I'll take one of two options: either a windowing operating system
for EmotoSys or an overhaul of QL. Naturally, I favor the former, but I must first
review QL to see if the API needs to change again. (Doubtful.)

MLPTK, and QL, are free of charge within the Public Domain.

Last time I said I needed to make the input stream object simpler so that it can
behave like a buffer rather than a memory hog. I have achieved this by returning
to my original design for the input stream. Although that design is ironically
less memory-efficient in action than the virtual file allocation table, it leans
upon the Javascript interpreter's garbage collector so that I mustn't spend as
much time managing the fragmented free space. (That would've been more tedious
than I should afford at this point in the software design, which still awaits an
overhaul to improve its efficiency and concision of code.) And, like I thought,
it'll be too strenuous for me to obviate the precedence heuristic by generating
static code. In fact: unless tokens are identified by enumeration, dropping that
heuristic actually reduces efficiency, because string comparisons are slower than
integer comparisons; also, I was wrong about the additional states needed to
obviate it: there're none, but a new "switch" statement (these, like "if" and
"while," are heuristics) is needed, and it's still slower (even in the best case)
than "if"ing the precedence integer per node.

Here's an illustration of the "old" input stream object (whither I now return):

	 _________________________________
	| ISNode: inherits from Array     | -> [ ISNode in thread 0 ]
	|    ::value = data (i.e., token) |
	|    ::[N] = next node (thread N) | -> [ ISNode in thread 1 ]
	|    (Methods...)                 |
	|    (State....)                  | -> [ etc ... ]
	-----------------------------------

As you see, each token on input is represented by a node in a linked list.
Each node requires memory to store the token, plus the list's state & methods.
Nodes in the istream link forward. I wanted to add back-links, but nightmare.
For further illustration, see my foregoing briefs.
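
In code, the shape of that node is roughly the following (a sketch of the
structure pictured above, not the actual ISNode class):

    // Each token becomes a node; node[N] links forward to the next node in thread N.
    function ISNode (token) {
        this.value = token;   // ::value = data (i.e., the token)
    }
    ISNode.prototype = new Array();

    function appendToken (tail, token, thread) {
        var node = new ISNode(token);
        tail[thread] = node;  // forward link only; no back-links
        return node;          // the new tail of that thread
    }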

Technically the input stream can be processed in mid-parse by using the "thread"
metaphor (see "Stacking states on dated stacks"); however, by the time you need
a parser that complex, you are probably trying to work a problem that requires a
nested Turing machine instead of using only one. Don't let my software design
lure you into the Turing tar-pit: before using the non-Bison-like parts, check &
see if you can accomplish your goal by doing something similar to what you'd do
in Yacc or Bison. (At least then your grammar & algorithm may be portable.)
QL uses only thread zero by default... I hope.

GNU Bison is among the tools available from the Free Software Foundation.
Oh, yeah, and Unix was made by Bell Laboratories, not Sun Microsystems. (Oops.)

Here's an illustration of the "new" input stream object (whence I departed):

	Vector table 0: file of length 2. Contents: "HI".
		       (table index in first column, data position in second column)
		0 -> 0 (first datum in vector 0 is at data position 0)
		1 -> 1 (second datum, vector 0, is at data position 1)
	Vector table 1: file of length 3. Contents: "AYE".
		0 -> 3
		1 -> 4
		2 -> 7
	Vector table 2: file of length 3. Contents: "H2O".
		0 -> 0
		1 -> 5
		2 -> 10
	Vector table 3: file of length 6. Contents: "HOHOHO".
		0 -> 0
		1 -> 10
		2 -> 0
		3 -> 10
		4 -> 0
		5 -> 10
	Data table: (data position: first row, token: second row)
		0     1     2     3     4     5     6     7     8     9    10
		H     I     W     A     Y     2     H     E     L     L    O

In that deprecated scheme, tokens were stored in a virtual file.
The file was partitioned by a route vector table, similar to vectabs everywhere.
Nodes required negligible additional memory, but fragmentation was troublesome.
Observe that, if data repeats itself, the data table need not increase in size:
lossless data compression, like a zip file or PNG image, operates on a similar
principle, whereby repeating sequences of numbers are squeezed into smaller ones
by building a table of the sequences that recur. GIF89a employs LZW, a
dictionary-based compression that works on the same principle.
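
As a toy illustration of that repeating-data idea (not the old istream's exact
layout, which packed every file into one shared string), consider:

    var dataTable = [];                       // every distinct token, stored once
    function storeFile (tokens) {
        var vectorTable = [];
        for (var i = 0; i < tokens.length; i++) {
            var pos = dataTable.indexOf(tokens[i]);
            if (pos < 0) { pos = dataTable.push(tokens[i]) - 1; }
            vectorTable.push(pos);            // the i-th datum lives at data position pos
        }
        return vectorTable;                   // the file, as positions into dataTable
    }
    // storeFile(["H","I"])                  -> [0, 1]
    // storeFile(["H","O","H","O","H","O"]) -> [0, 2, 0, 2, 0, 2]  (the "HOHOHO" effect)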

There is again some space remaining, and not much left to say about the update.
I'll spend a few paragraphs to walk you through how Quadrare Lexema parses.

QL is based upon a simple theory (actually, axiom) of how to make sense of data:
    1. Shift a token from input onto lookahead; the former LA becomes active.
    2. If the lookahead's operator precedence is higher, block the sequence.
    3. If the active token proceeds in sequence, shift it onto the sequence.
    4. Otherwise, if the sequence reduces, pop it and unshift the reduction.
    5. Otherwise, block the sequence and wait for the expected token to reduce.
And, indeed: when writing a parser in a high-level language such as Javascript,
everything really is exactly that simple. The thousands of lines of code are merely
a deceptive veil of illusion, hiding the truth from your third eye.

Whaddayamean, "you don't got a third eye"?
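
If you want to see the skeleton of those five steps without QL's machinery, here
is a toy shift-reduce loop in Javascript (nothing to do with QL's actual tables;
it only parses digit arithmetic, and precedence decides whether to block or
reduce):

    function parseArithmetic (text) {
        var tokens = text.match(/\d+|[+*]/g) || [];
        var prec = { "+": 1, "*": 2 };
        var stack = [];                                  // the sequence being built
        function reduceOnce () {                         // [a, op, b] -> one value
            var b = stack.pop(), op = stack.pop(), a = stack.pop();
            stack.push(op === "+" ? a + b : a * b);
        }
        for (var i = 0; i < tokens.length; i++) {
            var tok = tokens[i];
            if (/\d/.test(tok)) { stack.push(parseInt(tok, 10)); continue; } // shift a number
            // An operator arrives: reduce any completed sequence whose operator has
            // equal-or-higher precedence; otherwise block it and keep shifting.
            while (stack.length >= 3 && prec[stack[stack.length - 2]] >= prec[tok]) {
                reduceOnce();
            }
            stack.push(tok);                             // shift the operator
        }
        while (stack.length >= 3) { reduceOnce(); }      // input exhausted: reduce the rest
        return stack[0];
    }
    // parseArithmetic("1+2*3") === 7; parseArithmetic("2*3+1") === 7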

QL doesn't itself do the tokenization. I wrote a separate scanner for that. It's
trivial to accomplish. Essentially the scanner reads a string of text to find a
portion that starts at the beginning of the string and matches a pattern (like a
regular expression, which can be done with my ReGhexp library, but I used native
Javascript reg exps to save time & memory and make the code less nonstandard);
then the longest matching pattern (or, if tied, the first or last or polka-dotted
one: whichever suits your fancy) is chopped off the beginning of the string and
stored in a token object who contains an identifier and a semantic value. The
value of a token is computed by the scanner much as the parser computes reductions.
The scanner doesn't need to do any semantic analysis, but some is OK in HLLs.
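
For the curious, a longest-match scanner of that sort is only a few lines
(illustrative code with made-up rule names, not MLPTK's scanner):

    function scan (text, rules) {
        var tokens = [];
        while (text.length > 0) {
            var best = null;
            for (var i = 0; i < rules.length; i++) {
                var m = rules[i].pattern.exec(text);      // every pattern is anchored with ^
                if (m && (best === null || m[0].length > best.text.length)) {
                    best = { id: rules[i].id, text: m[0] };
                }
            }
            if (best === null) { throw new Error("no rule matches: " + text); }
            tokens.push({ id: best.id, value: best.text }); // identifier + semantic value
            text = text.slice(best.text.length);            // chop it off the beginning
        }
        return tokens;
    }
    var rules = [                                  // longest match wins; first rule breaks ties
        { id: "number", pattern: /^\d+/ },
        { id: "word",   pattern: /^[a-z]+/ },
        { id: "space",  pattern: /^\s+/ }
    ];
    // scan("10 green bottles", rules)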

Now that your input has ascended from the scanning chakra, let's manifest
another technical modal analysis. I wrote all my code while facing the upstream
direction of a flowing river, sitting under a waterfall, and meditating on the
prismatic refraction of a magical crystal when exposed to a candle dipped at the
eostre of the dawn on the vernal solstice -- which caused a population explosion
among the nearby forest rabbits, enraged a Zen monk who had reserved a parking
space, divested a Wiccan coven, & reanimated the zombie spirits of my ancestors;
although did not result in an appreciable decrease in algorithmic complexity --
but, holy cow, thou may of course ruminate your datum howsoever pleases thee.

The parser begins with one extremum, situated at the root node. The root is that
node whence every possible complete sequence begins; IOW, it's a sentry value.
Thence, symbols are read from input as described above. The parse stack -- that
is, the extrema -- builds on itself, branching anywhen more than one reduction
results from a complete sequence. As soon as a symbol can't go on the stack: if
the sequence is complete, the stack is popped to gather the symbols comprising
the completed sequence; that occurs by traversing the stack backward til arrival
at the root node, which is also popped. (I say "pop()" although, as written, the
stack doesn't implement proper pushing and popping. Actually, it isn't even a
stack: it's a LIFO queue. Yes, yes: I know queues are FIFO; but I could not FIFO
the FIFO because the parser doesn't know where the sequence began until it walks
backwards through the LIFO. So now it's a queue and a stack at the same time;
which incited such anger in the national programming community that California
seceded from the union and floated across the Pacific to hang out with Nippon,
prompting Donald Trump to reconsider his business model. WTF, m8?) Afterward,
the symbols (now rearranged in the proper, forward, order) are trundled along to
the reduction constructor, which is an object class I generated with GenProtoTok
(a function that assembles heritable classes). That constructor makes an object
which represents _a reduction that's about to occur;_ id est: a deferment, which
contains the symbols that await computation. The deferment is sent to the parse
stack (in the implementation via ::toTheLimit) or the input stream (::hatten);
when only one differential parse branch remains, computation is executed and the
deferred reduction becomes a reduction proper. Sequences compound as described.

On Loggin’.

The post title, "On Loggin'," is a pun on the algorithmic time-complexity of
sorting with an ordered binary tree: O(n * log(n)). (A single seek costs only
O(log(n)); inserting all n items costs O(n * log(n)).)

This lecture provides my advice in regard to sorting data by using binary trees.
(From Bouvier's Law Dictionary, Revised 6th Ed (1856) [bouvier]:
 ADVICE, com. law. A letter [...] intended to give notice of [facts contained].)
My lectures are in the Public Domain, and are free of charge.
"By order of the Author." - Mark Twain, _Huckleberry Finn._

A tree is a directed acyclic graph. ("Tree" (1998-11-12) via FOLDOC 30jan2010.)
Other sort methods besides tree sort include quicksort, bubble sort, heap sort,
and insertion sort. ("Sort" (1997-02-12) via FOLDOC, 30th January 2010.)

This pattern can be seen almost everywhere that data are represented as discrete
notional objects: for example, the directory trees contained within the file
system of your personal computer, the technology/skill trees that are likely to
present themselves in your favorite video game, and in any computer program that
maintains the logical order of a data set as new data are added. These aren't
necessarily treelike data structures; but, because the perceived "shape" of the
data is a tree, they're probably similar if coded in object-oriented language.
I imagine a file system as "what an object even is;" although a computer program
is volatile, whereas files are nonvolatile, the idea is much the same.
See also: object-oriented programming, procedural vs. functional languages, file
systems, {non,}volatile memory such as RAM and EEPROM, sort algorithms.

Of course, treelike structures aren't always the optimum schematic to represent
a hierarchical assortment of data; definitively, like all other object-oriented
designs, they're mere abstractions employed to delineate pieces of a contiguous
unidimensional memory space (the Turing model of a logic machine considers RAM
to be just a big empty pair of set brackets; data are populated & operated-upon
arbitrarily, but the container is merely a set) and to signify correlation.
An object is a collection of related data, which comprise a several datum.
See also: pointers, heap memory, set theory, discrete math, and Boolean logic.

¶ The following (til the next pilcrow) is nearly a verbatim excerpt from FOLDOC.
The data stored in a tree is processed by traversal. The algorithm begins at
root node, transforms or collects or computes against the data, and then repeats
itself for the root's child-nodes ("branches") in some specific order. Three
common traversal orders are pre-order, post-order, and in-order traversal.
For the binary tree illustrated below:
        T
       / \
      I   S
     / \
    D   E
A pre-order traversal visits the nodes in the order T I D E S.
A post-order traversal visits them in the order D E I S T.
An in-order traversal visits them in the order D I E T S.
¶ "Traversal" (2001-10-01) via FOLDOC, ed. 30th January 2010.
See also: recursion, the call stack (subroutine invocation), digraph traversal.
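
Rendered as Javascript (the node shape here is mine; the Hopper further below
uses plain arrays instead of named fields), the three traversals of that tree
look like so:

    var tree = {
        datum: "T",
        left:  { datum: "I",
                 left:  { datum: "D", left: null, right: null },
                 right: { datum: "E", left: null, right: null } },
        right: { datum: "S", left: null, right: null }
    };
    function preOrder (n) {
        return n ? [n.datum].concat(preOrder(n.left), preOrder(n.right)) : [];
    }
    function postOrder (n) {
        return n ? postOrder(n.left).concat(postOrder(n.right), [n.datum]) : [];
    }
    function inOrder (n) {
        return n ? inOrder(n.left).concat([n.datum], inOrder(n.right)) : [];
    }
    // preOrder(tree)  -> ["T","I","D","E","S"]
    // postOrder(tree) -> ["D","E","I","S","T"]
    // inOrder(tree)   -> ["D","I","E","T","S"]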

To encode a tree walk via recursive algorithm is straightforward, and is how the
concept of data set traversal is introduced to novitiate computer programmers.
(Well, after they've learned how to loop over an array with "for.")

Succinctly:

    function put (pNode, value) {
        if (pNode.isEmpty()) { pNode.datum = value; }                  // vacant: store here
        else if (value < pNode.datum) { put(pNode.bLeft, value); }     // smaller: descend left
        else if (value > pNode.datum) { put(pNode.bRight, value); }    // larger: descend right
    } // Ordered insertion. (Duplicates are ignored; child nodes are assumed to exist.)

    function tell (pNode, value) {
        if (pNode.isEmpty()) { return null; }                               // not found
        else if (value < pNode.datum) { return tell(pNode.bLeft, value); }  // seek left
        else if (value > pNode.datum) { return tell(pNode.bRight, value); } // seek right
        return pNode;                                                       // found
    } // Seek pointer to inserted.

_Any_ recursive algorithm can be reduced to an iterative algorithm, provided you
can use variable-length arrays to simulate the function of the call stack, which
stores the arguments to the subroutine you invoked and the address to restore to
the instruction pointer when the subroutine returns. That is, the call stack is
like bookmarks or tabbed indices to tell the routine where it was before a jump.
Replace the "instruction pointer and arguments" model of a stack frame with any
constant-width data set that remembers what you were doing before you jumped to
the next node in the digraph, and, naturally, you can remember the same stuff as
if you had utilized the convenient paradigm that is recursivity. (Stack frames
don't really all have to be the same size, but a definite frame width spares you
from doing yet another algorithm to parse frames. See: network protocol stacks.)

The "stuff you remember" is the state of the finite state automaton who tends to
the mechanism whereby the machine knows which instruction to execute next.
Recursion provides this automaton "for free," because it crops up so frequently;
but, for whatever reason (such as the 3000-call recursion limit in Firefox 3.6),
you might want to write a tree sort without using recursion at all.

Gentlemen, behold...
    corm = GenProtoTok(
        Object,
        function () {
            this.tree = new this.Hop();
            return this;
        },
        "Hopper",
        undefined,

        "locus", undefined,

        "seek", function (u_val) {
            while (this.locus[2] != undefined) {
                if (u_val > this.locus[2]) { this.locus = this.hop(1); }
                else if (u_val == this.locus[2]) { return false; }
                else { this.locus = this.hop(0); } // u < loc[2] || u == undef
            }
            return true;
        }, // end function Hopper::seek

        "tellSorted", function () {
            if (this.tree == undefined) { return undefined; }
            var evaluation = new Array();
            for (var orca = new Array(this.tree), did = new Array(false);
                orca.length>0;
            ) {
                this.locus = orca.pop();
                if (did.pop()) {
                    evaluation.push(this.locus[2]);
                    if (this.locus[1] != undefined) {
                        orca.push(this.locus[1]);
                        did.push(false);
                    }
                } else {
                    orca.push(this.locus);
                    did.push(true);
                    if (this.locus[0] != undefined) {
                        orca.push(this.locus[0]);
                        did.push(false);
                    }
                }
            }
            return evaluation;
        }, // end function Hopper::tellSorted

        "hop", function (index) {
            if (this.locus[index]==undefined) {
                this.locus[index] = new this.Hop();
            }
            return this.locus[index];
        }, // end function Hopper::hop

        "addUnique", function (value) {
            this.locus = this.tree;
            if (this.seek(value)) { this.locus[2] = value; return true; }
            return false;
        }, // end function Hopper::addUnique

        "Hop", GenPrototype(Array, new Array())
    );
... corm!

Here is how corm works: it's an Array retrofitted with a binary tree algorithm.

Each node of the tree is an Array whose members, at these indices, signify:
    0. Left-hand ("less-than") branch pointer.
    1. Right-hand ("greater-than") branch pointer.
    2. Value stored in the node.

Values are added via the corm object's addUnique method, which resets a pointer
to the algorithm's location in the tree and then seeks a place to put the datum.
Seeking causes a node to be created, if no node yet contains the datum; so, the
corm::seek method's name is misleading, but seeking is exactly what it does.

In-order traversal is executed by corm::tellSorted which returns the sorted set.
Recursion is simulated using a state stack whose frames each contain a pointer
to the "previous" node and a boolean value that signifies whether its left child
has been visited. Here is the main loop, step-by-step, in natural English:
    for (var orca = new Array(this.tree), did = new Array(false);
        orca.length>0;
    ) {
"Given the stack, which I consider until it is exhausted..."
        this.locus = orca.pop();
"I am here." -^
        if (did.pop()) {
"If I did the left branch already,"
            evaluation.push(this.locus[2]);
"then this value occurs at this location in the ordered set;"
            if (this.locus[1] != undefined) {
"if the right branch exists,"
                orca.push(this.locus[1]);
"visit the right child on the next loop iteration"
                did.push(false);
"(and, naturally, I haven't visited the right child's left child yet)"
            }
"."
        } else {
"If I haven't done the left branch already,"
            orca.push(this.locus);
"then I have to visit this node again when I come back"
            did.push(true);
"(except I shall have visited the left child by then);"
            if (this.locus[0] != undefined) {
"if the left branch exists,"
                orca.push(this.locus[0]);
"visit the left child on the next loop iteration"
                did.push(false);
"(and, naturally, I haven't visited the left child's left child yet)"
            }
"."
        }
"... I consume pointers and bools, doing all that there stuff"
    }
"."

And that is how to do everything I did with the Hopper data structure
(which is here renamed corm, because I like corm) in quadrare-lexema.js.



Full disclosure: whereas; although I do put on women's clothing; and although
indeed I sleep all night and work all day; and, in addition, despite that I have
been known, of an occasion: to eat my lunch, go to the lavatory, pick wild
flowers, hang around in bars, and want to be a girlie; my Wednesdays have not
historically been known to involve shopping or buttered scones, I don't know how
to press a flower, I have neither suspenders nor heels nor a bra (but thanks for
offering), and am not, in fact, a lumberjack.