Tag Archives: Theory

Essays regarding the abstract concept of a technical subject.

Give Me That NP-Time Religion.

"Give Me That NP-Time Religion."
"What was Good Enough For Granddad is Good Enough For Me."
"Day-Old Panglossary."
"Speakeasy."
"Babel: the New Fishing Sensation!"

"NP Religion" is a pun on nondeterministic polynomial time & language jihads.
"Granddad" is a pun on the title of a song by the Squirrel Nut Zippers.
"Panglossary" is a pun on linguistics and bread.
"Speakeasy" is a pun on methamphetamine during the Prohibition era.
"Babel" is a pun on Babel Fish, in D. Adams' _Hitchhiker's Guide to the Galaxy._

NB: This lecture is a summary, or whirlwind tour, _not_ a tutorial.
    Until such time as I write some of those, I suggest you seek them elsewhere.
    As with all my publications, it's free of charge to the Public Domain.



I've sometimes encountered the question: "what's the best language to study?"

The answer varies as widely as the askers, but boils down to "whatever."

Nevertheless, I have picked one for the moment: Python 3, whose reference
interpreter (CPython) is written in C. Python is easy to learn, nearly as
powerful as C, and implements many fine metaphors such as generators, list
comprehensions, and generator expressions. If you haven't a clue what I've just
said, that's fine, because Python's still great for you as you begin with a
computer programming curriculum. See also the Icon language.
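If those metaphors are Greek to you, here's a minimal taste of each in plain
Python 3 (nothing below is specific to any library):

```python
# A generator function: computes squares lazily, one at a time.
def squares(n):
    for i in range(n):
        yield i * i

# A list comprehension: builds the whole list at once.
evens = [x for x in range(10) if x % 2 == 0]

# A generator expression: like the comprehension, but lazy.
lazy_evens = (x for x in range(10) if x % 2 == 0)

print(list(squares(5)))   # [0, 1, 4, 9, 16]
print(evens)              # [0, 2, 4, 6, 8]
print(list(lazy_evens))   # [0, 2, 4, 6, 8]
```

The lazy forms only compute values as they're consumed, which matters when the
sequence is long or infinite.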

In the field of information technology, all have their own preferred flavor of
computing. If you'd like some language I've never written, such as Haskell, you
are probably better off to study that instead of Py3. Prefer especially the
languages your friends write and will help you learn. A human being is a better
tutor than a reference manual -- whence I've learnt much.



Briefly, I'll summarize languages I know or with which I'm familiar in passing.
I'll describe the category (What & How), utility (Where & When), and specialty
(Why & Who) of each such language.

There are hundreds of computer programming & markup languages, of which I've
reviewed only a few here. See also The Daily WTF (Worse Than Failure) and your
friendly neighborhood University's computer science department, as well as text-
books like Structure & Interpretation of Computer Programs and tutorials.

I'll not limit these to programming languages, properly; also included shall be
descriptions of markup languages and perhaps a few esoteric mini-languages like
NullSoft's Scriptable Install System. One noteworthy limitation: ubiquity.

(Roughly, the difference between a programming & a markup language is loops,
 i.e. iteration.)

NB: any language can, hypothetically, be compiled. Interpreted languages usually
assume that the speed loss and overhead from interpretation are tolerable, or
there's some caveat (dynamic typing or execution environment) that requires an
interpreter even when the language is "compiled" (such as, IDK, Java).

In no particular order:




JAVASCRIPT

What: ECMAScript (Javascript) is a procedural language used in Web pages.
      Javascript is also used in Web browser design, usually supplementing C++,
      for certain runtime code such as Firefox's ill-fated tab gallery.
      The Windows Script Host will execute Javascript outside a browser.
How:  Much like C, without malloc(), but quite a bit slower because "everything
      is an object" and for other amusing reasons not limited to the
      interpreter. Browser-based implementations can access the Document Object
      Model. Javascript is an interpreted language.

Where: Dynamic HTML pages for the Worldwide Web. In-browser apps like my MLPTK.
When:  Although Javascript fails at three-dimensional rendering, it is opportune
       for such tasks as moving parts of Web pages (where Cascading Stylesheets
       cannot suffice), making them appear and disappear, and sending beacon
       queries at specified intervals so pages update without clicking "Reload."

Why: Javascript requires no compiler. Compatible Web browsers exist for any OS.
     Modern browsers implement a recursion limit, due to heap and stack overflow
     exploits, which hinders traditionally-generated reentrant parsers; other
     than this caveat, and the speed problem, JS is good for simple computation.
Who: Web developers, browser programmers, casual programming, portability.

Summary: Great for Dynamic HTML! The recursion limit can be a pain, however.



PHP HYPERTEXT PREPROCESSOR

What: An interpreted language executed by Web servers such as Apache or nginx.
How:  Like other languages compatible with the Common Gateway Interface, which
      are many and varied, it's executed by the server.

Where: PHP can be written in conjunction with HTML, within the same document.
       Dynamic Web forms and dynamically generated pages "feel" more natural.
When:  PHP is useful for the server-side ("back end") of Web applications,
       especially where the user must never know how passwords are validated and
       where the server can take some bandwidth load off of the transaction by
       computing something on its side rather than transmitting a massive JS lib
       to the client side and asking the client to compute for it.

Why: Because it's server-side, the end user never sees what it's doing, and so
     it's fantastic for bookkeeping on your side of the transaction.
     HOWEVER! Always remember that you need to keep a careful eye on security,
     especially where the user supplies any input at all. You'll need to escape
     and/or sanitize this input in other ways, to avoid clever little hacks like
     Johnny "; DROP TABLE *; ".
Who: Writers of CAPTCHAs, Web shopping carts, members-only galleries, etc.

Summary: Like other CGI tongues, PHP is a Webmaster's vade mecum.
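The moral about little Johnny applies to every CGI tongue, not just PHP; here's
the classic fix sketched with Python's standard-library sqlite3 -- bind user
input as a parameter, never splice it into the SQL:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.execute("INSERT INTO users VALUES ('alice')")

evil = "'; DROP TABLE users; --"   # little Johnny Drop Tables

# WRONG: string concatenation splices the input into the SQL text itself.
# db.execute("SELECT * FROM users WHERE name = '" + evil + "'")

# RIGHT: a bound parameter is treated as data, never as SQL.
rows = db.execute("SELECT * FROM users WHERE name = ?", (evil,)).fetchall()
print(rows)   # [] -- no match, and the table survives
```

Every mainstream database driver (and PHP's PDO) offers the same bound-parameter
facility; escaping by hand is the error-prone fallback.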



HYPERTEXT MARKUP LANGUAGE

What: A formatting language used for writing hypertext pages on the Web.
How:  HTML marks up a hypertext document for rendering by a Web browser.

Where: On the World Wide Web, or a mirror on your local file system, etc.
When:  HTML documents are the markup format of choice for Web browser rendering.

Why: Web pages, GUIs for DHTML apps, front ends for business.
Who: Web developers, casual rich text writers, authors of technical documents.

Summary: Good for quick GUIs, reference manuals, and writing (w/o MS Office).



CASCADING STYLE SHEETS

What: CSS (not to be confused with the DVD content encryption scheme of the same
      name) is a formatting (markup) language with programming-like abilities.
How:  It is an interpreted language. Its facilities exclude iteration/recursion.

Where: Web pages, résumés, and other such documents.
When:  Whenever HTML formatting is insufficient to the task.
       CSS, AFAIK, lacks loops, and can't use HTTP. Supplement it w/Javascript.

Why: CSS is a faster, prettier way to mark up Web pages.
Who: Web developers, publishers, résumé formatting, etc.

Summary: Useful to format your curriculum vitae, if LaTeX is too much trouble.



C/++

What: C is a procedural programming language that's near to machine language.
      C is "sorta" object-oriented. C++ is more conducive to OO design. I use C.
How:  The programmer works with bytes & system interfaces to effect his design.
      C is compiled to assembly language and then assembled to machine code.

Where: Unix and Unix-like systems, such as MacOS 10+ and Linux.
       C can also be compiled on other systems, like Windows: Dev-C++, mingw.
       The Linux Programmer's Manual, included with Ubuntu's "build-essential"
       package, documents the interfaces specified by POSIX and the Single Unix
       Specification, as well as the ISO C standard (ISO/IEC 9899; the August 3,
       1998 Committee Draft, WG14/N843, became C99). C is, substantially, Unix.
When:  The C programming language is close to machine language in many respects:
       therefore, and because compilers have been written and optimized much, it
       is blazing fast. C is most fruitfully employed in operating system design
       because the computer's hardware must be spoken to in machine code.

Why: If I need a fast program or a portable one, C is the lingua franca.
Who: Operating system designers, simulation/game programmers, hackers.

Summary: C and C++ are the Speedy Gonzales of the programming world.



PYTHON

What: Python is a procedural programming language with functional capabilities.
      It offers object-orientation comparable to C++'s, with additional
      benefits. Python is interpreted, though it compiles to bytecode.
How:  Using its substantial standard library, and those extra OO benefits,
      Python is quite able to solve the vast majority of information technology
      woes. Its interpreter runs on Windows & Linux.

Where: Python's extensive manual (& library reference) describe convenient fixes
       for most IT-related problems. My only problem with it: unlike, e.g., the
       Haskell language, programs can't be described in pseudo-EBNF.
When:  Just about anytime for fun & profit: Python's license permits commercial
       use free of charge. If you'd like to write a program more like a
       flowchart than an algorithmic procedure -- e.g., if recursion & parsing
       are your cup of tea -- you might try Haskell?

Why: Like C, but don't like malloc? Python's for you! :) See also: python.org
Who: Information technologists, discrete mathematicians, C programmers, etc.

Summary: Easier than C, with all the phenomenal cosmic power, in a box.
         Python runs markedly slower than C, but its C-implemented library
         routines often close the gap.



HASKELL

Actually I know next to nothing about this language. Ask Glasgow University?
Also you might try the book "Learn You A Haskell For Great Good!"



JAVA

Java is not Javascript. That's about all I know. Read more @ Sun Microsystems.



LISP, SCHEME, AND SIMILAR

What: Functional programming languages, suitable for lambda calculus etc.
      I believe these are interpreted, but can probably also be compiled.
How:  Unlike object-oriented languages, in which the programmer arranges data in
      groups (structures, objects) that suit the human mind's spatial perception
      and imagination, functional programming uses a calculus-like notation that
      makes every program a function of other programs.

Where: Because all programs are functions of other programs, the transformative
       idea of the computer program (parser) as a finite state automaton that
       manipulates other such automatons finds an easy home here.
When:  The program itself is also data and object-orientation is (to some minds)
       easier to understand from a functional perspective. So, lisp is a good
       language for programs that have to change their code a lot.

Why: Lambda calculus, I think, and recursion.
Who: The Knights of the Lambda Calculus.

Summary: I haven't much used functional languages. This entry is guesswork.



ASSEMBLER

What: Several varieties of mnemonic languages used to write machine code.
How:  Mnemonics are "assembled" to machine language, or some form of executable
      code, based on the assembler's translation table.

Where: Assemblers are like compilers; but, instead of parsers, they're scanners.
When:  Therefore, they're easier to write. Also, see "MACHINE CODE" below.

Why: Because you don't want to write machine code yourself.
     After all, only a simplistic scanner is required to do this for you.
Who: Manly men, womanly women, and all sorts in between.

Summary: Just write this part after you write your compiler. (It's easier.)
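The translation-table idea fits in a few lines of Python; this toy assembler's
mnemonics and opcodes are invented for illustration (they belong to no real
instruction set):

```python
# A toy assembler: one invented opcode byte per mnemonic.
TABLE = {"NOP": 0x00, "LOAD": 0x01, "ADD": 0x02, "HALT": 0xFF}

def assemble(source):
    """Scan mnemonics line by line and emit one byte each."""
    code = bytearray()
    for line in source.splitlines():
        mnemonic = line.split("#")[0].strip()   # strip comments & whitespace
        if mnemonic:
            code.append(TABLE[mnemonic.upper()])
    return bytes(code)

program = "LOAD\nADD   # add something\nHALT"
print(assemble(program).hex())   # 0102ff
```

A real assembler also handles operands, labels, and relocation, but the core
remains this scan-and-look-up loop rather than a full parser.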



MACHINE CODE

What: All computermachines speak electronically via binary ones and zeroes.
How:  "Machine language" is the coded sequences of binary numbers that are used
      to instruct a computer's computational circuitry (its ALU, or CPU).

Where: Primitive machinery, new interfaces, low-level hacking & intrusion.
When:  Whenever flowcharts and buzzwords just aren't enough to get you hired.

Why: Compilers do most of this work for us these days. Sometimes needed, though.
Who: "Real Programmers," electrical engineers, chipset architects.

Summary: Don't bother unless you're an electrician. (It's tedious / dangerous.)



FLASH ACTIONSCRIPT

What: A stack-heavy programming language similar to assembly language.
      FlAS is an interpreted language, and it's pretty slow.
How:  FlAS is embedded in a Shockwave (Adobe) Flash file to animate parts and
      provide for user interaction.

Where: Because Javascript is no good for complicated rendering tasks, Flash AS
       is often employed to write Web games to be played in browsers using the
       Macromedia Shockwave -- ahem -- Adobe Flash plugin.
When:  If you need a Web game in a browser, or if you must format your gallery
       in such manner as to make it less scrutable to text-mode scraping 'bots.
       Scrapers can still take your gallery to pieces even if you use encrypted
       Javascript or amusing cryptographic puzzles in Flash, but I'll bet SWF is
       just about the end of the line for the casually-scraping archivist.

Why: Probably just because Javascript is no good for Web games.
     Flash's "motion tweens" are very amusing, too, and I keep wondering whether
     Adobe will implement Disney-like squash-and-stretch. Technically, so could
     any programmer, and perhaps you too might like to take a crack at Mickey?
Who: Game programmers, paranoid gallery owners, art students.

Summary: I've used this language less than Lisp, but it's popular.



DOCBOOK

What: A formatting language used for writing technical manuals and textbooks.
How:  DocBook is converted via DB -> TeX -> DVI pipeline, then rendered as PDF.

Where: Dissertations (M.S., Ph.D., ...), portfolios, scientific documents.
When:  Particularly opportune for automatic compilation of reports, such as in
       my report.sh, where formatting several thousand documents as individual
       PDFs and then merging them all would be a waste of time. Of course,
       markup languages are always apropos of computer-generated output.

Why: Professional-quality formatting to PDF, in a markup language, without any
     such editor as Adobe's Acrobat and others. DocBook also has a tag for
     formatting simplistic chemical equations, although many other symbols
     aren't rendered by dblatex -- so, write in pure LaTeX instead!
Who: Scientists, yuppies, and anyone who can't afford Adobe's software... :)

Summary: LaTeX for newbies, and there's nothing wrong with newbies: they learn.
         As DocBook parsers and rendering technology improve, it may even grow
         until it supplants LaTeX entirely. (DocBook _is_ a favored contender.)



LATEX

What: A programming language that is also a formatting (markup) language.
How:  LaTeX is rendered by a TeX -> DVI -> PDF pipeline involving several tools
      in Linux (and, probably, because Linux is Posix-compliant, also Unix).

Where: Unique among programming languages, LaTeX is used to format rich text.
When:  See DocBook, above, and also where extensive fine formatting is needed.

Why: Conventional tools, such as MS Office, typically lack somewhat in rich text
     formatting facilities -- equations with strange symbols and subscripts are
     particularly difficult to write. LaTeX solves these problems, and also has
     fine-grained control over where and how items appear on the printed page.
     It's only a step away from PostScript (a printer language), in that regard,
     and writing in LaTeX is much easier than writing in PostScript.
Who: See DocBook, above, and also especially mathematicians.

Summary: DocBook for oldbies, and naught is wrong with oldbies: they're elite.



POSTSCRIPT

This is a control language for inkjets & laser printers used to write on paper.
It is also a programming language (has loops), I believe, and can be used to
exploit printers in horrifying ways (BTW, kids, don't try that at home).
Because Adobe's Portable Document Format (PDF) inherits much from PostScript,
there've been computer viruses transmitted via PDF files. Technically, this
caveat cannot be avoided, which is why your boss advises you to open only
trustworthy PDFs.



BASH, BATCH, MAKE, SHELL, & SIMILAR

What: Such languages are used to automate repetitive invocation of commands as
      may be encountered during the operation of computer systems like Unix.
How:  Shell scripts simply automate a shell: something like Bash or MLPTK, or
      Microsoft's "Command Prompt" (which used to be MS-DOS).

Where: Although (except for MS-DOS, now outdated) a shell isn't an underlying
       part of the operating system, it can invoke any command compatible with
       its standard input & output.
When:  Shells are therefore suitable to batch processing of data, such as report
       generation, transcoding, and file system rearrangement.

Why: For example, my report.sh: compiling my book and massaging all the parts
     into DocBook markup for dblatex would take too long if I did so by hand.
Who: Anyone who uses a computermachine to compute, archive, and collate data.

Summary: Great for computing. Probably anathema to intellectual slavery laws.



PERL, SED, AWK, GREP, & SIMILAR

What: Although Perl is a fully fledged programming language, Sed & Awk are both
      more useful for text processing than anything else.
How:  Using regular expressions (patterns that describe regular languages over
      an alphabet), the Stream EDitor and Mawk administer Swedish massage to
      the output of unit tests and other experimental data. Additionally, to
      text-based protocols such as HTTP, sed is the PaperMate of the
      information technology world.

Where: Tabulating the results of experiments, directory listings, reams of data,
       rewriting the Web as you browse it (see my HTTPyre) and such tasks.
When:  As lifetime Unix gurus will tell you, "every [EXPLETIVE] day."

Why: Any text that makes sense in any legible way can be quickly massaged and
     reformatted by the stream editor. Perl's a bit more powerful, but I hate it
     for irrelevant reasons. I do mean _any_ text, btw: even computer programs.
Who: Virus scanners, information technologists, Web router programmers, cads.

Summary: Sed is to IT as the slide rule is to mechanical engineers.
         The combination of sed, awk, and grep suffice to effect many simplistic
         Web robots (but mind you ROBOTS.TXT); if you're careful to format your
         computer programs' output as legible text, you'll find them handy too.
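When sed isn't at hand, Python's re module performs the same stream massage;
here's a sketch of a grep-then-sed pipeline over a hypothetical access log:

```python
import re

# A grep-and-sed pipeline in miniature: keep lines matching a pattern,
# then rewrite them, as one would with `grep ... | sed 's/.../.../'`.
log = """GET /index.html 200
GET /missing.png 404
POST /form.cgi 200"""

hits = [line for line in log.splitlines() if re.search(r" 404$", line)]
rewritten = [re.sub(r"^GET ", "fetch ", line) for line in hits]
print(rewritten)   # ['fetch /missing.png 404']
```

The same two-step -- filter by one pattern, substitute by another -- covers a
surprising share of everyday text wrangling.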



NSIS

What: The Nullsoft Scriptable Install System is a mini-language for installers.
How:  It's like a shell script that unpacks a ZIP file, moves the contents to
      specified directories, and modifies the Windows registry accordingly.

Where: Similarly to the InstallShield Wizard, NSIS effects "installation" of the
       kind that Windows users expect. It's much like dpkg / apt-get on Linux
       distributions such as Debian and Ubuntu.
When:  NSIS is free of charge; InstallShield, by contrast, is commercial. :)

Why: Windows users don't want to unpack an archive and move stuff themselves.
Who: Anyone writing a program for Windows. See also: InstallShield.

Summary: One of several ways to write self-contained installers.

She Sells C Shells By The Seashore.

(This post, as are all my scientific works, is free of charge.)

I've written several shells in various languages, with varying success.

My most successful has been MLPTK: the blockbuster sensation that set the world
afire due to its author's controversial personality. (Or, rather, I ought to say
that it would have if the United Fascist States of America hadn't [EXPLETIVE] a
big pile of their rhetoric and censorship all over it.) I'll run you through its
schematic in a cursory manner with some discourse on my thoughts re: design.




Since I was about fifteen I dreamt of writing a parser. Specifically, the kind I
wanted to write was a shell: I intended to write a computer operating system, or
a text-mode adventure game similar to Zork or Skullkeep. I never did get 'round
to those goals, but I did write a shell (MLPTK) around age twenty five.

The first task was, of course, to effect a console with which I could debug the
shell as I wrote. This was easily accomplished using HTML4: my preferred tongue
for writing simplistic graphical user interfaces, and therefore the reason I so
often write computer programs using the Javascript language. I began with a text
field for command input and a rectangular textarea of dimensions 80 columns by
25 rows: gnome-terminal's canonical console width.

Then I needed to effect the console machinery in mlptk.js. This file contained a
function to bind console commands (other functions) to a global variable "tk."
The tk (Tool Kit) object contained a simplistic parser that split its commands
into legible chunks, processed them, then passed the processed arguments & input
to the bound function (all of which were subordinate to command line parsing).
This file is whence tk._bind() and tk._alias(): the Words of Power for beginner
MLPTK users to start adding their own modules. The file mlptk.js encapsulates a
command parser, history, console, log, input/output hooks, & related functions.

Because I had become sidetracked as I was writing, I then proceeded to write the
overload.js, which contains a few inefficient convenience functions overloading
Javascript's canonical primitive types (namely: Object, Array, & Function). The
library's most useful function was a JSON-like enumeration of object membership,
with a simplistic recursion sentinel (thanks, "==="!) for greater convenience.

Later on, I effected some changes to commands and the standard input / output
metaphor that permitted MLPTK users to use Javascript's objects within their
command lines and stdin/stdout. This could potentially be an improvement over
traditional Unix-like "bag of bytes" streams, although nothing is stopping
anyone from writing something like fscanf(stdin, "%s", &classInstance)...
except for the facts that: (a) C does not work that way (easily circumvented
via memcpy()), and (b) input should never be written directly to memory without
parsing it very carefully beforehand (because of clever little Johnny Drop
Tables).

For the finishing touches -- tee hee, I sound very much like the hosts of the TV
series "This Old House!" :) -- I added a rudimentary file system (cut buffer), &
some clickable buttons for command execution. Withal, MLPTK had become a bare-
bones shell with a functional console, and it was suitable to the task of basic
software design in Javascript. Not exactly Unix, but not too shabby!




MLPTK's parser is naive, but translates command lines in a way that allows argv
(the argument vector: an array containing the program's command line arguments)
to contain Javascript's primitive data types. This is actually an amusing trick,
because data need not be redundantly parsed by each pipelined module. Printing
to the console is effected by the module's return statement, but can also be
accomplished by invoking parent.dbug(), parent.println(), or parent.echo().
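MLPTK's actual parser isn't reproduced here, but the trick it plays -- coercing
command-line tokens into primitive types before they land in argv -- can be
sketched like so (the coercion rules below are hypothetical, not MLPTK's own):

```python
def coerce(token):
    """Guess a primitive type for a token: bool, int, float, else string."""
    if token in ("true", "false"):
        return token == "true"
    for kind in (int, float):
        try:
            return kind(token)
        except ValueError:
            pass
    return token

def parse_argv(line):
    """Split a command line naively and coerce each argument."""
    return [coerce(tok) for tok in line.split()]

print(parse_argv("add 2 3.5 true banana"))
# ['add', 2, 3.5, True, 'banana']
```

Once argv carries real numbers and booleans, pipelined modules needn't each
re-parse the same text, which is the economy the paragraph above describes.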

The module's standard input is accessed, within the module's anonymous function,
via this.stdin, which is an Array() object whose each element corresponds to the
output of the command at corresponding position in the command grouping that was
piped into this module. Segregation of grouped commands' output was also my idea
-- Bourne Again Shell on Linux certainly doesn't behave so -- & could be utile.

User input can be acquired mid-module with window.prompt(): an ugly hack, but
the only way MLPTK modules can properly acquire such input at runtime until I've
completed Quadrare Lexema and MLPTK can be implemented as an operating system.
Until then, input is more efficaciously processed via argv & stdin.

Quadrare Lexema is a Bison-like Look-Ahead Left-to-Right (LALR) parser generator
I wrote in Javascript. I intend to use it to supplant MLPTK's hand-hacked
parsing mechanism and implement a scheduled operating system in Javascript. This
would be substantially similar to what ChromeOS seems to have arrived upon. Of
course, such an operating system would be slow, so don't hold your breath.

Incidentally: Polynom, one of the modules, is a complicated parser in itself.
It can shorten the time needed to reduce polynomials to like terms. However, I
didn't test it much (because it's an informal parser), so don't use it to do
your homework for you. Same with matloc.




In the years since I was twelve I've thought often of operating system design &
specifications thereof. I've arrived at the conclusion that computer programs
are most properly represented as tokens in a formal parser: they execute in time
to this parser's schedule, and are themselves interpreted by another parser. Of
course, this approach to the procedure of computer operation is inefficient, but
offers the several advantages of hypervisory oversight (which grants the OS a
certain degree of control over hacking / exploitation of bugs & an opportune HAL
or Hardware Abstraction Layer that may provide an easier way to load drivers) &
a more easily altered operating system (because, in this hypothetical parser, it
doesn't have a thousand tangled spaghetti pieces sticking keyboard hooks into
each other all day long).

Of course, because all computer programs and even the central processing unit
itself can be considered finite state automata (an insight we owe largely to
Alan Turing, who also helped crack the German "Enigma" machine), the whole
computer can be seen as a parser. Likewise can any parser be seen as a miniature
computermachine of a sort -- therefore a shell is a subset of any graphical OS,
as Microsoft demonstrated with Windows '95 & '98 (which were built atop the
Microsoft Disk Operating System).

But what kind of shell underpins the operating system? It could be interactive
or not, or some shell-like infrastructure accessible only by way of an API.
Regardless, something resembling a programming language comprises the basis of
any computer operating system, because operating systems are complex enough to
require parsing a variety of input in a parser-like way. Due to the relative
ease of writing in high-level interpreted languages, the thought of writing a
whole operating system as an interpreted script is tempting. Viz: Stanford's
SAIL, MIT's ITS, and the Lisp machines on which JEDGAR found his legendary home.

I suppose any OS can be built as a kind of hypervisory shell, in which all code
experiences its runtime within a virtual machine (this particular kind of VM is
called a "sandbox;" for instance, Firefox's Javascript sandbox). However, like
all sandboxes, it is not impregnable, and its boundaries are porous. This means
computer viruses will always be, but it also means the OS can play fun tricks
with scheduling (such as system requisites rescheduling themselves, or weighted
time slices) and interfaces between programs (say, pipes?) can be hypervised.

The comparisons I draw between mathematically-oriented software design (parser
theory, formal grammars of the sort one writes in extended Backus-Naur format, &
object oriented -- especially origin-symbol oriented -- systems programming) and
Microsoft's infamous XBOX Hypervisor are intentional. Because carefully written
parsers can sandbox the instructions they interpret, and because artificially
generated automatons can be verified by way of mathematics, OS design can be
(I suppose) implemented as a modular series of sandboxed sandboxes. Each sandbox
could change slightly over time (by modifying the EBNF, or whatever, notation of
its grammar) so that computer viruses are more difficult to write. Additionally,
each sandbox could contain a set of heuristic anti-virus instructions and fire-
walls (black- or white-lists) to permit the human system operator to choose when
certain instructions (hard disk access, network, etc) are allowed to execute.

Needless to say, the benefits of a fully interpreted & hypervised computer OS
are potentially vast and of great benefit to humanity. I'm hopeful for future
developments in this field, but can't spare the time for research. Because this
and all my briefs & lectures are in the public domain, I encourage scientists to
take my "ball" and run with it. But, as in the science of basketball -- that is,
the mathematical study of geometry -- don't forget to dribble.




Because that discourse was in fact quite vapid (until I've written it myself),
here's today's most amusing quote from fortune(6):
"
 I
 am
 not
 very
 happy
 acting
 pleased
 whenever
 prominent
 scientists
 overmagnify
 intellectual
 enlightenment
"
:)

こんにちわ い-よ, わんこ いいんちよ…

Title translates loosely, if at all, as "I ain't say hi to the First Dog."




Discrete mathematics treats of sets (groups of items, of known size) and logic.
The operations on sets are union (OR), intersection (AND), difference (-), and
symmetric difference (XOR). I often call AND conjunction and XOR exclusive
disjunction.

As for logic, the usual Boolean operations with which programmers are familiar:
 AND: Only the items ("1" bits) in both sets (bytes). 01001 & 01100 == 01000
 OR: All the items ("1" bits) in either set (byte). 01001 | 01100 == 01101
 XOR: Everything in one set or the other, _not_ both. 01001 ^ 01100 == 00101
 NOT: Logical inversion. (Not conversion.) Same as ¬. ~ 01001 == 10110
(NOT is a unary operator. The binary operators are commutative. The exclusive OR
 (XOR) operator has this additional property: given, axiomatically, A ^ B = C,
 then C ^ B = A and C ^ A = B. That makes XOR an involution (its own inverse),
 not a transitive relation. Anyhow: symmetric difference self-reverses.
 See also memfrob() @ string.h, & especially Python manual section 6.7 (sets).)
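These bitwise identities are easy to check in Python, whose &, |, ^, and ~
operators match the table above (a quick sketch):

```python
a, b = 0b01001, 0b01100

print(bin(a & b))         # 0b1000  -- intersection (AND)
print(bin(a | b))         # 0b1101  -- union (OR)
print(bin(a ^ b))         # 0b101   -- symmetric difference (XOR)
print(bin(~a & 0b11111))  # 0b10110 -- NOT, masked to five bits

# XOR is its own inverse: if A ^ B == C, then C ^ B == A and C ^ A == B.
c = a ^ b
assert c ^ b == a and c ^ a == b
```

Note the mask on ~a: Python integers are arbitrary-precision, so the complement
must be confined to the width you care about.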

I may have misrepresented some of the details: consult a discrete math text too.
Salient trivia: don't confuse the bitwise intersection (&) & union (|) with the
 boolean operators && and ||. Although equivalent for the values 1 and 0, they
 behave differently for all others. For example: 2 & 5 == 0; although both
 (bool)2 and (bool)5 typecast to true, and therefore
 (bool)2 & (bool)5 == (bool)2 && (bool)5 == true, bitwise & is valid on
 integers and the compiler will produce the integer bitwise & operation,
 which results in false.
Et trivia: xor can be used (cautiously) to logically invert comparisons (^ 1).
(See also: GNU Bison, Extended Backus-Naur Format, formal grammar (maths).)
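The (^ 1) trick is easy to demonstrate in Python, where a comparison yields a
0/1-valued bool (cautious use only, as noted above):

```python
x, y = 3, 7

inverted = (x > y) ^ 1   # logically inverts the comparison result
print(inverted)          # 1, because (x > y) is False (0), and 0 ^ 1 == 1

# It agrees with the ordinary `not` for comparison results:
assert inverted == (not (x > y))
```

The caution is warranted because ^ 1 only inverts values known to be exactly
0 or 1; on any other integer it merely flips the lowest bit.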

Combinatorics (the science of counting) is discrete mathematics: in which the
delightful factorial (a unary operator, !, whose operand is to the left, which
distinguishes it from unary logical NOT) makes many an appearance. Combinatorial
explosion is a task for any mathematician who wishes to optimize (by trading
memory for repeat computation) such procedures as decision trees, which are NP.
(See also: algorithmic complexity, non-deterministic polynomial time).

Combinations of a set, because order makes no difference, are fewer than its
permutations. A combiner can be written with sliding windows, as in Python's
"itertools" library. (I've also implemented a heuristic version in Javascript.)
Combinations represent all the ways you'd choose some number of items from a set
such that no two combinations contain exactly the same items.
There are S! / (M! * (S - M)!) M-length combinations chosen from a S-length set.
(X! is "X factorial." Don't confuse it with !X, which is "not X," i.e. "¬X." )
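A quick sanity check of that combination formula against Python's itertools
(with S = 5 and M = 2):

```python
from itertools import combinations
from math import factorial

S, M = 5, 2
combos = list(combinations(range(S), M))

# S! / (M! * (S - M)!) == 5! / (2! * 3!) == 10
formula = factorial(S) // (factorial(M) * factorial(S - M))
print(len(combos), formula)   # 10 10
```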

Contrariwise: permutations, in which the same items chosen in a different order
constitute a different permutation than any other, represent all the ways you'd
choose items from a set such that the time when you chose each item matters.
Permutation is most simply and directly implemented by way of recursion.
(Combinations can also be written this way, by narrowing the slices, as I have.)
There are S! / (S - M)! permutations of length M chosen from a set of length S.
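And the permutation formula checks out the same way (S = 5, M = 2 gives
5! / 3! == 20):

```python
from itertools import permutations
from math import factorial

S, M = 5, 2
perms = list(permutations(range(S), M))

# S! / (S - M)! == 5! / 3! == 20
formula = factorial(S) // factorial(S - M)
print(len(perms), formula)   # 20 20
```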

The number of "same items, different order" permutations can be arrived upon via
subtraction of the combination formula from the permutation formula:

   S!             S!               M! * S!             S!            (M! - 1) * S!
--------  -  -------------  ==  -------------  -  -------------  ==  -------------
(S - M)!     M! * (S - M)!      M! * (S - M)!     M! * (S - M)!      M! * (S - M)!




Because the "N-ary Counter" appears frequently in computer programs that behave
in a combinatorial way, and because I implemented them too in Mutate-o-Matic:
the items in any set, when numbered in monotonic integral progression from 0,
can represent the numerals of numbers in base N (where N == S, to reuse the "set
length" name from above). Arithmetic with these figures rarely makes sense,
although hash tables and cryptographic checksums sometimes behave similarly.
There are S ** M (S to the power M; ** is exponentiation in Python) N-ary
"numbers" of length M that are possible by counting in base S.

Combinations and permutations are not defined when the length M of the mutation
exceeds the length of the set. N-ary counters are defined for any M, however.




Today I'll describe a mutator I wrote in C. It permutes, combines, and counts.
It is somewhat more complex than the recursive one-liner you'd sensibly write,
but not much different once you see past all the pointers.

Such a one-liner, before I get carried away, would look something like this...
 def anagrams (w): # anagram generator (the one-liner, grown slightly)
     if len(w) == 1: # <-- Recursion always requires a sentinel condition,
         yield list(w) # unless you are extremely clever indeed.
         return
     for i in range(len(w)): # choose w[i], then recurse on the remainder;
         for rest in anagrams(w[0 : i] + w[i + 1 :]): # (a generator can't be
             yield [ w[i] ] + rest # added to a list, hence the inner loop)
... in Python, though my own rendition in Python is somewhat more acrobatic (it
foolishly employs generator expressions); as for combinations, see PyDoc 10.7.1
(itertools.combinations). Python is fast here because "itertools" is written in
C, & will do the discrete math acrobatics in record time.
Itertools also implements many of the counting formulas I restated above -- see
its page in the Python manual, section 10.7, for the details.

Not content to waste nanoseconds, however, I resolved to write a significantly
less intelligible implementation using the C programming language. I ought to
point out that the savings in algorithmic complexity are vanishingly negligible:
it actually probably uses more time than the Python implementation, owing to its
prodigal invocation of malloc() (among my other C vices, such as **indirection).

Anyway, here's the Mutate-o-Matic in Stone Tablet Format courtesy of my C-hisel:
 
 #include <stdio.h>  // printf
 #include <stdlib.h> // calloc, free, exit
 
 typedef enum { PERM, COMB, NARY } mmode_t;
 // ^- Respectively, "permutation," "combination," or "N-ary counter" mode.
 void mutate (
     char *set[],    // set of items to permute. An array of C-strings.
     size_t slen,    // length of the set.
     size_t plen,    // how many items remain to be chosen.
     mmode_t mode,   // kind of mutation I'm executing.
     char delim[],   // delimiter, per permutation item.
     char *prefix[], // items chosen already. (C has no default arguments:
     size_t pflen    //  pass NULL and 0 here on the outermost call.)
 ) {
     size_t subset_length = (mode == NARY) ? slen : slen - 1;
     char **next_set = (char**) calloc(subset_length, sizeof(char*));
     char **next_prefix = (char**) calloc(pflen + 1, sizeof(char*));
     if (next_set == NULL || next_prefix == NULL) { exit(-1); }
     for (size_t n = 0; n < pflen; next_prefix[n] = prefix[n], n++) ;
 
     for (int i = slen - 1, j, k; i >= ((mode == COMB) ? (int)plen - 1 : 0); i--) {
         next_prefix[pflen] = set[i]; // the newly chosen item goes last
 
         if (plen > 1) { // Either descend the recursion ...
             for (k = 0, j = 0; j < i; next_set[k++] = set[j++]) ;
             switch (mode) {// ^- the above are elements common to every mode
                 case NARY: next_set[k++] = set[j]; // XXX NB: fallthrough
                 case PERM: for (j++; j < (int)slen; next_set[k++] = set[j++]); break;
                 case COMB: break; // combinations keep only items below set[i]
             } // (^- conditionally copy elements not common to all three modes)
 
             mutate(next_set, k, plen - 1, mode, delim, next_prefix, pflen + 1);
         } else { // ... or print the mutation (selected set items).
             for (j = 0; j < (int)pflen; printf("%s%s", next_prefix[j++], delim));
             printf("%s\n", next_prefix[pflen]);
         } // *Sound of cash register bell chiming.*
     } // end for (Select items and mutate subsets)
     free(next_set); free(next_prefix); // Paying tribute to Malloc, plz wait
 }

(To download the C file, plus the rest of my works, retrieve my portfolio by way
 of the links to MediaFire supplied in the top post at my WordPress blog.)

It seems so simple: only an idiot couldn't see how to compute mutations in this
manner, right? However, although I first encountered the problem when I was but
a wee lass, & although I finally discovered the formula for permutations and
N-ary counters after the better part of two !@#$ decades, I yet fail to grok
_why_ the "bounded sliding windows" illustrated by itertools successfully choose
only the combinations. (The best I can do is "because they're in sort order?")

Anyway, the procedure is woefully uncomplicated:
append on the prefix an unchosen item from the set, then either...
 [remaining choices > 0] recurse, having culled chosen item (unless in N-ary
 mode); and, if in Combination mode, having culled
 items subsequent to the sliding window (set[i]),
... or ...
 [choices == 0] print chosen items, which are a mutation, & continue

I'm sure you can see that this whole procedure would be a lot faster if I wasn't
spending precious clock ticks spelunking the dreaded caves of malloc(). A global
array of arrays with entries for each prefix length would really speed things up
-- as would splitting the mutator into three different functions, to eliminate
the branching instructions ("if-else," "switch," and "(predicate) ? a : b").

However, I recommend you don't bother trying to write it faster in C. What with
the huge cache memories CPUs have these days, it's probably plenty zippy. Even
though it _can_ be faster in C, it could be even faster in Assembler, so learn
an instruction set and assemble a core image if that's what you really want.

Until then, for all its inefficiency, Mutate-o-Matic renders about 100,000
mutations per second. That's faster than those sci-fi lockpicks in the movies.
Dash Maid, by Alason.

Booty Is In The Eye Of The Beholder.

(NB: at time of writing, Vide is still in its Alpha testing phase.
     If you're only here for Vide, try again at 1q-2q 2018.)

Welcome yet again, intrepid reader, to the idyllic pastures of Moon Yu.
(^- Allusion to _Kung Pow: Enter the Fist._)

I've been having trouble coding anything extraordinary; so, for this edition,
I've written briefings covering the broad strokes of three quotidian programs.
Thankfully, modern languages like Bash and Python make software design easy, so
some of this (what I might have called dross, in better days) may interest you.

None of these programs are anything new, I'm sorry to say. Although you may not
have seen or heard of them before, I assure you they're quite trite. Nonetheless
this one in particular approaches a popular problem in an excellent way. I think
the memetic teenage and otaku demographics will be most enthusiastic!
(Unless they have to make a new font for it. What a dull task that would be.)




It's called "Videlicet." That's from two Latin words, "videre" & "licet," which
mean "to see" and "it is lawful; permitted; easy," thus "it can be seen." This
word "videlicet," abbreviated "viz," is used in the law: indicating the party is
not required to prove what he alleges (possibly because of a preceding verdict).
Here I mean simply "it's easy to see": a reference to its informational overlay.

My goal in writing Videlicet was to craft a programmatic means to attribution.
Because I often download many drawings and macros, and later forget where they
came from, the time was way past ripe to prepare somesuch "labeling machine." If
you're like me, you have Gigabytes of poorly-attributed drawings found at social
networks -- every one, inexplicably, without even the artist's signature! -- and
I'll bet you wish you had recorded as well as possible the provenance of each.

Perhaps with some program that automatically records this information for you?

Videlicet is such a script: a combination harvester for static-markup galleries.
I grant it to the Public Domain too, insofar as the source code _is_ mine.
Obviously, Python's libraries (and Python itself) belong to others: so, if you
intend commerce, consult its authors. (I think the language has copyrights.)




As usual, to download it, retrieve my portfolio from the links to MediaFire that
are supplied in the top post at my WordPress blog: https://ipduck.wordpress.com.




Two salient paradigms to be seen in Videlicet are typeface-rendering & grepping.
These are the major parts required to write stuff & to figure out what to write.
Shell scriveners worldwide have surely accomplished similar with their own tools
-- at a minimum, invoking wget to mirror the Web gallery containing the drawing
will do the same -- or simply kept such records by hand in bibliographic text
files; but here I've described a program to inscribe that information _on_ the
pictures.




First of all: how to make writing a thing? The Python Imaging Library (PIL) has
facilities fain to furnish fonts, but I felt a fool for failing to fathom their
fructuous function. I resolved to write my own: a simplistic cut-and-paste font
engine that gets the job done economically but without panache.

The desired typeface is encoded, character by character, as images of any type.
These are then loaded into memory and overlaid upon a panel of solid color. The
resulting "text box" is its own image, which I then attach to the retrieved one.
A few amusing effects are made possible by way of PIL's transformations.

The portions of the script that encode the font and the deceptively simple state
automaton that acts as a "typewriter" actually required about a thousand lines
and weeks before I was entirely satisfied. I give them short shrift here because
the vast majority of it is the exceedingly banal "move the cursor, check that it
hasn't traveled out of bounds, then paste a glyph into the image" procedure.
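The skeleton of that procedure can be sketched without PIL at all, using nested
lists as monochrome bitmaps; the glyphs, panel size, and cursor logic below are
hypothetical stand-ins of mine, not Videlicet's actual code:

```python
# Hypothetical 3x3 glyphs; a real font would be per-character images.
GLYPHS = {
    "H": ["#.#", "###", "#.#"],
    "I": ["###", ".#.", "###"],
}

def render(text, cols=16, rows=3):
    panel = [["."] * cols for _ in range(rows)]  # the solid-color panel
    x = 0                                        # the typewriter's cursor
    for ch in text:
        glyph = GLYPHS[ch]
        w = len(glyph[0])
        if x + w > cols:      # bounds check: stop before running off the edge
            break
        for r, row in enumerate(glyph):          # paste the glyph at the cursor
            for c, px in enumerate(row):
                panel[r][x + c] = px
        x += w + 1                               # advance past a 1-pixel gap
    return ["".join(row) for row in panel]

for line in render("HI"):
    print(line)
```

Swap the list-of-lists for PIL images and the paste loop for Image.paste() and
you have the gist of the cut-and-paste engine.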

More on that procedure can be found in CharSet(), Overlay(), and JuiceFusion().
Videlicet can write forward, backward, up and down, using any font you create.




Second: how do I know what to write? This is a task I usually accomplish using
grep (g/re/p: Global Regular Expression Print) and sed (the Stream Editor) in a
command line in my bourgeois gnome-terminal. I'd first feed the page's source,
probably via curl (a URL catenator), into a grep invocation that finds for me
the lines with the information I need, then into a sed invocation that massages
them into whatever format I want them to look like in the finished result.

(Such procedure is so commonplace that "grep" in jest can be a pun on "grope.")

Here, a similar procedure is executed against a downloaded HTML file in memory.
The files are retrieved by way of Python's hypertext transport facilities. If I
read Py's manual correctly: cookies, authentication, and proxies are supported.
Regular expressions are then employed to sieve the text and harvest any metadata
relevant to the user's records. The data, proper, is then harvested as well.
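In miniature, that sieve step is a couple of regular expressions run over the
page source; the HTML snippet and field names below are invented for
illustration and are not Videlicet's actual patterns:

```python
import re

# A stand-in for a fetched gallery page.
html = """
<div class="post"><span class="artist">J. Doe</span>
<img src="https://example.com/art/12345_full.png" alt="Sketch"></div>
"""

# grep-like: find the lines of interest; sed-like: massage them into records.
artist = re.search(r'class="artist">([^<]+)<', html).group(1)
images = re.findall(r'<img src="([^"]+)"', html)
print(artist)   # J. Doe
print(images)   # ['https://example.com/art/12345_full.png']
```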

Actually sieving through the text necessitated contrivance. You see: text (such
as documents written in Hypertext Markup Language) changes a lot these days,
especially at corporate boilerplate factories like social networks. Interfaces
can't be relied upon from a robotic perspective, because programmers must always
be prepared to update the robots, which defeats the bot's purpose: to save you
valuable time and thought. Every time I visit MyTwitFeedFaceTube+(TM), I either
write a new "curl | grep | sed | mawk | while read ...; do ...; done" sieve or
just throw my hands up and use a foolish human GUI-capable Web browser. And, of
course, I still can't figure out Windows' interprocess communication metaphor,
so I'm dead in the water there too.

"Ballocks," I grumbled in this regard of late Spring, and grudgingly resolved to
implement a portable framework for execution of the usual sieving tasks.
At some overextended length, Videlicet's target acquisition procedure was born.
By the configuration files' powers combined, and with reference to a few short
(thanks to Python's standard libraries) convenience functions that effect the
vast majority of everything I ever do with GNU/Linux, it is Captain Planet.

Uh, I mean: it is able to effect the same reformatting operations.

As for retrieval of the text to be sieved and reformatted:

Videlicet simply employs the Python interpreter's capability of arbitrary code
execution via eval(), exec() and friends. In this fashion it constructs classes
that encode a recursion chain. The chain progresses from the initial phase of
target acquisition, through each step of winnowing the HTML (or Javascript, or
whatever -- although Videlicet doesn't implement a formal parser such as Bison
or PLY, and is therefore devoid of any capacity to disassemble and recompile
active code, it can still effect a certain degree of penetration if the URL can
be guessed) to locate embedded Uniform Resource Locators within tags &
stylesheets, and finally to the conclusory retrieval of the desired image and
overlay of the historical record.




Once resources are harvested the discerning archivist has at his or her disposal
a wealth of pertinent data gathered automatically during Videlicet's operation.

Not only is the rendered image, with added informational overlay, saved to disk;
a copy of the original is also saved, as is a text file containing a record of
the transaction's HTTP headers (and, optionally, additional metadata). Actually,
the latter two records are saved for all fetched resources, just in case you
needed them. Naturally, this additional time stamping & record keeping clutters
up the burgh, so Videlicet is a lot less convenient to use than wget or curl.

Speaking of inconvenience, I ought to mention another one of its major caveats:
due to the manner of data retrieval, some of the target's original metadata
(including its times of last access, modification, status change, and birth,
which are remarkably important from a historical perspective) is irreparably
lost unless included with the HTTP headers. I could solve this problem by
reference to any Web browser's code to see how they preserve the information,
but after writing all the rest of it I was in a hurry to take some time off.

And, of course, the final indignity: because Vide requires the Python Imaging
Library, and I have no idea how to cross-compile this for Windows, you'll be out
of luck if you want to use it and can't find an installation with PIL available.
(_And_ I don't know where you can find one!) I am most sincerely regretful about
this fact, because Vide is one hell of a finite state automaton.




Given all that fuss and bother, why have I gone to the trouble of writing Vide?
Because I am sick and tired of downloading images from social networks only to
lose all the information recorded in the original post, even if there _is_ none.
A means to stamp images acquired from disparate sources was clearly necessary.




There are some other interesting bits in Videlicet. For example, it's scriptable
(employs arbitrary code execution via eval(), which is bad engineering practice
but saves me a little time), and can be employed to render text upon images you
already have. Regardless, all the technical complexity is as illustrated: the
remainder of Videlicet is a set of simple wrappers around grabbing and labeling.

Videlicet is my first Python 3 program. Unless the Python language undergoes any
further dramatic revision, this script will live on in posterity. (Really. It's
a pretty big deal.) Otherwise, you might want to get your kicks out of it now.
I don't plan to maintain it because, even if the language changes, I'm tired.

Pan Fried Programming

(Here's the update -- nothing much is new:
MLPTK: http://www.mediafire.com/file/m3u25i445lqkztb/mlptk-2016-12-16.zip
Source Highlight: http://www.mediafire.com/file/ygxb14ie94cwcuy/mlptk-src-hilite-2016-12-16.zip
Book: http://www.mediafire.com/file/vg439qruq3do90q/mlptk-book-2016-12-16.zip
)

Remedial (adj.): intended to rectify, amend, heal.
Panacea (n., myth): goddess of healing, daughter of Aesculapius.
Pansear (n.): Chili's Pokémon.

This remedial lecture will tersely cover a semester's curriculum,
similar to what you have learnt in your high school algebra class,
comprising the fundamentals of programming with synthetic languages
(those that are above machine code).

If you don't know what computer programming is, I would recommend that you study
some tutorials & encyclopedia articles. Much is available on the WWW (Worldwide
Web). The Web is a part of the Internet, and it is the Web you access from your
Web browser when you navigate to a Web page. You could also try one'a them there
"<name of programming language> For Dummies" textbooks: the "For Dummies" books
are excellent "Cliff's Notes"-style crash courses, and each aims to be similar
to a "101" course in the topic advertised.

To make a beginning with any programming language, all you must know is that a
computer computes: your instructions, issued in the program you write, tell the
machine how to progress from its input or initial state to a resultant output or
final state. These instructions look different in different languages -- some
languages require more of them than others -- but every computer program is an
algorithm, or "recipe for computation."

Computers and computer programs can be characterized as finite state automata.
They're like assembly-line robots who know where to weld each sheet of metal.
Also like assembly-line robots, they are "blind" in the sense that they'll kill
you with the soldering iron should you step between it and the sheet.
Computing machines do what they're told, even when it is particularly stupid:
that's why computer viruses, worms, and computer espionage exist.

In simplest terms, the computer's processing units receive some numbers and an
instruction that says what mathematical operation to execute, then operates:
like a calculator. High-level programming languages are more synthetic, like a
human language is, and comprise such ideas as objects (amalgamations of data) &
functions (modular sub-routines). Compilers or interpreters read these languages
and translate them into machine instructions, simplifying the lengthy series of
instructions necessary to make the calculator execute these difficult tasks.

In a high-level language, there are few technical concerns.
You can begin immediately with the abstract concepts.
Here are some:

VARIABLES
As in algebra, a variable is a name that represents a value.
As in solving a system of equations, values are typically assigned to some vars
and the value of the other variables is computed using the values given.
For example, in Javascript:
    var a = 2;
    var b = a + 2;
The variable <b> is now equal to 2 + 2. The other arithmetic operators behave likewise.
In Javascript and other very-high-level languages, variables aren't only scalars
and can point at any object. They're like placeholders for procedure.
Although "variable" implies a value stored in memory, and "identifier" only its
mnemonic, the words "variable" & "identifier" used loosely mean about the same.
    "Just don't try that with the Captain."
        -- Geordi LaForge, to Data, _Star Trek: the Next Generation._

POINTERS, REFERENCES
These are important ideas that are abstracted away in VHLLs. A pointer stores an
address in memory, for a later indirect read/write or similar operation. In HLLs
a pointer/reference accesses an object directly instead of copying its value.
You'll rarely have to make the distinction in Javascript; but, for example:
    var a = new Array(1, 2, 3); // a[0] == 1, a[1] == 2, a[2] == 3
    var b = a; // Incidentally, b === a, and that is why in the next line...
    b[0] = 555; // ... b[0] == 555, and now a[0] also == 555!
As opposed to:
    var c = new Array(3); // c is a new array of length 3
    c[0] = b[0]; c[1] = b[1]; c[2] = b[2]; // copy scalar values one-by-one
    c[0] = 0; // c[0] == 0, but b[0] remains == a[0], which remains == 555.
    var d = 2;
    var e = d;
    e = 4; // e == 4, but d is still == 2.
As you can see, operating on an object (such as via array subscript operation)
changes the object, even if the object is pointed by multiple variables.
Likewise, objects passed as the argument of a function are passed by reference:
they aren't simply copied, and operating on the argument within the function is
equivalent to changing the object, whose scope is above that of the function.
Some high-level languages, like C, permit you to explicitly specify what is a
pointer or reference, which eliminates some of this confusion but requires more
exacting attention to detail in your design specification.
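For comparison, Python behaves the same way: names bound to the same list alias
one object, while rebinding a name to a new scalar leaves the old name alone. A
small sketch of mine, mirroring the Javascript above:

```python
a = [1, 2, 3]
b = a          # b and a name the SAME list object...
b[0] = 555
print(a[0])    # 555: mutating through b changed a too

c = list(b)    # a shallow copy: a new list with the same scalar values
c[0] = 0
print(b[0])    # still 555; the copy is independent

d = 2
e = d
e = 4
print(d)       # still 2: rebinding e doesn't touch d
```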

STATE
The state of a program is the value of all its variables, the current location
within the instruction set, and the environment of the operating system (or the
interpreter). In Javascript, within Web browsers, the browser typically provides
access to some of its state via the Document Object Model.

CONDITIONAL EXECUTION
Heuristics, or "guesswork," could not exist if there were no way to execute some
different code depending on the state of the program. Furthermore there are some
mathematics you can't write as exactly one set of instructions that produces one
semantic value: for instance, a function defined only on an interval, or an even
root of a positive number. In this circumstance, you are writing branches:
    if (5 > 10) { /* of course, the code in this block never happens. */ }
    else if (2 < 0) { /* neither does this, b/c cond'n is also false. */ }
    else { /* but all of this code happens, because the others didn't. */ }
... One of the branches executes, and the others don't.
The part in parentheses is the "conditional statement:" it's evaluated as either
"true" or "false," like in Boolean logic. 

SCOPE
Identifiers are only valid within the block (curly brackets, or { ... }) where
they were declared. (Well, they're supposed to be; Javascript's "var" is really
scoped to the enclosing function.) Therefore, if you declare
a variable inside a function, you can't use it outside of the function or within
another function. Why would you want to, anyway? The next time you invoked the
function, the value of the variables you were using in there would change again.

LOOPS
Computers are great at repetition. Loops repeat a set of instructions: they are
typically written as a prefix, conditional, postfix, and body. For example:
    for (var T = 10; T > 0; T--) { alert("T minus " + T); }
... which counts down from ten to one with annoying alert popups.
While or do-while loops have only conditions & bodies.
A loop is an example of an "iterative algorithm." Each time the loop's body is
executed, it's called an "iteration." In computing fractal geometric patterns,
"iteration" means more like "recursion:" which, see below.

FUNCTIONS
A function is a modular segment of your program: a sequence of computation that
is repeated a few times, or can be reused as part of another computation.
Functions are "invoked," or called by name, with values supplied as arguments,
and return a value, similarly to how functions behave in algebra. When declaring
a function, you'd typically write the name of the function followed by its args
in parentheses and then the function body. For example, again in Javascript:
    function intp (N) { return (N % 1) == 0; } // integer predicate
... which returns true if N is probably an integer, or whole number:
    if (intp(5)) { alert("Yes. 5 is probably an integer."); }
    if (intp(5.55)) { alert("This box never appears..."); }
    else { alert("... but this one does, because 5.55 is a floater."); }
(Floating-point numbers are inaccurate, in Javascript as likewise elsewhere.)

RECURSION
A function that invokes itself is a recursive function. Any function invoking
another function, which subsequently causes the original function to be invoked
again, causes an indirect (mutual) recursion; the functions must be re-entrant.
It is essential to note that _any_ and _every_ recursive function you can write
for a computer to execute can be rewritten as an iterative algorithm. The proof
of this is complex: it follows from Alan Turing's model of finite state automata
and the read-execute model of arithmetic and logic units (CPUs), and basically
asserts that you'd never be able to execute recursion if you couldn't do it by
reading one instruction at a time. In other words, each time your function calls
itself again, it's simply storing its state in memory temporarily while the
machine executes a fresh copy: after the copy is done, the former state is re-
loaded, and execution proceeds from there. This is achieved with stacks: data
structures that grow in size as more is "pushed" onto them, and shrink when some
is "popped" off of the top.
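That equivalence is easy to demonstrate in miniature; here is a sketch (in
Python, my own example) of one computation done once recursively and once with
an explicit stack standing in for the machine's call stack:

```python
def fact_recursive(n):
    return 1 if n <= 1 else n * fact_recursive(n - 1)

def fact_iterative(n):
    # The explicit stack plays the role of the machine's call stack:
    # "push" each pending multiplicand, then "pop" them back off.
    stack = []
    while n > 1:
        stack.append(n)   # push: save state before "calling" again
        n -= 1
    result = 1
    while stack:
        result *= stack.pop()  # pop: resume the saved states in reverse
    return result

print(fact_recursive(10), fact_iterative(10))  # 3628800 3628800
```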

OBJECTS
An object is a collection of data comprising several related datums. That is, when
data are related to one another, they can be envisioned as a "shape" or "motion"
that is the sum of its parts. For example, a vector has direction and magnitude;
an individual has a first and last name; a parser has an input stream, a stack,
and a procedure. In Javascript, you'd write something like this:
    function Blah (newz) { if (newz) { this.z = newz; } return this; }
    Blah.prototype = new Object();
    Blah.prototype.z = 555;
    Blah.prototype.tell_me_z = function () { return this.z; }
    var a = new Blah(222), b = new Blah(); // a.z == 222; b.z == 555.
... syntax varies among languages. Essentially an object is a data structure
containing some members ("variables" attached to the object, like Blah::z above)
and, if the object is a class, some methods (functions, like ::tell_me_z).

Palling around.

Pall (n.): pawl.

I couldn't write last week, and my upgrade to QL has progressed no further.
For reference, I stalled before comparing the efficiency of nested Objects to
that of nested Arrays, which I must test before experimenting further with the
prototype compiler or even refining the design. I intend to do that this month.
In the meantime, here's a snapshot of MLPTK with new experiments included.

http://www.mediafire.com/download/566ln3t1bc5jujp/mlptk-p9k-08apr2016.zip

And a correction to my brief about the grammar ("Saddlebread"): actually, the
InchoateConjugation sequence does not cause differentiation, because the OP_CAT
prevents the original from reducing. Other parts may be inaccurate. I'll revise
the grammar brief and post a new one as soon as I have fixed the QL speed bug.

I took some time out from writing Quadrare Lexema to write some code I've been
meaning to write for a very long time: pal9000, the dissociated companion.
This software design is remarkably similar to the venerable "Eggdrop," whose C
source code is available for download at various locations on the Internets.
Obviously, my code is free and within the Public Domain (as open as open source
can be); you can find pal9000 bundled with today's edition of MLPTK, beneath the
/reference/ directory.

The chatbot is a hardy perennial computer program.
People sometimes say chatbots are artificial intelligence; although they aren't,
exactly, or at least this one isn't, because it doesn't know where it is or what
it's doing (actually it makes some assumptions about itself that are perfectly
wrong) and it doesn't apply the compiler-like technique of categorical learning
because I half-baked the project. Soon, though, I hope...

Nevertheless, mathematics allows us to simulate natural language.
Even a simplistic algorithm like Dissociated Press (see "Internet Jargon File,"
maintained somewhere on the World Wide Web, possibly at Thyrsus Enterprises by
Eric Steven Raymond) can produce humanoid phrases that are like real writing.
Where DisPress fails, naturally, is paragraphs and coherence: as you'll see when
you've researched, it loses track of what it was saying after a few words.

Of course, that can be alleviated with any number of clever tricks; such as:
	1. Use a compiler.
	2. Use a compiler.
	3. Use a compiler.
I haven't done that with p9k, yet, but you can if you want.

Of meaningful significance to chat robots is the Markov chain.
That is a mathematical model, used to describe some physical processes (such as
diffusion), describing a state machine in which the probability of the next
state occurring depends only on the present state of the system,
without regard to how that state was encountered.
Natural language, especially that language which occurs during a dream state or
drugged rhapsody (frequently and too often with malicious intent, these are
misinterpreted as the ravings of madmen), can also be modeled with something
like a Markov chain because of the diffusive nature of tangential thought.

The Markov-chain chat robot applies the principle that the state of a finite
automaton can be described in terms of a set of states foregoing the present;
that is, the state of the machine is a sliding window, in which is recorded some
number of states that were encountered before the state existent at the moment.
Each such state is a word (or phrase / sentence / paragraph if you fancy a more
precise approach to artificial intelligence), and the words are strung together
one after another with respect to the few words that fit in the sliding window.
So, it's sort of like a compression algorithm in reverse, and similar to the way
we memorize concepts by relating them to other concepts. "It's a brain. Sorta."

One problem with Markov robots, and another reason why compilers are of import
in the scientific examination of artificial intelligence, is that of bananas.
The Banana Problem describes the fact that, when a Markov chain is traversed, it
"forgets" what state it occupied before the sliding window moved.
Therefore, for any window of width W < 6, the input B A N A N A first produces
state B, then states A and N sequentially forever.
Obviously, the Banana Problem can be solved by widening the window; however, if
you do that, the automaton's memory consumption increases proportionately.
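The Banana Problem can be exhibited in a few lines; this sketch (mine, not
pal9000's code) builds a width-1 chain from B A N A N A:

```python
# Order-1 (window width 1) Markov table built from the input B A N A N A.
corpus = list("BANANA")
table = {}
for prev, nxt in zip(corpus, corpus[1:]):
    table.setdefault(prev, []).append(nxt)

print(table)   # {'B': ['A'], 'A': ['N', 'N'], 'N': ['A', 'A']}
# From state A the only successor is N; from N, only A. A walk therefore
# goes B -> A -> N -> A -> N -> ... and, having forgotten how far into the
# word it is, never knows it has reached the terminating A.
```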

Additionally, very long inputs tend to throw a Markov-'bot for a loop.
You can sorta fix this by increasing the width of the sliding window signifying
which state the automaton presently occupies, but then you run into problems
when the sliding window is too big and it can't think of any suitable phrase
because no known windows (phrases corresponding to the decision tree's depth)
fit the trailing portion of the input.
It's a sticky problem, which is why I mentioned compilers; they're of import to
artificial intelligence, which is news to absolutely no one, because compilers
(and grammar, generally) describe everything we know about the learning process
of everyone on Earth: namely, that intelligent beings construct semantic meaning
by observing their environments and deducing progressively more abstract ideas
via synthesis of observations with abstractions already deduced.
Nevertheless, you'd be hard-pressed to find even a simple random-walk chatbot
that isn't at least amusing.
(See the "dp" module in MLPTK, which implements the vanilla DisPress algorithm.)

My chatbot, pal9000, is inspired by the Dissociated Press & Eggdrop algorithms;
the copy rights of which are held by their authors, who aren't me.
Although p9k was crafted with regard only to the mathematics and not the code,
if my work is an infringement, I'd be happy to expunge it if you want.

Dissociated Press works like this:
	1. Print the first N words (letters? phonemes?) of a body of text.
	2. Then, search for a random occurrence of a word in the corpus
	   which follows the most recently printed N words, and print it.
	3. Ad potentially infinitum, where "last N words" are round-robin.
It is random: therefore, humorously disjointed.
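As a sketch, the three steps above fit in a dozen lines of Python; the toy
corpus, window width N == 1, and fixed seed are my own choices, so any
resemblance to pal9000's actual code is hypothetical:

```python
import random

def dispress(text, length=12, seed=9000):
    rng = random.Random(seed)  # seeded, so the disjointedness is repeatable
    words = text.split()
    # Index every word by the single word preceding it (N == 1).
    table = {}
    for prev, nxt in zip(words, words[1:]):
        table.setdefault(prev, []).append(nxt)
    out = [words[0]]                     # 1. print the first N words
    while len(out) < length:
        followers = table.get(out[-1])
        if not followers:                # dead end: no known continuation
            break
        out.append(rng.choice(followers))  # 2. a random continuation
    return " ".join(out)                 # 3. round-robin, up to a bound

print(dispress("the cat sat on the mat and the dog sat on the cat"))
```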

And Eggdrop works like this (AFAICR):
	1. For a given coherence factor, N:
	2. Build a decision tree of depth N from a body of text.
	3. Then, for a given input text:
	4. Feed the input to the decision tree (mmm, roots), and then
	5. Print the least likely response to follow the last N words
	   by applying the Dissociated Press algorithm non-randomly.
	6. Terminate response after its length exceeds some threshold;
	   the exact computation of which I can't recall at the moment.
It is not random: therefore, eerily humanoid. (Cue theremin riff, thundercrash.)
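As best I can reconstruct that description, a Python sketch of the non-random
variant might read as follows (the names and the exact stopping rule are my
own guesses, since the original computation escapes me too):

```python
from collections import Counter, defaultdict

def build_tree(corpus, n):
    """Depth-N decision tree: count which word follows each N-word chain."""
    words = corpus.split()
    tree = defaultdict(Counter)
    for i in range(len(words) - n):
        tree[tuple(words[i:i + n])][words[i + n]] += 1
    return tree

def reply(tree, prompt, n, limit):
    """Dissociated Press applied non-randomly: always emit the LEAST likely
    word to follow the last N words; stop past a length threshold."""
    out = prompt.split()[-n:]
    while len(out) < limit:
        counts = tree.get(tuple(out[-n:]))
        if not counts:
            break
        out.append(min(counts, key=counts.get))  # least likely follower
    return " ".join(out)

tree = build_tree("a b a b a c", 1)
print(reply(tree, "x a", 1, 5))  # "a c": 'c' follows 'a' less often than 'b'
```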

A compiler, such as I imagined above, could probably employ sliding windows (of
width N) to isolate recurring phrases or sentences. Thereby it may automatically
learn how to construct meaningful language without human interaction.
Although I think you'll agree that the simplistic method is pretty effective on
its own; notwithstanding, I'll experiment with a learning design once I've done
QL's code generation method sufficiently that it can translate itself to Python.

Or possibly I'll nick one of the Python compiler compilers that already exist.
(Although that would take all the fun out of it.)

I'll parsimoniously describe how pal9000 blends the two:

First of all, it doesn't (not exactly), but it's close.
Pal9000 learns the exact words you input, then generates a response within some
extinction threshold, with a sliding window whose width is variable and bounded.
Its response is bounded by a maximum length (to solve the Banana Problem).
Because it must by some means know when a response ends "properly," it also
counts the newline character as a word.
The foregoing are departures from Eggdrop.
It also learns from itself (to avoid saying something twice), as does Eggdrop.

In addition, p9k's response isn't necessarily random.
If you use the database I included, or choose the experimental "generator"
response method, p9k produces a response that is simply the most surprising word
it encountered subsequent to the preceding state chain.
This produces responses more often, and they are closer to something you said
before, but of course this is far less surprising and therefore less amusing.
The classical Eggdrop method takes a bit longer to generate any reply; but, when
it does, it drinks Dos Equis.
... Uh, I mean... when it does, the reply is more likely to be worth reading.
After I have experimented to my satisfaction, I'll switch the response method
back to the classic Eggdrop algorithm. Until then, if you'd prefer the Eggdrop
experience, you must delete the included database and regenerate it with the
default values and input a screenplay or something. I think Eggdrop's Web site
has the script for Alien, if you want to use that. Game over, man; game over!

In case you're curious, the algorithmic time complexity of PAL 9000 is somewhere
in the ballpark of O(((1 + MAX_COHERENCE - MIN_COHERENCE) * N) ^ X) per reply,
where N is every unique word ever learnt and X is the extinction threshold.
"It's _SLOW._" It asymptotically approaches O(1) in the best case.

For additional detail, consult /mlptk/reference/PAL9000/readme.txt.

Pal9000 is a prototypical design that implements some strange ideas about how,
exactly, a Markov-'bot should work. As such, some parts are nonfunctional (or,
indeed, actively malfunctioning) and vestigial. "Oops... I broke the algorithm."
While designing, I altered multiple good ideas that Eggdrop and DisPress got
right the first time, and thereby made the whole thing worse. For
a more classical computer science dish, try downloading & compiling Eggdrop.

Om. Wherefore ore? Om. Om. Ore… art thou ore?

Dynamic recompilers: gold? Maybe. But what is the significance of recompilation
in the fast-paced, high-tech world of today? Does, indeed, my idea present any
useful information -- or am I merely setting forth in search of lost CPU time?
What am I doing? What am I saying? Where am I going? What am I computing? I've
built the swing, but what about the swang? After the swang, whither the swung?
(Paraphrased from sketches performed in "Monty Python's Flying Circus.")

This lecture provides, in abstract, my concept design of a specialized Turing
machine: the compound compiler/disassembler. It is feasible via my published
work, Quadrare Lexema, which is in the public domain as is my essay.

(Actually, I think these are more appropriately called "dynamic recompilers.")

If you're just joining me, Quadrare Lexema is a Bison-like parser generator
written in the Javascript programming language for use in Web browsers (FF3.6).
It is portable to any browser employing a modern, standards-compliant Javascript
interpreter; such as Chromium, Chrome, Opera, and (hypothetically) Konqueror.
I have not yet had an opportunity to acquire a copy of Microsoft's Web browser,
Internet Explorer, because I've had no opportunity to acquire a copy of Windows
(and couldn't pay for it, even if I did). QL produces a parser like GNU Bison;
which is to say, an LALR parser for Backus-Naur Form context-free grammars.
For more information, visit the Free Software Foundation's Web site; and/or the
Wikipedia articles treating Backus-Naur Form, regular grammars, LR parsers,
and Turing machines.

Because I have already described my work with the hundreds of pages of code and
the few pages of abstract condensed lectures, this is curtate. However, observe
that my foregoing proof constructs the basis upon which this concept rests.
Besides my work, bachelors of computer science worldwide have demonstrated in
their own dissertations proofs similar to the one I have presented you by now.

Again: concept, not implementation. Toward that end lies the specification of a
Turing-complete virtual machine, wherein there be dragons. I will write you one
soon, but I am pending some subsequent blueprints on my acquisition of better
laboratory equipment. In the meantime, I'm pretty sure you can write one w/ QL.

Obviously: a compiler translates the syntax of a human-interface language into
machine instructions; parser generators can be employed to construct compilers;
and such generated parsers can be attached to other parsers in a daisy-chain.

So, make a compiler that converts the abstract instructions it reads into some
progressively simpler set of other instructions, by unshifting multiple tokens
back onto the input stream during the reduction step. You could either read them
again and thereby convert them to yet other computations, or simply skip-ahead
within the input stream until it has been entirely disassembled.
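Here is a toy model of that unshift-on-reduction idea in Python (a sketch of
the concept only; the rule table is hypothetical and this is not QL's actual
interface):

```python
# Each abstract instruction expands to simpler instructions; anything with
# no rule is already atomic. A hypothetical rule table -- not QL's real API.
RULES = {
    "GREET": ["WAVE", "SAY_HELLO"],
    "SAY_HELLO": ["EMIT hello"],
}

def disassemble(program):
    """Pop a token; if a rule matches, unshift its expansion back onto the
    input stream so it is read (and possibly expanded) again."""
    stream = list(program)
    atoms = []
    while stream:
        token = stream.pop(0)
        if token in RULES:
            stream[:0] = RULES[token]   # the "unshift" step
        else:
            atoms.append(token)         # already atomic: emit it
    return atoms

print(disassemble(["GREET", "EXIT"]))
# -> ['WAVE', 'EMIT hello', 'EXIT']
```

Because expansions land back at the head of the stream, they are themselves
re-read and re-expanded until only atoms remain, which is the "progressively
simpler set of other instructions" described above.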

Consider: you're writing an artificial intelligence script, somewhat like this:
	FOR EACH actor IN village:
		WAKE actor,
		actor ACTS SCRIPT FOR TIME OF DAY IN GAME WORLD.
(This syntax is very similar to the A.I. language specified for the Official
 Hamster Republic Role-Playing Game Creation Engine (O.H.R.RPG.C.E.).)
Your context-free grammar is walking along der strasse minding its own business;
when, suddenly, to your surprise, it encounters the ambiguous TIME OF DAY symbol
and the parser does the mash. It does the monster mash. It does the mash, and it
is indeed a graveyard smash in the sense that your carefully-crafted artificial
intelligence -- the very stuff and pith of whose being you have painstakingly
specified as a set of atomic operations in an excruciating mathematical grammar
-- has now encountered a fatal syntax error and you would like nothing better
than to percuss the computer until it does what you say because you've used each
!@#$ free minute learning how to program this thing for the past fifteen !@#$
years and God Forfend(TM) that it should deny you the fruit of your labor.

Well, actually, the OHRRPGCE does solve that problem for you, which is nice; and
maybe it even solves it in exactly this way, which is cool; but I'll continue...
Let's say your parser lets you unshift multiple symbols upon reduction, like QL
(or like any parser that does a similar thing, of which there are surely many);
then "TIME OF DAY" could reduce to an algorithm that does nothing more than find
out what time of day it is and then unshift a symbol corresponding to the script
that most closely matches what time of day it is.
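That reduction might be sketched in Python as follows (the script table and
the hour boundaries are invented for illustration; this is not the OHRRPGCE's
real script format):

```python
import datetime

# Hypothetical script table keyed by part of day; the names are mine.
SCRIPTS = {"morning": "SWEEP_SHOP", "evening": "LIGHT_LAMPS", "night": "SLEEP"}

def reduce_time_of_day(now=None):
    """Reduction of the ambiguous TIME OF DAY symbol: find out what time it
    is, then return the matching script symbol to unshift onto the input."""
    hour = (now or datetime.datetime.now()).hour
    if hour < 12:
        part = "morning"
    elif hour < 20:
        part = "evening"
    else:
        part = "night"
    return SCRIPTS[part]

print(reduce_time_of_day(datetime.datetime(2020, 6, 1, 9)))   # SWEEP_SHOP
print(reduce_time_of_day(datetime.datetime(2020, 6, 1, 21)))  # SLEEP
```

The returned symbol is what the parser would unshift, so the grammar never
needs an explicit rule for every time of day.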

In other words, you've specified the possible computations that the language can
execute as a set of atomic operations, then decomposed the abstract instructions
into those atomic operations before executing them. The functionality is similar
to that of the C preprocessor's #include directive, which directs the CPP to add
the source code from the #included header verbatim like a copy-paste operation.
That's the most straightforward application of this thought, anyway: copy-and-
pasting artificial intelligence scripts into other AI scripts.

Another thought: let's say your compiler is, in addition to its named capacity,
also a virtual machine that's keeping track of state in some artificial world.
So, it's two virtual machines in one. Rather silly, I know, but why not follow
me a bit deeper into the Turing tar-pit where everything is possible and nothing
is NP-complete? After all, "What if?" makes everything meaningless, like magic.

So, again, it's trucking along and hits some instruction like "SLAY THE DRAGON;"
but the dragon is Pete, who cleverly hides from the unworthy. Now, you could say
just ignore this instruction and carry on with the rest of the script, and maybe
even try to slay the dragon again later after you have slung Chalupas out the TB
drive-through window for a few hours. You could even say travel back in time and
slay the dragon while there's yet a chance -- because wholesale violation of the
law of causality is okay when you can't afford to lose your heroic street cred.
But why not check the part of the script you've yet to execute, to see if maybe
there's something else you need to do while you're here, and then try doing it
while you keep an eye out in case Pete shows up?
I mean, you could write that check as part of the "SLAY" atom; or as part of the
next action in your script that happens after the "SLAY" atom; but why not write
it as part of what happens before the "SLAY" atom even executes?

Also, what if you wanted the compiler to try two entirely different sequences of
symbols upon computation of a symbol reduction? Or one reduction modifies the
input stream somehow, while another does not? Or to reorganize/polymorph itself?

All this and more is certainly possible. QL does a big chunk of it the wrong way
by gluing-together all sorts of stuff that could have been done by making your
symbols each a parser instead. Oh, and iterators in Icon also do it better. And
let us not forget the honorable Haskell, whose compiler hails from Scotland and
is nae so much a man as a blancmange. "Learn you a Haskell for great good!" --
this is both the title of a book and an amusing pearl of wisdom.

The only thing that bothers me about this idea is that the procedure necessary
to polymorph the code at runtime seems to be (a) something that I ought to be
doing by nesting context-free parsers, rather than writing a contextual one; and
(b) probably requires a heuristic algorithm, which would be awfully slow.

I haven't developed this idea very well and there's some space left.
Here is a verbatim excerpt with an example Haskell algorithm, for no reason:

  From The Free On-line Dictionary of Computing (30 January 2010) [foldoc]:

  Quicksort
  
     A sorting {algorithm} with O(n log n) average time
     {complexity}.
  
     One element, x of the list to be sorted is chosen and the
     other elements are split into those elements less than x and
     those greater than or equal to x.  These two lists are then
     sorted {recursive}ly using the same algorithm until there is
     only one element in each list, at which point the sublists are
     recursively recombined in order yielding the sorted list.
  
     This can be written in {Haskell}:
  
        qsort               :: Ord a => [a] -> [a]
        qsort []             = []
        qsort (x:xs)         = qsort [ u | u<-xs, u<x ] ++
                               [ x ] ++
                               qsort [ u | u<-xs, u>=x ]

     [Mark Jones, Gofer prelude.]

Now that I've burst my metaphorical payload, in regard to theory, I think future
lectures shall be of a mind to walk novices through some of the smaller bits and
pieces of the algorithms I've already written. There aren't many, but they're a
real pain to translate from source code to English (which is why I haven't yet).
I might also try to explain QL in further detail, but IDK if I can make it much
more straightforward than it is (no questions == no idea if you comprehend)...
and, besides, I've already written more than ten pages about it & it's tedious.

Depending on how much Pokemon I must play to clear my head after overhauling the
parser this year, and how much time is left after that to finish the shell's new
scripting language, I'll aim to write some of those simplified lectures by July.