Tag Archives: Methodology

Essays regarding computer programming in practice.

She Sells C Shells By The Seashore.

(This post, like all my scientific works, is free of charge.)

I've written several shells in various languages, with varying success.

My most successful has been MLPTK: the blockbuster sensation that set the world
afire due to its author's controversial personality. (Or, rather, I ought to say
that it would have if the United Fascist States of America hadn't [EXPLETIVE] a
big pile of their rhetoric and censorship all over it.) I'll run you through its
schematic in a cursory manner with some discourse on my thoughts re: design.




Since I was about fifteen I dreamt of writing a parser. Specifically, the kind I
wanted to write was a shell: I intended to write a computer operating system, or
a text-mode adventure game similar to Zork or Skullkeep. I never did get 'round
to those goals, but I did write a shell (MLPTK) around age twenty five.

The first task was, of course, to effect a console with which I could debug the
shell as I wrote. This was easily accomplished using HTML4: my preferred tongue
for writing simplistic graphical user interfaces, and therefore the reason I so
often write computer programs using the Javascript language. I began with a text
field for command input and a rectangular textarea of dimensions 80 columns by
25 rows: gnome-terminal's canonical console width.

Then I needed to effect the console machinery in mlptk.js. This file contained a
function to bind console commands (other functions) to a global variable "tk."
The tk (Tool Kit) object contained a simplistic parser that split its commands
into legible chunks, processed them, then passed the processed arguments & input
to the bound function (all of which were subordinate to command line parsing).
This file is whence tk._bind() and tk._alias(): the Words of Power for beginner
MLPTK users to start adding their own modules. The file mlptk.js encapsulates a
command parser, history, console, log, input/output hooks, & related functions.
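A toy Python analogue of that arrangement may clarify it (the names ToolKit, bind, and run are invented for illustration; MLPTK's real tk object is Javascript and its API differs):

```python
class ToolKit:
    """Minimal sketch of a command console: a registry plus a naive parser."""
    def __init__(self):
        self.commands = {}  # command name -> bound function

    def bind(self, name, func):
        # Analogous in spirit to tk._bind() registering a module.
        self.commands[name] = func

    def run(self, command_line):
        # Split the line into legible chunks: the first chunk names the
        # command, the rest become the argument vector passed to it.
        argv = command_line.split()
        return self.commands[argv[0]](argv[1:])

tk = ToolKit()
tk.bind("echo", lambda argv: " ".join(argv))
```

Subordinating every module to the same parser keeps command-line handling in one place, which is the main point of the design.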

Because I had become sidetracked as I was writing, I then proceeded to write the
overload.js, which contains a few inefficient convenience functions overloading
Javascript's canonical primitive types (namely: Object, Array, & Function). The
library's most useful function was a JSON-like enumeration of object membership,
with a simplistic recursion sentinel (thanks, "==="!) for greater convenience.
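The recursion sentinel idea translates directly to Python, where object identity (`is`) stands in for Javascript's "==="; a hedged sketch, not the overload.js code itself:

```python
def enumerate_members(obj, seen=None, depth=0):
    """JSON-like dump of nested membership, with a recursion sentinel:
    an object already on the current path is reported, not re-descended."""
    if seen is None:
        seen = []
    lines = []
    if isinstance(obj, dict):
        if any(o is obj for o in seen):  # identity check, a la "==="
            return ["  " * depth + "<cycle>"]
        for key, value in obj.items():
            lines.append("  " * depth + str(key))
            lines.extend(enumerate_members(value, seen + [obj], depth + 1))
    else:
        lines.append("  " * depth + repr(obj))
    return lines
```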

Later on, I effected some changes to commands and the standard input / output
metaphor that permitted MLPTK users to use Javascript's objects within their
command lines and stdin/stdout. This could potentially be an improvement over
traditional Unix-like "bag of bytes" streams, although nothing is stopping any-
one from writing something like fscanf(stdin, "%s", &classInstance)... except for
the facts that: (a) C does not work that way (easily circumvented via memcpy()),
and (b) input should never be written directly to memory without parsing it very
carefully beforehand (because of clever little Johnny Drop Tables).

For the finishing touches -- tee hee, I sound very much like the hosts of the TV
series "This Old House!" :) -- I added a rudimentary file system (cut buffer), &
some clickable buttons for command execution. Withal, MLPTK had become a bare-
bones shell with a functional console, and it was suitable to the task of basic
software design in Javascript. Not exactly Unix, but not too shabby!




MLPTK's parser is naive, but translates command lines in a way that allows argv
(the argument vector: an array containing the program's command line arguments)
to contain Javascript's primitive data types. This is actually an amusing trick,
because data need not be redundantly parsed by each pipelined module. Printing
to the console is effected by the module's return statement, but can also be
accomplished by invoking parent.dbug(), parent.println(), or parent.echo().

The module's standard input is accessed, within the module's anonymous function,
via this.stdin, which is an Array() object whose each element corresponds to the
output of the command at corresponding position in the command grouping that was
piped into this module. Segregation of grouped commands' output was also my idea
-- Bourne Again Shell on Linux certainly doesn't behave so -- & could be utile.

User input can be acquired mid-module with window.prompt(): an ugly hack, but
the only way MLPTK modules can properly acquire such input at runtime until I've
completed Quadrare Lexema and MLPTK can be implemented as an operating system.
Until then, input is more efficaciously processed via argv & stdin.

Quadrare Lexema is a Bison-like Look Ahead Left-to-Right (LALR) parser generator
I wrote in Javascript. I intend to use it to supplant MLPTK's currently hand-
hacked
parsing mechanism and implement a scheduled operating system in Javascript. This
would be substantially similar to what ChromeOS seems to have arrived upon. Of
course, such an operating system would be slow, so don't hold your breath.

Incidentally: Polynom, one of the modules, is a complicated parser in itself. It
 can shorten the time needed to reduce polynomials to like terms.
 However, I didn't test it much (because it's an informal parser),
 so don't use it to do your homework for you. Same with matloc.




In the years since I was twelve I've thought often of operating system design &
specifications thereof. I've arrived at the conclusion that computer programs
are most properly represented as tokens in a formal parser: they execute in time
to this parser's schedule, and are themselves interpreted by another parser. Of
course, this approach to the procedure of computer operation is inefficient, but
offers the several advantages of hypervisory oversight (which grants the OS a
certain degree of control over hacking / exploitation of bugs & an opportune HAL
or Hardware Abstraction Layer that may provide an easier way to load drivers) &
a more easily altered operating system (because, in this hypothetical parser, it
doesn't have a thousand tangled spaghetti pieces sticking keyboard hooks into
each other all day long).

Of course, because all computer programs and even the central processing unit
itself can be considered finite state automata (Alan Turing's brilliant insight,
which he applied to cracking the German "Enigma" machine), the whole computer can
be seen as a
parser. Likewise can any parser be seen as a miniature computermachine of a sort
-- therefore a shell is a subset of any graphical OS, as Microsoft demonstrated
with Windows '95 & '98 (which were built atop the Microsoft Disk Operating System).

But what kind of shell underpins the operating system? It could be interactive
or not, or some shell-like infrastructure accessible only by way of an API. 
Regardless, something resembling a programming language comprises the basis of
any computer operating system, because any system so complex must parse a wide
variety of input in a parser-like way. Due to the relative
ease of writing in high-level interpreted languages, the thought of writing a
whole operating system as an interpreted script is tempting. Viz: Stanford, ITS,
and the Lisp machines on which JEDGAR found his legendary home.

I suppose any OS can be built as a kind of hypervisory shell, in which all code
experiences its runtime within a virtual machine (this particular kind of VM is
called a "sandbox;" for instance, Firefox's Javascript sandbox). However, like
all sandboxes, it is not impregnable, and its boundaries are porous. This means
computer viruses will always be with us, but also that the OS can play fun tricks
with scheduling (such as system requisites rescheduling themselves, or weighted
time slices) and interfaces between programs (say, pipes?) can be hypervised.

The comparisons I draw between mathematically-oriented software design (parser
theory, formal grammars of the sort one writes in extended Backus-Naur format, &
object oriented -- especially origin-symbol oriented -- systems programming) and
Microsoft's infamous XBOX Hypervisor are intentional. Because carefully written
parsers can sandbox the instructions they interpret, and because artificially
generated automatons can be verified by way of mathematics, OS design can be
(I suppose) implemented as a modular series of sandboxed sandboxes. Each sandbox
could change slightly over time (by modifying the EBNF, or whatever notation, of
its grammar) so that computer viruses are more difficult to write. Additionally,
each sandbox could contain a set of heuristic anti-virus instructions and fire-
walls (black- or white-lists) to permit the human system operator to choose when
certain instructions (hard disk access, network, etc) are allowed to execute.

Needless to say, the benefits of a fully interpreted & hypervised computer OS
are potentially vast and of great benefit to humanity. I'm hopeful for future
developments in this field, but can't spare the time for research. Because this
and all my briefs & lectures are in the public domain, I encourage scientists to
take my "ball" and run with it. But, as in the science of basketball -- that is,
the mathematical study of geometry -- don't forget to dribble.




Because that discourse was in fact quite vapid (until I've written it myself),
here's today's most amusing quote from fortune(6):
"
 I
 am
 not
 very
 happy
 acting
 pleased
 whenever
 prominent
 scientists
 overmagnify
 intellectual
 enlightenment
"
:)

Using a Briefcase™? Briefly, a Use Case™.

Briefcase: a special kind of folder in Microsoft Windows, which synchronizes its
 contents with another briefcase of the same name when detected. Used
 to keep volatile documents on floppy disks & USB flash drives without
 constantly copying and pasting the contents of the whole disk every
 time it moves from workstation to workstation. Like an rsync daemon.

Use case: in the studies of software design & architecture, a storyboard sketch,
 or supposition about what a user expects or how s/he'll behave. E.g.:
 This would be the initial node of a flowchart, a branch in main(), GUI
 dialog panes, or some interaction of user with program.
 Additionally, analysis of hostile users and newbies ("misuse case").




I am still moribund ("deadlocked," or "sick to death") by a headache that has
become cerebral palsy. I have been unable to concentrate on my plans this year.

Speaking of contributions to science, you can find my (literally) auriferous
portfolio at the magnanimous MediaFire (they're not just for pirates!):

    https://www.mediafire.com/folder/kr2bjyn1k3gjr/mlptk-recent
    (Download & read the CARGO-MANIFEST.TXT to ascertain the contents of the archives you seek.)
    WARNING: ADULTS ONLY. (Explicit sexual content.)
    Videlicet is still kind of broken. DiffWalk, too, may be faulty.

The hyperlink will lead you to a MediaFire directory. I have added new archives
(for bandwidth conservationists). The file CARGO-MANIFEST.TXT describes all the
contents: _download and read it first_ if you want to know what's in them there
archives, which total over one hundred Megabytes, &/or retrieve your preference.

What's new: kanamo & transl8 (in MLPTK), Mutate-o-Matic, Videlicet, & DiffWalk.
(I said MLPTK was officially dead, but will I let it rest? How about no...)
Archivists curating art galleries downloaded from social networks will love
Videlicet, which solves the vexing twin problems of automatic attribution and
re-configurable data mining. (For those pesky copy protection mechanisms.
Videlicet.py easily cuts through Web galleries and markup up to 1/4" thick.)

I even threw in the experimental upnnas: yea, truly this is an epic day.
(^- That line alludes to one of the _Juicy Cerebellum_'s author's asides.)

The remainder of this briefing describes the salient points of a Python script I
wrote to automatically collate issues of my portfolio. Long story short: "diff."




Because the large size of the archives I upload has become problematic, I have
established a ramshackle mechanism to prepare smaller files for anyone concerned
about bandwidth conservation. (MediaFire reports only two Gigabytes since last
year, which is no big deal, but I certainly wasn't helping. Also I couldn't think
of much else to do.) In case you cared, the usual issues with bandwidth are
constriction & latency: to reuse Senator Ted Stevens' "tubes" metaphor, how wide
the tube is and how long it is, and either of these can alter an observer's
perception of the pressure of fluid forced through the pipe. "When the tubes get
full, things can't get through" -- like dead bodies, or the new episode of Veep.

Metaphorically one half of this mechanism is a portable diff utility: DiffWalk.
The other half is a shell script that identifies changes to the directory tree.
Neither is aught remarkable but why don't I talk your ear off about them anyway?

Diff is a program similar to cmp, used to compare two files and describe their
discrepancies. In common use on Unix-like systems, it is employed to create patch
files that require less time to transmit via point-to-point telecommunication
than would be needed to transmit the whole file whenever it changed. Because it
is so useful an algorithm, and because I've never seen one for Windows (except
in the Berkeley Utilities), I made (but didn't test) a portable one in Python.

DiffWalk is a walking collater that creates patches similar to diff's.
Although the two are not interoperable, they operate in the same manner:
by determination of where the files differ and description of the differences.
Therewith, a "new" file can be reconstructed from an "old" file plus a patch --
hypothetically, with according decrease of network bandwidth load.

Although the script is a few hundred lines long, the scanner (the part that
goes through the file looking for the interesting bits: such as, in this case,
the positions where the new file differs from the old) is one tenth that size.
As you've observed in my other software, I do without proper parsers & grammar.
This renders my work brief, vulgar, and full of bugs, but sometimes legible.




def diff (old_lines, new_lines): #fmt: old_offset old_lines new_lines\nlines\n
    patch_file = [ patch_copacetic_leadin ];

    scan_line = ""; # Compute MD5 checksums for both files...
    old_md5sum = hashlib.md5();
    for line in old_lines: old_md5sum.update(line);
    old_md5sum = old_md5sum.hexdigest();
    scan_line = "%s\t" % (old_md5sum);
    new_md5sum = hashlib.md5();
    for line in new_lines: new_md5sum.update(line);
    new_md5sum = new_md5sum.hexdigest();
    if new_md5sum == old_md5sum: return None; # same file? then no patch req'd.
    scan_line += "%s\n" % (new_md5sum);

    patch_file.append(scan_line); # Second line: old_md5 new_md5

    oi = 0; ol = len(old_lines); ni = 0; nl = len(new_lines);
    tally = 0;
    unique_new_lines = set(new_lines) - set(old_lines);

    while ni < nl: # 2 phases: scan "same" lines, then diff lines
        oi = 0; tally = 0;
        while oi < ol and old_lines[oi] != new_lines[ni]: oi += 1;
        scan_line = "%d\t" % (oi); # Index in "old" file to cat some of its lines
        while oi < ol and ni < nl and old_lines[oi] == new_lines[ni]:
            tally += 1; ni += 1; oi += 1;
        scan_line += "%d\t" % (tally); # Number of lines to cat from "old" file
        tally = 0; next_ni = ni;
        while next_ni < nl and new_lines[next_ni] in unique_new_lines:
            tally += 1; next_ni += 1;
        scan_line += "%d\n" % (tally); # Number of lines to cat from "new" file

        patch_file.append(scan_line);
        patch_file.extend(new_lines[ni : next_ni]);
        ni = next_ni;
    # end while (scan the files, outputting the patch protocol format)

    return patch_file;
# end function diff: returns diff-style patches as writelines() compatible lists




Concise and transpicuous:
 1. Tally runs of lines that already existed in the old file. (Scan phase.)
 2. Tally runs of lines that do not exist in the old file. (Diff phase.)
 3. Print a patch format that permits ordered reconstitution of the lines.
 4. Repeat until the entire new file can be reconstructed from patch + old.
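For illustration, a minimal applier for that patch format (a sketch under my reading of the hunk header `old_offset old_count new_count`; this is not part of DiffWalk itself, and it skips the leadin and checksum lines):

```python
def apply_patch(old_lines, patch_lines):
    """Rebuild the 'new' file from the 'old' file plus a DiffWalk-style patch.
    patch_lines excludes the leadin/checksum framing (assumed stripped)."""
    new_lines = []
    i = 0
    while i < len(patch_lines):
        # Hunk header: where to start in "old", how many old lines to copy,
        # then how many literal new lines follow in the patch itself.
        old_offset, old_count, new_count = map(int, patch_lines[i].split("\t"))
        i += 1
        new_lines.extend(old_lines[old_offset : old_offset + old_count])
        new_lines.extend(patch_lines[i : i + new_count])  # literal new lines
        i += new_count
    return new_lines
```

Reconstitution is just interleaved copying, which is why the patch can be so much smaller than the file.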

Here, Python's set()s abstract away a tedious series of repetitive scans.
Without set or a like data type, I'd have to either hash the "old" file's lines
myself (and waste time writing another binary tree) or loop through it all again
and again for each line of the new file. (That would be due to the fact that, if
lines had been moved about instead of simply moved apart by interjection, then a
lockstep scanner would mistakenly skip some and the patch file would be larger.)

There is no capacity to patch binary files, but DW still detects when they have
changed, and will write a copy into the patch directory. I assume that changes
to binary files are due to transcoding, and therefore the patch'd be just as big
-- some kinds of binary files, such as SQL databases, don't behave this way and
can be patched in the same manner as I patch text files, but I don't use them.
(If you extend the algorithm to databases or executables, don't forget to review
 the pertinent file formats and open the files in binary mode. :)

The rest of the script is a wrapper handling directory traversal and file I/O.

As `info diff` artfully states, "computer users often find occasion to ask how
2 files differ." The utility of a script like DiffWalk is therefore not limited
to patching, but compression protocol is its primary employment on my system. (I
still use `diff` for quotidian difference queries because DW isn't in my $PATH.) 
Likewise, the automatic collation of updates, such as moved and deleted files,
is a pleasant amelioration to the task of finding what's changed in an archive
since the last published edition. DiffWalk now handles these tasks for me.

If you'd like a better solution to the "Briefcase Problem" (how to synchronize
files across multiple installations with minimal time and fuss), don't forget to
stop by the manual pages for "diff", "patch", and "rsync".

“Windows cannot find a critical file.” Current Rage Level: Omicron.

Ubuntu Linux: the Wal-Mart(TM) Frontier. These are the voyages of the Spacecar
Grosvenor. Its continuing mission: to allocate new structs & new classes, unite
all people within its nation, and leak where memory has never leaked before.

Of the numerous Linux installations ("distributions"), I've used Ubuntu Linux
(published by Canonical Ltd.) most. It contains the Linux kernel, the GNU core
utilities, and several other items of interest such as an automagically-
configured graphical user interface.
It is extraordinarily user-friendly, to the point of feeling constrictive. (The
desktop environment has changed since version 11: users now cannot reconfigure
the taskbar or workspaces. The repository wants to be a dime-store, too, and
although a potentially lucrative storefront I miss the simplicity of Synaptic.)

Its installation procedure is simple: download a Live CD image from Canonical's
Web site, burn it to a CD-R or RW (these days, you might even need a DVD), and
reboot your machine with the disk inserted. (Don't forget to tell the BIOS -- er
whatchamacallit, the Extensible Firmware Interface -- to boot from CD.) You'll be
presented with an operable temporary login. Thence you can install the OS. Also
available from this interface was an option to create a USB startup disk, but it
has been removed in recent revisions of Ubuntu: previously, using VirtualBox or
any similar virtual machine, the user could run the LiveCD & make a startup USB
without even rebooting out of their present operating environment, which was
useful on old machines whose optical drives had failed. You can still "Install"
to the USB key, but it boots slowly & you can't install it from there to a box.

The installation wizard is a no-brainer: "install alongside Windows." Easy! And
it usually doesn't cause your existing Windows system to go up in smoke, either.
However, to install Ubuntu more than once per box, you must repartition manually
(and may also need to change grub: see /boot/grub and /etc/grub.d). Gparted is
included within the live disc images, but must be retrieved again after install.

If you'd like to make intimate friends with the manual pages, and discover where
primary partitions go when they die, you can install with less assistance. This
lets you specify partitions in which to mount your home & system directories, in
case you'd like to keep them segregated. (That's probably a great idea, but I
never do.) You can also create and specify swap partitions: which are employed
as virtual memory and, I suspect, for hibernation and hybrid suspension.

About file systems: I typically use FAT32, NTFS, ext4, and ext2. (Total newbie.)
FAT32 is elderly and fragile. It's used for boot/EFI partitions, 3DS & 3GPs.
NTFS is Microsoft's modern FS. Withstands some crashes, but has no fsck module.
ext2 & ext4 are Linux's. ext4 journals. ext2 permits file undeletion (PhotoRec).
The extended 4 system is harder to kill than a cockroach on steroids, so I tend
to prefer it anywhere near the heart of my archives. I use ext2 | NTFS for USBs.

Be very careful not to destroy your existing data when repartitioning the drive.
Any such operation carries some risk; back up anything important beforehand. One
way to backup is to prepare an empty HDD (or any medium of equal / greater size)
and dump the complete contents of the populated disk into the empty one:
 dd if=/dev/sda of=/dev/sdb status=progress
 (Where sda is the populated disk, and sdb the empty backup disk.)
Similar can be accomplished by dd'ing one of your partitions (/dev/sda1) into a
disk or a file, then dd'ing the image back onto a partition of equal size. Disk
image flashing is a simple and popular backup method for local machines, sparing
you the time to learn rsync (which is more useful in long term remote backups).
Far from being an annoying elder sister, dd is the Linux troll's best friend.

Beware the dreaded "write a new boot/system partition" prompt. It bricked me.
The problem was because I had set the system to "Legacy Support" boot mode, but
the original (now unrecognized) installation was in Extensible Firmware Interface
mode. I was unable to recover until I had re-flashed several partitions.

The usual "new car smell" applies: you'll want to configure whatever settings
haven't yet been forbidden to you by your GUI-toting overlords. In Ubuntu 16,
access them by clicking the gear and wrench icon on the launcher panel. You can
also search for something you're missing by using the Dash (Super, or Windows,
key pulls it up: then type), which functions similarly to the apropos command:
e.g., instead of Ctrl + Alt + T and then "man -k image", Super key then "image".
It will also search your files (and, after plugins, several social media sites).

Although the newfangled Dash is convenient, don't forget your terminal emulator:
you can easily spend the vast majority of your working time using bash by way of
gnome-terminal, without ever clicking your treasured Microsoft IntelliMouse 1.1.
In Ubuntu 16, as it has been since Ubuntu 11, Ctrl + Alt + T opens the terminal.

Under the directory /usr/share/man/, you will find the on line (interactive)
manual. This describes the tools available to you. Begin reading it by opening
a terminal window (using Control + Alt + T, or the Super / Windows key and then
typing "terminal"), keying the command 'man name_of_manual_page', and pressing
the Enter key. In this case, the name of the manual page is the page's archive's
filename before the .[0-9].gz extension.
Of particular interest: telinit, dd, printf, cat, less, sed, tee, gvfs-trash,
mawk, grep, bash (if you're using the Bourne Again Shell, which is default on
Ubuntu 16), cp, rm, mv, make, sudo, chroot, chown, chmod, chgrp, touch, gunzip,
gzip, zip, unzip, python, g++, apt-get (especially `apt-get source ...`), mount,
kpartx, date, diff, charmap (same name on Windows!), basename, zipinfo, md5sum,
pdftotext, gnome-terminal (which is _how_ you're using bash), fortune, ffmpeg,
aview, dblatex, find, cut, uniq, wc, caesar, rot13, curl, wget, sort, vim, man,
tr, du, nautilus, tac, column, head, tail, stat, ls, pwd, pushd, popd, gedit,
source-highlight, libreoffice (a Microsoft Office killer), base64, flex, bison,
regex, perl, firefox, opera, chromium-browser, konqueror, lynx, virtualbox,
apropos, od, hexdump, bless, more, pg, pr, echo, rmdir, mkdir, fsck, fdisk (same
name, but different function, in Windows), ln, gdm, gnome-session, dhelp,
baobab, gparted, kill, locate, ps, photorec, testdisk, update-grub...
(If you haven't some of the above, don't worry. You should already have all you
 need. Keep in mind that the Ubuntu repository's software is divided into sections
 some of which contain potentially harmful or non-free software. When venturing
 beyond the fortified walls of <main>, be cautious: you may be eaten by a grue.)
Beneath /usr/share/doc/ or /usr/share/help/ are sometimes additional manuals.

If you use Linux, you will have to memorize several manuals, and name many more;
especially those of the GNU core utilities, which are a great aid to computing.
There's also a software repository to assist you with various computing tasks:

To acquire additional software: gnome-software (the orange shopping bag to your
left, above the Amazon.com icon), the friendly storefront, will assist you. If
you prefer a compact heads-up-display, try the Synaptic Package Manager instead.
`apt-get install package-name` works well if you know what you're looking for,
as does apt-get source package-name for the ponderously masculine.

And, speaking of ponderous masculinity, if you retrieve source code for any of
Ubuntu's mainline packages, typically all you need to do is 'cd' into the folder
containing the top level of the source tree and then invoke the following:
 1. ./configure
 (You shouldn't need to chmod u+x ./configure to accomplish this.)
 2. make
 (You may need to install additional packages or correct minor errors.)
 3. sudo make install
This can be abbreviated: ./configure && make && sudo make install
Beware that sudo is a potentially dangerous operation. Avoid it if unsure.
The && operator, in bash, will only execute the next command if the past command
exited with a successful status code (i.e., zero).

But I digress.

You'll occasionally want to mount your other partitions on Linux's file system,
so that you can browse the files you've stored there. With external drives this
is as simple as connecting them (watch the output of `tail -f /var/log/*` in a
console window to observe the log messages about the procedure), but partitions
on fixed disks (or others, 'cause reasons) may not be mounted automagically. So:
 mount -t fs_type -o option,option,... /dev/sd?? path/to/mount/point/
where the mount point is a directory somewhere in your file system. BTW, mounts
that occurred automatically will be on points beneath /media/your_username/.

On a dual boot Windows system, I mount -t ntfs -o ro /dev/sda3 ~/Desktop/wintmp
often because the NTFS partition is in an unsafe state and won't mount writable.
In that case, rebooting to Windows and running chkdsk /f C: from Command Prompt
with Administrative privileges will sometimes clear the dirty flag if performed
multiple times. (How many times before ntfs-3g mounts writable, seems to vary.)

When you've attached external media, via USB etc, safely remove them after use:
use the "Safely Remove" menu option in the right-click context menu in Nautilus'
sidebar (be careful not to accidentally format the disk). You can also, from a
shell (gnome-terminal), `sync && umount /dev/sdb*` (if sdb is the medium).

Now that you've got a firm foothold in Ubuntu territory, I hope you can see your
house from here 'cause Windows seems to be dying a miserable death of attrition.
Don't count it out, though: all the Linuxes are terrible at Flight Simulator.

こんにちわ い-よ, わんこ いいんちよ…

Title translates loosely, if at all, as "I ain't say hi to the First Dog."




Discrete mathematics treats of sets (groups of items, of known size) and logic.
The operations on sets are union (OR), intersection (AND), difference (-), and
symmetric difference (XOR). I often call AND conjunction and XOR exclusive disjunction.

As for logic, the usual Boolean operations with which programmers are familiar:
 AND: Only the items ("1" bits) in both sets (bytes). 01001 & 01100 == 01000
 OR: All the items ("1" bits) in either set (byte). 01001 | 01100 == 01101
 XOR: Everything in one set or the other, _not_ both. 01001 ^ 01100 == 00101
 NOT: Logical inversion. (Not conversion.) Same as ¬. ! 01001 == 10110
(NOT is a unary operator. The binary operators are commutative. The exclusive OR
 (XOR) operator has this additional property: given, axiomatically, A ^ B = C,
 then C ^ B = A and C ^ A = B. I think that makes XOR (with one operand fixed)
 an involution, not transitivity. Anyhow: symmetric difference self-reverses.
 See also memfrob() @ string.h, & especially Python manual section 6.7 (sets).)
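A quick illustration of that self-reversing property, and of the XOR-with-42 trick memfrob() uses (the `memfrob` below is a re-implementation for demonstration, not a binding to glibc):

```python
A, B = 0b01001, 0b01100
C = A ^ B                           # symmetric difference
assert C ^ B == A and C ^ A == B    # XOR undoes itself

def memfrob(data):
    # XOR each byte with 42, like glibc's memfrob(); apply twice to restore.
    return bytes(b ^ 42 for b in data)
```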

I may have misrepresented some of the details: consult a discrete math text too.
Salient trivia: don't confuse the bitwise intersection (&) & union (|) with the
 boolean operators && and ||. Although equivalent for the values 1 and 0, they
 behave differently for all others. For example: 2 & 5 == 0; although both
 (bool)2 and (bool)5 typecast to true, and therefore
 (bool)2 & (bool)5 == (bool)2 && (bool)5 == true, applying bitwise & to the
 raw integers produces the integer operation, which results in false.
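The same pitfall can be demonstrated from a Python prompt, where `and` plays the role of C's `&&`:

```python
assert 2 & 5 == 0                  # bitwise: 010 & 101 share no set bits
assert (2 and 5) == 5              # logical: both truthy, yields last operand
assert bool(2) & bool(5) == True   # cast to bool first, then 1 & 1 == 1
assert bool(2 & 5) is False        # the bitwise result, taken as a truth value
```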
Et trivia: xor can be used (cautiously) to logically invert comparisons (^ 1).
(See also: GNU Bison, Extended Backus-Naur Format, formal grammar (maths).)

Combinatorics (the science of counting) is discrete mathematics: in which the
delightful factorial (a unary operator, !, whose operand is to the left, which
distinguishes it from unary logical NOT) makes many an appearance. Combinatorial
explosion is a task for any mathematician who wishes to optimize (by trading
memory for repeat computation) such procedures as decision trees, which are NP.
(See also: algorithmic complexity, non-deterministic polynomial time).

Combinations of a set, because order makes no difference, are fewer than its
permutations. A combiner can be written with sliding windows, as in Python's
"itertools" library. (I've also implemented a heuristic version in Javascript.)
Combinations represent all the ways you'd choose some number of items from a set
such that no two combinations contain exactly the same items.
There are S! / (M! * (S - M)!) M-length combinations chosen from a S-length set.
(X! is "X factorial." Don't confuse it with !X, which is "not X," i.e. "¬X." )
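Python's standard library makes that count easy to confirm (math.comb computes S! / (M! * (S - M)!) directly):

```python
import itertools, math

S, M = 5, 3
combos = list(itertools.combinations(range(S), M))
assert len(combos) == math.comb(S, M) == 10       # 5! / (3! * 2!)
# no two combinations contain exactly the same items:
assert len(set(map(frozenset, combos))) == len(combos)
```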

Contrariwise: permutations, in which the same items chosen in a different order
constitute a different permutation than any other, represent all the ways you'd
choose items from a set such that the time when you chose each item matters.
Permutation is most simply and directly implemented by way of recursion.
(Combinations can also be written this way, by narrowing the slices, as I have.)
There are S! / (S - M)! permutations of length M chosen from a set of length S.
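Likewise for permutations (math.perm computes S! / (S - M)!):

```python
import itertools, math

S, M = 5, 3
perms = list(itertools.permutations(range(S), M))
assert len(perms) == math.perm(S, M) == 60        # 5! / 2!
# each set of 3 chosen items appears in 3! == 6 different orders:
assert len(set(map(frozenset, perms))) * math.factorial(M) == len(perms)
```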

The number of "same items, different order" permutations can be arrived upon via
subtraction of the combination formula from the permutation formula:

   S!             S!              M! * S!            S!          (M! - 1) * S!
--------  -  -------------  ==  -------------  -  -------------  ==  -------------
(S - M)!     M! * (S - M)!      M! * (S - M)!     M! * (S - M)!      M! * (S - M)!
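That identity checks out numerically; a quick sanity test with S = 7 and M = 3, using math.factorial:

```python
from math import factorial as f

S, M = 7, 3
permutations = f(S) // f(S - M)                     # 7!/4! == 210
combinations = f(S) // (f(M) * f(S - M))            # 7!/(3!*4!) == 35
surplus = (f(M) - 1) * f(S) // (f(M) * f(S - M))    # "same items, new order"
assert permutations - combinations == surplus == 175
```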




Because the "N-ary Counter" appears frequently in computer programs that behave
in a combinatorial way, and because I implemented them too in Mutate-o-Matic:
the items in any set, when numbered in monotonic integral progression from 0,
can represent the numerals of numbers in base N (where N == S, to reuse the "set
length" name from above). Arithmetic with these figures rarely makes sense,
although hash tables and cryptographic checksums sometimes behave similarly.
There are S ** M (S to the power M; ** is exponentiation in Python) N-ary
"numbers" of length M that are possible by counting in base S.

Combinations and permutations are not defined when the length M of the mutation
exceeds the length of the set. N-ary counters are defined for any M, however.
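itertools covers this case too: product(range(S), repeat=M) is exactly an N-ary
counter, and it works even when M exceeds S.

```python
from itertools import product

S, M = 3, 4  # M > S is fine for a counter, unlike perms and combs
counts = list(product(range(S), repeat=M))

assert len(counts) == S ** M       # 81 four-numeral base-3 "numbers"
assert counts[0] == (0, 0, 0, 0)   # counting starts at zero...
assert counts[-1] == (2, 2, 2, 2)  # ...and tops out at numeral S - 1.
```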




Today I'll describe a mutator I wrote in C. It permutes, combines, and counts.
It is somewhat more complex than the recursive one-liner you'd sensibly write,
but not much different once you see past all the pointers.

Such a one-liner, before I get carried away, would look something like this...
 def anagrams (w): # Pseudocode-turned-Python anagram generator
     if len(w) == 1: # <-- Recursion always requires a sentinel condition,
         yield list(w) # unless you are extremely clever indeed.
         return
     for i in range(len(w)): # Splice out w[i], anagram the remainder,
         for rest in anagrams(w[0 : i] + w[i + 1 :]): # then prepend w[i]
             yield [ w[i] ] + rest # to each sub-anagram as it arrives.
... in Python, though my own rendition in Python is somewhat more acrobatic (it
foolishly employs generator expressions); as for combinations, see PyDoc 10.7.1
(itertools.combinations). Python is fast, because it's just a shell around C, &
Python's "itertools" module will do the discrete math acrobatics in record time.
Itertools also contains many of the discrete math axioms I rewrote above -- see
its page in the Python manual, section 10.7, for the details.

Not content to waste nanoseconds, however, I resolved to write a significantly
less intelligible implementation using the C programming language. I ought to
point out that the savings in algorithmic complexity are vanishingly negligible:
in practice it probably uses more time than the Python implementation, owing to
prodigal invocation of malloc() (among my other C vices, such as **indirection).

Anyway, here's the Mutate-o-Matic in Stone Tablet Format courtesy of my C-hisel:
 
 #include <stdio.h>  // for printf()
 #include <stdlib.h> // for calloc(), free(), exit()

 typedef enum { PERM, COMB, NARY } mmode_t;
 // ^- Respectively, "permutation," "combination," or "N-ary counter" mode.
 void mutate (
     char *set[],    // set of items to permute. An array of C-strings.
     size_t slen,    // length of the set.
     size_t plen,    // number of items yet to choose.
     mmode_t mode,   // kind of mutation I'm executing.
     char delim[],   // delimiter, per permutation item.
     char *prefix[], // items chosen already (NULL on the first call).
     size_t pflen    // length of the prefix (0 on the first call).
 ) {
     if (plen == 0) { return; } // nothing to choose: nothing to print
     size_t subset_length = (mode == NARY) ? slen : slen - 1;
     char **next_set = (char**) calloc(subset_length, sizeof(char*));
     char **next_prefix = (char**) calloc(pflen + 1, sizeof(char*));
     if (next_set == NULL || next_prefix == NULL) { exit(-1); }
     for (size_t n = 0; n < pflen; next_prefix[n] = prefix[n], n++) ;

     for (int i = slen - 1, j, k; i >= ((mode == COMB) ? (int)plen - 1 : 0); i--) {
         next_prefix[pflen] = set[i];

         if (plen > 1) { // Either descend the recursion ...
             for (k = 0, j = 0; j < i; next_set[k++] = set[j++]) ;
             switch (mode) {// ^- the above are elements common to every mode
                 case NARY: next_set[k++] = set[j]; // XXX NB: fallthrough
                 case PERM: for (j++; j < (int)slen; next_set[k++] = set[j++]); break;
                 default: break; // COMB keeps only the items below the window
             } // (^- conditionally copy elements not common to all three modes)

             mutate(next_set, k, plen - 1, mode, delim, next_prefix, pflen + 1);
         } else { // ... or print the mutation (selected set items).
             for (j = 0; j < (int)pflen; printf("%s%s", next_prefix[j++], delim));
             printf("%s\n", next_prefix[pflen]);
         } // *Sound of cash register bell chiming.*
     } // end for (Select items and mutate subsets)
     free(next_set); free(next_prefix); // Paying tribute to Malloc, plz wait
 }

(To download the C file, plus the rest of my works, retrieve my portfolio by way
 of the links to MediaFire supplied in the top post at my WordPress blog.)
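For readers without a C compiler handy, here is a pocket-sized Python rendition
of the same descend-or-print logic; the function name and the tuple prefix are
my own shorthand, not the C file's.

```python
def mutate(items, plen, mode, prefix=()):
    """Yield length-plen mutations of items. mode: 'PERM', 'COMB', or 'NARY'."""
    if plen == 0:
        yield prefix  # all choices made: emit the finished mutation
        return
    for i in reversed(range(len(items))):
        if mode == 'COMB' and i < plen - 1:
            break  # too few items left below the sliding window
        if mode == 'NARY':
            subset = items                      # items may be chosen again
        elif mode == 'PERM':
            subset = items[:i] + items[i + 1:]  # cull the chosen item
        else:  # 'COMB'
            subset = items[:i]                  # cull chosen item & subsequent
        yield from mutate(subset, plen - 1, mode, prefix + (items[i],))

abc = ['a', 'b', 'c']
assert len(list(mutate(abc, 2, 'PERM'))) == 6  # 3! / 1!
assert len(list(mutate(abc, 2, 'COMB'))) == 3  # 3! / (2! * 1!)
assert len(list(mutate(abc, 2, 'NARY'))) == 9  # 3 ** 2
```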

It seems so simple: only an idiot couldn't see how to compute mutations in this
manner, right? However, although I first encountered the problem when I was but
a wee lass, & although I finally discovered the formula for permutations and
N-ary counters after the better part of two !@#$ decades, I yet fail to grok
_why_ the "bounded sliding windows" illustrated by itertools successfully choose
only the combinations. (The best I can do is "because they're in sort order?")
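For whatever consolation it offers, the "sort order" hunch is exactly right, and
the itertools documentation says as much: each M-item subset has exactly one
arrangement whose indices are strictly increasing, so emitting only increasing
index tuples yields each combination once. A demonstration:

```python
from itertools import combinations, permutations

S, M = 5, 3
increasing = [p for p in permutations(range(S), M)
              if all(a < b for a, b in zip(p, p[1:]))]

# Filtering the permutations down to their sorted representatives leaves
# one tuple per subset: precisely the combinations, in the same order.
assert increasing == list(combinations(range(S), M))
```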

Anyway, the procedure is woefully uncomplicated:
append on the prefix an unchosen item from the set, then either...
 [remaining choices > 0] recurse, having culled chosen item (unless in N-ary
 mode); and, if in Combination mode, having culled
 items subsequent to the sliding window (set[i]),
... or ...
 [choices == 0] print chosen items, which are a mutation, & continue

I'm sure you can see that this whole procedure would be a lot faster if I wasn't
spending precious clock ticks spelunking the dreaded caves of malloc(). A global
array of arrays with entries for each prefix length would really speed things up
-- as would splitting the mutator into three different functions, to eliminate
the branching instructions ("if-else," "switch," and "(predicate) ? a : b").

However, I recommend you don't bother trying to write it faster in C. What with
the huge cache memories CPUs have these days, it's probably plenty zippy. Even
though it _can_ be faster in C, it could be even faster in Assembler, so learn
an instruction set and assemble a core image if that's what you really want.

Until then, for all its inefficiency, Mutate-o-Matic renders about 100,000
mutations per second. That's faster than those sci-fi lockpicks in the movies.
Dash Maid, by Alason.

Booty Is In The Eye Of The Beholder.

(NB: at time of writing, Vide is still in its Alpha testing phase.
     If you're only here for Vide, try again in Q1-Q2 2018.)

Welcome yet again, intrepid reader, to the idyllic pastures of Moon Yu.
(^- Allusion to _Kung Pow: Enter the Fist._)

I've been having trouble coding anything extraordinary; so, for this edition,
I've written briefings covering the broad strokes of three quotidian programs.
Thankfully, modern languages like Bash and Python make software design easy, so
some of this (what I might have called dross, in better days) may interest you.

None of these programs are anything new, I'm sorry to say. Although you may not
have seen or heard of them before, I assure you they're quite trite. Nonetheless
this one in particular approaches a popular problem in an excellent way. I think
the memetic teenage and otaku demographics will be most enthusiastic!
(Unless they have to make a new font for it. What a dull task that would be.)




It's called "Videlicet." That's from two Latin words, "videre" & "licet," which
mean "to see" and "it is lawful; permitted; easy," thus "it can be seen." This
word "videlicet," abbreviated "viz," is used in the law: indicating the party is
not required to prove what he alleges (possibly because of a preceding verdict).
Here I mean simply "it's easy to see": a reference to its informational overlay.

My goal in writing Videlicet was to craft a programmatic means to attribution.
Because I often download many drawings and macros, and later forget where they
came from, the time was way past ripe to prepare somesuch "labeling machine." If
you're like me, you have Gigabytes of poorly-attributed drawings found at social
networks -- every one, inexplicably, without even the artist's signature! -- and
I'll bet you wish you had recorded as well as possible the provenance of each.

Perhaps with some program that automatically records this information for you?

Videlicet is such a script: a combination harvester for static-markup galleries.
I grant it to the Public Domain too, insofar as the source code _is_ mine.
Obviously, Python's libraries (and Python itself) belong to others: so, if you
intend commerce, consult its authors. (I think the language has copyrights.)




As usual, to download it, retrieve my portfolio from the links to MediaFire that
are supplied in the top post at my WordPress blog: https://ipduck.wordpress.com.




Two salient paradigms to be seen in Videlicet are typeface-rendering & grepping.
These are the major parts required to write stuff & to figure out what to write.
Shell scriveners worldwide have surely accomplished something similar with their
own tools -- at a minimum, invoking wget to mirror the Web gallery containing
the drawing will do the same -- or simply kept handwritten bibliographical text
files; but here I've described a program to inscribe that information _on_ the
pictures.




First of all: how to make writing a thing? The Python Imaging Library (PIL) has
facilities fain to furnish fonts, but I felt a fool for failing to fathom their
fructuous function. I resolved to write my own: a simplistic cut-and-paste font
engine that gets the job done economically but without panache.

The desired typeface is encoded, character by character, as images of any type.
These are then loaded into memory and overlaid upon a panel of solid color. The
resulting "text box" is its own image, which I then attach to the retrieved one.
A few amusing effects are made possible by way of PIL's transformations.

The portions of the script that encode the font and the deceptively simple state
automaton that acts as a "typewriter" actually required about a thousand lines
and weeks before I was entirely satisfied. I give them short shrift here because
the vast majority of it is the exceedingly banal "move the cursor, check that it
hasn't traveled out of bounds, then paste a glyph into the image" procedure.

More on that procedure can be found in CharSet(), Overlay(), and JuiceFusion().
Videlicet can write forward, backward, up and down, using any font you create.
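To give the flavor of that banal procedure without dragging in PIL, here is a
toy "typewriter" over a character grid; the function name and the one-character
glyphs are mine, not Videlicet's.

```python
def typewrite(text, cols, rows):
    """Paste one glyph (here, one character) per cursor step into a grid."""
    grid = [[' '] * cols for _ in range(rows)]
    col = row = 0
    for glyph in text:
        if col >= cols:         # move the cursor: wrap at the right margin,
            col, row = 0, row + 1
        if row >= rows:         # check that it hasn't traveled out of bounds,
            break
        grid[row][col] = glyph  # then paste a glyph into the "image."
        col += 1
    return [''.join(r) for r in grid]

page = typewrite("HELLO WORLD", cols=6, rows=2)
assert page == ['HELLO ', 'WORLD ']
```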




Second: how do I know what to write? This is a task I usually accomplish using
grep (from ed's g/re/p: globally search a regular expression and print) and sed
(the Stream Editor) in a
command line in my bourgeois gnome-terminal. I'd first feed the page's source,
probably via curl (a URL catenator), into a grep invocation that finds for me
the lines with the information I need, then into a sed invocation that massages
them into whatever format I want them to look like in the finished result.

(Such procedure is so commonplace that "grep" in jest can be a pun on "grope.")

Here, a similar procedure is executed against a downloaded HTML file in memory.
The files are retrieved by way of Python's hypertext transport facilities. If I
read Py's manual correctly: cookies, authentication, and proxies are supported.
Regular expressions are then employed to sieve the text and harvest any metadata
relevant to the user's records. The data, proper, is then harvested as well.
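In memory, the grep-then-sed massage reduces to Python's re module. A minimal
sketch against invented HTML (the markup, patterns, and field names here are
illustrative, not Videlicet's actual configuration):

```python
import re

html = '''<div class="post">
  <img src="https://example.invalid/art/pony42.png">
  <span class="byline">drawn by @SomeArtist</span>
</div>'''

# "grep": find the lines bearing the information we need...
src = re.search(r'<img src="([^"]+)"', html).group(1)
artist = re.search(r'drawn by @(\w+)', html).group(1)

# ..."sed": massage them into the desired finished format.
record = f"{src} -- credit: {artist}"
assert record.endswith('pony42.png -- credit: SomeArtist')
```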

Actually sieving through the text necessitated contrivance. You see: text (such
as documents written in Hypertext Markup Language) changes a lot these days,
especially at corporate boilerplate factories like social networks. Interfaces
can't be relied upon from a robotic perspective, because programmers must always
be prepared to update the robots, which defeats the bot's purpose: to save you
valuable time and thought. Every time I visit MyTwitFeedFaceTube+(TM), I either
write a new "curl | grep | sed | mawk | while read ...; do ...; done" sieve or
just throw my hands up and use a foolish human GUI-capable Web browser. And, of
course, I still can't figure out Windows' interprocess communication metaphor,
so I'm dead in the water there too.

"Ballocks," I grumbled in this regard of late Spring, and grudgingly resolved to
implement a portable framework for execution of the usual sieving tasks.
At some overextended length, Videlicet's target acquisition procedure was born.
By the configuration files' powers combined, and with reference to a few short
(thanks to Python's standard libraries) convenience functions that effect the
vast majority of everything I ever do with GNU/Linux, it is Captain Planet.

Uh, I mean: it is able to effect the same reformatting operations.

As for retrieval of the text to be sieved and reformatted:

Videlicet simply employs the Python interpreter's capability of arbitrary code
execution via eval(), exec() and friends. In this fashion it constructs classes
that encode a recursion chain: progressing from the initial phase of target
acquisition, through each step of winnowing the HTML (or Javascript, or whatever
-- although Videlicet doesn't implement a formal parser such as Bison or PLY,
and is therefore devoid of any capacity to disassemble and recompile active
code, it can still effect a certain degree of penetration if the URL can be
guessed) to locate embedded Uniform Resource Locators within tags & stylesheets,
and finally arriving at the conclusory retrieval of the desired image & the
overlaying of the historical record.
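A toy model of that configuration-driven chain, with eval() doing the heavy
lifting; the step strings and sample markup are invented for illustration and
are not Videlicet's real configuration format.

```python
# Each configuration entry is Python source for one winnowing step.
steps = [
    "lambda text: __import__('re').findall(r'href=\"([^\"]+)\"', text)",
    "lambda urls: [u for u in urls if u.endswith('.png')]",
]
pipeline = [eval(s) for s in steps]  # arbitrary code execution, as warned

html = '<a href="a.png"></a><a href="b.txt"></a>'
result = html
for step in pipeline:      # progress from acquisition through each winnowing
    result = step(result)  # step toward the embedded resource locators
assert result == ['a.png']
```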




Once resources are harvested the discerning archivist has at his or her disposal
a wealth of pertinent data gathered automatically during Videlicet's operation.

Not only is the rendered image, with added informational overlay, saved to disk;
a copy of the original is also saved, as is a text file containing a record of
the transaction's HTTP headers (and, optionally, additional metadata). Actually,
the latter two records are saved for all fetched resources, just in case you
needed them. Naturally, this additional time stamping & record keeping clutters
up the burgh, so Videlicet is a lot less convenient to use than wget or curl.

Speaking of inconvenience, I ought to mention another one of its major caveats:
due to the manner of data retrieval, some of the target's original metadata
(including its times of last access, modification, status change, and birth,
which are remarkably important from a historical perspective) is irreparably
lost unless included with the HTTP headers. I could solve this problem by
reference to any Web browser's code to see how they preserve the information,
but after writing all the rest of it I was in a hurry to take some time off.

And, of course, the final indignity: because Vide requires the Python Imaging
Library, and I have no idea how to cross-compile this for Windows, you'll be out
of luck if you want to use it and can't find an installation with PIL available.
(_And_ I don't know where you can find one!) I am most sincerely regretful about
this fact, because Vide is one hell of a finite state automaton.




Given all that fuss and bother, why have I gone to the trouble of writing Vide?
Because I am sick and tired of downloading images from social networks only to
lose all the information recorded in the original post, even if there _is_ none.
A means to stamp images acquired from disparate sources was clearly necessary.




There are some other interesting bits in Videlicet. For example, it's scriptable
(employs arbitrary code execution via eval(), which is bad engineering practice
but saves me a little time), and can be employed to render text upon images you
already have. Regardless, all the technical complexity is as illustrated: the
remainder of Videlicet is a set of simple wrappers around grabbing and labeling.

Videlicet is my first Python 3 program. Unless the Python language undergoes any
further dramatic revision, this script will live on in posterity. (Really. It's
a pretty big deal.) Otherwise, you might want to get your kicks out of it now.
I don't plan to maintain it because, even if the language changes, I'm tired.

Cheroot Privileges: a Potpourri of Pointlessness.

Cheroot (Tamil "shuruttu" meaning "a roll"): a cigar. Reputed to be pungent.
chroot (GNU coreutils, manual section 8): run command in special root directory.
Potpourri: a compost heap, montage, medley, or ragout. NB: never compost meat.
Root privileges: to have these is to be the super-user, operator, admin, etc.
Root: a dental nerve, et c.



My foregoing post touched on socket programming, when I mentioned TFTP. (BTW, MS
Windows has a TFTP client built-in: in the Programs & Features control panel app
open "turn Windows features on or off.")

Sockets are a hardware abstraction layer that deals with computer networking.
As usual, gritty details are beyond me and I gloss them over. (Tee hee. That's a
pun about oyster pearls.) Suffice to say that sockets are ports of call for data
transmitted between computers: hardware and protocol notwithstanding, bytes fly
out one socket and land in another. We built this Internet on socket calls.
(A pun on Jefferson {Airplane,Starship}'s "We Built This City.")

For more information, consult the RFCs, and the IEEE's 802.* network specs.
Perhaps ftp.rfc-editor.org, www.faqs.org/rfcs, or www.ietf.org/rfc are of use?
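The whole "ports of call" metaphor fits in ten lines of Python, on the loopback
interface (binding to port 0 asks the operating system for any free port):

```python
import socket

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # a port of arrival
recv.bind(('127.0.0.1', 0))
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # a port of departure

# Bytes fly out one socket and land in another.
send.sendto(b'ahoy', recv.getsockname())
data, addr = recv.recvfrom(1024)
assert data == b'ahoy'
send.close(); recv.close()
```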

And an update to my Javascript snippet in the remedial lecture...
    function initnary (ctr) { for (var i=0; i < ctr.length; ctr[i++] = 0); }
    function incnary (ctr) { // Faster than the example in WP12, but rolls
        for (var L = ctr.length, i = L - 1;      // over instead of halting
             (ctr[i] = ((ctr[i--] + 1) % L)) == 0 && i > -1; // at a sentry.
        ) ;
    } // end incnary(): Increments N-ary counter (length >= 1), by reference
    // ...
    var nary = new Array(items.length);
    initnary(nary); // nary's state: 0 0 0 ... 0
    incnary(nary); // nary's state: 0 0 0 ... 1
    // ...
... which is possibly a bit faster than the other one, although neither will be
optimized by an optimizing compiler (due to the complicated loop initializer), &
therefore both are of marginal utility.



It's 2017. To begin my new year on the right foot, I began it on the wrong foot.

My first hint that I'd need to effect some impromptu renovations to my skeleton
came to me when I noticed that I had begun to experience an unpleasant taste of
musty dust after picking clean my right anterior maxillary tricuspid. (The reason
why shattered teeth taste of moist chalk is probably because dentine & chalk are
both calcareous substances. I'd guess chalk rots too, if infected.) Another way
I could describe the taste of a rotten tooth is "like hard-boiled eggs that were
rotten before they were boiled," because they smell and taste alike. The dentine
(the material composing the interior of teeth) also feels distinctly like chalk,
or like gritty soil, when I palpate it with my tongue.

Anyway, my left anterior mandibular tricuspid has also been a goner since auld
lang syne, and the bone fragments left over inside my gums have really begun to
bug me, so a taste of fetor was the last straw.

Luckily, I had a small piece of surgical gauze left over from when I foolishly
had my wisdom teeth removed. (If you're considering removal of yours, then I am
here to tell you: DON'T! It's a waste of money, and, unless your teeth are truly
rotten or a source of pain, there is simply _no reason_ to remove even one.) If
you haven't tried to get a grip on one of your teeth before, you wouldn't know,
but even a tooth you've wiped dry is difficult to grasp without gauze.

I'm also the lucky owner of a pair of surgical forceps. These handy little tools
look like a long and delicate pair of pliers with the fulcrum very close to the
gripping side of the levers. ("They really pinch.")

In case you were curious, forceps are usually employed to grasp small objects in
surgical procedures. They can also be used as roach clips. (For avoiding burns &
stains of the fingers while smoking. Wide pipe stems containing packed cotton
accomplish the same end: you can make one from a hollow ballpoint pen and cotton
balls sold at any general store. Nevertheless a forcep is more generally utile.)

Those teeth's days had long been numbered. Their time had come!

So it was that I spent tedious hours doubled over with my fingers crammed in my
mouth, wiggling that thrice-damned curse of a bone to try and work it loose.
I quite unwisely, and disregarding the risk of breaking my jaw, channeled thirty
years of pent aggression into what remained of my tricuspid molar, as malodorous
flakes of rotten enamel & dentine fell upon my tongue like evil snow.

I knew I had effected some kind of progress when I heard a muffled click inside
of my head -- bones have eerie acoustic properties, like an unsettling resonance
and a tendency to produce a crunching sound (rather than a snap) when fractured
-- and felt a stabbing pain travel up the side of my head. Thankfully the pain I
felt due to prolonged migraine headache rendered this somewhat less intolerable.

I repeated this procedure until I lost consciousness.
Well, that's how I had hoped that this would end, but it didn't.
I could not bear the pain, and had to stop trying to pull my tooth.

Unfortunately for me, although I did manage to work the molar somewhat further
out of my jaw than it had loosened already (my dental hygiene, in case the memo
hasn't reached you, is worse than Austin Powers'), I didn't completely extract.
All I managed to do was cause a hairline fracture of my maxilla, which will
undoubtedly be a source of major difficulty and pain to me in the decades to come.

Worse yet, my application of too much pressure via the forceps caused additional
shattering of the tooth; further attempts at extrication are contraindicated.
That's just as well, because the kind of general-purpose forceps I had available
aren't for dental extraction: this requires a special kind of forcep I hadn't.

I suppose it's just as well: considering the fact that some dentine remained in
the shell of the tooth, its nerve was probably still alive and well. The nerves
connecting teeth to the root canal are extremely sensitive, and interconnected;
what's worse, I could easily have broken my jaw by violently levering the tooth;
therefore, extracting my tooth myself would very likely have been suicide.

So, as far as sockets go, my teeth will be rotting in theirs for some time yet.



Other noteworthy pratfalls during January:
1. Accidentally locked myself out of Windows by attempting to install Ubuntu 16
   alongside, which occurred after it prompted me to designate a BIOS boot part
   (prior installs didn't manifest the prompt and gave me no trouble).
2. Locked myself out of Ubuntu too by trying to unbrick Windows.
3. Flashed in the backup EFI system partition and boot sector from a disk image,
   reset the partition table with fdisk, thanked lucky stars, began again at 1.
4. Broke shiny new laptop's fragile keyboard connector. Cursed fate.

Incidentally, I had some luck using this procedure to regain access to a Lenovo
IdeaPad 100-151BD 80QQ's UEFI Firmware Configurator after I had set my boot mode
to Legacy Support before installing Ubuntu, which locked me out of the config:
    1. At GRUB operating system selection screen key 'c' for a command line.
    2. normal_exit
    3. initrd
    (initrd fails because you didn't load the kernel, but then Windows Boot
     Manager tries to load in UEFI mode for some reason & presents a screen
     politely offering to give you the FW Config if you give it the ESC key,
     which it doesn't usually when your boot mode is Legacy Support instead
     of UEFI with or without secure boot.)
I ought to note: Ubuntu 16 boots the configurator automagically in UEFI boot mode:
the option reappeared when I `sudo update-grub`ed while in UEFI mode.

Speaking of GRUB, here's a boot procedure (in case you've never driven stick):
    1. root=hd0,gpt8
       (Linux is at sda8 on my system)
    2. linux /vmlinuz
    3. initrd /initrd.img
    4. boot
Or, to shift gears into Windows:
    1. root=hd0,gpt1
    2. chainloader /EFI/Microsoft/Boot/bootmgfw.efi
    3. boot

While I'm on the topic, here's how to play a tune at boot time using GRUB:
    A.1. @ boot menu (operating system selection), key 'c' for a GRUB shell.
    A.2. play TEMPO PITCH1 DURATION1 PITCH2 DURATION2 P3 D3 ... ad infinitum
         Pitches are frequencies in Hertz; duration is a fraction of tempo.
or
    B.1. In Ubuntu, Control + Alt + T to open a terminal emulator window.
    B.2. sudo gedit /etc/default/grub
    B.3. Feed the recordable piano by editing the line at the bottom:
         GRUB_INIT_TUNE="325 900 6 1000 1 900 2 800 2 750 2 800 1 900 2 600
5 0 1 500 1 600 1 800 1 750 2 600 2 675 2 750 4"
         # ^- The Amazing Water (NiGHTS)
         GRUB_INIT_TUNE="1024 600 2 650 2 700 2 950 10 900 20 0 10 600 2 650
2 700 2 950 20 1050 10 1100 5"
         # ^- Batman, the Animated Series.
         GRUB_INIT_TUNE="2048 600 5 0 1 600 5 0 1 575 5 0 1 575 5 0 1 550 5
0 1 550 5 0 1 575 5 0 1 575 5 0 1 600 5 0 1 600 5 0 1 575 5 0 1 575 5 0 1
550 5 0 1 550 5 0 1 575 5 0 1 575 5 0 1 900 8 0 4 900 24"
         # ^- classic Batman.
    B.4. Save the file, and then sudo update-grub && sudo reboot
Musical notes within the 500-1500 Hz range tend to be within 100Hz of each other
(therefore ± 50 Hz for flats & sharps) typically, but act strange around 600 Hz.



GNU/Linux is dandy for computer programming, especially data processing, because
it is now (thanks to Ubuntu) easier to use than ever; but it changes so quickly
that I've barely skimmed over the repository before the next long-term support
version has been finalized. The installer wizard also sometimes makes mistakes.
The software repository is slowly morphing into a dime-store, any software worth
using requires considerable technical expertise cultivated @ your great expense,
and if anything breaks then you have to be the fastest teletype gun in the west.

And, because my comments re: Linux may mislead, I'm thrilled about Windows 10.
Have you played Microsoft Flight Simulator recently? Great game.

Automaton Empyreum: the Key to Pygnition. (Trivial File Transfer Protocol edition.)

(I have implemented the Trivial File Transfer Protocol, revision 2, in this milestone snapshot. If you have dealt with reprogramming your home router, you may have encountered TFTP. Although other clients presently exist on Linux and elsewhere, I have implemented the protocol with a pair of Python scripts. You’ll need a Python interpreter, and possibly Administrator privileges (if the server requires them to open port 69), to run them. They can transfer files of size up to 32 Megabytes between any two computers communicating via UDP/IP. Warning: you may need to pull out your metaphorical monkey wrench and tweak the network timeout, or other parameters, in both the client and server before they work to your specification. You can also use TFTP to copy files on your local machine, if for whatever reason you need some replacement for the cp command. Links, courtesy of MediaFire, follow:

Executable source code (the programs themselves, ready to run on your computer): http://www.mediafire.com/file/rh5fmfq8xcmb54r/mlptk-2017-01-07.zip

Candy-colored source code (the pretty colors help me read, maybe they’ll help you too?): http://www.mediafire.com/file/llfacv6t61z67iz/mlptk-src-hilite-2017-01-07.zip

My life in a book (this is what YOUR book can look like, if you learn to use my automatic typesetter and tweak it to make it your own!): http://www.mediafire.com/file/ju972na22uljbtw/mlptk-book-2017-01-07.zip

)
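As a taste of the protocol: every TFTP transfer opens with a request packet,
which RFC 1350 defines as a two-byte opcode followed by the filename and the
transfer mode as NUL-terminated strings. Building one takes a line of Python:

```python
import struct

def rrq(filename, mode=b'octet'):
    """Build a TFTP read request (opcode 1) per RFC 1350."""
    return struct.pack('!H', 1) + filename + b'\0' + mode + b'\0'

pkt = rrq(b'firmware.bin')
assert pkt == b'\x00\x01firmware.bin\x00octet\x00'
```

(The 32-Megabyte ceiling mentioned above follows from the protocol's 512-byte
data blocks and 16-bit block numbers: 65535 * 512 bytes is just shy of 32 MB.)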

Title is a tediously long pun on "Pan-Seared Programming" from the last lecture.
Key: mechanism to operate an electric circuit, as in a keyboard.
Emporium: a marketplace (German: ein Handelsplatz); or, perhaps, the brain.
Empyreuma: the smell/taste of organic matter burnt in a close vessel (as, pans).
Lignite: intermediate between peat & bituminous coal. Empyreumatic odor.
Pignite: Pokémon from Black/White. Related to Emboar & Tepig (ember & tepid).
Pygmalion (Greek myth): a king; sculptor of Galatea, who Aphrodite animated.

A few more ideas that pop up often in the study of computer programming: which,
by the way, is not computer science. (Science isn't as much artifice as record-
keeping, and the records themselves are the artifact.)

MODULARITY
As Eric Steven Raymond of Thyrsus Enterprises writes in "The Art of Unix
Programming," "keep it simple, stupid." If you can take your programs apart, and
then put them back together like Lego(TM) blocks, you can craft reusable parts.

CLASSES
A kind of object with methods (functions) attached. These are an idiom that lets
you lump together all your program's logic with all of its data: then you can
take the class out of the program it's in, to put it in another one. _However,_
I have been writing occasionally for nearly twenty years (since I was thirteen)
and here's my advice: don't bother with classes unless you're preparing somewhat
for a team effort (in which case you're a "class" actor: the other programmers
are working on other classes, or methods you aren't), think your code would gain
from the encapsulation (perhaps you find it easier to read?), or figure there's
a burning need for a standardized interface to whatever you've written (unlikely
because you've probably written something to suit one of your immediate needs:
standards rarely evolve on their own from individual effort; they're written to
the specifications of consortia because one alone doesn't see what others need).
Just write your code however works, and save the labels and diagrams for some
time when you have time to doodle pictures in the margins of your notebook, or
when you _absolutely cannot_ comprehend the whole at once.

UNIONS
This is a kind of data structure in C. I bet you're thinking "oh, those fuddy-
duddy old C dinosaurs, they don't know what progress is really about!" Ah, but
you'll see this ancient relic time and again. Even if your language doesn't let
you handle the bytes themselves, you've got some sort of interface to them, and
even if you don't need to convert between an integer and four ASCII characters
with zero processing time, you'll still need to convert various data of course.
Classes then arise which simulate the behavior of unions, storing the same datum
in multiple different formats or converting back and forth between them.
(Cue the scene from _Jurassic Park,_ the film based on Michael Crichton's book,
 where the velociraptor peeks its head through the curtains at a half-scaffolded
 tourist resort. Those damn dinosaurs just don't know when to quit!)
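Even in Python, which hides the bytes from you, the union idiom survives in the
struct module: the same four bytes viewed either as one 32-bit integer or as
four ASCII characters.

```python
import struct

# Reinterpret four ASCII characters as a little-endian 32-bit integer...
n = struct.unpack('<I', b'FOUR')[0]
# ...and back again, losslessly, like reading the other member of a union.
assert struct.pack('<I', n) == b'FOUR'
assert n == 0x52554F46  # 'R', 'U', 'O', 'F': the low byte comes first
```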

ACTUALLY, VOID POINTERS WERE WHAT I WAS THINKING OF HERE
The most amusing use of void*s I've imagined is to implement the type definition
for parser tokens in a LALR parser. Suppose the parser is from a BNF grammar:
then the productions are functions receiving tokens as arguments and returning a
token. Of course nothing's stopping you from knowing their return types already,
but what if you want to (slow the algorithm down) add a layer of indirection to
wrap the subroutines, perhaps by routing everything via a vector table, and now
for whatever reason you actually _can't_ know the return types ahead of time?
Then of course you cast the return value of the function as whatever type fits.

ATOMICITY, OPERATOR OVERLOADING, TYPEDEF, AND WRAPPERS
Washing brights vs darks, convenience, convenience, & convenience, respectively.
Don't forget: convenience helps you later, _when_ you review your code.

LINKED LISTS
These are a treelike structure, or should I say a grasslike structure.
I covered binary trees at some length in my fourth post, titled "On Loggin'."

RECURSION
The reason why you need recursion is to execute depth-first searches, basically.
You want to get partway through the breadth of whatever you're doing at this
level of recursion, then set that stuff aside until you've dealt with something
immensely more important that you encountered partway through the breadth. Don't
confuse this with realtime operating systems (different than realtime priority)
or with interrupt handling, because depth-first searching is far different than
those other three topics (which each deserve lectures I don't plan to write).
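The shape of that "set the breadth aside, finish the important thing first"
behavior, in a few lines (the tree and visit order are invented for the
example):

```python
def dfs(tree, node, visit):
    """Deal with everything beneath node before resuming node's siblings."""
    visit(node)
    for child in tree.get(node, []):
        dfs(tree, child, visit)  # descend: the siblings wait on the stack

tree = {'a': ['b', 'e'], 'b': ['c', 'd']}
order = []
dfs(tree, 'a', order.append)
assert order == ['a', 'b', 'c', 'd', 'e']  # depth before breadth
```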

REALTIME OPERATING SYSTEMS, REALTIME PRIORITY, INTERRUPT HANDLING
Jet airplanes, video games versus file indexing, & how not to save your sanity.

GENERATORS
A paradigm appearing in such pleasant languages as Python and Icon.
Generators are functions that yield, instead of return: they act "pause-able,"
and that is plausible because sometimes you really don't want to copy-and-paste
a block of code to compute intermediate values without losing execution context.
Generators are the breadth-first search to recursion's depth-first search, but
of course search algorithms aren't all these idioms are good for.
Suppose you wanted to iterate an N-ary counter through all of its states. (This
is similar to how you generate anagrams of a word, although those are
permutations without repetition -- for which, see itertools.permutations in the
Python documentation, or any of the texts on discrete mathematics that deal
with combinatorics.) Now, an N-ary
counter looks a lot like this, but you probably don't want a bunch of these...
    var items = new Array(A, B, C, D, ...);       // ... tedious ...
    var L = items.length;                         // ... lines ...
    var nary = new Array(L);                      // ... of code ...
    for (var i = 0; i < L; nary[i++] = 0) ;       // ... cluttering ...
    for (var i = L - 1; i >= 0 && ++nary[i] == L; // ... all ...
        nary[i--] = ((i < 0) ? undefined : 0)     // ... your other ...
    ) ; // end for (incrementation)               // ... computations ...
... in the middle of some other code that's doing something only tangentially
related. So, you write a generator: it takes the N-ary counter by reference,
then runs an incrementation loop to update it as desired. The counter is
incremented, whereupon control returns to whatever you were doing first. Voila!
(This might not seem important, but it is when your screen size is 80 by 24.)
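JavaScript grew generators too (as of ES6), so here is a sketch of the counter above rewritten as one -- the incrementation loop lives in its own paused execution context instead of cluttering yours (digit count and radix parameters are my own framing):

```javascript
// Generator form of the N-ary counter: each next() call resumes here,
// yields the counter's next state, and pauses again.
function* naryCounter(digits, radix) {
    var nary = new Array(digits);
    for (var i = 0; i < digits; i++) nary[i] = 0;
    for (;;) {
        yield nary.slice();                        // hand back a snapshot
        var j = digits - 1;                        // rightmost digit first
        while (j >= 0 && ++nary[j] == radix) nary[j--] = 0;  // carry
        if (j < 0) return;                         // rolled over: done
    }
}

// A two-digit binary counter: 00, 01, 10, 11.
var states = [];
for (var state of naryCounter(2, 2)) states.push(state.join(""));
console.log(states);    // [ '00', '01', '10', '11' ]
```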



NOODLES AND DOODLES, POMS ON YOUR POODLES, OODLES AND OODLES OF KITS & CABOODLES
(Boodle (v.t.): swindle, con, deceive. Boodle (n.): gimmick, device, strategy.)
Because this lecture consumed only about a half of the available ten thousand
characters permissible in a WordPress article, here's a PowerPoint-like summary
that I was doodling in the margins because I couldn't concentrate on real work.
Modularity: perhaps w/ especial ref to The Art of Unix Programming. "K.I.S.S."
Why modularity is important: take programs apart, put them together like legos.
Data structures: unions, classes.
Why structures are important: atomicity, op overloading, typedefs, wrappers.
Linked lists: single, double, circular. Trees. Binary trees covered in wp04??
Recursion: tree traversal, data aggregation, regular expressions -- "bookmarks"
Generators. Perhaps illustrate by reference to an N-ary counter?

AFTER-CLASS DISCUSSION WITH ONE HELL OF A GROUCHY ETHICS PROFESSOR
Suppose someone is in a coma and their standing directive requests you to play
some music for them at a certain time of day. How can you be sure the music is
not what is keeping them in a coma, or that they even like it at all? Having
experienced death firsthand, when I cut myself & bled with comical inefficiency,
I can tell you that only the dying was worth it. The pain was not, and I assure
you that my entire sensorium was painful for a while there -- even though I had
only a few small lacerations. Death was less unpleasant with less sensory input.
I even got sick of the lightbulb -- imagine that! I dragged myself out of the
lukewarm bathtub to switch the thing off, and then realized that I was probably
not going to die of exsanguination any time soon and went for a snack instead.

AFTER-CLASS DISCUSSION WITH ONE HELL OF A GROUCH
"You need help! You are insane!"
My 1,000 pages of analytical logic versus your plaintive bleat.