Tag Archives: Code

Entry contains code in some language. Hopefully one I didn’t create.

Using a Briefcase™? Briefly, a Use Case™.

Briefcase: a special kind of folder in Microsoft Windows, which synchronizes its
 contents with another briefcase of the same name when detected. Used
 to keep volatile documents on floppy disks & USB flash drives without
 constantly copying and pasting the contents of the whole disk every
 time it moves from workstation to workstation. Like an rsync daemon.

Use case: in the studies of software design & architecture, a storyboard sketch,
 or supposition about what a user expects or how s/he'll behave. E.g.:
 This would be the initial node of a flowchart, a branch in main(), GUI
 dialog panes, or some interaction of user with program.
 Additionally, analysis of hostile users and newbies ("misuse case").




I am still moribund ("deadlocked," or "sick to death") by a headache that has
become cerebral palsy. I have been unable to concentrate on my plans this year.

Speaking of contributions to science, you can find my (literally) auriferous
portfolio at the magnanimous MediaFire (they're not just for pirates!):

    https://www.mediafire.com/folder/kr2bjyn1k3gjr/mlptk-recent
    (Download & read the CARGO-MANIFEST.TXT to ascertain the contents of the archives you seek.)
    WARNING: ADULTS ONLY. (Explicit sexual content.)
    Videlicet is still kind of broken. DiffWalk, too, may be faulty.

The hyperlink will lead you to a MediaFire directory. I have added new archives
(for bandwidth conservationists). The file CARGO-MANIFEST.TXT describes all the
contents: _download and read it first_ if you want to know what's in them there
archives, which total over one hundred Megabytes, &/or retrieve your preference.

What's new: kanamo & transl8 (in MLPTK), Mutate-o-Matic, Videlicet, & DiffWalk.
(I said MLPTK was officially dead, but will I let it rest? How about no...)
Archivists curating art galleries downloaded from social networks will love
Videlicet, which solves the vexing twin problems of automatic attribution and
re-configurable data mining. (For those pesky copy protection mechanisms.
Videlicet.py easily cuts through Web galleries and markup up to 1/4" thick.)

I even threw in the experimental upnnas: yea, truly this is an epic day.
(^- That line alludes to one of the _Juicy Cerebellum_'s author's asides.)

The remainder of this briefing describes the salient points of a Python script I
wrote to automatically collate issues of my portfolio. Long story short: "diff."




Because the large size of the archives I upload has become problematic, I have
established a ramshackle mechanism to prepare smaller files for anyone concerned
about bandwidth conservation. (MediaFire reports only two Gigabytes since last
year, which is no big deal, but I certainly wasn't helping. Also I couldn't think
of much else to do.) In case you cared, the usual issues with bandwidth are
constriction & latency: to reuse Senator Ted Stevens' "tubes" metaphor, how wide
the tube is and how long it is, and either of these can alter an observer's
perception of the pressure of fluid forced through the pipe. "When the tubes get
full, things can't get through" -- like dead bodies, or the new episode of Veep.

Metaphorically one half of this mechanism is a portable diff utility: DiffWalk.
The other half is a shell script that identifies changes to the directory tree.
Neither is aught remarkable but why don't I talk your ear off about them anyway?

Diff is a program similar to cmp, used to compare two files and describe their
discrepancies. In common use on Unixlike systems, it is employed to create patch
files that require less time to transmit via point-to-point telecommunication
than would be needed to transmit the whole file whenever it changed. Because it
is so useful an algorithm, and because I've never seen one for Windows (except
in the Berkeley Utilities), I made (but didn't test) a portable one in Python.

DiffWalk is a walking collator that creates patches similar to diff's.
Although the two are not interoperable, they operate in the same manner:
by determination of where the files differ and description of the differences.
Therewith, a "new" file can be reconstructed from an "old" file plus a patch --
hypothetically, with according decrease of network bandwidth load.

Although the script is a few hundreds of lines long, the scanner (the part that
goes through the file looking for the interesting bits: such as, in this case,
the positions where the new file differs from the old) is one tenth that size.
As you've observed in my other software, I do without proper parsers & grammar.
This renders my work brief, vulgar, and full of bugs, but sometimes legible.




import hashlib # for the checksums below

def diff (old_lines, new_lines): #fmt: old_offset old_lines new_lines\nlines\n
    patch_file = [ patch_copacetic_leadin ]; # lead-in defined elsewhere in DiffWalk

    scan_line = ""; # Compute MD5 checksums for both files...
    old_md5sum = hashlib.md5();
    for line in old_lines: old_md5sum.update(line.encode()); # md5 wants bytes
    old_md5sum = old_md5sum.hexdigest();
    scan_line = "%s\t" % (old_md5sum);
    new_md5sum = hashlib.md5();
    for line in new_lines: new_md5sum.update(line.encode());
    new_md5sum = new_md5sum.hexdigest();
    if new_md5sum == old_md5sum: return None; # same file? then no patch req'd.
    scan_line += "%s\n" % (new_md5sum);

    patch_file.append(scan_line); # Second line: old_md5 new_md5

    oi = 0; ol = len(old_lines); ni = 0; nl = len(new_lines);
    tally = 0;
    unique_new_lines = set(new_lines) - set(old_lines);

    while ni < nl: # 2 phases: scan "same" lines, then diff lines
        oi = 0; tally = 0;
        while oi < ol and old_lines[oi] != new_lines[ni]: oi += 1;
        scan_line = "%d\t" % (oi); # Index in "old" file to cat some of its lines
        while oi < ol and ni < nl and old_lines[oi] == new_lines[ni]:
            tally += 1; ni += 1; oi += 1;
        scan_line += "%d\t" % (tally); # Number of lines to cat from "old" file
        tally = 0; next_ni = ni;
        while next_ni < nl and new_lines[next_ni] in unique_new_lines:
            tally += 1; next_ni += 1;
        scan_line += "%d\n" % (tally); # Number of lines to cat from "new" file

        patch_file.append(scan_line);
        patch_file.extend(new_lines[ni : next_ni]);
        ni = next_ni;
    # end while (scan the files, outputting the patch protocol format)

    return patch_file;
# end function diff: returns diff-style patches as writelines() compatible lists
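
By way of usage, a hypothetical driver (the file names are invented; DiffWalk's
actual wrapper, described below, handles the directory traversal and file I/O):

    with open("old.txt") as f: old = f.readlines()
    with open("new.txt") as f: new = f.readlines()
    patch = diff(old, new)
    if patch is not None: # None means the files were identical
        with open("new.txt.patch", "w") as f: f.writelines(patch)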




Concise and transpicuous:
 1. Tally runs of lines that already existed in the old file. (Scan phase.)
 2. Tally runs of lines that do not exist in the old file. (Diff phase.)
 3. Print a patch format that permits ordered reconstitution of the lines.
 4. Repeat until the entire new file can be reconstructed from patch + old.
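
To make the reconstitution concrete, here's a minimal sketch of the reverse
operation -- rebuilding the new file from the old file plus a patch -- assuming
the record format diff() emits above (a lead-in line, a checksum line, then
"old_offset TAB old_count TAB new_count" records each followed by new_count
literal lines). The helper is my own sketch, not code from DiffWalk:

    def apply_patch (old_lines, patch_lines):
        new_lines = []
        i = 2 # skip the lead-in line and the checksum line
        while i < len(patch_lines):
            oi, old_count, new_count = map(int, patch_lines[i].split("\t"))
            i += 1
            new_lines.extend(old_lines[oi : oi + old_count]) # cat from "old"
            new_lines.extend(patch_lines[i : i + new_count]) # cat from patch
            i += new_count
        return new_lines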

Here, Python's set()s abstract away a tedious series of repetitive scans.
Without set or a like data type, I'd have to either hash the "old" file's lines
myself (and waste time writing another binary tree) or loop through it all again
and again for each line of the new file. (That would be due to the fact that, if
lines had been moved about instead of simply moved apart by interjection, then a
lockstep scanner would mistakenly skip some and the patch file would be larger.)

There is no capacity to patch binary files, but DW still detects when they have
changed, and will write a copy into the patch directory. I assume that changes
to binary files are due to transcoding, and therefore the patch'd be just as big
-- some kinds of binary files, such as SQL databases, don't behave this way and
can be patched in the same manner as I patch text files, but I don't use them.
(If you extend the algorithm to databases or executables, don't forget to review
 the pertinent file formats and open the files in binary mode. :)

The rest of the script is a wrapper handling directory traversal and file I/O.

As `info diff` artfully states, "computer users often find occasion to ask how
2 files differ." The utility of a script like DiffWalk is therefore not limited
to patching, but compression protocol is its primary employment on my system. (I
still use `diff` for quotidian difference queries because DW isn't in my $PATH.) 
Likewise, the automatic collation of updates, such as moved and deleted files,
is a pleasant amelioration to the task of finding what's changed in an archive
since the last published edition. DiffWalk now handles these tasks for me.

If you'd like a better solution to the "Briefcase Problem" (how to synchronize
files across multiple installations with minimal time and fuss), don't forget to
stop by the manual pages for "diff", "patch", and "rsync".

こんにちわ い-よ, わんこ いいんちよ…

Title translates loosely, if at all, as "I ain't say hi to the First Dog."




Discrete mathematics treats of sets (groups of items, of known size) and logic.
The operations on sets are union (OR), intersection (AND), difference (-), and
symmetric difference (XOR). I call AND conjunction and XOR exclusive disjunction.

As for logic, the usual Boolean operations with which programmers are familiar:
 AND: Only the items ("1" bits) in both sets (bytes). 01001 & 01100 == 01000
 OR: All the items ("1" bits) in either set (byte). 01001 | 01100 == 01101
 XOR: Everything in one set or the other, _not_ both. 01001 ^ 01100 == 00101
 NOT: Logical inversion. (Not conversion.) Same as ¬. ! 01001 == 10110
(NOT is a unary operator. The binary operators are commutative. The exclusive OR
 (XOR) operator has this additional property: given, axiomatically, A ^ B = C,
 then C ^ B = A and C ^ A = B. I think that makes XOR evanescent, but I can't
 remember if that's transitivity. Anyhow: symmetric difference self-reverses.
 See also memfrob() @ string.h, & especially Python manual section 6.7 (sets).)
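
Here's a quick check of that property in Python, whose ^ operator serves as XOR
on integers and as symmetric difference on sets:

    A, B = 0b01001, 0b01100
    C = A ^ B                        # == 0b00101, as in the table above
    assert C ^ B == A and C ^ A == B # ^ by a fixed operand undoes itself
    S, T = {1, 4}, {2, 3, 4}
    D = S ^ T                        # == {1, 2, 3}
    assert D ^ T == S and D ^ S == T # likewise for symmetric difference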

I may have misrepresented some of the details: consult a discrete math text too.
Salient trivia: don't confuse the bitwise intersection (&) & union (|) with the
 boolean operators && and ||. Although equivalent for the values
 1 and 0, they behave differently for all others. For example:
 2 & 5 == 0; although both (bool)2 and (bool)5 typecast to true,
 and therefore (bool)2 & (bool)5 == ((bool)2 && (bool)5) == true,
 & applied to the raw integers 2 (binary 010) and 5 (binary 101)
 finds no bits in common, so 2 & 5 is zero: false.
Et trivia: xor can be used (cautiously) to logically invert comparisons (^ 1).
(See also: GNU Bison, Extended Backus-Naur Format, formal grammar (maths).)
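
The same trivia, rendered in Python for want of a C compiler (parenthesized,
because Python's == binds tighter than its &):

    assert (2 & 5) == 0                # 0b010 & 0b101: no bits in common
    assert (2 and 5) == 5              # logical "and": yields the last operand
    assert (bool(2) & bool(5)) == True # typecast first and the results agree
    assert ((5 > 3) ^ 1) == 0          # ^ 1 logically inverts a comparison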

Combinatorics (the science of counting) is discrete mathematics: in which the
delightful factorial (a unary operator, !, whose operand is to the left, which
distinguishes it from unary logical NOT) makes many an appearance. Combinatorial
explosion is a task for any mathematician who wishes to optimize (by trading
memory for repeat computation) such procedures as decision trees, whose optimal
construction is NP-hard.
(See also: algorithmic complexity, non-deterministic polynomial time).

Combinations of a set, because order makes no difference, are fewer than its
permutations. A combiner can be written with sliding windows, as in Python's
"itertools" library. (I've also implemented a heuristic version in Javascript.)
Combinations represent all the ways you'd choose some number of items from a set
such that no two combinations contain exactly the same items.
There are S! / (M! * (S - M)!) M-length combinations chosen from a S-length set.
(X! is "X factorial." Don't confuse it with !X, which is "not X," i.e. "¬X." )

Contrariwise: permutations, in which the same items chosen in a different order
constitute a different permutation than any other, represent all the ways you'd
choose items from a set such that the time when you chose each item matters.
Permutation is most simply and directly implemented by way of recursion.
(Combinations can also be written this way, by narrowing the slices, as I have.)
There are S! / (S - M)! permutations of length M chosen from a set of length S.
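
Both formulae are easy to sanity-check against itertools (math.comb & math.perm
arrived in Python 3.8):

    import math
    from itertools import combinations, permutations
    S, M = 5, 3; items = "abcde"
    assert len(list(combinations(items, M))) == math.comb(S, M) == 10
    assert len(list(permutations(items, M))) == math.perm(S, M) == 60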

The number of "same items, different order" permutations can be arrived upon via
subtraction of the combination formula from the permutation formula:

   S!            S!              M! * S!            S!          (M! - 1) * S!
--------  -  -------------  ==  -------------  -  -------------  ==  -------------
(S - M)!     M! * (S - M)!      M! * (S - M)!     M! * (S - M)!      M! * (S - M)!

(Check it with S = 4, M = 2: 12 permutations - 6 combinations == 6 duplicate
 orderings, and indeed (2! - 1) * 4! / (2! * 2!) == 24 / 4 == 6.)




Because the "N-ary Counter" appears frequently in computer programs that behave
in a combinatorial way, and because I implemented them too in Mutate-o-Matic:
the items in any set, when numbered in monotonic integral progression from 0,
can represent the numerals of numbers in base N (where N == S, to reuse the "set
length" nomial from above). Arithmetic with these figures rarely makes sense,
although hash tables and cryptographic checksums sometimes behave similarly.
There are S ** M (S to the power M; ** is exponentiation in Python) N-ary
"numbers" of length M that are possible by counting in base S.

Combinations and permutations are not defined when the length M of the mutation
exceeds the length of the set. N-ary counters are defined for any M, however.
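
Python's itertools.product is exactly such a counter, and is indeed untroubled
when M exceeds S:

    from itertools import product
    digits = "012" # S == 3 items, numbered 0 through S - 1
    assert len(list(product(digits, repeat=4))) == 3 ** 4 # M == 4 > S: fine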




Today I'll describe a mutator I wrote in C. It permutes, combines, and counts.
It is somewhat more complex than the recursive one-liner you'd sensibly write,
but not much different once you see past all the pointers.

Such a one-liner, before I get carried away, would look something like this...
    def anagrams (w): # Pseudocode (read: I haven't tested it) anagram generator
        if len(w) == 1: # <-- Recursion always requires a sentinel condition,
            yield list(w) # unless you are extremely clever indeed.
            return
        for i in range(len(w)): # (a generator can't be +'d onto a list,
            for rest in anagrams(w[0 : i] + w[i + 1 :]): # so loop over the
                yield [ w[i] ] + rest # sub-anagrams instead)
... in Python, though my own rendition in Python is somewhat more acrobatic (it
foolishly employs generator expressions); as for combinations, see PyDoc 10.7.1
(itertools.combinations). Python is fast, because it's just a shell around C, &
Python's "itertools" module will do the discrete math acrobatics in record time.
Itertools also contains many of the discrete math axioms I rewrote above -- see
its page in the Python manual, section 10.7, for the details.

Not content to waste nanoseconds, however, I resolved to write a significantly
less intelligible implementation using the C programming language. I ought to
point out that the savings in algorithmic complexity are vanishingly negligible:
it actually probably uses more time than the Python implementation, owing to its
prodigal invocation of malloc() (among my other C vices, such as **indirection).

Anyway, here's the Mutate-o-Matic in Stone Tablet Format courtesy of my C-hisel:
 
 #include <stdio.h>  // for printf
 #include <stdlib.h> // for calloc, free, exit

 typedef enum { PERM, COMB, NARY } mmode_t;
 // ^- Respectively, "permutation," "combination," or "N-ary counter" mode.
 void mutate (
     char *set[],    // set of items to permute. An array of C-strings.
     size_t slen,    // length of the set.
     size_t plen,    // how many more items to choose.
     mmode_t mode,   // kind of mutation I'm executing.
     char delim[],   // delimiter, per permutation item.
     char *prefix[], // items chosen already. (Pass NULL at the top level;
     size_t pflen    //  C has no default arguments, unlike C++.)
 ) {
     size_t last_item = pflen; // the prefix slot this invocation fills in
     size_t subset_length = (mode == NARY) ? slen : slen - 1;
     char **next_set = (char**) calloc(subset_length, sizeof(char*));
     char **next_prefix = (char**) calloc(pflen + 1, sizeof(char*));
     if (next_set == NULL || next_prefix == NULL) { exit(-1); }
     for (size_t n = 0; n < pflen; next_prefix[n] = prefix[n], n++) ;

     for (int i = slen - 1, j, k; i >= ((mode == COMB) ? (int)plen - 1 : 0); i--) {
         next_prefix[last_item] = set[i];

         if (plen > 1) { // Either descend the recursion ...
             for (k = 0, j = 0; j < i; next_set[k++] = set[j++]) ;
             switch (mode) {// ^- the above are elements common to every mode
                 case NARY: next_set[k++] = set[j]; // XXX NB: fallthrough
                 case PERM: for (j++; j < slen; next_set[k++] = set[j++]); break;
             } // (^- conditionally copy elements not common to all three modes)

             mutate(next_set, k, plen - 1, mode, delim, next_prefix, pflen + 1);
         } else { // ... or print the mutation (selected set items).
             for (j = 0; j < (int)last_item; printf("%s%s", next_prefix[j++], delim));
             printf("%s\n", next_prefix[last_item]);
         } // *Sound of cash register bell chiming.*
     } // end for (Select items and mutate subsets)
     free(next_set); free(next_prefix); // Paying tribute to Malloc, plz wait
 }
 // A hypothetical driver, printing the six 2-permutations of {a, b, c}:
 //     char *items[] = { "a", "b", "c" };
 //     mutate(items, 3, 2, PERM, " ", NULL, 0);

(To download the C file, plus the rest of my works, retrieve my portfolio by way
 of the links to MediaFire supplied in the top post at my WordPress blog.)

It seems so simple: only an idiot couldn't see how to compute mutations in this
manner, right? However, although I first encountered the problem when I was but
a wee lass, & although I finally discovered the formula for permutations and
N-ary counters after the better part of two !@#$ decades, I yet fail to grok
_why_ the "bounded sliding windows" illustrated by itertools successfully choose
only the combinations. (The best I can do is "because they're in sort order?")

Anyway, the procedure is woefully uncomplicated:
append on the prefix an unchosen item from the set, then either...
 [remaining choices > 0] recurse, having culled chosen item (unless in N-ary
 mode); and, if in Combination mode, having culled
 items subsequent to the sliding window (set[i]),
... or ...
 [choices == 0] print chosen items, which are a mutation, & continue

I'm sure you can see that this whole procedure would be a lot faster if I wasn't
spending precious clock ticks spelunking the dreaded caves of malloc(). A global
array of arrays with entries for each prefix length would really speed things up
-- as would splitting the mutator into three different functions, to eliminate
the branching instructions ("if-else," "switch," and "(predicate) ? a : b").

However, I recommend you don't bother trying to write it faster in C. What with
the huge cache memories CPUs have these days, it's probably plenty zippy. Even
though it _can_ be faster in C, it could be even faster in Assembler, so learn
an instruction set and assemble a core image if that's what you really want.

Until then, for all its inefficiency, Mutate-o-Matic renders about 100,000
mutations per second. That's faster than those sci-fi lockpicks in the movies.

Cheroot Privileges: a Potpourri of Pointlessness.

Cheroot (Tamil "shuruttu" meaning "a roll"): a cigar. Reputed to be pungent.
chroot (GNU coreutils, manual section 8): run command in special root directory.
Potpourri: a compost heap, montage, medley, or ragout. NB: never compost meat.
Root privileges: to have these is to be the super-user, operator, admin, etc.
Root: a dental nerve, et c.



My foregoing post touched on socket programming, when I mentioned TFTP. (BTW, MS
Windows has a TFTP client built-in: in the Programs & Features control panel app
open "turn Windows features on or off.")

Sockets are a hardware abstraction layer that deals with computer networking.
As usual, gritty details are beyond me and I gloss them over. (Tee hee. That's a
pun about oyster pearls.) Suffice to say that sockets are ports of call for data
transmitted between computers: hardware and protocol notwithstanding, bytes fly
out one socket and land in another. We built this Internet on socket calls.
(A pun on Jefferson {Airplane,Starship}'s "We Built This City.")

For more information, consult the RFCs, and the IEEE's 802.* network specs.
Perhaps ftp.rfc-editor.org, www.faqs.org/rfcs, or www.ietf.org/rfc are of use?

And an update to my Javascript snippet in the remedial lecture...
    function initnary (ctr) { for (var i=0; i < ctr.length; ctr[i++] = 0); }
    function incnary (counter) { // Faster than WP12's, but rollover, not sentry.
        for (var L = counter.length, i = L - 1;
             (counter[i] = ((counter[i--] + 1) % L)) == 0 && i > -1;
        ) ; // (NB: L serves as both the base and the counter's length here.)
    } // end incnary(): Increments N-ary counter (length >= 1), by reference
    // ...
    var nary = new Array(items.length);
    initnary(nary); // nary's state: 0 0 0 ... 0
    incnary(nary); // nary's state: 0 0 0 ... 1
    // ...
... which is possibly a bit faster than the other one, although neither will be
optimized by an optimizing compiler (due to the complicated loop initializer), &
therefore both are of marginal utility.



It's 2017. To begin my new year on the right foot, I began it on the wrong foot.

My first hint that I'd need to effect some impromptu renovations to my skeleton
came to me when I noticed that I had begun to experience an unpleasant taste of
musty dust after picking clean my right anterior maxillary tricuspid. (The reason
why shattered teeth taste of moist chalk is probably because dentine & chalk are
both calcareous substances. I'd guess chalk rots too, if infected.) Another way
I could describe the taste of a rotten tooth is "like hard-boiled eggs that were
rotten before they were boiled," because they smell and taste alike. The dentine
(the material composing the interior of teeth) also feels distinctly like chalk,
or like gritty soil, when I palpate it with my tongue.

Anyway, my left anterior mandibular tricuspid has also been a goner since auld
lang syne, and the bone fragments left over inside my gums have really begun to
bug me, so a taste of fetor was the last straw.

Luckily, I had a small piece of surgical gauze left over from when I foolishly
had my wisdom teeth removed. (If you're considering removal of yours, then I am
here to tell you: DON'T! It's a waste of money, and, unless your teeth are truly
rotten or a source of pain, there is simply _no reason_ to remove even one.) If
you haven't tried to get a grip on one of your teeth before, you wouldn't know,
but even a tooth you've wiped dry is difficult to grasp without gauze.

I'm also the lucky owner of a pair of surgical forceps. These handy little tools
look like a long and delicate pair of pliers with the fulcrum very close to the
gripping side of the levers. ("They really pinch.")

In case you were curious, forceps are usually employed to grasp small objects in
surgical procedures. They can also be used as roach clips. (For avoiding burns &
stains of the fingers while smoking. Wide pipe stems containing packed cotton
accomplish the same end: you can make one from a hollow ballpoint pen and cotton
balls sold at any general store. Nevertheless a forcep is more generally utile.)

Those teeth's days had long been numbered. Their time had come!

So it was that I spent tedious hours doubled over with my fingers crammed in my
mouth, wiggling that thrice-damned curse of a bone to try and work it loose.
I quite unwisely, and disregarding the risk of breaking my jaw, channeled thirty
years of pent aggression into what remained of my tricuspid molar, as malodorous
flakes of rotten enamel & dentine fell upon my tongue like evil snow.

I knew I had effected some kind of progress when I heard a muffled click inside
of my head -- bones have eerie acoustic properties, like an unsettling resonance
and a tendency to produce a crunching sound (rather than a snap) when fractured
-- and felt a stabbing pain travel up the side of my head. Thankfully the pain I
felt due to prolonged migraine headache rendered this somewhat less intolerable.

I repeated this procedure until I lost consciousness.
Well, that's how I had hoped that this would end, but it didn't.
I could not bear the pain, and had to stop trying to pull my tooth.

Unfortunately for me, although I did manage to work the molar somewhat further
out of my jaw than it had loosened already (my dental hygiene, in case the memo
hasn't reached you, is worse than Austin Powers'), I didn't completely extract.
All I managed to do was cause a hairline fracture of my maxilla, which will
undoubtedly be a source of major difficulty and pain to me in the decades to come.

Worse yet, my application of too much pressure via the forceps caused additional
shattering of the tooth; further attempts at extrication are contraindicated.
That's just as well, because the kind of general-purpose forceps I had available
aren't for dental extraction: this requires a special kind of forcep I hadn't.

I suppose it's just as well: considering the fact that some dentine remained in
the shell of the tooth, its nerve was probably still alive and well. The nerves
connecting teeth to the root canal are extremely sensitive, and interconnected;
what's worse, I could easily have broken my jaw by violently levering the tooth;
therefore, extracting my tooth myself would very likely have been suicide.

So, as far as sockets go, my teeth will be rotting in theirs for some time yet.



Other noteworthy pratfalls during January:
1. Accidentally locked myself out of Windows by attempting to install Ubuntu 16
   alongside, which occurred after it prompted me to designate a BIOS boot
   partition (prior installs didn't manifest the prompt and gave me no trouble).
2. Locked myself out of Ubuntu too by trying to unbrick Windows.
3. Flashed in the backup EFI system partition and boot sector from a disk image,
   reset the partition table with fdisk, thanked lucky stars, began again at 1.
4. Broke shiny new laptop's fragile keyboard connector. Cursed fate.

Incidentally, I had some luck using this procedure to regain access to a Lenovo
IdeaPad 100-151BD 80QQ's UEFI Firmware Configurator after I had set my boot mode
to Legacy Support before installing Ubuntu, which locked me out of the config:
    1. At GRUB operating system selection screen key 'c' for a command line.
    2. normal_exit
    3. initrd
    (initrd fails because you didn't load the kernel, but then Windows Boot
     Manager tries to load in UEFI mode for some reason & presents a screen
     politely offering to give you the FW Config if you give it the ESC key,
     which it doesn't usually when your boot mode is Legacy Support instead
     of UEFI with or without secure boot.)
I ought note: Ubuntu 16 boots the configurator automagically in UEFI boot mode:
the option reappeared when I `sudo update-grub`ed while in UEFI mode.

Speaking of GRUB, here's a boot procedure (in case you've never driven stick):
    1. set root=(hd0,gpt8)
       (Linux is at sda8 on my system)
    2. linux /vmlinuz
    3. initrd /initrd.img
    4. boot
Or, to shift gears into Windows:
    1. set root=(hd0,gpt1)
    2. chainloader /EFI/Microsoft/Boot/bootmgfw.efi
    3. boot

While I'm on the topic, here's how to play a tune at boot time using GRUB:
    A.1. @ boot menu (operating system selection), key 'c' for a GRUB shell.
    A.2. play TEMPO PITCH1 DURATION1 PITCH2 DURATION2 P3 D3 ... ad infinitum
         Pitches are frequencies in Hertz; duration is a fraction of tempo.
or
    B.1. In Ubuntu, Control + Alt + T to open a terminal emulator window.
    B.2. sudo gedit /etc/default/grub
    B.3. Feed the recordable piano by editing the line at the bottom:
         GRUB_INIT_TUNE="325 900 6 1000 1 900 2 800 2 750 2 800 1 900 2 600
5 0 1 500 1 600 1 800 1 750 2 600 2 675 2 750 4"
         # ^- The Amazing Water (NiGHTS)
         GRUB_INIT_TUNE="1024 600 2 650 2 700 2 950 10 900 20 0 10 600 2 650
2 700 2 950 20 1050 10 1100 5"
         # ^- Batman, the Animated Series.
         GRUB_INIT_TUNE="2048 600 5 0 1 600 5 0 1 575 5 0 1 575 5 0 1 550 5
0 1 550 5 0 1 575 5 0 1 575 5 0 1 600 5 0 1 600 5 0 1 575 5 0 1 575 5 0 1
550 5 0 1 550 5 0 1 575 5 0 1 575 5 0 1 900 8 0 4 900 24"
         # ^- classic Batman.
    B.4. Save the file, and then sudo update-grub && sudo reboot
Musical notes within the 500-1500 Hz range tend to be within 100Hz of each other
(therefore ± 50 Hz for flats & sharps) typically, but act strange around 600 Hz.



GNU/Linux is dandy for computer programming, especially data processing, because
it is now (thanks to Ubuntu) easier to use than ever; but it changes so quickly
that I've barely skimmed over the repository before the next long-term support
version has been finalized. The installer wizard also sometimes makes mistakes.
The software repository is slowly morphing into a dime-store, any software worth
using requires considerable technical expertise cultivated @ your great expense,
and if anything breaks then you have to be the fastest teletype gun in the west.

And, because my comments re: Linux may mislead, I'm thrilled about Windows 10.
Have you played Microsoft Flight Simulator recently? Great game.

Pan Fried Programming

(Here's the update -- nothing much is new:
MLPTK: http://www.mediafire.com/file/m3u25i445lqkztb/mlptk-2016-12-16.zip
Source Highlight: http://www.mediafire.com/file/ygxb14ie94cwcuy/mlptk-src-hilite-2016-12-16.zip
Book: http://www.mediafire.com/file/vg439qruq3do90q/mlptk-book-2016-12-16.zip
)

Remedial (adj.): intended to rectify, amend, heal.
Panacea (n., myth): goddess of healing, daughter of Aesculapius.
Pansear (n.): Chili's Pokémon.

This remedial lecture will tersely cover a semester's curriculum,
similar to what you have learnt in your high school algebra class,
comprising the fundamentals of programming with synthetic languages
(those that are above machine code).

If you don't know what computer programming is, I would recommend that you study
some tutorials & encyclopedia articles. Much is available on the WWW (Worldwide
Web). The Web is a part of the Internet, and it is the Web you access from your
Web browser when you navigate to a Web page. You could also try one'a them there
"<name of programming language> For Dummies" textbooks: the "For Dummies" books
are excellent "Cliff's Notes"-style crash courses, and each aims to be similar
to a "101" course in the topic advertised.

To make a beginning with any programming language, all you must know is that a
computer computes: your instructions, issued in the program you write, tell the
machine how to progress from its input or initial state to a resultant output or
final state. These instructions look different in different languages -- some
languages require more or fewer -- but every computer program is an algorithm,
or "recipe for computation."

Computers and computer programs can be characterized as finite state automata.
They're like assembly-line robots who know where to weld each sheet of metal.
Also like assembly-line robots, they are "blind" in the sense that they'll kill
you with the soldering iron should you step between it and the sheet.
Computing machines do what they're told, even when it is particularly stupid:
that's why computer viruses, worms, and computer espionage exist.

In simplest terms, the computer's processing units receive some numbers and an
instruction that says what mathematical operation to execute, then operate:
like a calculator. High-level programming languages are more synthetic, like a
human language is, and comprise such ideas as objects (amalgamations of data) &
functions (modular sub-routines). Compilers or interpreters read these languages
and translate them into machine instructions, simplifying the lengthy series of
instructions necessary to make the calculator execute these difficult tasks.

In a high-level language, there are few technical concerns.
You can begin immediately with the abstract concepts.
Here are some:

VARIABLES
As in algebra, a variable is a name that represents a value.
As in solving a system of equations, values are typically assigned to some vars
and the value of the other variables is computed using the values given.
For example, in Javascript:
    var a = 2;
    var b = a + 2;
The variable <b> is now equal to 2 + 2. Similar operations function similarly.
In Javascript and other very-high-level languages, variables aren't only scalars
and can point at any object. They're like placeholders for procedure.
Although "variable" implies a value stored in memory, and "identifier" only its
mnemonic, the words "variable" & "identifier" used loosely mean about the same.
    "Just don't try that with the Captain."
        -- Geordi LaForge, to Data, _Star Trek: the Next Generation._

POINTERS, REFERENCES
These are important ideas that are abstracted away in VHLLs. A pointer stores an
address in memory, for a later indirect read/write or similar operation. In HLLs
a pointer/reference accesses an object directly instead of copying its value.
You'll rarely have to make the distinction in Javascript; but, for example:
    var a = new Array(1, 2, 3); // a[0] == 1, a[1] == 2, a[2] == 3
    var b = a; // Incidentally, b === a, and that is why in the next line...
    b[0] = 555; // ... b[0] == 555, and now a[0] also == 555!
As opposed to:
    var c = new Array(3); // c is a new array of length 3
    c[0] = b[0]; c[1] = b[1]; c[2] = b[2]; // copy scalar values one-by-one
    c[0] = 0; // c[0] == 0, but b[0] remains == a[0], which remains == 555.
    var d = 2;
    var e = d;
    e = 4; // e == 4, but d is still == 2.
As you can see, operating on an object (such as via array subscript operation)
changes the object, even if the object is pointed by multiple variables.
Likewise, objects passed as the argument of a function are passed by reference:
they aren't simply copied, and operating on the argument within the function is
equivalent to changing the object, whose scope is above that of the function.
Some high-level languages, like C, permit you to explicitly specify what is a
pointer or reference, which eliminates some of this confusion but requires more
exacting attention to detail in your design specification.

STATE
The state of a program is the value of all its variables, the current location
within the instruction set, and the environment of the operating system (or the
interpreter). In Javascript, within Web browsers, the browser typically provides
access to some of its state via the Document Object Model.

CONDITIONAL EXECUTION
Heuristics, or "guesswork," could not exist if there were no way to execute some
different code depending on the state of the program. Furthermore there are some
mathematics you can't write as exactly one set of instructions that produces one
semantic value: for instance, a function defined only on an interval, or an even
root of a positive number. In this circumstance, you are writing branches:
    if (5 > 10) { /* of course, the code in this block never happens. */ }
    else if (2 < 0) { /* neither does this, b/c cond'n is also false. */ }
    else { /* but all of this code happens, because the others didn't. */ }
... One of the branches executes, and the others don't.
The part in parentheses is the "conditional statement:" it's evaluated as either
"true" or "false," like in Boolean logic. 

SCOPE
Identifiers are only valid within the block (curly brackets, or { ... }) where
they were declared. Well, they're supposed to be, anyway: Javascript's "var" is
actually scoped to the enclosing function, not the block (ES6's let & const are
block-scoped). Therefore, if you declare a variable inside a function, you can't
use it outside of the function or within another function. Why would you want
to, anyway? The next time you invoked the function, the value of the variables
you were using in there would change again.

LOOPS
Computers are great at repetition. Loops repeat a set of instructions: they are
typically written as a prefix, conditional, postfix, and body. For example:
    for (var T = 10; T > 0; T--) { alert("T minus " + T); }
... which counts down from ten to one with annoying alert popups.
While or do-while loops have only conditions & bodies.
A loop is an example of an "iterative algorithm." Each time the loop's body is
executed, it's called an "iteration." In computing fractal geometric patterns,
"iteration" means more like "recursion:" which, see below.

FUNCTIONS
A function is a modular segment of your program: a sequence of computation that
is repeated a few times, or can be reused as part of another computation.
Functions are "invoked," or called by name, with values supplied as arguments,
and return a value, similarly to how functions behave in algebra. When declaring
a function, you'd typically write the name of the function followed by its args
in parentheses and then the function body. For example, again in Javascript:
    function intp (N) { return (N % 1) == 0; } // integer predicate
... which returns true if N is probably an integer, or whole number:
    if (intp(5)) { alert("Yes. 5 is probably an integer."); }
    if (intp(5.55)) { alert("This box never appears..."); }
    else { alert("... but this one does, because 5.55 is a floater."); }
(Floating-point numbers are inaccurate, in Javascript as likewise elsewhere.)

RECURSION
A function that invokes itself is a recursive function. Any function invoking an
other function, which subsequently causes the original function to be invoked
again, causes a recursion-like situation properly called mutual recursion.
It is essential to note that _any_ and _every_ recursive function you can write
for a computer to execute can be rewritten as an iterative algorithm. The proof
of this is complex: it follows from Alan Turing's model of finite state automata
and the read-execute model of arithmetic and logic units (CPUs), and basically
asserts that you'd never be able to execute recursion if you couldn't do it by
reading one instruction at a time. In other words, each time your function calls
itself again, it's simply storing its state in memory temporarily while the
machine executes a fresh copy: after the copy is done, the former state is re-
loaded, and execution proceeds from there. This is achieved with stacks: data
structures that grow in size as more is "pushed" onto them, and shrink when some
is "popped" off of the top.

OBJECTS
An object is a collection of data that together comprise a compound datum. That
is, when
data are related to one another, they can be envisioned as a "shape" or "motion"
that is the sum of its parts. For example, a vector has direction and magnitude;
an individual has a first and last name; a parser has an input stream, a stack,
and a procedure. In Javascript, you'd write something like this:
    function Blah (newz) { if (newz) { this.z = newz; } return this; }
    Blah.prototype = new Object();
    Blah.prototype.z = 555;
    Blah.prototype.tell_me_z = function () { return this.z; }
    var a = new Blah(222), b = new Blah(); // a.z == 222; b.z == 555.
... syntax varies among languages. Essentially an object is a data structure
containing some members ("variables" attached to the object, like Blah::z above)
and, if the object is a class, some methods (functions, like ::tell_me_z).

Om. Wherefore ore? Om. Om. Ore… art thou ore?

Dynamic recompilers: gold? Maybe. But what is the significance of recompilation
in the fast-paced, high-tech world of today? Does, indeed, my idea present any
useful information -- or am I merely setting forth in search of lost CPU time?
What am I doing? What am I saying? Where am I going? What am I computing? I've
built the swing, but what about the swang? After the swang, whither the swung?
(Paraphrased from sketches performed in "Monty Python's Flying Circus.")

This lecture provides, in abstract, my concept design of a specialized Turing
machine: the compound compiler/disassembler. It is feasible via my published
work, Quadrare Lexema, which is in the public domain as is my essay.

(Actually, I think these are more appropriately called "dynamic recompilers.")

If you're just joining me, Quadrare Lexema is a Bison-like parser generator
written in the Javascript programming language for use in Web browsers (FF3.6).
It is portable to any browser employing a modern, standards-compliant Javascript
interpreter; such as Chromium, Chrome, Opera, and (hypothetically) Konqueror.
I have not yet had an opportunity to acquire a copy of Microsoft's Web browser,
Internet Explorer, because I've had no opportunity to acquire a copy of Windows
(and couldn't pay for it, even if I did). QL produces a parser like GNU Bison;
which is to say, an LALR parser for Backus-Naur Format context-free grammars.
For more information, visit the Free Software Foundation's Web site; and/or the
Wikipedia articles treating Backus-Naur Format, regular grammars, LR parsers,
and Turing machines.

Because I have already described my work with the hundreds of pages of code and
the few pages of abstract condensed lectures, this is curtate. However, observe
that my foregoing proof constructs the basis upon which this concept rests.
Besides my work, bachelors of computer science worldwide have demonstrated, in
their own dissertations, proofs similar to the one I have presented you by now.

Again: concept, not implementation. Toward that end lies the specification of a
Turing-complete virtual machine, wherein there be dragons. I will write you one
soon, but I am pending some subsequent blueprints on my acquisition of better
laboratory equipment. In the meantime, I'm pretty sure you can write one w/ QL.

Obviously: a compiler translates the syntax of a human-interface language into
machine instructions; parser generators can be employed to construct compilers;
and such generated parsers can be attached to other parsers in a daisy-chain.

So, make a compiler that converts the abstract instructions it reads into some
progressively simpler set of other instructions, by unshifting multiple tokens
back onto the input stream during the reduction step. You could either read them
again and thereby convert them to yet other computations, or simply skip-ahead
within the input stream until it has been entirely disassembled.

Consider: you're writing an artificial intelligence script, somewhat like this:
	FOR EACH actor IN village:
		WAKE actor,
		actor ACTS SCRIPT FOR TIME OF DAY IN GAME WORLD.
(This syntax is very similar to the A.I. language specified for the Official
 Hamster Republic Role-Playing Game Creation Engine (O.H.R.RPG.C.E.).)
Your context-free grammar is walking along der strasse minding its own business;
when, suddenly, to your surprise, it encounters the ambiguous TIME OF DAY symbol
and the parser does the mash. It does the monster mash. It does the mash, and it
is indeed a graveyard smash in the sense that your carefully-crafted artificial
intelligence -- the very stuff and pith of whose being you have painstakingly
specified as a set of atomic operations in an excruciating mathematical grammar
-- has now encountered a fatal syntax error and you would like nothing better
than to percuss the computer until it does what you say because you've used each
!@#$ free minute learning how to program this thing for the past fifteen !@#$
years and God Forfend(TM) that it should deny you the fruit of your labor.

Well, actually, the OHRRPGCE does solve that problem for you, which is nice; and
maybe it even solves it in exactly this way, which is cool; but I'll continue...
Let's say your parser lets you unshift multiple symbols upon reduction, like QL
(or like any parser that does a similar thing, of which there are surely many);
then "TIME OF DAY" could reduce to an algorithm that does nothing more than find
out what time of day it is and then unshift a symbol corresponding to the script
that most closely matches what time of day it is.

In other words, you've specified the possible computations that the language can
execute as a set of atomic operations, then decomposed the abstract instructions
into those atomic operations before executing them. The functionality is similar
to that of the C preprocessor's #include directive, which directs the CPP to add
the source code from the #included header verbatim like a copy-paste operation.
That's the most straightforward application of this thought, anyway: copy-and-
pasting artificial intelligence scripts into other AI scripts.

Another thought: let's say your compiler is, in addition to its named capacity,
also a virtual machine that's keeping track of state in some artificial world.
So, it's two virtual machines in one. Rather silly, I know, but why not follow
me a bit deeper into the Turing tar-pit where everything is possible and nothing
is NP-complete? After all, "What if?" makes everything meaningless, like magic.

So, again, it's trucking along and hits some instruction like "SLAY THE DRAGON;"
but the dragon is Pete, who cleverly hides from the unworthy. Now, you could say
just ignore this instruction and carry on with the rest of the script, and maybe
even try to slay the dragon again later after you have slung Chalupas out the TB
drive-through window for a few hours. You could even say travel back in time and
slay the dragon while there's yet a chance -- because wholesale violation of the
law of causality is okay when you can't afford to lose your heroic street cred.
But why not check the part of the script you've yet to execute, to see if maybe
there's something else you need to do while you're here, and then try doing it
while you keep an eye out in case Pete shows up?
I mean, you could write that check as part of the "SLAY" atom; or as part of the
next action in your script that happens after the "SLAY" atom; but why not write
it as part of what happens before the "SLAY" atom even executes?

Also, what if you wanted the compiler to try two entirely different sequences of
symbols upon computation of a symbol reduction? Or one reduction modifies the
input stream somehow, while another does not? Or to reorganize/polymorph itself?

All this and more is certainly possible. QL does a big chunk of it the wrong way
by gluing-together all sorts of stuff that could have been done by making your
symbols each a parser instead. Oh, and iterators in Icon also do it better. And
let us not forget the honorable Haskell, whose compiler hails from Scotland and
is nae so much a man as a blancmange. "Learn you a Haskell for great good!" --
this is both the title of a book and an amusing pearl of wisdom.

The only thing that bothers me about this idea is that the procedure necessary
to polymorph the code at runtime seems to be (a) something that I ought to be
doing by nesting context-free parsers, rather than writing a contextual one; and
(b) probably requires a heuristic algorithm, which would be awfully slow.

I haven't developed this idea very well and there's some space left.
Here is a verbatim excerpt with an example Haskell algorithm, for no reason:

  From The Free On-line Dictionary of Computing (30 January 2010) [foldoc]:

  Quicksort
  
     A sorting {algorithm} with O(n log n) average time
     {complexity}.
  
     One element, x of the list to be sorted is chosen and the
     other elements are split into those elements less than x and
     those greater than or equal to x.  These two lists are then
     sorted {recursive}ly using the same algorithm until there is
     only one element in each list, at which point the sublists are
     recursively recombined in order yielding the sorted list.
  
     This can be written in {Haskell}:
  
        qsort               :: Ord a => [a] -> [a]
        qsort []             = []
        qsort (x:xs)         = qsort [ u | u<-xs, u<x ] ++
                               [ x ] ++
                               qsort [ u | u<-xs, u>=x ]

     [Mark Jones, Gofer prelude.]

Now that I've burst my metaphorical payload, in regard to theory, I think future
lectures shall be of a mind to walk novices through some of the smaller bits and
pieces of the algorithms I've already written. There aren't many, but they're a
real pain to translate from source code to English (which is why I haven't yet).
I might also try to explain QL in further detail, but IDK if I can make it much
more straightforward than it is (no questions == no idea if you comprehend)...
and, besides, I've already written more than ten pages about it & it's tedious.

Depending on how much Pokemon I must play to clear my head after overhauling the
parser this year, and how much time is left after that to finish the shell's new
scripting language, I'll aim to write some of those simplified lectures by July.

On Loggin’.

The post title, "On Loggin'," is a pun on the algorithmic time-complexity of an
ordered binary tree sort (n insertions, each O(log(n))): O(n * log(n)).

This lecture provides my advice in regard to sorting data by using binary trees.
(From Bouvier's Law Dictionary, Revised 6th Ed (1856) [bouvier]:
 ADVICE, com. law. A letter [...] intended to give notice of [facts contained].)
My lectures are in the Public Domain, and are free of charge.
"By order of the Author." - Mark Twain, _Huckleberry Finn._

A tree is a directed acyclic graph. ("Tree" (1998-11-12) via FOLDOC 30jan2010.)
Other sort methods besides tree sort include quicksort, bubble sort, heap sort,
and insertion sort. ("Sort" (1997-02-12) via FOLDOC, 30th January 2010.)

This pattern can be seen almost everywhere that data are represented as discrete
notional objects: for example, the directory trees contained within the file
systems of your personal computer, the technology/skill trees that are likely to
present themselves in your favorite video game, and in any computer program that
maintains the logical order of a data set as new data are added. These aren't
necessarily treelike data structures; but, because the perceived "shape" of the
data is a tree, they're probably similar if coded in object-oriented language.
I imagine a file system as "what an object even is;" although a computer program
is volatile, whereas files are nonvolatile, the idea is much the same.
See also: object-oriented programming, procedural vs. functional languages, file
systems, {non,}volatile memory such as RAM and EEPROM, sort algorithms.

Of course, treelike structures aren't always the optimum schematic to represent
a hierarchical assortment of data; definitively, like all other object-oriented
designs, they're mere abstractions employed to delineate pieces of a contiguous
unidimensional memory space (the Turing model of a logic machine considers RAM
to be just a big empty pair of set brackets; data are populated & operated-upon
arbitrarily, but the container is merely a set) and to signify correlation.
An object is a collection of related data, which together comprise one compound datum.
See also: pointers, heap memory, set theory, discrete math, and Boolean logic.

¶ The following (til the next pilcrow) is nearly a verbatim excerpt from FOLDOC.
The data stored in a tree is processed by traversal. The algorithm begins at
root node, transforms or collects or computes against the data, and then repeats
itself for the root's child-nodes ("branches") in some specific order. Three
common traversal orders are pre-order, post-order, and in-order traversal.
For the binary tree illustrated below:
        T
       / \
      I   S
     / \
    D   E
A pre-order traversal visits the nodes in the order T I D E S.
A post-order traversal visits them in the order D E I S T.
An in-order traversal visits them in the order D I E T S.
¶ "Traversal" (2001-10-01) via FOLDOC, ed. 30th January 2010.
See also: recursion, the call stack (subroutine invocation), digraph traversal.

To encode a tree walk via recursive algorithm is straightforward, and is how the
concept of data set traversal is introduced to novitiate computer programmers.
(Well, after they've learned how to loop over an array with "for.")

Succinctly:

    function put (pNode, value) {
        if (pNode.isEmpty()) { pNode.datum = value; }
        else if (value < pNode.datum) { put(pNode.bLeft, value); }
        else if (value > pNode.datum) { put(pNode.bRight, value); }
    } // Ordered insertion.

    function tell (pNode, value) {
        if (value < pNode.datum) { return tell(pNode.bLeft, value); }
        else if (value > pNode.datum) { return tell(pNode.bRight, value); }
        else if (pNode.isEmpty()) { return null; } // Javascript null, not C's NULL
        return pNode;
    } // Seek pointer to inserted.

_Any_ recursive algorithm can be reduced to an iterative algorithm, provided you
can use variable-length arrays to simulate the function of the call stack, which
stores the arguments to the subroutine you invoked and the address to restore to
the instruction pointer when the subroutine returns. That is, the call stack is
like bookmarks or tabbed indices to tell the routine where it was before a jump.
Replace the "instruction pointer and arguments" model of a stack frame with any
constant-width data set that remembers what you were doing before you jumped to
the next node in the digraph, and, naturally, you can remember the same stuff as
if you had utilized the convenient paradigm that is recursivity. (Stack frames
don't really all have to be the same size, but a definite frame width spares you
from doing yet another algorithm to parse frames. See: network protocol stacks.)

The "stuff you remember" is the state of the finite state automaton who tends to
the mechanism whereby the machine knows which instruction to execute next.
Recursion provides this automaton "for free," because it crops up so frequently;
but, for whatever reason (such as the 3000-call recursion limit in Firefox 3.6),
you might want to write a tree sort without using recursion at all.

Gentlemen, behold...
    corm = GenProtoTok(
        Object,
        function () {
            this.tree = new this.Hop();
            return this;
        },
        "Hopper",
        undefined,

        "locus", undefined,

        "seek", function (u_val) {
            while (this.locus[2] != undefined) {
                if (u_val > this.locus[2]) { this.locus = this.hop(1); }
                else if (u_val == this.locus[2]) { return false; }
                else { this.locus = this.hop(0); } // u < loc[2] || u == undef
            }
            return true;
        }, // end function Hopper::seek

        "tellSorted", function () {
            if (this.tree == undefined) { return undefined; }
            var evaluation = new Array();
            for (var orca = new Array(this.tree), did = new Array(false);
                orca.length>0;
            ) {
                this.locus = orca.pop();
                if (did.pop()) {
                    evaluation.push(this.locus[2]);
                    if (this.locus[1] != undefined) {
                        orca.push(this.locus[1]);
                        did.push(false);
                    }
                } else {
                    orca.push(this.locus);
                    did.push(true);
                    if (this.locus[0] != undefined) {
                        orca.push(this.locus[0]);
                        did.push(false);
                    }
                }
            }
            return evaluation;
        }, // end function Hopper::tellSorted

        "hop", function (index) {
            if (this.locus[index]==undefined) {
                this.locus[index] = new this.Hop();
            }
            return this.locus[index];
        }, // end function Hopper::hop

        "addUnique", function (value) {
            this.locus = this.tree;
            if (this.seek(value)) { this.locus[2] = value; return true; }
            return false;
        }, // end function Hopper::addUnique

        "Hop", GenPrototype(Array, new Array())
    );
... corm!

Here is how corm works: it's an Array retrofitted with a binary tree algorithm.

Each node of the tree is an Array whose members, at these indices, signify:
    0. Left-hand ("less-than") branch pointer.
    1. Right-hand ("greater-than") branch pointer.
    2. Value stored in the node.

Values are added via the corm object's addUnique method, which resets a pointer
to the algorithm's location in the tree and then seeks a place to put the datum.
Seeking causes a node to be created, if no node yet contains the datum; so, the
corm::seek method's name is misleading, but seeking is exactly what it does.

In-order traversal is executed by corm::tellSorted which returns the sorted set.
Recursion is simulated using a state stack whose frames each contain a pointer
to the "previous" node and a boolean value that signifies whether its left child
has been visited. Here is the main loop, step-by-step, in natural English:
    for (var orca = new Array(this.tree), did = new Array(false);
        orca.length>0;
    ) {
"Given the stack, which I consider until it is exhausted..."
        this.locus = orca.pop();
"I am here." -^
        if (did.pop()) {
"If I did the left branch already,"
            evaluation.push(this.locus[2]);
"then this value occurs at this location in the ordered set;"
            if (this.locus[1] != undefined) {
"if the right branch exists,"
                orca.push(this.locus[1]);
"visit the right child on the next loop iteration"
                did.push(false);
"(and, naturally, I haven't visited the right child's left child yet)"
            }
"."
        } else {
"If I haven't done the left branch already,"
            orca.push(this.locus);
"then I have to visit this node again when I come back"
            did.push(true);
"(except I shall have visited the left child by then);"
            if (this.locus[0] != undefined) {
"if the left branch exists,"
                orca.push(this.locus[0]);
"visit the left child on the next loop iteration"
                did.push(false);
"(and, naturally, I haven't visited the left child's left child yet)"
            }
"."
        }
"... I consume pointers and bools, doing all that there stuff"
    }
"."

And that is how to do everything I did with the Hopper data structure
(which is here renamed corm, because I like corm) in quadrare-lexema.js.



Full disclosure: whereas; although I do put on women's clothing; and although
indeed I sleep all night and work all day; and, in addition, despite that I have
been known, of an occasion: to eat my lunch, go to the lavatory, pick wild
flowers, hang around in bars, and want to be a girlie; my Wednesdays have not
historically been known to involve shopping or buttered scones, I don't know how
to press a flower, I have neither suspenders nor heels nor a bra (but thanks for
offering), and am not, in fact, a lumberjack.