Software in the Age of Sampling


I've just returned from Øredev in Malmö, Sweden, where I delivered Software in the Age of Sampling twice: the first time to a healthy turnout, the second to a nearly empty theater, in order to get a take without demo snafus. The video below is Take Two.

Abstract

Over the last generation or so, software development has changed profoundly, but some of these changes occurred so slowly they have barely been recognized.

Software was once built by skilled but peculiar artisans, who meticulously crafted their original, green-fields commissions from first principles. Today, those frontiers are gone, and very few of us build anything new from the ground up. Instead, existing resources are rehashed, recombined, and remixed to produce "new" mash-ups based upon the work of others. It's collaborative, this 21st-century scavenging, and it has become a state-of-the-art approach to software development.

These changes in the way we produce software have much in common with the changes that have transformed the music industry over the last thirty years. The garage band era of original composition has given way to the direct borrowing of scraps of other people's pieces and a cavalier disregard for traditional originality.

This session will explore how software developers in the age of sampling have as much in common with contemporary high-tech music "producers" as they do with traditional engineers.

The Cobbler's Children


The cobbler's children are the last to be shod.

So it is too for software developers these days. I'm reminded of this hoary trope when I think about the state of twenty-first-century programming languages and technology, because it is still very much mired in the 95-character ASCII/EBCDIC punchcard-era practices of the fifties and sixties.

[Image: Punchcard]

This reverie was triggered by a chance Twitter sighting of a 'blog post by Canadian Computer Scientist Greg Wilson on why Donald Knuth's vision of Literate Programming had failed.

Greg's piece resonated with me because it addresses two themes I've trotted out myself on a couple of occasions. Such is the nature of this sort of triggering.

The first is the enduring irony that, despite having delivered all manner of post-ASCII visual GUI bliss and razzle-dazzle to countless users of all descriptions, programmers themselves must still trade in primitive text-based program representations that would be familiar to time-traveling hackers from the Nixon Administration. With a handful of exceptions, today's production code would be quite at home on a keypunch (sans case). These textual accounts of programmer intent remain the canonical, authoritative representations of code, from which any supporting executable binary accounts must be derived. For most, the vast majority, in fact, programs are ASCII characters that live in ASCII files, in directories, which, these days, might be under some sort of source control, but still...

Now, one of Greg's points in his "ASCII" passage was to remind us that Knuth's vision of integrating complex typeset annotations like equations and tables with code, and binding them to the codebase, is even now only clumsily possible, using embedded HTML in "doc" comments.

And, it's true, we don't do this well. In general, tying diagrams, equations, and commentary together with code, presenting them conveniently alongside it, and keeping them consistent as the code changes, is just not something one sees well supported. And given the polyglot nature of so many applications these days, this ASCII common denominator too often places important cross-language artifacts beyond the reach of refactoring tools that might, given a richer representation, have had a better shot at keeping such support material in sync. Think XML configuration, definition, and data files, even schema.
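To make the "clumsily possible" point concrete, here is a made-up Java sketch (the method and its figures are mine, not Greg's or Knuth's) of the state of the art: typeset material wedged into a doc comment as raw HTML, with nothing binding it to the code it purports to describe.

    /**
     * Computes the balance after compound interest.
     * <p>
     * The doc comment can carry "typeset" material only as embedded HTML:
     * <pre>  A = P (1 + r/n)<sup>nt</sup>  </pre>
     * <table border="1">
     *   <tr><th>Symbol</th><th>Meaning</th></tr>
     *   <tr><td>P</td><td>principal</td></tr>
     *   <tr><td>r</td><td>annual rate</td></tr>
     * </table>
     * Nothing checks that this markup still agrees with the code below
     * after the next refactoring pass.
     */
    public static double balance(double principal, double rate,
                                 int periodsPerYear, int years) {
        return principal * Math.pow(1.0 + rate / periodsPerYear,
                                    (double) periodsPerYear * years);
    }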

[Image: Cobbler]

The cobbler's children lament, then, is born of the envy one feels over the fact that so many of our customers get to enjoy much richer representations of the important objects in their domains than we programmers do.

This theme has been showing up of late when I talk about the ubiquitous Ball of Mud phenomenon. These talks have come to have several "DVD" (select your own) endings, one of which explores the idea that one source of our perception of muddiness is the primitive tools and representations we currently employ to navigate and visualize our codebases.

This theme, in turn, is partly a result of working with the refactoring tools developers at UIUC a few years back, where I came to believe that program representation is one of the most interesting and important unsolved problems in programming language research. By this, I meant getting beyond 95-character ASCII into a richer, round-trip, AST-level representation amenable to analysis, refactoring, and annotation. Such representations would need to be cast somewhere beneath text, but above the level of, say, Java .class files.

This is neither a new nor an original idea, but it remains, alas, unfinished business. Ever wondered why refactoring tools for C and C++ are so slow in coming? This is part of the problem.
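As a thumbnail of what I have in mind, consider a sketch along these lines. Every name in it is invented for the occasion; it is a gesture at the idea, not a description of any existing tool.

    import java.util.*;

    // Hypothetical: a program stored as annotated AST nodes rather than
    // characters. Nodes carry stable identities, so analyses, diagrams,
    // and commentary can be keyed to a node and survive edits around it.
    abstract class Node {
        final UUID id = UUID.randomUUID();             // stable across edits
        final List<Node> children = new ArrayList<>();
        final Map<String, Object> annotations = new HashMap<>();
    }

    class MethodNode extends Node {
        String name;   // a rename refactoring updates this one field;
                       // annotations keyed to the UUID come along for free
        MethodNode(String name) { this.name = name; }
    }

Text, in such a world, becomes just one view projected from the representation, rather than the single authoritative account.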

The second reverie that Greg triggered, partially reinforced by another Twitter citation, this Redis Road Map from Paul Smith, was this one:

One of the underlying premises of the now discredited waterfall division of labor between skilled architect/designers and rote programmer/coders was the implicit, presumably self-evident assumption that the code was unreadable, and that design must therefore be conveyed by "higher-level" descriptions like documents and cartoons.

Nowadays, a fundamental article of faith underlying the craftsmanship, clean code, and iterative, emergent design communities is that the code itself can be wrought to carry a dual burden: conveying to the compiler a comprehensive set of execution instructions, while at the same time communicating, at a suitable level of literacy and abstraction, the details and intent of the artifact's design. A lot of stock has been placed in this conceit, so one can only hope it holds up.

It is clear that code can do far better than the woefully minimal degree of adequacy that many shops tolerate. But the question of whether, even at our best, meticulously tended source code will be enough remains an open one.
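What shouldering that dual burden looks like is easy enough to suggest with a small before-and-after of my own devising:

    class DateChecks {
        // Before: the compiler is fully served; the reader is not.
        static boolean check(int d, int m) {
            return d >= 1 && d <= 31 && m >= 1 && m <= 12;
        }

        // After: identical instructions to the compiler, but the names
        // now carry the design: what is validated, and within what bounds.
        static boolean isPlausibleCalendarDate(int dayOfMonth, int month) {
            boolean dayInRange = dayOfMonth >= 1 && dayOfMonth <= 31;
            boolean monthInRange = month >= 1 && month <= 12;
            return dayInRange && monthInRange;
        }
    }

Whether this sort of care scales to the hard cases is, as noted above, the open question.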

The Roots of Refactoring


I'd been of the mistaken belief that the term refactoring, while used in casual parlance during the 1980s, hadn't made it into print until the publication of the seminal paper by Bill Opdyke and Ralph Johnson in 1990. Opdyke's 1992 thesis gave a more detailed account of this work.

Thanks to Bill Wake for pointing out the fully-formed treatment of the style of program revision that came to be known as refactoring in Leo Brodie's 1984 book Thinking Forth. And right there, on page 181, is what is, to the best of my knowledge, the first published appearance of The Word itself.

Forth enjoyed a burst of popularity during the early 1980s, before the object boom. I still own a copy of Starting Forth. Alas, this was the first time I'd seen the treatment of program factoring in Thinking Forth.

Forth, of course, remains a ubiquitous part of our everyday lives, in the guise of a proprietary variant, PostScript, that hums along under the hood in so many of the world's printers (among other places).

A tip-of-the-hat to Leo, and the refactoring pioneers of Forthdom...

"Real" Software Engineering


This video of a presentation by Glenn Vanderburg entitled Real Software Engineering came up last week during one of those periodic flurries of contrary opinion on Twitter regarding whether or not software development is engineering. Glenn's 51-minute talk explains why, after having made a painstaking, convincing case that what we do is utterly unlike what any other known engineering discipline does, he nonetheless aligns himself with the "pro" engineering perspective.

[Video: Real Software Engineering - Glenn Vanderburg, from Engine Yard on Vimeo]

It's a well-prepared and well-delivered piece, and well worth your time. He opens by acknowledging something that anyone who has been in this field for long already knows: the kind of Software Engineering that has been taught in Computer Science programs for the last forty years has been hopelessly, comically out of touch with day-to-day software development reality.

His opening examination of where "Software Engineering" went astray is particularly compelling; he does so by going back and examining some primary sources. For instance, the legendary 1968 NATO meeting that established the field had some moments that seemingly foreshadowed today's Agile orthodoxy, before the field headed off into the weeds for a generation the next year. Winston Royce has evidently been saying, for 42 years, that his original waterfall paper has been tragically misunderstood. Glenn makes a good case that this is so. You may, of course, actually read the paper and decide for yourself. Parnas's A Rational Design Process: How and Why to Fake It is here too. Glenn has some fresh background on Parnas's use of the term "rational".

[Image: Galloping Gertie]

I thought I caught a welcome, albeit uncredited, whiff of Petroski in the second part of the talk, where he describes how science, art, mathematics, craft, tradition, and empiricism guide what real engineers really do. And no talk on the limits of engineering would be complete without an appearance from Galloping Gertie.

I particularly enjoyed Glenn's treatment of the perennial and enduring misperception of the roles of engineers and coders that the industry inherited from its lengthy flirtation with the waterfall model. This conceit went something like this: the "real" engineering effort involved in engineering software is in the design, not the implementation. Hence, design must be something distinct from, and something more demanding than, mere coding. The software engineer's job, then, was to produce, from a given set of requirements, some artifact that could be "thrown over the wall" to the coders for "construction".

Of course, this analogy is off. The design of the program itself is the part of this process that is akin to automotive or aircraft design. Construction, building, or fabrication is the process of reproducing the shrink-wrapped media, or invoking and executing the application over the web. For aeronautical engineers, fabricating each individual aircraft remains quite expensive. Though software engineering began during an era of pocket protectors and mechanical pencils, when CPU cycles were still scarce, fabrication for us now is essentially free. Given this perspective, Glenn continues, the folks dismissed as blue-collar coders in the waterfall pecking order are the real engineers, because engineering is about making stuff that works. And it is with this contention that Vanderburg rests his case.

Which is fine, as far as it goes. I guess, after all that, I feel less obligated to align myself with the engineering fraternity than does Glenn, given how different making software turned out to be from the other disciplines he showcased, but that's probably a matter of taste. I'm just not sure I have a dog in this fight. There are lots of disciplines besides engineering that deliver stuff that works: cinema, pharmaceuticals, agriculture, bakeries, blacksmiths, composers ... I could go on. What might we yet learn by analogy from disciplines outside our mathematics and engineering roots?

Of course, the other great unspoken truth about the software engineering tradition in Computer Science has been that software engineering has always really focused on the admittedly considerable challenges associated with managing, organizing, and yes, even leading a large, infantry-scale industrial organization whose objective it is to produce software, often to the detriment of the more technical issues of interest to those in the trenches.

Ironically, one of the successes of the Agile movement has been to encourage the emergence of more autonomous, "commando"-scale units within the kinds of World War II-era waterfall shops that had dominated the industry.

Indeed, these hierarchical corporate traditions, the clashes of these kinds of cultures with the agile mindset, and the daunting demands of scale are all issues that might have merited additional examination, and that continue to contribute to the perception that software engineering is out of touch.

Tests and Double-Entry Bookkeeping

[Image: Double-Entry Bookkeeping]

I came across this interview with legendary Code Hygienist Bob Martin, and want to bookmark it here before I lose it.

Among the highlights was an insight that test-driven development can gain suite-cred when sold by analogy with double-entry bookkeeping. I was reminded of the early PLoP conferences, where an entrepreneur and rogue scholar, Dan Palanza, promoted the idea of double-entry bookkeeping as a pattern that he was convinced has applicability beyond the domain of accountancy. It was gratifying to see Bob showcase this useful connection between it and TDD.
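As I read the analogy, the production code and its tests are the two columns of the ledger: each states the intended behavior in an independent notation, and a failure to reconcile signals an error in one or the other. A minimal sketch, in JUnit 4 style, with names of my own invention:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // One entry in the ledger: the production code.
    class Till {
        private long cents;
        void deposit(long amount) { cents += amount; }
        long balance() { return cents; }
    }

    // The second, independently stated entry: the test.
    // The two must reconcile, as debits must reconcile with credits.
    public class TillTest {
        @Test
        public void depositsAccumulate() {
            Till till = new Till();
            till.deposit(250);
            till.deposit(125);
            assertEquals(375, till.balance());
        }
    }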

The discussion of codebase health metrics was useful too. Here's some information on the Braithwaite metric mentioned in the interview.

Refactoring's Original Sin: Part II

[Image: Makeover]

Part I of this piece made the case that Refactoring's Original Sin was having allowed Refactoring to come to be perceived as completely cosmetic elective surgery. Worse yet, the resulting improvements benefited only the innards of your application, not the parts the users see.

Even as early as the early nineties, there was a recognition that allowing refactoring to become pigeonholed as an unproductive polishing pass could do an injustice to the cause of the kind of iterative, incremental design process that we'd become accustomed to when working in Smalltalk.

We did make several attempts to characterize this approach as an integrated, iterative, incremental process in which design pervaded the object lifecycle. A few years earlier, Lehman and Belady had observed that while design was an entropy-decreasing process, maintenance was an entropy-increasing process, one that inevitably eroded structure.

Yet, somehow, the Smalltalk image had avoided this fate, and the development style we'd vicariously absorbed working with Smalltalk was seemingly allowing our code to do so as well. The way this worked was that, as you added new code, you'd keep an eye peeled for opportunities to better integrate it with the code around you. And then ... take them. This casual, continual shift in focus from function to form and back again helped to forestall the kind of death-by-duct-tape that seemed so unavoidable when working on classical code. And I think it is fair to say that it was by cultivating these form-focused improvements to the structure, clarity, and presentation of the code by hand that we learned what refactorings were.

For instance, we trotted out an extended abstract, A Fractal Model of the Lifecycles of Reusable Objects, at a few workshops during the early nineties. Conceived as a wry variation on Boehm's Spiral Model, it sketched a three-phase scheme whereby an Initial Prototype formed the nucleus of a system that was organically grown through a succession of fine-grained, opportunistic Expansion and Consolidation phases. Expansion had an exploratory focus, and explicitly tolerated a certain degree of structural erosion in the cause of mastering functionality.

Consolidation, on the other hand, was a form-focused phase in which opportunities to harvest the benefits of hindsight were exploited to repair any damage done during prior expansions, and to ensure the application was poised to address the challenges to come. Anywhere in the universe, counteracting entropy requires an investment of countervailing energy, and it was here that entropy reduction was done.

By PLoP '94, Bill Opdyke and I had put together a somewhat more detailed treatment of this material that tried to cast Refactoring in the context of this kind of process: Lifecycle and Refactoring Patterns that Support Evolution and Reuse. One goal of this work was to once again underscore the idea that refactoring should normally be an integral part of incremental development, not a coarse-grained recovery project. Which is not to say that such an effort might not be called for if a codebase had gone badly to seed, but that with conscientious cultivation the need for major reclamation efforts could be prevented.

[Image: Software Tectonics]

Our PLoP '95 Software Tectonics idea can be seen in this light: entropic stress accumulating in a system risks release with increasingly severe power-law consequences.
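To put a speculative face on the analogy (this is my gloss, using seismology's Gutenberg-Richter law as the template, not notation from the PLoP paper itself):

    % Let N(s) denote the frequency of "code quakes" of severity at least s.
    % A power law
    \[
        N(s) \propto s^{-\alpha}, \qquad \alpha > 0,
    \]
    % implies that small releases of accumulated entropic stress are routine,
    % while catastrophic ones, though rare, remain far more likely than any
    % thin-tailed (e.g., exponential) model would predict.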

By then, Refactoring was poised to become a household word, among hackers, anyway. John Brant and Don Roberts were already working on the first practical automated refactoring tool for Smalltalk; Martin Fowler was soon to embark on a landmark cataloging effort that would, in turn, give Erich Gamma a roadmap that would lead to the refactoring facilities of the Eclipse JDT; and Kent Beck and his cohorts were soon to deem Refactoring one of the pillars of eXtreme Programming.

So, one of the reasons that Martin's TradableQualityHypothesis piece resonated with me so deeply is that this is precisely the kind of perception battle I still find myself in when discussing Refactoring to this day. Our inability to have framed it as an integral, fundamental, indispensable element of a continuous, organic design and development process has left code quality, and the kind of poised, responsive, adaptive posture in the face of change that it makes possible, vulnerable to the remorseless whims of the budget axe.

There is still work to be done.

Brownfield Software


I spent an informative 51 minutes yesterday watching this InfoQ video presentation on Brownfield Software: Industrial Waste or Business Fertilizer by Josh Graham of Thoughtworks.

Josh describes an intensive, heroic effort to bring a large, festering, vintage 1997 legacy Java system under control.

Among the highlights: his admonition that, no matter how great the temptation may be, one should avoid the urge to rip out those homebrew frameworks and replace them with more modern, standard ones.

The testimonials from the theretofore downtrodden hackers who'd been pinned down in the trenches on this project were touching as well.

Oh yeah, and they used AOP too. For logging, naturally....

They also spoke of the indispensability of engineers with superior textual skills in an environment like this, which, I gather, means the kinds of folks who for whatever reason are able to make some sense out of code most of us would find hopelessly inscrutable.

Kakonomics


Came across this blog entry by Gloria Origgi, courtesy of a tweet from @ronjeffries, on Kakonomics: or, The Strange Preference for Low-Quality Outcomes.

The piece opines that in many systems, a state of collusive mediocrity can emerge and persist between agents, whereby an exchange of low-quality outcomes and minimal rewards is mutually tolerated as part of an Evolutionarily Stable Strategy.

It's easy to imagine this sort of low-expectations relationship emerging between an IT organization and mud maintainers, and how it might account for the kind of neglect some codebases must seemingly endure.

The relationship between coder and code itself might lend itself to this sort of analysis, along the lines of Joshua Kerievsky's notion of Sufficient Design.

One wishes the author had opted for a different name. I can almost hear every preschooler (and preschooler-at-heart) in the room giggle each time I read it.

Refactoring's Original Sin: Part I


Before there was Refactoring there was refactoring. There were people who refactored just fine before Refactoring was invented.

I remember right well.

I remember back during the late 1980s, when the OOPSLA-era Object boom was in full bloom. I had joined a new research group led by Ralph Johnson at the University of Illinois at Urbana-Champaign. And some of us were availing ourselves of the privilege of getting the best apprenticeship in object-oriented programming a body could ever get: spelunking around the PARC Smalltalk-80 codebase.

[Image: Gauguin and Tahiti]

Here we found code that had clearly been crafted by insanely bright, committed pioneers who had stuck with it, lived amongst it, inhabited it, as they acknowledged themselves, polished it and cultivated it. And incrementally improved it. It was clear to anyone who visited the place, and it felt like a place, that the kind of code you were browsing was the fruit of a patient, sustained, collaborative process of cultivation. All the code was there. You were free to roam, and explore as you pleased.

It was a place that enticed you to join it. Smalltalk development enticed one to "go native", like Gauguin's Tahiti. And you grew and cultivated your code next to theirs, and when you saw an opportunity to improve it, you did. And your code grew better too.

Of course, even then, during the dusk of the Cold War, not every working programmer left every first draft untouched. If you were overconscientious, you might routinely rework such drafts as part and parcel of bringing an implementation into being. Sometimes these ideas would come into focus only once the code was actually on-the-air. In those days, when you wanted to improve the design of a chunk of existing code, you said you were going to clean it up, or take a final polish pass over it, or maybe tidy it up a bit. An enlightened boss might tolerate such an indulgence at the end of a successful push. But even then, the suspicion that you might be engaging in self-indulgent lily-gilding was never far behind.

Often, your focus would be strictly on improving the code's clarity, structure, or aesthetics, without changing the way it worked at all. By analogy with number theory, one could casually describe a reworked or improved version as being better factored than its predecessor. It became natural to describe the newer take as a refactored version of the older one, and, more broadly, to describe this kind of focused improvement activity as refactoring. Still, this was most often less a distinct phase than an acute, opportunistic shift of focus. It was typically part of preparing the groundwork for some subsequent functional improvement.
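A small, admittedly anachronistic illustration of my own of what "better factored" meant in practice: the behavior is untouched, but a duplicated idea is given a single home and a name.

    // Before: the same knowledge stated twice.
    class Register {
        double invoiceTotal(double[] prices) {
            double sum = 0;
            for (double p : prices) sum += p;
            return sum * 1.0825;           // subtotal plus sales tax
        }
        double cartTotal(double[] prices) {
            double sum = 0;
            for (double p : prices) sum += p;
            return sum * 1.0825;           // duplicated, and bound to drift
        }
    }

    // After: behavior identical, structure improved.
    class RefactoredRegister {
        static final double SALES_TAX_MULTIPLIER = 1.0825;

        double subtotal(double[] prices) {
            double sum = 0;
            for (double p : prices) sum += p;
            return sum;
        }
        double invoiceTotal(double[] prices) { return subtotal(prices) * SALES_TAX_MULTIPLIER; }
        double cartTotal(double[] prices)    { return subtotal(prices) * SALES_TAX_MULTIPLIER; }
    }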

And so it came to pass that in the early days of the reign of George Bush the Elder, Bill Opdyke and Ralph Johnson set about the task of cataloging some of these Things You Could Do to Programs that Didn't Change The Way They Worked But Made Them Better, with an eye toward studying them more formally, and seeing whether they could be automated. And refactoring became Refactoring, with a Capital-R.

A Refactoring became a much more narrowly cast behavior-preserving program transformation, which served their research agenda fine, but cast the broader, more casual usage to the side. Refactoring, the activity, had henceforth to be one in which only such sorts of transformations were to be applied. A distinct activity.

Subsequent authors, such as Martin Fowler (Refactoring) and Joshua Kerievsky (Refactoring to Patterns), have probably inadvertently reinforced this impression with their detailed focus on the surgical tactics and skills required to accomplish particular Refactorings, at times seemingly at the expense of broader, more holistic therapeutic approaches to day-to-day codebase health maintenance.

[Image: Adam and Eve]

So, what was Refactoring's Original Sin? Letting the idea be framed as a distinct phase, a polishing pass, an optional brush-up, where working code was turned into "higher quality" working code that worked the same way as the original working code. Framed this way, it could sound like an utterly optional spasm of indulgent, flamboyant, narcissistic navel-gazing by self-indulgent prima-donna programmers, rather than an inherent, indispensable, vital element of a process that cultivated the growth and sustained health of both the codebase and the team.

The problem with "refactoring" is not that refactoring is a bad name for Refactoring proper, but that the idea has always cast a wider shadow, in part because of its gestation in precisely the kinds of iterative, incremental incubators we nostalgically recalled above.

As a result of having been framed in this fashion, the perception persists to this day that refactoring is a distinct, coarse-grained, and dispensable luxury, rather than a natural, inherent, integral part of a healthy, sustainable development process.

The problem is that the refactoring process in which Refactoring plays this indispensable role, and the value of the resulting code, have never really been properly framed.

And a consequence of this is that refactoring itself becomes a casualty in precisely the kinds of TradableQualityHypothesis triage games that the Martin Fowler piece that triggered this one so eloquently showcased.

Presidents Day 2011

1. George I (George Washington)
2. John I (John Adams)
3. Thomas I (Thomas Jefferson)
4. James I (James Madison)
5. James II (James Monroe)
6. John II (John Quincy Adams)
7. Andrew I (Andrew Jackson)
8. Martin I (Martin Van Buren)
9. William I (William Henry Harrison)
10. John III (John Tyler)
11. James III (James Knox Polk)
12. Zachary I (Zachary Taylor)
13. Millard I (Millard Fillmore)
14. Franklin I (Franklin Pierce)
15. James IV (James Buchanan)
16. Abraham I (Abraham Lincoln)
17. Andrew II (Andrew Johnson)
18. Ulysses I (Ulysses Simpson Grant)
19. Rutherford I (Rutherford Birchard Hayes)
20. James V (James Abram Garfield)
21. Chester I (Chester Alan Arthur)
22. Stephen I (Stephen Grover Cleveland)
23. Benjamin I (Benjamin Harrison)
24. Stephen I (Stephen Grover Cleveland)
25. William II (William McKinley)
26. Theodore I (Theodore Roosevelt)
27. William III (William Howard Taft)
28. Thomas II (Thomas Woodrow Wilson)
29. Warren I (Warren Gamaliel Harding)
30. John IV (John Calvin Coolidge)
31. Herbert I (Herbert Clark Hoover)
32. Franklin II (Franklin Delano Roosevelt)
33. Harry I (Harry S. Truman)
34. Dwight I (Dwight David Eisenhower)
35. John V (John Fitzgerald Kennedy)
36. Lyndon I (Lyndon Baines Johnson)
37. Richard I (Richard Milhous Nixon)
38. Gerald I (Gerald Rudolph Ford)
39. James VI (James Earl Carter, Jr.)
40. Ronald I (Ronald Wilson Reagan)
41. George II (George Herbert Walker Bush)
42. William IV (William Jefferson Clinton)
43. George III (George Walker Bush)
44. Barack I (Barack Hussein Obama)

This page began as: Some Ruminations on History, and on our American Regents, on the occasion of the second Coronation of His Imperial Majesty George III back in 2005.

I remain fond, as a nominal UK native raised in the USA, of this idea of numbering American Presidents as if they were real Monarchs. And this page has been overdue for a touch-up; after all, we witnessed, over two years ago, to condign fanfare, the Coronation of Barack I. In keeping with the original theme of this page, it is worth noting that while the tales of descent from African kings that Barack I was told as a boy turned out to have been apocryphal, he may well be able to claim descent, on his mother's side, from a Scottish king, and shared ancestry with as many as seven American regents.

[Image: Barack I]

In 1789, our first president under our current constitution, George I, effectively succeeded the British king George III, of the House of Hanover, as America's supreme authority. Britain's George III, of course, saw his legacy tarnished when he allowed his military to become entangled in a stubborn insurgency in a distant part of the empire that didn't quite turn out the way he had planned. There have been two more homegrown Georges, now, since then. We've effectively come full circle.

[Image: George III]

Reckoning time in terms of the reigns of one's country's regents is a time-honoured Anglo-Saxon tradition, I'm told, and one I've informally observed without thinking about it myself over the years. "That technology was already considered obsolete during the Nixon Administration", would be the kind of thing I'd find myself saying.

I'm amused by the idea of naming our Heads of State in the same traditional manner as Popes and Kings, using a single Christian Name and a Roman Numeral. The results of doing so to America's forty-three regents are shown in the table above. Doing so gives one pause, and causes one to ponder the considerable imperial trappings of the American presidency.
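For the mechanically inclined, the scheme is simple enough to automate. Here is a toy Java sketch (all names and code my own), complete with the one wrinkle the table exhibits: a restored regent, like Stephen I, keeps his original numeral.

    import java.util.*;

    public class RegnalNames {
        private static final String[] ROMAN = {"I", "II", "III", "IV", "V", "VI"};

        public static void main(String[] args) {
            // Abbreviated; in full, the forty-three regents listed above.
            String[][] regents = {
                {"George", "George Washington"},
                {"John", "John Adams"},
                {"Thomas", "Thomas Jefferson"},
                {"John", "John Quincy Adams"},
                {"Stephen", "Stephen Grover Cleveland"},
                {"Benjamin", "Benjamin Harrison"},
                {"Stephen", "Stephen Grover Cleveland"},   // the Restoration
            };
            Map<String, Integer> counts = new HashMap<>();   // numerals per name
            Map<String, String> styled = new LinkedHashMap<>();
            for (String[] r : regents) {
                // A re-accession reuses the numeral already granted.
                String title = styled.computeIfAbsent(r[1],
                    full -> r[0] + " " + ROMAN[counts.merge(r[0], 1, Integer::sum) - 1]);
                System.out.println(title + " (" + r[1] + ")");
            }
        }
    }

Fed the full list, it reproduces the table, Stephen I and all.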

[Image: George I]

We have witnessed no fewer than five true cases of dynastic succession, accounting for fully ten of forty-three reigns. It is difficult indeed to fathom how such numbers could have arisen by chance, or even through merit.

Our first dynasty, the House of Adams, saw John II, the son of John I, take the throne in 1825.

The second, the House of Harrison, witnessed the ascension, in 1889, of Benjamin I, the grandson of William I.

In 1893, the Cleveland Restoration saw Stephen I reclaim the throne from Benjamin I, who had in turn taken it from him.

In 1933, the inauguration of Franklin II, a cousin of Theodore I, returned the House of Roosevelt to power. Interestingly, even though Franklin II and Theodore I were themselves fifth cousins, Franklin II's consort, Anna Eleanor, was herself a Roosevelt, and indeed was the niece of Theodore I. The Hapsburgs, and Europe, held no monopoly on aristocratic inbreeding, it would seem.

Martin I was of purely Dutch ancestry. He and his wife spoke Dutch at home. Martin I and Theodore I were third cousins twice removed.

The ghastly murder of the youthful, urbane and widely admired Irishman John V in 1963 brought Lyndon I to the throne, but this reestablished no Johnson dynasty, for he was of no relation whatsoever to Andrew II. The House of Johnson is a false dynasty.

Indeed, Andrew II and Lyndon I are among only a relative handful of those who have occupied the throne who are not at least distant cousins of at least some of the others who've sat upon it. The other American Commoners: Andrew I, James III, Chester I, William III, Dwight I, John V, James VI, Ronald I, and William IV. James VI is related, however, to genuine indigenous American Royalty: Elvis Presley.

[Image: George II]

Finally, the first Coronation of George III, the son of George II, and the scion of the House of Bush, in 2001, consummated America's first father-son dynastic succession since the restoration of the Adams Family. Barbara Pierce Bush, the consort of George II, and mother to George III, is herself a distant cousin of Franklin I.

[Image: George III]

Our coronations are traditionally the occasions where we put aside the rancor and division that accompany our often contentious imperial ascension process, and in 2005 we united behind, and swore our allegiance to, George III, he having prevailed over both a challenger (the would-be Albert I) and a usurper (an aspiring John VI) to lay claim to his crown.

Barack I took his crown in 2009 by vanquishing a fellow son of a Scottish king, yet another prospective John VI.

Were we more like say, the United Kingdom of Great Britain and Northern Ireland, we'd further festoon his title with such honourifics as Monarch of the Forty-Eight Contiguous United States of America and Alaska and Hawaii, Defender of the Faith, and Emperor of Mesopotamia, but we Americans are an egalitarian lot, and tend to eschew such airs and pretensions.


Despite the fact that the United States is nominally a constitutional republic, and not a monarchy, eight of America's forty-three regents have, tragically, and ironically, nonetheless (inadvertently, one must presume) enjoyed the imperial prerogative that a King (all have been male) may serve for life. Four, William I, Zachary I, Warren I, and Franklin II (our longest-serving regent), died on the throne of natural causes.

[Image: The Old Brown Jug]

We have, sadly, witnessed four regicides in the course of our brief history. Abraham I, James V, William II, and John V, all saw their reigns brought to an end by assassin's bullets. In 1981, Ronald I was shot and severely wounded, and nearly followed suit, but for timely medical attention. Indeed, historians believe that both James V and William II could have survived their wounds given better medical care.

Gerald I and Harry I both survived assassination attempts unscathed while on the throne. Franklin II survived an assassination attempt unharmed during the interregnum prior to his coronation. Theodore I was shot, but survived, during his Bull Moose campaign to restore the House of Roosevelt in 1912, a battle that split the Tories between him and William III, thereby paving the way for the eventual ascension of the scholarly pacifist Thomas II.

There are many who believe that only assassin's bullets and scandal thwarted the dynastic ambitions of the House of Kennedy and the eventual elevation of John V's siblings Robert I, and/or perhaps even Edward I, to the throne. The untimely death of the Kennedy Crown Prince, and paparazzi favorite, John Jr. in 1999 further diminished the family's dynastic potential.

Amendment XXII to the constitution effectively limited the reign of any given individual to no more than ten years (two elected terms, plus at most two years of a predecessor's term). Even before this, reigns were customarily short, especially by comparison with European standards, in keeping with the precedent set by George I. For example, the reign of Queen Victoria spanned those of fully eighteen American regimes, from Martin I through William II. Indeed, Franklin II was widely excoriated for breaking George I's two-term precedent, and his breach of it helped lead to the passage of the twenty-second amendment.

[Image: Spiro I]

One regent, Richard I, was forced to abdicate in 1974. Alas, not for love. His successor, Gerald I, thereupon became America's first utterly unelected Head of State, having been appointed, not elected, to the rank of Vice-President of the Realm after the heir apparent, who would have been Spiro I, was forced to abdicate in disgrace, restoring at last the sacred principle of rule by Divine Right.


God Save the King

O Lord our God arise,
Scatter his enemies
And make them fall;
Confound their politics,
Frustrate their knavish tricks,
On Thee our hopes we fix,
God save us all!

© 2005, 2011 by The Laputan Press, Ltd.

In keeping with the spirit of Catfish in the Memepool, I've located another 'blog where huge swaths of this notion had emerged quite independently in an at times uncannily similar manner. I'm grateful to this site's author for his superior research into the Christian names for Stephen I, Thomas II, and John IV. Here is another hit. Here is a fascinating rundown on the ancestry of the presidents from Burke's Peerage.
