OOPSLA '89 Workshop on the Reusable Component Marketplace John T. Mason, organizer Department of Computer Science University of Illinois at Urbana-Champaign 1304 West Springfield Ave., Urbana IL 61801 (217) 328-3523 email@example.com
A Revolution Gone Sour?
I count myself among the true believers of the object-oriented revolution. I have even, in my small way, done my part as a missionary spreading the message of a millennium of code reuse [Johnson & Foote 1988] [Foote 1988]. Yet now, with victory seeming within our reach, I am plagued by doubts over some of what I see.
Our vision of an era where programmers are liberated from the drudgery of cutting each prosaic component they need from whole cloth seems in jeopardy. The promise of an age of well-honed, lavishly documented components and frameworks is imperiled by snake oil salesmen who peddle program skeletons and raw source code under the label of reuse. The orderly development of the software infrastructure that will carry us into the next century may be undermined by carpetbaggers who threaten to preemptively entrench poorly engineered, badly documented rubbish as standards. Finally, there is resistance from an unexpected quarter: the hacker in the trenches, who finds that he must forsake the glory of authorship for the tedium of learning how to assemble a mosaic of someone else's tiles.
Still, I am optimistic that our revolution can yet succeed. For this to be so, we as practitioners, as well as our patrons (such as they are) must recognize that design of reusable artifacts is a qualitatively different enterprise than that of disposable ones, and that returns on investments in them may be long-term rather than short-term ones. We must make sure that the difference between the kind of reusable components we are advocating and a 3000 line BASIC program scavenged from some bulletin board is clear to everyone. We must have patience in order to avoid the temptation to prematurely allow bad standards to be carved in stone while better ones are still evolving. If we succeed, we can channel the programmer's inherent sense of pride and craftsmanship into the production of a legacy of reusable components and frameworks. If we fail, we will condemn a generation of programmers to a fate far worse than the maintenance ghettos of today's sweatshops.
So here, in no particular order, is my collection of ruminations over the reuse revolution. Taken together, they attempt to further develop the notions that reusable components have an altogether different sort of lifecycle than disposable ones, that documentation is of paramount importance if reuse efforts are to succeed, and that attitudes are every bit as serious a threat to widespread code reuse as technical obstacles. They also develop the notion that once all the wheels have been reinvented, we will have a long, design-bound road ahead of us.
Programmer is to Program as Mongrel is to Hydrant
Face it, the painters who have touched up the Sistine Chapel have not become household names. Can the ambitious artist be blamed for wanting to paint his own frescos? How many managers, having assigned a quick touch up job, have come in the next morning to find their programmer on his back at work on a newly gessoed ceiling? It is said that that which a culture glorifies will flourish. Certainly the culture of computerdom does not glorify maintenance programming.
When looked at from this perspective, it is not difficult to understand the compulsion many programmers feel from time to time (I certainly include myself here) to overhaul software they are charged with maintaining. By leaving a substantial mark of his own on a piece of software, the programmer is enhancing the probability that any laurels directed toward it might come to rest on his head.
The thanklessness of software maintenance is exacerbated by the lack of recognition among the computer community of just how difficult the task of reading and comprehending grossly underdocumented tag-team code really is. To my mind, the difficulty associated with reading and understanding large programs is one of the computer world's dirty little secrets. Not being clairvoyant, I find the task of attempting to re-infer the constellation of design assumptions and execution contingencies that the author(s) of a large program originally had in mind one of the most daunting and frequently frustrating intellectual challenges I have ever encountered. This task is not made any easier when (as is frequently the case) the original authors of a software component refuse to expose their original intentions to general scrutiny by explicitly recording them somewhere. There are few programmers who have not cursed the insouciance of some predecessor who has left a fundamentally flawed piece of code completely devoid of any clues as to where his logic might have gone astray. Second-guessing the sanity of others is simply not any fun.
In a world dominated by code reuse, all programming will, in one sense, be maintenance programming. This is the case in systems like Smalltalk-80 today. Yet, Smalltalk programmers do not seem to display the degree of contempt for using code written by others that one sees in some other environments. I think there are two factors that account for this. One is the excellent set of code browsing and management tools that the Smalltalk programming environment provides. The second is the set of standard programming conventions that this environment encourages and supports. A standard set of programming conventions can make it easier for a programmer to feel that his or her contribution to the collective effort is no less a legitimate part of it than any other.
The second major conclusion that one can draw from the discussion above is that documentation, perhaps more than any other factor, will determine the future of code reuse. From the level of source code, through that of reusable components, to that of frameworks, and through entire families of applications, reusability will depend on the quality of documentation, and documentation management and retrieval tools.
In an Age of Reuse, Will Bad Designs Drive Out the Good? (The Tortoise and the Hare Revisited)
In an age of reuse, will bad designs drive out the good? An irony of the object-oriented revolution may be that it has made bad programming easier and good programming harder. A consequence of the above observation is that one should never mistake a prototype for a well designed program. I fear that we may soon be confronted by an avalanche of object-oriented components and libraries, and that mediocre ones may take hold before superior ones have had a chance to adequately evolve. To confront this challenge, the consumers of these tools must be prepared to be discriminating and vigilant.
Double-Click the Dumpster Icon for Source Code
Have the heroes of the reuse revolution given their last full measure so that slash-and-burn artists can dig fragments of programs that were never designed to be reused out of some electronic dumpster? Will we ride into the next century in a software flivver held together with duct tape and baling wire, built of code fragments that no one understands any longer? The nightmare portrayed here is in some ways a degenerate case of the above. My fear is that the market, if left to work its will, may give an edge to the scavengers rather than the craftsmen. My hope is that the benefits of designing for the long haul will allow those who do so to prosper. The last ten years or so have hardly been reassuring with respect to the willingness of American investors to buckle down for the long haul...
There Be Dragons
There seems to be a tendency among newly trained object-oriented designers to test their new skills on the construction of a library or framework for some domain that they understand well. For many, this will mean the construction of yet another sliders and dials set, windowing package, matrix algebra library, parser generator, linked-list manager, or the like.
At first glance, there may seem to be no harm in this, although I already sense that the object-oriented community is soon to be glutted with such code. The danger here is that some may fail to recognize that the authors of such packages are the beneficiaries of enormous intellectual subsidies from the huge number of people who have in effect done the basic design work for them. Our current consensus as to what a compiler should look like is the product of hundreds and hundreds of person-years of effort over the last thirty years or so. In the case of mathematics, our knowledge has been accumulating for hundreds, and even thousands, of years. Certainly, solid, refined frameworks for domains like this will be valuable. We must be wary that we do not construe any quick victories we may achieve in areas such as these as predictors of similar successes in areas where basic design principles have not already been exhaustively explored.
As we move beyond well-researched domains such as compiler construction or linear algebra, the construction of reusable components and frameworks becomes a design-bound, error-fraught enterprise. We must ensure that we do not oversell the power of code reuse on the basis of its performance in a handful of areas that are well understood by computer scientists. At the same time, we must not undersell the potential that well-crafted, well-documented object-oriented components have to revolutionize the way that software is produced.
An Object-Oriented Lifecycle Perspective
Object-oriented components and frameworks have lifecycles of their own that are separate and distinct from those of the applications in which they are incubated. They evolve both within and beyond individual applications. The greater reuse potential of object-oriented entities can justify lavishing resources and attention on them, in effect, lowering the break-even point for reuse efforts. From this perspective, design pervades the lifecycle, rather than being a distinct phase at the start of it. The perspective recognizes that applications tend not to exist in a vacuum.
There are three features that distinguish object-oriented approaches from conventional ones: polymorphism, inheritance, and encapsulation. Polymorphism makes it more likely that a component will operate correctly in a wide variety of different contexts. Inheritance allows common behaviors to be shared among a family of objects, permits programming-by-difference, and promotes the emergence of abstract superclasses. Encapsulation insulates an evolving part of a system from the rest of it.
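The interplay among these three features can be made concrete with a small sketch. The Shape, Circle, and Square classes below are hypothetical examples invented for illustration (the paper itself names no such classes): state is encapsulated behind methods, shared behavior is inherited from an abstract superclass, and polymorphism lets one message reach many kinds of receivers.

```python
# A minimal sketch (hypothetical classes, not from the text) of the three
# distinguishing features of object-oriented approaches.

class Shape:
    """Abstract superclass: shared behavior lives here (inheritance)."""
    def __init__(self, name):
        self._name = name          # encapsulated state; clients use describe()

    def area(self):
        raise NotImplementedError  # subclasses program by difference

    def describe(self):
        # Works for any subclass, because area() is resolved polymorphically.
        return f"{self._name}: area {self.area():.2f}"

class Circle(Shape):
    def __init__(self, radius):
        super().__init__("circle")
        self._radius = radius

    def area(self):
        return 3.14159 * self._radius ** 2

class Square(Shape):
    def __init__(self, side):
        super().__init__("side" and "square")
        self._side = side

    def area(self):
        return self._side ** 2

# Polymorphism: the same loop operates correctly on any Shape, present or
# future, without knowing its concrete class.
for shape in [Circle(1.0), Square(2.0)]:
    print(shape.describe())
```

Because callers touch only `describe()` and `area()`, the encapsulated internals of each subclass can evolve without disturbing the rest of the system.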
A framework is a collection of cooperating classes that together define a generic or template solution to a family of domain specific requirements. The Smalltalk-80 Model-View-Controller (MVC) classes constitute a framework for constructing interactive graphical applications. A framework is tailored for a specific application by overriding certain framework methods, and supplying application specific components that meet certain requirements. Frameworks are often characterized by an inversion of control, in which the framework plays the role of a main program in controlling and sequencing activity.
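Inversion of control can be sketched as follows. The `Application` framework class and its hook methods below are invented for illustration (they are not the MVC classes themselves): the framework owns the main loop, and a derived application supplies only the pieces that differ.

```python
# A hedged sketch of inversion of control. The framework's run() method plays
# the role of the main program; applications tailor it by overriding hooks.

class Application:
    """Framework core: owns control flow and sequencing."""
    def run(self):
        self.startup()                            # hook
        while not self.done():                    # hook
            self.handle_event(self.next_event())  # hooks
        self.shutdown()                           # hook

    # Default (overridable) behavior for each hook:
    def startup(self): pass
    def shutdown(self): pass
    def done(self): return True
    def next_event(self): return None
    def handle_event(self, event): pass

class EchoApp(Application):
    """A derived application supplies only the pieces that differ."""
    def __init__(self, events):
        self._events = list(events)
        self.log = []

    def done(self):
        return not self._events

    def next_event(self):
        return self._events.pop(0)

    def handle_event(self, event):
        self.log.append(event)

app = EchoApp(["open", "draw", "close"])
app.run()   # the framework, not the application, drives the loop
```

Note that `EchoApp` never calls the loop itself; it is called by the framework. This is the inversion that distinguishes a framework from an ordinary library.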
The Lifecycle of an Object-Oriented Entity
(A) Initial Design (Prototype) Phase
An object-oriented entity usually begins life as an informally structured prototype of some sort. This prototype frequently makes use of "expedient" code borrowing, and constitutes a "first pass" at a proper design for the component. The designer should concentrate on solving the problem at hand, although opportunities to make the design more general should be taken when they can be anticipated. Expedient, first pass designs like this should never be mistaken for good designs, however.
(B) Exploratory (Expansionary) Phase
This phase in an object's lifecycle occurs when a design has proven successful. (There is a distinctly Darwinian quality about this.) Exploratory activity frequently occurs during what traditional lifecycle models call the perfective maintenance phase. This phase is characterized by the spawning of a number of specialized variants on the original theme. As a result, broad, shallow class hierarchies can develop. There is a risk during this phase that the component may suffer from "mid-life" generality loss. This occurs when a previously general design is compromised by the addition of code that expediently solves some specific problem, but destroys the component's ability to deal with other contingencies that the original designer of the component may have anticipated. The variants (subclasses) added during the exploratory phase may be informally organized.
(C) Design Consolidation (Generalization) Phase
This is where the entropy reduction gets done. During this phase, the class hierarchy is reorganized, and abstract classes that reflect the structural regularities in the system (and the problem domain) emerge. The class hierarchy comes to reflect the way you would like to be able to tell people you arrived at the design. Redesign at this point in a system's evolution allows the designer to exploit the insights available from having specialized a design to suit a number of applications. Hindsight, which is now available, can be brought to bear on the redesign. The informal, white-box hierarchies that may be present in the system can be recast as black-box components.
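The white-box to black-box recasting can be sketched concretely. The `Report` and `ReportEngine` classes below are hypothetical names invented for illustration: before consolidation, reuse requires reading and subclassing the component (white-box); after consolidation, reuse requires only composing it with a plugged-in collaborator (black-box).

```python
# A sketch of design consolidation (hypothetical names, not from the text).

# Before consolidation (white-box): a reuser must understand the internals
# well enough to know which method to override.
class Report:
    def render(self, items):
        return "\n".join(self.format_line(i) for i in items)

    def format_line(self, item):   # subclasses are expected to override this
        return str(item)

class ShoutingReport(Report):
    def format_line(self, item):   # reuse via inheritance and override
        return str(item).upper()

# After consolidation (black-box): the variation point becomes an explicit
# parameter, and the component's internals can stay hidden.
class ReportEngine:
    def __init__(self, formatter=str):
        self._formatter = formatter    # any callable taking one item

    def render(self, items):
        return "\n".join(self._formatter(i) for i in items)

shouting = ReportEngine(lambda item: str(item).upper())
```

The behavior is identical; what has changed is that the regularity discovered during the exploratory phase (every variant differed only in line formatting) has been captured as an explicit, pluggable interface.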
The importance of hindsight in this approach cannot be overemphasized. If there is one thing that object-oriented designers agree on, it is that it is quite difficult to get their designs right on the first attempt. This is because it is difficult to design components and frameworks in a top-down fashion. (Oh sure, we all hit the occasional hole-in-one.) Instead, these entities emerge as object-oriented experience in a given application domain is accrued.
If hindsight is so valuable, how do we get it? To paraphrase a page from my Murphy's Law calendar: Good judgement comes from experience. Experience comes from bad judgement. In the software world, it comes from prototyping efforts, and experience with related systems. Object-oriented systems have earned a reputation as being good for exploratory programming and prototyping. Domain specific frameworks allow the direct reapplication of design experience gained during the production of related applications.
The three lifecycle phases listed above unfold independently at all levels of the system. Both components and frameworks evolve in this fashion. Both exploration and consolidation should be undertaken in an "opportunistic" fashion. This lifecycle model shares a number of characteristics with Boehm's Spiral model.
This model has a number of implications. One is that design is more than a discrete phase in the development of a component; instead, it is an activity that pervades the component's lifecycle. This model places emphasis not so much on the generation of single applications as on the development of the software infrastructure for solving a range of application requirements. If hindsight is so valuable, then maybe current programmer deployment practices are backwards. Skilled designers may be most valuable during the design consolidation phase. Perhaps this model can lead to the gentrification of software maintenance.
The greater reuse potential of object-oriented components can make lavishing greater care, resources, and attention on such components profitable. Increases in component robustness so achieved benefit all applications that use these components.
There are certain risks associated with the approach advocated here. One is over-design (proving Parkinson's Law by exhaustion). Programmers can embark on searches for software Northwest Passages that are not really there. This approach can lead as well to the construction of software Swiss Army knives, and the hiding of power in the name of generality. Reusable component designers can embark on the construction of Space Shuttles where Big Dumb Boosters would have sufficed.
Getting Skeletons Back in the Closet
Out in the trenches, raw source code is being directly recycled, rather than being (re)engineered for reuse. This kind of reuse is harmful because programmers seldom are able to make the effort required to completely comprehend the code they are attempting to reuse. Frameworks can help us stamp out such brute-force code reuse.
A framework can be used in much the same way as a skeleton to provide a starting point for an application that will be similar to an old one. Unlike skeletons, the core of a framework is shared by all applications derived from it. Changes to this core will affect all derived applications. Frameworks subsume all the advantages of application skeletons, with few of the disadvantages, especially as requirements evolve.
Long Live the Revolution
At this critical juncture, we must not let down our guard. We must ensure that we do not oversell the power of code reuse on the basis of its performance in a handful of areas that are well understood by computer scientists. At the same time, we must not undersell the potential that well-crafted, well-documented components have to revolutionize the way that software is produced. As we commemorate the 200th anniversary of the storming of the Bastille, my thoughts turn to those brave managers that have embraced the promises we have made about our age of reuse. If we have erred, it will be their heads that the nineties will see atop the pikes.
[Foote 1988] Brian Foote, Designing to Facilitate Change with Object-Oriented Frameworks, Masters Thesis, University of Illinois at Urbana-Champaign, 1988.
[Foote & Johnson 1989] Brian Foote and Ralph E. Johnson, Reflective Facilities in Smalltalk-80, OOPSLA '89, New Orleans, LA, October 1-6, 1989, pages 327-335; SIGPLAN Notices, Volume 24, Number 10, October 1989.
[Johnson & Foote 1988] Ralph E. Johnson and Brian Foote, Designing Reusable Classes, Journal of Object-Oriented Programming, Volume 1, Number 2, June/July 1988, pages 22-35.