DESIGNING TO FACILITATE CHANGE

WITH

OBJECT-ORIENTED FRAMEWORKS

Abridged HTML Version

Brian Foote

Department of Computer Science

University of Illinois at Urbana-Champaign, 1988

Ralph E. Johnson, Advisor


This Abridged HTML Version of Designing to Facilitate Change with Object-Oriented Frameworks contains the Abstract, Dedication, Table of Contents, Introduction, Discussion, and Conclusion from the original work. These sections are likely to be of interest to the more casual reader. The missing sections contain a detailed "literate" exposition of the framework I built, and the black-box components that emerged from it. There is no substitute for looking at these annotated code descriptions if you want to examine these ideas in depth. These missing sections are identified in red in the Table of Contents. Interested readers are invited to download one of the versions of the full manuscript from my home page.

Don't let the Table of Contents scare you. The manuscript formatting conventions for theses tend to run up the page counts. I cut the References section as well, because like the code examples, it didn't convert gracefully to HTML. Again, consult the manuscript if you need to see them. I've otherwise resisted the temptation to correct the manuscript, or add a lot of hyperlinks. This link will take you to the discussion links in the Table of Contents.

-- BF 9/9/97

A Squeak compatible version of the Smalltalk source code (with a few contemporary corrections) for the framework discussed herein, and presented in the full manuscript, is now available as well. It has been tested with Squeak 1.3.

-- BF 1/25/98


Abstract

Application domains that are characterized by rapidly changing software requirements pose challenges to the software designer that are distinctly different from those that must be addressed in more conventional application domains. This project assesses the impact of applying object-oriented programming tools and techniques to problems drawn from one such domain: that of realtime laboratory application programming. The project shows how a class inheritance hierarchy can promote the emergence of application specific frameworks as a family of applications evolves. Such frameworks offer significant advantages over conventional skeleton program-based approaches to the problems of managing families of applications. Particular attention is given to design issues that arise both during the initial design and implementation of such applications, and during later stages of their lifecycles, when these applications become the focus of reuse efforts. The project also addresses the impact of object-oriented approaches on the simulation of realtime systems and system components.



To Henry Hudson


Table of Contents

Chapter I -- Introduction 1

The Structure of this Document 4

Chapter II -- Background 5

The Realtime Laboratory Application Domain 5
The CPL Battery 7
A Tour of the CPL Battery 10
Why a Smalltalk Battery Simulation? 15

Chapter III -- Anatomy of the Battery Framework 18

Battery-Items 21

BatteryItem 22
SternbergTask 48
ToneOddball 54
WordOddball 58

Battery-Parameters 63

BatteryParameters 64
SternbergTaskParameters 68
ToneOddballParameters 71
WordOddballParameters 73

Stimulus-Generators 74

StimulusGenerator 76
SternbergDisplayGenerator 78
ToneGenerator 80
WordGenerator 83

Stimulus-Support 85

Stimulus 86
SternbergStimulus 87
ToneStimulus 88
WordStimulus 89

Sequence-Support 90

WeightedCollection 91
SequenceGenerator 94

Response-Support 97

ButtonBox 98
ButtonBoxResponse 101

Data-Management 102

BatteryBlock 103
BatteryTrial 104
DataDictionary 106
DataDictionaryEntry 107

Interface-Battery 109

ListHolder 110
ItemListController 112
ParameterListController 115
BatteryCodeController 117
WaveformController 119
BatteryBrowser 121
ItemListView 127
ParameterListView 128
BatteryView 130
BatteryCodeView 133
WaveformView 134

Chapter IV -- Anatomy of the Battery Library 143

Realtime-Support 145

Timebase 146
Timebase 147

Realtime-Devices 149

Device 150
ClockedDevice 152
Clock 154
Digitizer 161
BufferedDigitizer 164
StreamedDigitizer 166
InputBit 168
OutputBit 173

Waveform-Support 175

Averager 176
Waveform 180
WaveformCollection 183
Tally 185

Random-Support 189

IntegerGenerator 190
IntegerStream 191
RandomStream 193
SampledStream 197

Plumbing-Support 199

Filter 200
Pump 203
Tee 207
ValueFilter 208
Valve 210
ValueSupply 212

Accessible-Objects 214

AccessibleObject 215
AccessibleDictionary 223

Chapter V -- A Tour of the Battery Simulation 225

Chapter VI -- Discussion 229

Object-Oriented Frameworks 229
Environments 235
Getting Skeletons Back in the Closet 235
Why Software Design is So Difficult 241
Designing in the Presence of Volatile Requirements 246
Designing to Facilitate Change 251
Specificity vs. Generality 252
Objects, Evolution, and Maintenance 254
Reuse vs. Reinvention 257
Frameworks and the Software Lifecycle 259
Programming in the Smalltalk-80 Environment 263
The Learnability Gap 264
Smalltalk and Realtime Systems 266
Starting with a "Real" System 269
Lisp, Simula, and Uniform Access 270
O2 Programming is Easy. O2 Design is Hard 274

Chapter VII -- Conclusion 277

References 279


Chapter I -- Introduction

A good software designer knows that a job is not done when the requirements for the project at hand have been met. Well designed systems will lay the foundations for solving related problems as well. Good designers will always keep one eye open to opportunities to produce code and components that are usable beyond the scope of the current problem. A common result of such efforts is the accumulation of a library of broadly applicable utility components and routines. Another result can be the construction of robust, easily extensible applications that facilitate the evolution of a given application or component as the demands made of it change.

Not all programming systems and methodologies are equally effective in supporting the graceful evolution of applications and systems. Certainly high-level languages such as Algol-68 and Pascal, and approaches such as stepwise refinement and structured programming have done much to ease the burden of the programmer designing new applications. The notions of encapsulation and information hiding, which have been embodied in languages like Ada and Modula-2, have contributed much to our ability to deal with large, complex problems and systems. These approaches, powerful though they may be, only begin to address the sorts of pressures one faces when an existing system must adapt to new requirements. The central focus of the work described herein is on how object-oriented languages, techniques and tools confront the problem of volatile requirements.

This document describes a project (the battery simulation) that assesses the value of bringing object-oriented tools and techniques to bear on problems drawn from the domain of realtime laboratory programming. The project was based upon a large laboratory system (the CPL battery, or the battery) developed by the author using more traditional tools and approaches. The project involved the reimplementation of substantial portions of the CPL battery using Smalltalk-80. The principal question that motivated this effort was: How might the use of object-oriented tools and techniques affect the design, implementation, and evolution of programs in this application domain?

The problems of realtime laboratory programming are quite distinct from those in areas that have been more comprehensively studied by computer scientists, such as compiler construction or operating system design. Realtime laboratory data acquisition and experimental control applications must flourish in a research environment that is (by the very nature of research itself) characterized by rapidly changing requirements. These applications must also operate under severe timing and performance constraints, and must be designed to facilitate graceful evolution.

The applications which provided the basis for this project (the CPL battery) resulted from a fairly ambitious attempt to address some of the requirements stated above using traditional programming tools. A number of the approaches taken in the design of the CPL battery were inspired by object-oriented techniques. This project (the battery simulation) represented an attempt to ascertain what impact the use of a full-blown, bona-fide object-oriented programming environment (Smalltalk-80) might have on a redesign and reimplementation of representative portions of the CPL battery.

This Smalltalk-80 reimplementation had, from the outset, an exploratory character. An important aim of the project was to examine how the Smalltalk language and system might affect the sort of code produced to solve laboratory programming problems. The "plumbing" data stream classes for data analysis and the "Accessible object" record/dictionary classes are two of the more interesting results. Another goal was to assess the utility of Smalltalk's user interface construction tools in constructing these applications. The waveform and parameter browsing tools incorporated into the project resulted from this effort. The plumbing data stream classes, Accessible objects, and the battery browsing tools are presented in detail in Chapters III and IV of this document.

The overriding focus of this effort, however, was not as much to see how object-oriented techniques might aid in the construction of any given application as it was to assess the ways in which these techniques might be used to avoid the sort of wasteful duplication that is conventionally seen in these sorts of application domains as requirements change.

One way in which object-oriented schemes help to meet this goal is by encouraging the design of general, application independent libraries of reusable components. More conventional programming environments do this too. The information hiding capabilities present in object-oriented languages and systems are of great benefit in promoting the development of reusable libraries.

Another way in which object-oriented approaches facilitate graceful component evolution is via the ability of an object-oriented system to support the customization of a general kernel of components through the specialization ability provided by inheritance. The specialization and reuse capabilities provided by object-oriented inheritance and polymorphism increase the potential applicability of both preexisting and user generated system components. Hence, effort spent making a component more general is likely to pay off sooner than it might in a conventional system.

The Smalltalk battery simulation uses a class inheritance hierarchy to help manage a set of related, evolving applications as they diverge from a common ancestor. The emergence of an application specific framework in the face of volatile requirements is perhaps the most interesting consequence of the use of object-oriented techniques.

Traditional tools and approaches in the laboratory domain encourage a programming style built around a library of context independent reusable subroutines and disposable custom programs built (perhaps) from simple skeletons. An object-oriented approach allows a broad middle ground between these two extremes: the application framework.

The existence of a mechanism that allows the graceful evolution of program components from the specific to the general is a valuable asset during the design of any system, but the value of such a capability takes on an additional dimension of importance when such systems must evolve in the face of highly dynamic requirements. Thus, designing to facilitate change takes in lifecycle issues that normally are addressed under the rubrics of reuse, maintenance and evolution.


The Structure of this Document

This document is organized as follows:

Chapter II gives the history and background of this project, including a detailed description of the system upon which the project was based (the CPL battery), and discusses Smalltalk and object-oriented programming in general.

Chapters III and IV give the detailed anatomy of the Smalltalk-80 battery simulation framework and library.

Chapter V gives an illustrated tour of the battery simulation.

Chapter VI contains a discussion of a number of general questions and points raised by this research.

Chapter VII summarizes the project's results and conclusions.


Chapter VI -- Discussion

This chapter discusses a number of issues raised by this investigation. The bulk of the discussion is centered on the emergence and use of object-oriented frameworks, and their impact on the design process and the software lifecycle. Later sections discuss the impact of object-oriented techniques in general, and the Smalltalk-80 system in particular, on the design and implementation of realtime laboratory applications.

The chapter begins with a discussion of object-oriented frameworks and environments. This section explains what distinguishes them from conventional programming techniques. Next, the problems associated with skeleton programs are discussed. This is followed by a lengthy examination of why software design, which is difficult under any circumstances, is particularly difficult in the presence of changing requirements. Subsequent sections discuss designing to facilitate change, and the tension between specificity and generality.

These discussions are followed by an examination of certain software lifecycle issues. The so-called maintenance phase and the question of whether to reuse or reinvent software are covered in these sections.

The final parts of this chapter examine object-oriented programming and realtime applications, as well as the problems of designing general data structures in Smalltalk.


Object-Oriented Frameworks

An object-oriented framework is a set of classes that provide the foundation for solutions to problems in a particular domain. Individual solutions are created by extending existing classes and combining these extensions with other existing classes.

The Battery simulation described herein is a framework for constructing realtime psychophysiological application programs. MacApp [Apple 85] is a framework for constructing Macintosh application programs. It is in effect a generic application program that provides standard Macintosh user interface and document handling capabilities. The Lisa Toolkit [Apple 83] was an earlier package that used object-oriented techniques to integrate applications into the Lisa desktop environment. The Smalltalk-80 Model-View-Controller triad (MVC) is a framework for constructing Smalltalk-80 user interfaces [Goldberg 84].

Frameworks are more than well written class libraries. A good example of a set of utility class definitions is the Smalltalk Collection hierarchy. These classes provide ways of manipulating collections of objects such as Arrays, Dictionaries, Sets, Bags, and the like. In a sense, these tools correspond to the sorts of tools one might find in the support library for a conventional programming system. Each component in such a library can serve as a discrete, stand-alone, context independent part of a solution to a large range of different problems. Such components are largely application independent.
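
A fragment like the following (a minimal illustration of my own, not drawn from the battery code) treats Bag and Transcript as just such stand-alone, context independent parts:

    | tallies |
    tallies := Bag new.
    #(high low low high high) do: [:level | tallies add: level].
    Transcript show: (tallies occurrencesOf: #high) printString, ' high trials'; cr.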

A framework, on the other hand, is an abstract design that much more closely resembles a skeleton program depicting a solution to a specific kind of problem. It is a template for a family of solutions to a set of related problems. Where a class library contains a set of components that may be reused individually in arbitrary contexts, a framework contains a set of components that must be reused together to solve a specific instance of a certain kind of problem.

One should not infer that frameworks are useful only for reusing mainline application code. The abstract designs of library components as well as those of skeletal application code may serve as the foundations for frameworks. The ability of frameworks to allow the extension of existing library components is in fact one of their principal strengths.

It is important to distinguish between using library components in the traditional fashion and using them as part of the basis for a framework. For example, most of the classes in Chapter IV (the battery library) of this document are used by the battery simulation primarily as traditional, discrete library components. However, the classes in Chapter III (the battery framework) illustrate a set of three concrete applications derived from a generic psychophysiological experimental design. Each application was created by defining application specific subclasses that selectively overrode and extended the behavior of the abstract battery design. A finished application is composed of a cooperating set of specialized framework objects supplemented by library and system components. The view and controller classes in the Interface-Battery category are examples of system components that have been specialized by the battery simulation.

Recall that one way to look at a framework is as an abstract design. Such a design is extended and made concrete via the definition of new subclasses. Each method that a subclass adds to such a framework must abide by the internal conventions of its superclasses.

Another type of framework is a collection of abstract component designs. A library of components will, in this case, supply at least one, and perhaps several components that fit each abstract design.

The major difference between using object-oriented frameworks and using component libraries is that the user of a component need understand only its external interface, while the user of a framework must understand the internal structure of the classes being extended by inheritance. Thus, components are "black boxes" while frameworks are "white boxes". Clearly, inheritance-based frameworks require more training to use and are easier to abuse than component-based frameworks, but they allow application-dependent algorithms to be recycled more easily.

A few examples from the battery simulation should serve to illustrate these distinctions. The abstract classes BatteryItem and BatteryParameters constitute the roots of an inheritance framework that defines generic experiments. The subclasses WordOddball, ToneOddball, and SternbergTask, along with WordOddballParameters, ToneOddballParameters, and SternbergTaskParameters define three concrete applications derived from this framework.
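
(The actual definitions appear in Chapter III of the full manuscript; the sketch below, whose instance variable names are invented for illustration, merely shows the shape such an inheritance framework takes in Smalltalk.)

    Object subclass: #BatteryItem
        instanceVariableNames: 'parameters stimulusGenerator'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Battery-Items'

    BatteryItem subclass: #ToneOddball
        instanceVariableNames: ''
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Battery-Items'

Each concrete item supplies only those methods that distinguish it from the generic design it inherits.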

The abstract classes Stimulus and StimulusGenerator define a generic stimulus handling mechanism. The subclasses WordGenerator, ToneGenerator, and SternbergDisplayGenerator, along with WordStimulus, ToneStimulus, and SternbergStimulus define concrete realizations of these designs for each battery application that has been defined. Hence, the two hierarchies built around Stimulus and StimulusGenerator are inheritance frameworks as well. The four hierarchies rooted in BatteryItem, BatteryParameters, Stimulus, and StimulusGenerator can thus be thought of as together comprising a framework for constructing experiments.

However, the relationship between the BatteryItem and StimulusGenerator hierarchies merits additional scrutiny. Subclasses of BatteryItem do not have direct access to the states of their stimulus generation objects. They instead own instances of an appropriate kind of StimulusGenerator. They thus have access to these objects only via their external protocols. Hence, even though the class hierarchies for stimulus handling parallel those used to define the battery items, this need not be the case in general. In fact, any object that met the protocol assumptions made in a given battery item for its stimulus generator would, as a result of polymorphism, work correctly with that battery item. The battery items in this case define a component framework, in which any object that meets a given abstract design may replace any other.
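
In Smalltalk terms the arrangement might be sketched as follows (the selectors here are hypothetical; the point is simply that the item reaches its generator through messages rather than through shared state):

    BatteryItem>>presentNextStimulus
        "The generator is an owned component. Any object that answers the
        same stimulus generation protocol could be substituted for it."
        | stimulus |
        stimulus := stimulusGenerator nextStimulus.
        ^stimulus present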

If the relationship between parts of a framework can itself be defined in terms of an abstract protocol (or signature), instead of using inheritance, then the generality of all the parts of the framework is enhanced. In fact, as the design of a system becomes better understood (as it evolves) more component-based relationships should emerge to replace inheritance-based ones. One can think of such component-based relationships as an ideal towards which system components should be encouraged to evolve.

The stimulus hierarchies discussed above are examples of just this sort of evolution. In the CPL battery, as well as in earlier versions of the battery simulation, the stimulus generation code was intertwined tightly with other battery item control code. It became clear as I worked with this code that separating the stimulus generation code from the rest of the battery items would increase the modularity and clarity of both the resulting hierarchies. In addition, even though there is still a one-to-one correspondence between the concrete subclasses in each hierarchy, the relationship among these objects need not stay this way. Since the stimulus generation objects are now components of battery items, future items may mix and match these as they please.

Object-oriented inheritance allows extensions to be made while leaving the original code intact. The original root objects of a framework may also serve as the basis for any number of concrete extensions. A framework may be structured as a hierarchy of increasingly more specific subclasses. In contrast, a purely component-based reuse strategy requires that application code that orchestrates application independent components either be conditionalized or rewritten for each new application.

A framework may, in ideal cases, evolve from the sort of white box structure described above into a gray or even black box structure. It may do so in those instances where overridden methods can be replaced by message sends to black box parameters. Examples of such frameworks are the sorting routines that take procedural parameters seen in conventional systems. Where it is possible to factor problems in this fashion, it is certainly desirable to do so. Reducing the coupling necessary between framework components so that the framework itself works with any plug-compatible object increases its cohesion and generality. A number of authors have written on the desirability of using component-decomposition as an alternative to inheritance where possible (see [Halbert 87] and [Johnson 88] for instance). It should be a goal of the framework architect to allow a given framework to evolve into a component framework. The route by which a given inheritance framework will evolve into a component framework will not always be obvious, and many (most?) frameworks will not complete the journey from skeleton to component frameworks during their lifetimes.
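
Smalltalk's SortedCollection is a familiar small-scale instance of this black box style: rather than subclassing the collection to override a comparison method, the client hands the comparison in as a block parameter.

    | descending |
    descending := SortedCollection sortBlock: [:a :b | a >= b].
    descending addAll: #(3 1 4 1 5).
    descending asArray   "evaluates to (5 4 3 1 1)"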

White box inheritance frameworks should be seen as a natural stage in the evolution of a system. They may be intermediate stages in the evolution of the system, or permanent fixtures in it. The important thing to realize about them is that by providing a middle ground between task specific and general realizations of a design, white box inheritance frameworks provide an indispensable path along which applications may evolve.

Barbara Liskov, in a keynote address given at the OOPSLA '87 conference in Orlando, distinguished between inheritance as an implementation aid (which she dismissed as unimportant) and inheritance for extending the abstract functionality of an object. Liskov claims that in the latter case, only the abstract specification of the parent object, and not its internal representation, should be inherited. She was in effect advocating that only the black box framework style described above should be employed. Such a perspective ignores the value of white box frameworks, particularly in the face of changing requirements.

A failure to confront the challenges of design evolution is characteristic of most of the work that has been done on programming methodologies. It is not my intention to fault researchers in this area for this, however. The problems of system design are difficult enough without adding the enormous additional complication of moving target requirements.

One way of characterizing the difference between inheritance and component frameworks is to observe that in inheritance frameworks, the state of each instance is implicitly available to all the methods in the framework, much as the global variables in a Pascal program are. In a component framework, any information passed to constituents of the framework must be passed explicitly. Hence, an inheritance framework relies, in effect, on the intra-object scope rules to allow it to evolve without forcing it to subscribe to an explicit, rigid protocol that might constrain the design process prematurely.
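
The contrast can be seen in miniature in the following pair of hypothetical methods: the first reads the framework's instance variables directly, while the second is told everything it needs through its argument.

    ToneOddball>>presentStimulus
        "Inheritance framework style: parameters and stimulusGenerator are
        inherited instance variables, implicitly in scope here."
        stimulusGenerator playTone: parameters targetFrequency

    ToneGenerator>>playTone: aFrequency
        "Component style: no framework state is visible; the frequency
        arrives as an explicit argument."
        ^self synthesizeToneAt: aFrequency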

It is possible, albeit difficult, to design good class libraries and frameworks in a top-down fashion. More frequently, good class libraries and frameworks emerge from attempts to solve individual problems as the need to solve related problems arises. It is through such experience that the designer is able to discern the common factors present in solutions to specific problems and construct class hierarchies that reflect these commonalities. It is the ability of inheritance hierarchies to capture these relationships as they emerge that makes them such powerful tools in environments that must confront volatile requirements.

Much of the remainder of this section looks at the impact of frameworks on the software design process and the software lifecycle, with a particular emphasis on the problems surrounding application environments characterized by rapidly changing requirements. We will compare the framework-based approach with conventional solutions to such problems, such as application skeletons and subroutine libraries, and examine those areas addressed by frameworks where no clearly recognized alternative methodologies currently exist.


Environments

An object-oriented application construction environment, or environment, is a collection of high level tools that allow a user to interact with an application framework to configure and construct new applications. Environments are referred to as Toolkits in [Johnson 88]. Examples of environments are Alexander's Glazier system for constructing Smalltalk-80 MVC applications [Alexander 87], and Smith's Alternate Reality Kit [Smith 87]. A framework for a given application domain can often serve as the basis for the construction of tools and environments for constructing and managing applications.


Getting Skeletons Back in the Closet


One has only to examine the state of practice in many research application domains to realize how much they need better tools and techniques. There are certainly some problems with the approaches taken in the research described herein. To put things in perspective, it might serve us well to look at how much worse the status quo is.

Let's examine the ways in which realtime psychophysiological application programs are developed in a typical laboratory environment. (The author has spent a considerable amount of time working in this domain. See [Heffley 84].)

At the University of Illinois' Cognitive Psychophysiology Laboratory, research applications are normally written by scientists and graduate students, using Fortran 66 and a structured preprocessor. Realtime device access and time-critical functions are performed using a large library of fast assembly language subroutines. (See [Foote 85].) Some might ask whether a language such as Pascal or C might be a better choice for this sort of application programming. In fact, Fortran retains a number of advantages over these other languages for scientific programming. These include de facto support for separate compilation, good support for subroutines that manipulate arbitrarily dimensioned multidimensional arrays, and very good floating point support. In any case, so far so good.

A given group of students working in such a research environment will exhibit tremendous diversity in their levels of programming ability. Some students will have extremely strong computer backgrounds; others will have minimal programming backgrounds. One result of this is that novel application programs are developed by those with the better computer skills, and become templates, or skeleton programs, upon which others build.

The person writing such a skeleton program is seldom interested in solving anything more than the problem at hand. The program is only a tool for conducting the current experiment. A programmer in a research environment will often be in a poor position to plan for future versions of a program, since research is often a somewhat fuzzy, exploratory enterprise. The initial version of a given program might be a short, spare shell that performs the rudiments of a given experimental design and nothing more. Since these initial programs are usually thought of (at least at the time) as one-shot, disposable programs, they are frequently underdocumented or undocumented. (The question of what constitutes a truly well documented program is a very interesting one I will not try to address here.) Still, things have not yet gotten unmanageable. The production of disposable programs may be a reasonable practice when applications will consistently be short and trivial, and where successive applications will not be required to try to build on previous ones.

Trouble can begin when new problems arise that resemble, in some respects (but not others), the one for which an original program was written. This is a familiar situation in most application domains, but occurs with much greater frequency in some research programming domains, again, because such is the nature of research. When this point is reached, the programmer is faced with the following choices:

He or she can write a new program "from scratch", perhaps based in some respects on the original program. This alternative is merely the generation of a new "disposable" program.

The programmer can generalize the existing program to accommodate both the old and new requirements. In some instances, the existing program can be made to accommodate the new requirements merely by altering overly specific code. (An example of this is a program written to collect, say, 2 channels of data, when it could nearly as easily have been written to collect "n" channels.) I call this approach "parameterization".

More frequently, accommodating multiple requirements in a given program will require "conditionalization", the addition of flag parameters or other variant tag tests that direct the flow of control to code that implements either the old or new requirements.

Global variants can be eliminated very elegantly using an object-oriented inheritance hierarchy. The base class containing tag checks becomes an abstract superclass. At each point where a tag is tested, a method dispatch to self is performed instead. (Such variant elimination might even be automated.) Each variant becomes a subclass of the new abstract class. The methods in these subclasses encompass only the code that distinguishes them from the abstract superclass. The result is much more readable, since the code that distinguishes each variant from the common ancestor is collected together, rather than scattered among case statements all over the parent module. Variant elimination and the generation of such abstract superclasses thus promote the emergence of frameworks.
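
A schematic before-and-after sketch (the selector and tag names here are invented; the actual battery classes appear in Chapter III) shows the transformation:

    "Before: a conditionalized mainline in which a tag directs control flow."
    runTrial
        itemType == #toneOddball
            ifTrue: [self presentTone]
            ifFalse: [self presentWord]

    "After: the tag test becomes a dispatch to self in an abstract superclass..."
    BatteryItem>>runTrial
        self presentStimulus

    "...and each variant becomes a subclass supplying only what differs."
    ToneOddball>>presentStimulus
        self presentTone

    WordOddball>>presentStimulus
        self presentWord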

The final alternative is to copy the original program. This copy is altered to meet the new requirements. I call this approach metastasization. (The somewhat pejorative connotation is intentional.) My experience has been that this is the approach most often taken in informal communities of programmers when faced with rapidly changing requirements. It is replete with problems. To name a few: New programmers usually understand only those localized portions of the programs that they are called upon to change. New documentation is rarely added, and is often incorrect. Worse yet, obsolete code and documentation often litter the derived program. A lack of comprehension of the global structure of these programs frequently leads to errors. (It is only fair to note here that some of the same sorts of comprehensibility problems can arise in massive object-oriented inheritance hierarchies.)

As such programs grow more complex, no single individual completely understands what they do. Reading and fully comprehending such code is one of the most difficult intellectual challenges one might have the misfortune to encounter. In part for this very reason, these programs take on lives of their own. Starting over would require that somebody figure out what these programs do and rewrite them properly. Since this would usually require a large investment of the time of someone who is not a programming professional and has other things to do, these monsters live on. The behavior of these programs, and not any explicit experimental design, comes to define what the de facto designs of certain experiments are.

One of the biggest difficulties with allowing a plethora of variants on an original skeleton to proliferate is that the chore of maintaining or improving them increases n-fold, since any bug fix or improvement must be made to every copy. The version management problems, should multiple systems be involved, become similarly overwhelming.

What can be done to drive stakes through the hearts of these creatures?

Let me review the lifecycle alternatives I've just given:

Disposable programs -- Good for short orders, but wasteful

Parameterization -- Generalization, internal complexity

Conditionalization -- De facto variants, internal complexity

Metastasization -- Skeleton programs, version proliferation

Brad Cox of Productivity Products has proposed the notion that object-oriented programming promotes the generation of what he calls software ICs [Cox 86]. It is interesting to observe how the evolution of IC interfaces has paralleled the paths described above in many respects. MLSI ICs were clean, simple, single-purpose black boxes. New ICs were often designed by treating a previous design as a skeleton, with the features that distinguished the new design added in lieu of certain features of the base design. Today's VLSI controllers have much more complex interfaces that provide a variety of modes (conditionalization) and internal registers (parameterization). Rather than being able to learn a single external interface and treat the component as a black box, the designer is forced to become much more familiar with the internal structure of these new controllers.

Software vendors who wish to protect their source code have an interest in promoting frameworks as black boxes. It remains to be seen whether this is a practical approach. Vendors who retain their source code must nonetheless produce extensive documentation of the internal structure of their components. It is interesting to note that two of the most successful object-oriented frameworks, Smalltalk-80 and MacApp, come with full source code.

Both the skeleton and VLSI controller discussions above illustrate one of the major problems with brute force generalization attempts. Attempts to encompass too much functionality in a single component can greatly increase that component's internal and external complexity. A component with an excessive number of modes will suffer from a perception of high complexity and will require a larger number of operations for initialization and configuration.

The CPL battery was in many respects a radical antithesis inspired by the woes of disposable programming and metastasization. It pushed the alternative strategies of parameterization and conditionalization to their limits, and then headed in the direction of employing a crude, brute force inheritance scheme to manage some of the subapplications as they evolved. The success of the techniques used in the CPL battery led me to investigate the use of a true object-oriented design as exemplified by the battery simulation. (Another factor was the superior facilities that object-oriented systems provide in the way of user interface components.)

Object-oriented frameworks allow a base application to serve as a dynamic skeleton for a number of derived applications. Unlike a change to a typical skeleton program, a change to the root classes of a framework will affect all the subclasses that inherit its behavior. By allowing application specific code to reside in subclasses, frameworks encourage the designer to develop and enhance the base classes. Enhancements to a skeleton affect only those applications written using it after the enhancements are made. Subsequent enhancements to a skeleton cannot affect programs previously derived from it. Frameworks used in this fashion are perhaps one of the best examples of how one should properly exploit the code sharing facilities provided by object-oriented inheritance.

The CPL battery items have repeatedly benefited from the ability that frameworks provide for sharing improvements made to their generic cores. Each time a feature is added to this core, all the battery items are able to use this feature the next time they are compiled. Managing parallel changes to every member of a large family of applications on a program by program basis can become so time consuming as to be prohibitive.

Skeletons and libraries in conventional systems can be thought of as lying at two poles along a generality axis. Frameworks bridge this gap by serving as waystations at which components can reside as they evolve from applications to framework components to library components. Note that not all components will complete such a journey. Some will never even begin. However, as a system matures, the responsible designer will make every effort to factor common code out of application classes into abstract frameworks. He or she will also take pains to find application independent components and place them in libraries.

Skeleton-based approaches have become increasingly popular over the last few years as the general level of application program complexity has increased. Programmers who several years ago might have been producing simple applications with command-based glass teletype style user interfaces are now faced with demands for complex, event-driven applications in multiprocessor distributed environments. As a result, programmers who might have written applications from first principles in the past must rely more and more on skeletal templates.

Application frameworks like the Smalltalk MVC framework and MacApp have already demonstrated the advantages of object-oriented frameworks over skeleton-based programming in the user interface domain. Enterprises like the battery simulation are a first step towards demonstrating the power of this approach outside the cozy realm of those domains well known to computer scientists.


Why Software Design is So Difficult

To understand how object-oriented frameworks might ease the plight of the software designer, it might prove useful to first examine the difficulties present in the software design process in general, with a particular focus on how traditional design practices address volatile requirements.

At the most fundamental level the reasons for the difficulty of software design are as varied as each individual set of requirements, implementation tools, and programmers. However, it should not be surprising that design is at the heart of most difficulties in software systems, since design encompasses the structural complexity of the solution that must match a given specification and the low level algorithmic idioms that must express this solution. No matter how carefully prepared or detailed a system definition and requirements specification might be, it should only specify what is to be implemented, and not how this is to be accomplished. In the end, many easily specified tasks might still be "easier said than done". Likewise, the traditional distinctions between programmers and analysts reflect a recognition that it is harder to design systems than it is to implement them.

Software design is difficult in part because software designers work in a universe of arbitrary complexity where their creations are not constrained by physical laws, the properties of materials, or, in the case of many application domains, by the history and traditions of earlier designs. Hence, the number of degrees of freedom that confront the software designer can dwarf those faced by designers in more conventional engineering disciplines.

Adding to this difficulty is the fact that only a few application domains have been studied thoroughly enough so that their design principles are well understood. Computer scientists have spent a massive number of person/years investigating the principles of compiler construction and operating system design. These efforts have allowed the construction of truly useful automatic tools, and allow the analyst working in these areas to "stand on the shoulders of giants" when confronted with design problems drawn from these domains.

However, the hapless designer confronting a problem from outside these areas is nowhere near so fortunate. The triumphs of another application domain may map poorly, if at all, to a new discipline, and the design principles of a given domain may be quite subtle. It took four centuries to discover that mimicking the birds was a bad way to design heavier-than-air vehicles that flew. The solution that is now commonly used, the airfoil, seems upon naive examination to be a peculiar place in the space of all possible solutions to fix as a fundamental constraint. Yet, our fleets of flying machines are now designed with just such a basic constraint. The point of all this is that for many application domains, we simply have not discovered the basic pivot points around which other constraints may be relaxed. Each domain may await solutions with the elegance of our parsing techniques, or synchronization primitives, before other design constraints can be wrestled into place. In this sense, design can be said to be "experience limited" or even "idea bound".

Another general problem with the massive constraint satisfaction problem facing the software designer is (apart from the fact that there are many more constraints than face other types of designers) the fact that design parameters vary along vague, difficult to conceptualize (let alone visualize), frequently non-scalar dimensions. A bridge designer can gauge the effect of adding additional concrete to a roadbed in terms of weight, volume, and the like. However, the effects of manipulating software design parameters (coupling, cohesion, number of modules, module size, intermodule protocols) and the effect of changes to these on functionality, reliability, generalizability, reusability, etc. are difficult (at best) to accurately estimate.

One of the worst effects of this state of affairs is that it is not always easy to tell if a specification is unreasonable. Consider the example of constructing an automobile that can fly using parts found at local shopping malls. I will submit that some of the problems that confront software designers are in a very real practical sense every bit as outlandish as the task given above. The specification might be quite clear, and the implementation materials and tools well understood, and the task might well be theoretically possible. Yet, in the absence of experience to guide the design process, a designer might not be able to distinguish between the software equivalents of the merely difficult task (building an automobile from parts) and the much more difficult task of allowing it to meet all the previous constraints on automobiles and fly as well. If the solution domain is such that there is no major axis of impracticality, but some number of axes along which a design must be optimized, the task can become quite daunting.

How can these sorts of problems be addressed? Clearly we must give the designer the means to limit the huge number of design decisions he or she must make, and to identify those aspects of the design that have a major impact on the suitability of the resulting product to the task for which it is being designed.

How might the number of design decisions be limited? One approach is to allow the designer to refer to an existing body of work that meets similar constraints, if not perfectly, then at least in an aesthetically reasonable manner. Kristina Hooper [Hooper 86] points out that software design often has as much in common with disciplines like architecture as it does with disciplines more traditionally associated with it (such as engineering, mathematics, and physics). From this perspective, questions of aesthetics, style and experience play a much more central role in the design process, and judgement as well as empirical constraints may legitimately serve as constraints on the design process. Hooper points out that in software design, as in architecture, fads and even "schools" of design orthodoxy can rise and fall. Of course, previous "conventional" design experience often comprises a rich lode of material for the designer as well. However, in those domains where previous principles are not well established, intangibles like experience, judgement and aesthetics can reasonably be brought into play. It is probably also the case (to paraphrase [Kernighan 76]) that with design as well as with programming, good practice is developed as much from looking at actual working systems as from "abstract sermons". Disciplines like architecture and engineering can teach us much about the value of case studies. Object-oriented programming, with its emphasis on reading code, provides more opportunity for this than most other approaches.

Fred Brooks [Brooks 87] makes a similar point in his paper "No Silver Bullet: Essence and Accident in Software Engineering". He states: "We can get good designs by following good design practices instead of poor ones. Good design practice can be taught." However, he goes on to say: "Whereas the difference between poor designs and good ones may lie in the soundness of the design method, the difference between good designs and great ones surely does not. Great designs come from great designers. Software construction is a creative process."

A second point at which to limit design chaos is at the interface between the software system being designed and the substrate upon which it is being built. Hence, the programming environment under which a design is undertaken can do much to limit the number of arbitrary choices that a designer must make. One way it can do so is by providing a large library of fixed, well understood, reusable components (see [Harrison 86]). A knowledge-based system can even apply design schemas to coding situations into which template idioms can be fit (see for instance [Lubars 86]). In either case, having a set of "canned" idioms serve as low level reference points in the design constraint satisfaction process is of value to both human and automatic "designers". Most designers recognize that working up from a set of fixed low level components brings many advantages to an ostensibly "top-down" design exercise. It is frequently observed that design is an iterative process in which fixed points at either end of the design "space" reach in from both the top-down and the bottom-up to attempt to fix additional decisions.

A third aid to the designer is the set of rules and guidelines that have been developed to guide the design process itself. Our variations on the ancient edict to "divide and conquer" and on the notion of "need to know" help guide the designer to limit the internal complexity of his or her designs. Notions like stepwise refinement, hierarchical decomposition, and information hiding guide the designer to keep the intermediate nodes in the design graph intellectually manageable. Admonishments to keep cohesion high and coupling low (see [Pressman 82]) give the designer goals to try to achieve as he or she wades through the muck between the desires expressed in the specifications and the reality of which problems already have solutions at hand. However, despite the best of intentions, sometimes the problem has ideas of its own. Only the most hardened theoretician could believe that for every problem, no matter how inherently complex, sloppy and idiosyncratic, there must be one or more elegant, tidy solutions.

Programming tools that help express complex constraints among system components can help somewhat as the going gets thicker. Alan Borning's ThingLab [Borning 79] and other work on constraint satisfaction systems show that more powerful systems and notations can aid the designer in dealing with complex problems. Borning also cites Simon's observations about "nearly decomposable systems" that can almost, but not quite, be composed into cohesive units. Borning's system displays problem solutions built from objects that are more or less cohesive units ("wholes") in themselves, while they at the same time comprise the components ("parts") of higher level "wholes". Such part/whole type organizations are frequently seen in both artificial and natural systems.

For certain classes of problems, very high level languages, or declarative or knowledge-based languages can force the designer down paths that are known to have a high likelihood of leading to success.

Tools that help the designer visualize the structure of the system he is designing can aid the design process as well. Systems like SADT [Ross 85] provide facilities for divining and visualizing the structures of existing systems, including ones under construction.

Still, the situation must at times seem fairly dismal to the large system designer. He or she works in a realm where the normal physical bounds on system complexity do not exist, and the roots of complexity are poorly understood. The benefits of the insights of Eli Whitney and Henry Ford (i.e. standard interchangeable, reusable parts) are still largely unavailable to software designers in most application domains. (This lack of standardization is in part the fault of a tension between the desire for generality, and a desire to achieve simplicity in individual designs. More on this later.) Programming tools can help at the lowest levels, but as the size of a system increases, the entire effort can become increasingly design intensive. Finally, good tools to help the designer understand the structure of existing systems are only beginning to appear.

All of this might seem bad enough, but wait, it gets worse...


Designing in the Presence of Volatile Requirements

Software design for all but the most trivial applications is a difficult process under even the best of circumstances. In the traditional waterfall software lifecycle model, software design is performed once and for all after initial system definition and software requirements assessments have been made. Forcing the designer to confront the probability that system requirements will change throughout the lifetime of the software product adds a new dimension of complexity to the design problem. Few treatments of the software design process address this possibility. Changing requirements are nonetheless a fact of life in most application domains.

Dave Brown, in an article entitled "Why We did So Badly in the Design Phase" [Brown 84] points out that by putting all the design effort up front, we run the risk that that effort will be deemed to be difficult to abandon, even when a better solution is identified later in the process of product development. In the horror story he cites, a design group was reluctant to give up the effort it had put into a cumbersome design, simply because of how much effort had already been expended in developing and documenting that design. In the end, a fresh team member shows the original team the error of its ways.

One could construe this as merely an example of Brooks' [Brooks 75] admonition to "plan to throw out the first one. You will anyway." I think the example might serve to make a stronger point than that. That is, since change is inevitable, we should anticipate it and plan for it.

There are many reasons why the designer often has the requirements rug pulled from beneath him. Sometimes, change requests come from users who do not fully comprehend the impact that software changes might have on the structure of a software product. This problem is addressed below. Certainly one way of addressing the problem of volatile requirements is to attempt more vigorously to prohibit them. My feeling is that, as helpful as this might be, it is simply not realistic to expect that this is possible in many (perhaps most) cases. I would argue instead that we must accept volatile requirements as inevitable, and develop design techniques and tools to cope with them.

One of the more difficult problems facing the software designer is the myth of software malleability. This myth holds that since a given system is implemented in software, it must be arbitrarily easy to change. This belief is common among users and customers with no or (worse yet) modest amounts of programming experience. It is reinforced by experience with small programs, which can lead one to generalize incorrectly from the perceived ease with which minor changes can be made. The naive seem to think that this phenomenon scales directly in such a way as to make arbitrary changes to large systems "trivial".

The truth is that in many instances the structural edifice that constitutes the software architecture of a system is its most "rigid" part. As a system grows in complexity, the lattice of architectural dependencies between its constituent parts can become more and more dense. Such patterns of coupling among the components of a system reflect real complexity in the architecture of the system that results directly from the complexity inherent in the original requirements and the design choices that were made in divining particular solutions to the problems posed by those requirements. This perspective makes it easy to see why sometimes emulations are the most effective way to keep existing useful systems in the field working. In these instances the observation that "software is hard, and hardware is soft" is appropriate. The person charged with maintaining the system in these cases may have correctly recognized that the instruction set level interface was the most structurally simple place in the system to intervene.

Another source of the impression that software should be more malleable than it is lies in a failure to recognize that the order in which design decisions are made affects other decisions in ways that are difficult to undo. Design in practice is neither truly top-down nor bottom-up, but it is a hierarchical process. At the top of the hierarchy are the broad outlines of the product being designed. At the bottom are those things we already know how to do that we recognize as likely to play a role in the solution to the problem at hand. The designer fixes decisions at a given level by considering the impact they will have on their ancestors, neighbors, and descendants in the design hierarchy. At each level, decision making is influenced by both top-down and bottom-up feedback. The designer attempts to come to an optimal solution to the issues present at each level of the hierarchy before turning his or her attention to the next level. As decisions at each level are fixed, they become fixed constraints around which other constraints are relaxed. Attempts to violate the structure of a system so constructed can be very disruptive. To give an architectural analogy, it is much more appropriate to make changes to the wiring plan of a building after the floor plan is determined, and before the wallboard is up. Considering this level of detail too early can be counter-productive, since the wall locations have yet to be determined. Later changes can require radical backtracking (such as tearing down the wallboard).

Incorrect perceptions of software malleability are but one phenomenon that drives change and evolution in software systems. Changing requirements, the need to add additional capacity, the need to repair existing problems, and increased user sophistication are other factors that can foster system change. This list is by no means exhaustive. I'd like to concentrate here on externally initiated change requests that result in a system's having to meet what are in essence dynamic "moving target" requirements.

The requirement that a system designer somehow design to facilitate change adds a whole new dimension of difficulty to the design process. The easiest position to take is to ignore all but the current system requirements. This can be done, however, only at the cost of passing up obvious opportunities to generalize system components in ways that anticipate future requirements. This tension between generality and simplicity (and specificity) is a pervasive problem facing the designer. Does the designer build a simple component with poor generality but (perhaps) a simple, cohesive interface, or a more general, but (perhaps) more complex component that might require more parameters and a more complicated interface? How does the designer gauge the potential for possible reuse, especially in the face of changing requirements? To put it another way, does one narrowly solve only the problem at hand, or does one spend considerable additional effort to build general components that solve a broader range of problems, some of which one might (or might not ever!) have to address at some future date? If one attempts to do this, how does one know that he or she has anticipated the right problems? Where does one draw the line?

Assume that any component lies somewhere along a generality continuum. At one end, there are actions, such as string manipulation, basic display primitives and the like, that obviously are sufficiently context independent, and of sufficient general utility to justify the effort needed to make them general tools. At the other end of this continuum are those portions of any programming task that are manifestly specific to the application itself. In between is a broad gray area that up until now has been claimed by default by the application specific end of the continuum. It is in this area that I think application frameworks can help the designer the most.

Recall that frameworks can allow the abstract designs present both in specific applications and in libraries to be customized, extended, and reused. Since a framework can encompass a family of diverging, related components, less pressure is placed on the designer to anticipate at the outset all the demands that might be placed on a given component. The designer can address the problem at hand, knowing that the resulting component can serve as the root of a framework as related problems arise. Also, the designer can seize opportunities to exploit potential generality, since a general component can be recustomized to meet specific application demands if that proves necessary.

Recall, too, that as such frameworks emerge, any enhancements made to their core components will be shared by other elements of the framework. In conventional systems the only way to accommodate divergence in an evolving family of applications is to maintain separate sources, and share only those application independent components that are characteristic of conventional libraries. This proliferation of source files and versions can be a nightmare for the designer entrusted with maintaining them.

I believe that the use of object-oriented application frameworks can help the designer to design in such a way as to facilitate change. Such frameworks can also foster the sharing of generic skeletons and somewhat application specific components, and can help the designer avoid the haphazard, patchwork organization frequently seen in evolving systems during the maintenance phase of the software lifecycle.

A final problem confronting the designer is designing systems to be safe and reliable. As if designing the system to work the way it is supposed to were not enough, the designer is confronted as well with the responsibility for making sure that his or her system performs reasonably in the face of input or uses for which it was not intended. Here too, the process of making a system more robust is much more complex than that facing designers in more conventional disciplines. An aircraft engineer can increase safety by increasing design margins. If only typing each line of code five times would make a module five times more reliable.

One way to aid the designer in dealing with these problems is to provide a greater armamentarium of off-the-shelf, proven components. Such components can be more reliable since they will often have been tested and proven in previous applications. Since it is known that components in such a library may be frequently reused, the software engineer can justify lavishing more attention and resources on them than might be justified were they being used only once. The potential range of application of such components can be substantially increased using schemes like the application framework approach.

The plight of the designer is daunting, but not hopeless. Despite the many perils that face the software designer, programs that appear to work still manage to appear. We are only beginning to understand the real complexity of these undertakings, and to construct tools and techniques to come to grips with them.


Designing to Facilitate Change

Over the last twenty years, computer scientists have made impressive progress in understanding how to design and implement computer programs given a fixed set of requirements. Much less is known, however, about how to design systems to facilitate their orderly evolution. Designing to facilitate change is a much more demanding task than designing simply to meet fixed requirements. Managing systems in the presence of volatile, changing, highly dynamic requirements can be a daunting task. Lifecycles that are characterized by a steady infusion of new requirements are qualitatively different from the traditional "waterfall" lifecycles.

[Booch 86] has discussed the use of object-oriented development techniques as a partial lifecycle method. [Jacobson 86] discusses how object-oriented approaches can support change in large realtime systems.

It is far more difficult to design a framework that attempts to accommodate future expansion and extension requirements than it is to merely meet the requirements at hand. How does one trade off the simplicity of solving only the current problem with the potential benefits of designing more general components? There is (to paraphrase Randall Smith) a tension between specificity and generality.

Kent Beck [O'Shea 86] claims that turning solutions to specific problems into generic solutions is a difficult challenge facing system designers using any methodology. To quote: "Even our researchers who use Smalltalk every day do not often come up with generally useful abstractions from the code they use to solve problems. Useful abstractions are usually created by programmers with an obsession for simplicity, who are willing to rewrite code several times to produce easy-to-understand and easy-to-specialize classes." Later he states: "Decomposing problems and procedures is recognized as a difficult problem, and elaborate methodologies have been developed to help programmers in this process. Programmers who can go a step further and make their procedural solutions to a particular problem into a generic library are rare and valuable."

Useful abstractions are (more often than not) designed from the bottom up, and not from the top down. We create new general components by solving specific problems, and then recognizing that our solutions have potentially broader applicability.


Specificity vs. Generality

It would seem that the point at which it pays to design a general rather than a task specific component is different in an object-oriented environment than in a conventional environment. The main reason is that polymorphism and inheritance increase the likelihood that any given component might itself be usable in a wide range of contexts, and that other components can be reused within it.

However, generalization has a cost. Attempting to design a general component requires a great deal more thought and effort than designing a narrow, single purpose component. Furthermore, such efforts are not at all certain to succeed. It is not at all obvious how many specific problems will be special cases of a particular general solution. It may be the case that a component can be made more general only at the cost of increased internal or external complexity. (For instance, it should be clear that any given component could be made to solve two problems with the addition of an external "switch" and a number of internal tests on this switch.) In all but the most obvious cases, it may not be easy to determine how useful it is to make a given component more general. The designer will at times engage in the speculative design of general purpose components that solve problems that may never arise.

A general design can be like a Swiss army knife. The danger is that a complex tool may perform many tasks, but none well. As with the Swiss army knife, this approach may sometimes be just what a given job requires, and may be inadequate in other cases.

The goal of the conscientious designer is to find simple solutions that encompass not only the problem at hand, but a wide range of others as well. These are the holes-in-one that more than make up for the many blind alleys that the designer may have explored.

As has been noted before, the use of inheritance hierarchies can allow an evolving set of components to occupy intermediate states between being completely specific to one problem and being largely context independent. The use of subclassing to allow a given core to solve several related problems provides an alternative to conditionalization, and can allow the outlines of an abstract superclass to emerge as a given set of classes is reapplied to successive sets of requirements.

Inheritance can help the system designer avoid the loss of generality that can occur during mid-life phases of the software lifecycle when the pressures to push a clean design beyond its original specifications inevitably arise. A well designed software component should have but a single purpose (that is, it should exhibit a high degree of cohesion). (See for instance [Pressman 82].) However, it is often easier to pervert a clean single purpose component to make it serve multiple roles than it is to build new components from the ground up. Thus, a previously general component may have task specific features added to it.

The pressure to do this may be particularly hard to resist when a component that was conceived of by the designer as truly general is currently being used for only one or two purposes. Short sighted managers are often willing to trade long term, hypothetical generality for such short term gains. The wisdom of such trade-offs can only be gauged in hindsight. The use of a framework-based approach can reduce the pressure to lose generality by allowing a generic core component to retain its generality. Application specific features are implemented in subclasses using inheritance.

In this way, object-oriented systems provide a continuum along which system components may evolve from specificity to generality. A given class hierarchy containing an entourage of task specific subclasses provides an intermediate stage that is much harder to manage using conventional approaches.

The ability to allow a family of applications to be managed as they evolve and diverge is very valuable in environments characterized by highly dynamic, rapidly changing (even fickle) requirements. Laboratory research applications are, by the very nature of research itself, such environments.


Objects, Evolution, and Maintenance

It is not difficult to argue that evolutionary lifecycles are the rule rather than the exception in practice. Software lifecycle researchers are increasingly recognizing the distinctive problems that must be addressed during a software product's evolutionary phase. (See, for instance, [Lehman 80] and [Boehm 85].) Fairley observes that "the distribution of effort between development and maintenance has been variously reported as 40/60, 30/70, and even 10/90" [Fairley 85]. Pressman [Pressman 82] quotes figures (from [Lientz 80]) that identify four software maintenance categories: perfective, corrective, preventative, and adaptive. Corrective maintenance is the process of diagnosing and correcting errors (i.e. the bug fixing that is usually thought of as the maintenance phase of the software lifecycle). Adaptive maintenance consists of those activities that are needed to properly integrate a software product with new hardware, peripherals, etc. Perfective maintenance is required when a software product is successful. As such a product is used, pressure is brought to bear on the developers to enhance and extend the functionality of that product. Preventative maintenance activity occurs when software is changed to improve future maintainability or reliability or to provide a basis for future enhancements. [Lientz 80] reported in a study of 487 software development organizations that a typical distribution of these activities was:

This observation, and the observation that 50% of maintenance activity is "perfective", would seem to support the contention that an evolutionary phase is an inevitable (and quite important) phase in the lifecycle of a successful software product.

The term "maintenance" phase has a certain pejorative quality about it. It has come to be associated with the drudgery of dealing with, debugging, and mopping up after other people's code. In large organizations, maintenance tasks are often assigned to junior programmers. After having spent some time doing maintenance tasks, programmers can then graduate to more glamorous development work.

The term maintenance implies that this part of the software lifecycle consists of the software equivalent of checking the oil and changing the filters. I would contend instead that it is at this stage in the evolution of a software system that many of the most important and difficult design challenges are encountered.

In many respects, maintenance is a more demanding task than development. The maintainer must infer a complex design, frequently given inadequate or absent internal documentation, and make modifications that might cut across the grain of the existing design.

It is not surprising, then, that senior people in an organization might reserve for themselves the relatively pleasant, rewarding tasks of designing and implementing clean new systems from initial requirement sets, and leave to others the task of molding existing systems to changing requirements. This is because doing a new design for any one set of requirements is a relatively simple, high profile task, while adapting another programmer's existing system to requirements that contradict those of an original design can be a difficult, tedious, and relatively thankless job.

This is an ironic corollary to the Peter Principle. The person best qualified to maintain a system component, its original designer, is promoted as a result of the success of this design effort to a position where he or she is no longer responsible for the component's further evolution.

The design clashes that occur during the maintenance/evolutionary phase of the software lifecycle that normally confound the original design can often be minimized by an application framework. Because application frameworks provide a middle ground between generalizing a single application to meet contradictory requirements (conditionalization) and spawning new versions (metastasization), they can provide a path by which sub-versions of the original application can gradually diverge from the original design.

By promoting the sharing, reuse and understanding of other people's code, object-oriented environments can help enhance the appeal of perfecting and adapting existing systems over that of designing new systems.


Reuse vs. Reinvention

There are limits to how complex a system a given person can construct, manage, and comprehend. No (sane) person would dream of attempting to single-handedly engineer an entire automobile, or a Saturn-V. Yet, many a contemporary software engineer harbors (without realizing it, in many cases) equally outlandish expectations about what it might take to construct certain software systems from first principles. Faced with reusing the work of others, many would elect to reimplement a given component instead. An aversion to code "not invented here" or even "not invented by me" is quite common in programming circles. Why are programmers so often inclined to rewrite code rather than reuse it?

Often the fig leaf of "source control" is cited as a reason why existing products must be reimplemented. Certainly ego is often the reason that code is reinvented. There are those who aspire to reimplement the world, with their name at the top of each module.

There are more benign reasons as well. One is the difficulty of reconstructing the assumptions and implicit design principles that underlie a given module by reading the code for it. The difficulty of comprehending what a given module does by merely reading its code is, I think, one of the most underestimated things in all of computer science. Also, as most programmers know, the state of the internal documentation in most production modules is typically quite poor (to say the least). (To jump ahead to learnability, the question is not why Smalltalk is hard to read, it is why we compare a programming style that requires extensive reading with one that does not.) In most environments, programmers are seldom called upon to (or seldom undertake to, in any case) understand much more than very localized portions of existing systems. Most such encounters take place during the so-called maintenance phase of a given product's lifecycle.

Hence, one of the major motivations for rewriting existing code is to ensure that the design assumptions and structure of that module are completely understood.

There seem to be but two ways out of this conundrum. One is that reusable components must be written using a level of abstraction that makes them easy to comprehend. The second must be that the effort to learn the internals of a given component must be rewarded by a high level of generality and reusability. That is, either we must only learn a component's external interface, or learning the internals must be worth it in terms of reuse potential. (One path leads to data abstraction and black box frameworks, the other to extensible white box inheritance frameworks.)

If a component is likely to be reused, then a greater level of effort to make it more general can be justified. Object-oriented languages can have just this impact. Polymorphism increases the likelihood that a new component will behave correctly right away in a number of existing contexts, and that that component will be able to operate on a potentially wide range of different objects. The judicious use of inheritance can allow the construction of very general abstract cores, (i.e. frameworks), that can then be extended and specialized as specific requirements dictate.

Note that frameworks occupy a position in between application code and library components in terms of generality. A component can be reused as a black box, the behavior of which is defined by its protocol (or signature). (Re)using a framework, on the other hand, requires that existing code be treated as a white box. Subclassing the components that comprise a framework requires that the programmer have a detailed knowledge of the internal structures and mechanisms in order to choose judicious overrides to enable the framework to implement the problem being addressed. Even in those cases (for example MacApp) where the override points are well specified, extending a framework might be said to require a "gray box" understanding of the framework.
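
To make the "gray box" notion more concrete, the sketch below shows the shape such an override-point contract can take. It is purely illustrative: the class names and selectors (ExperimentShell, runTrial, and so on) are hypothetical and are drawn neither from the Battery framework nor from MacApp; it simply assumes the common arrangement in which a fixed skeleton method calls hook methods that subclasses are expected to override.

"An illustrative framework sketch; not part of the Battery."
Object subclass: #ExperimentShell
    instanceVariableNames: ''
    classVariableNames: ''
    poolDictionaries: ''
    category: 'Framework-Sketch'

ExperimentShell methods for: running

run
    "The framework's fixed skeleton. Reusers leave this method alone and
    customize the hook methods it calls."
    self setUp.
    self numberOfTrials timesRepeat: [self runTrial].
    self tearDown

ExperimentShell methods for: hooks

setUp
    "Override point; the default does nothing."

numberOfTrials
    "Override point; the default supplies a single trial."
    ^1

runTrial
    "Override point; subclasses must supply the trial body."
    ^self subclassResponsibility

tearDown
    "Override point; the default does nothing."

ExperimentShell subclass: #ToyExperiment
    instanceVariableNames: ''
    classVariableNames: ''
    poolDictionaries: ''
    category: 'Framework-Sketch'

ToyExperiment methods for: hooks

numberOfTrials
    ^3

runTrial
    Transcript show: 'running one trial'; cr

"ToyExperiment new run"

Knowing that run is fixed, and that setUp, numberOfTrials, runTrial, and tearDown are the sanctioned places to intervene, is precisely the "gray box" knowledge the framework reuser must acquire.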

William Cook, in a forthcoming paper on the denotational semantics of inheritance [Cook 87], makes a similar observation. He claims that an object has two interfaces: an outer interface that is seen by code that uses the object, and an inner interface seen only by its subclasses.

Frameworks provide a way of reusing the sort of code that is resistant to more conventional reuse attempts. Object-oriented libraries provide reuse capabilities for application independent components that are analogous to (though considerably more powerful than) those seen in the utility and support libraries used in conventional environments. However, it is very difficult to use conventional systems to build general abstract algorithms that can be used in a wide range of different contexts. Hence, we can reuse whatever application independent components we might generate in the course of generating an application rather easily, but reusing the edifice that ties the components together so that they solve a problem of interest is usually possible only by physically copying the application. (This is the skeleton program approach.)

Approaches like using procedure and function parameters, and Ada generic modules address these concerns to some extent, but with nowhere near the generality of an object-oriented framework approach. One can make the case that the use of these techniques is a sort of brute force attempt to reap the benefits of object-oriented polymorphism and inheritance using conventional programming tools.


Frameworks and the Software Lifecycle

This section depicts the relationships among applications, frameworks, and library components. It attempts to illustrate how the use of frameworks can bridge the gulf that traditionally exists between application specific and application independent code.

The figure above shows an application program surrounded by a set of application specific subcomponents (the squares). The circles at the right represent application independent library components. The gray line shows the boundary between application specific and application independent code.

The figure above illustrates a phenomenon that can occur during maintenance. Expediency can force maintainers to modify a previously general component to meet the needs of a specific application. A loss of generality ensues. This can be seen in the way that the boundary between application independent and application specific code has shifted to encroach on what was previously application independent territory. In practice, this phenomenon is most frequently exhibited in somewhat application domain specific components that were originally designed for the given application, with an eye towards reuse. The pressures of the maintenance phase can result in the benefits of such foresight being squandered.

A variation on this phenomenon is when the previously general component is used as a skeleton for the new component. This leads to a proliferation of components derived from a given skeleton.

 

If a library component is used as the root of a component framework, the problems discussed above can be avoided. The figure above shows how the specializations needed to the original component can be embodied in a subclass of that component. This subclass will share most of the original component's code, and inherit any subsequent changes made to the original component. Note how the boundary between application specific and application independent code has been gerrymandered over into the library component's realm to encompass the application specific subclass.

The figure above depicts two applications (the lower rectangles) derived from an original application that has served as a skeleton for them. The large rectangles represent the mainline application code. The smaller rectangles are application specific subroutines. The large circles are library components shared by all three programs. Though all the applications share the same library components, they share no application specific code. Hence, any application specific changes required by all the programs must be made separately to each. This barrier between the original skeleton application and the two derived applications is represented by the thin gray line. This process becomes increasingly difficult as these applications evolve and diverge, since what should be common code may not be maintained as such in any orderly fashion.

This figure shows how an application like the one from the previous figure might be organized using a framework-based approach. On the application side, each application is a subclass (the small rectangles) derived from what is now a general core on the application specific side (the plain circle) and the library components. Notice how the gray boundary line now encloses only the two gray subclasses. The original application is now itself serving as the root of a framework. Since this inheritance framework is itself shared, it is drawn as a circle. Changes made to this common core will be reflected immediately in each derived application. Each application consists only of the code that distinguishes it from the roots of the frameworks, and is likely to be much smaller and easier to comprehend than the corresponding skeleton program. The identities of the application specific subroutines (the small boxes) will not necessarily remain distinct when a set of skeletal applications is transformed into a framework. (They do not in this example.)

This figure shows a dilemma that faces programmers using conventional tools when they are confronted with changing requirements that are at odds with an application's original specifications. They may either create a conditionalized variant that encompasses the functionality of both sets of specifications, or use the original application as a skeleton for a new application. The former approach increases the internal complexity of an application, since it becomes filled with case-like statements that test variant tag fields. On the plus side, each variant will continue to retain all the functionality of the original application as both evolve. The latter approach introduces all the problems associated with managing a family of skeletal applications.

A framework-based approach combines the power of the first approach with the code level simplicity of the second approach. Since message sending will in effect perform the dispatches on the variant tags implicitly, the subclasses can retain the structural clarity of single purpose components while retaining the shared capabilities of their ancestors.
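
A small, hypothetical sketch may make the contrast plainer. The class names and selectors below are illustrative only, not the Battery's, and the ClassName>>selector notation is used simply to indicate which class each method belongs to. In the conditionalized variant a single method tests a tag field; in the framework variant the same choice is made implicitly by the message send.

"Conditionalized variant: one class serves both experiments by testing a tag field."
presentStimulus
    stimulusKind = #tone
        ifTrue: [self playTone]
        ifFalse: [stimulusKind = #word
            ifTrue: [self displayWord]
            ifFalse: [self error: 'unknown stimulus kind']]

"Framework variant: one subclass per experiment. Each method retains
single-purpose clarity, while the shared superclass supplies everything else."
ToneItem>>presentStimulus
    self playTone

WordItem>>presentStimulus
    self displayWord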


Programming in the Smalltalk-80 Environment

Developing a system using the Smalltalk-80 environment is a distinctly different sort of enterprise from doing so using a conventional environment. The most obvious difference is that from the outset, one spends one's time reading (other people's) existing code, rather than writing (or reinventing) new code. This is because, in order to exploit the vast body of existing code that one extends (subclasses) to create new, application specific classes, one must understand the capabilities of the classes that already exist.

The rich vocabulary of preexisting classes is what gives object-oriented systems much of their power. It is the need to learn the capabilities of the classes in these libraries that is at the root of some of the "learnability" problems reported with some object-oriented systems. In any case, it is the presence of this huge library that results in the need to spend time reading code rather than writing it.


The Learnability Gap

A drawback cited by some to the use of object-oriented environments is that programmers find them difficult to learn [O'Shea 86]. My experience using the Smalltalk-80 environment has been that it does indeed take a great deal of time to learn it well. The learnability gap is real. However, there are reasonable explanations for it.

By far the most significant factor affecting the relative learning times in systems like Smalltalk is the need to master an enormous library of methods before one can begin to intelligently use the system. A conscientious newcomer to Smalltalk may spend almost all of his or her time reading existing code. The same programmer using a language like C would probably begin by reimplementing the very primitive operations that the Smalltalk programmer is searching the existing code for. So in one sense, comparing the Smalltalk environment with the C language in terms of learnability is to compare apples and oranges; it is the Smalltalk language itself (which is quite small) that should be compared with languages like C. A much fairer comparison might be to compare the learning time for the Smalltalk system with that of the Unix environment.

The notion that long learning curves are caused by the need to master a large library of components is borne out by the experiences reported by programmers attempting to write applications using the Apple Macintosh Toolbox [Apple 85]. This subroutine library is not object-oriented (though it was heavily influenced both by Smalltalk and by an earlier object-oriented library, the Lisa Toolkit [Apple 84]). The sheer size of this library, and the requirement that a substantial subset of it be mastered before even a simple application can be constructed, has evidently led to learnability difficulties quite similar to those reported with object-oriented systems. In fact, the addition of an object-oriented superstructure, MacApp [Apple 86], is reported to greatly facilitate the generation of Macintosh code [Schmucker 86].

Another factor that comes into play is the emphasis that is placed on the support of window-based interactive graphical user interfaces in systems like Smalltalk. Comparing the difficulty of developing an MVC-based graphical application with the effort involved in writing a "Hello world" program may again be comparing apples and oranges. In this respect, the comparisons among Smalltalk, the Macintosh Toolbox, and MacApp seem quite apt.

Hence, the sheer size of the method library under Smalltalk is, in my opinion, the greatest contributor to the relatively long time one needs to master it. This is consistent with what others have reported [O'Shea 86].

There are other factors at work here. The requirement that a programmer actually read another programmer's code is one that is met by a surprising degree of resistance in some circles. My feeling is that the difficulty of reading and comprehending someone else's code is one of CS's "dirty little secrets". The browsing tools under Smalltalk-80 are the best tools I have seen for gleaning information from source code. However, even better tools and browsers would be helpful. (See [Borning and O'Shea 86].)

Another problem is that much of the Smalltalk system is underdocumented. The absence of good descriptions of the rationale for much of the system's structure, especially in much of the graphics and MVC code, is particularly troublesome. However, even if the final Smalltalk-80 book were to exist, better facilities than are currently available for managing system documentation within the system itself could prove very useful.

An additional problem with learning Smalltalk-80 is that as much as polymorphism and inheritance aid the implementor, they can hinder the reader. Polymorphism makes it difficult for the reader to determine the flow of control. (Ironically, this information may have been known to the author of the original method. In these cases, the availability of a type system might enhance readability by increasing the reader's certainty of his or her knowledge of the flow of control.)

The readability problems associated with inheritance seem as if they could be ameliorated through the use of better browsers. See, for instance [Goldberg 82] and [Foote 87].

Another approach that might aid comprehensibility might be the addition of a module mechanism [Borning and O'Shea 86] or the formalization of protocols.


Smalltalk and Realtime Systems

It may seem peculiar to use the Smalltalk-80 system to investigate realtime laboratory programming problems. After all, realtime systems must operate under tight, demanding performance constraints, and Smalltalk has a reputation for being anything but fast. However, a need for high performance is but one of the characteristics that distinguish laboratory programming problems from more traditional programming problems. Among the other demands that the laboratory application domain places on the software designer are a need for highly interactive graphical user interfaces, and a need to respond to highly dynamic software requirements (i.e. requirements that are changing frequently).

The laboratory environment shares with traditional application domains the need for good editing, debugging, and code management tools.

It was the observation that the strengths of an object-oriented programming environment like Smalltalk-80 might be well matched to many of the needs of the laboratory programming domain that motivated this project.

Note that to achieve accurate realtime event handling, it is essential that input events be accurately timestamped and that output event initiation be accurately scheduled. If these two requirements can be met in such a way as to maintain accurate system timing, the software that schedules and consumes these events can be slow and sloppy from a timing standpoint, as long as overall throughput constraints are met (and the global timebase is precisely managed). In an object-oriented system, this might mean using a foreground process to produce tightly timed events that slower code then consumes, thereby achieving adequate realtime performance. Such table-driven scheduling and timestamping techniques have been used in more conventional situations where realtime events must be managed by slow processes.
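
The following workspace sketch suggests how such a scheme might look in Smalltalk. It is a minimal illustration, not the Battery's actual event handling code, and assumes nothing beyond the standard SharedQueue, Delay, and process facilities; the 100 millisecond event rate is arbitrary.

| queue producer consumer |
queue := SharedQueue new.

"Timing-critical producer: each (simulated) input event is timestamped the
moment it occurs, so no timing information depends on how quickly the rest
of the system runs."
producer := [10 timesRepeat: [
    queue nextPut: Time millisecondClockValue.
    (Delay forMilliseconds: 100) wait]] forkAt: Processor highIOPriority.

"Slow consumer: it may process events at its leisure, because the timestamps
carried with the events preserve the realtime record."
consumer := [10 timesRepeat: [
    Transcript show: 'event at ', queue next printString, ' ms'; cr]] fork.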

A number of approaches might be taken to restore some of the performance that is lost when a system like Smalltalk is employed. Among these are:

The use of hybrid object-oriented languages

The use of custom primitives

The use of a typed, optimized Smalltalk system

Increased exploitation of multiprocessing and multitasking

Hybrid object-oriented languages like C++ [Stroustrup 86], Clascal, Objective-C, and Object Pascal [Schmucker 86] can allow performance levels approaching those of the base languages (in these cases C and Pascal) to be attained. In certain cases, hybrid object-oriented implementations have exceeded the performance of their counterparts written in the base languages. These improvements resulted because the use of an object-oriented framework allowed superior library routines to replace slower application routines [Schmucker 86].

Another approach one can take is to use efficient primitives to implement operations for which performance is critical. Such primitives might be written in fast, conventional languages like C or assembly language. This, of course, is what Smalltalk itself does for operations like graphical bit block transfers.

A third approach might be to use a highly optimized, "typed" variant of Smalltalk [Johnson 87]. This approach, in effect, turns Smalltalk itself into a hybrid language. In those contexts where polymorphism can be reduced to static overloading, the full efficiency of compile-time typing can be exploited.

It might be possible as well to exploit hardware parallelism to restore some of the performance lost to runtime polymorphism. Such improvements might result from parallel garbage collection schemes, or the delegation of time-critical tasks to dedicated processors.

Also, the sorts of functional separations made possible by language level multitasking (see the StimulusGenerator discussion in Chapter III) might facilitate the generation of more efficient runtime code, as well as battery modularity and reusability.

The StimulusGenerator hierarchy is a portion of the battery simulation that was formerly embedded in the BatteryItem hierarchy and has, under Smalltalk, evolved into a separate part of the overall battery framework. Indeed, the StimulusGenerator hierarchy is in some respects still in transition between membership in the original battery hierarchy and clean isolation from it. The degree of coupling seen in the interface between each battery item and these classes is still somewhat high.

One very intriguing route by which this coupling might be reduced is via the introduction of parallelism. Under such an approach, stimulus generators would run in parallel with battery items, with both tied to the same underlying timebase. Such an approach would simplify both the battery item stimulus coordination code and the stimulus sequencing code in the stimulus generators. Hence, the use of parallelism could greatly increase the generality and reusability of components of both hierarchies.

Realtime application programs frequently exhibit a structure in which the mainline code must explicitly schedule sequences of otherwise unrelated events. The use of parallel processes can free the designer from this temporal yoke.


Starting with a "Real" System

By building a system that was based on one built from real, production requirements, I believe I was able to avoid the artificial simplicity seen in many academic demonstration systems. It is easy to specify a set of requirements that is in effect a "straw man" waiting to be dealt with by the techniques being demonstrated.

This approach is not without its problems. In working in an area that has not been well studied, it is more difficult to evaluate the conclusions that one reaches about the effectiveness of techniques for coping with problems one must address in that domain. One also runs the risk that superfluous detail and breadth may confound one's ability to adequately focus on areas of legitimate academic interest. Both these issues arose to some extent during the Battery simulation effort.

On the whole, I feel that having used a system like the Battery as the basis for a project such as this one has been a sound idea. The combination of issues it raised, such as volatile requirements, data management, data flow problems, parameter management, and realtime device simulation, is quite unorthodox. Computer science research has a tendency towards parochialism. By addressing problems outside the veins we have traditionally mined, we increase the potential that we will gain insights of value not only to the new domains investigated, but to computer science in general as well.


Lisp, Simula, and Uniform Access

One of the most difficult design bottlenecks in the battery simulation effort was the design of the parameter access and data management mechanisms. These difficulties arose because Smalltalk's dual heritage from Lisp and Simula made it more difficult than I would have expected to design the sorts of active data structures I needed.

The problems arose for the following reasons. I wanted battery items to be able to access their user alterable parameter values in an efficient fashion. At the same time, I wanted to be able to write a general utility that would allow users to inspect and alter the parameters of any program, with facilities such as help and error checking. This utility would have resembled the CPL battery parameter browser. Also, it was my goal to use the objects stored by each battery item as it ran as the basis for a data management scheme. Experimenters using data stored by battery items were to use Smalltalk-80 itself to query this database.

The most efficient way to construct an aggregate object in Smalltalk is to define a class with a field (or instance variable) for each element of the aggregate. This method of defining aggregates resembles the record mechanism of Simula-67.

The use of records introduces a number of limitations. For instance, it is impossible to add a field to a Smalltalk record on a per object basis, and it is quite difficult to iterate (do:) over all the fields of a record. Adding a field on even a per class basis requires the definition of a new subclass. Motivations for adding fields to a record might include adding a timestamp, or a link to a descriptor giving help and limit information.

Smalltalk dictionaries do not share these difficulties. It is very easy to add a new field to a dictionary: one simply stores a value under a new key. Iteration over the elements of a dictionary is simple in Smalltalk.
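
A small workspace example illustrates the asymmetry. The field names are arbitrary, and Point is used below only because it is a convenient, familiar "record" with two named fields.

| d p |
"Dictionaries: extension and iteration use the ordinary collection protocol."
d := Dictionary new.
d at: #threshold put: 50.
d at: #timestamp put: Time millisecondClockValue.	"a field added on the fly"
d keysAndValuesDo:
    [:key :value | Transcript show: key printString , ' -> ' , value printString; cr].

"Records (instance variables): adding a field requires defining a new subclass,
and enumerating the fields takes reflective machinery such as instVarAt:."
p := 3 @ 4.
1 to: p class instSize do:
    [:i | Transcript show: (p instVarAt: i) printString; cr]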

Extension and iteration over objects in the ways described above resemble a coding style frequently seen in Lisp. The property list mechanism in Lisp is often used to allow the addition of arbitrary attributes to individual data structures. The properties of an object can be iterated across as well. Interestingly, earlier versions of Smalltalk (such as Smalltalk-72 [Goldberg 76]) showed a much stronger Lisp influence than do later versions.

Earlier versions of the battery simulation used dictionaries for parameter and data management. Managing parameter and data management objects as dictionaries introduced a number of complications. One was that access to fields required a somewhat more cumbersome syntax. This was a significant problem because it complicated the syntax of user queries to the data management system. More seriously, the process of constructing these dictionaries required explicit code to initialize the fields that seemed to wastefully recapitulate exactly the mechanism that instance variable inheritance provided.

The table below summarizes the trade-offs between using Simula-style records (instance variables) and Lisp P-List style dictionaries in Smalltalk. (The entry for redefinability refers to the ease with which one can interpose a calculation for a simple instance variable reference.)

Feature                               Instance Vars   Dictionaries
iteration (do:)                       hard            easy
extension (adding a field)            hard            easy
efficiency (speed/space)              high            low
access syntax                         clean           messier
redefinability (access side effects)  high            lower
instantiation overhead                low             higher

The solution I finally opted for uses a new pair of classes, AccessibleObject and AccessibleDictionary, to allow dictionary-like access to objects, and object-like access to dictionaries.

Accessible objects allow dictionary-style access to all their instance variables, along with record-style access to a built in dictionary. Hence, instance variables can be accessed using at: and at:put:, as well as the standard record-style access protocol (name and name:).

Both access styles are provided without any need to explicitly define additional accessing methods. The record-style access method is rather slow however, and should be overridden when efficiency is an important consideration.

If a name: or at:put: storage attempt is made and no instance variable with the given name exists, an entry is made for the given selector in the AccessibleObject's item dictionary. Thereafter, this soft instance variable may be accessed using either access method. In this way, uniform access to hard and soft fields is provided, and Smalltalk's Simula/Lisp dualism is bridged. AccessibleObjects provide a way of adding associations to objects in a manner similar to that provided by Lisp's property list mechanisms.

The example below shows some of the capabilities of AccessibleObjects:

AccessibleObject class methods for: examples

example
    "AccessibleObject example"
    | temp |
    temp := AccessibleObject new.
    temp dog: 'Fido'.
    temp cat: 'Tabby'.
    Transcript print: temp dog; cr.
    Transcript print: temp items; cr.
    temp keysDo: [:key | Transcript print: key; cr].
    Transcript print: (temp variableAt: #items); cr.
    Transcript endEntry
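
For readers curious about how such uniform access might be achieved, the sketch below shows one plausible approach built on doesNotUnderstand:. It is an assumption-laden approximation, not the thesis's actual AccessibleObject implementation (which, among other things, also covers real instance variables and supplies keysDo: and variableAt:); the class name SoftSlots and all of its details are purely illustrative.

"A purely illustrative sketch; not the thesis's AccessibleObject."
Object subclass: #SoftSlots
    instanceVariableNames: 'items'
    classVariableNames: ''
    poolDictionaries: ''
    category: 'Examples-Sketch'

SoftSlots methods for: accessing

items
    "Lazily create the dictionary that holds the soft fields."
    items isNil ifTrue: [items := IdentityDictionary new].
    ^items

at: aKey
    ^self items at: aKey ifAbsent: [nil]

at: aKey put: aValue
    ^self items at: aKey put: aValue

SoftSlots methods for: message handling

doesNotUnderstand: aMessage
    "Treat an unknown unary selector as a soft-field read, and an unknown
    single-keyword selector as a soft-field write."
    | selector |
    selector := aMessage selector.
    selector numArgs = 0
        ifTrue: [^self items at: selector ifAbsent: [nil]].
    (selector numArgs = 1 and: [selector last = $:])
        ifTrue: [^self items
            at: (selector copyFrom: 1 to: selector size - 1) asSymbol
            put: aMessage arguments first].
    ^super doesNotUnderstand: aMessage

"Workspace usage:
    | thing |
    thing := SoftSlots new.
    thing dog: 'Fido'.
    Transcript show: thing dog; cr"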

I think in hindsight that another reason for the difficulty of parts of the battery simulation discussed above was that I was violating Einstein's edict that "things should be as simple as possible, but not simpler." I was attempting to design a single, uniform, all encompassing access mechanism to solve an unwieldy collection of distinct problems. A more natural solution might have involved several different data structures and access mechanisms. The scheme I was trying to construct resembled the multiple view mechanisms seen in some database systems. It might have been easier in the end to treat the parameter storage requirements of the battery items as distinct from the data management problems in both the battery items themselves and in the data management system. As long as data can be easily converted from one format to another, this sort of approach will work. Such an approach would have led to the explicit entry of battery data into an object-oriented database, rather than an attempt to extend the original objects so that they themselves became the database.

I think this is an observation that is applicable to a wide range of object-oriented design problems. The power of object-oriented systems can delude one into searching for a single, all encompassing solution to too wide a collection of problems. In the end, a number of freely convertible currencies may prove easier to manage than a single common currency. I could easily take the opposite side of this argument. (Indeed, in the battery simulation described herein, I clearly did.)

Henry Hudson

Looking for shortcuts like the one described above reminds me of Henry Hudson and the navigators who searched unsuccessfully for the Northwest Passage. Hudson, on a journey to find a Northeast Passage around Asia, at one point abandoned this search and set sail across the Arctic to search once more for a Northwest Passage (a rather dramatic change in his itinerary). Hudson's efforts to find the Northwest Passage finally came to a tragic end in 1611 in the Canadian bay that bears his name. The crew of his ship, the Discovery, mutinied, and left Hudson, his son, and a few loyal crewmembers adrift in a lifeboat in Hudson Bay.

Adrift

Hudson was not the last victim claimed by the search for the Northwest Passage. Many more perished before Amundsen finally completed a journey through the passage during the period from 1903 to 1905.

USS Nautilus

During 1958 and 1959, the American nuclear submarine Nautilus completed a journey from the Atlantic to the Pacific under the Arctic icecap, as part of the celebration of the International Geophysical Year. In 1969, the icebreaker Manhattan "humped" its way across the Northwest Passage.

There must be a lesson here someplace. The success of the Nautilus and the Manhattan probably says something about the wisdom of tackling tough problems with appropriately powerful tools. Until such tools are available, one's prospects for success may be no better than Hudson's.

There appear to be approaches on the horizon that address some of the uniform access issues mentioned above. Delegation-based object-oriented languages may allow some of the sorts of per instance manipulations that are difficult in Smalltalk. (See [Lieberman 86] and [Ungar 87].) Techniques like the use of (database style) views [Garlan 86] and metaclass manipulations like reflection [Maes 87] might have allowed some of the goals of my original, ambitious data management ideas to be realized.


O² Programming is Easy. O² Design is Hard

I think I might summarize my experiences with object-oriented programming and design as follows: object-oriented programming is easier. Much easier. Object-oriented design is harder. Much harder.

Once one has put in the necessary time to master the class libraries, routine programming tasks seem to take remarkably little time. I found that the aggregate handling capabilities of the Collection hierarchy in particular saved me enormous amounts of time. I found myself making statements like: "I don't write symbol table managers anymore. I'm a Smalltalk programmer. I have the Dictionary classes." In fact, my subjective impression was that programming time itself ceased to be a major determinant of how long it would take me to accomplish a programming task. Instead, the process seemed to become completely design limited.

The reasons for the difficulty of object-oriented design do not reflect poorly on it, though, since they result from the fact that object-oriented techniques allow one to address much more difficult problems than were heretofore addressable.

(On those occasions where I've felt at ease taking a short-sighted, slash and burn attitude towards Smalltalk programming, I've found that quick-and-dirty design can be just as easy as quick-and-dirty programming. It is this aspect of Smalltalk that, I would guess, makes it popular for prototyping.)

Certainly a major reason that object-oriented programming is design limited is one that has already been discussed at length here: the tension between simplicity and task specificity on one hand vs. generality and potential future applicability on the other. Since polymorphism and inheritance combine to make efforts to make a component more general much more likely to pay off, the designer can do worse than spend time considering what he or she might do to achieve such generality.

Another reason that object-oriented design takes time is the great wealth of design alternatives that can be brought to bear on a given problem. In a traditional system, the designer might bemoan the fact that there is simply no good way to resolve a given design difficulty. In an environment like Smalltalk's, there might be several different ways of approaching a given problem, each with its own strengths and weaknesses. The thoughtful designer must attempt to weigh all such alternatives. Selecting from among such an embarrassment of alternatives can be time consuming. Whether there are no solutions or an infinite number of solutions to a linear system, the determinant of that system will still be zero.

It is these sorts of design decisions that can make object-oriented design both aggravating and exhilarating. A tune by the new wave band Devo summed this dilemma up nicely with the lines:

What we've got is freedom of choice
What we want is freedom from choice
[1]

The need to arbitrarily relax design constraints can haunt the designer when combined with attempts to generalize components to meet hypothetical future requirements. One is often left with the vague feeling that each fork in the design road, however innocuous, might be the one that preempts some major unseen design simplification somewhere down the road. It is for this reason that it is frequently better to allow general components to emerge from experience rather than to attempt to design them in a strictly top-down fashion.

The final reason for the difficulty of object-oriented design is simply that these techniques allow us to confront complex, design bound problems that have been beyond the reach of conventional programming techniques. For example, the problems of deciding where to trade off design simplicity and generality in certain components don't arise to the same extent in conventional systems, since the same high reuse potential is simply not there.

Sisyphus

It would seem object-oriented techniques offer us an alternative to writing the same disposable programs over and over again. We may instead take the time to craft, hone, and perfect general components, with the knowledge that our programming environment gives us the ability to reexploit them. If designing such components is a time consuming experience, it is also the sort of experience that is aesthetically satisfying. If my alternatives are to roll the same rock up the same hill every day, or to leave a legacy of polished, tested general components as the result of my toil, I know what my choice will be.


[1] I've discovered, somewhat to my chagrin, that though the lyrics I gave accurately conveyed the spirit of Freedom of Choice, I somehow reversed the phrasing of these two lines from this song when I quoted them in the original work during late 1987. I've left these as they were in the original thesis. To see them as they were supposed to have been, in context, click lines either here or above. (BF 2/27/98)

Chapter VII -- Conclusion

The challenges associated with application environments characterized by requirements that evolve rapidly are not well addressed by traditional software design and software engineering techniques. The use of object-oriented frameworks fills the large gap that exists between application programs and application independent components, and provides a path along which applications may evolve as they mature. By allowing both application skeletons and components to serve as the roots of frameworks, object-oriented inheritance allows previously application specific skeletons to be generalized, and currently application independent components to be tailored to specific applications. The reuse potential of both is hence greatly increased. A higher reuse potential, in turn, justifies a higher level of effort to polish and perfect such components, since any application that uses such a component will benefit from such changes. Inheritance hierarchies provide a way in which one can manage the evolution of a family of related applications as they diverge.

Frameworks promise to have their greatest impact where the need to confront volatile requirements is the greatest: in scientific research programming (such as neural network simulation, for example), in certain realtime embedded application domains, and in experimental user interface prototyping environments, to name but a few.

Frameworks, along with other approaches made possible by object-oriented techniques (such as our Plumbing-Support classes), seem to promise to be of major benefit in realtime application environments, despite potential performance problems. This is because performance is but one of the requirements that distinguishes realtime environments from more prosaic application domains.

The Battery simulation described herein, along with the CPL Battery work that preceded it, have conclusively convinced me that object-oriented techniques and systems can be of major benefit to designers and implementors of realtime scientific laboratory packages. The use of object-oriented frameworks turned what would have been a disjoint collection of individual applications into an integrated system with a high degree of code sharing. This economy, in turn, allowed effort to be expended on "luxuries" like a better user interface, including menus and a help system. It also allowed each application to share more complex data encoding and data management schemes, and a table driven parameter editor. These in turn allowed each application to implement a wider range of experimental designs than would otherwise have been possible. I believe that none of this would have been feasible without the framework-based structure present in both Battery implementations.

My experience with the resilience of these techniques in the face of new design requirements has been very positive. The hierarchical Battery frameworks have gracefully accommodated dramatic alterations in the original requirements, such as the addition of digitized speech stimuli and a variety of complex realtime displays. These features, once added, are available to all the experimental designs that might benefit from them.

The original Battery experience is evidence as well that object-oriented techniques can be of enormous utility even when no object-oriented language is available.

Designing to facilitate change requires a great deal of care and foresight. Design must be seen not as a stage that occurs near the start of the software lifecycle, but as a process that pervades it. Object-oriented tools and techniques make it much more likely that an investment in design time and effort will be repaid in terms of greater system generality. There are few feelings more gratifying to the software designer than realizing that through prudence and foresight, one has at hand components that solve nearly all of what would otherwise have been a tedious reimplementation of some boiler-plate piece of code. Object-oriented frameworks and careful design can together help to bring about such a pleasant state of affairs where such an outcome would otherwise not have been possible.


Brian Foote foote@cs.uiuc.edu
Last Modified: 14 December 2005