Practical Software Engineering
Techniques, methods and theories which allow us to plan, design,
develop, and maintain the software for modern computer systems.
- An air traffic controller, relying on information from his
computer console, directs two Boeing 747's onto intersecting paths.
The jets collide and burst into flame, and all aboard perish. -- Washington Post
- A hospital minicomputer, monitoring a patient recovering from
surgery, fails to alert hospital staff that the patient is having
a stroke. The patient dies. -- Washington Post
- A company discovers that its computer has mangled valuable
and sensitive information beyond recovery. The loss gravely weakens
the company's market position. -- Washington Post
"Systematic Errors: A new law aims to prevent software meltdown in federal agencies", Scientific American, May 1996
-- the DOE can't get reliable software to replace the antiquated software that tracks nuclear waste
"Software for Reliable Networks", Scientific American, May 1996
-- "brownouts" on the Web
"Battling the Enemy Within: A billion-dollar fiascois just the tip of the military's software problems", Scientific American, April 1996
-- A major U.S. Army initiative to modernize
thousands of aging computer systems has hit the
skids, careening far beyond schedule and well
over budget. The 10-year project, known as the
Sustaining Base Information Services (SBIS)
program, is supposed to replace some 3,700
automated applications by the year 2002. The
current systems automate virtually every business
function--from payroll and personnel
management to budgeting and health care--at
more than 380 installations worldwide. But after
investing almost three years and about $158
million, the army has yet to receive a single system.
"Software Gone Awry", Scientific American, October 1996
--Investigators appointed by the European
Space Agency reported in July that a software
bug brought down the new $8-billion Ariane 5
rocket, which exploded.
"ONE SMALL STEP: The next big advance in chip design arrives one year early", Scientific American, August 1996
-- VLSI design software can't keep up with progress in chip integration; software may stifle progress
"Canned Software", Scientific American, August 1996
-- The U.S. Army will terminate its Sustaining Base
Information Systems program at the end of fiscal
year 1997. The program was to have replaced
some 3,700 computer systems by 2002. To date,
the army has spent more than $150 million yet
has received only a handful of replacement systems.
- "Computer bug bites Alberta exchange", Calgary Herald, October 24, 1996
-- The Alberta Stock Exchange crashed at 7:38 a.m. Wednesday, brought down by a glitch in its
brand new, fully computerized trading system.
These problems stem from an unrealistic view of what it takes
to construct the software to perform these tasks.
To improve the record we must:
- better understand the development process;
- learn how to estimate and trade off time, manpower, and dollars;
- estimate and measure the quality, reliability, and cost of
the end product.
The US government planned to spend $26.5 billion on information
technology in 1996. Boehm estimates 1995 world-wide software costs
to be $435 billion. Software costs will continue to rise (Boehm
says 12% per year), although hardware costs may decrease, because:
- a new application means new programs
- new computers need new software or modifications of existing
- programming is a labour-intensive skill.
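Boehm's 12%-per-year figure compounds quickly: at that rate, world-wide software costs roughly double every six years. A short calculation using the $435 billion 1995 estimate quoted above (the projection itself is purely illustrative):

```python
# Project world-wide software costs forward from Boehm's 1995
# estimate of $435 billion, growing 12% per year (figures quoted
# in the notes above; the forward projection is only illustrative).
base_year, base_cost, growth = 1995, 435.0, 0.12

def projected_cost(year):
    """Cost in billions of dollars, compounded annually."""
    return base_cost * (1 + growth) ** (year - base_year)

for year in (1996, 2000, 2005):
    print(year, round(projected_cost(year)))
```

At 12% growth the 1995 figure passes $1.3 trillion within a decade, which is why even small percentage improvements in development productivity matter.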
Software errors result in two costs:
- the harm which ensues.
- the effort of correction.
e.g. electronic-funds-transfer system.
The following activities occur during the life cycle paradigm:
- System Engineering. Top level customer requirements
are identified, functional and system interfaces are defined and
the relation of this software to overall business function is established.
- Analysis. Detailed requirements necessary to define
the function and performance of the software are defined. The
information domain for the system is analyzed to identify data
flow characteristics, key data objects and overall data structure.
- Design. Detailed requirements are translated into a
series of system representations that depict how the software
will be constructed.
The design encompasses a description of program structure, data
structure and detailed procedural descriptions of the software.
- Code. Design must be translated into a machine-executable form.
The coding step accomplishes this translation through the use
of conventional programming languages (e.g., C, Ada, Pascal) or
fourth generation languages.
- Testing. Testing is a multi-step activity that serves
to verify that each software component properly performs its required
function and validates that the system as a whole meets overall requirements.
- Maintenance. Maintenance is the re-application of each
of the preceding activities for existing software. The re-application
may be required to correct an error in the original software,
to adapt the software to changes in its external environment (e.g.,
new hardware, operating system), or to provide enhancement to
function or performance requested by the customer.
This is the most widely used approach to software engineering.
It leads to systematic, rational software development, but like
any generic model, the life cycle paradigm can be problematic
for the following reasons:
- The rigid sequential flow of the model is rarely encountered
in real life. Iteration can occur causing the sequence of steps
to become muddled.
- It is often difficult for the customer to provide a detailed
specification of what is required early in the process. Yet this
model requires a definite specification as a necessary building
block for subsequent steps.
- Much time can pass before any operational elements of the
system are available for customer evaluation. If a major error
in implementation is made, it may not be uncovered until much later.
Do these potential problems mean that the life cycle paradigm
should be avoided? Absolutely not! They do mean, however, that
the application of this software engineering paradigm must be
carefully managed to ensure successful results.
Prototyping moves the developer and customer toward a "quick"
implementation. Prototyping begins with requirements gathering.
Meetings between developer and customer are conducted to determine
overall system objectives and functional and performance requirements.
The developer then applies a set of tools to develop a quick design
and build a working model (the "prototype") of some
element(s) of the system. The customer or user "test drives"
the prototype, evaluating its function and recommending changes
to better meet customer needs. Iteration occurs as this process
is repeated, and an acceptable model is derived. The developer
then moves to "productize" the prototype by applying
many of the steps described for the classic life cycle.
In object-oriented programming, with a library of reusable objects
(data structures and associated procedures), the software engineer
can rapidly create prototypes and production programs.
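As a toy sketch of this idea, a reusable stack class (invented here for illustration; any catalogued library object would do) can be dropped straight into a prototype feature instead of being written from scratch:

```python
# Toy illustration of prototyping from a library of reusable
# objects: Stack stands in for any catalogued component.
# All names here are invented for illustration.
class Stack:
    """A reusable data structure with its associated procedures."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def __len__(self):
        return len(self._items)

# A prototype "undo" feature assembled from the library object:
undo_history = Stack()
undo_history.push("insert paragraph")
undo_history.push("delete word")
print(undo_history.pop())  # most recent action comes back first
```

Because the object's behaviour is already tested, the prototype effort goes into the new feature rather than into re-implementing the data structure.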
The benefits of prototyping are:
- a working model is provided to the customer/user early in
the process, enabling early assessment and bolstering confidence,
- the developer gains experience and insight by building the
model, thereby resulting in a more solid implementation of "the real thing",
- the prototype serves to clarify otherwise vague requirements,
reducing ambiguity and improving communication between developer
and customer.
But prototyping also has a set of inherent problems:
- The user sees what appears to be a fully working system (in
actuality, it is a partially working model) and believes that
the prototype (a model) can be easily transformed into a production
system. This is rarely the case. Yet many users have pressured
developers into releasing prototypes for production use that have
been unreliable, and worse, virtually unmaintainable.
- The developer often makes technical compromises to build a
"quick and dirty" model. Sometimes these compromises
are propagated into the production system, resulting in implementation
and maintenance problems.
- Prototyping is applicable only to a limited class of problems.
In general, a prototype is valuable when heavy human-machine interaction
occurs, when complex output is to be produced or when new or untested
algorithms are to be applied. It is far less beneficial for large,
batch-oriented processing or embedded process control applications.
The stages in the complete life of a system:
- Initial conception
- Requirements analysis
- Initial design
- Verification and test of design
- Prototype manufacturing
- Assembly and system-integration tests
- Acceptance tests (validation of design)
- Production (if several systems are required)
- Field (operational) trial and debugging
- Field maintenance
- Design and installation of added features
- System discard (death of system) or complete system redesign.
Boehm gives figures for several systems showing about
- 40% of effort on analysis and design
- 20% on coding and auditing (handchecking)
- 40% on testing and correcting bugs.
Documentation was not included, estimated at extra 10%.
As a project progresses, the cost of changes increases dramatically.
The reasons for this increase are:
- Testing becomes more complex and costly;
- Documentation of changes becomes more widespread and costly;
- Communication of problems and changes involves many people;
- Repeating of previous tests (regression testing) becomes costly;
- Once operation begins, the development team is disbanded, and
changes must be made by others unfamiliar with the software.
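Regression testing, mentioned above, means re-running the previously passing tests after every change, so that a fix in one place does not silently break behaviour elsewhere. A minimal sketch (the function and the test suite are invented for illustration):

```python
# A trivial regression suite: each entry pairs an input with the
# output the current system is known to produce. After any change,
# the whole suite is re-run, not just tests for the changed code.
def interest(principal, rate):
    return principal * rate  # the unit under maintenance (toy example)

regression_suite = [
    ((1000, 0.05), 50.0),
    ((200, 0.10), 20.0),
]

def run_regression(fn, suite):
    """Return the list of cases whose behaviour has changed."""
    return [(args, expected) for args, expected in suite
            if fn(*args) != expected]

failures = run_regression(interest, regression_suite)
print("regressions:", failures)  # empty list => behaviour preserved
```

As the suite grows with the system, each change pays the cost of re-running all of it, which is one concrete reason late changes cost so much more than early ones.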
- Roger S. Pressman. Software Engineering: A Practitioner's Approach (third edition).
McGraw-Hill, New York, 1992. Chapter 1. (See also chapters 1 and 3.)
- Grady Booch. Object-Oriented Analysis and Design with Applications (second edition).
Addison-Wesley, Menlo Park, California, 1994. Chapter 1.
- Tom DeMarco. Controlling Software Projects. Yourdon Press : Englewood Cliffs, N.J.
1982. Chapters 1, 2, 20.