Rob Kremer

UofC

Practical Software Engineering


Software Quality Assurance

SQA should be applied throughout the development process.

Quality is often measured in defects per thousand lines of code or in the mean time to make a change.

Roger Pressman quotes Philip Crosby:

The problem of quality management is not what people don't know about it. The problem is what they think they do know....In this regard, quality has much in common with sex. Everybody is for it. (Under certain conditions, of course.) Everybody feels they understand it. (Even though they wouldn't want to explain it.) Everybody thinks execution is only a matter of following natural inclinations. (After all, we do get along somehow.) And, of course, most people feel that problems in these areas are caused by other people. (If only they would take the time to do things right.)

Software Reliability

Reliability means:
  1. issues relating to the design of the product so that it will operate well for a substantial length of time.
  2. a metric: the probability of operational success of the software.

Probabilistic Models

can refer to deterministic events (e.g. a motor burning out) whose time of occurrence cannot be predicted, or to random events.

The probability space -- the space of all possible occurrences must first be defined, e.g. in a probability model for program error it is all possible paths in a program. Then the rules for selection are specified, e.g. for each path, combinations of initial conditions and input values. A software failure occurs when an execution sequence containing an error is processed.
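As a rough illustration of this model, the sketch below (in Python) repeatedly selects an execution path according to an assumed operational profile and counts a failure whenever a path containing an error is selected. The three paths, their selection probabilities, and the error marking are invented for illustration.

    import random

    # Hypothetical path set: the selection probabilities stand in for the rules
    # that choose initial conditions and input values; one rarely taken path
    # contains an error.
    paths = [
        {"prob": 0.70, "has_error": False},
        {"prob": 0.25, "has_error": False},
        {"prob": 0.05, "has_error": True},
    ]

    def run_once():
        """Select one execution path; a failure occurs when the selected
        path contains an error."""
        r = random.random()
        cumulative = 0.0
        for path in paths:
            cumulative += path["prob"]
            if r <= cumulative:
                return path["has_error"]
        return False

    runs = 100_000
    failures = sum(run_once() for _ in range(runs))
    print(f"Estimated failure probability: {failures / runs:.4f}")   # about 0.05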

Reliability Theory

is the application of probability theory to the modeling of failures and the prediction of success probability.

A definition commonly accepted is:

Software reliability is the probability that the program performs successfully, according to specifications, for a given time period.
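As a worked example of this definition, the sketch below assumes a constant failure rate (the common exponential model, which is an assumption here rather than part of the definition) and computes the probability of failure-free operation over a given period.

    import math

    def reliability(t_hours, failure_rate_per_hour):
        """R(t) = exp(-lambda * t): probability of failure-free operation for
        t hours, assuming a constant failure rate."""
        return math.exp(-failure_rate_per_hour * t_hours)

    # e.g. with one failure expected per 1000 hours of operation:
    print(reliability(100, 1 / 1000))   # about 0.905 for a 100-hour period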

Specifications -- precise statements of:

Errors are discovered through system failures and may be classified as hardware, software, operator, or unresolved.

Time may be divided into:

Variables other than time may need to be considered,

e.g.

Software is repairable if it can be debugged and the errors corrected. This may not be possible without inconveniencing the user, e.g. in an air-traffic control system.

Software availability is the probability that the program is performing successfully, according to specifications, at a given point in time.

Availability is defined as:

  1. the ratio of the number of systems up at some instant to the size of the population studied (number of systems).
  2. the ratio of observed uptime to the sum of the uptime and downtime (for a single system):
    A = Tup / (Tup + Tdown)

These measurements are used to:

If the system is still in the design and development phase, a third definition is used:
  3. the ratio of the mean time to failure (uptime) to the sum of the mean time to failure and the mean time to repair (downtime):
    A = MTTF / (MTTF + MTTR)
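Both ratios translate directly into code. A minimal sketch in Python, with invented figures:

    def availability_observed(uptime, downtime):
        """A = Tup / (Tup + Tdown), from the observed uptime and downtime of a single system."""
        return uptime / (uptime + downtime)

    def availability_predicted(mttf, mttr):
        """A = MTTF / (MTTF + MTTR), used while the system is still in design and development."""
        return mttf / (mttf + mttr)

    # Hypothetical figures, for illustration only:
    print(availability_observed(uptime=950, downtime=50))    # 0.95
    print(availability_predicted(mttf=200, mttr=4))           # about 0.98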
Various hypotheses exist about program errors; they seem to be true, but no controlled tests have been run to prove or disprove them:
  1. Bugs per line is constant. There are fewer errors per line in a high-level language; many types of errors that occur in machine code do not exist in a HOL.
  2. Memory shortage encourages bugs, mainly due to the programming "tricks" used to squeeze code.
  3. Heavy load causes errors to occur. Heavy loads are very difficult to document and test.
  4. Tuning reduces the error occurrence rate. Tuning involves removing errors for a class of input data; if new inputs are needed, new errors can occur and the system (hardware and software) must be retuned.
Further hypotheses about errors:
  1. The normalized number of errors is constant. Normalization is the total number of errors divided by the number of machine language instructions.
  2. The normalized error-removal rate is constant. These two hypotheses apply over similar programs.
  3. Bug characteristics remain unchanged as debugging proceeds. Those found in the first few weeks are representative of the total bug population.
  4. Independent debugging results in similar programs. When two independent debuggers work on a large program, the evolution of the program is such that the differences between their versions are negligible.
Many researchers have put forward models of reliability based on measures of the hardware, the software, and the operator, and have used them for prediction, comparative analysis, and development control. Error, reliability, and availability models provide a quantitative measure of the goodness of the software. There are still many unanswered questions.

Software Quality Evaluation

This is still in its development phase.

Boehm, Brown and Lipow identify key issues, and say measures should show where a program is deficient. Managers must decide on the relative importance of:

They define a hierarchical software-characteristic tree in which an arrow indicates logical implication: the lowest-level characteristics combine into medium-level characteristics. The lowest-level characteristics are recommended as quantitative metrics, and each is defined. Each metric was then evaluated by its correlation with program quality, its potential benefits in terms of insights and decision inputs for the developer and user, its quantifiability, and the feasibility of automating its evaluation. The list is more useful as a check for programmers than as a guide to program construction.

Gilb also devised a set of software metrics:

and many more. He defines each in detail. The reality of applying these measures is disheartening: many are difficult to obtain, no expected ranges are given, and they are not all independent.

However, this is still a developing field and he has pioneered some software quality measurements.

Halstead used 'methods and principles of classical experimental science'. He counted: the number of distinct operators (h1), the number of distinct operands (h2), the total occurrences of operators (N1), and the total occurrences of operands (N2).

He defined vocabulary h as h1 + h2 and implementation length N as N1 + N2. From these he devised equations for: length, volume, potential volume, boundary volume, program level, intelligence, programming effort...
e.g. the length equation, which estimates N from the distinct counts alone:
N = h1 log2 h1 + h2 log2 h2

His length equation was tested on 14 algorithms and its estimates were found to be very close to the actual lengths. Other experimental evidence is also convincing. However, the measure ignores the issues of variable names, comments, and the choice of algorithms or data structures. It also ignores the general issues of portability, flexibility, and efficiency.
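A minimal Python sketch of how these counts combine, using the definitions above (the volume formula V = N log2 h is Halstead's standard one; the example counts are invented):

    import math

    def halstead(h1, h2, N1, N2):
        """Basic Halstead measures: h1/h2 are the distinct operators/operands,
        N1/N2 the total occurrences of operators/operands."""
        vocabulary = h1 + h2                                         # h = h1 + h2
        length = N1 + N2                                             # N = N1 + N2
        estimated_length = h1 * math.log2(h1) + h2 * math.log2(h2)   # length equation
        volume = length * math.log2(vocabulary)                      # V = N log2 h
        return vocabulary, length, estimated_length, volume

    # Hypothetical counts for a small routine:
    print(halstead(h1=10, h2=7, N1=28, N2=22))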

Zak lists five productivity attributes:

From a survey of managers and technicians:

In an experiment, five programming teams were each given a different primary objective. When productivity was evaluated, each team ranked first in its own primary objective. This shows that programmers respond to the goals they are set.

Maintainability

This is the main programming cost in most installations. It is affected by data structures, logical structure, documentation, and diagnostic tools, and by personnel attributes such as specialization, experience, training, intelligence, and motivation.

Methods for improving maintainability are:

Bugs are sometimes seeded to establish a maintainability measure. For example, suppose 100 seeded bugs are planted in a program. During debugging 550 bugs are found, 50 of which were seeded. Since half of the seeded bugs were found, the 500 real bugs found are assumed to be half of the real bug population, so it can be estimated that 500 real bugs remain.
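The estimate follows the capture-recapture reasoning above; a minimal sketch:

    def estimate_remaining_bugs(seeded_total, seeded_found, real_found):
        """Bug-seeding estimate: assume real bugs are found at the same rate
        as seeded ones."""
        find_rate = seeded_found / seeded_total        # fraction of seeded bugs found
        estimated_real_total = real_found / find_rate  # scale up the real bugs found
        return estimated_real_total - real_found       # estimated real bugs remaining

    # The example above: 100 seeded, 550 found in total, 50 of them seeded.
    print(estimate_remaining_bugs(seeded_total=100, seeded_found=50, real_found=500))   # 500.0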

Software maintenance has a very high cost. Gansler (1976) quotes Air Force avionics software at $75 per instruction to develop and $4000 per instruction to maintain.

Maintenance includes the cost of rewriting, testing, debugging and integrating new features.

Documentation is one of the items which is said to lead to high maintenance costs. It is not just the program listing with comments. A program librarian must be responsible for the system documentation, but programmers are responsible for the technical writing.

Other aids include text editors and the Source Code Control System (SCCS) tool for producing records of changes. Some companies insist that programmers dictate any tests or changes onto tape every day.

Problem areas in software maintenance reported by respondents

Rank Problem area
  1. User demands for enhancements, extensions
  2. Quality of system documentation
  3. Competing demands on maintenance personnel time
  4. Quality of original programs
  5. Meeting scheduled commitments
  6. Lack of user understanding of system
  7. Availability of maintenance program personnel
  8. Adequacy of system design specifications
  9. Turnover of maintenance personnel
  10. Unrealistic user expectations
  11. Processing time of system
  12. Forecasting personnel requirements
  13. Skills of maintenance personnel
  14. Changes to hardware and software
  15. Budgetary pressures
  16. Adherence to programming standards in maintenance
  17. Data integrity
  18. Motivation of maintenance personnel
  19. Application failures
  20. Maintenance programming productivity
  21. Hardware and software reliability
  22. Storage requirements
  23. Management support of system
  24. Lack of user interest in system
(Lientz et al. (1976), Table V.)

Quality Assurance

For hardware, this covers inspection and test of materials, maintenance of standards for workmanship, calibration of equipment, acceptance testing.

For software, there is no prototyping (except as phase one of a two-phase design), no incoming parts to be inspected, no standards for measuring software quality.

Rules to follow in software contracting:

  1. Get legal advice from the beginning.
  2. Negotiate with a senior person.
  3. Negotiate with only one person.
  4. Document all verbal agreements.
  5. Make sure the contract specifies everything you will get: the prices, the terms, the conditions.
  6. Do not announce the final decision until the contract is signed.
  7. Remember that no matter what the contract says, success with software depends first of all on a good business relationship between buyer and seller.

A table of contents for a typical requirements document.

1. Introduction
Organization principles; abstracts of the other sections; notation guide.
2. Computer characteristics
If the computer is predetermined, a general description with particular attention to its idiosyncrasies; otherwise, a summary of its required characteristics.
3. Hardware interfaces
Concise description of the information received or transmitted by the computer.
4. Software functions
What the software must do to meet its requirements, in various situations and in response to various events.
5. Timing constraints
How often and how fast each function must be performed. This section is separate from section 4 since "what" and "when" can change independently.
6. Accuracy constraints
How close output values must be to ideal values in order to be acceptable.
7. Response to undesirable events
What the software must do if sensors go down, the pilot keys in invalid data, etc.
8. Subsets
What the program should do if it cannot do everything.
9. Fundamental assumptions
The characteristics of the program that will stay the same, no matter what changes are made.
10. Changes
The types of changes that have been made or are expected.
11. Glossary
Most documentation is fraught with acronyms and technical terms.
12. Sources
Annotated list of documentation and personnel, indicating the types of questions each can answer.
Source: Heninger (1979, p. 3). Three additional sections are suggested for this outline: 2A, "Software Characteristics"; 3A, "Software Interfaces"; and 7A, "Defensive Programming Techniques."

It is desirable to add sections:

2A. Software Characteristics
which includes design philosophy, language, algorithms, data structures.
3A. Software Interfaces
which should discuss decisions on: existing operating systems, compilers, interpreters, assemblers, existing software development tools, existing code modules, subroutines or data bases.
7A. Defensive Programming Techniques
which includes expected ranges of input variables, key intermediate variables, and output variables; range checking; parallel computation and checking; rollback; and error-recovery techniques. A sketch of the first of these appears below.
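A minimal sketch of range checking on an input variable; the sensor name and limits are hypothetical:

    # Hypothetical expected range for an input variable.
    ALTITUDE_RANGE_FT = (0.0, 60_000.0)

    def read_altitude(raw_value: float) -> float:
        """Validate a raw altitude reading against its expected range before use;
        an out-of-range value is treated as an undesirable event rather than
        propagated into later computations."""
        low, high = ALTITUDE_RANGE_FT
        if not (low <= raw_value <= high):
            raise ValueError(f"altitude {raw_value} ft outside expected range {low}-{high} ft")
        return raw_value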

Formal Technical Review Meetings

Formal Technical Reviews (FTRs) are conducted during the design and coding phases.

Calculations of cost savings by a defect amplification model:

Errors found             Number   Cost per unit   Total
Reviews conducted
  During design              22             1.5      33
  Before test                36             6.5     234
  During test                15            15       315
  After release               3            67       201
  Total                                             783
No reviews conducted
  Before test                22             6.5     143
  During test                82            15      1230
  After release              12            67       804
  Total                                            2177
(Source: Roger Pressman, Software Engineering: A Practitioner's Approach, 1997, Table 8.1, p. 191.)

Review Guidelines

  1. Review the product, not the producer.
  2. Set an agenda and maintain it.
  3. Limit debate and rebuttal.
  4. Enunciate problem areas, but don't attempt to solve every problem noted.
  5. Take written notes.
  6. Limit the number of participants (three to four people) and insist upon advance preparation.
  7. Develop a checklist for each work product that is likely to be reviewed.
  8. Allocate resources and time schedule for FTRs.
  9. Conduct meaningful training for all reviewers.
  10. Review your early reviews.
