Overall Design Document
CPSC 451 Supplier Group #1

Department of Computer Science
University of Calgary
26 January 1997

Page maintainer: Terrence Asgar-Deen
terrence@cpsc.ucalgary.ca

  • Terrence Asgar-Deen
  • Patrick Chan
  • Thomas Hui
  • Carsten Jaeger
  • Matthew Johnson
  • Brian Low
  • Hoang Nguyen
  • Kevin Pattison
  • Csaba Suveges
  • Jeremy Tang
  • Leena Thakkar
  • Al-Amin Vira
  • Lin Zhang

  • Preliminary Testing Overview

    Disclaimer: This page is only a copy of the original page. Comments have been placed throughout it for the use of the dHACs software group. To see the original page, please refer to Here.

    Although the various modules which will make up the system have not yet been thoroughly specified, it is still possible to present an overview of the various testing schemes that are under consideration for the system.

    The merits of each testing method and situations in which each one may be applicable will be discussed in this section.

    The goal of software testing is to examine the stability of the software under a variety of input conditions. This is accomplished through the use of well-defined test cases to discover faults and failures in the software for later correction. A test case consists of input data, the expected output for that data, and a justification for the case.
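    For illustration only, a test case might be recorded as a simple structure; the field names below are invented for this sketch and are not taken from the actual design:

        struct test_case {
            const char *input;          /* input data fed to the module    */
            const char *expected;       /* output predicted by the spec    */
            const char *justification;  /* why this case is worth running  */
        };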

    Your professional knowledge on this matter is admirable. If your testing procedures live up to these rigorous standards then Peachy Business Forms can expect a very robust system.

    Testing procedures can generally be broken down into two broad categories:

    Black Box Testing

    Black Box testing revolves around deriving test cases from the design specifications of the software. The implementation and internals of a program (or portion thereof) are given little consideration. The purpose is to ensure the software produces correct output given a specific input.

    Several methods have been established for Black Box testing:

    Exhaustive Testing: Trying all possible input combinations and comparing the results to the predicted output. This is not feasible in most cases, and it is likewise unsuitable for testing this particular software: the number of possible inputs is overwhelming. (As an illustration, a single ten-character alphanumeric field alone admits 36^10, roughly 3.7 x 10^15, combinations.)

    Random Testing: Randomly selecting a subset of possible inputs for test cases. If a good sampling of test cases is chosen, this method can be quite effective. It will most certainly be used in testing this software.
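    As a rough sketch of the idea, under the invented assumption of a discount routine with a known rule (the name and the 10%-at-100-units rule are hypothetical), a random test driver might look like this:

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        /* Hypothetical module under test: the name and the discount rule
         * (10% on orders of 100 units or more) are invented for this sketch. */
        static int compute_discount(int quantity)
        {
            return quantity >= 100 ? 10 : 0;
        }

        int main(void)
        {
            int i;
            srand((unsigned) time(NULL));
            for (i = 0; i < 100; i++) {
                int q = rand() % 10000;              /* random input from the domain   */
                int expected = (q >= 100) ? 10 : 0;  /* independently predicted output */
                if (compute_discount(q) != expected)
                    printf("FAIL: quantity=%d\n", q);
            }
            printf("random testing pass complete\n");
            return 0;
        }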

    Equivalence Partitioning: Partition the possible inputs into a set of equivalence classes from which test cases can be drawn. Inputs in the same class should all be handled the same way by the program, so one representative case stands in for the whole class. It can be difficult to determine the partitioning of the input. This test method will be used whenever possible. Equivalence partitioning can be applied to expected outputs as well.

    What is an Equivalence Class? Examples may make this section clearer.
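    As an invented illustration: suppose a field accepts order quantities from 1 to 999. Every input then falls into one of three equivalence classes: below the range, within it, or above it. The program should treat any two members of the same class identically, so one representative per class suffices. The function name and range below are hypothetical:

        #include <assert.h>

        /* Hypothetical validator; the name and the 1..999 range are
         * invented for this sketch. */
        static int quantity_is_valid(int q)
        {
            return q >= 1 && q <= 999;
        }

        int main(void)
        {
            assert(!quantity_is_valid(-5));    /* class 1: below the range (q < 1)   */
            assert( quantity_is_valid(500));   /* class 2: within the range (1..999) */
            assert(!quantity_is_valid(1200));  /* class 3: above the range (q > 999) */
            return 0;
        }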

    White Box Testing

    White Box testing is governed by the structure of the code itself (as opposed to its functionality, which is the purview of Black Box testing). The purpose of White Box testing is to "exercise" the code.

    Various methods exist for White Box Testing as well:

    Statement Coverage: Develop test cases so that every statement of source code is executed at least once. This method is sometimes impossible to apply in full, since certain sections of code may only be executed in extreme circumstances, and it may be dangerous to execute that code.

    If certain areas of code are dangerous to execute, why are they there in the first place?
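    As a sketch (the function below is invented), a single test case can achieve full statement coverage:

        #include <assert.h>

        /* Hypothetical function: one call with a negative argument
         * executes every statement, so a single test case gives 100%
         * statement coverage on its own. */
        static int clamp_to_zero(int n)
        {
            if (n < 0)
                n = 0;        /* executed only when n < 0 */
            return n;
        }

        int main(void)
        {
            /* This one case runs the if, the assignment, and the return:
             * every statement executed at least once. */
            assert(clamp_to_zero(-3) == 0);
            return 0;
        }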

    Branch Coverage: Similar to statement coverage, except that here every "branch" in the code must be exercised, meaning both the true and the false outcome of each decision must be taken. This method subsumes statement coverage: any set of test cases achieving branch coverage also achieves statement coverage.
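    Continuing the invented sketch above: the single case clamp_to_zero(-3) executed every statement but took only the true outcome of the decision; branch coverage also demands a case that takes the false outcome:

        #include <assert.h>

        static int clamp_to_zero(int n)
        {
            if (n < 0)
                n = 0;
            return n;
        }

        int main(void)
        {
            assert(clamp_to_zero(-3) == 0);  /* true branch: assignment runs     */
            assert(clamp_to_zero(7) == 7);   /* false branch: assignment skipped */
            return 0;
        }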

    Multiple Condition Coverage: Every condition within each decision must be evaluated with every possible combination of true and false outcomes at some point during test execution.
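    An invented illustration: a decision built from two conditions has four true/false combinations to exercise. The rule, names, and threshold below are hypothetical:

        #include <assert.h>

        /* Hypothetical rule: an order ships free when the total exceeds
         * $50 AND the customer is a member. The decision has two
         * conditions, so multiple condition coverage requires all four
         * true/false combinations. */
        static int ships_free(int total, int is_member)
        {
            return total > 50 && is_member;
        }

        int main(void)
        {
            assert( ships_free(60, 1));  /* true,  true  */
            assert(!ships_free(60, 0));  /* true,  false */
            assert(!ships_free(40, 1));  /* false, true  */
            assert(!ships_free(40, 0));  /* false, false */
            return 0;
        }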

    Fault-Based Testing: Predicting what faults may be present in the system and engineering test cases to attempt to uncover said faults.
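    A common instance, invented here for illustration, is predicting an off-by-one fault at a range boundary and engineering the test cases that would expose it:

        #include <assert.h>

        /* Hypothetical validator (same invented 1..999 range as earlier).
         * Predicted fault: the programmer might have written q < 999
         * instead of q <= 999, wrongly rejecting the boundary value. */
        static int quantity_is_valid(int q)
        {
            return q >= 1 && q <= 999;
        }

        int main(void)
        {
            /* Cases engineered to expose the predicted off-by-one fault. */
            assert( quantity_is_valid(999));   /* boundary value accepted      */
            assert(!quantity_is_valid(1000));  /* just past the boundary fails */
            return 0;
        }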

    Limited applications of the above methods will be used to test this software, but due to time constraints it will be impossible to test for all situations. Therefore, the program will be carefully analyzed to determine which sections of code are most likely to be executed, and those sections will be tested thoroughly.

    The various testing methods can be used for individual program modules or for various combinations of modules up to and including the entire system.

    General testing procedures that will be undertaken for this software include:

    Unit Testing: Testing of each individual module on its own. To facilitate this, various test drivers and "stubs" will be used. For instance, the "Add new salesman" module will be tested apart from any other module to ensure its functionality.
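    A minimal sketch of a test driver and stub for such a module follows; every name here (add_salesman, db_insert) is hypothetical, since the real module interfaces have not yet been specified:

        #include <assert.h>
        #include <string.h>

        /* Stub standing in for the not-yet-integrated database module.
         * It records the last name it was asked to insert. */
        static char last_inserted[64];

        static int db_insert(const char *name)
        {
            strncpy(last_inserted, name, sizeof last_inserted - 1);
            last_inserted[sizeof last_inserted - 1] = '\0';
            return 0;  /* pretend the insert always succeeds */
        }

        /* Sketch of the "Add new salesman" module under test. */
        static int add_salesman(const char *name)
        {
            if (name == NULL || name[0] == '\0')
                return -1;  /* reject missing or empty names */
            return db_insert(name);
        }

        /* Test driver: exercises the module in isolation from the rest
         * of the system. */
        int main(void)
        {
            assert(add_salesman("") == -1);                  /* invalid input rejected */
            assert(add_salesman("J. Smith") == 0);           /* valid input accepted   */
            assert(strcmp(last_inserted, "J. Smith") == 0);  /* stub saw the data      */
            return 0;
        }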

    Subsystem Testing: Testing of groups of modules. An example is testing all of the modules pertaining to managing the customer list.

    Integration Testing: As each module is coded and tested, it will be integrated and tested along with all of the other modules. This will enable us to ensure reliability for completed modules before coding any new ones.

    Regression Testing: Any time a change is made to an already established portion of the system, re-testing of any part of the system likely to be affected will have to be undertaken. This helps verify that the change did not introduce any faults that were not present before the change.
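    In practice this means keeping every test case and rerunning the whole collection after each change; a minimal sketch of such a suite (the test names are invented placeholders):

        #include <stdio.h>

        /* Each regression test returns 0 on success; the bodies are
         * placeholders for the real checks. */
        static int test_add_salesman(void)  { return 0; }
        static int test_customer_list(void) { return 0; }

        static int (*suite[])(void) = { test_add_salesman, test_customer_list };

        int main(void)
        {
            int i, failures = 0;

            /* After any change, rerun every test in the suite. */
            for (i = 0; i < (int)(sizeof suite / sizeof suite[0]); i++)
                if (suite[i]() != 0)
                    failures++;
            printf("%d regression failure(s)\n", failures);
            return failures;
        }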

    Acceptance Testing: Testing done under the supervision of the user and from the user's point of view. For this system the acceptance testing will take place at the demonstration to the customer.

    There are certainly many technical terms here that have not been explained, and no glossary could be found that explains them.