Library System Testing Strategy

Testing Plan

Objectives

Our objective is to deliver a product with as few errors as possible, and one that is as user-friendly and functional as possible.

Testing Schedule

Andrew Tang will be responsible for having two non-coders test each module, using all the methods described in the following pages.

Summary of Monitoring Procedures

The Head of Testing, Andrew Tang, reads incoming test reports to make sure that the deadlines set forth previously are met. If they are not, Andrew has the option of extending the deadline, assigning the test to another person, or in cases of extreme time pressure, dropping the test altogether.

Summary of Reporting Procedures

For each test performed, the tester fills out a report stating whether the test was successful or whether bugs were found. If it was successful, the report is mailed only to the Head of Testing. If the test revealed bugs, the report also contains details of the circumstances that made each bug appear, and it is mailed to the Heads of both Coding and Testing.

Summary of Correcting Procedures

The Head of Coding, Laura Meynberg, receives the testing report indicating a bug. She gets the coding team to fix the bug. She notifies the Head of Testing that the bug has been fixed. The Head of Testing then schedules a re-test if time permits.

Integration Plan

The coders are divided into three groups. Each group consists of two coders, and each group is in charge of one of the sections below.

  1. Book Administration
  2. Borrower Administration
  3. Transactions

Each of these three main sections is further divided into sub-sections. For example, the Book Administration section is divided into:

  1. Add a Book
  2. Remove a Book
  3. Update a Book
  4. Search a Book
  5. Report on status
The coders in charge of the Book Administration section are then required to write the code for each sub-section. Once they have finished, they mail their code to Geoff Dyment (our group leader), who groups and combines all the code according to its section before posting it to the Web.

Steps in Implementing the Modules

Since the book and borrower databases are independent, it is possible to implement them at the same time.

Book Administration

The modules will be implemented in the following order.

Add a Book

Update a Book

Search a Book

Remove a Book

Report on status

Borrower Administration

As stated previously, this main module will be implemented simultaneously with the Book Administration main module. Internally, there is really no integration plan to speak of, as Borrower Administration consists only of one very large dialogue box (see the module specifications for details on this dialogue box).

Transactions

Lend a Book

Return a Book

Recall a Book

Renew a Book

Report on Fines

Details of Testing Procedures

Software testing is a critical element of software quality assurance (SQA). Nexus Corporation has always strived to deliver high-quality software to its clients through intensive software testing procedures.

Our testing objectives cover the following areas:

  1. Design tests with the intent of finding errors.
  2. Produce good test cases that have a high probability of finding as-yet-undiscovered errors.
  3. Execute successful tests, i.e. tests that uncover as-yet-undiscovered errors.

However, please bear in mind that testing can only show that defects are present; it cannot show their absence. We at Nexus will of course make our utmost effort to minimize or eliminate defects. Several different types of test plans will help to minimize the occurrence of bugs in the final product. As a programming team, we will know what needs to be corrected, and where to concentrate testing to ensure that any changes to the code have been effective. The following sections describe the methods of testing prescribed.

Methods of Testing

Walkthroughs

After getting in touch with a few real-life librarians, we will spend some time discussing with them how the system might fit into their work. We will attempt to develop concrete, detailed examples of tasks they perform, or want to perform, with the library system.

We will write up descriptions of all these tasks and circulate them to the testers of the system. We will also ask for details that were left out of the original task descriptions, gather corrections, clarifications, and suggestions, and rewrite the task descriptions if needed.

Walkthroughs are good for debugging the interface. For each user step or action in a task, the walkthrough asks whether the user can tell what needs to be done, can see how to do it, and can understand the system's response to it.

I/O Testing

I/O Testing attempts to uncover bugs that appear when a module performs external I/O. The Quality Assurance team will specifically note down the answers to the following questions:

  1. File attributes correct?
  2. OPEN/CLOSE statements correct?
  3. Format specification matches I/O statement?
  4. Files opened before use?
  5. End-of-file conditions handled?
  6. I/O errors handled?
  7. Any textual errors in output information?
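As a sketch of how these checklist items translate into executable checks, the snippet below tests a hypothetical record loader against items 4 to 6. The `title|author` file format and the function name are assumptions for illustration, not part of the design:

```python
import os
import tempfile

def load_book_records(path):
    """Hypothetical loader: reads one 'title|author' record per line.
    The record format is an assumption for illustration only."""
    records = []
    # Item 4: the file is opened before use, and closed on exit.
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:                 # tolerate blank trailing lines
                continue
            title, author = line.split("|", 1)
            records.append((title, author))
    return records

# Item 5: end-of-file conditions handled (trailing newline, blank line).
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("Dune|Herbert\n\n")
    path = f.name
assert load_book_records(path) == [("Dune", "Herbert")]
os.unlink(path)

# Item 6: I/O errors handled -- a missing file raises a catchable OSError.
try:
    load_book_records("no-such-file.txt")
    assert False, "expected an OSError"
except OSError:
    pass
```

Each checklist question becomes an assertion the tester can re-run after every fix.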

Each bug found will also come with a full text description. This report will comprise several key descriptive areas, including:

  1. Who found the bug.
  2. The date the bug was discovered.
  3. The version of the program in which the bug was discovered.
  4. The area of the program in which the bug was discovered.
  5. What exactly the bug is.
  6. The circumstances that brought about the bug.
  7. The operating system being used at the time.
  8. The hardware on which the program was being run.
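One way to keep these eight areas consistent across reports is a simple record type. The sketch below is illustrative only; the `BugReport` name, the field names, and the sample values are assumptions, not a mandated schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class BugReport:
    """One record per bug; the fields mirror the eight areas above."""
    found_by: str           # 1. who found the bug
    date_found: date        # 2. date of discovery
    program_version: str    # 3. program version
    program_area: str       # 4. area of the program
    description: str        # 5. what exactly the bug is
    circumstances: str      # 6. circumstances that brought it about
    operating_system: str   # 7. operating system in use
    hardware: str           # 8. hardware the program ran on

    def summary(self) -> str:
        """One-line summary suitable for the report's subject line."""
        return (f"[{self.program_version}] {self.program_area}: "
                f"{self.description} (found by {self.found_by} "
                f"on {self.date_found})")

# Hypothetical example report:
report = BugReport(
    found_by="A. Tang",
    date_found=date(1996, 3, 14),
    program_version="0.3",
    program_area="Book Administration / Add a Book",
    description="duplicate call numbers accepted",
    circumstances="added the same book twice in one session",
    operating_system="Windows 3.1",
    hardware="486DX PC",
)
print(report.summary())
```

A fixed record type makes it easy to check that no report arrives with an area missing.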
User Interface Testing

While some parts of the program may not be working yet, the interface developed so far can be tested for its usability. It is important that the testing team think like a user at this stage, judging all aspects of the interface from the user's point of view. By performing this stage of the testing adequately, we will hopefully end up with an interface that is very usable for the librarians. Several guidelines can be followed here:

  1. Use simple and natural dialogue.
  2. Speak the users' language.
  3. Minimize the user's memory load.
  4. Be consistent across the language, graphics, and input used.
  5. Provide feedback to the user.
  6. Deal with long delays gracefully.
  7. Provide clearly marked exits.
  8. Provide shortcuts for expert users.
  9. Deal with errors in a positive and helpful manner.
  10. Provide help online.

Logic Testing

Programs are tested to ensure that the logical conditions contained in program modules are error-free. Types of errors in a condition include the following:

  1. Boolean operator error.
  2. Boolean variable error.
  3. Boolean parenthesis error.
  4. Relational operator error.
  5. Arithmetic expression error.
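To illustrate, a deliberately seeded Boolean parenthesis error can be exposed by testing a condition exhaustively over its truth table. The overdue rule below is a hypothetical example, not taken from the module specifications:

```python
from itertools import product

def is_overdue_correct(checked_out, past_due_date, recalled):
    # Intended rule: a book is overdue if it is checked out AND
    # (its due date has passed OR it has been recalled).
    return checked_out and (past_due_date or recalled)

def is_overdue_buggy(checked_out, past_due_date, recalled):
    # Boolean parenthesis error: 'and' now binds before 'or'.
    return checked_out and past_due_date or recalled

def truth_table_diff(f, g):
    """Return the input combinations on which f and g disagree."""
    return [args for args in product([False, True], repeat=3)
            if f(*args) != g(*args)]

# Exhaustive testing over all 2^3 inputs exposes the bug: the buggy
# version reports a recalled book as overdue even when it is not
# checked out.
print(truth_table_diff(is_overdue_correct, is_overdue_buggy))
# → [(False, False, True), (False, True, True)]
```

For conditions with few variables, this exhaustive style catches every operator, variable, and parenthesis error in the condition itself.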

Loop testing will also be performed. Four different classes of loops can be defined: simple loops, concatenated loops, nested loops, and unstructured loops. Different techniques will be employed to detect errors in each of these looping constructs.
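For simple loops, the standard technique is to exercise the loop at its boundaries: zero passes, one pass, two passes, a typical count, and the point where it exits early. A minimal sketch, assuming a hypothetical fine-totalling routine:

```python
def total_fines(fines, limit=100):
    """Hypothetical: sum per-book fines, capping the total at `limit`."""
    total = 0
    for f in fines:           # simple loop: one pass per outstanding fine
        total += f
        if total >= limit:    # early exit once the cap is reached
            return limit
    return total

# Simple-loop boundary tests:
assert total_fines([]) == 0              # zero passes (loop skipped)
assert total_fines([5]) == 5             # exactly one pass
assert total_fines([5, 10]) == 15        # two passes
assert total_fines([30, 30, 30]) == 90   # typical count
assert total_fines([60, 60]) == 100      # early exit at the cap
```

Nested and concatenated loops are tested the same way, working from the innermost loop outward while holding the outer loops at their minimum counts.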

Details of Monitoring Procedures

For a list of tests, the people assigned to them, and their due dates, please see the Testing Summary section. A deadline is considered "met" if the testing report is e-mailed to Andrew Tang, the Head of Testing, before the due date for that test. If someone is unable to meet a testing deadline, they must inform Andrew as far in advance as possible. Andrew then chooses between three options, at his discretion: extending the deadline, assigning the test to another person, or, in cases of extreme time pressure, dropping the test altogether.

Details of Reporting Procedures

For each test, the tester will follow this procedure:

  1. Fill out a report containing the following information:
    • Number of the test, according to the section that lists tests, dates, and people.
    • Description of the test, including:
      • Module or subsystem being tested
      • Level of testing being done (unit, integration, functional, or performance)
      • Type of testing being done (walkthrough, logic, or I/O)
    • If the test was successful, write "successful" and indicate the date of test completion; no further information is needed.
    • If the test revealed bugs, then for each bug:
      • List the circumstances that led to the bug in the greatest detail possible.
      • Write date and approximate time the bug was discovered.
  2. Keep a personal copy of the report on file.
  3. E-mail the report to the Head of Testing. If the test was successful, Andrew checks off that test on the list of tests. If the test was unsuccessful, Andrew awaits word from the coding team that the bug has been fixed, and schedules a re-test.
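The routing rule in these steps can be sketched as a small helper. The report shape and the e-mail addresses below are placeholders, not part of the design:

```python
def report_recipients(report):
    """Route a finished test report: successful reports go only to the
    Head of Testing; reports listing bugs also go to the Head of Coding.
    The addresses are placeholders, not real mailboxes."""
    recipients = ["head-of-testing@example.org"]
    if report.get("bugs"):
        recipients.append("head-of-coding@example.org")
    return recipients

# A successful test is mailed only to the Head of Testing:
assert report_recipients({"test_no": 3, "bugs": []}) == [
    "head-of-testing@example.org"]

# A test that revealed bugs is mailed to both Heads:
assert report_recipients({"test_no": 4, "bugs": ["crash on empty title"]}) == [
    "head-of-testing@example.org", "head-of-coding@example.org"]
```

Encoding the routing rule once keeps the reporting and correcting procedures consistent with each other.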

Details of Correcting Procedures

These are the steps followed for correcting a bug reported by a tester:

  1. The Head of Coding is mailed all reports of tests that uncovered bugs.
  2. The coding team fixes the bug.
  3. The Head of Coding informs the Head of Testing that the bug appears to have been fixed. If time permits, a re-test is scheduled by the Head of Testing.


