Lifecycle Testing:

We have seen that the phrase Software Testing refers to a variety of activities that span the software development lifecycle. Practitioners who recognize the value of testing throughout the development lifecycle often wonder which techniques are most appropriate for the variety of artifacts created as software is developed.

To address this question, we consider the activities that are common to all software development processes and the objective of each. We then examine the artifacts created by these activities and identify the testing techniques that are most appropriate for verifying them.

We may classify the activities in most software development processes into the following broad categories:

  1. Requirements Elicitation
  2. Design:
    1. Architectural & High Level Design
    2. Low Level Design
  3. Coding & Unit Testing
  4. Integration & System Testing
  5. Acceptance Testing

Requirements Elicitation:

The authors discuss this activity in Chapter 2 (pages 23-38). Recall that the primary artifact created is the Requirements Specification document. The authors also mention a Requirements Definition document, but for our purposes we will consider the two documents to be one and the same. Recall that a Requirements Traceability Matrix is also created, but it is not a final deliverable at this stage of the process; it is a living document that is updated throughout the lifecycle.

  1. Objective:

    To document the functional and non-functional requirements of the software product/system. The document represents an agreement between the customer and the developer about what the software product/system will do. On page 36 the authors mention that requirements should be:

    1. Complete
    2. Unambiguous
    3. Consistent
    4. Traceable
    5. Feasible
    6. Testable

  2. Testing Technique:

    Since the artifacts created are non-executable, static testing is appropriate. Because Inspections are the most effective of the formal review techniques, they are recommended.

Design:

Architectural & High Level Design activities are sometimes done in conjunction with Low Level Design. However, since these activities are conceptually somewhat different, we will consider them separately and in the process highlight some of these differences.

  1. Architectural & High Level Design

    Given the requirements, this is the process of defining the external and internal aspects of the software product.

    It is at this stage that decisions are made about partitioning the software product, given hardware and software resources and constraints. That is, the architecture of the software product is defined. The feasibility issue, previously addressed at the requirements stage, may be revisited. Phased delivery of functionality, based on the requirements prioritization done at the requirements stage, is considered and decided on. Sizing, that is, effort estimation, is also revisited.

    1. Objective:
      1. Design interfaces. That is, user interfaces, APIs, data structures, and data stores (i.e. databases).
      2. Design software modules/components to implement required functionality.
      3. Design System Test plan.
      4. Ensure that the functional requirements are satisfied.

    2. Testing Technique:

      Given the objectives, it is clear that several artifacts constitute the Architectural/High Level design document. However, these artifacts are all non-executable, so static testing is appropriate. Because Inspections are the most effective of the formal review techniques, they are recommended for all artifacts.

  2. Low Level Design

    1. Objective:

      The process of transforming the High Level design into the level of detail appropriate for coders. That is, algorithms are outlined (possibly in pseudocode).

    2. Testing Technique:

      Since the artifacts created are all non-executable, static testing is appropriate. Because Inspections are the most effective of the formal review techniques, they are recommended for all artifacts. (A small example of a Low Level Design artifact follows.)
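
      To make the Objective above concrete, here is a minimal sketch of what a Low Level Design artifact might look like: an algorithm outlined in pseudocode (shown as comments), together with the Python code a coder might later derive from it. The function and its details are hypothetical, invented for the example.

        # Low Level Design artifact: algorithm outlined in pseudocode.
        #   function find(items, target):
        #     set low to the first index, high to the last index
        #     while low <= high:
        #       set mid to the midpoint of low and high
        #       if items[mid] equals target, return mid
        #       if items[mid] is less than target, search the upper half
        #       otherwise search the lower half
        #     report "not found"

        def find(items, target):
            # The coder's realization of the outline above (binary search).
            low, high = 0, len(items) - 1
            while low <= high:
                mid = (low + high) // 2
                if items[mid] == target:
                    return mid
                if items[mid] < target:
                    low = mid + 1
                else:
                    high = mid - 1
            return -1  # not found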

Coding & Unit Testing

  1. Objective:
    1. Transform the Low Level design into executable components. That is, code new components and/or modify existing components.
    2. Verify code against Design artifacts.
    3. Execute all new and modified code to ensure that:
      • All branches are executed.
      • The logic is correct.
      • Paths are verified.
    4. Exercise all error messages, return codes, etc.

  2. Testing Technique:

    Note that several artifacts will be created. Not only will code be produced in both textual and executable form, but test plans, user interfaces, etc. will be created as well. Some of these artifacts are executable, so dynamic testing is appropriate. Note, however, that static testing is also appropriate, since several artifacts (e.g. code, test plans) may be effectively reviewed statically. A minimal unit-test sketch follows.
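
    As an illustration of the dynamic side, here is a minimal sketch (in Python) of a unit test that forces execution of both branches of a simple component. The component and its behaviour are hypothetical, invented for the example.

      import unittest

      def classify_temperature(celsius):
          # Hypothetical component under test: two branches to cover.
          if celsius > 100:
              return "boiling"
          return "not boiling"

      class TestClassifyTemperature(unittest.TestCase):
          def test_true_branch(self):
              # Drives the 'if' branch.
              self.assertEqual(classify_temperature(150), "boiling")

          def test_false_branch(self):
              # Drives the fall-through branch.
              self.assertEqual(classify_temperature(20), "not boiling")

      if __name__ == "__main__":
          unittest.main()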

Integration & System Testing

These activities are sometimes done separately. Although both involve integrating individual components into a system/product, they are somewhat different.

  1. Integration

    1. Objective:
      1. Assemble components.
      2. Test interfaces between components.
      3. Test interfaces with external components.
      4. Test APIs.
      5. Test error recovery and messages that arise from component interaction.

    2. Testing Technique:

      Executable components are being assembled into systems and sub-systems and so dynamic testing is the appropriate technique at this stage.

      • White Box: Basis Path Testing is the recommended technique. However, the focus here is not paths within components but paths between components (see the path-count sketch after this list).
      • Black Box: Equivalence Partitioning, Boundary Value Analysis, and Error Guessing. Note that, unlike in Unit Testing, the assembled components will at this stage usually map directly to one or more requirements.
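
      As a reminder of how Basis Path Testing sizes the test effort, the cyclomatic complexity V(G) = E - N + 2 of a connected control-flow graph gives the number of independent paths to cover; at this stage the graph's nodes are components and its edges are the calls between them. A minimal sketch, using a hypothetical inter-component call graph:

        def cyclomatic_complexity(edges, nodes):
            # V(G) = E - N + 2 for a single connected graph.
            return len(edges) - len(nodes) + 2

        # Hypothetical call graph: nodes are components, edges are
        # the calls between them.
        nodes = ["UI", "Validator", "Service", "DB", "Logger"]
        edges = [("UI", "Validator"), ("Validator", "Service"),
                 ("Validator", "Logger"), ("Service", "DB"),
                 ("Service", "Logger"), ("Logger", "UI")]

        print(cyclomatic_complexity(edges, nodes))  # 6 - 5 + 2 = 3 paths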

  2. System Testing

    1. Objective:

      Recall that we use the term System Test to refer to a collection of activities that involve functional and non-functional testing. These may be described as:

      1. Functional Testing
      2. Regression Testing
      3. Performance Testing
      4. Usability Testing

    2. Testing Technique:

      Executable components have been assembled into the complete product/system, so dynamic testing is the appropriate technique at this stage, and Black Box testing is the technique of choice. Remember that this is primarily a validation process being conducted by developers. The System Test plan, first created at the Design stage, is augmented and used.

      Remember that Equivalence Partitioning is the primary technique. It is likely that one of the weaker forms of Equivalence Partitioning will be used, since there will be more emphasis on ensuring that all functionality is present than on validating error conditions. In any event, it should be augmented where necessary with Boundary Value Analysis and Error Guessing. A small sketch combining these techniques follows.
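
      As a sketch of how the three techniques combine, assume a hypothetical requirement that an input field accept integers from 1 to 100 (both the requirement and the component are invented for the example):

        # Hypothetical requirement: the field accepts integers from 1 to 100.
        LOW, HIGH = 1, 100

        # Equivalence Partitioning: one representative per partition.
        partitions = {
            "valid": 50,         # inside the valid range
            "below_range": -5,   # invalid: too small
            "above_range": 250,  # invalid: too large
        }

        # Boundary Value Analysis: values at and adjacent to each boundary.
        boundaries = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

        # Error Guessing: values experience suggests are often mishandled.
        guesses = [0, None, "", "abc", 10**9]

        def is_valid(value):
            # Sketch of the component under test.
            return isinstance(value, int) and LOW <= value <= HIGH

        for value in list(partitions.values()) + boundaries + guesses:
            print(value, is_valid(value))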

      Note: Remember that regression testing is merely the term used to describe testing intended to ensure that the software product has not regressed from build to build, as bugs are fixed, or, in the case of modified components, from previously working functionality. Remember also that Usability testing occurs throughout the lifecycle, beginning with Requirements Elicitation. At this stage, the usability of the integrated, executable system is evaluated to ensure that the usability requirements have been satisfied.

Acceptance Testing

Recall that the primary objective is the same as for System Testing, but the customer is the driver. Black Box techniques will be used, but it is likely, given the experience of the customers, that Error Guessing will be the primary technique. Also, the focus will be on the testing of functional requirements.

However, customers should be encouraged to evaluate non-functional requirements as well, particularly support requirements. That is, the procedures, tools, and infrastructure designed to support the customers should be tested by the users: for example, incident reporting and logging facilities and electronic customer support facilities (e.g. internet/intranet discussion groups, software download facilities, etc.).

 

Bebugging/Mutation Testing:

On page 222 the author mentions Bebugging and Mutation Testing as ways of determining the effectiveness of testing strategies. This issue has been the focus of attention for practitioners and researchers since Software Testing emerged as a distinct and important discipline. The author captures the essence of the issue on page 222 by recounting a hypothetical exchange between a manager and a software tester.

Manager: How close are we to finding most of the bugs in this build?

Tester: How many of these little bugs would we be looking for?

The point is that most development efforts proceed into the testing of the software product without any way of determining how effective the testing activities have been, and so are essentially hoping for the best when they decide to release the product.

A rather striking approach to answering this question is the technique of Bebugging/Mutation Testing, which the testing community has borrowed from Biology/Ecology. There the technique is known as capture/recapture sampling, and it is widely, and very effectively, used to estimate the size of populations. The general idea is that a sample is selected from the population, tagged (i.e. marked in some identifiable way), and released back into the population. After time is allowed for mixing, another sample is selected. The proportion of tagged items in the new sample should be about the same as the proportion of tagged items in the population as a whole. That is, if k of the n items in the sample are tagged and K tagged items were introduced into a population of unknown size N, then:

k/n = K/N

Given this expression, we may estimate the unknown value N by rearranging terms. That is, N = K(n/k). However, this simple version of the technique requires the following assumptions (a numeric sketch of the estimate follows the list):

  1. No immigration, births, deaths between capture and recapture.
  2. The probability of being caught is the same for all items.
  3. Tags are not lost and are always recognizable.
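
A minimal sketch of the computation, with made-up numbers (this is the Lincoln-Petersen estimate from ecology):

  def estimate_population(K, n, k):
      # From k/n = K/N, rearranging gives N = K * (n/k).
      if k == 0:
          raise ValueError("no tagged items recaptured; cannot estimate")
      return K * n / k

  # Hypothetical numbers: 100 fish are tagged and released; a later
  # sample of 60 fish contains 12 tagged ones.
  print(estimate_population(K=100, n=60, k=12))  # 100 * 60 / 12 = 500.0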

Bebugging/Mutation Testing is slightly different. In this case, the software product is mutated by seeding it with bugs, so if N1 bugs are seeded into a product that has N2 bugs, then the mutated product has N = N1 + N2 bugs in total. The mutated product is then tested. Now, if n bugs are found and k of them are seeded bugs, then, as in the Biology/Ecology setting, the proportion of seeded bugs among those found should be about the same as the proportion of seeded bugs in the mutated product. That is, we may say:

k/n = N1/N = N1/(N1 + N2)

Given this expression, we may estimate the unknown value N2 by rearranging terms. That is, N2 = N1(n/k - 1). Note that, in this case, only assumption 2 is an issue: seeded bugs must be just as likely to be found as real ones. However, considerable effort and progress have been made to mitigate the effect of this assumption.
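
A minimal sketch of the computation, again with made-up numbers:

  def estimate_remaining_bugs(N1, n, k):
      # From k/n = N1/(N1 + N2), rearranging gives N2 = N1 * (n/k - 1).
      if k == 0:
          raise ValueError("no seeded bugs found; cannot estimate")
      return N1 * (n / k - 1)

  # Hypothetical numbers: 50 bugs are seeded; testing then finds 30 bugs,
  # 20 of which are seeded.
  print(estimate_remaining_bugs(N1=50, n=30, k=20))  # 50 * (1.5 - 1) = 25.0

In this made-up case the estimate is 25 real bugs in total; since 10 real bugs (n - k) were found during testing, about 15 real bugs are estimated to remain.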