Lifecycle Testing:
We have seen that the phrase Software Testing refers to a variety of activities that span the software development lifecycle. Practitioners who recognize the value of testing throughout the development lifecycle often wonder which techniques are most appropriate for the variety of artifacts that are created as software is developed.
To address this question, we consider the activities that are common to all software development processes and the objective of each. Furthermore, we examine the artifacts created as a result of these activities and specify the testing techniques that are most appropriate for verifying them.
We may classify the activities in most software development processes into the following broad categories:
Requirements Elicitation:
The authors discuss this activity in Chapter 2 (pages 23-38). Recall that the primary artifact created is the Requirements Specification document. The authors also mention a Requirements Definition document, but for our purposes we will consider both of these documents to be one and the same. Recall that a Requirements Traceability Matrix is also created, but it is not a final deliverable at this stage of the process. It is a document that evolves; that is, it is updated throughout the lifecycle.
The objective is to document the functional and non-functional requirements of the software product/system. The document represents an agreement between the customer and the developer about what the software product/system will do. On page 36 the authors mention that requirements should be:
Since the artifacts created are non-executable, static testing is appropriate. Since Inspections are the most effective of the formal review techniques, they are recommended.
Design:
Architectural & High Level Design activities are sometimes done in conjunction with Low Level Design. However, since these activities are conceptually somewhat different, we will consider them separately and in the process highlight some of these differences.
The objective is, given the requirements, to define the external and internal aspects of the software product.
It is at this stage that decisions are made about partitioning the software product, given hardware and software resources and constraints. That is, the architecture of the software product is defined. The feasibility issue, previously addressed at the requirements stage, may be revisited. Phased delivery of functionality, based on the requirements prioritization established at the requirements stage, is considered and decided on. Sizing, that is, effort estimation, is also revisited.
Given the objectives, it is clear that several artifacts constitute the Architectural/High Level design document. However, these artifacts are all non-executable, so static testing is appropriate. Since Inspections are the most effective of the formal review techniques, Inspections are recommended for all artifacts.
The objective is to transform the High Level design into the level of detail appropriate for coders. That is, algorithms are outlined (possibly in pseudocode).
Since the artifacts created are all non-executable, static testing is appropriate. Since Inspections are the most effective of the formal review techniques, Inspections are recommended for all artifacts.
Coding & Unit Testing:
Note that several artifacts will be created. Not only will code be produced in both textual and executable form, but test plans, user interfaces, etc. will also be created. Some of the artifacts created will be executable, so dynamic testing will be appropriate. Note, however, that static testing is also appropriate, since several artifacts (e.g., code, test plans) may be effectively reviewed by static testing.
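As an illustration of dynamic testing at the unit level, consider the minimal sketch below, which uses Python's built-in unittest module. The withdraw function and its rules are hypothetical, invented purely for this example; the sketch simply shows a unit exercised by executing it against expected outcomes.

    import unittest

    def withdraw(balance, amount):
        """Hypothetical unit under test: returns the new balance after a withdrawal."""
        if amount <= 0:
            raise ValueError("amount must be positive")
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    class WithdrawTests(unittest.TestCase):
        def test_normal_withdrawal(self):
            self.assertEqual(withdraw(100, 40), 60)

        def test_overdraft_rejected(self):
            with self.assertRaises(ValueError):
                withdraw(100, 150)

    if __name__ == "__main__":
        unittest.main()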
Integration & System Testing:
These activities are sometimes done separately. Although both involve integrating individual components into a system/product, they are somewhat different.
Executable components are being assembled into systems and sub-systems and so dynamic testing is the appropriate technique at this stage.
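A minimal integration-test sketch is shown below. The two components (a currency formatter and an invoice-line builder) are assumptions made only for illustration; the point is that the assembled, executable pieces are exercised together rather than in isolation.

    # Two hypothetical components, assumed to have been unit tested separately.
    def format_currency(amount):
        return f"${amount:,.2f}"

    def build_invoice_line(description, amount):
        # Integrates the formatter into a higher-level component.
        return f"{description}: {format_currency(amount)}"

    # Integration test: exercise the components working together.
    assert build_invoice_line("Licence fee", 1250) == "Licence fee: $1,250.00"
    print("integration check passed")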
Recall that we use the term System Test to refer to a collection of activities that involve functional and non-functional testing. These may be described as:
Executable components have been assembled into the complete product/system, so dynamic testing is the appropriate technique at this stage, and Black Box testing is the technique used. Remember that this is primarily a validation process being conducted by the developers. The System Test plan, first created at the Design stage, is augmented and used.
Remember that Equivalence Partitioning is the primary technique. It is likely that one of the weaker forms of Equivalence Partitioning will be used, since there will be more emphasis on ensuring that all functionality is present than on validating error conditions. In any event, it should be augmented where necessary with Boundary Value Analysis and Error Guessing, as in the sketch below.
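The sketch below combines the weak form of Equivalence Partitioning (one representative value per class) with Boundary Value Analysis. The valid age range of 18-65 is a hypothetical requirement chosen only to illustrate the technique.

    # A sketch of weak Equivalence Partitioning augmented with Boundary Value Analysis,
    # assuming a hypothetical requirement that a valid applicant age is 18-65.
    def is_valid_age(age):
        """Hypothetical unit under test."""
        return 18 <= age <= 65

    # One representative per equivalence class, plus the boundary values.
    test_cases = [
        (10, False),               # class: below the valid range
        (40, True),                # class: inside the valid range
        (90, False),               # class: above the valid range
        (17, False), (18, True),   # lower boundary and its neighbour
        (65, True), (66, False),   # upper boundary and its neighbour
    ]

    for age, expected in test_cases:
        assert is_valid_age(age) == expected, f"failed for age={age}"
    print("all equivalence/boundary cases passed")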
Note: Remember that regression testing is merely a term used to describe testing intended to ensure that the software product has not regressed, that is, that previously working functionality still works from build to build, as bugs are fixed, or as components are modified. Remember also that Usability testing occurs throughout the lifecycle, starting from Requirements Elicitation. At this stage, Usability testing of the integrated, executable system is done to ensure that the usability requirements have been satisfied.
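In practice, a regression check can be as simple as re-running an accumulated suite of previously passing cases against each new build, as in the sketch below; the discount function and its expected results are hypothetical.

    def apply_discount(price, percent):
        # Hypothetical component under test; modified from build to build.
        return round(price * (1 - percent / 100), 2)

    # Regression suite: cases that passed on earlier builds, re-run on every build.
    regression_cases = [
        ((100.0, 10), 90.0),
        ((59.99, 0), 59.99),
        ((20.0, 50), 10.0),
    ]

    for (price, percent), expected in regression_cases:
        assert apply_discount(price, percent) == expected, (price, percent)
    print("no regressions detected in previously working functionality")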
Recall that the primary objective is the same as for System Testing, but the customer is the driver. Black Box techniques will be used, but it is likely, given the experience of the customers, that Error Guessing will be the primary technique. Also, the focus will be on the testing of functional requirements.
However, customers should be encouraged to evaluate non-functional requirements as well, particularly support requirements. That is, the procedures, tools, and infrastructure designed to support the customers should be tested by the users: for example, incident reporting and logging facilities and electronic customer support facilities (e.g., internet/intranet discussion groups, software download facilities).
BeBugging/Mutation Testing:
On page 222 the author mentions BeBugging and Mutation Testing as a way of determining the effectiveness of testing strategies. This is an issue that has been the focus of attention for practitioners and researchers since Software Testing emerged as a distinct and important discipline. The author captures the essence of this issue on page 222 by recounting a hypothetical exchange between a manager and a software tester.
Manager: How close are we to finding most of the bugs in this build?
Tester: How many of these little bugs would we be looking for?
The point is that most development efforts proceed into the testing of the software product without any way of determining how effective the testing activities have been, and so are essentially hoping for the best when they decide to release the product.
A rather striking approach to answering this question is the technique of Bebugging/Mutation Testing, which the testing community has borrowed from Biology/Ecology. In Biology/Ecology the technique is known as capture/recapture sampling and is widely, and very effectively, used to estimate the size of populations. The general idea is that a sample is selected from the population, tagged (i.e., marked in some identifiable way), and released back into the population. After time for mixing, another sample is selected. The proportion of tagged items in the new sample should be about the same as the proportion of tagged items in the population. That is, if k of the n items in the sample are tagged and K tagged items were introduced into a population of unknown total size N, then:
k/n ≈ K/N
Given this expression, we may estimate the unknown value N by rearranging terms. That is, N = K(n/k). However, in this simple version of the technique, the following assumptions are required:
Bebugging/Mutation testing is slightly different. In this case, the software product is mutated by seeding it with bugs, so if N1 bugs are seeded into a product that has N2 bugs of its own, then the mutated product has N = N1 + N2 total bugs. The mutated product is then tested. Now, if n bugs are found and k of them are seeded bugs, then, as in the Biology/Ecology setting, the proportion of seeded bugs among the bugs found should be about the same as the proportion of seeded bugs in the mutated product. That is, we may say:
k/n ≈ N1/N = N1/(N1 + N2)
Given this expression, we may estimate the unknown value N2 by rearranging terms. That is, N2 = N1(n/k - 1). Note that, in this case, only assumption 2 is an issue. However, considerable effort and progress have been made to mitigate the effect of this assumption.
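The two estimates above can be expressed directly in code. The sketch below implements both the capture/recapture estimate N = K(n/k) and the bebugging estimate N2 = N1(n/k - 1); the sample numbers are illustrative only and are not taken from the text.

    def estimate_population(K, n, k):
        # Capture/recapture: K tagged items released, n items in the second
        # sample, k of which are tagged. Solves k/n = K/N for N.
        if k == 0:
            raise ValueError("no tagged items recaptured; estimate undefined")
        return K * n / k

    def estimate_native_bugs(N1, n, k):
        # Bebugging: N1 bugs seeded, n bugs found in total, k of them seeded.
        # Solves k/n = N1/(N1 + N2) for N2, the product's own (native) bugs.
        if k == 0:
            raise ValueError("no seeded bugs found; estimate undefined")
        return N1 * (n / k - 1)

    # Illustrative numbers only.
    print(estimate_population(K=50, n=40, k=8))      # estimated population N = 250.0
    print(estimate_native_bugs(N1=20, n=30, k=15))   # estimated native bugs N2 = 20.0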