Chapter 1:

  1. Fault: An error in the artifact (pg5). It may result in the artifact not behaving as expected.
  2. Failure: the manifestation of a fault (pg5). That is, the artifact does not behave as expected.
  3. On pg 5 the author indicates that there are two basic functions of software testing.
    1. Validation: (pg6)
      1. Activities that ensure that all requirements are implemented. That is, does the final product, as implemented, meet the expectations of the customer?
      2. Establishing the fitness of a software product for its use.
      Are we building the right product?
    2. Verification: (pg6)
      1. Activities that ensure that software artifacts are created as specified. That is, are the artifacts of a particular phase consistent with that phase and preceding phases?
      2. Establishing the correspondence between artifacts and specifications.
      Are we building the product right?
  4. Quality?
    1. OED, 1990; n., the degree of excellence!
    2. Crosby; Quality = zero defects!
    3. Juran; Quality = fitness for purpose!
    4. ISO, 1996; The totality of features and characteristics of a product that bear on its ability to satisfy specified needs.
      Note: Two contrasting views of quality.
      1. The higher the quality, the higher the cost (either due to extra functionality or extra care). This leads to the notion that you should provide the level of quality that the customer can afford or is willing to pay for (the value-based view - most commercial software).
      2. Second (the Crosby view), quality is free! Improvements in quality during construction lead to fewer defects and increased reliability in use, and hence a reduction in cost during maintenance. (Remember that eighty percent of software development costs are incurred after product deployment, i.e. during maintenance.)
  5. Software Quality:
    On pg 6 the author says a software product is said to have good quality if:
    1. It has few failures when used by the customer.
    2. It is reliable (i.e. when used by the customer it seldom demonstrates unexpected behavior).
    3. It satisfies a majority of users.
      Note: User based view - very hard to quantify.
  6. On pg 7, 8 the author mentions the following two components of "Rapid Testing". I will drop the word Rapid and use Testing instead.
    1. Static Testing: Inspections, Walkthroughs, Peer Reviews, Static Analysis (syntax defects, data structure anomalies, etc.) - any activity that can be done to uncover defects without executing the artifact (see the static-check sketch after this list).
    2. Dynamic Testing: Executing the program with the intent to determine if actual behavior differs from expected behavior (see the dynamic-test sketch after this list).
  7. Process: (pg 10)
    A series of steps that lead to the production of some output. The steps involve:
    1. Activities (each with entry and exit criteria)
    2. Constraints (e.g. schedule, budget)
    3. Resources (e.g. people, facilities, etc.)
    If the process relates to the building of a software product then it is referred to as a Software Lifecycle (e.g. Waterfall model).
  8. On pg 15-17 the author discusses the notion of a testing process. The following terminology is introduced:
    1. Test Case: The collection of inputs, expected results and execution conditions for a single test.
      Note: The author does not mention the following but they are important notions:
      1. If a test is executed without prior planning it is referred to as an ad-hoc test case - especially if the expected behavior is not known prior to running the test!
      2. Often, in such cases, we may think the product seems to operate correctly. We sometimes use the term coincidental correctness - when behavior appears to be what is expected, but it is just a coincidence. The "but it works" syndrome is usually a manifestation of this phenomenon.
      3. The term oracle refers to a procedure or process that can determine if actual behavior matches expected behavior (the dynamic-test sketch after this list includes a trivial oracle).
    2. Pass/fail criteria: Rules used to determine whether a product passes or fails a given test. The author does not mention the following but they are important notions:
      1. Incident - when a test produces an unexpected outcome - further effort is necessary to classify the incident as a software error, a design error, a specification error, a testing error, etc.
      2. Incident/Bug Report - a method of transmitting the occurrence of a discrepancy between actual and expected output to someone who cares for "follow-up" - also known as discrepancy report, defect report, problem report, etc.
      3. Work-around - a procedure by which an error in the product can be "by-passed" and the desired function achieved.
    3. Test Suite: (not mentioned in text) A collection of test cases necessary to "adequately" test a product or function.
    4. Test Plan: A document describing the scope, approach, resources and schedule of intended testing activity - identifies features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.
  9. System Test: Activities that ensure that the product does what the customer expects it to do. Two main types:
    1. Functional Testing: Are all functions expected by the customer present and working as expected?
    2. Performance Testing: Stress testing, volume testing, timing tests, recovery tests etc.
    Note: Internal to the development team. Not conducted by the customer!
  10. Acceptance Test: Activities conducted by the customer to ensure that the product does what is expected. Deployment is conditional on the customer agreeing that the product is sufficiently acceptable. The author mentions two distinct types:
    1. Alpha: Customer internal to organization/company.
    2. Beta: Customer external to organization/company.
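
A minimal sketch of static testing (an illustrative assumption, not from the text): the artifact is parsed for syntax defects without ever being executed. The file name passed on the command line and the choice of check are hypothetical.

```python
# Static-testing sketch: look for syntax defects in a source file
# without executing it (parse only).
import ast
import sys

def static_syntax_check(path):
    """Report syntax defects in a Python source file without running it."""
    with open(path) as f:
        source = f.read()
    try:
        ast.parse(source, filename=path)   # parse only - the code never runs
    except SyntaxError as err:
        print(f"{path}:{err.lineno}: syntax defect: {err.msg}")
        return False
    return True

if __name__ == "__main__":
    # e.g. python static_check.py some_module.py
    sys.exit(0 if static_syntax_check(sys.argv[1]) else 1)
```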
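
A minimal sketch of dynamic testing that also illustrates the terminology of item 8: the toy artifact, the inputs, and the expected results below are all assumptions chosen purely to show a test case, a trivial oracle, a pass/fail criterion, and a small test suite.

```python
# Dynamic-testing sketch (illustrative assumption, not from the text).

def is_leap_year(year):
    """Toy artifact under test: the Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Each test case = inputs + expected result; together they form a small test suite.
test_cases = [
    (2000, True),    # divisible by 400
    (1900, False),   # divisible by 100 but not by 400
    (2024, True),    # divisible by 4 only
    (2023, False),   # not divisible by 4
]

def oracle(actual, expected):
    """Oracle: decides whether actual behavior matches expected behavior."""
    return actual == expected

def run_suite(cases):
    failures = 0
    for year, expected in cases:
        actual = is_leap_year(year)      # dynamic testing: execute the artifact
        if oracle(actual, expected):     # pass/fail criterion
            print(f"PASS is_leap_year({year}) -> {actual}")
        else:                            # an incident, to be written up in a report
            print(f"FAIL is_leap_year({year}) -> {actual}, expected {expected}")
            failures += 1
    return failures

if __name__ == "__main__":
    raise SystemExit(run_suite(test_cases))
```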

Expected behavior:

Q: How do you know what behavior is expected from an artifact?
A: Requirements specification?

Note: Quite often supplemented (where applicable) by the Proposal/Contract, Marketing Literature, Users' Manual, and Common Sense.

Chapter 2:

Standish Group (1994):

Survey of 350 companies, > 8000 projects:

Standish Group (1995):

Follow up survey to identify causes of failed projects:

The author points out on pg 24 that the survey also suggests that a large number of defects are introduced during the requirements phase.

Barry Boehm argues that the earlier in the development cycle a bug is detected, the cheaper it is to rectify:

The author points out that this justifies the need for testing at the earliest stages of development, that is, at the requirements phase. In summary:

Requirements:

  1. Requirements Specification: Also known as the Functional Specification. Arguably the most important artifact of the software development process. See pg 31 for an example.
    On pg 30 the author mentions that each requirement should be uniquely identified (i.e. numbered), presented from a user's perspective, cover both functional and non-functional aspects (e.g. number of concurrent users, SLAs, etc.), and be placed under configuration management (i.e. version control).
    Note: May be appropriate/prudent to classify requirements (pg 28):
    1. Essential
    2. Highly desirable.
    3. Nice to have.
  2. Requirements Traceability Matrix: This document is initiated in the requirements phase and is updated throughout the lifecycle as development progresses (see pg 35); a small illustrative sketch follows this list.
  3. Requirements must be tested (pg 36). Static Testing (i.e. Inspections, Walkthroughs, Peer Reviews) is used since requirements are non-executable artifacts.
    Requirements should be (pg 38):
    1. Complete: All components must be present. That is, no requirement, or part of a requirement, should explicitly or implicitly suggest "TBD".
    2. Unambiguous: Requirements should be precise and clear. That is, sufficient detail should be provided that only a single interpretation of the requirement is possible.
    3. Consistent: Requirements should not conflict with each other.
    4. Traceable: Requirements should be identifiable (i.e. numbered) so that it can be seen where they are represented in each artifact.
    5. Feasible: Requirements should be implementable given the constraints of time and budget.
    6. Testable/Verifiable: You should be able to write one or more test cases for each requirement.
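
A minimal sketch of a requirements traceability matrix (the requirement and test-case identifiers are hypothetical assumptions): mapping each uniquely numbered requirement to the test cases that exercise it makes coverage gaps visible.

```python
# Requirements traceability sketch (hypothetical identifiers).

requirements = ["REQ-001", "REQ-002", "REQ-003"]

traceability_matrix = {
    "REQ-001": ["TC-01", "TC-02"],   # covered by two test cases
    "REQ-002": ["TC-03"],
    "REQ-003": [],                   # not yet covered - a traceability gap
}

def untraced(reqs, matrix):
    """Return the requirements that no test case traces back to."""
    return [r for r in reqs if not matrix.get(r)]

if __name__ == "__main__":
    gaps = untraced(requirements, traceability_matrix)
    if gaps:
        print("Requirements with no test coverage:", ", ".join(gaps))
    else:
        print("Every requirement traces to at least one test case.")
```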

Efficiency in testing:

Q: How many possible test cases can you have?
A: Many!

Note:

  1. Testing involves inferring the behavior of all possible inputs from a relatively small number of actual inputs.
  2. Pick your inputs to ensure the "Biggest Bang for the Buck"; the measure of "bang" depends on the goal you have selected for testing. One common way to select such inputs, equivalence partitioning with boundary values, is sketched below.
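
The notes do not name a technique for choosing that small set of inputs; one common approach is equivalence partitioning combined with boundary values. The sketch below is an assumption (the valid range 1..100 and the toy accept function are hypothetical): one representative is taken from each equivalence class, plus the values at and just outside the boundaries.

```python
# Input-selection sketch: equivalence partitioning plus boundary values
# (the valid range and the function under test are hypothetical).

LOW, HIGH = 1, 100   # hypothetical valid input range

def accept(value):
    """Toy artifact under test: accept values inside the valid range."""
    return LOW <= value <= HIGH

# One representative per equivalence class, plus the boundaries:
# below the range | low boundary | interior | high boundary | above the range
selected_inputs = [LOW - 1, LOW, (LOW + HIGH) // 2, HIGH, HIGH + 1]
expected        = [False,   True, True,             True, False]

if __name__ == "__main__":
    for value, exp in zip(selected_inputs, expected):
        actual = accept(value)
        status = "PASS" if actual == exp else "FAIL"
        print(f"{status} accept({value}) -> {actual} (expected {exp})")
```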