4. Test Design Techniques
4.1 The Test Development Process
Considering the Analysis, Design and Implementation stages of the Test Process:
- Analysis => Test Design Specification / Test conditions
- Design => Test Case Specification / Test cases
- Implementation => Test Procedure Specification / Test procedures
4.1.1 Differentiate between a test design specification, test case specification and test procedure specification
- test design specification: A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases.
- test case specification: A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item.
- test procedure specification: A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script.
4.1.2 Compare the terms test condition, test case and test procedure
- test condition: An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.
Test conditions are derived and prioritised by analysing requirements. They state 'what' will be tested.
- test case: A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
Test cases break down test conditions. They state 'how' the 'what' (test conditions) will be tested.
- test procedure: …a sequence of actions for the execution of a test. Also known as test script or manual test script.
Test procedures are the low-level steps of the tests.
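A minimal illustration (all details invented) for a login screen:
- test condition: the login function rejects an incorrect password (the 'what')
- test case: precondition: user 'alice' exists; inputs: username 'alice', password 'wrong'; expected result: an error message is shown and no session is created (the 'how')
- test procedure: 1. open the login page; 2. enter username 'alice'; 3. enter password 'wrong'; 4. click 'Log in'; 5. verify that the error message appears (the low-level steps)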
4.1.3 Evaluate the quality of test cases in terms of clear traceability to the requirements and expected results
4.1.4 Translate test cases into a well-structured test procedure specification at a level of detail relevant to the knowledge of the testers
4.2 Categories of Test Design Techniques
- black-box (specification-based)
- white-box (structure-based)
- experience-based
4.2.1 Recall reasons that both specification-based (black-box) and structure-based (white-box) test design techniques are useful and list the common techniques for each
- white-box test design technique: Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.
- white-box testing: Testing based on an analysis of the internal structure of the component or system.
4.2.2 Explain the characteristics, commonalities, and differences between specification-based testing, structure-based testing and experience-based testing
- experience-based test design technique: Procedure to derive and/or select test cases based on the tester’s experience, knowledge and intuition.
4.3 Specification-based or Black-box Techniques
- Equivalence Partitioning
- Boundary Value Analysis
- Decision tables
- State transition diagrams / tables
- Use case testing
Generally, EP and BVA are applied together.
4.3.1 Write test cases from given software models using equivalence partitioning, boundary value analysis, decision tables and state transition diagrams/tables
4.3.2 Explain the main purpose of each of the four testing techniques, what level and type of testing could use the technique, and how coverage may be measured
4.3.3 Explain the concept of use case testing and its benefits
4.3.1 Equivalence Partitioning
- equivalence partition: A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.
e.g. age must be in the range 20 to 50
Partitions (see the sketch below):
- age < 20 (invalid)
- 20 <= age <= 50 (valid)
- age > 50 (invalid)
Note: check whether the question asks for valid partitions only, or for both valid and invalid partitions.
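A minimal EP sketch in Python (the function is_valid_age and the test names are invented for illustration), with one representative value per partition:

```python
# Hypothetical function under test: accepts ages in the valid range 20..50.
def is_valid_age(age):
    return 20 <= age <= 50

# Equivalence partitioning: one representative value per partition.
def test_invalid_partition_below_range():
    assert is_valid_age(15) is False   # any value from the 'age < 20' partition

def test_valid_partition_within_range():
    assert is_valid_age(35) is True    # any value from '20 <= age <= 50'

def test_invalid_partition_above_range():
    assert is_valid_age(60) is False   # any value from the 'age > 50' partition
```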
4.3.2 Boundary Value Analysis
- boundary value: An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.
- boundary value analysis: A black box test design technique in which test cases are designed based on boundary values. See also boundary value.
BVA is an extension of EP.
e.g. age must be in the range 20 to 50
Boundary values (see the sketch below):
- lower boundary: 19 (invalid), 20 (valid)
- upper boundary: 50 (valid), 51 (invalid)
Note: check whether the question asks for valid boundary values only, or for both valid and invalid boundary values.
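A minimal BVA sketch in Python, reusing the hypothetical is_valid_age from the EP sketch above; each edge of the valid partition is tested together with its nearest neighbour:

```python
import pytest

# Hypothetical function under test (same as in the EP sketch): valid range 20..50.
def is_valid_age(age):
    return 20 <= age <= 50

# Boundary value analysis: the boundary itself plus the value just outside it.
@pytest.mark.parametrize("age, expected", [
    (19, False),  # just below the lower boundary (invalid)
    (20, True),   # lower boundary (valid)
    (50, True),   # upper boundary (valid)
    (51, False),  # just above the upper boundary (invalid)
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) is expected
```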
4.3.3 Decision Table Testing
- decision table: A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.
- decision table testing: A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. See also decision table.
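A sketch of decision table testing in Python (the discount rule and all names are invented); each column (rule) of the table becomes one test case:

```python
import pytest

# Hypothetical decision table:
#                  Rule 1  Rule 2  Rule 3  Rule 4
#  is_member         T       T       F       F
#  order_over_100    T       F       T       F
#  discount (%)      20      10      5       0
def discount(is_member, order_over_100):
    if is_member and order_over_100:
        return 20
    if is_member:
        return 10
    if order_over_100:
        return 5
    return 0

# Decision table testing: one test case per rule (column).
@pytest.mark.parametrize("is_member, over_100, expected", [
    (True,  True,  20),  # Rule 1
    (True,  False, 10),  # Rule 2
    (False, True,   5),  # Rule 3
    (False, False,  0),  # Rule 4
])
def test_discount_rules(is_member, over_100, expected):
    assert discount(is_member, over_100) == expected
```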
4.3.4 State Transition Testing
State diagrams are made up of:
- state
- transition
- event
- action

- state transition testing: A black box test design technique in which test cases are designed to execute valid and invalid state transitions. See also N-switch testing.
- N-switch testing: A form of state transition testing in which test cases are designed to execute all valid sequences of N+1 transitions. See also state transition testing.
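A sketch of state transition testing in Python (the door model is invented): the valid transitions follow the state diagram, and one invalid transition checks how an unexpected event is handled:

```python
# Hypothetical two-state machine: a door with states "closed" and "open".
TRANSITIONS = {
    ("closed", "open_door"):  "open",
    ("open",   "close_door"): "closed",
}

def next_state(state, event):
    # Design assumption: an invalid transition leaves the state unchanged.
    return TRANSITIONS.get((state, event), state)

# Valid transitions: every arrow in the diagram is exercised at least once.
def test_open_a_closed_door():
    assert next_state("closed", "open_door") == "open"

def test_close_an_open_door():
    assert next_state("open", "close_door") == "closed"

# Invalid transition: an event the current state does not accept.
def test_close_an_already_closed_door():
    assert next_state("closed", "close_door") == "closed"
```

Exercising every single transition as above gives 0-switch coverage; 1-switch coverage (N=1) would additionally exercise all valid sequences of two consecutive transitions.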
4.3.5 Use Case Testing
- use case testing: A black box test design technique in which test cases are designed to execute scenarios of use cases.
4.4 Structure-based or White-box Techniques
- Statement testing and coverage (weakest)
- Branch/Decision testing and coverage (stronger than statement)
- All path coverage
- Other: condition testing, multiple condition testing
- How to calculate statement, branch/decision and path coverage for ISTQB exam purposes
4.4.1 Describe the concept and value of code coverage
- coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
- coverage analysis: Measurement of achieved coverage to a specified coverage item during test execution referring to predetermined criteria to determine whether additional testing is required and if so, which test cases are needed.
- coverage item: An entity or property used as a basis for test coverage, e.g. equivalence partitions or code statements.
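A worked example (numbers invented): if a component contains 10 executable statements and a test suite exercises 8 of them, statement coverage is 8 / 10 = 80%; coverage analysis would then determine whether additional test cases are needed to reach the remaining 2 statements.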
4.4.2 Explain the concepts of statement and decision coverage, and give reasons why these concepts can also be used at test levels other than component testing (e.g., on business procedures at system level)
- statement: An entity in a programming language, which is typically the smallest indivisible unit of execution.
- statement coverage: The percentage of executable statements that have been exercised by a test suite.
- statement testing: A white box test design technique in which test cases are designed to execute statements.
Decision coverage
- 100% decision coverage guarantees 100% statement coverage, but NOT vice versa (see the sketch below)
- decision testing: A white box test design technique in which test cases are designed to execute decision outcomes.
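A minimal sketch in Python (function and values invented) showing why statement coverage does not imply decision coverage:

```python
# An IF with no ELSE branch: the False outcome executes no extra statement.
def bonus(sales):
    result = 0
    if sales > 100:
        result = 10
    return result

# This single test executes every statement (100% statement coverage)
# but only the True outcome of the decision (50% decision coverage).
def test_true_outcome():
    assert bonus(150) == 10

# Adding the False outcome raises decision coverage to 100%.
def test_false_outcome():
    assert bonus(50) == 0
```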
4.4.3 Write test cases from given control flows using statement and decision test design techniques
4.4.4 Assess statement and decision coverage for completeness with respect to defined exit criteria.
4.5 Experience-based Techniques
- Exploratory testing
- Error guessing
4.5.1 Recall reasons for writing test cases based on intuition, experience and knowledge about common defects
4.5.2 Compare experience-based techniques with specification-based testing techniques
exploratory testing
- exploratory testing: An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.
error guessing
- error guessing: A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them
4.6 Choosing Test Techniques
- Risk and objectives
- Type of system and development cycle
- Regulatory standards
- Time and budget
- Knowledge and experience
4.6.1 Classify test design techniques according to their fitness to a given context, for the test basis, respective models and software characteristics
Factors:
- Regulatory standards
- Customer or contractual requirements
- Level of risk
- Type of risk
- Type of system
- Test objective
- Documentation available
- Models used
- Knowledge of the testers
- Time and budget
- Development life cycle
- Experience with the types of defects found