Testing Strategies

In this section, you will learn about two kinds of testing strategies: how the logic is tested (via black-box and white-box testing) and how the testing is conducted (by top-down and bottom-up testing).

Test Cases

Test cases are input data created to demonstrate that both components and the total system satisfy all design requirements. Created data, rather than 'live' production data, is used for the following reasons:

1. Specially developed test data can incorporate all operational situations. This implies that each processing path may be tested at the appropriate level of testing (e.g., unit, integration, etc.). 

2. Expected test case output can be predicted from created input. Predicting results is easier with created data because it is more orderly and usually involves fewer cases.

3. Test case input and output are expanded to form a model database. The database should statistically reflect the users' data in the amount and types of records processed while incorporating as many operational processing paths as possible. The database is then the basis for a regression test database in addition to its use for system testing. Production data is real, so finding statistically representative cases is more difficult than creating them.

Each test case should be developed to verify that specific design requirements, functional design, or code are satisfied. Test cases contain, in addition to test case input data, a forecast of test case output. Real or 'live' data should be used to reality test the modules after the tests using created data are successful. 

Each component of an application (e.g., module, subroutine, program, utility, etc.) must be tested with at least two test cases: one that works and one that fails. All modules should be deliberately failed at least once to verify their 'graceful degradation.' For instance, if a database update fails, the application should give the user a message, roll back the processing to leave the database as it was before the transaction, and continue processing. If the application were instead to abend, or worse, to continue processing with a corrupt database, the test would have uncovered an error.
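To make the two-case minimum concrete, the following sketch shows one possible shape of such a pair of tests in Python, using an in-memory SQLite table as a stand-in for the database; the table, routine, and message wording are invented for illustration:

    import sqlite3
    import unittest

    class GracefulDegradationTest(unittest.TestCase):
        """The two-case minimum: one test that works, one that fails."""

        def setUp(self):
            # An in-memory table stands in for the production customer file.
            self.db = sqlite3.connect(":memory:")
            self.db.execute(
                "CREATE TABLE customer (cust_id TEXT PRIMARY KEY, balance INTEGER)")
            self.db.execute("INSERT INTO customer VALUES ('C100', 50)")
            self.db.commit()

        def apply_payment(self, cust_id, amount):
            """Update that must roll back, report, and continue on failure."""
            try:
                with self.db:  # transaction: commits on success, rolls back on error
                    self.db.execute(
                        "UPDATE customer SET balance = balance + ? WHERE cust_id = ?",
                        (amount, cust_id))
                    (balance,) = self.db.execute(
                        "SELECT balance FROM customer WHERE cust_id = ?",
                        (cust_id,)).fetchone()
                    if balance < 0:  # integrity rule: no negative balances
                        raise ValueError("balance may not go negative")
            except ValueError as err:
                return f"update rejected: {err}"  # message to the user
            return "update applied"

        def test_update_that_works(self):
            self.assertEqual(self.apply_payment("C100", 25), "update applied")

        def test_update_that_fails_degrades_gracefully(self):
            # Deliberately fail the module once ...
            self.assertEqual(self.apply_payment("C100", -999),
                             "update rejected: balance may not go negative")
            # ... and verify the rollback left the database as it was.
            (balance,) = self.db.execute(
                "SELECT balance FROM customer WHERE cust_id = 'C100'").fetchone()
            self.assertEqual(balance, 50)

    if __name__ == "__main__":
        unittest.main()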

Test cases can be used to test multiple design requirements. For example, a requirement might be that all screens are directly accessible from all other screens; a second requirement might be that each screen contain a standard format; a third requirement might be that all menus be pull-down selections from a menu bar. These three requirements can all be easily verified by a test case for navigation that also attends to format and menu selection method. 
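A single test that walks a screen map can attend to all three requirements at once. A minimal Python sketch follows, assuming an invented screen registry (the screen names and attribute fields are hypothetical):

    # Hypothetical screen registry: each screen lists its format attributes
    # and the screens directly reachable from it.
    SCREENS = {
        "main_menu": {"standard_format": True, "pull_down_menus": True,
                      "reachable": {"orders", "customers"}},
        "orders":    {"standard_format": True, "pull_down_menus": True,
                      "reachable": {"main_menu", "customers"}},
        "customers": {"standard_format": True, "pull_down_menus": True,
                      "reachable": {"main_menu", "orders"}},
    }

    def test_navigation_format_and_menus(screens=SCREENS):
        for name, screen in screens.items():
            # Requirement 1: every screen is directly reachable from every other.
            assert screen["reachable"] == set(screens) - {name}, name
            # Requirement 2: every screen follows the standard format.
            assert screen["standard_format"], name
            # Requirement 3: all menus are pull-down selections from a menu bar.
            assert screen["pull_down_menus"], name

    test_navigation_format_and_menus()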

The development of test case input may be facilitated by the use of test data generators such as IEBDG (an IBM utility) or the test data generators within some CASE tools. The analysis and verification of processing may be facilitated by the use of language-specific or environment-specific testing supports (see Figure 17-12). These supports are discussed more completely in the section on automated supports.
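A rough Python analogue of what such a generator produces is sketched below; IEBDG itself is a batch utility driven by control statements that describe field patterns, and this sketch only imitates that idea of patterned, reproducible records, with invented field names:

    import random
    import string

    def generate_records(n, seed=17):
        """Yield patterned, reproducible test records (fields are invented)."""
        rng = random.Random(seed)  # fixed seed makes the output predictable
        for seq in range(1, n + 1):
            yield {
                "cust_id": f"C{seq:05d}",  # ascending key pattern
                "name": "".join(rng.choices(string.ascii_uppercase, k=8)),
                "credit_limit": rng.choice([0, 500, 1000, 5000]),  # partition values
                "status": "AB"[seq % 2],  # alternating pattern
            }

    for record in generate_records(3):
        print(record)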


                COBOL Language Supports:
                Display
                Exhibit
                Ready Trace
                Interactive Trace
                Snap Dump
                Focus Language Supports:
                Variable Display
                Transaction Counts
                Online Error Messages
                Online Help 

FIGURE 17-12 Examples of Language Testing Supports


To ensure that test cases are as comprehensive as possible, a methodical approach to the identification of logic paths or system components is indicated. Matrices, which relate system operation to the functional requirements of the system, are used for this purpose (a sketch of this matrix bookkeeping follows the list below). For example, the matrix approach may be used in

  • unit testing to identify the logic paths, logic conditions, data partitions or data boundaries to be tested based on the program specification. 
  • integration testing to identify the relationships and data requirements among interacting modules. 
  • system testing to identify the system and user requirements from functional requirements and acceptance criteria.
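A minimal sketch of the matrix bookkeeping, in Python with invented requirement and test-case names, records which test cases cover which requirements and flags any requirement left uncovered:

    # Invented requirement and test-case names, for illustration only.
    REQUIREMENTS = ["good cust-id", "bad cust-id", "missing id",
                    "good credit", "bad credit"]
    TEST_CASES = {
        "TC1": {"good cust-id", "good credit"},
        "TC2": {"bad cust-id"},
        "TC3": {"missing id"},
        "TC4": {"good cust-id", "bad credit"},
    }

    def coverage_matrix(requirements, test_cases):
        """Relate each requirement to the test cases that exercise it."""
        matrix = {req: [tc for tc, covers in test_cases.items() if req in covers]
                  for req in requirements}
        gaps = [req for req, tcs in matrix.items() if not tcs]
        return matrix, gaps

    matrix, gaps = coverage_matrix(REQUIREMENTS, TEST_CASES)
    for req, tcs in matrix.items():
        print(f"{req:14s} {' '.join(tcs) if tcs else '-- NOT COVERED --'}")
    assert not gaps, f"requirements without a test case: {gaps}"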

An example of the matrix approach for an integration test is illustrated in Figure 17-13. The example shows a matrix of program requirements to be met by a suite of modules for Read Customer File processing. The test verifies that each module functions independently, that communications between the modules (i.e., the message format, timing, and content) are correct, and that all input and output are processed correctly and within any constraints.

The functional requirements of the Read Customer File module are related to test cases in the matrix in Figure 17-13. The 11 requirements can be fully tested in at most seven test cases for the four functions.

                                    1.   2.   3.   4.

Good Cust-ID                        x    x    x    x
Bad Cust-ID                         x         x    x
Missing ID                          x         x    x
Retrieve by Name (Good)             x              x
Retrieve by Name (Bad)              x              x
Good Credit                              x         x
Bad Credit                               x         x
Good Data                                     x    x
Bad Data                                      x    x
Call from GetValid Customer (Good)  x    x         x
Call from GetValid Customer (Bad)   x    x    x    x

Legend:

1. Read Customer

2. Check Credit

3. Create Customer

4. Display Customer

FIGURE 17-13 Read Customer File Requirements and Test Cases


Matching the Test Level to the Strategy 

The goal of the testers is to find a balance between strategies that allows them to prove their application works while minimizing the human and computer resources consumed by the testing process. No one testing strategy is sufficient to test an application; to use only one is dangerous. If only white-box testing is used, testing will consume many human and computer resources and may not identify data-sensitive conditions or major logic flaws that transcend individual modules (see Table 17-1). If only black-box testing is used, specific logic problems may remain uncovered even when all specifications are tested; type 2 errors are difficult to uncover. Top-down testing by itself takes somewhat longer than a combined top-down, bottom-up approach. Bottom-up testing by itself does not find strategic errors until too late in the process to fix them without major delays.

In reality, we frequently combine all four strategies in testing an application. White-box testing is used most often for low-level tests: module, routine, subroutine, and program testing. Black-box testing is used most often for high-level tests: integration and system-level testing. White-box tests find specific logic errors in code, while black-box tests find errors in the implementation of the functional business specifications. Similarly, top-down tests are conducted for the application with whole tested modules plugged into the control structure as they are ready, that is, after bottom-up development. Once modules are unit tested, they can be integration tested and, sometimes, even system tested with the same test cases.
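As a small illustration of combining the box strategies in a single unit test, the Python sketch below applies black-box boundary value analysis to an invented edit-validate routine, then adds a white-box check that every decision outcome was exercised; the routine and its limits are hypothetical:

    def classify_order(quantity):
        """Invented edit-validate routine: 1-99 is valid, 100 and up needs approval."""
        if quantity < 1:
            return "rejected"
        if quantity < 100:
            return "accepted"
        return "needs approval"

    # Black-box boundary value analysis: probe each partition boundary.
    boundary_cases = {0: "rejected", 1: "accepted", 99: "accepted",
                      100: "needs approval"}
    for value, expected in boundary_cases.items():
        assert classify_order(value) == expected, value

    # White-box decision logic: confirm every outcome (branch) was reached.
    outcomes = {classify_order(v) for v in boundary_cases}
    assert outcomes == {"rejected", "accepted", "needs approval"}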

Table 17-2 summarizes the uses of the box and live-data testing strategies for each level of test. Frequently, black- and white-box techniques are combined at the unit level to uncover both data and logic errors. Black-box testing predominates as the level of test becomes more inclusive. Testing with created data at all levels can be supplemented by testing with live data. Operational, live-data tests ensure that the application can work in the real environment. Next, we examine the ABC rental application to design a strategy and each level of test.

TABLE 17-1 Test Strategy Objectives and Problems


Test Strategy Method Goal Shortcomings

White-Box Logic Prove processing. Functional flaws, data-sensitive conditions, and errors across modules are all difficult to test with white-box methods.
Black-Box Data Prove results. Type 2 errors and logic problems are difficult to find.
Top-Down Incremental Exercise critical code extensively to improve confidence in reliability. Scaffolding takes time and may be discarded. Constant change may introduce new errors in every test.
Bottom-Up All or nothing Perfect parts. If parts work, the whole should work. Functional flaws are found late and cause delays. Errors across modules may be difficult to trace and find.



TABLE 17-2 Test Level and Test Strategy

Level General Strategy Specific Strategy Comments on Use

Unit Black-Box Equivalence Partitioning Equivalence is difficult to estimate.
Boundary Value Analysis Should always be used in edit-validate modules.
Cause-Effect Graphing A formal method of boundary analysis that includes tests of compound logic conditions. Can be superimposed on already available graphics, such as state-transition diagrams or PDFDs.
Error Guessing Not a great strategy, but can be useful in anticipating problems.
Math Proof, Cleanroom Logic and/or mathematical proof The best strategies for life-sustaining, embedded, reusable, or other critical modules, but beyond most business SE skills.
White-Box Statement Logic Exhaustive tests of individual statements. Not desirable unless life-sustaining or threatening consequences are possible, or for a reusable module. Useful for 'guessed' error testing that is specific to the operational environment.
Decision Logic Test A good alternative to statement logic. May be too detailed for many programs.
Condition Logic A good alternative providing all conditions can be documented.
Multiple Condition Logic Desired alternative for program testing when human resources can be expended.
Live-Data Reality Test Can be useful for timing, performance, and other reality testing after other unit tests are successful.
Integration Black-Box Equivalence Partitioning Useful for partitioning by module.
Boundary Value Analysis Useful for partitioning by module.
Cause-Effect Graphing Useful for application interfaces and partitioning by module.
Error Guessing Not the best strategy at this level.




Integration Live-Data Reality Test Useful for interface and black-box tests after other integration tests are successful.
System/QA-Application Functional Requirements Test Black-Box Equivalence Partitioning Most productive approach to system function testing.
Boundary Value Analysis Too detailed to be required at this level. May be used to test correct file usage, checkpoint/restart, or other data-related error recovery.
Cause-Effect Graphing Can be useful for intermodule testing and when combined with equivalence partitioning.
White-Box Statement Logic Not a useful test strategy.
Decision Logic Test May be used for critical logic.
Condition Logic May be used for critical logic.
Multiple Condition Logic May be used for critical logic.
System/QA-Human Interface Black-Box Equivalence Partitioning Useful at the screen level for the associated process and for screen navigation.
Boundary Value Analysis Useful at the screen level for the associated process and screen navigation. Useful for QA testing.




System/QA-Human Interface White-Box Condition Logic May be used for critical logic.
Multiple Condition Logic May be used for critical logic.
System/QA-Constraints Black-Box Equivalence Partitioning May be useful at the execute unit level.
Boundary Value Analysis Should not be required at this level but could be used.
Cause-Effect Graphing Might be useful for defining how to measure constraint compliance.
White-Box Multiple Condition Logic Could be used but generally is too detailed at this level of test.
Live-Data Reality Test Useful for black-box type tests of constraints after created data tests are successful.
System/QA-Peak Requirements White-Box Multiple Condition Logic May be used for critical logic, but generally too detailed for this level of testing.
Live-Data Reality Test Most useful for peak testing.