Test Plan for ABC Video Order Processing

This section shows how ABC Video designs its testing to validate that the specification, design, and code mesh with the functional and nonfunctional requirements of the system.

Test Strategy

Unit Testing

Guidelines for Developing a Unit Test 

Unit tests verify that a specific program, module, or routine (all referred to as 'module' in the remaining discussion) fulfills its requirements as stated in related program and design specifications. The two primary goals of unit testing are conformance to specifications and processing accuracy. 

For conformance, unit tests determine the extent to which processing logic satisfies the functions assigned to the module. The logical and operational requirements of each module are taken from the program specifications. Test cases are designed to verify that the module meets those requirements. The test is designed from the specification, not the code.

Processing accuracy has three components: input, process, and output. First, each module must process all allowable types of input data in a stable, predictable, and accurate manner. Second, all possible errors should be found and treated according to the specifications. Third, all output should be consistent with results predicted from the specification. Outputs might include hard copy, terminal displays, electronic transmissions, or file contents; all are tested. 
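
To make the three accuracy components concrete, here is a minimal sketch in Python of a unit test for a hypothetical rental-fee module; the function name and the $2-per-day fee rule are illustrative assumptions, not part of ABC's actual specifications.

    import unittest

    def compute_late_fee(days_late):
        """Hypothetical module under test: $2.00 per day late, never negative."""
        if days_late < 0:
            raise ValueError("days_late cannot be negative")
        return 2.00 * days_late

    class LateFeeUnitTest(unittest.TestCase):
        def test_input_accuracy(self):
            # First component: all allowable inputs processed predictably.
            for days in (0, 1, 30):
                self.assertGreaterEqual(compute_late_fee(days), 0.0)

        def test_error_treatment(self):
            # Second component: errors treated according to the specification.
            with self.assertRaises(ValueError):
                compute_late_fee(-1)

        def test_output_prediction(self):
            # Third component: output matches the result predicted from the spec.
            self.assertEqual(compute_late_fee(3), 6.00)

    if __name__ == "__main__":
        unittest.main()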

There is no one strategy for unit testing. For input/output-bound applications, black-box strategies are normally used. For process logic, either or both black-box and white-box strategies can be used. In general, the more critical the process is to the organization, or the more damaging the possible errors, the more detailed and extensive the white-box testing should be. For example, organizationally critical processes might be defined as any process that affects the financial books of the organization, meets legal requirements, or deals with client relationships. Examples of application damage might include life-threatening situations such as in nuclear power plant support systems, life support systems in hospitals, or test systems for car or plane parts.

Since most business applications combine approaches, an example combining black- and white-box strategies is described here. Using a white-box approach, each program specification is analyzed to identify the distinct logic paths which serve as the basis for unit test design. This analysis is simplified by the use of tables, lists, matrices, diagrams, or decision tables to document the logic paths of the program. Then, the logic paths most critical in performing the functions are selected for white-box testing. Next, to verify that all logic paths not white-box tested are functioning at an acceptable level of accuracy, black-box testing of input and output is designed. This is a common approach that we will apply to ABC Video. 

When top-down unit testing is used, control structure logic paths are tested first. When each path is successfully tested, combinations of paths may be tested in increasingly complex relationships until all possible processing combinations are satisfactorily tested. This process of simple-to-complex testing ensures that all logic paths in a module are performing both individually and collectively as intended.

Similarly, unit testing of multiuser applications also uses the simple-to-complex approach. Each program is tested first for single users. Then multiuser tests of the single functions follow. Finally, multiuser tests of multiple functions are performed.

Unit tests of relatively large, complex programs may be facilitated by reducing them to smaller, more manageable equivalent components, such as:

  • transaction type, e.g., Debit/Credit or Edit/Update/Report/Error
  • functional component activity, e.g., Preparing, Sending, Receiving, Processing
  • decision option, e.g., If true ... If false ...

Once this reduction is accomplished, both black-box and white-box approaches are applied to the work of actually defining test cases and their corresponding predicted outputs. The black-box approach should provide both good and bad data inputs and examine the outputs for correctness of processing. In addition, at least one white-box strategy should be used to test specific critical logic of the tested item.
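
As a sketch of how the two approaches combine on one reduced component, the following Python fragment tests a single decision option (if true ... if false ...); the is_rentable rule is a hypothetical stand-in for a critical logic path taken from a specification.

    def is_rentable(copy_status):
        # Decision option under test: both branches must be exercised.
        if copy_status == "ON-SHELF":
            return True      # true path
        return False         # false path

    # White-box cases: one test per logic path identified from the design.
    assert is_rentable("ON-SHELF") is True
    assert is_rentable("RENTED") is False

    # Black-box cases: good and bad data judged only by the specified output.
    for bad_input in ("", "on-shelf", "DESTROYED"):
        assert is_rentable(bad_input) is False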

Test cases should be both exhaustive and minimal. This means that test cases should test every condition or data domain possible, but that no extra tests are included. For example, the most common errors in data inputs involve edit/validate criteria, so boundary conditions of fields should be tested. Using equivalence partitioning of the sets of allowable values for each field, we develop the tests for a date formatted YYYYMMDD (that is, 4-digit year, 2-digit month, and 2-digit day). A good year test covers last year, this year, next year, change of century, all zeros, and all nines. A good month test covers zero, 1, 2, 4 (representative of months with 30 days), 12, and 13. Only 1 and 12 are required for the boundary month test, but the other months are required to test day boundaries. A good day test covers zero, 1, 28, 29, 30, 31, and 32, depending on the final day of each month. Only one test each for zero and one is required, based on the assumption that if one month processes correctly, all months will. Leap years and nonleap years should also be tested. An example of test cases for these date criteria is presented below. Figure 17-14 shows the equivalent sets of data for each domain. Table 17-4 lists exhaustive test cases for each set in the figure. Table 17-5 lists the reduced set after extra tests are removed.


FIGURE 17-14 Unit Test Equivalent Sets for a Date


TABLE 17-4 Exhaustive Set of Unit Test Cases for a Date

Test Case   YYYY    MM    DD    Comments

1           aaaa    aa    aa    Tests actions against garbage input
2           1992    0     0     Tests all incorrect lower bounds
3           2010    13    32    Tests all incorrect upper bounds
4           1993    1     31    Tests correct upper day bound
4a          1994    12    31    Not required; optional test of upper month/day bound. Assumption is that if month = 1 works, all valid, equivalent months will work.
5           1995    1     1     Tests correct lower day bound
6           1996    12    1     Not required; optional test of upper month/lower day bound. Assumption is that if month = 1 works, all valid, equivalent months will work.
7           1997    1     32    Tests upper day bound error
8           1998    12    32    Not required; optional test of upper month/upper day bound error. Assumption is that if month = 1 works, all valid, equivalent months will work.
9           1999    12    0     Retests lower bound day error with otherwise valid data; not strictly necessary but could be used
10          2000    2     1     Tests lower bound; not strictly necessary
11          2000    2     29    Tests leap year upper bound
12          2000    2     30    Tests leap year upper bound error
13          1999    2     28    Tests nonleap year upper bound
14          1999    2     29    Tests nonleap year upper bound error
15          1999    2     0     Tests lower bound error; not strictly necessary
16          2001    4     30    Tests upper bound
17          2001    4     31    Tests upper bound error
18          2002    4     1     Tests lower bound; not strictly necessary
19          2003    4     0     Tests lower bound error; not strictly necessary


TABLE 17-5 Minimal Set of Unit Test Cases for a Date

Test Case   YYYY    MM    DD    Comments

1           aaaa    aa    aa    Tests actions against garbage input
2           1992    0     0     Tests all incorrect lower bounds
3           2010    13    32    Tests all incorrect upper bounds
4           1993    1     31    Tests correct upper day bound
5           1995    1     1     Tests correct lower day bound
6 (7)       1997    1     32    Tests upper day bound error
7 (11)      2000    2     29    Tests leap year upper bound
8 (12)      2000    2     30    Tests leap year upper bound error
9 (13)      1999    2     28    Tests nonleap year upper bound
10 (14)     1999    2     29    Tests nonleap year upper bound error
11 (16)     2001    4     30    Tests upper bound
12 (17)     2001    4     31    Tests upper bound error

(Parenthesized numbers give the corresponding test cases in Table 17-4.)
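
The minimal set in Table 17-5 translates directly into an executable test. The following Python sketch runs those cases against a hypothetical YYYYMMDD validator; the validator itself is an assumed implementation, and only the case data and expected outcomes come from the table.

    import calendar

    def valid_date(yyyymmdd):
        if len(yyyymmdd) != 8 or not yyyymmdd.isdigit():
            return False                                  # garbage input
        y, m, d = int(yyyymmdd[:4]), int(yyyymmdd[4:6]), int(yyyymmdd[6:])
        if not 1 <= m <= 12:
            return False                                  # month bounds
        return 1 <= d <= calendar.monthrange(y, m)[1]     # day bounds, leap-aware

    # (input, expected result) pairs taken from Table 17-5.
    CASES = [
        ("aaaaaaaa", False),   # 1: garbage input
        ("19920000", False),   # 2: all incorrect lower bounds
        ("20101332", False),   # 3: all incorrect upper bounds
        ("19930131", True),    # 4: correct upper day bound
        ("19950101", True),    # 5: correct lower day bound
        ("19970132", False),   # 6: upper day bound error
        ("20000229", True),    # 7: leap year upper bound
        ("20000230", False),   # 8: leap year upper bound error
        ("19990228", True),    # 9: nonleap year upper bound
        ("19990229", False),   # 10: nonleap year upper bound error
        ("20010430", True),    # 11: upper bound, 30-day month
        ("20010431", False),   # 12: upper bound error, 30-day month
    ]

    for value, expected in CASES:
        assert valid_date(value) is expected, value
    print("all minimal date cases pass")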


Other frequently executed tests are for character, field, batch, and control field checks. Table 17-6 lists a sampling of errors found during unit tests. Character checks include tests for blanks, signs, length, and data types (e.g., numeric, alpha, or other). Field checks include sequence, reasonableness, consistency, range of values, or specific contents. Control fields are most common in batch applications and are used to verify that the file being used is the correct one and that all records have been processed. Usually the control field includes the last execution date and file name, which are both checked for accuracy. Record counts are necessary only when not using a declarative language.
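
As an illustration of these checks, the following Python sketch applies character and control-field tests to a hypothetical batch file header; the field names and formats are assumptions, not ABC's actual layout.

    from datetime import date

    def check_header(header, expected_file):
        """Return a list of edit/validate errors; an empty list means clean."""
        errors = []
        name = header.get("file_name", "")          # hypothetical field names
        run_date = header.get("last_run_date", "")
        # Character checks: blanks, length, and data type.
        if not name.strip():
            errors.append("file name blank")
        if len(run_date) != 8 or not run_date.isdigit():
            errors.append("last run date not numeric YYYYMMDD")
        # Control-field checks: correct file and plausible last execution date.
        if name != expected_file:
            errors.append("wrong file mounted")
        elif run_date > date.today().strftime("%Y%m%d"):
            errors.append("last execution date in the future")
        return errors

    # Usage: an empty list confirms the correct file and a sane run date.
    print(check_header({"file_name": "OPENRENTAL", "last_run_date": "19990301"},
                       "OPENRENTAL"))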

Once all test cases are defined, tests are run and results are compared to the predictions. Any result that does not exactly match the prediction must be reconciled. The only possible choices are that the tested item is in error or the prediction is in error. If the tested item is in error, it is fixed and retested. Retests should follow the approach used in the first tests. If the prediction is in error, the prediction is researched and corrected so that specifications are accurate and documentation shows the correct predictions. 

Unit tests are conducted and reviewed by the author of the code item being tested, with final test results approved by the project test coordinator. 

How do you know when to stop unit testing? While there is no simple answer to this question, there are practical guidelines. When testing, each tester should keep track of the number of errors found (and resolved) in each test. The errors should be plotted by test shot to show the pattern. A typical module test curve is skewed left with a decreasing number of errors found in each test (see Figure 17-15). When the number of errors found approaches zero, or when the slope is negative and approaching zero, the module can be moved forward to the next level of testing. If the number of errors found stays constant or increases, you should seek help either in interpreting the specifications or in testing the program.
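
The stopping guideline can be automated by tracking errors found per test shot. A minimal sketch follows; the thresholds are illustrative assumptions rather than fixed rules from the text.

    def ready_to_promote(errors_per_shot):
        """True when the error curve is falling and approaching zero."""
        if len(errors_per_shot) < 2:
            return False                     # too little history to judge
        prev, last = errors_per_shot[-2], errors_per_shot[-1]
        return last <= prev and last <= 1    # negative slope, near zero (illustrative)

    print(ready_to_promote([9, 6, 3, 1]))    # True: move to the next test level
    print(ready_to_promote([4, 4, 5]))       # False: seek help with the specs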


ABC Video Unit Test 

Above, we said we would use a combination of black- and white-box testing for ABC unit tests. The application is being implemented using a SQL software package; therefore, all code is assumed to be in SQL. The control logic and non-SELECT code are subject to white-box tests, while the SELECT modules are subject to black-box tests.

TABLE 17-6 Sample Unit Test Errors

Edit/Validate
  • Transaction rejected when valid
  • Error accepted as valid
  • Incorrect validation criteria applied

Screen
  • Navigation faulty
  • Faulty screen layout
  • Spelling errors on screen
  • Inability to call screen

Data Integrity
  • Transaction processed when inconsistent with other information
  • Interfile matching not correct
  • File sequence checking not correct

File Processing
  • File, segment, relation, or field not correctly processed
  • Read/write data format error
  • Syntax incorrect but processed by interpreter

Report
  • Format not correct
  • Totals do not add/crossfoot
  • Wrong field(s) printed
  • Wrong heading, footing, or other cosmetic error
  • Data processing incorrect


In Chapter 10, we defined Rent/Return processing as an execute unit with many independent code units. Figure 17-16 shows partial SQL code from two Rent/Return modules. Notice that most of the code defines data and establishes screen addressability. As soon as two or three modules with such strikingly similar characteristics are built, the need to further consolidate the design to accommodate the implementation language should be obvious. With the current design, more code is spent on overhead tasks than on application tasks. The overhead code means that users will have long wait times while the system changes modules. The current design also means that debugging the individual modules would require considerable work to verify that the modules perform collectively as expected. Memory locations would need to be printed many times in such testing.


FIGURE 17-15 Unit Test Errors Found Over Test Shots


To restructure the code, we examine what all of the Rent/Return modules have in common: Open Rentals data. We can redefine the data in terms of Open Rentals, with a single user view used for all Rent/Return processing. This simplifies the data part of the processing but increases the vulnerability of the data to integrity problems. Problems might increase because the global view of data violates the principle of information hiding. The risk must be taken, however, to accommodate reasonable user response time.

The common format of the restructured SQL code is shown in Figure 17-17. In the restructured version, data is defined once at the beginning of Rent/Return processing. The cursor name is declared once and the data is retrieved into memory based on the data entered through the Get Request module. The remaining Rent/Return modules are called in sequence. The modules have a similar structure for handling memory addressing. The problems with many prints of memory are reduced because once the data is brought into memory, no more retrievals are necessary until updates take place at the end of the transaction. Processing is simplified by unifying the application's view of the data.

          ADD RETURN DATE (boldface code is redundant)
                    DCL   INPUT_VIDEO_ID     CHAR(8);
                    DCL   INPUT_COPY_ID      CHAR(2);
                    DCL   INPUT_CUST_ID      CHAR(9);
                    DCL   AMT_PAID           DECIMAL(4,2);
                    DCL   CUST_ID            CHAR(9);
                    ...
                       CONTINUE UNTIL ALL FIELDS USED ON THE SCREEN OR USED TO
                    CONTROL SCREEN PROCESSING ARE DECLARED
                    DCL   TOTAL_AMT_DUE      DECIMAL(5,2);
                    DCL   CHANGE             DECIMAL(4,2);
                    DCL   MORE_OPEN_RENTALS  BIT(1);
                    DCL   MORE_NEW_RENTALS   BIT(1);
                    EXEC SQL INCLUDE SQLCA;  /*COMMUNICATION AREA*/
                    EXEC SQL DECLARE CUSTOMER TABLE
                          (FIELD DEFINITIONS FOR CUSTOMER RELATION);
                    EXEC SQL DECLARE VIDEO TABLE
                          (FIELD DEFINITIONS FOR VIDEO RELATION);
                    EXEC SQL DECLARE COPY TABLE
                          (FIELD DEFINITIONS FOR COPY RELATION);
                    EXEC SQL DECLARE OPENRENTAL TABLE
                          (FIELD DEFINITIONS FOR OPENRENTAL RELATION);
                    EXEC SQL DECLARE SCREEN_CURSOR CURSOR FOR
                          SELECT * FROM OPENRENTAL
                                   WHERE VIDEOID = ORVIDEOID
                                   AND COPYID = ORCOPYID;
                    EXEC SQL OPEN SCREEN_CURSOR;
                    GOTOLABEL:
                    EXEC SQL FETCH SCREEN_CURSOR INTO TARGET
                          :CUSTID
                          :VIDEOID
                          :COPYID
                          :RENTALDATE;
                    IF SQLCODE = 100 GOTO GOTOEXIT;
                    EXEC SQL SET :RETURNDATE = TODAYS_DATE
                          WHERE CURRENT OF SCREEN_CURSOR;
                    EXEC SQL UPDATE OPENRENTAL
                          SET ORRETURNDATE = TODAYS_DATE
                          WHERE CURRENT OF SCREEN_CURSOR;
                    GOTO GOTOLABEL;
                    GOTOEXIT:
                    EXEC SQL CLOSE SCREEN_CURSOR;

FIGURE 17-16 Two Modules Sample Code


The restructuring now requires a change to the testing strategy for Rent/Return. A strictly top-down approach cannot work because the Rent/Return modules are no longer independent. Rather, a combined top-down and bottom-up approach is warranted. A sequential bottom-up approach is more effective for the functional Rent/Return processing. Top-down, black-box tests of the SELECT code are done before that code is embedded in the execute unit. Black-box testing is used for the SELECTs because SQL controls all data input and output. Complete SELECT statements are the test unit.

        DCL   INPUT_VIDEO_ID     CHAR(8);
        DCL   INPUT_COPY_ID      CHAR(2);
        DCL   INPUT_CUST_ID      CHAR(9);
        DCL   AMT_PAID           DECIMAL(4,2);
        DCL   CUST_ID            CHAR(9);
        ...
        continue until all fields used on the screen or used to control screen processing are
        declared ...
        DCL   TOTAL_AMT_DUE      DECIMAL(5,2);
        DCL   CHANGE             DECIMAL(4,2);
        DCL   MORE_OPEN_RENTALS  BIT(1);
        DCL   MORE_NEW_RENTALS   BIT(1);
        EXEC SQL INCLUDE SQLCA;  /*COMMUNICATION AREA*/
        EXEC SQL DECLARE RENTRETURN TABLE
              (field definitions for user view including all fields from customer, video, copy,
              open rental, and customer history relations);
        EXEC SQL DECLARE SCREEN_CURSOR CURSOR FOR
              SELECT * FROM RENTRETURN
              WHERE (:videoid = orvideoid AND :copyid = orcopyid)
              OR (:custid = orcustid);
        EXEC SQL OPEN SCREEN_CURSOR;
        EXEC SQL FETCH SCREEN_CURSOR INTO TARGET
              :request;
        If :request eq "C?" set :custid = :request
        else  set :videoid = :request
              set :copyid = :request;

              (At this point the memory contains the related relation data
              and the remaining rent/return processing can be done.)

        All the other modules are called and contain the following common format:
        GOTOLABEL:
        EXEC SQL FETCH SCREEN_CURSOR INTO TARGET
              :screen fields;

        IF SQLCODE = 0 next step;  (return code of zero means no errors)
        IF SQLCODE = 100 (not found condition) CREATE DATA or CALL END PROCESS;
        IF SQLCODE < 0 CALL ERROR PROCESS, ERROR-TYPE;
        Set screen variables (which displays new data)
        Prompt next action

        GOTO GOTOLABEL;
        GOTOEXIT:
        EXEC SQL CLOSE SCREEN_CURSOR;

FIGURE 17-17 Restructured SQL Code: Common Format
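
Since complete SELECT statements are the black-box test unit, each one can be tested against a table loaded with known rows before it is embedded in the execute unit. A sketch of that idea follows, using Python's sqlite3 as a stand-in for ABC's SQL package and a simplified OPENRENTAL layout; both are assumptions for illustration.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE openrental
                    (orcustid TEXT, orvideoid TEXT, orcopyid TEXT,
                     orrentaldate TEXT, orreturndate TEXT)""")
    conn.executemany("INSERT INTO openrental VALUES (?, ?, ?, ?, ?)", [
        ("111111111", "12345678", "01", "19990301", None),         # still open
        ("111111111", "87654321", "02", "19990215", "19990217"),   # returned
    ])

    # The test unit is the complete SELECT, as it will appear in the module.
    rows = conn.execute(
        "SELECT * FROM openrental WHERE orvideoid = ? AND orcopyid = ?",
        ("12345678", "01")).fetchall()

    # Predicted output: exactly one open rental for that video and copy.
    assert rows == [("111111111", "12345678", "01", "19990301", None)]
    print("SELECT black-box test passed")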


Test                                           Type

 1. Test SQL SELECT statement                  Black Box
 2. Verify SQL cursor and data addressability  White Box
 3. Test Get Request                           White Box
 4. Test Get Valid Customer, Get Open Rentals  Black Box for embedded SELECT statement, White Box for other logic
 5. Test Get Valid Video                       White Box for logic, Black Box for embedded SELECT statement
 6. Test Process Payment and Make Change       White Box
 7. Test Update Open Rental                    Black Box for Update, White Box for other logic
 8. Test Create Open Rental                    Black Box for Update, White Box for other logic
 9. Test Update Item                           Black Box for Update, White Box for other logic
10. Test Update/Create Customer History        Black Box for Update, White Box for other logic
11. Test Print Receipt                         Black Box for Update, White Box for other logic

FIGURE 17-18 Unit Test Strategy


The screen interaction and module logic can be tested as either white box or black box. At the unit level, white-box testing will be used to test intermodule control logic. A combination of white-box and black-box testing should be used to test intramodule control and process logic. 

The strategy for unit testing, then, is to test data retrievals first; to verify screen processing, including SQL cursor and data addressability, second; and to test all remaining code sequentially, last (see Figure 17-18).

Because all processing in the ABC application is on-line, an interactive dialogue test script is developed. For all file interactions, the data retrieved and written is predicted, as appropriate. The individual unit test scripts begin processing at the execute unit boundary, which means that menus are not necessarily tested. A test script contains three columns of information. The first column shows the computer messages or prompts displayed on the screen. The second column shows data entered by the user. The third column shows comments or explanations of the interactions taking place.
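
One way to keep such a script executable is to hold the three columns as data that a test harness can replay. The prompts and entries below are illustrative placeholders, not the actual Figure 17-19 script.

    SCRIPT = [
        # (screen prompt,             user entry,    comment)
        ("Enter customer/video id:",  "X99",         "invalid id; expect error"),
        ("Invalid id, re-enter:",     "123456789",   "valid customer id"),
        ("Enter video id:",           "12345678",    "valid video for rental"),
    ]

    for prompt, entry, comment in SCRIPT:
        # A real harness would compare the screen to `prompt`, then type `entry`;
        # here the script is simply echoed in its three-column form.
        print(f"{prompt:<28}{entry:<14}{comment}")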

A partial test script for Rent/Return processing is shown in Figure 17-19. The example shows the script for a return-with-rental transaction. Notice that the test begins at the Rent/Return screen and that both erroneous and correct data are entered for each field. After all errors are detected and dispatched properly, only correct data is required. This script shows one of the four types of transactions. It shows only one return and one rental, however, and should be expanded in another transaction to do several rentals and several returns; the returns should include on-time and late videos and should not include all tapes checked out. This type of transaction represents the requisite variety needed to test returns with rentals. Of course, test scripts for the other three types of transactions should also be developed. This is left as an extra-credit activity.