
Selecting and justifying test data

Introduction
We have seen that when you translate a program, the translator looks for syntax errors. If it finds any, the error diagnostics tools try to help by displaying appropriate error messages and indicating, or attempting to indicate, where the problem or problems are. Even when a program is successfully translated into object code, however, it may still contain errors known as 'logical errors'. A logical error is one that causes the program to give an incorrect answer. Logical errors can only be tracked down by appropriate testing, using a carefully thought-through test plan.
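To see the difference, consider this short Python function (an illustration of ours, not part of the original program). It translates and runs without a single error message, yet it contains a logical error that only testing will catch:

def average(marks):
    total = 0
    for mark in marks:
        total = total + mark
    return total / (len(marks) + 1)   # logical error: should divide by len(marks)

print(average([10, 20, 30]))   # prints 15.0, but the correct average is 20.0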

Types of testing that should be done
When drawing up a test plan for some code, we should consider testing several different sorts of data. We should test:

    • some valid data
    • some invalid data
    • some borderline data
    • some extreme data

An example of a test plan
Consider the following pseudo-code, which prints a message depending on the exam mark entered:

IF (Result >= 0) AND (Result < 100) THEN
     PRINT "Poor mark"
ELSE
     IF (Result < 200) THEN
          PRINT "Could do better"
     ELSE
          IF (Result < 300) THEN
               PRINT "Good"
          ELSE
               PRINT "Well done"
          ENDIF
     ENDIF
ENDIF
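
If you want to carry out the tests yourself, here is one possible Python translation of the pseudo-code (the function name get_message is our own choice). It deliberately keeps the logic, and therefore the weaknesses, exactly as written:

def get_message(result):
    # A direct translation of the pseudo-code above, flaws and all.
    if result >= 0 and result < 100:
        return "Poor mark"
    else:
        if result < 200:
            return "Could do better"
        else:
            if result < 300:
                return "Good"
            else:
                return "Well done"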

We should test:

    • some valid data e.g. 50, 250
    • some invalid data e.g. -30
    • some borderline data e.g. 99, 100, 101
    • some extreme data e.g. 30000

Producing a test plan
Once we have identified the sorts of tests needed, we should create a test plan. This is usually set out as a table with the following headings:

Test no | Type       | Data to be used | Expected result | Actual result | Checked? | Comment
--------+------------+-----------------+-----------------+---------------+----------+--------
1       | Valid      | 50              | Poor mark       |               |          |
2       | Valid      | 250             | Good            |               |          |
3       | Invalid    | -30             | ??              |               |          |
4       | Borderline | 99              | Poor mark       |               |          |
5       | Borderline | 100             | Could do better |               |          |
6       | Borderline | 101             | Could do better |               |          |
7       | Extreme    | 30000           | Well done ??    |               |          |
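In code, the same plan can be held as a list of records ready to run automatically. The field names below are our own, mirroring the table headings:

test_plan = [
    {"test_no": 1, "type": "Valid",      "data": 50,    "expected": "Poor mark"},
    {"test_no": 2, "type": "Valid",      "data": 250,   "expected": "Good"},
    {"test_no": 3, "type": "Invalid",    "data": -30,   "expected": None},   # the ?? in the table
    {"test_no": 4, "type": "Borderline", "data": 99,    "expected": "Poor mark"},
    {"test_no": 5, "type": "Borderline", "data": 100,   "expected": "Could do better"},
    {"test_no": 6, "type": "Borderline", "data": 101,   "expected": "Could do better"},
    {"test_no": 7, "type": "Extreme",    "data": 30000, "expected": "Well done"},   # the 'Well done ??' row
]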

We can see that simply by thinking carefully about our test plan, we are already identifying problems, and we haven't even run any tests yet. It is clear, for example, that our code doesn't handle the possibility of someone accidentally entering a negative test result. Nor does it handle someone entering an unreasonably high mark such as 30000 (the test might be out of 400, for example).

Checking actual results against expected results
Once we have created the test plan, we carry it out and record the actual results. The final, and important, part of the process is to check the actual results against the expected results and to make sure that written evidence of this check is recorded. It is often best simply to tick off each test result on the plan as it is checked, commenting on those that need to be looked at in more detail.
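
Carrying the plan out can itself be automated. The sketch below assumes the get_message function and the test_plan list from earlier; it records each actual result, compares it with the expected result, and flags any test that needs a closer look:

for test in test_plan:
    # Run the test and compare the actual result with the expected one.
    actual = get_message(test["data"])
    if actual == test["expected"]:
        checked = "ticked"
    else:
        checked = "NEEDS A CLOSER LOOK"
    print("Test", test["test_no"], "(" + test["type"] + "):",
          "data =", test["data"],
          "| expected =", test["expected"],
          "| actual =", actual,
          "| checked:", checked)

Run against the flawed code above, this immediately flags test 3: the invalid mark -30 produces "Could do better" rather than any sort of error message.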
