Testing Glossary Version 3.2



The following glossary contains terms which are useful when developing assertion-based tests. Where appropriate, words in this glossary are aligned with the IEEE standard:

IEEE Standard for Information Technology -- Test Methods for Measuring Conformance to POSIX, IEEE Std 1003.3-1991 (ISBN 1-55937-104-8).

The Institute of Electrical and Electronics Engineers, Inc.
345 East 47th Street, New York, NY 10017-2394, USA.

For the vast majority of the words in this glossary, the meaning is consistent and understood throughout the information technology industry. However, some words do mean different things to different people. These words are marked with a dagger (†).

Within this glossary, words which are defined elsewhere in the text are italicised where a cross-reference is useful.

Comments on the glossary or suggestions of words for inclusion are welcome. Please send us your comments.


A

assertion
A statement of behaviour for an element to be tested. These are normally derived from text describing software to be tested (the specification).

Assertions take one of the following forms:

  1. Bold assertion: in the form

    The Ford motorcar is black.

  2. Cause/effect behaviour using the form

    When <cause>, then <effect>.

    such as

    When <cause> occurs, then <effect> results.

  3. Conditional assertion, in the form

    If <condition>: when <cause>, then <effect>.

    or

    If <condition>: <bold assertion>

    such as

    If <optional feature is supported>: when <cause> occurs, then <effect> results.

assertion classification, assertion class
Assertions are classified according to two factors:
  1. Is it possible and practical to test the behaviour? [T=testable, U=untested]
  2. Is the behaviour being tested required to be present, or is the test conditional on functionality being present? [R=required, C=conditional]

Assertions are thus classified as follows (a code sketch follows the table):

TR
Testable assertion of required feature (POSIX class A)

UR
Untested assertion of required feature (POSIX class B)

TC
Testable assertion of conditional feature (POSIX class C)

UC
Untested assertion of conditional feature (POSIX class D)
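
As an illustration only (the names below are not defined by POSIX or by this glossary), the two factors and the resulting classes map naturally onto a small sketch in Python:

  from enum import Enum

  class AssertionClass(Enum):
      """The four assertion classes and their POSIX letters."""
      TR = "A"  # testable, required
      UR = "B"  # untested, required
      TC = "C"  # testable, conditional
      UC = "D"  # untested, conditional

  def classify(testable, required):
      # Derive the class directly from the two factors above.
      if testable:
          return AssertionClass.TR if required else AssertionClass.TC
      return AssertionClass.UR if required else AssertionClass.UC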

assertion id
This is a unique identifier associated with each assertion. It takes the form of letters describing the group of assertions being tested, followed by the number of the assertion within that group.

For example, tests for checking colours could have ids of the form

chkcol_0010.

Normally the initial numbering scheme used when writing assertions is based on an increment of 10 between assertions. This enables assertion writers to add new assertions without re-numbering or being forced to place assertions in inappropriate positions.
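
As a small illustration (the helper below is hypothetical, not part of any test tool), the increment-of-10 scheme might be generated like this:

  def assertion_ids(group, count, step=10):
      """Generate ids such as chkcol_0010, chkcol_0020, ...
      The gaps of 10 leave room to insert new assertions later
      without renumbering the existing ones."""
      return ["%s_%04d" % (group, n) for n in range(step, step * count + 1, step)]

  # assertion_ids("chkcol", 3) gives ['chkcol_0010', 'chkcol_0020', 'chkcol_0030']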

assertion number
The number part of the assertion id.

assertion test
A test for an assertion.

B

base assertion
An assertion which is testable (and which is to be tested). Also known as a testable assertion.

black box test
A test for functionality viewed from the exterior of the product under test, such as some user-accessible feature. Also known as verification. (As opposed to white box testing or validation during which testing is based on some understanding of the internals of the product under test).

bold assertion
An assertion describing unconditional behaviour. Consequently, this is written in the form of a sentence without conditions such as "the Ford motorcar is black."

boundary conditions
The conditions which occur on the limits of the operation of the product, such as the largest or most precise number, or the use of the product when environmental conditions are limited (such as running out of disk or memory). These are also known as corner case conditions.

bounds checking
This is the process of checking the boundary of a class. For example, if an application allowed the entry of a three digit positive whole number, all positive integers up to and including 999 would be expected to be acceptable. The bounds checking tests in this case would be 999 (expected to pass) and (assuming that it was possible to enter a four digit number) 1000 (expected to fail).

This is testing the boundary conditions.
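
A minimal sketch of these bounds checking tests in Python; accepts_three_digit_field is a hypothetical stand-in for the application's input validation:

  def accepts_three_digit_field(value):
      # Hypothetical stand-in for the application under test.
      return 0 <= value <= 999

  def test_upper_bound():
      assert accepts_three_digit_field(999)       # on the boundary: expected to pass
      assert not accepts_three_digit_field(1000)  # just beyond it: expected to fail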

bug
A software bug is the appearance of a defect (or a failure caused by a defect). Bugs are also known as errata, faults, features, claims and problem reports.

C

canon, canonical file
A file containing the output of a test run which is intended for comparison with the output of future test sessions.
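
As a sketch of the comparison, assuming the output is plain text, Python's standard difflib can report any differences from the canon:

  import difflib
  from pathlib import Path

  def compare_with_canon(output, canon_path):
      """Return a unified diff between this run's output and the canon.
      An empty list means the run matched the canonical file."""
      canon = Path(canon_path).read_text().splitlines()
      return list(difflib.unified_diff(canon, output.splitlines(),
                                       fromfile="canon", tofile="this run"))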

capture
The process of capturing a test session (see capture and replay tool). Also known as record.

capture and replay tool
A testing tool which enables test sessions to be recorded and then replayed. This has the following significant benefits:
  1. The test sessions might be replayed at a later date with the confidence that the events can be reproduced.
  2. The test sessions might be edited and then replayed repeatedly or with several test sessions running at the same time (thus simulating additional load).
  3. An enormous amount of effort is saved by re-using the test data without re-keying all the information.

Most capture and replay tools enable the test sessions to be edited, parameterised and generalised. Almost all of the tools have a compare facility to compare the expected results from a test run with those which actually occur.

Capture and replay tools are often used for regression testing and for testing the user interface associated with application software. See also replay.
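
The underlying mechanism can be sketched in a few lines. This toy recorder (all names are illustrative) stores (delay, event) pairs so a session can be replayed later, edited, or run several times to simulate load:

  import json
  import time

  class Recorder:
      def __init__(self):
          self.events = []
          self._last = time.monotonic()

      def capture(self, event):
          # Record the event together with the delay since the previous one.
          now = time.monotonic()
          self.events.append((now - self._last, event))
          self._last = now

      def save(self, path):
          with open(path, "w") as f:
              json.dump(self.events, f)

  def replay(path, send):
      # Replay a saved session through a caller-supplied send() callback.
      with open(path) as f:
          for delay, event in json.load(f):
              time.sleep(delay)
              send(event)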

clean up
This is the part of the testing process performed after an assertion test to ensure that the environment returns to a known state in preparation for subsequent assertion tests (or groups of assertion tests). Also known as tear down.

closed loop
When distributed tests are run on a single system, it is called closed loop testing. This is also known as loopback testing.

code coverage
The amount of code exercised during testing. The precise methods for code coverage analysis vary greatly, and include basic blocks, statements and decision paths. See coverage.

compare files
These are the files used in the technique whereby volumes of known good output are compared with the output of a more recent test campaign. This approach is generally considered discredited, since the known good files often contain defects and the only possible result is "there is (or is not) a difference in the files"; it does not report the cause of underlying bugs.

compliance
The stage of testing when a product under test meets all the requirements of a test and/or standard, but has not yet been formally shown to conform to the requirements of the specification (which often requires additional paperwork or user acceptance).

The following note is adapted from the IEEE's POSIX 1003.3-1991 standard:

The term compliance was introduced to provide an efficient way to represent specific acceptable levels of conformance of an implementation to a specification (as measured by a test). Thus "compliant with" a specification means "passing" the tests associated with the specification. However, at a later stage the developers of this standard decided that the distinction between the words compliance and conformance should be eliminated, as it was causing confusion.

concurrent, concurrency testing
The process of forcing tests to execute software in a manner where several features are tested at the same time.

conditional feature
A feature which does not have to be present in all implementations of the product (such as support for particular foreign languages).

An assertion for a conditional feature starts with: If...

conformance
The assurance that a product formally meets all the requirements of a standard or specification and passes the tests associated with the specification. Linked to compliance. See compliance for the note from the IEEE's POSIX 1003.3-1991 standard.

conformance tests
Conformance tests are those which are developed to show the successful and normal operation of the product under test.

These are also known as positive tests. (It is easier to use the phrase positive tests, since it removes the confusion with the word conformance).

corner case conditions
The conditions which occur on the limits of the operation of the product, such as the largest or most precise number, or the use of the product when environmental conditions are limited (such as running out of disk or memory). These are also known as boundary conditions.

coverage
This is the extent to which the tests in a test suite exercise the software under test. This can be measured in many different ways, including:
  • code coverage, where the coverage is measured against parameters regarding the amount of source code in the product being exercised by the tests
  • coverage of the thoroughness (exhaustive, thorough, identification) of the tests with regard to the operation of the software being tested
  • data path analysis, where the possible types of data usable by the product under test are tested.

D

defect
A defect is an error or omission in a product which has not shown evidence of its existence. When the defect is identified, it becomes a bug. Defects are often known as hidden defects or hidden bugs.

destructive testing
The testing of a product to destruction (or more commonly with software to the point where the software and/or data need to be reloaded from backup).

development system
The system on which tests are developed.

deviance tests
These are tests constructed to show error conditions from the product under test. They are also known as negative tests and dirty tests.

dirty tests
Dirty tests are those constructed to show error conditions from the product under test. These are also known as deviance tests and negative tests.

distributed test
A distributed test is one composed of components which need to be executed and synchronized on more than one platform to provide a result. (Readers may compare this with a remote test, which is not the same).

distributed testing
Distributed testing takes the following forms:
  1. Testing using more than one system to test how a given implementation works when its operation is distributed over more than one system (for example testing a networked database by introducing tests on several different systems in the same test run).
  2. Testing using a target system and a host system.

In this case tests are started, stopped and the results collected on the host system, but the results show the effects of the tests on the target system.

An example of this would be where a personal computer was used as a capture and replay tool to play test scripts which exercised software on a server connected to the PC over a network.

Given that distributed testing has more than one possible meaning, it is always vital that this term is further defined before distributed testing development is considered in detail. See also distributed test and remote test.

E

equivalence classes and partitioning
Equivalence partitioning is the process of identifying tests which are expected to produce the same result by testing the same software. For example, when testing a three digit numeric field, using 105, 106 and 107 as test data in three separate tests is likely to produce three identical test outcomes. Such tests can then be grouped, and in many cases one test per group is considered sufficient, as sketched below.
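
A minimal sketch of the grouping; the classify function here encodes the three digit field rule from the example:

  def partition(values, classify):
      # Group candidate inputs into equivalence classes.
      classes = {}
      for v in values:
          classes.setdefault(classify(v), []).append(v)
      return classes

  classes = partition(range(-5, 1200),
                      lambda v: "valid" if 0 <= v <= 999 else "invalid")
  # 105, 106 and 107 all land in the "valid" class, so one
  # representative per class is often considered sufficient:
  representatives = {name: members[0] for name, members in classes.items()}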

error assertion
Error assertions are written for functionality which will show error conditions from the product under test. These assertions will result in deviance tests or negative tests being developed. This is also known as a negative assertion.

error guessing
This is a technique where tests are created based on the test developer's understanding of the implementation under test and those tests which are likely to cause failures.

The test developers create tests which are based on their best guess (or their own experience) regarding where errors might be found. Using experienced testers, this has been found to be an excellent way of finding errors.

For example, experience might show that on particular processors, selecting particular numbers for maths tests may be more likely to generate errors.

execution requirements
Execution requirements are sometimes used in conjunction with an assertion when the assertion is to be tested in more than one environment. They always follow the text of the assertion to be tested and start with the words "execution requirement(s)". For example, there may be a requirement to execute a particular test using a variety of languages. This would be expressed as an execution requirement.

exhaustive testing
Exhaustive testing seeks to test the behaviour of every aspect of an element under test, including every permutation and combination of events.

extended assertion
An extended assertion is an assertion which is too difficult to test completely for one of the following reasons:
  1. No portable test method exists.
  2. The document or specification on which the assertion is based is not sufficiently specific to write a test.
  3. No reliable test method is known.
  4. The test setup requires an unreasonable amount of effort by the test user.
  5. The test would require an unreasonable amount of time or resources to complete on most systems.
  6. Writing the test would take an unreasonable amount of time.
  7. This test would have an adverse effect on the completion of other tests.

When an assertion is declared to be an extended assertion, and the test is not written, the reason code (or number as above) is marked alongside the assertion. Extended assertions are better known as untested assertions.

F

fail
This is a result code where the assertion was found to be false based on the execution of its assertion test.

failure
An assertion test which has generated the result code fail.

fault
Another word for a bug. Often used in the context "that's not my fault."

G

general assertion, (GA)
An assertion which occurs many times and applies to different elements. For example, all commands for a particular product might respond in a similar way when invoked with incorrect arguments. In this case a single general assertion might be written and applied to all commands.

The assertion id for each general assertion starts with the letters GA.
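
As an illustration, a single general assertion ("every command rejects an unknown option with a nonzero exit status") might be applied to all commands as follows; the command names are hypothetical:

  import subprocess

  COMMANDS = ["cmd_a", "cmd_b", "cmd_c"]  # hypothetical product commands

  def test_ga_bad_arguments():
      # GA: one assertion, applied in turn to every command.
      for cmd in COMMANDS:
          result = subprocess.run([cmd, "--no-such-option"], capture_output=True)
          assert result.returncode != 0, "%s accepted a bad option" % cmd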

glass box test
A test which assumes some understanding of the internals of the product under test. As opposed to black box testing or verification which is a test for functionality viewed from the exterior of the product under test, such as some user-accessible feature.

Glass box tests are also known as white box tests or validation tests.

H

host system
A computer system used to manage the execution of tests where the product being tested is on a separate target system.

hypothesis
This is the supposition made as the basis for testing, used as the starting point for further investigation into finding defects. Normally a hypothesis for developing tests would include assumptions such as the following: "the documentation from which the tests are to be developed is almost entirely correct."

I

identification test
An identification test seeks to check some small but distinguishing characteristic of the element under test to ensure that (a) it is the element which is expected and (b) it is in the correct position. Identification testing is also known as touch testing.

informative
Informative text within specifications or documentation is provided for the information of the developers or users, but is not considered to define the operation of the product. Informative text is also known as non-normative text.

See also rationale (which tends to be the comments sub-set of informative text).

Informative text is not tested.

integration
Integration testing takes place when the element under test is tested alongside other software with which it is expected to be integrated.

interactions
The relationship between different components of software under test.

invocable component, (IC)
An invocable component (often called an IC) is the smallest grouping of tests which can be executed individually. An invocable component will include one or more test purposes testing one or more assertions.
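
A sketch of the grouping, with hypothetical test purposes for a colour-checking product:

  KNOWN_COLOURS = {"black", "red", "blue"}

  def car_colour():
      # Hypothetical stand-in for the product under test.
      return "black"

  def tp_colour_is_black():  # one test purpose, testing one assertion
      assert car_colour() == "black"

  def tp_colour_is_known():
      assert car_colour() in KNOWN_COLOURS

  # The invocable component is the smallest unit which can be run on its
  # own; it bundles one or more test purposes.
  IC_COLOUR = [tp_colour_is_black, tp_colour_is_known]

  def run_ic(ic):
      for tp in ic:
          tp()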

J

K

L

loopback
When distributed tests are run on a single system, it is called loopback testing.

This is also known as closed loop testing.

M

may
The word "may" when it refers to an implementation or product suggests optional or undefined behaviour. (Users would be well advised not to depend on optional or undefined behaviour). For this reason "may" is often impossible to test on account of its ambiguity, and is thus best avoided at all times.

To avoid additional ambiguity the reverse sense of "may" is best expressed as "need not" (rather than "may not").

N

negative assertion
Negative assertions are written for functionality which will show error conditions from the product under test. These are better known as error assertions. These assertions will result in deviance tests, dirty tests or negative tests being developed.

negative tests
Negative tests are those constructed to show error conditions from the product under test. These are also known as deviance tests and dirty tests.

non-normative
Non-normative text within specifications or documentation is provided for the information of the developers or users, but is not considered to define the operation of the product. Non-normative text is also known as informative text.

Non-normative text is provided for one of the following purposes:

  • techniques for implementing the specification (which might not be adopted by the developers)
  • examples (which might only work under particular circumstances)
  • comments (for example, to show why a particular feature has been described in a given way)

See also rationale (which tends to be the comments sub-set of non-normative text).

Non-normative text is not tested.

normative
Normative text (which would be expected to make up the bulk of any specification or user publication) defines the functionality and use of the product under test. Normative text shall be tested. See also non-normative.

Writers of documents which are to be tested are well advised to separate normative and non-normative text, so it is obvious which text describes the operation of the product (and shall be tested) and which text is intended as background information.

Normative text is identical to text describing requirements.

O

P

pass
This is a result code where the assertion was found to be true based on the execution of its assertion test.

performance testing
Testing a product under known given loads; in particular, in terms of numbers of users and volumes of data and the measurement of the speed with which data is processed along with the associated response times.

Traditionally, measuring the length of time a functional test takes to execute is a poor measure of performance. This is because much of the time taken to perform a test is spent checking that an element of functionality has worked correctly, rather than in executing the element itself.

With careful design, functional tests can show useful performance data (indeed, they can show exactly which elements of functionality are executing quickly or slowly).

positive assertion
Positive assertions are written for functionality which will show the successful and normal operation of the product under test. These assertions will result in conformance tests or positive tests being developed.

positive tests
Positive tests are those which are developed to show the successful and normal operation of the product under test.

These are also known as conformance tests.

Q

R

random testing
Random testing may take the following forms:
  • a known set of tests are executed in a pseudo-random order (and the order must be noted so that in the event of a failure, the tests can be re-run)
  • tests are run using pseudo-random data (and this data shall also be preserved)

Tests which work correctly independently of their order can often be executed in a random order to ensure that no interdependencies exist within the product.

Random testing is only effective if the tests can be reproduced. For this reason, the order and data associated with random tests must be preserved.
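
A minimal sketch of the first form, in which the seed is recorded so that a failing run can be reproduced exactly (names are illustrative):

  import random
  import time

  def run_in_random_order(tests, seed=None):
      if seed is None:
          seed = int(time.time())
      print("random order seed:", seed)  # preserve this with the results
      order = list(tests)
      random.Random(seed).shuffle(order)
      for test in order:
          test()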

rationale
Text describing rationale within specifications or documentation is provided for the information of the developers or users, but is not considered to define the operation of the product. This usually takes the form of comments showing why a particular feature was implemented in a given way. See also requirements. See also non-normative (non-normative text is a superset of the rationale within the document).

Rationale text is not tested.

record
The process of recording a test session (see capture and replay tool) such that it might be repeated at a later date. Also known as capture.

regression
Regression testing is the process of re-testing. Generally this takes place for one of the following reasons:
  • to ensure that a known defect has been fixed, and that in fixing the defect, no new defects have been introduced (or more usually, where these new defects are located)
  • to ensure that a new release of a product is compatible with the previous release

reliability testing
Reliability testing is the technique of running tests against an implementation continuously for many days, with a view to demonstrating the overall reliability of the environment under test.

remote test
A remote test is one which is invoked from a remote system, but executes entirely on one platform, without the need to synchronize separate parts of the test on more than one platform.

A remote test may pass results back to a central results repository on another platform.

replay
This is the process of re-playing a script of activity against the software under test. See also capture and replay tool.

required feature
A required feature in a product is one which shall always be present when the product is in use and when the product is tested.

A required feature will result in a base assertion or an extended assertion being written.

requirements
Text describing the requirements of an implementation (which would be expected to make up the bulk of any specification or user publication) defines the functionality and use of the product under test. This text shall be tested. See also non-normative text and rationale.

Writers of documents which are to be tested are well advised to separate requirements and non-normative text, so it is obvious which text describes the operation of the product (and shall be tested) and which text is intended as background information.

Requirements are also known as normative text.

S

set up
This is the part of the testing process performed prior to performing an assertion test. During this phase all the environmental requirements will be established to ensure that the test can be run correctly.
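
For example, the setUp and tearDown hooks of Python's unittest module mirror the set up and tear down phases (a minimal sketch; the temporary directory stands in for whatever environment the tests require):

  import shutil
  import tempfile
  import unittest

  class ColourTests(unittest.TestCase):
      def setUp(self):
          # Set up: establish a known environment before each assertion test.
          self.workdir = tempfile.mkdtemp()

      def tearDown(self):
          # Tear down (clean up): return the environment to a known state.
          shutil.rmtree(self.workdir)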

shall
Many specification writers use the word "shall" to describe a requirement in the software that is to be tested.

should
When the word "should" is used in a specification with respect to a product or implementation, the text is normally testable and the normal test development process should be followed.

Best practice is to avoid the word "should" with regard to implementations (using the word shall in preference).

When the word "should" is used with reference to user operations, the text is not normally tested.

strategy
Test strategies are normally comments within test files which explain the approach used by the test developer to test an assertion.

The overall approach for testing grouped elements of functionality should be included within the test suite design. Also known as tactic.

stress testing
This is the testing of a product under known load and extreme load; in particular, in terms of numbers of users, volumes of data and the measurement of the speed with which data is processed (see performance testing).

Traditionally, functional tests do not stress software. This is because it is normally possible to test functionality using very small amounts of data and a minimal number of users (thus the tests work well on a system with a small configuration).

However, with careful design, functional tests can stress a system (indeed, they can show exactly which elements of functionality are working well under stress and which are not).

system testing
This phase of testing involves executing the entire set of tests against a complete product, to ensure that the entire system is tested.

T

tactic
Test tactics are normally comments within test files which explain the approach used by the test developer to test an assertion.

The overall approach for testing grouped elements of functionality should be included within the test suite design. Also known as strategy.

target system
A target system is the system at which tests are directed when those tests are managed and executed on another platform (called the host system).

tear down
This is the part of the testing process performed after an assertion test to ensure that the environment returns to a known state in preparation for subsequent assertion tests (or groups of assertion tests). Also known as clean up.

test
The word test is so widely used in an ambiguous manner as to be undefinable. Always ask for further information.

test case
A test case is the lowest level of testing performed. Thus a single assertion might be tested with a number of different test cases, each of which might individually pass or fail.

For example, when conducting tests for mathematical routines, many values might be used to test a single assertion.
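
For instance, a single assertion about a square root routine might be exercised with several test cases, each of which can pass or fail individually (a sketch):

  import math

  # One assertion ("sqrt returns the non-negative root"), several test cases.
  CASES = [(0.0, 0.0), (1.0, 1.0), (4.0, 2.0), (2.25, 1.5)]

  def test_sqrt_assertion():
      for value, expected in CASES:
          assert math.isclose(math.sqrt(value), expected), (value, expected)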

test description
The phrase "test description" is often used for the description of tests when they are not written in the assertion form.

test harness
A test harness builds, executes and reports the results of tests.

test purpose, (TP)
A test purpose (often called a TP) is the software that tests an assertion. A single test purpose will always test a single assertion.

test result code
This is the determination of the result of a test by the test purpose.

The results include: pass, fail, unresolved, unsupported, untested.

Test developers often add additional codes to these, including:

  • uninitiated (where the test was not executed for some reason)
  • warning (where there is a problem which for some reason has been downgraded from a failure)
  • inspect (where a visual inspection is required to determine the final result)
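
Expressed as a sketch (the last three codes are the common additions listed above, not part of any standard set):

  from enum import Enum

  class ResultCode(Enum):
      PASS = "pass"
      FAIL = "fail"
      UNRESOLVED = "unresolved"
      UNSUPPORTED = "unsupported"
      UNTESTED = "untested"
      # Common additions:
      UNINITIATED = "uninitiated"
      WARNING = "warning"
      INSPECT = "inspect"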

test suite
A test suite is a collection of tests (or test purposes) and all the other programs and data required to build and execute the tests (and return the environment to a known state).

testable assertion
This is an assertion for which it is practical to develop a thorough test which could result in either a pass or a fail. Also known as a base assertion.

testing requirements
Testing requirements are sometimes used in conjunction with assertions when the scope of the assertion is considered to be far-reaching. They always follow the assertion to which they relate and start with the words "Testing requirement(s)."

The requirements are used to define a precise level of thoroughness with which the assertion is to be tested. For example a testing requirement might require the test developer to add a test which demonstrated functionality shown in an example in customer documentation.

The POSIX specifications state that when there is a chance that an assertion may be ambiguous, incomplete or misinterpreted, the assertion is clarified by adding a testing requirement.

However, if an assertion is unclear, it should always be rewritten. Ambiguity is thus no reason to add text to further define the meaning of the assertion.

thorough testing
Thorough testing seeks to verify the behaviour of every aspect of an element, but not to exercise all the permutations of this behaviour.

touch test
A touch test seeks to check some small but distinguishing characteristic of the element under test to ensure that (a) it is the element which is expected and (b) it is in the correct position. Touch testing is also known as identification testing.

U

unit testing
Unit testing is the process of performing tests on small components of a product in isolation, prior to these components being integrated into the complete product for system testing.

unresolved
This is an intermediate result code which requires manual intervention to identify the final test result. At least one of the following conditions must be true:
  1. The assertion test required manual inspection in order to determine its result.
  2. The setup for the assertion test did not complete in the manner expected.
  3. The test program containing the assertion test was unexpectedly interrupted.
  4. The assertion test could not be executed because a previous assertion test on which it depended failed.
  5. The test program containing the assertion was not initiated.
  6. Compilation or execution of the test program produced unexpected errors or warnings.
  7. The assertion test did not resolve to a final result code for another reason.

unsupported
This is a result code where an assertion test could not be performed because the conditional feature was not implemented.

untested
This is a result code where either there was no assertion test or the implemented test for the extended assertion is not complete enough to result in a test result code of pass or fail.

untested assertion
An untested assertion is an assertion which is too difficult to test completely for one of the following reasons:
  1. No portable test method exists.
  2. The document or specification on which the assertion is based is not sufficiently specific to write a test.
  3. No reliable test method is known.
  4. The test setup requires an unreasonable amount of effort by the test user.
  5. The test would require an unreasonable amount of time or resources to complete on most systems.
  6. Writing the test would take an unreasonable amount of time.
  7. This test would have an adverse effect on the completion of other tests.

When an assertion is declared to be an extended assertion, and the test is not written, the reason code (or number as above) is marked alongside the assertion. Untested assertions are also known as extended assertions.

V

validation
The process of testing based on the assumption of some understanding of the internals of the product under test. As opposed to black box testing or verification which is a test for functionality which may be viewed from the exterior of the product under test, such as some user-accessible feature.

Validation tests are also known as glass box tests or white box tests.

verification
Verification tests functionality which may be viewed from the exterior of the product under test, such as some user-accessible feature. Also known as black box testing. (As opposed to white box testing or validation which assumes some understanding of the internals of the product under test).

W

white box test
A test which assumes some understanding of the internals of the product under test. As opposed to black box testing or verification which is a test for functionality which may be viewed from the exterior of the product under test, such as some user-accessible feature.

White box tests are also known as glass box tests or validation tests.

X

Y

Z