acceptance testing:
Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component.
actual outcome: The behaviour actually produced when the object is tested under specified conditions.
alpha testing: Simulated or actual operational testing at an in-house site not otherwise involved with the software developers.
Backus-Naur form: A metalanguage used to formally describe the syntax of a language. See BS.
basis test set: A set of test cases derived from the code logic which ensures that 100% branch coverage is achieved.
behaviour: The combination of input values
and preconditions and the required response for a function of a system. The full specification of a function would normally comprise one or more behaviors.
beta testing: Operational testing at a site not otherwise involved with the software developers.
branch condition combination coverage: The percentage of combinations of all branch condition outcomes in every decision
that have been exercised
by a test case suite.
branch condition coverage: The percentage of branch condition outcomes in every decision that have been exercised by a test case suite.
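As a hedged illustration of the two branch condition measures above (not part of the original glossary; the decision and test values are invented), consider a decision with two conditions:

```python
from itertools import product

# Invented decision under test: both conditions must hold.
def decision(a, b):
    return a > 0 and b > 0   # conditions: "a > 0" and "b > 0"

# Branch condition coverage: each condition takes both TRUE and FALSE
# at least once -- two test cases, (1, 1) and (0, 0), already suffice.
# Branch condition combination coverage: all four outcome combinations.
for a, b in product((0, 1), repeat=2):
    print(f"a>0={a > 0!s:<6} b>0={b > 0!s:<6} decision={decision(a, b)}")
```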
capture/playback tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time.
CAST: Acronym for computer-aided software testing.
cause-effect graph: A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.
certification: The process of confirming that a system or component complies with its specified requirements and is acceptable for operational use. From [IEEE].
code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
code-based testing: Designing tests based on objectives derived from the implementation (e.g., tests that execute specific control flow paths or use specific data items).
compatibility testing: Testing whether the system is compatible with other systems with which it should communicate.
component: A minimal software item for which a separate specification is available.
condition: A Boolean expression containing no Boolean operators. For instance, A<B is a condition but A and B is not. [DO-178B]
condition outcome: The evaluation of a condition to TRUE or FALSE.
conformance testing: The process of testing that an implementation conforms to the specification
on which it is based.
control flow: An abstract representation of all possible sequences of events in a program's execution.
control flow path:
conversion testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
correctness: The degree to which software conforms to its specification.
coverage item: An entity or property used as a basis for testing.
data flow testing: Testing in which test cases are designed based on variable usage within the code.
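As a hedged sketch of data flow testing (the function and values below are invented), test cases can be chosen to exercise each definition-use pair of a variable:

```python
# Invented component: `total` is defined once and used on two paths.
def price_with_discount(amount, member):
    total = amount            # definition of `total`
    if member:
        total = total * 0.9   # use of `total`, followed by a redefinition
    return total              # use of `total`

# One test case per definition-use path through the code:
assert price_with_discount(100.0, member=False) == 100.0
assert round(price_with_discount(100.0, member=True), 2) == 90.0
```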
debugging: The process of finding and removing the causes of failures in software.
decision: A program point at which the control flow has two or more alternative routes.
decision outcome: The result of a decision (which therefore determines the control flow alternative taken).
design-based testing: Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behaviour of algorithms).
documentation testing: Testing concerned with the accuracy of documentation.
domain: The set from which values are selected.
dynamic analysis: The process of evaluating a system or component
based upon its behaviour during execution. [IEEE]
emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. [IEEE,dob]
error: A human action that produces an incorrect result. [IEEE]
error guessing: A test case design technique where the experience of the tester is used to postulate what faults might occur, and to design tests specifically to expose them.
error seeding: The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program. [IEEE]
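The error seeding entry implies a simple estimate of the faults remaining. As a hedged sketch (the figures are invented, and the proportional estimator shown, often attributed to Mills, is not named by the glossary itself):

```python
seeded = 20        # known faults intentionally added to the program
seeded_found = 15  # seeded faults detected during testing
native_found = 30  # real (native) faults detected during testing

# If testing finds native faults in the same proportion as seeded ones,
# the total native fault count can be estimated proportionally.
estimated_native_total = native_found * seeded / seeded_found  # -> 40.0
estimated_remaining = estimated_native_total - native_found    # -> 10.0
print(estimated_native_total, estimated_remaining)
```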
executable statement: A statement which, when compiled, is translated into object code, which will be executed procedurally when the program is running and may perform an action on program data.
failure: Deviation of the software from its expected delivery or service. [Fenton]
fault: A manifestation of an error in software. A fault, if encountered, may cause a failure.
feasible path: A path for which there exists a set of input values and execution conditions which causes it to be executed.
fit for purpose testing: Validation carried out to demonstrate that the delivered system can be used to carry out the tasks for which it was acquired.
functional specification: The document that describes in detail the characteristics of the product with regard to its intended capability. [BS , Part]
functional test case design: Test case selection that is based on an analysis of the specification
of the component without reference to its internal workings.
incremental testing: Integration testing
where system components are integrated into the system one at a time until the entire system is integrated.
independence: Separation of responsibilities which ensures the accomplishment of objective evaluation. After [dob].
input: A variable (whether stored within a component
or outside it) that is read by the component.
input domain: The set of all possible inputs.
input value: An instance of an input.
inspection:
A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection). After [Graham]
installability: The ability of a software component or system to be installed on a defined target platform allowing it to be run as required. Installation includes both a new installation and an upgrade.
installability testing: Testing whether the software/system installation meets defined installation requirements.
installation guide: Supplied instructions on any suitable media, which guides the installer through the installation process. This may be a manual guide, step-by-step procedure, installation wizard, or any other similar process description.
installation wizard: Supplied software on any suitable media, which leads the installer through the installation process. It normally runs the installation process, provides feedback on installation outcomes, and prompts for options.
instrumentation: The insertion of additional code into the program in order to collect information about program behaviour during program execution (see the sketch below).
integration: The process of combining components into larger assemblies.
integration testing: Testing performed to expose faults in the interfaces and in the interaction between integrated components.
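As a sketch of the instrumentation entry above (illustrative only; the probe mechanism below is hand-written, whereas real tools insert such code automatically):

```python
covered = set()  # coverage record populated by the inserted probes

def classify(n):
    covered.add("enter")         # probe inserted by instrumentation
    if n < 0:
        covered.add("negative")  # probe on the TRUE branch
        return "negative"
    covered.add("non-negative")  # probe on the FALSE branch
    return "non-negative"

classify(5)
print(covered)  # the probes that fired show which parts the test executed
```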
LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.
maintainability: The ease with which the system/software can be modified to correct faults, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.
maintainability requirements: A specification of the required maintainability for the system/software.
modified condition/decision coverage: The percentage of all branch condition outcomes that independently affect a decision outcome that have been exercised by a test case suite (see the example below).
mutation analysis: A method to determine test case suite thoroughness by measuring the extent to which a test case suite can discriminate the program from slight variants (mutants) of the program. See also error seeding.
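To make the modified condition/decision coverage definition concrete, here is a hedged, invented example: for the decision `a and b`, a condition is shown to independently affect the outcome by a pair of test cases in which only that condition changes and the decision outcome flips.

```python
def decision(a, b):
    return a and b

# Case 1 vs case 2: only `a` changes and the outcome flips -> shows `a`'s effect.
# Case 1 vs case 3: only `b` changes and the outcome flips -> shows `b`'s effect.
tests = [(True, True), (False, True), (True, False)]
for a, b in tests:
    print(a, b, decision(a, b))

# Three test cases achieve 100% MC/DC here, whereas branch condition
# combination coverage would require all four combinations.
```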
N-transitions: A sequence of N+1 transitions.
negative testing: Testing aimed at showing software does not work. [Beizer]
non-functional requirements testing: Testing of those requirements that do not relate to functionality, e.g. performance and usability.
operational testing: Testing conducted to evaluate a system or component in its operational environment. [IEEE]
output: A variable (whether stored within a component
or outside it) that is written to by the component.
output domain: The set of all possible outputs.
output value: An instance of an output.
performance testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. [IEEE]
portability: The ease with which the system/software can be transferred from one hardware or software environment to another.
precondition: Environmental and state conditions which must be fulfilled before the component can be executed with a particular input value.
predicate: A logical expression which evaluates to TRUE or FALSE, normally to direct the execution path.
pseudo-random: A series which appears to be random but is in fact generated according to some prearranged sequence.
recovery testing: Testing aimed at verifying the system's ability to recover from varying degrees of failure.
regression testing: Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
reliability: The ability of the system/software to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations.
requirement: A capability that must be met or possessed by the system/software (requirements may be functional or non-functional).
requirements-based testing: Designing tests based on objectives derived from requirements for the software component (e.g., tests that exercise specific functions or probe the non-functional constraints such as performance or security). See functional test case design.
review: A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users or other interested parties for comment or approval. [IEEE]
security: Preservation of confidentiality, integrity and availability of information, where
availability is ensuring that authorized users have access to information and associated assets when required, and
integrity is safeguarding the accuracy and completeness of information and processing methods, and
confidentiality is ensuring that information is accessible only to those authorized to have access.
security requirements: A specification of the required security for the system/software.
simple subpath: A subpath of the control flow graph in which no program part is executed more than necessary.
simulation: The representation of selected behavioural characteristics of one physical or abstract system by another system. [ISO]
simulator: A device, computer program or system used during software verification, which behaves or operates like a given system when provided with a set of controlled inputs. [IEEE,dob]
specification: A description, in any suitable form, of requirements.
specification testing: An approach to testing wherein the testing is restricted to verifying the system/software meets the specification.
state transition: A transition between two allowable states of a system or component.
statement: An entity in a programming language which is typically the smallest indivisible unit of execution.
static analysis: Analysis of a program carried out without executing the program.
static testing: Testing of an object without execution on a computer.
storage testing: Testing whether the system meets its specified storage objectives.
stress testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. [IEEE]
structural coverage: Coverage measures based on the internal structure of the component.
structural test case design: Test case selection that is based on an analysis of the internal structure of the component.
stub: A skeletal or special-purpose implementation of a software module, used to develop or test a component that calls or is otherwise dependent on it (see the sketch below). After [IEEE].
symbolic execution:
A static analysis technique that derives a symbolic expression for program paths.
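As a hedged illustration of symbolic execution (the function is invented, and a real tool would derive this mechanically), one path of a small function is summarised by a path condition and a symbolic result:

```python
def f(x, y):
    z = x + y
    if z > 10:
        return z * 2  # path of interest: the TRUE branch
    return z

# Treating the inputs as symbols X and Y, the TRUE-branch path yields:
#   path condition:  X + Y > 10
#   symbolic result: (X + Y) * 2
# Any concrete inputs satisfying the path condition exercise this path:
assert f(6, 5) == 22
```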
system testing: The process of testing an integrated system to verify that it meets specified requirements. [Hetzel]
test case:
A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. After [IEEE,dob]
test case design technique: A method used to derive or select test cases.
test case suite: A collection of one or more test cases for the software under test.
test comparator: A test tool that compares the actual outputs produced by the software under test with the expected outputs for that test case.
test driver: A program or test tool used to execute software against a test case suite (see the sketch below).
test environment: A description of the hardware and software environment in which the tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.
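A hedged sketch tying together the stub, test driver, and test environment entries (all names below are invented for illustration):

```python
# The component under test depends on a payment gateway; in the test
# environment the gateway is replaced by a stub, and a driver runs the suite.

class PaymentGatewayStub:
    """Stub: skeletal stand-in for the real gateway dependency."""
    def charge(self, amount):
        return True  # canned response instead of a real network call

def checkout(gateway, amount):
    return "paid" if gateway.charge(amount) else "declined"

def run_test_suite():
    """Test driver: executes the software against the test case suite."""
    gateway = PaymentGatewayStub()
    assert checkout(gateway, 9.99) == "paid"
    print("suite passed")

run_test_suite()
```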
test generator: A program that generates test cases in accordance with a specified strategy or heuristic. After [Beizer].
test measurement technique: A method used to measure test coverage.
test procedure: A document providing detailed instructions for the execution of one or more test cases.
testing: The process of exercising software to verify that it satisfies specified requirements and to detect errors. After [dob]
usability: The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.
validation: Determination of the correctness of the products of software development with respect to the user needs and requirements.
verification: The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase. [IEEE]
volume testing: Testing where the system is subjected to large volumes of data.
walkthrough: A review of requirements, designs or code characterized by the author of the object under review guiding the progression of the review.