1. The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage.
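As an illustration of the metric, path coverage can be sketched for a toy function whose two independent decisions yield four paths; the function, suite, and names below are hypothetical:

```python
# Hypothetical illustration of path coverage: a function with two
# independent decisions has 2 * 2 = 4 paths through its body.
def classify(x, y):
    path = []
    if x > 0:              # decision 1
        path.append("x+")
    else:
        path.append("x-")
    if y > 0:              # decision 2
        path.append("y+")
    else:
        path.append("y-")
    return tuple(path)

# A test suite that drives only three of the four paths:
suite = [(1, 1), (1, -1), (-1, 1)]
exercised = {classify(x, y) for x, y in suite}
total_paths = 4
path_coverage = 100 * len(exercised) / total_paths  # 75.0 (%)
```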
2. Separation of responsibilities, which encourages the accomplishment of objective testing. [After DO-178b]
3. The individual element to be tested. There usually is one test object and many test items. See also test object. A reason or purpose for designing and executing a test.
4. An element of configuration management, consisting of the recording and reporting of information needed to manage a configuration effectively. This information includes a listing of the approved configuration identification, the status of proposed changes to the configuration, and the implementation status of approved changes. [After IEEE 610]
5. Acronym for Computer Aided Software Engineering.
6. The activity of establishing or updating a test plan.
7. The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]
8. Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.
9. A high level metric of effectiveness and/or efficiency used to guide and control progressive test development - e.g. Defect Detection Percentage (DDP).
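As a sketch of such a metric, the Defect Detection Percentage (DDP) is commonly computed as the defects found by the test level under evaluation, divided by that number plus the defects found afterwards; the function name and figures below are illustrative:

```python
# Defect Detection Percentage (DDP): defects found by the test level
# under evaluation, divided by those plus the defects that escaped it
# and were found later (e.g. in the field).
def ddp(found_in_testing, found_after_release):
    return 100 * found_in_testing / (found_in_testing + found_after_release)

# e.g. 45 defects caught in system test, 5 escaped to production:
print(ddp(45, 5))  # 90.0
```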
10. The percentage of boundary values that have been exercised by a test suite.
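A minimal sketch of the idea, assuming a two-value boundary analysis of a hypothetical valid range 18..65:

```python
# For an input domain of valid ages 18..65, the two-value boundary
# values are the edges and their nearest invalid neighbours.
boundaries = {17, 18, 65, 66}

# Suppose the suite only exercises the lower boundary:
suite_inputs = {17, 18, 40}
covered = boundaries & suite_inputs
coverage = 100 * len(covered) / len(boundaries)  # 50.0 (%)
```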
11. Tests aimed at showing that a component or system does not work. Negative testing is related to the testers' attitude rather than a specific test approach or test design technique, e.g. testing with invalid input values or exceptions. [After Beizer]
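A minimal negative test might look like the sketch below; `parse_age` is a hypothetical stand-in component, not a real library call:

```python
# Negative test: feed an invalid value and assert that the component
# rejects it, rather than checking a "happy path" result.
def parse_age(text):
    age = int(text)            # raises ValueError for non-numeric input
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

def test_rejects_non_numeric():
    try:
        parse_age("abc")
    except ValueError:
        return True            # the expected failure path was taken
    return False

assert test_rejects_non_numeric()
```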
12. A tool that supports the recording of requirements, requirements attributes (e.g. priority, knowledge responsible) and annotation, and facilitates traceability through layers of requirements and requirements change management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and violations to pre-defined requirements rules.
13. A meeting at the end of a project during which the project team members evaluate the project and learn lessons that can be applied to the next project.
14. A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028] See also peer review. A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.
15. A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
16. Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions. [After ISO 9126]
17. Hardware and software products installed at users' or customers' sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.
18. A software tool used to carry out instrumentation.
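A toy instrumenter can be sketched with Python's standard trace hook, recording which lines of a hypothetical target function actually execute; a real coverage tool would map these probe hits back to the source:

```python
import sys

executed = set()

def probe(frame, event, arg):
    # Record every line executed in the traced frame.
    if event == "line":
        executed.add(frame.f_lineno)
    return probe

def target(x):
    if x > 0:
        return "positive"
    return "non-positive"

sys.settrace(probe)   # install the probe
target(5)
sys.settrace(None)    # remove it
# 'executed' now holds the line numbers the call actually ran.
```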
19. Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing in order to acquire feedback from the market.
20. A variable (whether stored within a component or outside) that is read by a component.
21. A statement of test objectives, and possibly test ideas about how to test. Test charters are used in exploratory testing. See also exploratory testing. An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests. [After Bach]
22. The behavior predicted by the specification, or another source, of the component or system under specified conditions.
23. The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.
24. The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure. [ISO 9126] See also reliability. The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations. [ISO 9126]
25. A method to determine test suite thoroughness by measuring the extent to which a test suite can discriminate the program from slight variants (mutants) of the program.
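A minimal sketch of the idea, with hypothetical mutants of a trivial function; a test suite is stronger the more mutants it can distinguish ("kill"):

```python
# Toy mutation analysis: each "mutant" is a slight variant of the
# original function; the mutation score is the fraction of mutants
# the suite can tell apart from the original.
def original(a, b):
    return a + b

mutants = [
    lambda a, b: a - b,      # operator replaced
    lambda a, b: a + b + 1,  # off-by-one introduced
    lambda a, b: b + a,      # equivalent mutant: cannot be killed
]

suite = [(2, 3), (0, 0)]

def killed(mutant):
    return any(mutant(a, b) != original(a, b) for a, b in suite)

mutation_score = 100 * sum(killed(m) for m in mutants) / len(mutants)
# Two of the three mutants are killed, so the score is ~66.7%.
```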
26. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]
27. A pointer within a web page that leads to other web pages.
28. A white box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).
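A sketch of the technique for the decision `x > 0 and y > 0`, whose two single conditions give four outcome combinations (TT, TF, FT, FF); all names and inputs below are illustrative:

```python
def in_quadrant_one(x, y):
    return x > 0 and y > 0

# Inputs chosen so (x > 0, y > 0) takes every combination of outcomes:
cases = {
    (True, True): (1, 1),
    (True, False): (1, -1),
    (False, True): (-1, 1),
    (False, False): (-1, -1),
}

for (c1, c2), (x, y) in cases.items():
    assert (x > 0, y > 0) == (c1, c2)            # combination exercised
    assert in_quadrant_one(x, y) == (c1 and c2)  # expected decision outcome
```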
29. The capability of the software product to be used in place of another specified software product for the same purpose in the same environment. [ISO 9126] See also portability. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]
30. A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified work loads, or with reduced availability of resources such as access to memory or servers. [After IEEE 610] See also performance testing, load testing.