1. A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as access to memory or servers. [After IEEE 610] See also performance testing.
2. The capability of the software product to be used in place of another specified software product for the same purpose in the same environment. [ISO 9126] See also portability.
3. A white box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement). See the sketch after this list.
4. A pointer within a web page that leads to other web pages.
5. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]
6. A method to determine test suite thoroughness by measuring the extent to which a test suite can discriminate the program from slight variants (mutants) of the program. See the sketch after this list.
7. The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure. [ISO 9126] See also reliability.
8. The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution. See the sketch after this list.
9. The behavior predicted by the specification, or another source, of the component or system under specified conditions.
10. A statement of test objectives, and possibly test ideas about how to test. Test charters are used in exploratory testing. See also exploratory testing.
11. A variable (whether stored within a component or outside) that is read by a component.
12. Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes.
13. A software tool used to carry out instrumentation.
14. Hardware and software products installed at users' or customers' sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.
15. Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions. [After ISO 9126]
16. A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
17. A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028] See also peer review.
18. A meeting at the end of a project during which the project team members evaluate the project and learn lessons that can be applied to the next project.
19. A tool that supports the recording of requirements, requirements attributes (e.g. priority, knowledge responsible) and annotation, and facilitates traceability through layers of requirements and requirements change management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and violations to predefined requirements rules.
20. Tests aimed at showing that a component or system does not work. Negative testing is related to the testers' attitude rather than a specific test approach or test design technique, e.g. testing with invalid input values or exceptions. [After Beizer] See the sketch after this list.
21. The percentage of boundary values that have been exercised by a test suite. See the sketch after this list.
22. A high-level metric of effectiveness and/or efficiency used to guide and control progressive test development, e.g. Defect Detection Percentage (DDP). See the sketch after this list.
23. Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.
24. The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]
25. The activity of establishing or updating a test plan.
26. Acronym for Computer Aided Software Engineering.
27. An element of configuration management, consisting of the recording and reporting of information needed to manage a configuration effectively. This information includes a listing of the approved configuration identification, the status of proposed changes to the configuration, and the implementation status of approved changes.
28. The individual element to be tested. There usually is one test object and many test items. See also test object.
29. Separation of responsibilities, which encourages the accomplishment of objective testing. [After DO-178b]
30. The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage. See the sketch after this list.
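A minimal Python sketch for entry 3 (multiple condition testing). The function, names and values are hypothetical; the point is only that the four test cases exercise every combination of the single condition outcomes within the one compound decision.

```python
# Hypothetical function under test: a single statement containing
# two conditions combined in one decision.
def discount_applies(is_member: bool, total_over_limit: bool) -> bool:
    return is_member and total_over_limit

# Multiple condition testing: every combination of the single
# condition outcomes within the statement (2 conditions -> 4 cases).
cases = [
    (True,  True,  True),   # both conditions true
    (True,  False, False),  # first true, second false
    (False, True,  False),  # first false, second true
    (False, False, False),  # both false
]

for is_member, over_limit, expected in cases:
    assert discount_applies(is_member, over_limit) == expected
print("all 4 condition combinations exercised")
```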
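A small sketch for entry 6 (mutation analysis), with a hypothetical program and a hand-written mutant; a real mutation tool would generate mutants automatically. It shows how a stronger test suite discriminates the program from the mutant, while a weaker suite lets the mutant survive.

```python
# Hypothetical program under analysis.
def add(a, b):
    return a + b

# A mutant: a slight variant with one operator changed (+ -> -).
def add_mutant(a, b):
    return a - b

# Mutation analysis measures whether a test suite can discriminate
# the program from the mutant ("kill" it).
weak_suite   = [((5, 0), 5)]               # passes for both versions
strong_suite = [((5, 0), 5), ((2, 3), 5)]  # second case exposes the mutant

def kills(suite):
    return any(add_mutant(*args) != expected for args, expected in suite)

print("weak suite kills mutant:  ", kills(weak_suite))    # False: mutant survives
print("strong suite kills mutant:", kills(strong_suite))  # True: mutant killed
```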
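A sketch for entry 8 (test comparison), assuming a trivial hypothetical component; it contrasts dynamic comparison during execution with a comparison performed after execution from logged results.

```python
# Hypothetical component under test.
def to_upper(text: str) -> str:
    return text.upper()

# Dynamic comparison: actual and expected results are compared
# while the test executes.
actual = to_upper("abc")
expected = "ABC"
assert actual == expected, f"expected {expected!r}, got {actual!r}"

# Post-execution comparison: actual results are logged first and
# checked against expected results afterwards.
log = [("abc", to_upper("abc")), ("x1", to_upper("x1"))]
expected_results = {"abc": "ABC", "x1": "X1"}
mismatches = [(i, a) for i, a in log if expected_results[i] != a]
print("differences found:", mismatches)  # an empty list means all results match
```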
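A sketch for entry 20 (negative testing), using a hypothetical input parser; the tests deliberately feed invalid values and expect the component to reject them rather than accept them silently.

```python
# Hypothetical function under test: parses a non-negative age.
def parse_age(value: str) -> int:
    age = int(value)          # raises ValueError for non-numeric input
    if age < 0:
        raise ValueError("age must be non-negative")
    return age

# Negative tests: aim to show the component does not work with invalid input.
invalid_inputs = ["abc", "", "-3"]
for bad in invalid_inputs:
    try:
        parse_age(bad)
    except ValueError:
        print(f"rejected as expected: {bad!r}")
    else:
        print(f"DEFECT: {bad!r} was accepted")
```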
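A sketch for entry 21 (boundary value coverage), assuming a hypothetical valid range of 1..100 and a two-value interpretation of boundaries (each boundary plus its nearest invalid neighbour); coverage is the share of identified boundary values that the executed inputs actually exercised.

```python
# Hypothetical component: valid scores are 1..100 inclusive.
def is_valid_score(score: int) -> bool:
    return 1 <= score <= 100

# Boundary values for the valid partition [1, 100] and their
# nearest invalid neighbours.
boundary_values = {0, 1, 100, 101}

# Inputs a test suite happened to execute (50 is not a boundary value).
executed_inputs = {1, 100, 50}

covered = boundary_values & executed_inputs
coverage = len(covered) / len(boundary_values) * 100
print(f"boundary value coverage: {coverage:.0f}%")  # 2 of 4 -> 50%
```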
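A sketch for entry 22, showing one common way the Defect Detection Percentage (DDP) is computed: defects found by a test phase divided by all defects eventually known (those found in the phase plus those that escaped to later phases or the field). The figures are invented.

```python
# Defect Detection Percentage (DDP), one common formulation.
def ddp(found_in_test: int, found_later: int) -> float:
    return found_in_test / (found_in_test + found_later) * 100

# Hypothetical figures: 90 defects found during system test,
# 10 more reported after release.
print(f"DDP = {ddp(90, 10):.1f}%")   # 90.0%
```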
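A sketch for entry 30 (path coverage), with a hypothetical function whose two independent decisions give four paths; coverage is the fraction of those paths a suite executed. The relationship to LCSAJ coverage is not modelled here.

```python
from itertools import product

# Hypothetical component with two independent decisions -> 4 paths.
def shipping_cost(express: bool, heavy: bool) -> int:
    cost = 5
    if express:          # decision 1
        cost += 10
    if heavy:            # decision 2
        cost += 7
    return cost

all_paths = set(product([True, False], repeat=2))  # the 4 possible paths
executed  = {(True, True), (False, False)}         # paths a suite actually took

coverage = len(executed & all_paths) / len(all_paths) * 100
print(f"path coverage: {coverage:.0f}%")  # 2 of 4 paths -> 50%
```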