TESTING TERMS
Application: A single software product that may or may not fully support a business function.
Audit: An inspection/assessment activity that verifies compliance with plans, policies, and procedures, and ensures that resources are conserved. Audit is a staff function; it serves as the "eyes and ears" of management.
Baseline: A quantitative measure of the current level of performance.
Benchmarking: Comparing your company's products, services, or processes against best practices, or competitive practices, to help define superior performance of a product, service, or support process.
Benefits Realization Test: A test or analysis conducted after an application is moved into production to determine whether it is likely to meet the originating business case.
Black-box Testing: A test technique that focuses on testing the functionality of the program, component, or application against its specifications without knowledge of how the system is constructed; usually data or business process driven.
Boundary Value Analysis: A data selection technique in which test data is chosen from the "boundaries" of the input or output domain classes, data structures, and procedure parameters. Choices often include the actual minimum and maximum boundary values, the maximum value plus or minus one, and the minimum value plus or minus one.
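For illustration, the sketch below (not part of the glossary) chooses boundary values for a hypothetical field that accepts integers from 1 to 100; the `accepts` validator is an invented stand-in for the code under test.

```python
# Hypothetical field under test: accepts integers from 1 through 100.
MIN_VALUE, MAX_VALUE = 1, 100

def accepts(value):
    # Invented stand-in for the validation logic being tested.
    return MIN_VALUE <= value <= MAX_VALUE

# Boundary value analysis: test each boundary and its immediate neighbors.
boundary_cases = [
    (MIN_VALUE - 1, False),  # just below the minimum: expect rejection
    (MIN_VALUE,     True),   # the minimum itself: expect acceptance
    (MIN_VALUE + 1, True),   # just above the minimum: expect acceptance
    (MAX_VALUE - 1, True),   # just below the maximum: expect acceptance
    (MAX_VALUE,     True),   # the maximum itself: expect acceptance
    (MAX_VALUE + 1, False),  # just above the maximum: expect rejection
]

for value, expected in boundary_cases:
    assert accepts(value) == expected
```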
Bug: A catchall term for all software defects or errors.
Certification: Acceptance of software by an authorized agent after the software has been validated by the agent or after its validity has been demonstrated to the agent.
Check sheet: A form used to record data as it is gathered.
Checkpoint: A formal review of key project deliverables. One checkpoint is defined for each key project deliverable, and verification and validation must be done for each of these deliverables that is produced.
Condition Coverage: A white-box testing technique that measures the number, or percentage, of condition outcomes covered by the test cases designed. 100% condition coverage indicates that every possible outcome of each condition within every decision has been executed at least once during testing.
Configuration Testing: Testing of an application on all supported hardware and software platforms. This may include various combinations of hardware types, configuration settings, and software versions.
Conversion Testing: Validates the effectiveness of data conversion processes, including field-to-field mapping and data translation.
Cost of Quality (COQ): Money spent above and beyond expected production costs (labor, materials, equipment) to ensure that the product the customer receives is a quality (defect-free) product. The Cost of Quality includes prevention, appraisal, and correction or repair costs.
Decision Coverage: A white-box testing technique that measures the number, or percentage, of decision directions executed by the test cases designed. 100% decision coverage indicates that all decision directions have been executed at least once during testing. Alternatively, each logical path through the program can be tested. Often, paths through the program are grouped into a finite set of classes, and one path from each class is tested.
Decision/Condition Coverage: A white-box testing technique that executes possible combinations of condition outcomes in each decision.
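To make the coverage measures above concrete, consider the small sketch below (the `approve` function and its thresholds are invented for illustration): decision coverage requires the `if` to evaluate both true and false, while condition coverage requires each individual condition to take both outcomes.

```python
def approve(age, income):
    # Invented example: one decision composed of two conditions.
    if age >= 18 and income >= 30000:
        return "approved"
    return "denied"

# Decision coverage: the decision as a whole evaluates both true and false.
assert approve(25, 50000) == "approved"  # decision true
assert approve(16, 50000) == "denied"    # decision false

# Condition coverage: each condition takes both outcomes at least once.
# (Python's `and` short-circuits: income is only evaluated when age passes.)
assert approve(25, 50000) == "approved"  # age true, income true
assert approve(25, 10000) == "denied"    # age true, income false
assert approve(16, 50000) == "denied"    # age false (income not evaluated)
```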
Defect: Operationally, it is useful to work with two definitions of a defect: (1) From the producer's viewpoint: a product requirement that has not been met or a product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that define the product; or (2) From the customer's viewpoint: anything that causes customer dissatisfaction, whether in the statement of requirements or not.
Defect Tracking Tools: Tools for documenting defects as they are found during testing and for tracking their status through to resolution.
Desk Checking: The most traditional means for analyzing a system or a program. The developer of a system or program conducts desk checking. The process involves reviewing the complete product to ensure that it is structurally sound and that the standards and requirements have been met. This tool can also be used on artifacts created during analysis and design.
Driver: Code that sets up an environment and calls a module for test.
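As a minimal sketch of a driver (everything here is invented for illustration, including the `calculate_discount` module it exercises):

```python
def calculate_discount(order_total):
    # Invented stand-in for the module under test.
    return order_total * 0.10 if order_total >= 100 else 0.0

def driver():
    # Set up the test environment (here, just the input data),
    # call the module under test, and report each outcome.
    cases = [(50.0, 0.0), (100.0, 10.0), (200.0, 20.0)]
    for order_total, expected in cases:
        actual = calculate_discount(order_total)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: calculate_discount({order_total}) -> {actual}")

if __name__ == "__main__":
    driver()
```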
Entrance Criteria: Required conditions and standards for work product quality that must be present or met for entry into the next stage of the software development process.
Equivalence Partitioning: A test technique that utilizes a subset of data that is representative of a larger class. This is done in place of undertaking exhaustive testing of each value of the larger class of data. For example, a business rule that indicates that a program should edit salaries within a given range ($10,000 to $15,000) might have three equivalence classes to test:
- Less than $10,000 (invalid)
- Between $10,000 and $15,000 (valid)
- Greater than $15,000 (invalid)
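Testing one representative value from each class stands in for testing every possible salary. A minimal sketch of that idea (the `salary_is_valid` edit is an invented stand-in):

```python
def salary_is_valid(salary):
    # Invented stand-in for the salary edit being tested.
    return 10_000 <= salary <= 15_000

# One representative value per equivalence class, with the expected result.
partitions = [
    (9_999,  False),  # less than $10,000: invalid
    (12_500, True),   # between $10,000 and $15,000: valid
    (15_001, False),  # greater than $15,000: invalid
]

for value, expected in partitions:
    assert salary_is_valid(value) == expected
```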
Error or Defect: 1. A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. 2. Human action that results in software containing a fault (e.g., omission or misinterpretation of user requirements in a software specification, incorrect translation, or omission of a requirement in the design specification).
Error Guessing: A data selection technique for picking values that seem likely to cause defects. This technique is based upon the theory that test cases and test data can be developed based on the intuition and experience of the tester.
Exhaustive Testing: Executing the program through all possible combinations of values for program variables.
Exit Criteria: Standards for work product quality that block the promotion of incomplete or defective work products to subsequent stages of the software development process.
Functional Testing: Application of test data derived from the specified functional requirements without regard to the final program structure.
Inspection: A formal assessment of a work product conducted by one or more qualified independent reviewers to detect defects, violations of development standards, and other problems. Inspections involve authors only when specific questions concerning deliverables exist. An inspection identifies defects, but does not attempt to correct them. Authors take corrective actions and arrange follow-up reviews as needed.
Integration Testing: This test begins after two or more programs or application components have been successfully unit tested. It is conducted by the development team to validate the technical quality, or design, of the application. It is the first level of testing that formally integrates a set of programs that communicate among themselves via messages or files (a client and its server(s), a string of batch programs, or a set of on-line modules within a dialog or conversation).
Life Cycle Testing: The process of verifying the consistency, completeness, and correctness of software at each stage of the development life cycle.
Performance Test: Validates that both the on-line response time and batch run times meet the defined performance requirements.
Quality: A product is a quality product if it is defect free. To the producer, a product is a quality product if it meets or conforms to the statement of requirements that defines the product. This statement is usually shortened to: quality means meets requirements. From a customer's perspective, quality means "fit for use".
Quality Assurance (QA): The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that processes are established and continuously improved to produce products that meet specifications and are fit for use.
Quality Control (QC): The process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected. Its focus is defect detection and removal. This is a line function; that is, the performance of these tasks is the responsibility of the people working within the process.
Recovery Test: Evaluates the contingency features built into the application for handling interruptions and for returning to specific points in the application processing cycle, including checkpoints, backups, restores, and restarts. This test also assures that disaster recovery is possible.
Regression Testing: Regression testing is the process of retesting software to detect errors that may have been caused by program changes. The technique requires the use of a set of test cases that have been developed to test all of the software's functional capabilities.
Stress Testing: This test subjects a system, or components of a system, to varying environmental conditions that defy normal expectations, for example: high transaction volume, large database size, or restart/recovery circumstances. The intention of stress testing is to identify constraints and to ensure that there are no performance problems.
Structural Testing: A testing method in which the test data are derived solely from the program structure.
Stub: Special code segments that, when invoked by a code segment under test, simulate the behavior of designed and specified modules not yet constructed.
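A minimal sketch of a stub (the tax service and invoice module are invented for illustration): the unbuilt tax module is replaced by a canned response so its caller can be tested now.

```python
def tax_service_stub(order_total):
    # Simulates the not-yet-constructed tax-calculation module
    # with a canned response, regardless of input.
    return 5.00

def compute_invoice_total(order_total, tax_service=tax_service_stub):
    # Module under test: its real tax dependency does not exist yet,
    # so the stub is injected in its place.
    return order_total + tax_service(order_total)

assert compute_invoice_total(100.0) == 105.0
```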
System Test: During this event, the entire system is tested to verify that all functional, information, structural, and quality requirements have been met. A predetermined combination of tests is designed that, when executed successfully, satisfies management that the system meets specifications. System testing verifies the functional quality of the system in addition to all external interfaces, manual procedures, restart and recovery, and human-computer interfaces. It also verifies that interfaces between the application and the open environment work correctly, that JCL functions correctly, and that the application functions appropriately with the Database Management System, operations environment, and any communications systems.
Test Case: A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
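For instance, those particulars might be captured in a simple structured record like the sketch below (every identifier and value is invented for illustration):

```python
test_case = {
    "id": "TC-001",
    "name": "Valid salary accepted",
    "objective": "Verify the salary edit accepts an in-range value",
    "conditions_setup": "Employee record open in edit mode",
    "input_data": {"salary": 12_500},
    "steps": [
        "Enter the salary in the Salary field",
        "Press Save",
    ],
    "expected_results": "Record saves without a validation error",
}
```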
Test Case Specification: An individual test condition that, executed as part of a larger test, contributes to the test's objectives. Test cases document the input, expected results, and execution conditions of a given test item. Test cases are broken down into one or more detailed test scripts and test data conditions for execution.
Test Data Set: A set of input elements used in the testing process.
Test Design Specification: A document that specifies the details of the test approach for a software feature or a combination of features and identifies the associated tests.
Test Item: A software item that is an object of testing.
Test Log: A chronological record of relevant details about the execution of tests.
Test Plan: A document describing the intended scope, approach, resources, and schedule of testing activities. It identifies test items, the features to be tested, the testing tasks, the personnel performing each task, and any risks requiring contingency planning.
Test Procedure Specification: A document specifying a sequence of actions for the execution of a test.
Test Scripts: A tool that specifies an order of actions that should be performed during a test session. The script also contains expected results. Test scripts may be manually prepared using paper forms, or may be automated using capture/playback tools or other kinds of automated scripting tools.
Test Summary Report: A document that describes testing activities and results and evaluates the corresponding test items.
Testing: Examination by manual or automated means of the behavior of a program by executing the program on sample data sets to verify that it satisfies specified requirements or to verify differences between expected and actual results.
Usability Test: The purpose of this event is to review the application user interface and other human factors of the application with the people who will be using the application. This is to ensure that the design (layout and sequence, etc.) enables the business functions to be executed as easily and intuitively as possible. This review includes assuring that the user interface adheres to documented User Interface standards, and should be conducted early in the design stage of development. Ideally, an application prototype is used to walk the client group through various business scenarios, although paper copies of screens, windows, menus, and reports can be used.
User Acceptance Test: User Acceptance Testing (UAT) is conducted to ensure that the system meets the needs of the organization and the end user/customer. It validates that the system will work as intended by the user in the real world, and is based on real-world business scenarios, not system requirements. Essentially, this test validates that the RIGHT system was built.
Validation: Determination of the correctness of the final program or software produced from a development project with respect to the user needs and requirements. Validation is usually accomplished by verifying each stage of the software development life cycle.
Verification:
- The process of determining whether the products of a given phase of the software development cycle fulfill the requirements established during the previous phase.
- The act of reviewing, inspecting, testing, checking, auditing, or otherwise establishing and documenting whether items, processes, services, or documents conform to specified requirements.
Walkthrough: A manual analysis technique in which the module author describes the module's structure and logic to an audience of colleagues. The technique focuses on error detection, not correction, and will usually use a formal set of standards or criteria as the basis of the review.
White-box Testing: A testing technique that assumes that the path of the logic in a program unit or component is known. White-box testing usually consists of testing paths, branch by branch, to produce predictable results. This technique is usually used during tests executed by the development team, such as Unit or Component testing.