Created by Jannik Fritz
almost 5 years ago
Question | Answers
--- | ---
Acceptance testing | Formal testing with respect to user needs, requirements and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system |
Ad hoc reviewing | A review technique carried out by independent reviewers informally, without a structured process |
Alpha testing | Simulated or actual operational testing conducted in the developer's test environment, by roles outside the development organization |
Beta testing | Simulated or actual operational testing conducted at an external site, by roles outside the development organization |
Black box test technique | A procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure. |
Boundary value analysis | A black-box test technique in which test cases are designed based on boundary values (see the sketch after this table) |
Checklist-based reviewing | A review technique guided by a list of questions or required attributes |
Checklist-based testing | An experience-based test technique whereby the experienced tester uses a high-level list of items to be noted, checked or remembered, or a set of rules or criteria against which a product has to be verified |
Commercial off the shelf (COTS) | A software product that is developed for the general market, i.e. for a large number of customers in identical format. |
Component integration testing | Testing performed to expose defects in the interfaces and interactions between integrated components |
Component testing | The testing of individual hardware or software components (unit testing) |
Configuration management | A discipline applying technical and administrative direction and surveillance to identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status and verify compliance with specified requirements |
Confirmation testing | Dynamic testing conducted after fixing defects with the objective to confirm that failures caused by those defects do not occur anymore |
Contractual acceptance testing | Acceptance testing conducted to verify whether a system satisfies its contractual requirements |
Coverage | The degree to which specified coverage items have been determined to have been exercised by a test suite, expressed as a percentage |
Data-driven testing | A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data-driven testing is often used to support the application of test execution tools such as capture/playback tools (see the sketch after this table). |
Debugging | The process of finding, analyzing and removing the cause of failures in software. |
Decision coverage | The coverage of decision outcomes, e.g. the true and false outcomes of an if statement (see the sketch after this table)
Decision table testing | A black-box test technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table (see the sketch after this table)
Defect management | The process of recognizing and recording defects, classifying them, investigating them, taking action to resolve them and disposing of them when resolved. |
Dynamic testing | Testing that involves the execution of the software of a component or system |
Entry criteria | The set of conditions for officially starting a defined task (definition of ready) |
Equivalence partitioning | A black-box test technique in which test cases are designed to exercise equivalence partitions by using one representative member of each partition (see the sketch after this table).
Error | A human action that produces an incorrect result |
Error guessing | A test technique in which tests are derived on the basis of the tester's knowledge of past failures, or general knowledge of failure modes.
Exit criteria | The set of conditions for officially completing a defined task |
Experience-based test technique | A procedure to derive and/or select test cases based on the tester's experience, knowledge and intuition.
Exploratory testing | An approach to testing whereby the testers dynamically design and execute tests based on their knowledge, exploration of the test item and the results of previous tests |
Failure | An event in which a component or system does not perform a required function within specified limits |
Formal review | A form of review that follows a defined process with a formally documented output |
Functional testing | Testing conducted to evaluate the compliance of a component or system with functional requirements |
Impact analysis | The identification of all work products affected by a change, including an estimate of the resources needed to accomplish the change |
Informal review | A type of review without a formal (documented) procedure |
Inspection | A type of formal review to identify issues in a work product, which provides measurement to improve the review process and the software development process |
Integration testing | Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. |
Keyword-driven testing | A scripting technique that uses data files to contain not only test data and expected results but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test (see the sketch after this table)
Maintenance testing | Testing the changes to an operational system or the impact of a changed environment on an operational system
Non-functional testing | Testing conducted to evaluate the compliance of a component or system with non-functional requirements |
Operational acceptance testing | Operational testing in the acceptance test phase, typically performed in a (simulated) operational environment by operations and/or systems administration staff focusing on operational aspects, for example, recoverability, resource-behavior, installability and technical compliance |
Performance testing tool | A test tool that generates load for a designated test item and that measures and records its performance during test execution |
Perspective-based reading | A review technique whereby reviewers evaluate the work product from different viewpoints |
Product risk | A risk impacting the quality of a product |
Project risk | A risk that impacts project success |
Quality | The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations |
Quality assurance | Part of quality management focused on providing confidence that quality requirements will be fulfilled |
Regression testing | Testing of a previously tested component or system following modification to ensure that defects have not been introduced or have been uncovered in unchanged areas of the software as a result of the changes made |
Regulatory acceptance testing | Acceptance testing conducted to verify whether a system conforms to relevant laws, policies and regulations |
Review | A type of static testing during which a work product or process is evaluated by one or more individuals to detect issues and to provide improvements |
Risk | A factor that could result in future negative consequences |
Risk-based testing | Testing in which the management, selection, prioritization and use of testing activities and resources are based on corresponding risk types and risk levels |
Risk level | The qualitative or quantitative measure of a risk defined by impact and likelihood |
Role-based reviewing | A review technique where reviewers evaluate a work product from the perspective of different stakeholder roles |
Root cause | A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed |
Scenario-based reviewing | A review technique where the review is guided by determining the ability of the work product to address specific scenarios |
Sequential development model | A type of development life cycle model in which a complete system is developed in a linear way of several discrete and successive phases with no overlap between them. |
State transition testing | A black-box test technique using a state transition diagram or state table to derive test cases to evaluate whether the test item successfully executes valid transitions and blocks invalid transitions (see the sketch after this table) |
Statement coverage | The percentage of executable statements that have been exercised by a test suite (see the sketch after this table)
Static analysis | The process of evaluating a component or system without executing it, based on its form, structure, content or documentation |
Static testing | Testing a work product without code being executed |
System integration testing | Testing the combination and interaction of systems |
System testing | Testing an integrated system to verify that it meets specified requirements |
Technical review | A formal review type by a team of technically-qualified personnel that examines the suitability of a work product for its intended use and identifies discrepancies from specifications and standards |
Test analysis | The activity that identifies test conditions by analyzing the test basis |
Test approach | The implementation of the test strategy for a specific project |
Test automation | The use of software to perform or support test activities, for example, test management, test design, test execution and results checking |
Test basis | The body of knowledge used as the basis for test analysis and design |
Test case | A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions (see the sketch after this table)
Test completion | The activity that makes test assets available for later use, leaves test environments in a satisfactory condition and communicates the results of testing to relevant stakeholders |
Test condition | An aspect of the test basis that is relevant in order to achieve specific test objectives |
Test control | A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned |
Test data | Data created or selected to satisfy the execution preconditions and inputs to execute one or more test cases |
Test design | The activity of deriving and specifying test cases from test conditions |
Test environment | An environment containing hardware, instrumentation, simulators, software tools and other support elements needed to conduct a test |
Test estimation | The calculated approximation of a result related to various aspects of testing (for example, effort spent, completion date, costs involved, number of test cases, etc.), which is usable even if input data may be incomplete, uncertain or noisy |
Test execution | The process of running a test on the component or system under test, producing actual result(s) |
Test execution schedule | A schedule for the execution of test suites within a test cycle |
Test execution tool | A test tool that executes tests against a designated test item and evaluates the outcomes against expected results and postconditions |
Test implementation | The activity that prepares the testware needed for test execution based on test analysis and design. |
Test level | A specific instantiation of a test process |
Test management tool | A tool that provides support to the test management and control part of a test process. It often has several capabilities, such as testware management, scheduling of tests, the logging of results, progress tracking, incident management and test reporting |
Test manager | The person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object |
Test monitoring | A test management activity that involves checking the status of testing activities, identifying any variances from the planned or expected status and reporting status to stakeholders |
Test object | The component or system to be tested |
Test objective | A reason or purpose for designing and executing a test |
Test oracle | A source to determine expected results to compare with the actual result of the system under test |
Test plan | Documentation describing the test objectives to be achieved and the means and the schedule for achieving them, organized to coordinate testing activities |
Test planning | The activity of establishing or updating a test plan |
Test procedure | A sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap-up activities post execution |
Test progress report | A test report produced at regular intervals about the progress of test activities against a baseline, risks, and alternatives requiring a decision |
Test strategy | Documentation that expresses the generic requirements for testing one or more projects run within an organization, providing detail on how testing is to be performed, and is aligned with the test policy |
Test suite | A set of test cases or test procedures to be executed in a specific test cycle |
Test summary report | A test report that provides an evaluation of the corresponding test items against exit criteria |
Test technique | A procedure used to derive and/or select test cases |
Test types | A group of test activities based on specific test objectives aimed at specific characteristics of a component or system |
Tester | A skilled professional who is involved in the testing of a component or system |
Testing | The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects |
Testware | Work products produced during the test process for use in planning, designing, executing, evaluating and reporting on testing |
Traceability | The degree to which a relationship can be established between two or more work products |
Use case testing | A black-box test technique in which test cases are designed to execute scenarios of use cases |
User acceptance test | Acceptance testing conducted in a real or simulated operational environment by intended users focusing on their needs, requirements and business processes |
Validation | Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled |
Verification | Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled |
Walkthrough | A type of review in which an author leads members of the review through a work product and the members ask questions and make comments about possible issues |
White-box test technique | A procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system |
White-box testing | Testing based on an analysis of the internal structure of the component or system |
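
The sketches below illustrate some of the techniques defined above as runnable Python. Every function name, value range and data set in them is a hypothetical assumption chosen for illustration; none of it comes from the ISTQB glossary itself. First, boundary value analysis, assuming a system under test that accepts ages from 18 to 65 inclusive: test values sit on and just outside each boundary.

```python
# Boundary value analysis: hypothetical SUT accepting ages 18..65
# inclusive; test values lie on and just outside each boundary.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

def test_boundary_values():
    cases = [(17, False), (18, True), (19, True),   # lower boundary
             (64, True), (65, True), (66, False)]   # upper boundary
    for age, expected in cases:
        assert is_valid_age(age) == expected, f"age={age}"
```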
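Data-driven testing, assuming a trivial square() function as the system under test: a single control script reads input/expected pairs from a data table, inlined here as CSV; a spreadsheet or CSV file would work the same way.

```python
import csv
import io

TEST_DATA = """input,expected
2,4
3,9
-5,25
"""

def square(x: int) -> int:   # hypothetical system under test
    return x * x

def test_data_driven():
    # one control script executes every row of the data table
    for row in csv.DictReader(io.StringIO(TEST_DATA)):
        x, expected = int(row["input"]), int(row["expected"])
        assert square(x) == expected, f"x={x}"
```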
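Decision coverage, on a hypothetical discount function working in cents: the two tests together exercise both outcomes of the single decision; either one alone would reach only 50% decision coverage.

```python
def apply_discount(total_cents: int) -> int:   # hypothetical SUT
    if total_cents > 10000:          # the decision under measurement
        return total_cents - 1000    # true outcome: 10.00 off
    return total_cents               # false outcome

def test_decision_true_outcome():
    assert apply_discount(15000) == 14000

def test_decision_false_outcome():
    assert apply_discount(5000) == 5000
```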
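Decision table testing, for a hypothetical shipping-fee rule: each row of the table combines the input conditions (causes) with the expected effect, and every combination becomes a test.

```python
def shipping_fee(member: bool, order_over_50: bool) -> int:
    # hypothetical SUT: free shipping for members or large orders
    return 0 if (member or order_over_50) else 499   # fee in cents

DECISION_TABLE = [      # member, order_over_50 -> expected fee
    (True,  True,  0),
    (True,  False, 0),
    (False, True,  0),
    (False, False, 499),
]

def test_decision_table():
    for member, large_order, expected in DECISION_TABLE:
        assert shipping_fee(member, large_order) == expected
```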
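Equivalence partitioning, on the same hypothetical age check as in the boundary value sketch: the input domain splits into three partitions, and one representative value per partition suffices.

```python
def is_valid_age(age: int) -> bool:   # same hypothetical SUT as in
    return 18 <= age <= 65            # the boundary value sketch

def test_equivalence_partitions():
    # one representative per partition: below, inside, above range
    representatives = [(10, False), (40, True), (80, False)]
    for age, expected in representatives:
        assert is_valid_age(age) == expected, f"age={age}"
```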
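Keyword-driven testing: a keyword table, which would normally live in a data file, is interpreted by a small control script that dispatches each keyword to a supporting function. All names here are illustrative.

```python
store = {}   # toy application state under test

def do_set(key, value):              # supporting script for "set"
    store[key] = value

def do_check(key, expected):         # supporting script for "check"
    assert store[key] == expected, f"{key}={store[key]!r}"

ACTIONS = {"set": do_set, "check": do_check}

SCRIPT = [                           # keyword table, as it might
    ("set", "username", "alice"),    # appear in a data file
    ("check", "username", "alice"),
]

def test_keyword_driven():
    for keyword, *args in SCRIPT:
        ACTIONS[keyword](*args)      # control script interprets
                                     # each keyword
```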
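State transition testing, with a hypothetical two-state Door as the test item: the tests exercise the valid transitions and check that an invalid transition is blocked.

```python
class Door:                          # hypothetical test item
    def __init__(self):
        self.state = "closed"

    def open(self):
        if self.state != "closed":
            raise ValueError("invalid transition")
        self.state = "open"

    def close(self):
        if self.state != "open":
            raise ValueError("invalid transition")
        self.state = "closed"

def test_valid_transitions():        # closed -> open -> closed
    door = Door()
    door.open()
    door.close()
    assert door.state == "closed"

def test_invalid_transition_blocked():
    door = Door()
    try:
        door.close()                 # closing an already closed door
        raised = False               # is not a valid transition
    except ValueError:
        raised = True
    assert raised
```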
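Statement coverage, on a hypothetical clamp function: a single test executes every statement, yet, as the comment notes, decision coverage would remain incomplete, which is why the two measures must not be confused.

```python
def clamp(value: int, limit: int = 100) -> int:   # hypothetical SUT
    if value > limit:
        value = limit
    return value

def test_all_statements():
    # executes every statement (100% statement coverage) but never
    # takes the implicit false path of the if, so decision coverage
    # stays at 50%
    assert clamp(150) == 100
```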
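Finally, a test case represented as a structured record, purely to make the parts of the definition concrete; the schema and all values are illustrative, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:          # illustrative record, not a standard schema
    name: str
    preconditions: list
    inputs: dict
    expected_result: str
    postconditions: list = field(default_factory=list)

login_ok = TestCase(
    name="login with valid credentials",
    preconditions=["user 'alice' exists", "user is logged out"],
    inputs={"username": "alice", "password": "secret"},
    expected_result="dashboard is shown",
    postconditions=["session cookie is set"],
)
```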