Important Decisions in Test



Decide How to Perform the Workflow

The following decisions should be made regarding the Test discipline's workflow:

  • Decide how to perform the workflow by looking at the Test: Workflow. Study the diagram with its guard conditions and the guidelines below.
  • Decide what parts of the Test workflow details to perform. One key issue for the Test workflow is to decide which quality dimensions are of interest for the project in general and, most importantly, for each iteration (see Concepts: Types of Tests). Decide which combination of test types to focus on in the current iteration.

Document the decisions in the Development Case, under the headings Disciplines, Test, Workflow.

Decide How to Use Artifacts

Decide which artifacts to use and how to make effective use of them. The list below describes the artifacts we recommend you make use of and those you might consider using in particular contexts. For more detailed information on how to tailor each artifact, and a discussion of the advantages and disadvantages of that specific artifact, read the section titled "Tailoring" for each artifact.

For each artifact, decide how the artifact should be used: Must have, Should have, Could have or Won't have. For more details, see Guidelines: Classifying Artifacts.


Each entry below names the artifact, states its purpose, and gives tailoring guidance (recommended or optional).

Test Evaluation Summary
Purpose: Summarizes the Test Results for use primarily by the management team and other stakeholders external to the test team.
Tailoring: Recommended for most projects.
Test Results
Purpose: The analyzed results determined from the raw data in one or more Test Logs.
Tailoring: Recommended. Most test teams retain some form of reasonably detailed record of the results of testing. Manual testing results are usually recorded directly here and combined with the distilled Test Logs from automated tests.

In some cases, test teams will go directly from the Test Logs to producing the Test Evaluation Summary.

[Iteration] Test Plan
Purpose: Defines testing goals, objectives, approach, resources, and deliverables for an iteration.
Tailoring: Recommended for most projects.

A separate Test Plan per iteration is recommended unless the testing is trivial, in which case you should include the Test Plan as a section within the Iteration Plan.

Consider whether to maintain a "Master" Test Plan in addition to the "Iteration" Test Plans. The Master Test Plan typically covers logistic and process information that relates to the entire project lifecycle and is unlikely to change.

Test Ideas List
Purpose: An enumerated list of ideas, often partially formed, to be considered as useful tests to conduct.
Tailoring: Recommended for most projects.

In some cases these lists will be informally defined and discarded once Test Scripts or Test Cases have been defined from them.

Test Script, Test Data
Purpose: The realization or implementation of the test, where the Test Script embodies the procedural aspects and the Test Data the defining characteristics.
Tailoring: Recommended for most projects.

Where projects differ is how formally these artifacts are treated. In some cases they are informal and transitory, and the test team is judged on other criteria.

In other cases, especially with automated tests, the Test Scripts and associated Test Data (or some subset thereof) are regarded as a major deliverable of the test effort.

Test Suite
Purpose: Used to group related Test Scripts and to define any Test Script execution sequences that are required for tests to work correctly.
Tailoring: Recommended for most projects.
Workload Analysis Model
Purpose: Defines a representative workload so that risks associated with the performance of the system under load can be assessed.
Tailoring: Recommended for most systems, especially those where system performance under load must be evaluated or where there is significant risk of performance issues.

Not usually required for systems that will be deployed on a standalone target system.

Test Case
Purpose: Defines a specific set of test inputs, execution conditions, and expected results. Documenting test cases allows them to be reviewed for completeness and correctness, and considered before implementation effort is planned and expended. This is most useful where the inputs, execution conditions, and expected results are particularly complex.
Tailoring: We recommend that you define at least the complex Test Cases on most projects. Depending on contractual obligations, you may need to define more.

In most other cases we recommend using the Test Ideas List instead of Test Cases.

Some projects describe Test Cases at a high level and defer the details to the Test Scripts. One effective tailoring technique is to document the Test Cases as comments in the Test Scripts; the sketch at the end of this section illustrates this.

Test Classes in the Design Model, Test Components in the Implementation Model
Purpose: The Design Model and Implementation Model include Test Classes and Test Components if the project has to develop significant additional specialized behavior to accommodate and support testing.
Tailoring: Stubs are a common category of Test Classes and Test Components.

Test Log
Purpose: The raw data output during test execution, typically from automated tests.
Tailoring: Many projects that perform automated testing will have some form of Test Log. Where projects differ is whether the Test Logs are retained or discarded after the Test Results have been determined.

You might retain Test Logs if you need to satisfy certain audit requirements, or if you want to analyze how the raw data changes over time.

Test Automation Architecture
Purpose: Provides an architectural overview of the test automation system, using a number of different architectural views to depict different aspects of the system.
Tailoring: Optional. Recommended on projects where the test architecture is relatively complex.

In many cases this may simply be a whiteboard diagram that is placed centrally for interested parties to consult.

Test Interface Specification
Purpose: Defines a set of behaviors required of a classifier (specifically, a Class, Subsystem, or Component) for the purposes of test access (testability).
Tailoring: Optional. On many projects, the visible operations on classes, user interfaces, and so on already provide sufficient accessibility for test, so no separate specification is needed.

Common reasons to create Test Interface Specifications include UI extensions that allow GUI test tools to drive the application under test, and diagnostic message-logging routines, especially for batch processes.


Tailor each artifact by performing the steps described in the Activity: Develop Development Case, under the heading "Tailor Artifacts per Discipline".
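
To make the Test Case, Test Script, and Test Data distinctions above concrete, the following sketch shows one way an automated Test Script can carry its Test Case as comments while keeping the Test Data separate from the procedural steps. It is a minimal JUnit example under assumed names: the Test Case identifier TC-017, the class OrderDiscountTest, and the nested DiscountCalculator stand-in are all hypothetical, not part of RUP or of any particular project.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

/**
 * Test Script for a hypothetical "order discount" behavior.
 *
 * Test Case TC-017 (documented here as comments instead of a separate
 * Test Case document):
 *   Inputs:               order total 1200.00, customer status GOLD
 *   Execution conditions: discounts enabled in the configuration
 *   Expected result:      a 10% discount (120.00) is applied
 */
public class OrderDiscountTest {

    // Test Data: the defining characteristics of the test, kept apart from
    // the procedural steps so they can be varied without editing the script.
    private static final double ORDER_TOTAL = 1200.00;
    private static final String CUSTOMER_STATUS = "GOLD";
    private static final double EXPECTED_DISCOUNT = 120.00;

    // Test Script: the procedural aspect of the test.
    @Test
    public void goldCustomerReceivesTenPercentDiscount() {
        DiscountCalculator calculator = new DiscountCalculator();
        double discount = calculator.calculateDiscount(ORDER_TOTAL, CUSTOMER_STATUS);
        assertEquals(EXPECTED_DISCOUNT, discount, 0.001);
    }

    // Minimal stand-in for the production class so the sketch is self-contained;
    // in a real project this would live in the Implementation Model.
    static class DiscountCalculator {
        double calculateDiscount(double total, String status) {
            return "GOLD".equals(status) ? total * 0.10 : 0.0;
        }
    }
}
```

Whether scripts like this are treated as transitory working material or as a formally reviewed deliverable is exactly the tailoring decision described for the Test Script and Test Data artifacts above.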

Decide How to Review Artifacts

This section gives some guidelines to help you decide how you should review the test artifacts. For more details, see Guidelines: Review Levels.

Test Cases

Test Cases are created by the test team and are usually treated as Informal, meaning they are approved by someone within the test team.

Where useful, Test Cases might be approved by other team members and should then be treated as Formal-Internal.

If a customer wants to validate a product before it's released, some subset of the Test Cases could be selected as the basis for that validation. These Test Cases should be treated as Formal-External.

Test Scripts

Test Scripts are usually treated as Informal; that is, they are approved by someone within the test team.

If the Test Scripts are to be used by many testers, and shared or reused for many different tests, they should be treated as Formal-Internal.

Test artifacts in design and implementation

Test Classes are found in the Design Model, and Test Components in the Implementation Model. There are also two related, although not test-specific, artifacts: Packages in the Design Model and Subsystems in the Implementation Model.

These artifacts are like other design and implementation artifacts; however, they are created for the purpose of testing. The natural place to keep them is with the design and implementation artifacts. Remember to name or otherwise label them so that they are clearly separated from the design and implementation of the core system.
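
As a minimal illustration of that labeling, the sketch below shows a stub, a common kind of Test Class and Test Component, kept visibly separate from the core design by its name. The interface and class names (CreditCheckService, TestCreditCheckStub) are hypothetical and only stand in for whatever the project's design actually contains.

```java
// Production interface, part of the core Design Model (hypothetical example).
interface CreditCheckService {
    boolean isCreditworthy(String customerId);
}

// Test Class / Test Component: a stub that always approves, so that
// order-processing tests can run before the real credit service exists.
// The "Test" prefix (and, typically, a dedicated test package) labels it
// as test support rather than part of the core design.
class TestCreditCheckStub implements CreditCheckService {
    @Override
    public boolean isCreditworthy(String customerId) {
        return true; // canned answer; realistic behavior is out of scope for a stub
    }
}
```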

Defects

Defects are usually treated as Informal and are usually handled in a defect-tracking system. On a small project, you can manage the defects as a simple list, for example, in your favorite spreadsheet. This is only manageable for small systems; as the number of people involved and the number of defects grow, you'll need to start using a more flexible defect-tracking system.

Another decision to make is whether you need to separate the handling of defects (also known as symptoms or failures) from faults (the actual errors). For small projects, you may manage by tracking only the defects and handling the faults implicitly. However, as the system grows, you usually need to manage defects and faults separately. For example, several defects may be caused by the same fault, so when a fault is fixed you need to find the reported defects and inform the users who submitted them, which is only possible if defects and faults can be identified separately.
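
As a rough sketch of that separation, assuming hypothetical record types Defect and Fault rather than any particular tracking tool's schema, the idea is that several defect reports can point at one underlying fault, so fixing the fault yields the list of reports, and submitters, to notify:

```java
import java.util.ArrayList;
import java.util.List;

// Fault: the actual error in the system; one fault may explain many defects.
class Fault {
    final String id;
    final List<Defect> reportedDefects = new ArrayList<>();

    Fault(String id) { this.id = id; }

    // When the fault is fixed, every linked defect (and its submitter)
    // can be identified and notified.
    List<Defect> defectsToNotifyOnFix() { return reportedDefects; }
}

// Defect: a reported symptom or failure, as submitted by a user or tester.
class Defect {
    final String id;
    final String submitter;
    Fault cause; // assigned during analysis; several defects may share one fault

    Defect(String id, String submitter) { this.id = id; this.submitter = submitter; }

    void linkTo(Fault fault) {
        this.cause = fault;
        fault.reportedDefects.add(this);
    }
}
```

With a structure like this, two defects submitted by different users can both be linked to the same fault; closing that fault immediately identifies whom to inform. A single flat defect list cannot answer that question, which is why larger projects usually separate the two.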

Iteration and Master Test Plans

In any project where the testing is nontrivial, you need a Test Plan for each iteration. Optionally, you might also maintain a Master Test Plan. In many cases these plans are treated as Informal; that is, they are not reviewed and approved. Where testing has important visibility to external stakeholders, they could be treated as Formal-Internal or even Formal-External.

Decide on Iteration Approval Criteria

You must decide who is responsible for determining whether an iteration has met its criteria. As you enter each iteration, strive to have clearly defined up front how the test effort is expected to demonstrate this and how it will be measured. This individual or group then decides whether the iteration has met its criteria, whether the system fulfills the desired quality criteria, and whether the Test Results are sufficient and satisfactory to support these conclusions.

The following are examples of ways to handle iteration approval:

  • The project management team approves the iteration and the system quality, based on recommendations from the system testers.
  • The customer approves the system quality, based on recommendations from the system testers.
  • The customer approves the system quality, based on the results of a demonstration that exercises a certain subset of the total tests. This set must be formally defined and agreed beforehand, preferably early in the iteration. These tests are treated as Formal-External.
  • The customer approves the system quality by conducting their own independent tests. Again, the nature of these tests should be clearly defined and agreed beforehand, preferably early in the iteration. These tests are treated as Formal-External.

This is an important decision—you cannot reach a goal if you don't know what it is.



Copyright  © 1987 - 2001 Rational Software Corporation

