Guidelines: Important Decisions in Implementation

Topics

  • Decide How to Perform the Workflow
  • Decide How to Use Artifacts
  • Decide Unit Test Coverage
  • Decide How to Review Code

Decide How to Perform the Workflow

The following decisions should be made regarding the Implementation discipline's workflow:

  • Decide how to perform the workflow by looking at the Implementation: Workflow. Study the diagram with its guard conditions and the guidelines below. Decide which workflow details to perform and in which order. 
  • Decide which parts of the Implementation workflow details to perform. The following table lists some parts that can be introduced relatively independently of each other.

Part of workflow: Integration and build management
Comments: The role Integrator and the Activity: Plan System Integration, together with the Artifact: Integration Build Plan, are usually introduced early in the project. The other integration-related activities, such as Activity: Plan Subsystem Integration, Activity: Integrate Subsystem, and Activity: Integrate System, are introduced just in time, when integration starts.

Part of workflow: Implementing components
Comments: The roles Implementer and Code Reviewer, and their activities and artifacts, are introduced at the start of implementation in each iteration.

  • Decide when, during the project lifecycle, to introduce each part of the workflow. You can often wait until the Elaboration phase before introducing the whole Implementation discipline. Any prototyping that occurs in the Inception phase is usually exploratory and is not conducted with the same rigor (with respect to artifacts and reviews, for example) as the complete Implementation workflow requires during the Elaboration and Construction phases.

Document the decisions in the Development Case, under the headings Disciplines, Implementation, Workflow.  

Decide How to Use Artifacts

Decide which artifacts to use and how to use each of them. The table below describes those artifacts you must have and those used in some cases. For more detailed information on how to tailor each artifact, and a discussion of the advantages and disadvantages of that specific artifact, read the section titled "Tailoring" for each artifact.

For each artifact, decide how the artifact should be used: Must have, Should have, Could have or Won't have. For more details, see Guidelines: Classifying Artifacts.

Artifact: Implementation Model (Implementation Subsystem, Component)

Purpose: The implementation model consists of the source code, executables, and all other artifacts needed to build and manage the system in the run-time environment.

An implementation subsystem is a collection of components and other implementation subsystems, and is used to structure the implementation model by dividing it into smaller parts.

A component represents a piece of software code (source, binary, or executable), or a file containing information (for example, a startup file or a ReadMe file). A component can also be an aggregate of other components; for example, an application consisting of several executables. (A sketch of one possible mapping to code follows this table.)

Tailoring: All software projects have an implementation model whose components include, as a minimum, some source code and executables. Some projects also include subsystems, libraries, and visual models. Subsystems are useful when there are many components to organize.

Artifact: Integration Build Plan

Purpose: Defines the order in which the components and subsystems should be implemented, which builds to create when integrating the system, and how they are to be assessed.

Tailoring: Optional. Recommended if you need to plan the integration; omit it only when the integration is trivial.
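
To make these terms more concrete, here is a minimal, hypothetical sketch of how they often map onto a Java code base; the package and class names are invented for illustration, and other mappings (for example, a subsystem as a separate build project or library) are equally valid.

    // Hypothetical illustration only: an implementation subsystem realized as a
    // Java package, and a component realized as a source file that is compiled
    // and packaged into a binary component (for example, ordermgmt.jar).
    package com.acme.ordermgmt;   // the "ordermgmt" implementation subsystem

    /** A component: one source file inside the ordermgmt subsystem. */
    public class OrderService {

        /** Places an order and returns a confirmation identifier. */
        public String placeOrder(String customerId, String productId) {
            // A real implementation would delegate to other components
            // (classes, libraries) inside or outside this subsystem.
            return customerId + "-" + productId;
        }
    }

In this mapping, an aggregate component such as the complete application would correspond to the set of JAR files, executables, and startup files produced by the build.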

Tailor each artifact by performing the steps described in the Activity: Develop Development Case, under the heading "Tailor Artifacts per Discipline".

Decide Unit Test Coverage

Decide the extent to which unit testing will be performed and the level of code coverage required, on a scale that ranges from informal testing to 100% code coverage. This scale is described in the Test Plan.

The level of unit test coverage is often driven by the needs of the integration and system testers to whom the code is handed over, because they depend on the quality of that code for their own work. If the code has too many defects, the integration and system testers will have to send it back to the implementers too often. This is a sign of a poor development process, and the solution may be to require the implementers to do more thorough unit testing.

Of course, you cannot expect unit-tested code to be completely free of defects. You do, however, need to find a "healthy" balance between the effort spent on unit testing and the resulting code quality.

The level of unit test coverage can also differ between phases. Even a safety-critical project that requires 100% code coverage during the Construction and Transition phases does not usually require it during Elaboration, because many classes are only partially implemented at that stage.
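
As a concrete illustration, assuming a Java project that uses JUnit for its unit tests (the class and method names below are invented), full branch coverage of even a small method requires one test per branch, while an informal level of testing might exercise only the common path:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class DiscountCalculatorTest {

        // Hypothetical method under test, included here to keep the example
        // self-contained. It has two branches.
        static int discountedPrice(int price, boolean isMember) {
            return isMember ? price - (price / 10) : price;
        }

        // Together these two tests give 100% branch coverage of the method;
        // either one alone corresponds to a lower, partial level of coverage.
        @Test
        public void memberGetsTenPercentDiscount() {
            assertEquals(90, discountedPrice(100, true));
        }

        @Test
        public void nonMemberPaysFullPrice() {
            assertEquals(100, discountedPrice(100, false));
        }
    }

A coverage tool can then report the percentage actually achieved, to be compared against the target level recorded in the Test Plan.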

Decide How to Review Code

Decide to what extent the code should be reviewed. 

The advantages of code reviews are:

  • To enforce and encourage a common coding style on the project. Code reviewing is an efficient way to help project members follow the Programming Guidelines. For this purpose, it is more important to review results from every Implementer than to review every source code file.
  • To find errors that automated tests do not catch (a typical example is sketched after this list).
  • To share knowledge between individuals and to transfer knowledge from the more experienced individuals to the less experienced individuals.
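
For example, the following hypothetical Java fragment shows the kind of defect a reviewer typically catches even though the unit tests pass; the class and method names are invented for illustration:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class ConfigLoader {

        // Before review: the stream is never closed, so the method leaks a file
        // handle. A unit test that only checks the returned Properties passes.
        public Properties loadUnsafe(String path) throws IOException {
            FileInputStream in = new FileInputStream(path);
            Properties props = new Properties();
            props.load(in);
            return props;                      // REVIEW: stream is never closed
        }

        // After review: the stream is released in a finally block, in line with
        // a typical "release acquired resources" rule in the Programming Guidelines.
        public Properties load(String path) throws IOException {
            FileInputStream in = new FileInputStream(path);
            try {
                Properties props = new Properties();
                props.load(in);
                return props;
            } finally {
                in.close();
            }
        }
    }

Tests that only check the returned Properties object pass in both versions; it is the reviewer, guided by the Programming Guidelines, who notices the leaked stream.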

The disadvantages of code reviews are:

  • They take time and resources.
  • If not done properly, they may be inefficient. There is a danger that code reviews are held "just because we have to" rather than as an effective complement to automated testing.

For more information about code reviewing, also see Activity: Review Code.

Code reviewing adds significant value to a project. Projects that measure defect levels and maintenance problems in relation to code reviews consistently report that the reviews pay off. In many organizations, however, it is difficult to make code reviews "take off", for several reasons:

  • Not enough data is collected to verify whether code reviewing actually works.
  • Too much data is collected.
  • Implementers are very protective about their code.
  • The reviews get bogged down in formalities.
  • Administering reviews takes too much effort.

Keep the following in mind to make the best possible use of code reviews:

  • Collect only the data you actually need.
  • Measure the performance of the reviews, and display the results (a minimal sketch of such a measurement follows this list).
  • Use reviews in a "lean" way.
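
As an illustration of what "adequate data" and measuring review performance might look like, the sketch below computes two simple review measurements; the metrics and numbers are assumptions for the example, not something prescribed by this process:

    public class ReviewMetrics {

        /** Defects found per thousand lines of reviewed code. */
        public static double defectsPerKloc(int defectsFound, int linesReviewed) {
            return defectsFound / (linesReviewed / 1000.0);
        }

        /** Defects found per person-hour spent reviewing. */
        public static double defectsPerHour(int defectsFound, double reviewHours) {
            return defectsFound / reviewHours;
        }

        public static void main(String[] args) {
            // Example: a two-hour review of 400 lines that finds 6 defects.
            System.out.printf("%.1f defects/KLOC%n", defectsPerKloc(6, 400));     // 15.0
            System.out.printf("%.1f defects per hour%n", defectsPerHour(6, 2.0)); // 3.0
        }
    }

Displaying such figures over a few iterations is often enough to show whether the reviews are paying off.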
 
