
This is an introduction to the test planning performed by Test
Engineering. The introduction discusses the test planning objectives,
major issues and activities, and the content of the master test plan. The
master test plan is sometimes called the project software test plan.
Test Planning Objectives
The test planning objectives can be broken down into five activities:
identify design and performance requirements; identify the phases of testing
to be performed; determine the test philosophy to be used; specify acceptance
criteria to be met by testing per phase; and mitigate risk.
- Test Engineering must identify design and performance requirements.
Testing should be conducted against baselined requirements. An early
objective of Test Engineering is involvement with requirements analysis
to determine that requirements are testable. These requirements must
then be tracked into test work products and be accounted for.
- Test Engineering must identify the phases of testing to be performed.
Often, the expected test phases are identified in the contract. This
is especially true for the later stages of test: function testing, formal
qualification test (FQT), and system test. Guidelines for unit and
integration test should be developed and documented in a company standards
document available to Test Engineering.
- Test Engineering must determine the test philosophy to be used. The
test philosophy must take into account limited schedule, personnel,
resources, and budget, and determine how to accomplish an acceptable
level of test within those constraints.
- Test Engineering must specify acceptance criteria to be met by testing
per life cycle phase. Acceptance criteria should be available from the
preliminary acceptance test plan submitted with the proposal.
- Test Engineering must mitigate risk. Throughout all planning activities,
risk mitigation must remain a priority. Some methods that assist in
risk mitigation are detailed planning, collecting estimated-time-to-complete
metrics, and using tools to improve and enforce procedures.
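The first objective above, tracking requirements into test work products, can be sketched as a simple traceability check; the requirement IDs and test-case names here are hypothetical, not from the source:

```python
# Hypothetical traceability matrix: requirement IDs mapped to the test
# cases that cover them. A requirement traced to no test case is
# unaccounted for and must be flagged during test planning.

requirements = {"REQ-001", "REQ-002", "REQ-003"}

test_coverage = {
    "test_login": {"REQ-001"},
    "test_timeout": {"REQ-002"},
}

def uncovered(requirements, test_coverage):
    """Return the requirements not traced to any test case."""
    covered = set()
    for reqs in test_coverage.values():
        covered |= reqs
    return requirements - covered

print(sorted(uncovered(requirements, test_coverage)))  # REQ-003 has no test
```

In practice the matrix would be generated from the requirements database and the test work products rather than written by hand, but the gap check is the same.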
Major Issues and Activities
The major test planning issues and activities that are the responsibility
of Test Engineering are:
- Review the contract requirements and company standards to determine
whether additional test activities or phases are needed to define the
test effort.
- Understand the limits placed on each phase of test by schedule,
budget, and resources. Test Engineering must be able to plan
out the test activities within these guidelines.
- Determine which test teams will be responsible for planning and conducting
each phase of test.
- Stress the importance of including all the peripheral activities in
the planning phase:
- Determining the lead-time for developing or selecting test tools,
simulators, test beds, and test drivers.
- Determining if Software Engineering and Test Engineering will share
hardware and software resources.
- Identifying how and when defects will be tracked.
- Identifying what metrics will be collected.
- Another recurring theme is test involvement in the early phases of
the project. This includes all activities focused on finding defects:
requirements analysis, design peer reviews, code peer reviews, etc.
- Determine what skill levels are needed to plan and conduct each phase
of testing.
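The defect-tracking and metrics items above can be sketched minimally; the defect records, phases, and severities below are hypothetical examples of the kind of data identified during planning:

```python
# Hypothetical defect-tracking sketch: log each defect with the phase
# in which it was found, then derive a simple planning metric
# (defects found per test phase).

from collections import Counter

defects = [
    {"id": "D-1", "phase": "unit", "severity": "low"},
    {"id": "D-2", "phase": "integration", "severity": "high"},
    {"id": "D-3", "phase": "unit", "severity": "medium"},
]

def defects_per_phase(defects):
    """Metric: count of defects found in each test phase."""
    return Counter(d["phase"] for d in defects)

print(defects_per_phase(defects))
```

Deciding how and when such records are collected, and which metrics are reported, is exactly the planning decision the bullet list calls out.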

Test Planning Trade Offs
A major goal of Test Engineering in the test planning activity is to determine
how to balance the trade-off among quality, risk, and economics.
Master Test Plan
The goal of the master test plan document is to provide an overview
of the entire test effort. The master test plan focuses on the test
phases planned and conducted by Test Engineering. This document is not
required under DoD MIL-STD-498, MIL-STD-2167A, and many other standards,
and thus is often overlooked. Test Engineering needs to include it in
the test effort: the master test plan is to the test effort what a Software
Development Plan is to the software effort. The master test plan is also
referred to as the Project Software Test Plan.
The Master Test Plan describes the overall testing strategy for both
the informal and formal software tests. This includes defining the required
level and kind of testing to ensure the desired level of confidence in
the software. For example, man-in-the-loop flight-rated software requires
higher reliability than flight simulation software or database
applications.
Projects should conduct software tests at the following levels:
Unit Testing
This is the lowest level of testing, applied to each routine and subroutine
in the software. Generally, each unit has a test driver and several
test cases associated with it. The purpose of unit testing is to
identify as many internal logic errors as possible early in the coding
phase, before the software is released.
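A unit test at this level pairs a routine with a driver and several cases. As a minimal sketch, the `clamp` routine below is a hypothetical unit, with Python's standard `unittest` module serving as the test driver:

```python
import unittest

def clamp(value, low, high):
    """Hypothetical unit under test: restrict value to [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

class ClampTest(unittest.TestCase):
    """Several test cases aimed at internal logic errors in the unit."""

    def test_within_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_below_range(self):
        self.assertEqual(clamp(-1, 0, 10), 0)

    def test_above_range(self):
        self.assertEqual(clamp(11, 0, 10), 10)

    def test_bad_bounds(self):
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)

# Test driver: load and run the unit's test cases.
suite = unittest.TestLoader().loadTestsFromTestCase(ClampTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The cases deliberately probe the boundaries and the error path, which is where internal logic errors tend to hide.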
Integration Testing
At this level of testing, software elements are incrementally integrated
to form successively larger and more complex software builds. The purpose
of this type of testing is both to identify errors and to demonstrate
interface compatibility. At the initial integration stage, software units are
incrementally integrated to form components and components are integrated
to form configuration items. Integration continues until all software
configuration items are integrated with the system level hardware suite
into a single functioning software system. There are two approaches
to integration testing:
- Top-down testing: Top-down testing starts with a high-level
driver that ultimately evolves into the system exerciser; supports
early definition of interfaces, early system level executions, and
parallel testing at several levels.
- Bottom-up testing: Bottom-up testing usually begins on individual
modules (i.e., routines, subroutines, databases, global tables). In
turn, these modules are integrated with other modules in a building
block approach; individual drivers are combined, added to, or modified,
until they become very complicated and usually cannot adequately test
the entire system at one time; this allows early testing of critical
or complex modules.
- Recommended approach: The recommended approach is to use a
combination of top-down and bottom-up testing. This minimizes risks
by rigorously testing critical and complex software early and minimizes
the development of test software by using high level drivers and previously
tested system software to drive tests.
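The bottom-up approach can be illustrated with a minimal integration driver. The modules below are hypothetical: a low-level parser is integrated with a higher-level reporting module, and a stub stands in for a data store that has not yet joined the build:

```python
# Hypothetical bottom-up integration sketch: the parser module is
# exercised through the reporting module that calls it, demonstrating
# interface compatibility between the two. The real data store is not
# yet integrated, so a stub takes its place.

def parse_record(line):
    """Low-level module: parse 'name,value' into a (str, int) pair."""
    name, value = line.split(",")
    return name.strip(), int(value)

class StoreStub:
    """Stub for the real data store, which joins a later build."""
    def __init__(self):
        self.saved = []

    def save(self, record):
        self.saved.append(record)

def report(lines, store):
    """Higher-level module: parse each line and persist the record."""
    for line in lines:
        store.save(parse_record(line))
    return len(store.saved)

# Integration driver: drive the parser through the reporter.
stub = StoreStub()
count = report(["alpha, 1", "beta, 2"], stub)
print(count, stub.saved)
```

When the real store is integrated, the stub is replaced and the same driver re-runs, which is the building-block progression the text describes.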
Configuration Item Testing/Function Testing
Configuration Item Testing (CIT) or Function Testing is the term
for testing a major software entity that is developed and put through
software configuration management as a single item. The purpose of this
type of testing is both to identify errors and to demonstrate software
functionality.
System Testing
System Testing consists of a collection of subtypes, including load/stress,
volume, performance, reliability/availability, degradation and recovery,
configuration, compatibility, security, installation, serviceability,
and human factors. The purpose of system testing is both to identify
errors and to demonstrate the system's ability to meet specifications.
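One of the system-test subtypes above, load/stress, can be sketched as a check that an operation stays within a response-time budget under repeated calls; the operation, call count, and budget below are illustrative assumptions only:

```python
import time

def operation(n):
    """Hypothetical system operation exercised under load."""
    return sum(i * i for i in range(n))

def load_test(calls, budget_seconds):
    """Invoke the operation repeatedly and check that total elapsed
    time stays within the budget (a pass/fail load/stress check)."""
    start = time.perf_counter()
    for _ in range(calls):
        operation(1000)
    elapsed = time.perf_counter() - start
    return elapsed <= budget_seconds, elapsed

ok, elapsed = load_test(calls=100, budget_seconds=5.0)
print("PASS" if ok else "FAIL")
```

A real system test would drive the deployed system through its external interfaces and measure against specified performance requirements, not a local function, but the pass/fail structure is the same.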
Acceptance Testing
Acceptance Testing consists of a combination of any of the above
types, depending upon customer requirements, but primarily consists
of a combination of system test subtypes. The sole purpose of this test
type is to demonstrate the system's ability to meet specifications
and to provide a means to sell off the system to the customer.
Regression Testing
Regression Testing is employed upon implementation of any system/software
changes. Regression testing is a subset of previously executed tests,
the level of which is proportional to the scope and/or impact of the
change; therefore, regression testing includes various combinations
of the previously defined types.
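Selecting a regression subset proportional to the scope of a change can be sketched as filtering the previously executed tests by the modules the change touches; the module and test names here are hypothetical:

```python
# Hypothetical regression selection: each previously executed test is
# tagged with the modules it exercises. The regression subset is every
# test whose tags intersect the changed modules, so its size scales
# with the scope of the change.

test_tags = {
    "test_login": {"auth"},
    "test_report": {"report", "db"},
    "test_backup": {"db"},
    "test_ui_theme": {"ui"},
}

def regression_subset(test_tags, changed_modules):
    """Return the previously executed tests to re-run for this change."""
    changed = set(changed_modules)
    return sorted(
        name for name, tags in test_tags.items() if tags & changed
    )

print(regression_subset(test_tags, ["db"]))
```

A change confined to "db" re-runs only the two database tests, while a change touching every module re-runs the full suite, reflecting the proportionality the text calls for.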
Test Engineering evaluates each of these techniques in terms
of the complexity of the system software capabilities. Then an appropriate
mix is selected to ensure adequate testing of all software requirements.
These test types are documented in the Software Test Plan.
The suggested Software Test Plan contents are:
- Identification of test strategy
  - Unit, integration, function, system, & acceptance
  - Regression
- Resources
  - Hardware, software, data, personnel, & facilities
- Test disclaimer
  - Requirements that are not testable
  - Requirements/features Test Engineering is not responsible for testing
- Test constraints, assumptions, & risks
- Test schedule, personnel, & deliverables
- Project team responsibilities