Concepts: Performance Testing
Performance testing is a class of tests implemented and executed to
characterize and evaluate the performance-related characteristics of the
target-of-test, such as timing profiles, execution flow, response times, and
operational reliability and limits. Different types of performance tests, each
focused on a different test objective, are implemented throughout the software
development life cycle (SDLC). Early in the architecture iterations,
performance tests focus on identifying and eliminating
architecture-related performance bottlenecks. In the construction iterations,
additional types of performance tests are implemented and executed to tune the
software and environment (optimizing response time and resource usage), and to
verify that the applications and system acceptably handle high load and stress
conditions, such as large numbers of transactions, clients, or volumes of
data.
Included in Performance Testing are the following types of tests:
- Benchmark testing: Compares the
performance of a new or unknown target-of-test to a known reference standard,
such as existing software or measurements.
- Contention test: Verifies that the
target-of-test can acceptably handle multiple actors' demands on the same
resource (data records, memory, and so forth).
- Performance profiling: Verifies the acceptability
of the target-of-test's performance behavior using varying configurations
while the operational conditions remain constant.
- Load testing: Verifies the acceptability of the
target-of-test's performance behavior under varying operational conditions
(such as the number of users or the number of transactions) while the
configuration remains constant (see the sketch after this list).
- Stress testing: Verifies the acceptability of the
target-of-test's performance behavior when abnormal or extreme conditions
are encountered, such as diminished resources or an extremely high number of
users.
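For illustration, the following is a minimal load-testing sketch in Python.
The function execute_use_case() is a hypothetical stand-in for the real
target-of-test (for example, an HTTP request or a database transaction), so
the timings it produces are purely illustrative; the point is the structure:
the actor count varies while the configuration stays constant.

    # Minimal load-test sketch: drive a varying number of simulated
    # actors and record individual response times.
    import random
    import time
    from concurrent.futures import ThreadPoolExecutor

    def execute_use_case() -> float:
        """Hypothetical stand-in for one actor executing one use case."""
        start = time.perf_counter()
        time.sleep(random.uniform(0.01, 0.05))  # simulated work
        return time.perf_counter() - start

    def run_load_test(num_actors: int, iterations: int) -> list[float]:
        """Execute the use case concurrently; collect response times."""
        with ThreadPoolExecutor(max_workers=num_actors) as pool:
            futures = [pool.submit(execute_use_case)
                       for _ in range(num_actors * iterations)]
            return [f.result() for f in futures]

    if __name__ == "__main__":
        # Vary the operational conditions (actor count) while the
        # configuration stays constant, as load testing prescribes.
        for actors in (1, 10, 50):
            times = run_load_test(actors, iterations=20)
            print(f"{actors:3d} actors: mean {sum(times) / len(times):.4f} s")

The same harness can serve as a starting point for stress testing by pushing
the actor count well beyond the expected operating range.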
Performance evaluation is normally performed in conjunction with the user
representative and follows a multi-level approach.
The first level of performance analysis involves evaluating the results
for a single actor or use-case instance and comparing results across
several test executions. For example, the performance behavior of a
single actor performing a single use case, with no other activity on the
target-of-test, is captured and compared with the results of several other
executions of the same actor or use case. This first-level analysis can help
identify trends that indicate contention for system resources, which may affect
the validity of the conclusions drawn from other performance test results.
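A minimal sketch of this first-level comparison, assuming the per-run response
times for a single actor's use case have already been collected (the values
below are illustrative):

    # First-level analysis sketch: compare mean response times of the
    # same single-actor use case across several test executions.
    runs = {
        "run_1": [0.021, 0.019, 0.022, 0.020],
        "run_2": [0.020, 0.023, 0.021, 0.019],
        "run_3": [0.045, 0.048, 0.044, 0.047],  # drifting run
    }

    baseline = sum(runs["run_1"]) / len(runs["run_1"])
    for name, samples in runs.items():
        mean = sum(samples) / len(samples)
        drift = (mean - baseline) / baseline * 100
        flag = "  <-- possible resource contention" if abs(drift) > 25 else ""
        print(f"{name}: mean {mean:.4f} s ({drift:+.1f}% vs run_1){flag}")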
A second level of analysis examines the summary statistics and actual data
values for a specific actor or use-case execution, along with the
target-of-test's performance behavior. Summary statistics include standard
deviations and percentile distributions of the response times, which indicate
the variability in system responses as seen by individual actors.
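For example, these summary statistics can be computed with the Python standard
library; the sample response times below are illustrative:

    # Second-level analysis sketch: summary statistics for one use
    # case's response times.
    import statistics

    response_times = [0.021, 0.019, 0.025, 0.031, 0.022, 0.090, 0.024, 0.020]

    mean = statistics.mean(response_times)
    stdev = statistics.stdev(response_times)
    # quantiles(n=100) yields the 1st..99th percentile cut points.
    pct = statistics.quantiles(response_times, n=100)
    print(f"mean : {mean:.4f} s")
    print(f"stdev: {stdev:.4f} s  (variability seen by individual actors)")
    print(f"p90  : {pct[89]:.4f} s")
    print(f"p95  : {pct[94]:.4f} s")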
A third level of analysis helps in understanding the causes and significance
of performance problems. This detailed analysis takes the low-level data and
applies statistical methods to help testers draw correct conclusions from it.
Detailed analysis provides objective, quantitative criteria for making
decisions, but it is more time-consuming and requires a basic understanding of
statistics.
Detailed analysis uses the concept of statistical significance
to help determine whether differences in performance behavior are real or due
to some random event associated with the test data collection. The idea is
that, at a fundamental level, there is randomness associated with any event.
Statistical testing determines whether there is a systematic difference that
cannot be explained by random events.
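As one concrete form such a test might take, the sketch below applies a
two-sample t-test to response times measured under two configurations. It
assumes SciPy is available, and the measurements are illustrative:

    # Detailed-analysis sketch: a two-sample t-test to judge whether
    # two sets of response times differ systematically or only by chance.
    from scipy import stats

    config_a = [0.021, 0.019, 0.025, 0.022, 0.020, 0.023, 0.024, 0.021]
    config_b = [0.027, 0.030, 0.026, 0.031, 0.029, 0.028, 0.032, 0.027]

    t_stat, p_value = stats.ttest_ind(config_a, config_b)
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Difference is statistically significant at the 5% level.")
    else:
        print("Difference could be explained by random variation.")

A small p-value indicates a systematic difference between the two
configurations; a large one means the observed difference could plausibly be
explained by random variation alone.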
See Concepts: Key Measures of Test for more
information on the different performance test reports.
Copyright © 1987 - 2001 Rational Software Corporation