


Workload Analysis Document
A workload analysis document identifies the variables used in the different performance tests and defines their values, which are used to simulate or emulate the actor characteristics, end-user business functions (use cases), load, and volume.

Overview

Software quality is assessed along different dimensions, including reliability, function, and performance (see Concepts: Quality Dimensions). The Workload Analysis Document (see Artifact: Workload Analysis Document) is created to identify and define the different variables that affect or influence an application or system's performance and the measures required to assess performance. The workload analysis document is used by the following roles:

  • the test designer (see Role: Test Designer) uses the workload analysis document to derive test cases for the different performance tests
  • the tester (see Role: Tester) uses the workload analysis document to better understand the goals of the test and execute it properly
  • the user representative (see Role: Stakeholder) uses the workload analysis document to assess the appropriateness of the workload, test cases, and tests being executed to assess performance

The information included in the workload analysis document focuses on the characteristics and attributes of the following primary variables:

  • Use Cases (see Artifact: Use Case) to be executed and evaluated during the performance tests
  • Actors (see Artifact: Actor) to be simulated / emulated during the performance tests
  • Load - number of and type of simultaneous actors, use cases executed, and percent of time or throughput
  • The deployed system (actual, simulated, or emulated) to be used to execute and evaluate the performance tests (see Artifact: Software Architecture Document, Deployment view)

Performance tests, as their name implies, are executed to measure and evaluate the performance characteristics or behaviors of the target-of-test. Successfully designing, implementing, and executing performance tests requires identifying these variables and assigning them both realistic and exceptional values.

Use Cases and Use Case Attributes

Two types of use cases are identified and used for performance testing:

  • critical use cases - the use cases being measured and evaluated in the performance test
  • significant use cases - non-critical use cases that may impact the performance behaviors of critical use cases

Critical Use Cases

Not all use cases implemented in the target-of-test will be the target of a performance test. Critical use cases are those use cases that will be the focus of a performance test - that is, their performance behaviors will be measured and evaluated.

To identify the critical use cases, identify those use cases that meet one or more of the following criteria:

  • use cases that require a measurement and assessment of performance
  • use cases that are executed frequently by one or more actors
  • use cases that represent a high percentage of system use
  • use cases that require significant system resources

List the critical use cases for inclusion in the performance test.

As the critical use cases are being identified and listed, the use-case flow of events should be reviewed. Specifically, begin to identify the sequence of events between the actor (type) and the system when the use case is executed.

Additionally, identify (or verify) the following information:

  • Preconditions for the use cases, such as the state of the data (what data should / should not exist) and the state of the target-of-test.
  • Data that may be constant (the same) or must differ from one use case instance to the next.
  • Relationship between the use case and other use cases, such as the sequence in which the use cases must be performed.
  • The frequency of execution of the use case, such as the number of simultaneous instances of the use case or as a percent of the total load on the system.
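For illustration only, the attributes gathered for each critical use case could be recorded in a simple structure such as the sketch below. The field names and the "Withdraw Cash" example values are assumptions, not something prescribed by this guideline:

```python
from dataclasses import dataclass, field

@dataclass
class CriticalUseCase:
    """Attributes recorded for one critical use case in the workload analysis."""
    name: str
    preconditions: list[str]                                # required state of data and target-of-test
    constant_data: dict = field(default_factory=dict)       # data held constant across instances
    variable_data: list[str] = field(default_factory=list)  # data that must differ per instance
    depends_on: list[str] = field(default_factory=list)     # use cases that must execute before / after
    percent_of_load: float = 0.0                             # frequency as a percent of total system load

# Hypothetical example entry for the ATM system discussed later in this document.
withdraw_cash = CriticalUseCase(
    name="Withdraw Cash",
    preconditions=["account exists", "account balance >= withdrawal amount"],
    constant_data={"currency": "USD"},
    variable_data=["account number", "withdrawal amount"],
    depends_on=[],
    percent_of_load=40.0,
)
```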

Significant Use Cases

Unlike critical use cases, which are the focus of the performance test, significant use cases are those use cases that may impact the performance behaviors of critical use cases. Significant use cases include those use cases that meet one or more of the following criteria:

  • use cases that must be executed before or after executing a critical use case
  • use cases that are executed frequently by one or more actors
  • use cases that represent a high percentage of system use
  • use cases that require significant system resources
  • use cases that execute routinely on the deployed system while critical use cases are executed, such as e-mail or background printing

As the significant use cases are being identified and listed, review the use case flow of events and additional information as done above for the critical use cases.

Actors and Actor Attributes

Successful performance testing requires not only identifying the actors executing the critical and significant use cases, but also simulating / emulating actor behavior. That is, one instance of an actor may interact with the target-of-test differently (take longer to respond to prompts, enter different data values, etc.) than another instance of an actor while executing the same use case and use-case flow of events. Consider the simple use cases below:

Actors and use cases in an ATM machine.

The actor "Customer" in the first instance of the use-case execution, in an experienced ATM user, but in another instance of the actor, it is an inexperienced ATM user. The experienced ATM actor quickly navigates through the ATM user-interface and spends little time reading each prompt, instead, pressing the buttons by rote. The inexperienced ATM actor however, reads each prompt and takes extra time to interpret the information before responding. Realistic performance tests reflect this difference to ensure accurate assessment of the performance behaviors when the target-of-test is deployed.

Begin by identifying the actors for each use case identified above. Then identify the different actor stereotypes that may execute each use case. In the ATM example above, we may have the following actor stereotypes:

  • Experienced ATM user
  • Inexperienced ATM user
  • ATM user's account is "inside" the ATM's bank network (user's account is with bank owning ATM)
  • ATM user's account is outside the ATM's bank network (competing bank)

For each actor stereotype, identify the different values for actor attributes such as:

  • Think time - the period of time it takes for an actor to respond to a target-of-test's individual prompts
  • Typing rate - the rate at which the actor interacts with the interface
  • Request Pace - the rate at which the actor makes requests of the target-of-test
  • Repeat factor - the number of times a use case or request is repeated in sequence
  • Interaction method - the method of interaction used by the actor, such as using the keyboard to enter in values, tabbing to a field, using accelerator keys, etc., or using the mouse to "point and click", "cut and paste", etc.

Additionally, for each actor stereotype, identify their work profile, specifying all the use cases and flows they execute and the percentage of time or proportion of effort spent by the actor executing each use case. This information is used in identifying and creating a realistic load (see Load and Load Attributes below).
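As a sketch, an actor stereotype, its behavioral attributes, and its work profile might be captured in a structure like the one below. All names and values (including the "Check Balance" use case) are illustrative assumptions for the ATM example:

```python
from dataclasses import dataclass

@dataclass
class ActorStereotype:
    """One actor stereotype and the attributes used to simulate / emulate it."""
    name: str
    think_time_sec: tuple     # (min, max) seconds spent responding to each prompt
    typing_rate_cps: float    # characters per second entered at the interface
    request_pace_sec: float   # average seconds between requests to the target-of-test
    repeat_factor: int        # times a use case or request is repeated in sequence
    interaction_method: str   # e.g. "keyboard", "mouse"
    work_profile: dict        # use case name -> percent of this actor's effort

# Hypothetical stereotypes for the ATM example above.
experienced_user = ActorStereotype(
    name="Experienced ATM user",
    think_time_sec=(1, 3),
    typing_rate_cps=4.0,
    request_pace_sec=2.0,
    repeat_factor=1,
    interaction_method="keyboard",
    work_profile={"Withdraw Cash": 80.0, "Check Balance": 20.0},
)

inexperienced_user = ActorStereotype(
    name="Inexperienced ATM user",
    think_time_sec=(5, 15),
    typing_rate_cps=1.0,
    request_pace_sec=8.0,
    repeat_factor=1,
    interaction_method="keyboard",
    work_profile={"Withdraw Cash": 90.0, "Check Balance": 10.0},
)
```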

System Attributes and Variables

The specific attributes and variables of the deployment system that uniquely identify the environment must also be identified, as these attributes also impact the measurement and evaluation of performance. These attributes include:

  • The physical hardware (CPU speed, memory, disk caching, etc.)
  • The deployment architecture (number of servers, distribution of processing, etc.)
  • The network attributes
  • Other software (and use cases) that may be installed and executed simultaneously to the target-of-test

Identify and list the system attributes and variables that are to be considered for inclusion in the performance tests. This information may be obtained from several sources, including the Deployment view of the Software Architecture Document (see Artifact: Software Architecture Document).
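A minimal sketch of how these system attributes might be recorded alongside the workload definition; the attribute names and values below are assumptions chosen only to illustrate the categories listed above:

```python
# Hypothetical record of the deployed system configuration used for a test run.
system_under_test = {
    "hardware": {"cpu": "4 x 2.4 GHz", "memory_gb": 16, "disk_cache": "write-back"},
    "deployment": {"web_servers": 2, "app_servers": 2, "db_servers": 1},
    "network": {"bandwidth_mbps": 100, "latency_ms": 5},
    "concurrent_software": ["e-mail client", "background printing"],
}
```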

Load and Load Attributes

As stated previously, load is one of the factors impacting the performance behaviors of a target-of-test. Load is defined as:

  • "an instance of the emulated end-users interacting with the target-of-test and the variables that affect system use and performance"

Accurately identifying the load that will be used to execute and evaluate the performance behaviors is critical. Typically, performance testing is executed several times, using different loads, each load a variation of the attributes described below:

  • The number of simultaneous actors interacting with the target-of-test
  • The type of actors (and the use cases executed by each) interacting with the target-of-test
  • The frequency of each critical use case executed and how often it is executed in sequence (repetition)

For each load used to evaluate the performance of the target-of-test, identify the values for each of the above variables. The values used for each variable in the different loads may be obtained from the Business Use-Case Model (see Artifact: Business Use-Case Model), or derived by observing or interviewing actors. At a minimum, three loads should be derived:

  • Optimal - a load that reflects the best possible deployment conditions, such as one or few actors interacting with the system, executing only the critical use cases, with few or no additional software or use cases executing during the test.
  • Nominal - a load that reflects current deployment conditions.
  • Peak - a load that reflects the worst case deployment conditions, such as the maximum number of actors, executing the most critical use cases, with many or all of the additional software and use cases also executing.
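For illustration, the three loads could be expressed as configurations such as the hypothetical sketch below, each stating how many simultaneous actors of each stereotype run which use cases. All stereotype names, use cases, and numbers are assumptions:

```python
# Each load names the simultaneous actors (by stereotype) and the use-case mix they execute.
loads = {
    "optimal": {
        "actors": {"Experienced ATM user": 1},
        "use_case_mix": {"Withdraw Cash": 100.0},   # percent of executions
        "background_software": [],
    },
    "nominal": {
        "actors": {"Experienced ATM user": 30, "Inexperienced ATM user": 20},
        "use_case_mix": {"Withdraw Cash": 70.0, "Check Balance": 30.0},
        "background_software": ["e-mail client"],
    },
    "peak": {
        "actors": {"Experienced ATM user": 120, "Inexperienced ATM user": 80},
        "use_case_mix": {"Withdraw Cash": 60.0, "Check Balance": 40.0},
        "background_software": ["e-mail client", "background printing"],
    },
}
```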

When performance testing includes Stress Testing (see Concepts: Performance Test and Concepts: Test Types), several additional loads should be identified, each specifically targeting and setting one system or load variable to be beyond the expected normal capacity of the deployed system.

Performance Measurements and Criteria

Successful performance testing can only be achieved if the tests are measured and the performance behaviors evaluated. In identifying performance measurements and criteria, the following factors should be considered:

  • What measurements are to be made?
  • Where / what are the critical measurement points in the target-of-test / use-case execution?
  • What are the criteria to be used for determining acceptable performance behavior?

Performance Measurements

There are many different measurements that can be made during test execution. Identify the significant measurements to be made and justify why they are the most significant measurements.

Listed below are the more common performance behaviors monitored or captured:

  • Test script state or status - a graphical depiction of the current state, status, or progress of the test execution
  • Response time / Throughput - measurement (or calculation) of response times or throughput (usually stated as transactions per second).
  • Statistical performance - a measured (or calculated) depiction of response time / throughput using statistical methods, such as mean, standard deviation, and percentiles.
  • Traces - capturing the messages / conversations between the actor (test script) and the target-of-test, or the dataflow and / or process flow during execution.

See Concepts: Key Measures of Test for additional information.
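As a sketch of the "statistical performance" measurement listed above, response-time samples collected during a run could be summarized as shown below. The sample values are assumptions, and the percentile helper uses a simple nearest-rank method:

```python
import statistics

# Hypothetical response-time samples (seconds) captured for one measured event.
samples = [2.1, 2.4, 1.9, 2.8, 3.2, 2.2, 2.0, 2.6, 2.3, 4.1]

def percentile(values, pct):
    """Return the pct-th percentile of values using the nearest-rank method."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

print("mean     :", statistics.mean(samples))
print("std dev  :", statistics.stdev(samples))
print("90th pct :", percentile(samples, 90))
```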

Critical Performance Measurement Points

In the Use Cases and Use Case Attributes section above, it was noted that not all use cases are executed for performance testing. Similarly, not all performance measures are made for each executed use case. Typically there is a specific use-case flow (or several) that is (are) targeted for measurement, or there may be a specific sequence of events within a use-case flow of events that will be measured to assess the performance behavior. Care should be taken to select the most significant starting and ending "points" for measuring the performance behaviors. The most significant ones are typically the most visible sequences of events, or those that we can affect directly through changes to the software or hardware.

For example, in the ATM - Cash Withdraw use case identified above, we may measure the performance characteristics of the entire use case, from the point where the Actor initiates the withdrawal to the point at which the use case is terminated - that is, the Actor receives their bank card and the ATM is ready to accept another card - as shown by the black "Total Elapsed Time" line in the diagram below:

[Diagram: sequence of events for the Cash Withdraw use case, showing the "Total Elapsed Time" and the individual event sequences labeled A through F.]

Notice, however, that there are many sequences of events that contribute to the total elapsed time: some we may have control over (such as reading the card information, verifying the card type, and initiating communication with the bank system - items B, D, and E above), but others we have no control over (such as the actor entering their PIN or reading the prompts before entering their withdrawal amount - items A, C, and F). In the above example, in addition to measuring the total elapsed time, we would measure the response times for sequences B, D, and E, since these are the response times most visible to the actor (and we may affect them via the software / hardware for deployment).
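Below is a minimal sketch of how a test script might capture the total elapsed time and the system-controlled sequences (B, D, and E) separately. The lambdas are hypothetical placeholders for the driver calls a test tool would provide; they are not part of any real tool's API:

```python
import time

timings = {}

def timed(label, action):
    """Execute one sequence of events and record its elapsed time under a label."""
    start = time.perf_counter()
    action()
    timings[label] = time.perf_counter() - start

# The lambdas below stand in for hypothetical driver calls; actor-controlled
# sequences (A, C, F - think time) would occur between them in a real script.
total_start = time.perf_counter()
timed("B: read card / verify card type", lambda: None)
timed("D: initiate communication with bank system", lambda: None)
timed("E: dispense cash / return card", lambda: None)
timings["Total Elapsed Time"] = time.perf_counter() - total_start

for label, seconds in timings.items():
    print(f"{label}: {seconds:.3f} s")
```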

Performance Measurement Criteria

Once the critical performance measures and measurement points have been identified, review the performance criteria. Performance criteria are stated in the Supplementary Specifications (see Artifact: Supplementary Specifications). If necessary, revise the criteria.

Two criteria are often used for performance measurement:

  • response time or throughput rate
  • percentiles

Response time, measured in seconds, or throughput rate, measured by the number of transactions (or messages) processed per unit of time, is the main criterion.

For example, using the Cash Withdraw use case, the criterion is stated as "events B, D, and E (see diagram above) must each occur in under 3 seconds (for a combined total of under 9 seconds)". If, during testing, we note that any one of the events identified as B, D, or E takes longer than the stated 3-second criterion, we would note a failure.

Percentile measurements are combined with the response times and / or throughput rates and are used to "statistically ignore" measurements that fall outside of the stated criteria. For example, suppose the performance criterion for the use case now states "for the 90th percentile, events B, D, and E must each occur in under 3 seconds ...". During test execution, if 90 percent of all performance measurements fall within the stated criterion, no failures are noted.
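A sketch of how the 90th-percentile criterion above might be evaluated against collected measurements. The 3-second threshold and the 90th percentile follow the example in the text; the function name and sample data are assumptions:

```python
def meets_percentile_criterion(samples, threshold_sec=3.0, percentile=90.0):
    """Pass if at least `percentile` percent of samples fall under the threshold."""
    within = sum(1 for s in samples if s < threshold_sec)
    return 100.0 * within / len(samples) >= percentile

# Hypothetical response times (seconds) measured for event B across one test run.
event_b_samples = [2.1, 2.4, 1.9, 2.8, 3.2, 2.2, 2.0, 2.6, 2.3, 2.7]
print("Event B meets criterion:", meets_percentile_criterion(event_b_samples))
```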
