Unit 9 Software Integration Testing

This unit discusses designing integration test plans and cases. It provides activities, metrics-collection ideas, and tools for integration test planning, integration test execution, and integration test reporting.

Designing Integration Test Plans and Cases

Let us first address the contractual issue regarding documentation at this stage. Does the contract require a formal integration test plan document? Or are the test plans, cases, and results kept informally in the software (or integration) development folders and delivered to the contracting agency that way? If so, referencing earlier documentation in the integration test plan may become an internal documentation process for the organization. The guidelines, objectives, requirements specifications, and design specifications are a minimum set of inputs. The table below identifies the documentation pertaining to the project and its availability.

NOTE: Delete or add items as appropriate.

Document (and version/date)        | Created or Available | Received or Reviewed | Author or Resource | Notes
-----------------------------------+----------------------+----------------------+--------------------+------
Requirements Specification         | o Yes  o No          | o Yes  o No          |                    |
Functional Specification           | o Yes  o No          | o Yes  o No          |                    |
Use Case Reports                   | o Yes  o No          | o Yes  o No          |                    |
Project Plan                       | o Yes  o No          | o Yes  o No          |                    |
Design Specifications              | o Yes  o No          | o Yes  o No          |                    |
Prototype                          | o Yes  o No          | o Yes  o No          |                    |
User Manuals                       | o Yes  o No          | o Yes  o No          |                    |
Business Model/Flow                | o Yes  o No          | o Yes  o No          |                    |
Data Model/Flow                    | o Yes  o No          | o Yes  o No          |                    |
Business Functions and Rules       | o Yes  o No          | o Yes  o No          |                    |
Project/Business Risk Assessment   | o Yes  o No          | o Yes  o No          |                    |

 

The test plan identifies what is going to be tested. It documents the following objectives for the project:

  1. Identify existing project information and the software components that should be tested
  2. List the recommended Test Requirements (high level)
  3. Recommend and describe the testing strategies to be employed
  4. Identify the required resources and provide an estimate of the test efforts
  5. List the deliverable elements of the test project

Designing Integration Test Cases


[Figure: Design integration test plans]

There is a required set of inputs to begin writing test cases and to provide guidance. Test data is generated and/or identified. The results of the test case planning are documented in the Integration Test Plan or in an integration test procedures document. Each test case identifies how an objective is going to be tested. A test strategy is developed to present the recommended approach to testing the software applications. The considerations for the test strategy are the techniques to be used and the criteria for knowing when the testing is complete. In addition to the considerations provided for the possible tests listed next, testing should only be executed using known, controlled databases in a secure environment. Below is example content for an Integration Test Plan.

 

Integration Test Plan

Build ID:

Date Test Conducted:

Version Control ID:

Build Contents:

 

Requirements to be validated:

1.

2.

3.

 

Test Tools, Drivers, or Special Conditions:

 

 

Input Data:

 

 

Expected Output Data:

 

Test Procedures:

1.

2.

3.

Analysis Procedures:

1.

2.

3.

 

 

Twelve possible testing types are given that could be used to design integration test cases, depending on the project being developed. The testing types are:

  1. Data and Database Integrity Testing
  2. System Testing
  3. Business Cycle Testing
  4. User Interface Testing
  5. Performance Testing
  6. Load Testing
  7. Stress Testing
  8. Volume Testing
  9. Security and Access Control Testing
  10. Failover / Recovery Testing
  11. Configuration Testing
  12. Installation Testing

Data and Database Integrity Testing
The databases and the database processes should be tested as separate systems, without the applications that normally serve as the interface to the data. Additional research into the DBMS needs to be performed to identify the tools and techniques that may exist to support the testing identified below.

 

Test Objective:

Ensure Database access methods and processes function properly and without data corruption

Technique:

  • Invoke each database access method and process, seeding each with valid and invalid data (or requests for data)
  • Inspect the database to ensure the data has been populated as intended, all database events occurred properly, or review the returned data to ensure that the correct data was retrieved (for the correct reasons)

Completion Criteria:

All database access methods and processes function as designed and without any data corruption

Special Considerations:

  • Testing may require a DBMS development environment or drivers to enter or modify data directly in the databases
  • Processes should be invoked manually
  • Small or minimally sized databases (limited number of records) should be used to increase the visibility of any non-acceptable events
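
As an illustration of the technique above, here is a minimal sketch in Python using the built-in sqlite3 module and a deliberately small, controlled database. The insert_customer() access method, its validation rule, and the schema are hypothetical stand-ins for the project's real database processes and DBMS.

    import sqlite3

    def insert_customer(conn, name, credit_limit):
        """Hypothetical database access method under test."""
        if credit_limit < 0:                     # assumed validation rule
            raise ValueError("credit_limit must be non-negative")
        conn.execute("INSERT INTO customer (name, credit_limit) VALUES (?, ?)",
                     (name, credit_limit))
        conn.commit()

    conn = sqlite3.connect(":memory:")           # small, controlled database
    conn.execute("CREATE TABLE customer (name TEXT, credit_limit REAL)")

    insert_customer(conn, "Acme", 5000.0)        # seed with valid data

    try:                                         # seed with invalid data
        insert_customer(conn, "Bogus", -1.0)
        raise AssertionError("invalid data was accepted")
    except ValueError:
        pass                                     # rejected as designed

    # Inspect the database directly to confirm what was (and was not) stored.
    rows = conn.execute("SELECT name, credit_limit FROM customer").fetchall()
    assert rows == [("Acme", 5000.0)], "unexpected database contents"
    print("database access methods behaved as designed, no corruption found")

Keeping the table nearly empty, as recommended above, makes any unintended row immediately visible in the final inspection step.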

 

System Testing
Testing of the application should focus on any target requirements that can be traced directly to use cases (or business functions), and business rules. The goals of these tests are to verify proper data acceptance, processing, and retrieval, and the appropriate implementation of the business rules. This type of testing is based upon black box techniques, that is, verifying the application (and its internal processes) by interacting with the application via the GUI and analyzing the output (results). Identified below is an outline of the testing recommended for each application:

Test Objective:

Ensure proper application navigation, data entry, processing, and retrieval.

Technique:

  • Execute each use case, use case flow, or function, using valid and invalid data, to verify the following:
    • The expected results occur when valid data is used
    • The appropriate error / warning messages are displayed when invalid data is used
    • Each business rule is properly applied

Completion Criteria:

  • All planned tests have been executed
  • All identified defects have been addressed

Special Considerations:

  • Identify / describe those items or issues (internal or external) that impact the implementation and execution of System test
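
Below is a minimal black-box sketch written with the third-party pytest framework. The app_submit_order() entry point, its quantity rule, and its error message are hypothetical; in practice each use case would be driven through the GUI rather than by calling a function directly.

    import pytest

    def app_submit_order(quantity):
        """Hypothetical application entry point enforcing a business rule."""
        if not 1 <= quantity <= 100:             # assumed business rule
            return {"ok": False, "error": "quantity must be between 1 and 100"}
        return {"ok": True, "order_total": quantity * 9.99}

    @pytest.mark.parametrize("quantity, should_pass", [
        (1, True), (100, True),                  # valid boundary data
        (0, False), (101, False),                # invalid data outside the rule
    ])
    def test_use_case_submit_order(quantity, should_pass):
        result = app_submit_order(quantity)
        if should_pass:
            # The expected results occur when valid data is used.
            assert result["ok"] and result["order_total"] > 0
        else:
            # The appropriate error message appears when invalid data is used.
            assert not result["ok"] and "quantity" in result["error"]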

 

Business Cycle Testing
Business Cycle Testing should emulate the activities performed on the system over time. A period should be identified, such as one year, and the transactions and activities that would occur during that period should be executed. This includes all daily, weekly, and monthly cycles, as well as events that are date-sensitive, such as ticklers.

Test Objective

Ensure proper application and background processes function according to required business models and schedules

Technique:

  • Testing will simulate several business cycles by performing the following:
    • The tests used for application function testing will be modified / enhanced to increase the number of times each function is executed to simulate several different users over a specified period
    • All time or date sensitive functions will be executed using valid and invalid dates or time periods
    • All functions that occur on a periodic schedule will be executed / launched at the appropriate time
  • Testing will include using valid and invalid data, to verify the following:
    • The expected results occur when valid data is used
    • The appropriate error / warning messages are displayed when invalid data is used
    • Each business rule is properly applied

Completion Criteria:

  • All planned tests have been executed
  • All identified defects have been addressed

Special Considerations:

  • System dates and events may require special support activities
  • Business model is required to identify appropriate test requirements and procedures
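
A minimal sketch of the cycle simulation follows, in Python. The run_daily() and run_month_end() jobs and the one-year period are hypothetical placeholders for the system's real scheduled processes and date-sensitive ticklers.

    from datetime import date, timedelta

    def run_daily(today):                        # hypothetical daily batch job
        return f"daily {today.isoformat()}"

    def run_month_end(today):                    # hypothetical date-sensitive event
        return f"month-end {today.isoformat()}"

    def simulate_business_year(start=date(2025, 1, 1)):
        """Step a simulated clock through one year, firing each cycle on time."""
        log, day = [], start
        end = start.replace(year=start.year + 1)
        while day < end:
            log.append(run_daily(day))           # daily cycle
            if (day + timedelta(days=1)).day == 1:
                log.append(run_month_end(day))   # monthly cycle, last day of month
            day += timedelta(days=1)
        return log

    log = simulate_business_year()
    assert sum("month-end" in entry for entry in log) == 12   # one per month
    print(f"{len(log)} scheduled executions simulated over one year")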

 

User Interface Testing
User Interface (UI) testing verifies a user’s interaction with the software. The goal of UI Testing is to ensure that the User Interface provides the user with the appropriate access and navigation through the functions of the applications. In addition, UI Testing ensures that the objects within the UI function as expected and conform to corporate or industry standards.

 

Test Objective:

Verify the following:

  • Navigation through the application properly reflects business functions and requirements, including window to window, field to field, and use of access methods (tab keys, mouse movements, accelerator keys)
  • Window objects and characteristics, such as menus, size, position, state, and focus conform to standards

Technique:

  • Create/modify tests for each window to verify proper navigation and object states for each application window and object

Completion Criteria:

Each window is successfully verified to remain consistent with the benchmark version or within the acceptable standard

Special Considerations:

  • Not all properties of custom and third-party objects can be accessed
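
One possible automation of the navigation check is sketched below, assuming a web front end and the third-party Selenium WebDriver package; the URL, the field ids, and the expected tab order are hypothetical placeholders for the application's real windows.

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.keys import Keys

    # Hypothetical tab order reflecting the business flow of the window.
    EXPECTED_TAB_ORDER = ["customer-name", "contract-id", "submit-button"]

    driver = webdriver.Chrome()
    driver.get("http://localhost:8000/contracts/new")    # hypothetical window

    # Verify field-to-field navigation using the tab-key access method.
    driver.find_element(By.ID, EXPECTED_TAB_ORDER[0]).click()
    for expected_id in EXPECTED_TAB_ORDER[1:]:
        driver.switch_to.active_element.send_keys(Keys.TAB)
        actual_id = driver.switch_to.active_element.get_attribute("id")
        assert actual_id == expected_id, \
            f"focus moved to {actual_id!r}, expected {expected_id!r}"

    driver.quit()
    print("tab-order navigation matches the window specification")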

 

Performance Testing
Performance testing measures response times, transaction rates, and other time-sensitive requirements. The goal of Performance testing is to verify and validate that the performance requirements have been achieved. Performance Testing is usually executed several times, each using a different "background load" on the system. The initial test should be performed with a "nominal" load, similar to the normal load experienced (or anticipated) on the target system. A second Performance Test is run using a peak load.

Additionally, Performance Tests can be used to profile and tune a system’s performance as a function of conditions such as workload or hardware configurations.

NOTE: Transactions below refer to "logical business transactions." These transactions are defined as specific functions that an end user of the system is expected to perform using the application, such as add or modify a given contract.

Test Objective:

Validate system response time for designated transactions or business functions under the following two conditions:

  • normal anticipated volume
  • anticipated worst-case volume

Technique:

  • Use Test Procedures developed for Business Model Testing (System Testing).
  • Modify data files (to increase the number of transactions) or the scripts to increase the number of iterations each transaction occurs.
  • Scripts should be run on one machine (best case to benchmark single user, single transaction) and be repeated with multiple clients (virtual or actual, see special considerations below).

Completion Criteria:

  • Single transaction/single user: Successful completion of the test scripts without any failures and within the expected/required time allocation (per transaction)
  • Multiple transactions/multiple users: Successful completion of the test scripts without any failures and within acceptable time allocation.

Special Considerations:

  • Comprehensive performance testing includes having a "background" load on the server. There are several methods that can be used to perform this, including:
    • "Drive transactions" directly to the server, usually in the form of SQL calls.
    • Create "virtual" user load to simulate many (usually several hundred) clients. Remote Terminal Emulation tools are used to accomplish this load. This technique can also be used to load the network with "traffic."
    • Use multiple physical clients, each running test scripts to place a load on the system.
  • Performance testing should be performed on a dedicated machine or at a dedicated time. This permits full control and accurate measurement.
  • The databases used for Performance testing should be either actual size, or scaled equally.
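
A minimal sketch of the measurement itself follows, in Python. The add_contract() transaction, the client count, and the two-second requirement are hypothetical; a real test would drive the deployed system using the modified Business Model Testing procedures described above.

    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    def add_contract():                          # hypothetical business transaction
        time.sleep(0.01)                         # stand-in for real application work

    def timed(transaction):
        start = time.perf_counter()
        transaction()
        return time.perf_counter() - start

    # Single transaction / single user: benchmark the best case on one machine.
    single = [timed(add_contract) for _ in range(50)]

    # Multiple transactions / multiple users: a simulated 20-client load.
    with ThreadPoolExecutor(max_workers=20) as pool:
        loaded = list(pool.map(lambda _: timed(add_contract), range(200)))

    for label, times in (("single user", single), ("20 clients", loaded)):
        p95 = statistics.quantiles(times, n=20)[-1]      # 95th percentile
        print(f"{label}: median={statistics.median(times):.3f}s  p95={p95:.3f}s")
        assert p95 < 2.0, f"{label} exceeded the required response time"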

 

Load Testing
Load testing subjects the system under test to varying workloads to evaluate the system's ability to continue to function properly under these different workloads. The goal of load testing is to determine and ensure that the system functions properly beyond the expected maximum workload. Additionally, load testing evaluates the performance characteristics (response times, transaction rates, and other time-sensitive issues).

NOTE: Transactions below refer to "logical business transactions." These transactions are defined as specific functions that an end user of the system is expected to perform using the application, such as add or modify a given contract.

Test Objective:

Verify System Response time for designated transactions or business cases under varying workload conditions

Technique:

  • Use tests developed for Business Cycle Testing
  • Modify data files (to increase the number of transactions) or the tests to increase the number of times each transaction occurs

Completion Criteria:

  • Multiple transactions / multiple users: Successful completion of the tests without any failures and within acceptable time allocation

Special Considerations:

  • Load testing should be performed on a dedicated machine or at a dedicated time. This permits full control and accurate measurement
  • The databases used for load testing should be either actual size, or scaled equally
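
The sketch below, in Python, steps the same hypothetical transaction through increasing workloads and verifies that it continues to complete correctly at each level; run_transaction() and the client counts are stand-ins for the real transaction mix.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def run_transaction(i):                      # hypothetical logical transaction
        time.sleep(0.005)                        # stand-in for real work
        return True                              # True = completed correctly

    for clients in (1, 10, 50, 100):             # nominal through beyond-maximum
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=clients) as pool:
            results = list(pool.map(run_transaction, range(clients * 20)))
        elapsed = time.perf_counter() - start
        assert all(results), f"functional failure at {clients}-client workload"
        print(f"{clients:>3} clients: {len(results) / elapsed:.0f} transactions/s")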

 

Stress Testing
Stress testing is intended to find errors due to low resources or competition for resources. Low memory or disk space may reveal defects in the software that aren't apparent under normal conditions. Other defects might result from competition for shared resources, such as database locks or network bandwidth. Stress testing identifies the peak load the system can handle.

NOTE: References to transactions below refer to logical business transactions.

Test Objective:

Verify the system and software function properly and without error under the following stress conditions:

  • little or no memory available on the server (RAM and DASD)
  • maximum (actual or physically capable) number of clients connected (or simulated)
  • multiple users performing the same transactions against the same data / accounts
  • worst case transaction volume / mix (see performance testing above).

NOTES: Stress testing’s goal might also be stated as identifying and documenting the conditions under which the system FAILS to continue functioning properly.

Stress testing of the client is described under Configuration Testing in Module 9.

Technique:

  • Use tests developed for Performance Testing.
  • To test limited resources, tests should be run on a single machine, and RAM and DASD on the server should be reduced (or limited).
  • For remaining stress tests, multiple clients should be used, either running the same tests or complementary tests to produce the worst case transaction volume / mix.

Completion Criteria:

All planned tests are executed and specified system limits are reached / exceeded without the system or software failing (or the conditions under which system failure occurs are outside of the specified conditions).

Special Considerations:

  • Stressing the network may require network tools to load the network with messages / packets.
  • The DASD used for the system should temporarily be reduced to restrict the available space for the database to grow.
  • Synchronizing the simultaneous clients' access to the same records / data accounts.
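
Below is a minimal sketch of one stress condition, many simulated clients updating the same account record simultaneously, in Python with sqlite3 standing in for the DBMS; the schema, the 20-client count, and the scratch database file are hypothetical.

    import sqlite3
    import threading

    DB = "stress_scratch.db"                     # throwaway scratch database file
    setup = sqlite3.connect(DB)
    setup.execute("CREATE TABLE IF NOT EXISTS account (id INTEGER PRIMARY KEY, balance INTEGER)")
    setup.execute("DELETE FROM account")
    setup.execute("INSERT INTO account VALUES (1, 0)")
    setup.commit()
    setup.close()

    def client(deposits=100):
        """One simulated client hammering the shared account record."""
        local = sqlite3.connect(DB, timeout=30)  # wait on locks instead of failing
        for _ in range(deposits):
            local.execute("UPDATE account SET balance = balance + 1 WHERE id = 1")
            local.commit()
        local.close()

    threads = [threading.Thread(target=client) for _ in range(20)]
    for t in threads: t.start()
    for t in threads: t.join()

    check = sqlite3.connect(DB)
    (balance,) = check.execute("SELECT balance FROM account WHERE id = 1").fetchone()
    assert balance == 20 * 100, f"lost updates under contention: balance={balance}"
    print("no lost updates under simultaneous same-record access")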

 

Volume Testing
Volume Testing subjects the software to large amounts of data to determine if limits are reached that cause the software to fail. Volume testing also identifies the continuous maximum load or volume the system can handle for a given period. For example, if the software is processing a set of database records to generate a report, a Volume Test would use a large test database and check that the software behaved normally and produced the correct report.

Test Objective:

Verify the application/system successfully functions under the following high volume scenarios:

  • maximum (actual or physically capable) number of clients connected (or simulated) all performing the same, worst case (performance) business function for an extended period
  • maximum database size has been reached (actual or scaled) and multiple queries / report transactions are executed simultaneously

Technique:

  • Use tests developed for Performance Testing
  • Multiple clients should be used, either running the same tests or complementary tests to produce the worst case transaction volume/mix (see stress test above) for an extended period
  • Maximum database size is created (actual, scaled, or filled with representative data) and multiple clients used to run queries/report transactions simultaneously for extended periods

Completion Criteria:

All planned tests have been executed and specified system limits are reached/exceeded without the system or software failing

Special Considerations:

  • What period of time would be considered acceptable for sustained high-volume conditions (as noted above)?
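
A minimal sketch of the maximum-database-size scenario follows, in Python with sqlite3 standing in for the DBMS; the one-million-record count and the report definition are hypothetical and would be scaled to the real system's maximum.

    import sqlite3

    ROWS = 1_000_000                             # scaled stand-in for maximum size

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount INTEGER)")
    conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                     ((i % 100,) for i in range(ROWS)))
    conn.commit()

    # The report under test: total order amount, verified against the seed data.
    (total,) = conn.execute("SELECT SUM(amount) FROM orders").fetchone()
    expected = sum(i % 100 for i in range(ROWS))
    assert total == expected, "report incorrect at maximum database volume"
    print(f"report produced correctly over {ROWS:,} records")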

© January 1, 2006 James C. Helm, PhD., P.E.