Principles of Software Testing for Testers

About this course

This course is intended to help customers learn how to test software better. Parallel to the Rational courses, RMUC and OOAD, this course focuses on principles and proven software engineering practices, and does so in the framework of RUP. The course does not require a computer and does not delve into tools.

Most of this course was developed by Cem Kaner, with guidance from Paul Szymkowiak, who wrote the RUP content in this area. The course draws on Kaner’s notes from his industrial courses on black box software testing, his academic courses at Florida Institute of Technology and his work (supported by the National Science Foundation) to produce a national core curriculum in software testing.


About these notes

These supplemental notes are intended only to help you, as a Rational instructor, prepare to teach this course. These are not for distribution.

Most of these instructor notes are taken from transcripts of Cem Kaner’s delivery of a Master Class 4/6-7/2002 to RUP instructors. When you see the first person singular in the notes, it is a quote from Kaner’s presentation.

As compiler, I (Sam Guckenheimer) have tried to edit the transcripts for improved readability and to tie the discussion to the appropriate slides. I have not done a thorough copy edit. There are lots of transcription mistakes that remain in these notes. (These I have classified as P4 bugs – no plan to fix.)

Only four modules are covered – the ones from the master class. Not all slides have supplemental notes here and the notes vary enormously in length, depending on the breadth of discussion that a topic received in the first master class.

Along with the course notes, every student will get a book, Lessons Learned in Software Testing by Cem Kaner, James Bach, and Bret Pettichord. The student notes make frequent reference to this book. To keep the student notes simple, where the content overlaps with Lessons Learned, the notes reference the pages or chapter of the book.

Sam Guckenheimer August 28, 2002

Table of contents

Principles of Software Testing for Testers
About this course
About these notes
Table of contents
Common controversies
Purpose of this section
General classroom management
Which is the one technique / style I should use?
Can I mix techniques?
Automation
Should all tests be automated?
What experiences have you had with automated testing?
Requirements, specs and documentation
Should testers demand specs or requirements documents as a prerequisite to testing?
Do you need to update the project documentation every time you run a test that wasn’t directly in the spec?
Staffing
Who should you hire as a tester?
How do you get subject matter expertise?
Is the test group’s role QA or testing?
Context
Short Answer
Kaner’s Answer
Steve Hunt’s Answer
The Process Improvement Rathole
Variation: What is the ship-decision role of your test group?
Module 1 - What Testers Should Know About Software Engineering Practices
Module 2 - Core Concepts of Software Testing
Slide 2.2
Slide 2.4: Functional Testing
Slide 2.5: Exercise 2.1
Exercise guidelines
Transition
Slide 2.6: How Some Experts Have Defined Quality
Slide 2.7: Quality As Satisfiers and Dissatisfiers
Slide 2.8: Quality Involves Many Stakeholders
Slide 2.9: Exercise 2.2: Quality Has Many Stakeholders (1/2)
In-house class
Open enrollment class
Slide 2.10: Exercise 2.2: Quality Has Many Stakeholders (2/2)
Slide 2.11: A Working Definition of Quality
Slide 2.12: Change Requests and Quality
Slide 2.13: Dimensions of Quality: FURPS
Slide 2.14: It May be Useful to List More Dimensions
Slide 2.16: Test Ideas
Slide 2.17: Exercise 2.3: Brainstorm Test Ideas (1/2)
Slide 2.18: Exercise 2.3: Brainstorm Test Ideas (2/2)
Sample discussion
Slide 2.20: A Test Ideas List for Integer-Input tests
Slide 2.21: Discussion 2.4: Where Would You Use Test Ideas Lists?
Slide 2.23: Identify a Generic List of Test Ideas
Slide 2.24: A Catalog of Test Ideas for Integer-Input tests
Slide 2.25: The Test Ideas Catalog
Slide 2.26: Apply a Test Ideas Catalog Using a Test Matrix
Slide 2.27: Exercise 2.5: Your Own Test Ideas Lists
Does context of the application help the brainstorming?
Module 4: Define Evaluation Mission
Slide 4.2: Module 4 Content Outline
Slide 4.3-5: Workflow: Define Evaluation Mission
Slide 4.7: Exercise 4.1: Which Group is Better?
Slide 4.9: Exercise 4.2: Which Group is Better?
Slide 4.10: So? Purpose of Testing?
Slide 4.11: Varying Missions of Test Groups
Slide 4.12: Optional Exercise 4.3: What Is Your Mission?
Slide 4.13: A Different Take on Mission: Public vs. Private Bugs
Slide 4.14: Defining the Test Approach
Slide 4.15: Heuristics for Evaluating Testing Approach
Slide 4.17: What Test Documentation Should You Use?
Slide 4.18: IEEE Standard 829 for Software Test Documentation
Slide 4.19: Considerations for IEEE 829
Slide 4.20: Requirements for Test Documentation
Slide 4.21: Test Docs Requirements Questions
Slide 4.22: Write a Purpose Statement for Test Documentation
Slide 4.23: Exercise 4.4: Purpose for Your Test Documentation?
Module 5: Test & Evaluate
Slide 5.2: Module 5 Agenda
Slide 5.3-5: Workflow: Test and Evaluate
Slide 5.8: Discussion Exercise 5.1: Test Techniques
Slide 5.9: Dimensions of Test Techniques
Is oracle a widely used term?
Slide 5.10: Test Techniques—Dominant Test Approaches
Slide 5.13: Module 5 Agenda
Slide 5.14: Test Techniques—Function Testing
Skills involved
Take home exercise (in student manual)
Slide 5.17-19: Test Techniques—Equivalence Analysis
Stratified sampling
Printer compatibility
Subject matter expertise
Blind spots
Doug Hoffman’s story
Skills involved – Lead in to exercises
Equivalence classes for configuration testing
Timeout example
Background for exercises
Slide 5.21: Optional Exercise 5.3: Myers’ Triangle Exercise
Slide 5.22: Exercise 5.3: Myers’ Answers
Slide 5.23: Optional Exercise 5.4: Equivalence Analysis with Output
Slide 5.25-6: Test Techniques—Specification-Based Testing
Slide 5.27: Traceability Tool for Specification-Based Testing
Slide 5.28: Optional Exercise 5.5: What “Specs” Can You Use?
Slide 5.33: Definitions—Risk-Based Testing
Slide 5.34: Test Techniques—Risk-Based Testing
Slide 5.35: Strengths & Weaknesses—Risk-Based Testing
Slide 5.36: Workbook Page—Risks in Qualities of Service
Take-Home Exercise
Slides 5.37-38: Workbook Page—Heuristics to Find Risks
Slide 5.39: Workbook Page—Bug Patterns As a Source of Risks
Slide 5.40: Workbook Page—Risk-Based Test Management
Slide 5.41: Optional Exercise 5.6: Risk-Based Testing
Top down approach
Bottom-up approach
Slide 5.43: Test Techniques—Stress Testing
Slide 5.46: Test Techniques—Regression Testing
Slide 5.49: Test Techniques—Exploratory Testing
Slide 5.50: Strengths & Weaknesses: Exploratory Testing
Who should do exploratory testing?
Slide 5.52: Test Techniques—User Testing (1/2)
Slide 5.53: Test Techniques—User Testing (2/2)
Slide 5.55: Test Techniques—Scenario Testing (1/5)
Slide 5.56: Test Techniques—Scenario Testing (2/5)
Slide 5.57: Test Techniques—Scenario Testing (3/5)
Slide 5.58: Test Techniques—Scenario Testing (4/5)
Order of applying test techniques
Slide 5.59: Test Techniques—Scenario Testing (5/5)
Slide 5.62: Test Techniques—Stochastic or Random Testing
Slide 5.66: Applying Opposite Techniques to Boost Coverage
Can we automate exploratory testing?
Module 6: Analyze Test Failures
Slide 6.2: Module 6 Objectives
Slide 6.10: Championing Your Defect Reports
Slide 6.11: Discussion 6.1: What happens to your defect report?
Slide 6.12: Motivating the Defect Fixer: Analyzing the Impact
Slide 6.13: Overcoming Objections: Think About Your Audience
Slide 6.15: Analyzing Failures with Follow-Up Testing
Slide 6.17: Analyzing Severity: Follow-Up Testing
Slide 6.18: Follow-Up: Vary Your Behavior
Slide 6.19: Follow-Up: Vary Options and Settings
Slide 6.20: Follow-Up: Vary the Configuration
Slide 6.21: Analyzing Generality: Configurations
Slide 6.22: Analyzing Failure Conditions
Slide 6.23: Uncorner the Corner Case
Slide 6.24: Analyzing Non-Reproducible Errors
Slide 6.25: Analyzing Non-Reproducible Errors
Slide 6.26: Analyzing Non-Reproducible Errors
Slide 6.32: Writing the Defect Report: Make It Clear
Slide 6.33: Writing the Report: Keep it Simple
Slide 6.34: Writing the Defect Report
Slide 6.38: Writing the Report: The Headline
Slide 6.44: Exercise 6.3: Defect Reporting (1/18)
Slide 6.45: Exercise 6.3: Defect Reporting (2/18)
Slide 6.46: Exercise 6.3: Defect Reporting (3/18)
Slide 6.47: Exercise 6.3: Defect Reporting (4/18)
Slide 6.48: Exercise 6.3: Defect Reporting (5/18)
Slide 6.49: Exercise 6.3: Defect Reporting (6/18)
Slide 6.50: Exercise 6.3: Defect Reporting (7/18)
Slide 6.51: Exercise 6.3: Defect Reporting (8/18)
Slide 6.52: Exercise 6.3: Defect Reporting (9/18)
Slide 6.53: Exercise 6.3: Defect Reporting (10/18)
Slide 6.54: Exercise 6.3: Defect Reporting (11/18)
Slide 6.55: Exercise 6.3: Defect Reporting (12/18)
Slide 6.56: Exercise 6.3: Defect Reporting (13/18)
Slide 6.57: Exercise 6.3: Defect Reporting (14/18)
Slide 6.58: Exercise 6.3: Defect Reporting (15/18)
Slide 6.59: Exercise 6.3: Defect Reporting (16/18)
Slide 6.60: Exercise 6.3: Defect Reporting (17/18)
Slide 6.61: Exercise 6.3: Defect Reporting (18/18)
Slide 6.61: Exercise 6.3: Defect Reporting

Common controversies


Purpose of this section

The purpose of this section is to help you as the instructor prepare answers to topics that students might raise, sometimes very legitimately, and other times with private agendas. The sections that follow distill the discussion from the master class on 4/6/02.


General classroom management

My responsibility to the room is to close a discussion down when I see that only a small portion of the people look to be engaged. That does not mean it is not a valuable discussion; it means that some discussions will be just as valuable with 3 people as with the full room, and those we’ll need to take off-line. The times that we’ll take discussions off-line are as follows: I am available every day at lunch to meet with 3, or 4, or 5 people to sit around the table and carry on the discussion. In fact, we’ll have a flip chart, we will note what the lunch-time discussions are, and we’ll pick them on a day-to-day basis – this is the day we’ll talk about this.

I’m also willing to come in ¾ of an hour early for class, if at least 3 people will actually sign up and show up, to facilitate a relatively private discussion. Given that, does anyone object if I close a discussion when it’s my sense that no more than 3 or 4 people are actively engaged? I’ve never had anyone have the nerve to object. This is a ground rule for the class, something I say during the time I’m telling them where the bathrooms are.

During the opening ground rules for the class, no one has wanted to invest in a fight with me.

And later, if there’s a discussion I think is going too long, and someone protests when I suggest we close it, I get to say, okay, how many people want to spend 10 more minutes on this? I can’t remember a time when I’ve been surprised by a lot of people saying yes. If the whole class wants to go on with it, great. But if only a few put their hands up, you say great, we’ll move this to one of the lunch-time discussions, and that’s wonderful. I get pretty nice course reviews from the people who were bored by those discussions before I shut them down – they say thank you, thank you, thank you. And the people I do shut down understand that it’s part of an organized process and it’s not personal.

Which is the one technique / style I should use?

(a.k.a. If I have an infinite list of things to do and a limited time to do them, how do I pick which technique to use?)

The course gives perspective on a wide range of testing styles. All of those are valuable to practice in any company. Most companies don’t apply all of the techniques; they apply a subset. I’ve seen some that were very successful and applied 4 or 5 at most. But in picking which 5 are right for the organization, it’s a lot better to make that decision based on your knowledge of what the alternatives are and which things are easily automated in a practical way inside your organization.

From my point of view, the more senior a tester gets, the wider the range of techniques they understand, the wider the range of relationships among those techniques they understand, and the better they understand what juggling is required for each specific case.

And the second answer is: start where you are and expand your repertoire incrementally by adding a new technique. Give your team six months to master it and then add another in the next round. And repeat. Nothing else will work anyway.

Get better at what you’re doing, but broaden yourself. As you broaden yourself, you’ll discover that you’re also getting deeper: you’re going to understand the techniques you already know better as well {audio not audible}, and over time you’ll discover that you know many more things, and you’ll have more complex test documents and more complex plans, but richer ones.

There is a distinction between the tester who has ten years of experience, gaining breadth over those ten years, and the tester who has twenty six-month stints strung together over a period of ten years – who really has six months of experience repeated twenty times. But what you will unfortunately often find is people who have been in the business for ten years redoing the same project twenty times.

One of the core pieces that distinguishes really skilled, capable, mature testers from juniors is the ability to understand that there are many different attacks that can be effective on the same problem, and that many combinations of those can be effective on the same problem – and then to apply judgment to figure out which would be most efficient today for the situation we are in. The recognition of the diversity of approaches – I’ve got a lot of different tools, I want to pull the right one out of the box for today, and I know how to use that tool well – is something that builds up over a long period of time. We can try to develop education that will make that process faster; that’s part of the objective of this course. But that build-up of skill is a gradual thing, whether it happens in university, in private teaching, or in practice {audio not audible}.

When people don't have varied experience as testers and have locked themselves into a specific paradigm, they have a big career-retraining problem that they are going to have to face as soon as the situation changes.

It's not a lesson that everybody can learn the first time they hear it. It's an experience-based lesson. It's hard for somebody to hear that the work they’re doing is going to be tougher next week than it was last week. They don't necessarily want to hear that. Oh well.


Can I mix techniques?

Many of the techniques readily, obviously blend together.

So if you go into testing this morning and you say, what I want to do is really good specification-based testing – the most important thing for me to think about is the spec, and I will do whatever it takes to really understand how to compare the program to the spec – you're doing spec-based testing. If you're doing that and creating tests for reuse at the same time, you're still doing spec-based testing. If you’re doing that and exploring, you're still doing spec-based testing. If you look at the spec and you say, well, to tell whether this item is true I want to analyze every risk associated with this claim, you're still doing spec-based testing.

Now tomorrow maybe you come in and it happens that you still have the spec, but you say, I want to analyze every individual function that is mentioned in the spec and kick each individual function as hard as I can. I don't care where they sit in the spec’s claims; I'm going to say, let me find all the information about printing. I will put it in one list called printing, then I will take each subpart of printing and hammer on it in itself, and if something is not in the spec but I know it is part of printing, then I still want to look at it. Well, you're doing spec-educated testing, but you're doing function testing.

If your focus is on concurrent learning and testing, your focus is exploratory no matter what object you're testing.

If your focus is on the many ways the product could fail, you're doing risk-based testing, which might be done in an exploratory way or might be done spec-based. The real question is where the primary focus is in your mind. All of these techniques end up complementing each other in use, although some individuals get so caught up in one dominant way of thinking that they don’t look at the others as things to round themselves out and help them with that one approach.

Automation


Should all tests be automated?

(a.k.a. How can we implement a testing process that is agile, exploratory, and constantly brain-engaged, with automation tools?)

There are a lot of testing practices that can be automated. There are a lot of programming practices that can be automated, but not all of them. Tom DeMarco talks at length about the distinction between programming practices that become routine and then get folded into tools, and programming practices that are unique from project to project and can never be well measured, never be well standardized, and never be folded into tools.

We have the same problem in testing. As we come to understand more about testing, we come to an ability to automate more and more of the tasks. Automated testing is computer-assisted testing. It is not computer-does-it-all testing. So there’s always going to be room for people to operate by their wits. In fact, as the tools get better, the role of the tester is going to be more and more that of someone who relies on her own judgment while using the support of increasingly powerful tools to take away work that doesn’t need to be done directly by a person.

Exploratory testing by its nature is testing that is not routine and yet testing that we have to do under a lot of circumstances because we don’t have routine practices that have any assurance of working efficiently for the problem that the exploratory tester is trying to solve. Some of the things we do in exploratory testing turn out over time to be stereotyped enough that we can convert those into practices and automate them, as we spin off the things that look routine.

Equivalence class testing is a great example. People intuitively understood that boundaries were good places to test. Now test case generators generate boundary conditions; it’s a routine thing. Exploratory testers who live their lives in a boundary-testing world end up generating results that are not necessarily very interesting – not if the testing done by the programming staff was any good at all.
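To see how mechanical the routine part of boundary testing has become, here is a minimal sketch of the kind of generation a tool can do. The function name and the 1–99 range are illustrative, not taken from the course materials:

```python
def boundary_values(lo, hi):
    """Classic boundary-value inputs for an integer field that accepts
    the closed range [lo, hi]: both edges, their nearest in-range
    neighbours, a midpoint, and the first out-of-range value on each side."""
    assert lo < hi
    valid = [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]
    invalid = [lo - 1, hi + 1]  # each should be rejected by the program
    return valid, invalid

# For a field documented to accept 1..99:
valid, invalid = boundary_values(1, 99)
# valid   -> [1, 2, 50, 98, 99]
# invalid -> [0, 100]
```

Because this much can be cranked out by a generator, a tester who spends all day hand-picking these same values is duplicating what a tool does for free.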

So we’ll find that as we spin off the routine things from the exploratory tester, the extent to which the exploratory tester is constantly making one-of-a-kind judgments is going to grow, and what we’re doing in this course, largely, is educating that person’s judgment.

Similarly, there’s a whole lot of testing that can be done through automated regression tests. We’ll see that there are risks to applying automated regression tools in some ways and benefits to applying them in others. We’ll probably have some discussion from several folks here about that, and there’s some discussion in the course notes. No tool solves all problems. All of Rational’s tools solve some problems. No one’s tools solve the problems that we’re leaving as open problems here.
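At its core, an automated regression test is just a recorded expectation that gets re-checked on every build. A minimal sketch, with a function and cases invented purely for illustration:

```python
def format_price(cents):
    """The function under test (invented for illustration): render an
    integer number of cents as a dollar string."""
    return f"${cents // 100}.{cents % 100:02d}"

# Each pair records the output observed and accepted on an earlier build.
REGRESSION_CASES = [
    (0, "$0.00"),
    (5, "$0.05"),
    (1234, "$12.34"),
]

def run_regression():
    """Re-run every recorded case; return the cases whose behavior changed."""
    return [(arg, expected, format_price(arg))
            for arg, expected in REGRESSION_CASES
            if format_price(arg) != expected]
```

The strength and the weakness are visible in the same place: the suite reliably flags any change to recorded behavior, but it can only ever notice changes to behavior somebody once recorded.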


Can we automate exploratory testing?

The only automation would be to clone intelligence and imagination which we’ll probably not be able to do within the next year. An exploratory tester brings to the table a curiosity and some intelligence to put together new paths to travel in the application. There are certain tools that can figure out how to travel every path in the application and all permutations given a certain amount of time, but that’s not exploratory testing.

So, short of cloning the exploratory tester and their capabilities, the only tools we can really offer are tools that assist the exploratory tester. What would those tools be? Documenting the moves they’re making, so that after the tests are performed we understand what they did. Or tracking ideas: I may be percolating with ideas, but I need to write them down – I need to try this, I need to try this, I need to try this. In the process of running one test, four other ideas come. Right now the yellow pad is the best vehicle I have for that. So from an automation standpoint, all we can really do is add a small piece of assistance to that.
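The two kinds of assistance described above – recording moves and parking ideas – need very little machinery. A sketch of what such an aid might look like (the class and method names are invented for illustration):

```python
import datetime

class SessionLog:
    """Tiny aid for an exploratory tester: timestamp each move as it is
    made, and park new test ideas without interrupting the current test."""

    def __init__(self):
        self.moves = []   # (timestamp, what was done) pairs
        self.ideas = []   # ideas to try later, in arrival order

    def move(self, description):
        """Record one action taken against the product, with a timestamp."""
        self.moves.append((datetime.datetime.now(), description))

    def park_idea(self, idea):
        """Capture a test idea for later without breaking the current flow."""
        self.ideas.append(idea)

log = SessionLog()
log.move("opened the file dialog with a very long path")
log.park_idea("try the same path over a network share")
```

Nothing here replaces the tester’s judgment; it only replaces the yellow pad.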


What experiences have you had with automated testing?

This is a controversy to the extent that you have people who come into the room and say that automated testing should be done for everything, and that any testing that isn’t automated is the stuff of fools. And if you don’t think there are people who think that way, just go to the Extreme Testing mailing list.

So the two opposite perspectives are:

  1. Everything has to be automated

  2. Anybody that does automated testing is wasting their time. You evil tool vendors are overselling something that has really gotten some people into trouble.

Both of these are based on experiences, and an experienced tester might be justified by her experience in drawing either conclusion. It’s important to show tolerance and respect for either extreme view (and the many middle grounds), encouraging people to describe the experiences that led them to their view.

Requirements, specs and documentation


Should testers demand specs or requirements documents as a prerequisite to testing?

Let me set your understanding of the context of these questions. Here you are, teaching away, and somebody says, “What, you need to come up with test ideas to figure out how to test a numeric input field? Why, that should be laid out in the specification! You wouldn’t want to do any testing without understanding exactly what things need to be covered, and that’s all covered in the specification/requirements document. So why are you wasting your time on that kind of thinking?”

It needs to be acknowledged, time-limited, and gotten past.

The sabotaging nature of this discussion comes from the group of three people in the back of the room who allege that their company follows a process – an ideal process – that they are QA, and that in their company there are genuine and real specs. They write thorough test plans and everyone else is just subhuman. They’d be happy to teach the class. What they would teach would be an interesting, perhaps non-iterative, process that requires a great deal of work to be done by other groups before they can start.

The response that I give to those folks, who are sometimes quite persistent, is to say that testing is done on the basis of whatever information is available, and that in different companies operating under different risk strategies, different kinds and different amounts of information are available to the testers.

No test organization has complete information. Some test organizations have information adequate for me (as a lawyer) to base a contract on, but still not complete, because there’s a great deal we won’t know about how to use the product and how the product will fail in the field until the product emerges, even if the specification was perfectly written. Whatever information they can get, they should get. They should go for the most valuable information they can find, to the maximum extent the company will provide it.

But if they believe they can block the development process by saying they won’t be ready to test until information somebody else has to provide is provided to them, the odds are that they will discover at some point that they are frozen out of much of the development process and put on a schedule that makes it impossible for them. And maybe in the tester Valhalla they’re in at the moment, that’s not true – they have the bigger axes to knock everybody else down; that’s great. But they’re going to find on the next project that having the skills to deal with incomplete information will be of value to them.

I don’t know what else to say to them. And I think that no class that succeeds in being realistic can assume that you have complete information walking in or that you can get complete information by interacting with the development team.


Do you need to update the project documentation every time you run a test that wasn’t directly in the spec?

A cost-benefit trade-off is not an unreasonable approach to thinking about the value of documentation. As with every other aspect of a software development project, there is a finite amount we can do, we have a finite amount of time, we have stakeholders who value some things more than others, and the stakeholders’ values vary across projects in ways that are project-specific. It is the responsibility of the project manager to come up with the right balance of investment to maximize the satisfaction of the stakeholders, which includes preserving the safety of anyone who might interact with the software. Safety sometimes is a big issue and sometimes is not.

I don't think that a general process should give guidance beyond saying do something that makes sense, and here are some ways of thinking about what makes sense, like a cost-benefit analysis. I don't think we can say it's good to fold these back into specifications or that it's bad. For example, if the rest of the development group is not updating the spec when the software design or limitations change, it might well be a waste of your time to try to update the spec as you discover new constraints or new risks while you test.

Whether or not you update a development spec, you might or might not update testing specs. Later in this class, we’ll talk about the requirements planning that testers might do, to decide what information they should put into testing documentation (and how much maintenance they should do to keep it up to date). In some circumstances, it is essential to update and extend the test documentation regularly. In others, this is entirely inappropriate.

Companies also sometimes roll testing material into user documentation, especially in cases where there are very intensive customer calls and so it pays to put frequently asked troubleshooting questions into the manual instead of just leaving them in tech support.

One of the things I like to do is create a release kit from testing to tech support. The release kit includes things like: here are the last tests that we ran on the printers we actually tested, here’s what the printouts are, they cover all the boxes – this is what we got from the video cards, this is what we got from the modems, this is what we got from the printers, this is what we got from all of these other kinds of devices.

And the tech support person is happy to have all of those things, so when someone calls up and says, “I have this letter and it's printing oddly in this way,” the tech support person can say, “Gee, I don't recall having heard about that, but we did test on that printer.” He opens the binder, flips to the appropriate page, and says, “Oh, it looks kind of vertically stretched, right?” And the customer says, “Yes.” And the tech support person says, “Yeah, that's how our product works with that printer. If that's unacceptable, would you like a refund?”

It might not be the most delightful thing for the tech support person to say – you would like to say all our test results were fine – but if it didn't work, fine: you don't have a lot of baloney going back and forth. “Yes, this is the way the product works. If it's not acceptable, the product is not acceptable. Let's not waste any more of your time and our time on this. This is how it is. If that's not good – fine, we will deal with it from there.”

We found release kits of value with products that were selling into a market space of a million people or so, where many thousands of configuration-related calls would come in, and the cost of those calls – and of trying to troubleshoot those sorts of problems on the phone when you couldn't see the output – was enormous. I have been involved in that, but it was guided by the cost-benefit trade-off. It was very clear that there was a cost to testing and creating the release kit, and that investment saved the company many times its cost by not wasting tech support time on the phone.

I'm saying that the level of documentation in any place has to be appropriate to the objectives of the product. And those objectives will vary across products and across stakeholders, and in my experience it's just not possible to come up with guidance that works in all cases.

There's a good line in Scott Ambler’s book, Agile Modeling, on the principle of when to update documentation. His rule of thumb is: update the documentation when the wrong or missing documentation causes pain. So if there's a problem created by not having the right documentation, update the documentation. Otherwise, his guidance is, don't take the time to do it.

Staffing


Who should you hire as a tester?

The most common controversy I see is between the viewpoints:

  1. All testers should be programmers.

  2. No reasonable programmer would ever work as a tester.

So I’m hearing that some other folks are pretty talked out that way. And that debate takes what might be a straightforward discussion and turns it into a long, emotionally based dance. What answer would you give?

Let me introduce you to one way of thinking about requirements for a position you want to fill. KSAO is a fairly standard, unpronounceable short form.

Knowledge, Skills, Abilities, Other.


We also go through a much longer exercise where everybody writes out the entire bug report.

(BTW, not everybody will write out the entire bug report, because some people don’t have English as their first language, and halting the class long enough for the slowest person to write the entire report would be disastrous. Give them an extra 20 minutes; some of them will write a full report and some will not. But show mercy at the end of about 20 minutes and stop it. And then take it up.)

There are a lot of ways to take it up. My most effective way is not to have people put their bug reports up on the board, because I’m going to talk about people completely misunderstanding the problem, and that would be really embarrassing. This way it’s just bad wording – I haven’t said anyone is dumb or missed the point. If I’m going to do that, I want it to be very anonymous.

What I’ve done over the 20 minutes is look over everybody’s shoulder and ask, what are you writing? Here’s the most common single mistake: when I was describing the bug, every time there was a new test I said, now get out of the program and come back in, color the background gray, draw the circle – you know, I went through that nice little sing-song. You will be surprised how many students forget that. And when they write how to reproduce the bug, they will start with the very first test on slide 2 – color the background, freehand select, then draw a circle, freehand select again, cut the circle, color the background again, draw the circle again, and on they go until they finally say, and now the circle doesn’t work.

Are all those steps relevant? No. We exited the program, we came back in, we got a new screen; none of the earlier steps should make any difference. Why are they doing this? Because the written material never says exit the program and come back in. It was only auditory. We process information in many channels: sound, smell, sight. For most people, sight is dominant. If there is visual information and there is auditory information, the visual will win over the sound. What they saw was a description of the report in a certain sequence. What they heard was a modification of that sequence.
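To make the contrast concrete, here is a sketch of the two versions of the reproduction steps. This is an illustration, not a slide from the class materials, and the final failure step is left as a placeholder since the exact failure isn’t restated in these notes:

```
What the student writes (the full history, mostly irrelevant):
  1. Color the background gray.
  2. Freehand select; draw a circle; freehand select again; cut the circle.
  3. (Spoken instruction only, missing from the write-up: exit the
     program and come back in.)
  4. Color the background again; draw the circle again.
  ... and so on, through every test, until "now the circle doesn't work."

What the report needs (a fresh session, minimal steps):
  1. Start the program with a new document.
  2. Color the background gray.
  3. Draw the circle.
  4. [the step that triggers the failure, as observed]
```

The point of the exercise is exactly this gap: the steps that matter are the ones after the last restart, not the whole history of the session.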

Especially if you give people the exercise over lunch or overnight, a few of them will forget the spoken modification and will just write out the entire sequence. The longer you wait, the more people will make this mistake. If it is a short break or a short lunch, maybe only one person will. You don’t want to subject that person to ridicule, so it’s very important that you wander around the room yourself and look for the patterns of mistakes. Then say, here’s an example of something somebody did. It’s also important not to say, here’s an example of something some fool did, because they’ll know you got it off their paper and they won’t appreciate it.

Here’s an example of something somebody did: they started on the first slide and went through the entire sequence. Now why is it relevant to set that thing up in the first place? It’s relevant because situations like this happen in bug reporting all the time. Here’s the scenario that testers run into regularly. You write a bug, it goes to the programmer, and the programmer calls you up and says, I don’t understand. You walk over to the programmer’s cube, and you show or tell the programmer what happens.

The programmer says, oh, I get it now. And then you go away. Tomorrow the programmer starts working on the bug again. What she sees is the written description of the bug. What she doesn’t remember is exactly what you told her as a verbal supplement.

So with that bug, it’s unlikely the programmer will call you and say, I just don’t remember what you told me yesterday, would you mind coming back to my cube and telling me again. More than likely, unless it’s a fatal bug, she’s going to react in an embarrassed way: she’s going to say it’s a feature, it’s not a bug, cannot reproduce, no one would do this. All of which really means, rats, I’m too embarrassed to call up and find out what it was, I’m just going to make it go away.

When you get someone calling you and telling you there’s missing information, they’re giving you a bug report against your bug. The fix is not just to walk over and explain it to the person; you have to do that too, to keep things interpersonally pleasant, but the real fix is to go into the report and fix it. Add the extra step, the qualification, or the explanation. If it’s not in writing, in the same place and the same medium as the dominant information, it will be forgotten; when somebody reads the report again, it just won’t tie together.

So just as some students will forget all of those spoken cues, a reader who has been told and shown what the problem is has a non-trivial probability of forgetting the detail if it’s not in the report itself.

So if you write an incomplete report, the fix is not to explain person-to-person what else needs to be done; the fix is to explain, in the report itself, exactly what needs to be done. And then to walk over to the person and say, yes, I updated the report, let me show you what I showed you the first time.

[A participant:] I don’t know if my visual was overriding my auditory, but I could swear I heard you say “write your headline,” not “headlines,” and that felt to me like a constraint to come up with one and only one. Then I felt, don’t criticize me for not writing two; you told me to write one. If I were the angry student, I would pop off at that point. And the second thing I’d do is say, all right, would you please critique the last two in the student notes on the same basis that you critiqued ours.

I know exactly what you’re saying. When I tell people to write up the bug, in most classes I get a substantial number of people writing two reports anyway, because I throw them two failures with two bugs. You guys follow directions too well. I have actually never had anybody come back to me and say, but you told me to write only one. They might have felt it, but you are the first person who has raised it as an issue. I’m not saying that it’s a bad thing.

I’ve had a lot of feisty students, but the sharp, feisty ones express it by giving me two reports. And often, here’s how that works in a class where there’s a little more time: I wander around the room and ask questions.

I typically do this in a 20-minute session where I say, take 20 minutes, and out of that make sure you reserve 10 minutes for yourself for a break, and we’ll come back together at the end of the 20 minutes. That way, the people who can’t write anything in 10 minutes can take 15. During that period I wander around, and people are likely to ask questions like, do I really have to stick with one? And I say, do what’s right. That might be the distinction. Almost every exercise I do in the professional classes I teach coincides with a break, so that I have an extra 10 minutes for people whose first language isn’t English, so they can keep writing without everybody else waiting. We still have to wait for slow writers, but not as much.