This is a book that we covered in our ST book club.
The book is a well-written introduction to exploratory testing, covering wide- and narrow-scale exploratory testing and how to create and use exploratory test plans. James Whittaker clearly does have good ideas, and they’re well explained here.
That said, I didn’t really like it – I think because it seems to start from the assumption that much of its audience needs to be cajoled away from writing and then mindlessly executing test cases, and so spends time and effort on things that I already took for granted.
I’d recommend it as a good introduction for newer testers or indeed anyone who’s used to writing a prescriptive plan of all their testing up front before getting started. If you’re already well versed in more modern testing and have thought about exploratory testing at least a bit, you might find it a bit slow – probably still worth a flick through and read for reference, but I wouldn’t dash out and buy it.
People’s motivation and speed of progression improve dramatically when they have a clear, solid vision of where they want to be in 3–12 months’ time and a set of concrete actions they can take to get there. “Be a better tester” is all very well, but what does that actually mean?
To try and answer this question, David Parker and I set out to create a set of TesterSkillTracks. Each track is a different skill related to testing, and along each track is a set of “levels”. Each level is defined as something you can do, and is a clear qualitative step up from the previous level – it’s not just “do X more and better”. We’re using this within Metaswitch Networks (where we both work) and finding it useful.
We’ve found that working on these tracks has enabled people to
focus on a particular area to develop
stretch themselves further when they’re already really good at testing
break out achievable chunks when they’re brand new
identify and discuss where their manager thinks their capabilities are different from their own assessment.
So, please take these and use them, and let me know if they help you. And please let me know any feedback that you have. What’s not exactly right? What makes no sense or could be clearer? What tracks are missing entirely? What’s there but completely irrelevant or rubbish? I’d really appreciate any thoughts, positive or negative.
Some notes and caveats.
This is intended to let you break down, clarify and focus “I want to be a better tester” into “The next thing I will achieve is <this>”. This is not a tick-box exercise of competencies to go up a pay-grade.
We have taken a wide view on skills useful to a tester and included various skills that are outside the narrowest definitions of the “tester” role. We want people to grow and develop beyond just being a particular shaped cog in a machine.
We deliberately tried to avoid linking these skills to any particular testing creed or school.
I’ve recently been tasked with educating some of our development teams in “better testing”. One of the key things I keep coming across is a mindset change, and I realised that while it’s a key part of good testing, it’s also something that I found critical when stepping up to take on lead development roles too…
When I was learning to be a developer, I was trained heavily into asking “How?” How do we implement this? How does that work? How can we make this happen? How do I fix this? “How?” is a good question to ask, but once it becomes your instinctive reaction to think “How?”, you’re in trouble. “How?” is a focusing question, and almost always, you want to start by defocusing and getting the context that you’re working in. An easy way to do that is to train your first instinct to be to ask “Why?”
Why are we implementing this? Why does the product work this way? Why does that happen? Not only is instinctively asking “Why?” a basic requirement for testing, it’s also the question that will lead you to big gains in development. These questions lead us to think about the context for the work we’re being asked to do, and stop us carefully implementing the wrong thing, implementing something great that isn’t extensible in the right way, or implementing something that’s way too over-engineered for anything any customer will ever want.
We all (should!) use “How?” and “Why?” but the difference between a good developer and a great one, and an important difference between a developer and a tester is that trained instinct to make “Why?” the first question to ask.
Managers and Makers (people who actually get real work done) work in fundamentally different ways. I was going to spend time explaining this in detail, but Paul Graham has already done it better than I could. Go and read that, and then come back.
TL;DR Makers mostly work by doing deep thinking in large blocks of time. Most manager work requires broad thinking in small blocks of time. Don’t forget that your people work differently, and to be effective, you need to allow for that. As a manager, there are two things that you can do to help.
Stop interrupting them!
Prevent other people from interrupting them!
Some straightforward things that you can try (I’ve tried or seen all of these, with varying degrees of success).
Meetings only at start of day or just before/after lunch. (Easy and obvious!)
“No morning meetings.” (Our team have a stand-up first thing, and then I try and prevent other meetings before lunch).
“No meeting Thursdays” (One day a week that your team refuses to do meetings. You can really get stuff done. Publicise widely what day you’re not available.)
“Cans on; can’t disturb.” Headphones on – no interruptions. If you want to interrupt someone on my team with headphones on, you have to come past me first.
Whether you’re a manager or a maker, I recommend trying one or more of these out, or invent your own. People might grumble to start with, but the actual cost (beyond a little bit of thought up front) is very low, and the benefits high.
This post is really little more than a chance to link to a favourite book. But this book also serves as a reminder that it doesn’t matter how closely you explore conformance to some specification or intent, it is ultimately the utility to and happiness of the customer that matters. If you have a spare five minutes, enjoy. Happy Christmas.
I’m a tester; I break things. I’ve heard and said it many, many times. It’s the standard way people describe testing. Quite apart from making testing sound like a simple, easy job (worthy of a blog post in and of itself), it’s a dangerous terminology.
It suggests that the product is ok, until the testers have their wicked way with it – that the testers are malicious little buggers who try to make the product do things it’s not meant to, and then make it fall over. The very terminology leads to fallacies of thought:
Developers can think that they’ve passed the product to test, good to ship.
Managers can think that because the testers haven’t broken the product, it’s good to ship.
Everyone can think that it’s the testers who have stopped the product from shipping.
Bugs get rejected, because people think that no true user would do that.
In reality, the product given to the testers is not good. It’s given to them already broken; a haunted house with booby traps galore. The testers aren’t rampaging lunatics with sledge-hammers; they’re the housing inspectors sent to find those traps before the general public are allowed in. Except that software products are a lot more complicated and much less easily perfected than a house, so there are a lot more dark holes for the bugs to lurk and deciding where and how to shine the torches around to light them up is slower and more complicated.
So, I’m going to try and change my default language. I’m a tester. I investigate things. In my vicinity, stones get turned over and creepy crawlies come to light. When there might be a can of worms around, I’m the one wielding the tin-opener. But I no longer claim to break things.
Footnotes: Picture of the Joker, by arthurforzus on deviantart. I don’t know him; I was just looking for pictures of the Joker with appropriate permissions and thought it was rather good.
I recently watched The Martian. At one point, NASA are trying to launch a space probe in a hurry – and cut 10 days of launch site testing, having established that historically there’s only been a 1/20 chance of finding an issue during that testing. It was unusual to see a film actually covering the concept of “How do we minimise risk with the resources that we have?” rather than just pitting established process against a maverick.
It was also an interesting scene (and film) in that, having decided to launch a rescue mission (I’ll leave aside the rationality behind that for this post), the film draws out in several places the contrast between the logically optimal strategy (maximising the chances of rescuing a stranded astronaut) and the socially acceptable strategy (minimising the damage to NASA’s public reputation and future spaceflight if something goes wrong with that rescue mission). As a specific example, the choice above to skip some testing was a great logical way to save time at a 5% risk cost – time which could then be spent more effectively elsewhere to reduce the overall risk by more than that. The problem is that specifically choosing not to do some testing that you might otherwise have done is hard to explain away if things go wrong – when 20/20 hindsight suggests that was the wrong call.
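The trade-off in that scene is just expected-risk arithmetic, and it’s worth making concrete. The numbers below are hypothetical (only the 1-in-20 figure comes from the film, and I’m pessimistically treating every issue that testing would have found as mission-ending):

```python
# Hypothetical risk-budget comparison: is it worth skipping a test phase
# to spend the saved time reducing some other risk instead?

# Historically, launch site testing finds an issue 1 time in 20.
p_issue_found_by_testing = 0.05

# Pessimistic assumption: every issue that testing would have caught
# would otherwise have ended the mission.
risk_added_by_skipping = p_issue_found_by_testing

# Hypothetical: the 10 saved days buy a larger risk reduction elsewhere.
risk_removed_elsewhere = 0.08

net_change_in_mission_risk = risk_added_by_skipping - risk_removed_elsewhere
print(net_change_in_mission_risk)  # negative => skipping the testing lowers overall risk
```

The logic only works if the saved time really is redeployed against a bigger risk – which is exactly the part that’s hard to defend in hindsight if the skipped testing is what bites you.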
Most of us see that as a conflict between the “right thing to do” and some stupid politics. However, looking further…
If the goal of some process is to maximise the quality of something, and we define quality using the standard “value to someone who matters”, then the above conflict is simply that we have multiple people who matter with different values. That means there’s not a “right” and “wrong” argument above – maximising quality means making a compromise and balancing the different “values”.
And that matches up with what we see in software development. We run QA phases and focus on mainline install processes not to optimise for globally reduced risk – we could spend that time finding and fixing other bugs – but because the customer disproportionately values “things working first time” and “lack of regressions”. While it might not seem like we’re optimising for quality, we are – it’s just that quality is defined by value to the people who matter, and their views of quality may be different from ours or expressed in different ways.
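One way to picture “quality is value to people who matter, and different people value different things” is as a weighted balance across stakeholders. This is a toy sketch – the stakeholders, weights, and scores are all invented for illustration, not a real prioritisation method:

```python
# Toy model: overall quality of an option as a weighted sum of the value
# each stakeholder places on it. All numbers here are made up.

# How much each stakeholder's opinion matters (weights sum to 1).
stakeholder_weight = {"customer": 0.6, "support_team": 0.25, "dev_team": 0.15}

# How much each stakeholder values two candidate ways to spend a week (0-10).
value_of = {
    "polish mainline install": {"customer": 9, "support_team": 6, "dev_team": 3},
    "fix obscure edge-case bugs": {"customer": 4, "support_team": 7, "dev_team": 8},
}

def overall_quality(option):
    # Quality = value to the people who matter, weighted by how much they matter.
    return sum(stakeholder_weight[s] * v for s, v in value_of[option].items())

for option in value_of:
    print(option, round(overall_quality(option), 2))
```

With a customer-heavy weighting, polishing the mainline install wins even though it does less for the development team – which is the point: the “right” answer shifts as soon as you change whose value counts for how much.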