Recap Of WHaTDa And QA Or The Highway

Monday and Tuesday of this week (02/16 and 02/17/2015) had me in Columbus, Ohio for the Workshop on Teaching Test Design (WHaTDa) and QA Or The Highway. QA Or The Highway is a regional conference designed for software testers. This is the second running, and the second time it has sold out, this time to a bigger crowd than last year.

QA Or The Highway was great again this year. I didn’t attend a whole lot of track sessions, but the ones I did get to were high quality. Joe Ours puts on a good conference.

I spoke at the conference this year, which was different from last year. I gave a talk on testing APIs that included an intro to REST, information on the testing vs. checking dilemma, and a live demo with some code I wrote using frisbyjs. Public speaking is a new craft for me, and I'm growing into it and learning from others. Overall, I feel like it went pretty well.
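For the curious, the kind of API "checks" the demo covered can be illustrated in plain JavaScript. The response shape and field names below are hypothetical, made up for illustration; tools like frisbyjs express similar expectations against live HTTP responses:

```javascript
// Plain-JavaScript illustration of API "checks" like those in the demo.
// The response shape and field names here are hypothetical.
function checkUserResponse(res) {
  const problems = [];
  if (res.status !== 200) {
    problems.push(`expected status 200, got ${res.status}`);
  }
  if (typeof res.body.id !== 'number') {
    problems.push('id should be a number');
  }
  if (typeof res.body.name !== 'string') {
    problems.push('name should be a string');
  }
  return problems; // an empty array means every check passed
}
```

Checks like these are exactly the mechanical, decidable part of the testing vs. checking distinction; the interesting testing happens when you decide which checks are worth writing.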

WHaTDa was a one-day workshop focused on teaching test design. It reminded me a lot of WHOSE, which is a good thing. We started the workshop at 9am with introductions: a little about who we were, what we were working on, and how we were planning to contribute to the workshop.

The goal, as with most Excelon Development workshops, was to produce something useful to the testing community by the end of the day. Usually, about halfway into the day, I start getting the feeling that producing something quickly is completely impossible.

After lunch, we split into groups and focused on an exercise we wanted to build and contribute to the wider testing community. I paired up (or maybe grouped up is a better word) with Paul Harju, Megan Studzenski, and Dwayne Green.

A new Test Challenge

The four of us built something, and we hope it is useful to you.

We built a testing exercise based on a program that determines whether the text you entered into a field is a palindrome or not. It sounds simple, and it is, but there are a number of ways you can frame this simple program as a test exercise.
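For reference, a minimal palindrome checker in the spirit of the exercise might look like this. Whether to ignore case and punctuation is my assumption here, and it is exactly the kind of design question the exercise provokes:

```javascript
// Sketch of a palindrome checker in the spirit of the exercise.
// Assumption: case and non-alphanumeric characters are ignored; whether
// they should be is a good question for testers to raise.
function isPalindrome(text) {
  const normalized = text.toLowerCase().replace(/[^a-z0-9]/g, '');
  return normalized === [...normalized].reverse().join('');
}
```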

Here is the Palindrome Challenge. Feel free to use it however you like with credit to the authors.

Here are some framing examples you can use. This is what we came up with during the workshop; there are, of course, many other ways you can run the challenge.

Framing
For the person giving the challenge: you have to address what the scope of the exercise should be.

Here is one possibility:
1 – Design a test strategy / how would you test this (actually write the strategy down)
2 – Test for a couple of minutes using strategy
3 – What did you find? Did the stuff you found matter? Why?
4 – If you had more time, what tests would you run?
5 – Debrief

You could also do a survey of test techniques.

— Domain
— Function
— Risk
— Load
— Security

Maybe run the exercise a couple of times and see how the test strategy differs based on the identified technique.

Seeding test ideas:
I know there is a bug in X area, how would you test for that?

Megan’s Gambit:
For any given exercise, you can make another exercise by having the student identify what they were doing and why.

Test exercises are everywhere, framing and scoping are the hard part.

Have fun!

Lean Software Testing: The best 3 day conference I’ve been to

Last week I went to the best three-day conference I have ever been to, and got paid for it too! I flew from my home in Nashville to Denver, CO to assist Matt Heusser in teaching a three-day class focused on lean principles in software quality and delivery.

Lean Software Testing is a one, two, or three day class created by Matt Heusser. The class combines years of research and experience in applying Lean concepts to software development with practical expertise in the problems of software quality and delivery. The result is a class that can help your test and quality team do their work smaller (we are talking about Lean, right?), better, and faster. This is not an introductory class about software testing, but testers of all experience levels can certainly find value in the material and teaching. We have taught this class to experience levels ranging from the newest interns all the way to the most engaged experts in the test community with 20+ years of experience.

A few weeks before class starts, Matt and I will have a discovery call with your team, where we will mention a few of the concepts covered in the class and see what your team's experience level with the material is. We will also ask a few probing questions to discover what we like to call critical success factors: the things you must absolutely take away from the course to consider it a success. We want to provide a class that is timely and relevant to your team, and the discovery call helps get us there.

Day one starts immediately with a group exercise focused on team dynamics. This is a great way for people who may not know each other to get acquainted, and it also jumps right into valuable course work from minute one. After this we do a formal introduction, meet the attendees, and explore what they want to get out of the class. The rest of the day is spent studying and doing exercises on team dynamics, test planning, and comparing scripted and exploratory test approaches.

Day two is spent alternating between theory from Lean and exercises crafted to show exactly how the theory works. Where Agile training will tell you what you should do, we will explain exactly why approaching work in specific ways will help you deliver better software faster. This is practical information you can take back to work on Monday and use to provide real value, value that your customers will be happy to pay for. Attendees will leave with a set of tools, but more importantly, they will know why those tools work so they can apply them across a variety of situations in your organization. We end the day with a one-hour facilitated problem session where attendees discuss problems relevant to their environment and the group helps discover solutions.

Day three is emergent: we take lessons from the group session on day two and combine them with the theme of the course. We pull from our bag of LST content and present material that is immediately relevant to your context. Some popular topics for day three are: tactics to help your group do regression testing faster and in a way that helps the business, a survey of test design techniques, test automation for non-technical people with live examples, and skilled bug reporting. There is much more; we are happy to customize content to the needs of your group.

Once the class is done, your testers are not left high and dry wondering how to apply everything they just learned. We provide lifetime support for all graduates of LST: students are given access to the Lean Software Testing Google group, where they can ask questions, discuss real work problems, and get support from LST instructors and graduates alike.

I can't wait for the next teaching opportunity. If you are interested in bringing Lean Software Testing to your organization, please get in touch with Matt Heusser or myself.

QAOrTheHighway 2014

QAOrTheHighway was held this past Tuesday in Dublin, Ohio, a city just outside of Columbus. For the most part, this was a very regional event; a lot of people I spoke with were from Columbus or very nearby. That was sort of surprising considering the speakers (keynote and presentation) they managed to get. Despite the small, regional conference feel, there were a pretty good number of attendees present.


I was planning to get there early on Monday, hang out with friends, and maybe work a little, but it turns out February is not a great time for flying. My connecting flight was cancelled and I didn't end up at the hotel until after 11:30pm. Luckily, a few friendly faces were still up to chat and catch up despite the late hour on the day before giving a talk.

The day started at 6:30 am with breakfast in the hotel lobby and then rushing off to lean coffee. Lean coffee is a testing conference fixture at this point. People ask for it by name. This one was really enjoyable as usual. I left with some useful notes on preparing to give a conference talk. Oh, did I mention that I’ll be speaking at CAST 2014 in New York, NY?

The conference was kicked off with a keynote by Joseph Ours. The theme was ways you could tell if you (the tester) were undervalued within your organization and some things you might do to change that. Joseph did a great job and presented some old ideas with a fresh perspective. Some things I thought were interesting were the way he thinks of testers as information brokers and something he calls the OURS method: Observe, Understand, Review, Serve. One important thing to note is that Joseph was not the scheduled keynote speaker; Keith Klain was scheduled to talk but could not make it, and Joseph gave a fantastic impromptu keynote.

The first session I went to was The New Tester Skill Set by Matthew Eakin. There was some stuff in this presentation that I didn't necessarily agree with, such as an emphasis on documentation, an emphasis on tools, and very little about testing skill or how that fits into agile, but I think Matt had some great points elsewhere, mainly in emphasizing keeping WIP very small at any given time, and also something he mentioned about how a powerful test might tell you specifically where a problem is.

Session two was JeanAnn Harrison's A Debate on the Merits of Mobile Software Test Automation. JeanAnn is a great speaker and conversationalist, and I thought this was a fun talk. It was a sort of Socratic talk; a lot of the content was posed as questions for the attendees to consider. I really enjoy this style. Some of the questions were around the ideas of defining best, defining need, and asking if the project is worth the investment. She also mentioned that she doesn't rely heavily on domain expertise in new hires because it can usually be picked up quickly. I generally agree with that sentiment.

Session three was about Disintegration Testing, presented by David Hoppe. This is another session that I thought was very good; the content was useful and engaging. David talked about the value of looking at problems in isolation as opposed to completely integrated products. He did this via stories about automotive repair, scenarios focusing on how a person might test their Amazon home page, and a live demo of a test program he wrote. The presentation had a little bit of everything, and the attendees really seemed to respond to and enjoy that.

My last session of the day was by Sachin Mulik, Four Questions Every CXX Should Ask About Testing. It started by modeling testing questions around Maslow's Hierarchy of Needs, which I thought was quite interesting. The four questions Sachin came up with based on the hierarchy were: Is the software not doing what it is not supposed to do? Is it secure, fast enough, reliable enough? Is it loved by its intended audience? Is it faster, cheaper? After this there was a bit on testing measurements with absolutely no foundation and no reference for where the numbers came from. I wish I had written down the measurements he referenced; one I do remember was defect removal efficiency. There were also some measurements that were somehow supposed to represent industry maturity. This part left me really dissatisfied with the talk.

After this was a closing keynote given by Matt Heusser. Regretfully, I had to miss this to catch a flight back to Nashville.

I was in Columbus for about 20 hours total, not even a full day. If I go next year, I’ll try to hit the 24 hour mark.

Practice makes progress: exercise for stickyminds

I get only 5 tests, and conveniently I see that there are five possible lots to choose from in the ParkCalc UI. The requirements are also in a nice tidy group of 5. Clearly this isn't the only way to structure the tests, but right now it seems like a decent place to start based on the information I have. Since I'm limited to 5 tests, I want them to be powerful. By powerful, I mean I want each test to have the possibility of exposing multiple issues, providing multiple types of information, and being representative of how a person might actually use the app (according to my imagination).

So here goes: here are my five tests.

Valet Parking
Requirements:
$18 per day
$12 for five hours or less

Test values:
Start: 01:32am 01/09/2014
End: 06:32am 01/13/2014

Total time: 4 days 5 hours 0 minutes

Expectation: (18*4)+12 = $84
Result: $84

Some stuff I think this test tells us:
non-round numbers for time handled
$18 / day calculation
$12 / <= 5 hours calculation

Short-term hourly parking
Requirements:
$2.00 first hour; $1.00 each additional 1/2 hour
$24.00 daily maximum

Test values:
Start: 01:32am 01/09/2014
End: 04:17am 01/10/2014

Total time: 1 day 2 hours 45 minutes

Expectation: 24+2+2+1 = $29
Result: $30

This seems to have exposed a bug. I didn't spend much time investigating what the bug is, but I did notice that the cost of full days and 1/2 hours is not as described in the spec. In the app, a full day costs $26 and each 1/2 hour (after the first hour) costs $2.
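To show how much the answer depends on the rounding rule, here is one plausible spec-based model. The round-up-partial-half-hours rule and the daily-maximum cap are my assumptions, not something the spec states:

```javascript
// One plausible reading of the short-term spec: $2.00 first hour,
// $1.00 each additional 1/2 hour, $24.00 daily maximum.
// Assumptions (mine, not the spec's): partial half hours round up, and
// the intra-day charge is capped at the daily maximum.
function shortTermFee(days, hours, minutes) {
  const extraMinutes = hours * 60 + minutes;
  let intraDay = 0;
  if (extraMinutes > 0) {
    intraDay = 2; // first hour
    const pastFirstHour = Math.max(0, extraMinutes - 60);
    intraDay += Math.ceil(pastFirstHour / 30); // $1 per additional 1/2 hour
  }
  return days * 24 + Math.min(intraDay, 24);
}
```

Interestingly, rounding partial half hours up yields $30 for the test values above, while rounding down yields $29, my original expectation. The day and half-hour prices I observed in the app are a separate issue, but the rounding ambiguity alone shows why the spec needs sharpening.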

Some stuff I think this test tells us:
full day calc
first hour calc
additional 1/2 hour calc
info about rounding to 1/2 hour

Long-term garage parking
Requirements:
$2.00 per hour
$13.00 daily maximum
$78.00 per week (7th day free)

Test values:
Start: 1:32am 1/9/2014
End: 4:02am 1/17/2014

Total time: 8 days 2 hours 30 minutes

Expectation: 78 + 13 + 6 = $97
Result: $97

Some stuff I think this test tells us:
week calc
day calc
hour calc
info about rounding for times between one hour
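The expectation above (78 + 13 + 6 = $97) can be sketched as code. This is my model of the spec, assuming partial hours round up and the leftover hourly charge is capped at the daily maximum:

```javascript
// Sketch of the long-term garage fee: $2.00/hour, $13.00 daily maximum,
// $78.00/week (7th day free). Assumptions: partial hours round up, and
// the leftover hourly charge is capped at the daily maximum.
function longTermGarageFee(days, hours, minutes) {
  const weeks = Math.floor(days / 7);
  const remainingDays = days % 7;
  const billedHours = hours + (minutes > 0 ? 1 : 0); // round partial hour up
  const hourly = Math.min(billedHours * 2, 13);
  return weeks * 78 + remainingDays * 13 + hourly;
}
```

The surface and economy lots below follow the same structure with different weekly and daily rates, so a tester could reuse this model with the constants swapped.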

Long Term Surface Parking
Requirements:
$2.00 per hour
$10.00 daily maximum
$60.00 per week (7th day free)

Test values:
Start: 1:32am 1/9/2014
End: 4:02am 1/17/2014

Total time: 8 days 2 hours 30 minutes

Expectation: 60 + 10 + 6 = $76
Result: $76

Some stuff I think this test tells us:
week calc
day calc
hour calc
info about rounding for times between one hour

Economy lot parking
Requirements:
$2.00 per hour
$9.00 daily maximum
$54.00 per week (7th day free)

Test values:
Start: 1:32am 1/9/2014
End: 4:02am 1/17/2014

Total time: 8 days 2 hours 30 minutes

Expectation: 54 + 9 + 6 = $69
Result: $69

Some stuff I think this test tells us:
week calc
day calc
hour calc
info about rounding for times between one hour

Recap:
So, 5 tests and one bug was the outcome here. I'm pretty pleased about finding a bug, of course, but I'm not so sure I would call the software unfit for use because of it. I suppose that all depends on the person using the software. My opinion is that the difference in calculation is slight, and if the actual amount charged at the lot is correct, then it's all good.

What skill(s) does this exercise help develop?
I’ll just make this a list of the stuff that jumps out at me:
domain testing
scenario testing
testing directed by a specific mission
note taking
thinking critically about how a bug affects a person(s) / thinking about value
describing the purpose(s) of a test
creating powerful tests

Book review: Two Books on writing

Like most people, I haven't written much of anything since the required English curriculum. That curriculum, more than anything, robbed me of a desire to write. Part of what I'm doing here at my personal blog and over at StickyMinds is learning to write things people will read and enjoy, and to have it not be so difficult every single time. To help get things moving, I've read a couple of books about writing: Weinberg on Writing: The Fieldstone Method and On Writing: A Memoir of the Craft.

There are many, many books on writing; it seems that every big-name author has one. These two were at the top of recommendations from friends, so that's where I started. Both of these books are fantastic; I really enjoyed them and recommend them to anyone who wants to try writing again. The two books are similar in some regards but very different in others.

On Writing by Stephen King begins with a story about his development as a writer from his youth to the present day. After the story, King goes on to talk about many aspects of writing he thinks are important. This book is written by and for fiction writers, but there are lots of ideas that will transfer to non-fiction writers as well. There are sections about adverb usage, dialog development, and story development. One of the parts that stuck with me the most was King's description of ideas as fossils that must be unearthed. First they must be located and excavated, but after that you have to delicately clean the ideas up with smaller picks and toothbrushes.

Weinberg's book, Weinberg on Writing: The Fieldstone Method, covers this excavation and unearthing process in detail. As a novice with no particular method to employ when writing, I found this book a lifesaver. The fieldstone method is the name Jerry uses for his process of finding, shaping, organizing, and forming ideas into something people will read.

This book draws a parallel between writing something and building a stone wall. Each idea is a stone that fits into the wall in some way. Stones come in all different shapes, sizes, and materials and each fits into a special place in a wall.

Weinberg on Writing: The Fieldstone Method was a great reference book for me. I didn't read the chapters in order, or even read the whole book; I had authentic writing problems to solve and was able to browse to the relevant chapter.

These books are both invaluable; I don't regret the purchases at all. One thing they won't do for you, however, is practice. Stephen King recommends writing 1000 words per day; I don't recall Weinberg making a recommendation in his book, but I'm sure he would recommend something. You don't get good at running by reading about it, and you don't get good at writing by reading about it.

Domain knowledge and software testing

Today's weekend testing mission was to test a mobile app that functioned as a sort of exercise documentation tool and determine how our domain knowledge (or lack thereof) affected our ability to test the product. The idea of domain knowledge is interesting to me because of how vague it is. I've been reading Rethinking Expertise, so I thought I would explore some of the relevant parts I've read thus far.

According to Collins & Evans, there are two main categories of expertise: ubiquitous tacit knowledge and specialist tacit knowledge. Ubiquitous tacit knowledge can be split into beer-mat knowledge, popular understanding, and primary source knowledge. Specialist tacit knowledge can be split into interactional expertise and contributory expertise. These categories create a sort of spectrum in which the farther to the right (contributory expertise) you are, the deeper your understanding and ability to perform a task.

Beer-mat knowledge is the type of thing you read in passing about some topic. Popular understanding is the type of expertise you gain from following the common media or popular books on a topic. Primary source knowledge is gained from reading primary sources of information.

On the specialist tacit knowledge side, interactional expertise is the ability to communicate in the language of a field. Contributory expertise is the ability to actually contribute to a field. It should be noted that without actually seeing a person do work, these two types of expertise are indistinguishable.

So, back to domain knowledge. In terms of software testing and what is commonly referred to as domain knowledge, what Collins and Evans present is a pretty good representation. The type of expertise needed will be dependent on the role the tester is filling and the type of software.

BUT we discovered today that domain knowledge greater than the first level, beer-mat knowledge, is not necessary to do meaningful testing. Domain knowledge can be beneficial for understanding the value specific people will get from a product, but a lack of it should not prevent meaningful work. In my opinion, developing skill as a tester is more important than domain knowledge, because those skills transfer to many different contexts, and most of the domain knowledge needed can be learned in a short period of time.

Weekend Testing Americas with Wikipedia

Join Weekend Testing Americas on Saturday, April 6th at 12pm CST for a special session with Chris McMahon from the Wikipedia test team. This session will focus on testing new features in Wikipedia designed to enhance the user experience for new users. A secondary goal of this session will be to spend time having a public discussion about how Wikipedia pages on software testing can be improved.

Here are some of the pages to consider for this discussion:
http://en.wikipedia.org/wiki/Software_testing
http://en.wikipedia.org/wiki/Black-box_testing
http://en.wikipedia.org/wiki/Exploratory_testing
http://en.wikipedia.org/wiki/Session-based_test

Test charters for the weekend testing sessions can be found here:
Test charter for session

Wikipedia uses radically open software and a community of testers to test a product used across the globe. Not only are the tools open source, but the implementation is completely open as well, from the source code to the Jenkins configuration to the real-time test results. Anyone may contribute, from performing exploratory testing and analyzing test failures to adding scenarios to be tested and writing code.

Here are some links of interest for participants:
Wikimedia feature testing
Wikimedia weekly testing goals
Wikipedia software deployments
Wikipedia mail list

As usual, to join the session send a message to weekendtestingamericas on skype just prior to the session.

WTA 37: experience report

I ventured into the latest edition of Weekend Testing Americas in a different role than the one I normally fill. This time it was as a facilitator, and a first-time facilitator at that. Since this was my first time, I learned a few important lessons and owe some gratitude to JeanAnn and Michael.

JeanAnn spent time with me refining my session idea into a practical skill development session. Over the span of a few days and some (many) emails, JeanAnn helped me expand a basic idea into the outline that we based the entire session on. She was a great help.

Michael gave me the opportunity to facilitate a community event that he normally facilitates. I have a feeling he used some boy scout-ish leadership methods.

Anyway, on to the experience report!

I broke the session into a few segments for a couple of reasons: this is how I have experienced weekend testing in the past, and thinking about a few attributes of usability created a fairly tidy way to divide the session up. I had some preconceived notions of how I could break up the time for each segment. My plan was to prompt for a topic (learnability, for example), give attendees 10 minutes or so to work in the software, and then round everyone up to discuss their testing and what was learned. I more or less did this, but might do it differently next time. My concern was getting through what I had planned for the session in a reasonable amount of time, but this didn't really allow for much think time.

In hindsight, I would have followed the energy of the group a little more, though this is difficult. It can be hard to tell whether a lull in the conversation means people are thinking or people are ready to move on. I think this may be a little easier to manage with a slightly larger group than we had.

One of the main ways I found myself adding to the session was in asking clarifying questions. I think it is important to be able to clearly express ideas, so asking questions about ambiguous words and phrases, as well as asking people to discuss a word or phrase to come to a shared meaning, was a big aspect of this session. You'll see examples of this in the transcript when we got to the topic of efficiency. I wish I had done this exercise purposefully with each topic before beginning the hands-on part, but alas. I was able to use lessons from my coaching session with Ann-Marie around the Socratic method as a way to encourage critical thinking.

Facilitating, like anything else, is a skill that must be practiced to be developed. I’m looking forward to future possibilities to do that.

A full transcript of the session can be found here.

Deliberate Practice: Test coaching from Ann-Marie Charrett

Friday I had the pleasure of receiving some coaching on the topic of software testing from Ann-Marie Charrett. We started by discussing the Socratic method as a method of coaching, defining it and giving examples. From there we moved on to topics such as defining the phrase "software testing" and defining what risk means. I won't describe the session word for word, but you can read the full transcript here if you like.

One of the interesting themes of test coaching is practicing the skill of describing an idea. For many reasons, this is one of the larger problems in testing. I think crafting ideas through critical thinking and with minimal reference material is a good exercise, and often true to the nature of software testing.

Ann-Marie does a fantastic job of coaching, her method is engaging and thought provoking. I’m looking forward to the next session.

Deliberate Practice: describe a stone

I was thinking of a few different ways to do this description. Should it be a narrative to give life to the stone? Should it be a list of attributes to give some sort of unbiased description? Maybe something else? In the end, a mixture of these made the most sense. Too much of one and it is difficult to see what is important in the description. Too much of the other and you get something that is boring and difficult to read. I think that is how it goes with describing software, too.

The goal here is to practice sharing a visual observation I made in a written format. I’ll denote some places where I know clarifications could be made, measurements for example. So here goes…

  • The overall shape is sort of like a candy corn
  • The base (wide part) is not completely flat. It angles at about 45 degrees
  • Because of this angle, one long side is about 1/2 inch longer (or shorter, respectively) than the other
  • The length is about 2 inches, maybe 2.5 (measurement could be taken)
  • At the widest point (base of the candy corn), it is maybe 1 inch across (measurement could be taken)
  • The stone is made mostly of a grayish material but there are small lighter flecks that appear in the surface
  • The color is a deep gray, maybe slate colored
  • Color is not consistent, there are some darker spots
  • The stone is smooth to the touch but not to the extent of feeling glassy
  • About 2/3 up the stone from the base near an edge, there is a deep, angled gouge in the stone. The gouge almost reminds me of a meteor strike that came from an angle
  • There is a shallow gouge on the same edge as the deep gouge. This is almost like a small chip in the surface of the stone
  • The stone feels pretty heavy for its size. It may weigh a quarter of a pound or so (measurement could be taken)
  • The stone feels very hard. I wonder what it would take to break or shatter it. I may use it in a bonsai pot eventually, so won’t try that out now
  • There is a hole at the top of the stone that is shaped like a snowman. (difficult to make out in the picture)
  • Edges of the stone are inconsistent. One long edge is rounded smooth, the other long edge is part rounded and part smooth, and the base edge is rounded at one end and gradually comes to a point at the other end
  • After describing this stone, I was thinking about how descriptions of software are often used to draw some sort of relationship. A relationship between software and the environment it runs in or maybe a relationship between software and a person. I think I could use this description and some liberal assumptions to talk about the relationship between the stone and its environment. This stone seems like it came from a body of water. Probably a body of moving water. Have you ever gone to a river front that was lined with small stones? Did you notice how smooth some of them were? This stone has similar characteristics so maybe it has origins in a river or lake.

    Exhibit A: The stone