QAOrTheHighway 2014

QAOrTheHighway was held this past Tuesday in Dublin, Ohio, a city just outside of Columbus. For the most part, this was a very regional event. A lot of people I spoke with were from Columbus or very nearby. That was sort of surprising considering the speakers (keynote and presentation) they managed to get. Despite the small/regional conference feel, there were a pretty good number of attendees present.


I was planning to get there early on Monday, hang out with friends, and maybe work a little, but it turns out February is not a great time for flying. My connecting flight was cancelled and I didn’t end up at the hotel till after 11:30 pm. Luckily a few friendly faces were still up to chat and catch up despite the late hour on the day before giving a talk.

The day started at 6:30 am with breakfast in the hotel lobby and then rushing off to lean coffee. Lean coffee is a testing conference fixture at this point. People ask for it by name. This one was really enjoyable as usual. I left with some useful notes on preparing to give a conference talk. Oh, did I mention that I’ll be speaking at CAST 2014 in New York, NY?

The conference was kicked off with a keynote by Joseph Ours. The theme was some ways you could tell if you (the tester) were undervalued within your organization and some things you might do to change that. Joseph did a great job and presented some old ideas with a fresh perspective. Some things I thought were interesting were the way he thinks of testers as information brokers and also something he calls the OURS method: Observe, Understand, Review, Serve. One important thing to note is that Joseph was not the scheduled keynote speaker. Keith Klain was scheduled to talk but could not make it, and Joseph gave a fantastic impromptu keynote.

The first session I went to was The New Tester Skill Set by Matthew Eakin. There was some stuff in this presentation that I didn’t necessarily agree with, such as an emphasis on documentation, an emphasis on tools, and very little about testing skill or how it fits into agile, but I think Matt had some great points elsewhere. Mainly in emphasizing keeping WIP very small at any given time, and also something he mentioned about how a powerful test might tell you specifically where a problem is.

Session two was by JeanAnn Harrison on A Debate on the Merits of Mobile Software Test Automation. JeanAnn is a great speaker and conversationalist, and I thought this was a fun talk. This was sort of a Socratic talk; a lot of the content was posed as questions for the attendees to consider. I really enjoy this style. Some of the questions were around defining best, defining need, and asking if the project is worth the investment. She also mentioned that she doesn’t rely heavily on domain expertise in new hires because that can usually be picked up quickly. I generally agree with that sentiment.

Session three was about Disintegration Testing presented by David Hoppe. This is another session that I thought was very good. The content was useful and engaging. David talked about the value of looking at problems in isolation as opposed to completely integrated products. He did this via stories about automotive repair, scenarios focusing on how a person might test their Amazon home page, and also a live demo of a test program he wrote. The presentation had a little bit of everything, and the attendees really seemed to respond to and enjoy that.

My last session of the day was by Sachin Mulik, Four Questions Every CXX Should Ask About Testing. This started off by modeling testing questions around Maslow’s Hierarchy of Needs, which I thought was quite interesting. The four questions Sachin came up with based on the hierarchy were: Is the software not doing what it is not supposed to do? Is it secure, fast enough, reliable enough? Is it loved by its intended audience? Is it faster, cheaper? After this there was a bit on testing measurements with absolutely no foundation and no reference for where the numbers came from. I wish I had written down the measurements he referenced. One I do remember was defect removal efficiency. There were also some measurements that were somehow supposed to represent industry maturity. This part left me really dissatisfied with the talk.

After this was a closing keynote given by Matt Heusser. Regretfully, I had to miss this to catch a flight back to Nashville.

I was in Columbus for about 20 hours total, not even a full day. If I go next year, I’ll try to hit the 24 hour mark.

Practice makes progress: exercise for stickyminds

I get only 5 tests; conveniently, I see that there are five possible lots to choose from in the ParkCalc UI. The requirements are also in a nice tidy group of 5. Clearly, this isn’t the only way to structure the tests, but as of right now it seems like a decent place to start based on the information I have. Since I’m limited to 5 tests, I want these to be powerful. By powerful, I mean I want each test to have the possibility to expose multiple issues, provide multiple types of information, and be representative of how a person might actually use the app (according to my imagination).

So here goes; here are my five tests:

Valet Parking
$18 per day
$12 for five hours or less

Test values:
Start: 01:32am 01/09/2014
End: 06:32am 01/13/2014

Total time: 4 days 5 hours 0 minutes

Expectation: (18*4)+12 = $84
Result: $84

Some stuff I think this test tells us:
non-round numbers for time handled
$18 / day calculation
$12 / <= 5 hours calculation

Short-term hourly parking
$2.00 first hour; $1.00 each additional 1/2 hour
$24.00 daily maximum

Test values:
Start: 01:32am 01/09/2014
End: 04:17am 01/10/2014

Total time: 1 day 2 hours 45 minutes

Expectation: 24+2+2+1 = $29
Result: $30

This seems to have exposed a bug. I didn’t spend much time investigating what the bug is, but I did notice that the cost of full days and 1/2 hours is not what the spec describes. In the app, a full day costs $26 and each 1/2 hour (after the first hour) costs $2.

Some stuff I think this test tells us:
full day calc
first hour calc
additional 1/2 hour calc
info about rounding to 1/2 hour

Long-term garage parking
$2.00 per hour
$13.00 daily maximum
$78.00 per week (7th day free)

Test values:
Start: 1:32am 1/9/2014
End: 4:02am 1/17/2014

Total time: 8 days 2 hours 30 minutes

Expectation: 78 + 13 + 6 = $97
Result: $97

Some stuff I think this test tells us:
week calc
day calc
hour calc
info about rounding for times between one hour

Long Term Surface Parking
$2.00 per hour
$10.00 daily maximum
$60.00 per week (7th day free)

Test values:
Start: 1:32am 1/9/2014
End: 4:02am 1/17/2014

Total time: 8 days 2 hours 30 minutes

Expectation: 60 + 10 + 6 = $76
Result: $76

Some stuff I think this test tells us:
week calc
day calc
hour calc
info about rounding for times between one hour

Economy lot parking
$2.00 per hour
$9.00 daily maximum
$54.00 per week (7th day free)

Test values:
Start: 1:32am 1/9/2014
End: 4:02am 1/17/2014

Total time: 8 days 2 hours 30 minutes

Expectation: 54 + 9 + 6 = $69
Result: $69

Some stuff I think this test tells us:
week calc
day calc
hour calc
info about rounding for times between one hour
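The three weekly-rate lots share the same structure, so their expected values can be sketched with one small helper. This is a hypothetical illustration of my expectation math, not part of ParkCalc itself, and it assumes partial hours are rounded up to the next full hour and the hourly charge is capped at the daily maximum:

```python
import math

def weekly_lot_expectation(days, hours, weekly, daily_max, hourly=2.00):
    """Expected charge for a weekly-rate lot.

    Assumes: 7th day free is baked into the weekly rate, leftover days
    bill at the daily maximum, and partial hours round up to the next
    full hour, capped at the daily maximum.
    """
    weeks, extra_days = divmod(days, 7)
    hour_charge = min(math.ceil(hours) * hourly, daily_max)
    return weeks * weekly + extra_days * daily_max + hour_charge

# 8 days 2 hours 30 minutes in each lot:
print(weekly_lot_expectation(8, 2.5, weekly=78, daily_max=13))  # 97.0 (garage)
print(weekly_lot_expectation(8, 2.5, weekly=60, daily_max=10))  # 76.0 (surface)
print(weekly_lot_expectation(8, 2.5, weekly=54, daily_max=9))   # 69.0 (economy)
```

These match the $97, $76, and $69 expectations above, which is some reassurance that all three lots are exercising the same week/day/hour logic with different rates.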

So, 5 tests and one bug was the outcome here. I’m pretty pleased about finding a bug, of course, but I’m not so sure I would call the software unfit for use because of this. I suppose that all depends on the person using the software. My opinion is that the difference in calculation is slight, and if the actual amount charged at the lot is correct, then it’s all good.

What skill(s) does this exercise help develop?
I’ll just make this a list of the stuff that jumps out at me:
domain testing
scenario testing
testing directed by a specific mission
note taking
thinking critically about how a bug affects a person(s) / thinking about value
describing the purpose(s) of a test
creating powerful tests

This year and the next (2013 recap)

2013 was an exciting year for me as far as tester stuff goes. I jumped into a lot of new stuff that I hadn’t done before; that’s sort of my style. Being in an uncomfortable area seems like a good thing, it really pushes you (me) to explore and learn and grow.

I have been instructing BBST classes with AST for about a year now. I got an email from Michael Larsen one day asking if I had time to lead a class, oh and he was going to be away on vacation for the first week. I said yes of course. Why turn something fun like that down, really? That’s the brief story of how I started being a lead instructor for BBST classes.

This year marked my second CAST and first Test Retreat. For the second time, CAST was a formative event. The conference is really, really good; what’s even better is getting to hang out with friends after. This is where the magic is. Test Retreat is what led to me getting to WHOSE 2013. I did a session on tester skill lists, and it just happened that other people there were interested in the same thing. Who’d have thunk it. Test Retreat also spawned a book club that I’ve been enjoying a lot. Next year I’d like to go to two conferences; I’m not sure what the second will be though. Maybe QAOrTheHighway.

I worked with Michael and JeanAnn a lot this year while contributing sessions for WeekendTesting Americas. Weekend Testing is a great thing for so many people. Participants get a free place to work with groups on new testing topics once a month, facilitators get a place to pursue whatever topics are interesting to them at the moment. All it takes is a good idea and 2 hours of your Saturday. I’m looking forward to more of these next year.

Writing is not something I’m particularly comfortable with, most times it is a struggle. So, because of that, I started writing more often here. Writing and reading a lot really help to make writing easier. I also started to do some writing outside of my personal blog. I’ll be writing one other place, at least for a few months, early next year. I want to thank Matt Heusser for that opportunity.

I’m not really sure what 2014 will bring, but whatever it is I probably won’t say no. There are a few really interesting things in the air right now, and if any one of them panned out I’d be pretty happy.

book review: Taiichi Ohno Workplace Management

This book was a bit of a digression for me. I have read a lot of books this year, but very little about management topics. This is a big hole in my software tool kit, so the change-up was definitely welcome. Workplace Management is a book about managing the manufacturing process, well, sort of. Workplace Management is the text of a series of recorded interviews with Taiichi Ohno about work he did at Toyota over his career.

Despite the fact that this is mostly about manufacturing, there is a lot here that is applicable to the software world. Obviously I’m not the first person to say that. The phrases ‘lean’, ‘stop the line’, and ‘just in time’ are pretty common in most software dev shops now. This book isn’t an introduction to lean, or kanban, or kaizen concepts so if you’re looking for that you may want to start somewhere else.

Here are a few of the big ideas I took from the book:
Do kaizen when times are good
It is so easy to get lazy and mentally slow down when times are good. Life is comfortable, releases are happening consistently, paychecks are on time (start-ups can be tricky), and there is never any after-hours work. Taiichi thinks this is the most important time to figure out what you can tighten up and optimize. If you can be lean when times are fat, you should be better prepared to survive and thrive when times are lean. As an interesting aside, Ohno also mentioned that you must make people feel the squeeze for them to generate good ideas.

The wise mend their ways
There is a full chapter on this topic. To me this is about honesty, ethics, and virtue. People make mistakes in the best of situations. In the book, Ohno mentions that even smart people with good intentions will make mistakes and be wrong 3 out of 10 times. The phrase ‘the wise mend their ways’, to me, is about recognizing what isn’t working, being open to being wrong and failing sometimes, and trying something different immediately.

Direct observation rules
Throughout the book, I don’t recall Ohno emphasizing measurement at all. Maybe he did, I just don’t remember it. He did however tell many stories about being on the gemba (where the work happens), with customers, and with other companies in similar lines of business. He emphasized being there with the workers to observe, learn, and lead. The software world is mostly obsessed with measuring everything, so this was a refreshing point of view for me. My main concern here is about convincing people at higher pay grades that observation is a useful alternative, or at least supplement, to measurement.

Jidoka
Jidoka is a Japanese word (concept?) meaning automation with a human element. I recently wrote about this very topic for stickyminds, so seeing that this idea has a word in another language was interesting. In the Toyota system, this was embodied by having people watch automatic looms. If a problem occurred, the person would press a button to shut the machine off and prevent defective products from being made.

Mess with your employees a little bit
This part bugged me a little bit; I just chalk it up to cultural differences. There is a segment in the book telling a story about Taiichi calling a floor supervisor into his office. Once the supervisor made it there, Ohno scolded him for coming so quickly, telling him that if he were able to do that, then his employees must not need him. I think this sort of behavior is disingenuous, but again…cultural differences.

WHOSE 2013 recap

WHOSE 2013 was held at the offices of Hyland Software December 5-7, 2013.

A note first: this is my own experience of the workshop. I’m sure other experience reports will pop up soon, and they may have different, but perfectly valid, personal experiences to share.

Jess Lancaster, Jon Hagar, Doug Hoffman, Jeremiah Carey-Dressler, Nick Stefanski, Pete Walen, Rob Sabourin, David Hoppe, Chris George, Alessandra Moreira, Justin Rohrman, Matt Heusser (facilitator), Simon Peter Schrijver (facilitator), Erik Davis (facilitator)

Day One
Day one began with presentations from:
Jon Hagar on current industry resources for skill lists and education, such as ISO standards, IEEE standards, SWEBOK, and ISTQB.

Matt Heusser spoke on the goal of the workshop, defining what a skill is, and discussing whether and how we should model the skill list in any particular way. The working definition we came up with for a skill is any activity which can be isolated, demonstrated, evaluated, developed, and observed.

After presentations, we went around the room and did introductions and a brief statement of why we were there, what we planned to contribute, and what we hoped to take away from the event.

List creation
After this we began creating and categorizing the skill list. This activity took place by individuals writing single skills on index cards over the course of 45 minutes or so. I’m not sure how many cards we ended up creating, but I would guess it was over a hundred. Some were very similar, and some overlapped to a degree. We categorized the cards by theme (examples: social, tech, test design) and this categorized list became version .000001 of our skills inventory. Every skill noted was something someone in the room felt relates directly to the activity of software testing.

After this we formed groups and began to get the categorized list into a wiki. This initial version was a working definition of each skill and a few resources for where someone could go to learn about that skill. At the end of the day, each group presented the work we had done. We were mostly unhappy with what we had at that point.

Day Two
Day two began with a brief recap of the previous day and some talk about new tactics we could take. We “mobbed” one skill as a group and came up with a very good example of what would be the basis for the remaining work. This new style of skill list was significantly more time consuming to create, but, in my opinion, has far more value. We continued working in groups in this style for the remainder of the day with another recap at the end of the day. This work was mentally exhausting.

Day Three
Day three was a half day which ended at noon. We spent it closing the workshop. This consisted of talking about the remaining work (who was going to do it and how it would get done), and closing remarks.

Some personal notes
Being in a room full of smart people, all actively working side by side to improve the craft of software testing was an amazing experience for me. I have never participated in a facilitated LAWST style workshop before, originally this was intended to be in that format. Groups formed and gelled very quickly, so there was little to no need for facilitation. I heard comments by folks that have been to and facilitated many LAWST workshops that WHOSE was unlike any other workshop they had been to.

The CDT community has a reputation for being contentious and having a certain amount of infighting. I witnessed absolutely none of this. Groups had cordial, open discussions with disagreements without any negativity or personal attacks. I think that is an important thing to note.

Day two was long and exhausting; I hit a wall around 2 pm and struggled to produce good work after that despite a constant flow of coffee. This kind of work is far more difficult than I imagined prior to the workshop. A monumental effort was put in over the three days and I’m proud of what was created. It will take some time to get the work into a more complete, presentable state, but I’m looking forward to that day. Feedback and contribution from the testing community will make this living document even more valuable.

Book review: Two Books on writing

Like most people, I haven’t written much of anything since the required English curriculum. That curriculum, more than anything, robbed me of a desire to write. Part of what I’m doing here at my personal blog and over at StickyMinds, is a lesson in learning to write things people will read and enjoy, but also to have it not be so difficult every single time. To help get things moving, I’ve read a couple books about writing, Weinberg on Writing: The Fieldstone Method, and On Writing: A Memoir of the Craft.

There are many, many books on writing; it seems that every big-name author has one. These two were at the top of recommendations from friends, so that’s where I started. Both of these books are fantastic; I really enjoyed them and recommend them to anyone that wants to try writing again. These two books are similar in some regards but very different in others.

On Writing by Stephen King begins with a story about his development as a writer from his youth to the present day. After the story, King goes on to talk about many aspects of writing he thinks are important. This book is written by and for fiction writers, but there are lots of ideas that will transfer to non-fiction writers as well. There are sections about adverb usage, dialog development, and story development. One of the parts that stuck with me the most was King’s description of ideas as fossils that must be unearthed. First they must be located and excavated, but after that you have to delicately clean the ideas up with smaller picks and toothbrushes.

Weinberg’s book, Weinberg on Writing: The Fieldstone Method covers this excavation and unearthing process in detail. As a novice with no particular method to employ when writing, this book was a life saver. The fieldstone method is a method Jerry uses to describe the process of finding, shaping, organizing, and forming ideas into something people will read.

This book draws a parallel between writing something and building a stone wall. Each idea is a stone that fits into the wall in some way. Stones come in all different shapes, sizes, and materials and each fits into a special place in a wall.

Weinberg on Writing: The Fieldstone Method was a great reference book for me. I didn’t read the chapters in order, or even read the whole book. I had authentic writing problems to solve and was able to browse to the relevant chapter.

These books are both invaluable, I don’t regret the purchase at all. One thing they won’t do for you however, is practice. Stephen King recommends writing 1000 words per day in his book, I don’t recall Weinberg making a recommendation in his book but I’m sure he would recommend something. You don’t get good at running by reading about it and you don’t get good at writing by reading about it.

nasqpros Nov. meetup: Manual to automated shop

The quarterly Nashville Quality Assurance meetup happened this afternoon. The topic for today’s talk was steps an organization can take to move from a primarily manual shop to a primarily automated shop. These sorts of talks usually give me a pretty visceral reaction, and the pages of (biased) notes I took today probably reflect that pretty accurately.

I wanted to talk to the speaker but dominating his time didn’t seem appropriate for this sort of venue. So, as a result of that I’m writing this.

Here are some of the things that bugged me the most:

Not sharing your story
I have a hard time respecting talks that make sweeping generalizations and endorse specific ways of doing things without sharing failures, detailed context, and lessons learned along the way. This style of presentation gives me the impression that they are trying to share some sort of gospel. Phrases such as “this is the natural spot for X”, “This is how to get things right”, and “center of excellence” really drive that feeling home for me.

Not mentioning the gruesome details
The fellas that did this talk come from a Nashville company called Asurion. This company employs a veritable army of testers and developers to create and maintain the product. I feel like the fact that a large staff is needed to create and maintain this sort of high-volume automation was conveniently ignored. DSLs don’t get created magically, software tooling doesn’t happen magically. All of this takes significant time and development resources.

Rehashing folk wisdom
The folk wisdom is bullshit…whew I feel better now.
Here are some of the classics from today:
1 – X% of your tests should be automated.
Which percentage? Why? Why not the other percentage?

2 – Automation makes testing faster, more scalable, and able to be performed unattended.
No….just no. There is development time, maintenance time on the scripts, product, and framework, investigation for test failures, false positives, timing issues….the list goes on.

3 – The people writing code all day to make this sort of automation happen are testers.
The best I can give you here is a maybe. *Some* testers have the fluency and care enough about programming to do this. What I have seen most often though is a programmer(s) with heavy interest in tool smithing. Either way, these people will be writing code for the majority of their day be it test code or code to facilitate that.

4 – Test repeatability creates return on investment.
What is the investment? How did you measure that? What is the return? How did you measure that? Unwillingness to talk about this in terms of value as opposed to cost is baffling. If repeatability creates ROI, does that mean a test that is run once and never repeated has no return on investment?

5 – X scripts should be created per day per person
The reasoning for this aside from time accounting escapes me. I suspect it is linked to the fascination with ROI and ignoring value.

Tool fetishism
Spending time talking about all the tools you have used, all the tools you currently use, how much money you saved by switching from X to Y is uninteresting. Especially when you don’t frame the talk around what problems you were solving. Especially when you don’t show any examples of what you are actually working on.

This is a lot of stuff so I feel the need to mention a couple things. I don’t hate automation, it can be really useful in some situations. I really like the phrase tool-aided testing that has been gaining some traction lately in the community. This encapsulates the sort of automation described above as well as any other action we perform in which we use tools to amplify our ability to do something.

The speakers were charismatic and seemed like nice people. There, I said something nice 🙂

You can’t go home again

This train of thought starts with Pete Walen posting an “overheard” tweet.

I don’t have any experience making wine but I have made beer a few times. The thing about beer making and testing is that there are innumerable variables. If you buy a kit or a set of ingredients based on a recipe, the best case scenario is that you come *close* to what the recipe author intended. There are so many things to consider. The mineral content of your water, the yeast you’ve managed to procure, the harvest conditions of your grain and hops, temperatures at various points during the process.

This immediately brought to mind the phrase “you can’t go home again”. The general idea here is that experience changes your perspective in ways that you can’t account for. This makes it difficult to leave the place you grew up and then return and still call it home.

I think this idea applies to testing too, and that made me think of a You Can’t Go Home Again heuristic. If you have run a test once, chances are that if you try to run it again you will be running a new test. The world is conspiring against your ability to do things in an identical way twice. Maybe you unintentionally take an alternate path through the software, maybe the timing of your actions is different, or maybe there are system changes that you are completely unaware of. Any of these things cause you to perform a fundamentally different test and learn novel things about your software.

Nashville Lean Coffee #1

I facilitated a lean coffee meetup for testers in Nashville this past Wednesday. This was my first time doing something like this so it was a pretty interesting experience. The participants seemed pretty happy so I’ll call it a success! I wish I had taken more pictures but I’m no live blogger. It is tough to get involved and do that kind of active observation.

Date: 10/16/2013
Participants: Read Blankenship, Alexa Riter, Michael Alexander, Kendall Joseph, Justin Rohrman


Lessons Learned:
I really like doing this in the morning because people can be hesitant to cut out some of their evening free time for things like this. The problem with mornings is that 7 am comes early, and it seems even earlier when it happens to rain all night and morning. The rain caused some people to be a little late, but it was no big deal and we rolled with it just fine. I suppose you can’t please all the people all the time, so any time will be inconvenient for someone. I may take a poll on when the next event should be.

Strength in Numbers
This time there were 5 people total participating, including me. I was hesitant to add my cards to the pile because I was trying to play the role of the facilitator and let everyone get their ideas on the table, but in hindsight more cards may not have been a bad thing. There is definitely a benefit to having more people at lean coffee. You get more variety in topics and you get more people with varied experience to talk about each topic. Some things I did like about the smaller group were the intimacy and how easy it is to watch the energy level with fewer people. We were able to get in a groove and ended up not having to vote much on whether to go on to the next topic.

Problem Solving is Energizing
Once we got into talking about each topic the group really got into it. Realizing that we can in fact solve our own problems is really empowering. Also, realizing that the things we were working on at 7 am could be used at 8:30 when everyone made it into the office was energizing.

Facilitating is rewarding
I really like facilitating events like this, the people that come in and really make the event aren’t the only ones benefiting. I benefit because it makes me feel good to help others help themselves and also I get to hone some facilitating and problem solving skills. The lean coffee format helps to create an environment where people feel comfortable talking about problems they have and things they need help with. Also, it helps to create an environment where people that care can give the best kind of help, solicited help.

I’ll probably try to do another one of these in a month or so. If you are a tester in the Nashville area I hope you’ll join us!

Does your opinion matter?

How many times have you been testing something for work or some extracurricular activity and found something you thought to be really super duper extra important, only to have no one care? I’ve done it a few times; most folks I know have. This is a difficult situation to be in. Rejection is tough. No one likes to spend time reproducing, isolating, and whatnot just to find out that no one cares about your issue. That was a RIMGEA reference for my bug advocacy friends out there.

In my opinion, this is one of the things at the heart of the context-driven community: finding relevant information in a timely manner and presenting it to people who care. Here is a great anecdote from Perze.

Direct communication with the people who rely on your information certainly goes a long way toward making your work worthwhile. Things like clarifying your information objectives, your mission, and your test charters have been really helpful to me. Have you ever been in this kind of situation? How did you resolve the conflict, and how did you reduce the chances it would happen again?