Walk the Line, Draw the Line

Borders are amusing. I’ve always been intrigued by the imaginary line that cuts through an area and separates countries, people, ideas… I think it’s an abstraction that irrevocably alters the state of the world, yet the logic of the border sometimes escapes me. When I stand on the border between two countries (easy to do here – just drive down to Latvia, for instance), I get a ticklish feeling. It’s so puzzling that I have to remind myself that a step forward will take me to another country. It’s the same road that continues over there… it’s the same forest, the same grassland, it’s the same freakin’ air on both sides.

But it’s not really the same because of the border.

Instead of getting into a discussion of the nature of borders, I actually wanted to meditate on the border between the responsibilities of testers and engineers.

The (supposed) Problem

A while back, testers in my team reported a number of GUI-related issues: typos and missing words in label text, confusing and/or ungrammatical system messages, and other inconsistencies in the GUI (different terms used to refer to the same thing).

Since these issues were reported quite often, it stuck out that testers kept filing typos and other “minor” issues.

Usually, these problems are fixed quickly. There have been very few occasions where a report is sent back and forth because the engineer didn’t get it right the first time.

Soon enough, someone proposed that testers should fix these issues themselves. The reasoning was that these issues are so minor that engineers shouldn’t spend time fixing them.

The Implications

In my opinion, the aforementioned reasoning has a number of implications:

  • the value of testers’ time versus the value of engineers’ time
  • whether the severity of a problem justifies redrawing the lines between roles
  • different perceptions of the value of information
  • empathy for end users
  • assumptions about the level of knowledge

From the technical perspective, it is true that these issues are not difficult to fix, with the exception of the system messages, which can be tricky in our context. However, it seemed to me that I’d step into an unintended trap if I agreed. After thinking the issue through, I listed these implications and found that the proposal doesn’t really make that much sense. My final line of reasoning is much more mundane than the list above. But discussion first.

The implication about the difference in the cost of an engineer’s time and a tester’s time is nothing new. I don’t think it’s something to get worked up about anyway, since the past year has proven that testing and testers are a useful and productive part of the team. Since the problem I’ve described is (claimed to be) a borderline case, it’s my job as a team lead to explain why I think the border should be drawn differently.

I think the second item in the list is the most interesting one. If a serious bug is found, it is always clear and never debated who should fix it: the engineer. Now minor UI issues seem to motivate an argument for a shift in responsibilities and spark a border dispute. All of a sudden, the world (as we know it) must be remapped.

The source of the problem (engineers making mistakes, which IS a normal part of their job) remains the same, yet it is argued that it makes more sense for someone else to fix it. This implies that the roles of testers and engineers are not clearly and universally defined.

The voiced concern is that the engineers have so many other issues to fix in the code and that “it would take just five minutes” for the testers to do it themselves. The unvoiced assumption is that engineers wouldn’t want to spend time on it either. Well, that’s a matter of taste. I know engineers who think fixing the UI is a nice snack between serious code-rich meals. Some have even said that it’s nice to have some quick and easy tasks to do. I think this is human. Of course, if one gets bombarded with such reports, it can get annoying too.

Still, these problems come from somewhere, and it’s obvious that the engineers haven’t double-checked the UI and proofread the messages. Being asked to do so doesn’t feel that great, but that’s probably because their main focus is on the code (which makes sense; and no, we don’t have an in-house GUI specialist around). It’s not that testers try to rub these reports in the engineers’ faces – I closely monitor the tone of the reports, so that’s not a problem as far as I can tell. It’s simply the underlying idea that these mistakes don’t seem to be worth an engineer’s second look.

Personally, I have always been a bit of a GUI and terminology “nazi” (pun intended)… well, that’s another blog post. The point is that I tend to have an eye for the inconsistencies and I also pick up on messy terminology and word usage because of my background.

I have instilled the same attitude in my testers: it IS important to look at these things. It’s not that these GUI problems are showstoppers or that we neglect critical issues and risky scenarios.

Checking GUI consistency is an important aspect of testing in my context (software for medical clinics and practices). I often find that mistakes in the GUI are an unnecessary source of confusion not only for testers but also for technical writers. I also know that if the testers don’t catch these mistakes, the technical writers will, and their work process will suffer for it.

The value of information about the GUI seems to become clear only when you confront a system message that just doesn’t make any sense, or when you’re trying to filter a list but the filter fields don’t match the list columns… The program must speak the user’s language, be consistent in terminology, and be clear about what is communicated. In the end, though, the showstoppers tend to be the stars among the bug reports.

Yet the GUI problems are like drawing pins – they look deceptively harmless until you happen to sit on one.

Herewith I’d like to take the opportunity to refer to Kristjan Uba’s blog post (http://kristjanuba.wordpress.com/2011/12/06/when-small-things-are-huge/) which tackles a similar situation. I agree with his thoughts on small things being huge, but it’s a different cup of tea when you actually have to explain to someone why it’s important to pay attention.

Empathy for end users is an important implication because the motivation for fixing GUI issues (and for trying to avoid them in the first place) shouldn’t come from the fear that a tester will send another annoying report. It should be something one cares about. However, walking in the end user’s shoes is not easy, and it’s not simple to explain why it’s necessary if the perspectives on usability differ – especially when the perspectives on what actually constitutes an issue are very different. Oh yes, it’s very easy to get into a debate about whether the inconsistencies in the GUI actually are issues. I haven’t tried yet (and I think I should), but I could make an attempt to explain the problem using the idea that quality is value to some person, then tie it to the end user’s perspective.

Another unvoiced assumption is that the tester has the same level of knowledge as the engineer – the knowledge necessary for fixing the UI issues. Sure, a typo or a missing word in a sentence is a no-brainer (most of the time). However, changing the text of a system message or deciding which term is the correct one is a different deal.

Often the meaning that a system message seems to convey can be a bit different from what was intended. Sometimes engineers ask me what the best wording would be, and it can take me (even me!) 15–30 minutes to figure it out (oh, that’s another blog post…), especially if the message needs to ask the user something and do it in a few simple words with simple syntax.

So the tester would have to ask the engineer a few questions anyway. That wouldn’t save much time, would it?

Secondly, deciding which term is correct to use in the UI is also not a tester’s decision. Sometimes it can seem very clear that it should be one term and not the other. But I still check, because I can’t know for sure that there isn’t some hidden underlying logic somewhere… Misjudge the situation, and you may make the problem worse. Also, messages that look similar make it easy to jump to the conclusion that they should be identical. When I’ve checked, that hasn’t been the case. Another mistake avoided.

I usually escalate terminology issues myself if I see that the consistency problem is not localized to one part of the program but is actually widespread (or has the potential to become so). Also, the engineer is more likely to know whether a change in the UI should trigger other changes (which the tester may not have found out about).

Coming back to the “five-minute fix”: of course it’s not just five minutes. You fix it. You check whether you fixed it correctly. You check the file(s) in. You merge it to other versions. You compile the package. You download the packaged code. You check it again. Then you fix the next issue. Five minutes? Nope. Plus you get the “opportunity” to mess something else up. I also simply find it an unnecessary disruption of the tester’s work.

For me, the severity of the issue doesn’t justify nudging the border. Sure, testers should make fixing GUI issues easy (provide the correct wording, attach clear screenshots, etc.). But that’s the far end, and that’s the border I generally wouldn’t like to cross. It’s not that I’m too stubborn; it’s mostly that I’m aware of how stealthy shifts in responsibilities can have other interesting implications.

I also discussed this topic with my team. At first, fixing the GUI issues didn’t seem too bad an idea to them, but when they started considering the risks and possible complications… they too thought it might not be a good idea. In some cases (when a window’s GUI is seriously deficient), it may be better to make the changes instead of describing them, but then again, the changes should be reviewed by the engineer.

All in all, I think defining and discussing the borders between testers’ and engineers’ domains is fruitful and important. Instead of arguing over whether the borders are correct, though, I think it’s far more intriguing to look into why such discussions arise at all. Do they imply anything else about the mentality in the company? Where do they fall on the “normality” scale (if we employ one)? Are there any unvoiced assumptions that drive these discussions? Is it just a matter of egos?

Anyway, borders are there. It’s important to know why they are where they are. And you should know if you need a visa when you plan to cross one.


2 thoughts on “Walk the Line, Draw the Line”

  1. You got me thinking about the whole “cosmetic bugs turning ugly” problem.

    Here’s what I got so far:

    I think that “representativeness bias” applied in a nontraditional way might be a way to explain why small issues can group together and become a huge problem once they achieve critical mass:

    *Lessons Learned in Software Testing (lesson 39) defines representativeness bias as: “Small problems have small causes whereas large problems require large causes.”

    *James Bach once showed us (you were there) an example of “representativeness bias” in the form of a harmless-looking “OK” button that actually runs thousands of lines of code – so the testers didn’t think about testing it much, when they should have. That is “representativeness bias” as it affects the TESTER.

    *Now, we can think of representativeness bias as it affects the USER:
    Issues in the user interface mean issues in the code. A clumsy GUI thus indicates that the underlying software will fail me sooner or later.

    Now, the above statement is of course not necessarily true. That’s why it’s called a bias, after all. However, the fact that it isn’t true doesn’t mean the bias does not exist.

    I will keep thinking about the issue. More to come.

  2. In most cases I agree: testers should not fix even such small issues. The risk of messing up (and spending too much time) is too big.

    Then again, I can see that in some cases, when testers are part of development teams, having the knowledge and tools, they could fix some issues. No worries there.

    About this: “I agree with his thoughts on small things being huge, but it’s a different cup of tea when you actually have to explain to someone why it’s important to pay attention.”
    I don’t actually understand to whom you explain the importance of paying attention.
    But it seems that Rasmus has summarized the idea of ‘small things being big’ very well with this:

    “*Now, we can think of representativeness bias as it affects the USER:
    Issues in the user interface mean issues in the code. A clumsy GUI thus indicates that the underlying software will fail me sooner or later.”

    I’d extend: “Testers know that this is a small typo. The developers know that, but the users don’t. They will conclude that if the developer can’t even fix these small issues, who knows what else could be wrong. (And they’ll blame it on the software not having been tested – although that is a different problem.)”
