Borders are amusing. I’ve always been intrigued by the imaginary line that cuts through an area and separates countries, people, ideas… I think it’s an abstraction that irrevocably alters the state of the world but the logic of the border sometimes escapes me. When I have stood on the border between two countries (easy to do here – just drive down to Latvia, for instance), I get a ticklish feeling. It’s so puzzling that I have to imagine that a step forward will take me to another country. It’s the same road that continues over there… it’s the same forest, the same grassland, it’s the same freakin’ air on both sides.
But it’s not really the same because of the border.
Instead of getting into a discussion of the nature of borders, I actually wanted to meditate on the border between the responsibilities of testers and engineers.
The (supposed) Problem
A while back, testers in my team reported a number of GUI-related issues: typos and missing words in system messages and label text, confusing or ungrammatical sentences, and other inconsistencies in the GUI (different terms used to refer to the same thing).
These issues were reported often enough that it stuck out: testers kept filing reports about typos and other “minor” issues.
Usually, these problems are fixed quickly. There have been very few occasions where a report was sent back and forth because the engineer didn’t get it right the first time.
Soon enough the proposition was made that maybe testers should fix these issues themselves. The line of reasoning was that these issues are so minor that engineers shouldn’t spend time fixing them.
In my opinion, the aforementioned reasoning has a number of implications:
- the value of testers’ time versus the value of engineers’ time
- the severity of the problem motivates redrawing the lines between different roles
- different perceptions of the value of information
- empathy for end users
- assumptions about the level of knowledge
Personally, I have always been a bit of a GUI and terminology “nazi” (pun intended)… well, that’s another blog post. The point is that I tend to have an eye for the inconsistencies and I also pick up on messy terminology and word usage because of my background.
I have instilled the same attitude in my testers that it IS important to look at these things. It’s not that these GUI problems are showstoppers or that we wouldn’t look for critical issues and risky scenarios.
Checking consistency in the GUI is an important aspect of testing in my context (software for medical clinics and practices). I often find that mistakes in the GUI tend to be an unnecessary source of confusion not only for testers but also for technical writers. I also know that if the testers didn’t catch these mistakes, the technical writers would, and their work process would suffer for it.
The value of information about the GUI seems to become really clear only when you confront a system message that just doesn’t make any sense, or when you’re trying to filter a list but the filter fields don’t match the list columns… The program must speak the user’s language, be consistent in terminology, and be clear about what is communicated. In the end, the showstoppers tend to be the stars among the bug reports.
Yet the GUI problems are like drawing pins – they look deceptively harmless until you happen to sit on one.
Herewith I’d like to take the opportunity to refer to Kristjan Uba’s blog post (http://kristjanuba.wordpress.com/2011/12/06/when-small-things-are-huge/), which tackles a similar situation. I agree with his thoughts on small things being huge, but it’s a different cup of tea when you actually have to explain to someone why it’s important to pay attention.
Empathy for end users is an important implication because the motivation for fixing the GUI issues (and trying to avoid them in the first place) shouldn’t come from the fear that a tester will send another annoying report. It should be something one cares about. However, walking in the end user’s shoes is not easy to do, and it’s not a simple task to explain why it’s necessary if the perspectives on usability differ. Especially when the perspectives on what actually constitutes an issue are very different. Oh yes, it’s very easy to get into a debate about whether the inconsistencies in the GUI are issues at all. I haven’t tried yet (and I think I should), but I could make an attempt to explain the problem using the concept that quality is value to some people, and then tie it to the end user’s perspective.
Another unvoiced assumption is that the tester and the engineer have the same level of knowledge necessary for fixing the UI issues. Sure, a typo or a missing word in a sentence is a no-brainer (most of the time). However, changing the text in a system message or deciding which term is the correct one is a different deal.
Often the meaning that seems to be conveyed in a system message can be a bit different from what was intended. Engineers sometimes ask me what the best wording would be, and it can take me (even me!) 15-30 minutes to figure it out (oh, that’s another blog post…), especially if the message needs to ask the user something and do it with a few simple words in simple syntax.
So the tester would have to ask the engineer a few questions anyway. That wouldn’t save much time, would it?
Secondly, deciding which term is correct to use in the UI is also not a tester’s decision. Sometimes it can seem very clear that it should be one and not the other. But I still check, because I can’t know for sure that there isn’t some hidden underlying logic somewhere… Misjudge the situation, and you may make the problem worse. Also, messages sometimes seem similar, so it’s easy to jump to the conclusion that they should be the same. When I’ve checked, it has turned out not to be the case. Another mistake avoided.
I usually escalate the terminology issues myself if I see the consistency issue is not localized in one part of the program but is actually a widespread problem (or has the potential for becoming one). Also, the engineer is more likely to know if the change in the UI should trigger other changes (which the tester may not have found out about).
Coming back to the “five-minute fix”: of course it’s not just five minutes. You fix it. You check that you fixed it correctly. You check the file(s) in. You merge the fix to other versions. You compile the package. You download the packaged code. You check it again. Then you fix the next issue. Five minutes? Nope. Plus you have the “opportunity” to mess something else up. Also, I simply find it an unnecessary disruption to a tester’s work.
For me, the severity of the issue doesn’t justify nudging the border. It’s a sure thing that testers should make fixing the GUI issues easy (provide the correct wording, make clear screenshots of the issue, etc.). But that’s the far end, and that’s the border I generally wouldn’t like to cross. It’s not because I’m too stubborn; it’s mostly because I’m aware of how stealthy shifts in responsibilities can have other interesting implications.
I also discussed this topic with my team. At first, fixing the GUI issues didn’t seem too bad an idea to them, but when they started considering the risks and possible complications… they also thought it might not be a good idea. In some cases (when a window’s GUI is seriously deficient), it may be better to make the changes instead of describing them, but then again, the changes should be reviewed by the engineer.
All in all, I think defining and discussing the borders between testers’ and engineers’ domains is fruitful and important. Instead of arguing about whether the borders are correct, though, I think it’s far more intriguing to look into why such discussions arise at all. Do they imply anything else about the mentality in the company? Where does it fall on the “normality” scale (if we employ one)? Are there any unvoiced assumptions driving these discussions? Is it just a matter of egos?
Anyway, borders are there. It’s important to know why they are where they are. And you should know if you need a visa when you plan to cross one.