Testing gets disqualified

Last week I read an article (in Dutch) which presented some negative views on testing and on IT people in general. “Just like other professionals they don’t like to take the blame.” Technology is unreliable and the – superior – human always has to fix it. The writer states that the root cause of 80 percent of incidents is human failure. This seems strange; since the technology is developed and used by humans, I’d say that this should be 100 percent… But the writer’s reasoning takes even stranger turns. “From humans we can expect that a failure is made every now and then, but from machines we may expect more.” I think that the writer underestimates humans in this case. Humans can recognize when something goes wrong, and when they see it go wrong, they can correct it, whereas a machine cannot think.

“All people involved seem to be stuck in suboptimal processes and don’t make use of modern technology.” Some people indeed are stuck in processes, and in some areas I think that is a good thing. But more and more development is Agile and will adapt. The next point actually felt insulting: “Instead of ‘fooling around’, IT people rather use the word ‘testing’.” So here we have a ‘manager’ disqualifying the work I love to do. I don’t get the feeling that he has any idea what he is talking about.

“Let IT do what it’s good at: automate.” This is the only paragraph of the article I could partly agree with. Automation, more specifically test automation, is used too little in software development. I also see that in a lot of development processes people say that automation is too difficult, or too expensive. By now we should understand that investing in (test) automation costs money up front, but if you choose well what to automate, it will pay itself back.
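As a small illustration of “choosing well what to automate” (a hypothetical Python sketch, not from the original article: the pricing helper and its check are made up), a piece of logic that is small, stable and deterministic is cheap to automate, and the check then pays back on every single run:

```python
def vat_inclusive(price, rate=0.21):
    # Hypothetical pricing helper: small, stable, deterministic logic
    # like this is a good first candidate for test automation.
    return round(price * (1 + rate), 2)

def test_vat_inclusive():
    # An automated check costs minutes to write once, then guards
    # against regressions on every build at no extra cost.
    assert vat_inclusive(100.0) == 121.0
    assert vat_inclusive(0.0) == 0.0

test_vat_inclusive()
```

The payback comes from repetition: a manual check of the same rule costs the same effort every release, while the automated version runs for free from the second execution onward.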

I read the article just before I went away for a couple of days. I actually expected a lot of responses about how wrong the author was, and about how he made statements that are clearly his opinion, not based on any thorough research. Strangely enough, it got positive responses… So I just had to write about it.

12 comments on “Testing gets disqualified”

  1. Ahh, but this manager is talking about “self-learning systems”; we should automate things through AI… sure… as if that is reliable.

    Great post Jeroen. If I had read that article on the site where it was originally published, I believe I would have bashed this guy; however, having been a manager for quite some years myself, I prefer to shun a site for managers. I’d rather talk to people who understand what they are doing.
    The article gives me the feeling of a manager who does not feel appreciated and does not understand what the rest of the people in IT (i.e. those with some technical knowledge and understanding) are truly doing.

  2. Pingback: Five Blogs – 19 May 2012 « 5blogs

  3. Good post. I first read the Dutch article, which was needed to understand what you have written. For what it’s worth, I placed a reaction on that article.

    • Now that I read my post once again, I see that it could have been better with some more explanation of the article I refer to… I usually read my own posts a couple of times before hitting submit, but this time I didn’t. I wrote it all in one pass and submitted it, since I just needed to get it out there.

      Good reply on the original post btw.

  4. Hi Jeroen,

    I find it disturbing that in so many places test automation is deemed to be too expensive, complicated or complex, or plain “too hard” (whatever that means), because no one seems to bother thinking about the cost of *not* automating.

    This cost (debt with a rather high interest rate, as far as I can see) includes, but is likely not limited to: bad design and its repercussions (hard to test, leading to more defects that are harder to fix and retest), and less flexibility of the system (additional features may be hard to implement). This last point seems especially important, since software is supposed to be easy to adapt to new requirements (hence the soft in software, in my opinion).

    • Hi Stephan,

      I completely agree with you. I can get really annoyed when people say that it is too difficult to automate or that it is too expensive. Normally I just do it and then show that it is possible. After the first part, it’s easy to derive the business case and usually the business is easily persuaded after the first signs that it can be done. Test automation should be used more often and the only way to achieve that is just to do it.

      • Ironically, a similar argument was once made against the use of formal proofs. They were deemed too difficult or unable to cope with the complicated use cases prevalent in business software. Edsger Dijkstra fought long and hard against the dogma that testing ‘was enough’ (the operational method).

        Of course we shouldn’t abolish testing, as we should never assume our own infallibility. However, test-driven design and Agile methodologies (particularly those with a capital A) severely distract from the roots of software engineering: the ability to prove the correctness of our algorithms.

        We have been absorbed by the unquenchable addiction of the average westerner: quantity over quality.

        I hope you will continue by reading this short text by Edsger Dijkstra: http://www.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/EWD1012.html

  5. Thank you for your reply, Edwin. Dijkstra’s text is interesting to read. I actually disagree with the statement you make about Agile. In the Agile projects I took part in, we made sure to model the system. The point is that stakeholders don’t want proof of algorithms, only proof that they get what they want in terms of functionality. At the system testing and unit testing level, we as testers need to prove the algorithms. At higher levels, nearer to the end customer, you will not see these proofs of algorithms.

    • It is difficult to pose a cogent response to the inexact wording of ‘in terms of functionality’. At what point does a provable system degrade into a functional description?

      Stakeholders want a proof of the algorithms driving the solutions to their functional requirements as much as they want a plethora of test reports. All they want, and need, is a solution to their business problem. Perhaps it sells better to shower them with reports, but this practice does not necessarily make the solution a better one.

      Anyway, one does not simply ‘prove’ algorithms by testing them. The combinatorial state explosion is simply too large to cover all edge cases. However, by asserting the correctness of one algorithm, it becomes possible to combine it with another and, in turn, prove the correctness of the combination. This essentially shields you from the underlying complexity, i.e. it provides a new layer to work from.

      I’m not saying this is easy. Neither am I saying it will work at all levels. Perhaps it is too academic. Maybe we have been doing alright all along. Currently I see an inflow of bug reports, inadequate and half-baked solutions, and developers struggling to wade against an ever-increasing stream of complexity. And we’re managing, true… but at what cost? Agile is driving the few remaining intelligent and passionate developers out of the field with senseless metrics such as velocity and hourly estimates, conveniently forgetting about the debt of code complexity caused by ‘easy’ solutions. Using an endless back-and-forth of testing, we try to knead our software into what it already is.

      My outlook on Agile and iterative testing is somewhat dark. Not because I think they are useless, quite the contrary, but because they are used as dogma.
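      The combinatorial state explosion mentioned above is easy to quantify. A minimal sketch (Python assumed; the `exhaustive_sort_check` helper is illustrative, not from any of the comments) shows that exhaustively testing even a trivial sorting routine over all permutations of n distinct items requires n! cases:

```python
import itertools
import math

def is_sorted(xs):
    # True when xs is in non-decreasing order.
    return all(a <= b for a, b in zip(xs, xs[1:]))

def exhaustive_sort_check(n):
    # Test Python's built-in sorted() against every permutation of
    # n distinct items. The case count grows as n!, so this is
    # tractable only for tiny n.
    cases = 0
    for perm in itertools.permutations(range(n)):
        assert is_sorted(sorted(perm))
        cases += 1
    return cases

# n = 8 already needs 40320 cases; n = 20 would need about 2.4e18,
# which is why exhaustive testing cannot replace a correctness argument.
print(exhaustive_sort_check(8) == math.factorial(8))  # prints True
```

      Proving the algorithm correct once, and then composing it with other proven components, sidesteps this growth entirely, which is the layering argument made above.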

      • Edwin, I know how difficult it is to combine proofs. Like you, I also studied at the University of Twente. I graduated from the Formal Methods and Tools group, where I needed to prove some coverage metrics. I think the level of proof you are aiming at is indeed academic, but my hope is that it will become more incorporated into real-world scenarios.

        There is a lot of difference in how testing is applied within Agile, and I can understand your point of view from a couple of situations that I have seen. Keep in mind that in a lot of cases the development is called Agile, but in reality it is not… Personally I feel very positive about Agile, but people need to be aware of the context they are in. Just applying everything that has worked in one situation might not work in a new situation. So it is a cognitive process every time.
