How to test refactoring?

This article has been published in Agile Record number 16.

A fundamental part of the Agile methodology is refactoring: rewriting small sections of code to be functionally equivalent but of better quality. Don’t forget to test the refactoring! What do you test? The answer is simple: you test if the code really is functionally equivalent.

To test the rewritten code, you use the unit tests that accompanied the original code. But does unit testing alone prove that you really have functionally equivalent code? No! While refactoring, developers often change more than just the complexity and quality of the code. A tester’s nightmare… It appears to be a small change, but the code is quite likely used in several parts of the solution. So you must perform a regression test after testing the changed code itself. First I will describe how to test the current and rewritten code with unit tests. I have identified three scenarios that occur in practice. The code that needs refactoring has:

  • no unit tests;
  • bad unit tests;
  • good unit tests.

After these scenarios I will go into the regression test and explain the importance of proper regression testing while refactoring.

Unit test the current and rewritten code

Unit tests verify small sections of the code. Ideally each test is independent, and stubs and drivers are used to get control over the environment. Since refactoring deals with small sections of code, unit tests provide the correct scope.

Refactor code that has no existing unit tests

When you work with very old code, you generally do not have unit tests. So can you just start refactoring? No: first add unit tests to the existing code. After refactoring, these unit tests should still pass. In this way you improve the maintainability of the code as well as its quality. This is a complex task. First you need to find out what the functionality of the code is. Then you need to think of test cases that properly cover that functionality. To discover the functionality, you provide several inputs to the code and observe the outputs. Functional equivalence is proven when the rewritten code is input/output conformant to the original code.
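
A common way to record this observed behaviour is a characterization test. Below is a minimal sketch in the style of the later samples; the PriceCalculator class, its tax rate and the recorded values are purely hypothetical and only illustrate the idea.

import junit.framework.TestCase;

// Stands in for the legacy code under test (hypothetical).
class PriceCalculator {
    double priceIncludingTax(double netPrice) {
        return netPrice * 1.19; // existing behaviour we want to preserve
    }
}

// Characterization tests: the expected values are simply the outputs the
// current code produces, recorded before the refactoring starts.
public class PriceCalculatorCharacterizationTest extends TestCase {

    public void testPriceIncludingTax_typicalInput() {
        PriceCalculator sut = new PriceCalculator();
        // 119.0 is the observed output of today's code, not a value from a spec.
        assertEquals(119.0, sut.priceIncludingTax(100.0), 0.001);
    }

    public void testPriceIncludingTax_zeroInput() {
        PriceCalculator sut = new PriceCalculator();
        assertEquals(0.0, sut.priceIncludingTax(0.0), 0.001);
    }
}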

Refactor to increase the quality of the existing unit tests

You also see code that contains badly designed unit tests, for example a unit test that verifies multiple scenarios at once. Usually this is caused by not properly decoupling the code from its dependencies (Code sample 1). This is undesired behavior; the test must not depend on the state of the environment. A solution is to refactor the code to support substitutable dependencies, which allows the test to use a test stub or mock object. As shown in Code sample 2, the unit test is split into three unit tests that test the three scenarios separately. The rewritten code has a configurable time provider; the test now uses its own time provider and has complete control over the environment.
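
To make the configurable time provider concrete, the refactored production code could look roughly like the sketch below. This is an assumption based on the tests in Code sample 2, not the actual code from xunitpatterns.com, and each type would normally live in its own file.

import java.text.SimpleDateFormat;
import java.util.Calendar;

// The time source is now an injectable dependency instead of a hard-coded call.
interface TimeProvider {
    Calendar getTime();
}

class DefaultTimeProvider implements TimeProvider {
    public Calendar getTime() {
        return Calendar.getInstance(); // the real system time, used in production
    }
}

class TimeDisplay {
    private TimeProvider timeProvider = new DefaultTimeProvider(); // production default

    public void setTimeProvider(TimeProvider provider) {
        this.timeProvider = provider; // a test injects its stub here
    }

    public String getCurrentTimeAsHtmlFragment() {
        Calendar time = timeProvider.getTime();
        if (time.get(Calendar.HOUR_OF_DAY) == 0 && time.get(Calendar.MINUTE) <= 1) {
            return "Midnight";
        }
        if (time.get(Calendar.HOUR_OF_DAY) == 12 && time.get(Calendar.MINUTE) == 0) {
            return "Noon";
        }
        return new SimpleDateFormat("h:mm a").format(time.getTime());
    }
}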

Treat unit tests as code

The last situation deals with a piece of code that has good unit tests. Just refactor and then you are done, right? Wrong! When you refactor this code, the tests will pass if you refactor correctly. But do not forget to check the validity of the tests. You might think the tests are good, but the unit tests are code too. Every refactoring action therefore includes a check, and possibly a refactoring, of the unit tests.

Perform a regression test

After unit testing the code, you need to verify that the code works in the context of the solution. Remember: in Agile you must provide business value. To show the value, you need to perform a test that relates to the business. A regression test is designed to test the important flows through the solution, and these flows embody the business value. Do you run a complete regression test each time you refactor? That depends on the risks and on the scalability of the regression test.

Create a scalable regression test

The use case is a common way to describe small parts of functionality, which makes it a great way to partition your regression test: create a small set of regression test cases to cover each use case. When you use proper version management for the code, it is easy to see which part of the code belongs to which use case. Whenever a section of code is changed, you can see which use case it belongs to and then execute the regression tests for that use case. When code is reused (another good practice), you target a group of use cases. I generally use mindmaps for tracking dependencies within my projects; the mindmaps provide insight into which code is used by which use cases. This requires a disciplined development team: when you reuse existing code, you need to update the mindmap!
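
One possible way to organise such a partitioned regression test is sketched below, using JUnit 4 categories; the use cases, class names and tests are made up for illustration and do not come from the article.

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

public class RegressionByUseCase {

    // Marker interfaces standing for the use cases in the mindmap.
    public interface CheckoutUseCase {}
    public interface LoginUseCase {}

    public static class CheckoutRegressionTest {
        @Test
        @Category(CheckoutUseCase.class)
        public void customerCanPayForAFilledBasket() {
            // regression check for the main checkout flow goes here
        }
    }

    public static class LoginRegressionTest {
        @Test
        @Category(LoginUseCase.class)
        public void customerCanLogInWithValidCredentials() {
            // regression check for the main login flow goes here
        }
    }

    // Running this suite executes only the tests tagged with the affected use case.
    @RunWith(Categories.class)
    @Categories.IncludeCategory(CheckoutUseCase.class)
    @Suite.SuiteClasses({CheckoutRegressionTest.class, LoginRegressionTest.class})
    public static class CheckoutRegressionSuite {}
}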

Expand the scope of the regression test

Do you test enough when you scale the regression test to the scope determined in the mindmap? No, the regression test serves a larger goal: you check whether the (in theory) unaffected areas of the solution are really unaffected. So you test the part that is affected by the refactoring, and you test the main flows through the solution. The flows that provide value to the customer are the most important.

Refactoring requires testing

Every change in the code needs to be tested. Therefore testing is required when refactoring. You test the changes at different levels. Since a small section of code is changed, unit testing seems the most fitting level. But do not forget the business value! Regression testing is of vital importance for the business.

  • Refactoring requires testing
  • Testing refactoring requires a good understanding of the code
  • A good understanding of the code requires a disciplined development team
  • A disciplined development team refactors

Code sample 1: Unit test depending on the environment

From http://xunitpatterns.com/

public void testDisplayCurrentTime_whenever() {
      // fixture setup
      TimeDisplay sut = new TimeDisplay();
      // Exercise sut
      String result = sut.getCurrentTimeAsHtmlFragment();
      // Verify outcome
      Calendar time = new DefaultTimeProvider().getTime();
      StringBuffer expectedTime = new StringBuffer();
      if ((time.get(Calendar.HOUR_OF_DAY) == 0)
         && (time.get(Calendar.MINUTE) <= 1)) {
         expectedTime.append( "Midnight");
      } else if ((time.get(Calendar.HOUR_OF_DAY) == 12)
                  && (time.get(Calendar.MINUTE) == 0)) { // noon
         expectedTime.append("Noon");
      } else {
         SimpleDateFormat fr = new SimpleDateFormat("h:mm a");
         expectedTime.append(fr.format(time.getTime()));
      }
      assertEquals(expectedTime.toString(), result);
}

Code sample 2: Independent unit tests

From http://xunitpatterns.com/

public void testDisplayCurrentTime_AtMidnight() throws Exception {
      // Fixture setup:
      TimeProviderTestStub tpStub = new TimeProviderTestStub();
      tpStub.setHours(0);
      tpStub.setMinutes(0);
      // Instantiate SUT:
      TimeDisplay sut = new TimeDisplay();
      sut.setTimeProvider(tpStub);
      // Exercise sut
      String result = sut.getCurrentTimeAsHtmlFragment();
      // Verify outcome
      String expectedTimeString = "Midnight";
      assertEquals("Midnight", expectedTimeString, result);
}

public void testDisplayCurrentTime_AtNoon() throws Exception {
      // Fixture setup:
      TimeProviderTestStub tpStub = new TimeProviderTestStub();
      tpStub.setHours(12);
      tpStub.setMinutes(0);
      // Instantiate SUT:
      TimeDisplay sut = new TimeDisplay();
      sut.setTimeProvider(tpStub);
      // Exercise sut
      String result = sut.getCurrentTimeAsHtmlFragment();
      // Verify outcome
      String expectedTimeString = "Noon";
      assertEquals("Noon", expectedTimeString, result);
}

public void testDisplayCurrentTime_AtNonSpecialTime() throws Exception {
      // Fixture setup:
      TimeProviderTestStub tpStub = new TimeProviderTestStub();
      tpStub.setHours(7);
      tpStub.setMinutes(25);
      // Instantiate SUT:
      TimeDisplay sut = new TimeDisplay();
      sut.setTimeProvider(tpStub);
      // Exercise sut
      String result = sut.getCurrentTimeAsHtmlFragment();
      // Verify outcome
      String expectedTimeString = "7:25 AM";
      assertEquals("Non special time", expectedTimeString, result);
}

Why testers do not automate their tests

Test automation has existed for quite a while, but it isn’t practiced consistently. In the years that I have been in software testing, I’ve heard a lot of arguments for why testers do not automate their tests. The ones I hear the most are:

  • Test automation will take away my job.
  • I don’t know where to start.
  • I cannot write code.

Test automation will take away my job

Do you, as a tester, think that you will lose your job when testing is automated? In short: no, you won’t! Testing is an intellectual process. Before you can automate anything, you need to think about what you want to automate. The test automation can only check situations that you have defined. Remember, test automation cannot think for itself, and additional manual testing is always needed.

I don’t know where to start

Well, at the beginning of course! Start by investigating test automation: what can it do, what has worked before and might also work in your situation? Since automating tests is an investment, you need to find out which parts are important enough to automate. A product risk analysis (a good practice of structured testing) will definitely help you identify the risky and important parts of the software. Frequently repeated manual test cases are also very likely candidates for test automation. Be careful not to start with something too complex: work on some simple test automation to prove your business case for test automation and then expand.
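
As a purely illustrative sketch of such a simple start, assuming Selenium WebDriver and a made-up login page (neither comes from the article), a first automated smoke check could look like this:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// A deliberately small first step: open the login page and check that it loads.
public class LoginPageSmokeTest {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://www.example.com/login"); // hypothetical URL
            if (!driver.getTitle().contains("Login")) {
                throw new AssertionError("Login page did not load as expected");
            }
            System.out.println("Smoke test passed");
        } finally {
            driver.quit(); // always close the browser, even when the check fails
        }
    }
}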

I cannot write code

This is no reason not to automate tests! Who said you have to do it yourself… You, as a tester, can help decide which test scripts need to be automated. If you do want to do it yourself, you need to invest: learn at least the basics of programming. Most frameworks that help you with test automation don’t require in-depth programming skills, and the basics will be enough to do some valuable automation.

How to sell software testing?

Over the last two months I have been testing software for Enrise via Polteq, as part of their development team The Impediments. Testing for them has been nice and instructive. The people at Enrise did not stop asking questions. Most of the questions I answered immediately, but for some it took a while to find the right answer. The question that lingered the longest was: “How do we sell software testing to our customers?” I have to say that I’m very glad that Enrise asks this question! It shows me that Enrise sees the value and the importance of testing. The high-quality software that Enrise delivers can only be delivered when time is set aside for software testing.

So how do you sell software testing?

  • Sell a project (testing is part of the project).
  • Only provide a guarantee for production incidents when the customer pays for testing.
  • Make the customer aware of why you value testing.

Basically, you do not sell testing, you sell quality!

Software development is practiced in teams. Testing is a set of tasks that need to be executed within those teams, so there is implicit time for testing. You need to recognise that testing is essential to quality software and that the time needed to test has to be allocated. The software may appear more expensive to your customer, but this will pay off! In the long term, you will see fewer production disturbances when the software is professionally tested. Investing in quality up front is worth it.

You’re probably thinking: “How will testing add to the quality?” Well, in many ways… I’ll give you three:

  1. If you plan to test, you will need testable requirements. Ask questions about user stories until it is clear what your customer actually wants. Developers usually make more assumptions and think about the solution, whereas testers think about value for the customer. More questions at the beginning will result in an easier development process and a smaller chance of defects.
  2. A tester tests software instead of merely checking whether it works. A customer will use the software to see if it works, whereas the tester tries to find the edge cases and unexpected situations. A negative age is an unlikely situation, but a typo in a birthdate is quite plausible… The tester thinks of these situations, so when such typos arise in production the situation has already been covered (see the sketch after this list).
  3. Testing is about mitigating risks. This implies that the risks need to be identified, and testing will create a better view of those risks. Even risks that are not mitigated can then be communicated to the customer. It is then up to the customer to decide what to do: invest more to mitigate these risks, or live with them?
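
As a purely hypothetical illustration of point 2 (the class, the rule and the inputs are made up and do not come from the article), a birthdate check and the edge cases a tester would add could look like this:

import java.time.LocalDate;
import java.time.format.DateTimeParseException;

// A small birthdate check plus the edge cases a quick "does it work?" check misses.
public class BirthDateValidator {

    // Returns true only for a parseable ISO date (e.g. 1985-06-21) that is not in the future.
    static boolean isValid(String input) {
        try {
            LocalDate date = LocalDate.parse(input);
            return !date.isAfter(LocalDate.now());
        } catch (DateTimeParseException e) {
            return false; // typos such as "1985-13-01" end up here
        }
    }

    public static void main(String[] args) {
        System.out.println(isValid("1985-06-21")); // happy path: true
        System.out.println(isValid("1985-13-01")); // typo in the month: false
        System.out.println(isValid("2999-01-01")); // birthdate in the future: false
    }
}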

In short: if you want quality products, you really need to test!

*This blog is also posted at Enrise

Celebrate your Agile/Scrum milestones

Last week I was reading Celebrate Your Milestones on the WordPress blog, which made me realise that with some of the Agile teams I was in, we did just that!

Things we celebrated were:

  • each sprint we finished
  • each x points of business value we delivered
  • completing all the features of an epic

Setting these goals resulted in more commitment and more motivation in the team. The bonding we got from celebrating these milestones gave us very good communication and openness in the team. Besides these team milestones, people had their personal milestones. Thanks to the bonding, people shared celebrating their personal milestones with the team by bringing cake 🙂

The conclusion is that celebrating milestones will help in creating efficient and committed teams.