Transitioning to Agile/SCRUM: the impact on testing

This article has been published in Testing Circus – Volume 4 – Edition 12 – December 2013.

An increasing number of companies are using Agile/SCRUM to develop their software. It is quite a shift from traditional/waterfall development to the more flexible, less document-heavy Agile/SCRUM approach, and the transition often proves to be a challenge at some point.

In the book “Scaling Software Agility: Best Practices for Large Enterprises” by Dean Leffingwell you can find a number of important processes that change with this transition. Going through a couple of these processes, I will describe the impact of the transition on testing.

Changes caused by the transition

Measures of success

In traditional development the main measure of success for a project is usually on-time delivery; in Agile this changes to working code. This is a fundamental difference in measuring success: instead of time driven, the project becomes quality driven. Of course this has an impact on testing. When working code is the measure of success, we need to put more effort into delivering working code, and the only way for the team to find out that the code actually works is by testing.

In traditional development each discipline (business analysts, developers, testers) focuses on its own aspects. In Agile the complete cross-functional team needs to have its main focus on quality. To make this happen, it is important to spread testing knowledge across the whole team. This can be achieved by:

  • pairing testers with people in other roles to facilitate implicit knowledge sharing;
  • providing basic test training for the team members to explicitly focus on testing aspects of their roles.

Both of these methods can be executed by the tester in the team, but the latter of the two can also be executed by people outside the team. Testers need to communicate with all different roles in a project and help them to understand and apply testing in their context.

Management culture

Another important aspect that needs to change is the management culture. Where the keywords in traditional development are command and control, the culture needs to shift towards collaborative leadership and empowerment of teams. The impact on testing as a craft seems minimal, but the impact on the traditional test functions, such as test managers and testers, is large.

Test managers used to be responsible for test strategy, product risk analysis, test plans, test estimation, resourcing, etc. But how will this work in Agile/SCRUM?

  • Planning and estimation are a team responsibility.
  • Detailed product risk analysis upfront is not possible.
  • Teams need a degree of freedom, so extensive strategy and plans are uncalled for.

In short, the role of test management changes. The human resources aspect of management becomes more important: how do you get the right tester in the right team, and how do you keep the testers’ knowledge up to date? This is done by knowing the testers and their needs. Test management needs to find ways to get the necessary information out of the different (SCRUM) teams in order to keep a bird’s-eye view on the testing process.

Keep in mind that a lot of the former management responsibility shifts to the teams. This requires a high degree of trust in the people and staying away from micromanagement. Management needs to let go of some control, while the people in the teams get more responsibilities and need to learn to deal with them. Not everyone will feel comfortable with this, so make sure to pick the right people for the different roles in the team. On top of that, not every team needs the same type of tester.

Requirements and design

The change in requirements and design is very big and testing needs to find a way to cope with it. Where we used to have big upfront design, we now have continuously changing, emergent, just-in-time documentation. The impact on testing is felt both at management level and at engineering level.

In Agile, test management cannot identify detailed risks up front, since there is only a high-level, global set of requirements. To retain a risk-based testing approach, we need different levels of product risk analysis: abstract up front, and more detailed in the teams once more is known. So (test) management should do a high-level risk assessment at product backlog level, while the team does detailed risk assessments at sprint backlog level.

One of the main complaints of traditionally trained testers in an Agile environment is the lack of upfront requirements and designs to use as a basis for their testing. Test cases need to emerge from the discussions at grooming and planning sessions. Testers start creating test cases based on these discussions before the requirements and designs are properly documented. By having testers and designers review each other’s products you get quality control early in the process: test cases and designs prove to match better, and any differences can be discussed with the product owner.

Note that a good product owner is indispensable for Agile projects. Only with good product ownership does the right product get built.

Coding and implementation

Everybody is aware of the different phases in a traditional project: testing takes place after coding, which in turn comes after design. A good practice in Agile/SCRUM is the use of test driven development (TDD), where coding and testing go hand in hand. This increases the quality and maintainability of the code. The shift to TDD is not always easy: developers often don’t like to write unit tests and have not received proper training in how to do TDD well. When applied incorrectly, TDD takes a lot of time and yields little or no benefit. Testers can once again support by pairing with developers and helping them apply white box testing techniques.
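
As a minimal sketch of the TDD rhythm in JUnit 4: the ShoppingCart class and its methods below are hypothetical, made up for this example. The tests are written first and fail; only then is just enough production code written to make them pass.

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

// Hypothetical TDD example: the tests are written first and fail,
// then just enough of ShoppingCart is implemented to make them pass.
public class ShoppingCartTest {

    @Test
    public void totalOfEmptyCartIsZero() {
        assertEquals(0.0, new ShoppingCart().total(), 0.001);
    }

    @Test
    public void totalIsSumOfItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.add("book", 10.0);
        cart.add("pen", 2.5);
        assertEquals(12.5, cart.total(), 0.001);
    }
}

// Minimal production code, written after (and driven by) the tests above.
class ShoppingCart {
    private final List<Double> prices = new ArrayList<Double>();

    void add(String name, double price) {
        prices.add(price);
    }

    double total() {
        double sum = 0.0;
        for (double price : prices) {
            sum += price;
        }
        return sum;
    }
}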

The short development cycles and the incremental approach require a lot of regression testing. Since regression testing takes place in every sprint, test automation will save a lot of time. The impact is that people with test automation skills are needed in the team and that we need to plan for the automation. If the testers are not able to automate the tests themselves, they at least need to know what should be automated and communicate this to the automation specialist.
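
One way to plan for the automation is to group the automated regression tests into a suite that is run in every sprint, for example by the build server. The sketch below uses a JUnit 4 suite; the listed test class names are hypothetical.

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Hypothetical regression suite: the listed test classes cover the main
// flows and are executed automatically in every sprint.
@RunWith(Suite.class)
@Suite.SuiteClasses({
    LoginFlowTest.class,
    OrderCheckoutTest.class,
    InvoiceGenerationTest.class
})
public class RegressionSuite {
    // Intentionally empty: the annotations define which tests belong
    // to the regression run.
}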

Overall impact on testing

Basically, we still need to test. The craft of testing is still in place and we must not forget what we learned in the past, but we need to adjust and adapt to our new context: Agile. The quick and changing world of Agile development requires a more pragmatic approach to testing: no large upfront planning and documentation, but small pieces of functionality that are manageable by the teams. Testing goes beyond the tester; it is the responsibility of the complete team.

Automation has become an essential part of testing to keep up with the development pace in Agile. This requires testers to have more technical knowledge and better communication skills. An early start with automation usually results in a more maintainable product, so there is reason enough to automate!

Last but not least, it’s all about people! Investing in people and skills is needed to perform well in an Agile context. Provide training in testing for all team members and don’t forget to provide training in other disciplines for the testers as well.

How to test refactoring?

This article has been published in Agile Record number 16.

A fundamental part of the Agile methodology is refactoring: rewriting small sections of code to be functionally equivalent but of better quality. Don’t forget to test the refactoring! What do you test? The answer is simple: you test if the code really is functionally equivalent.

To test the rewritten code, you use the unit tests that accompanied the original code. But does unit testing alone prove that you really have functionally equivalent code? No! While refactoring, developers often change more than just the complexity and quality of the code. A tester’s nightmare… It appears to be a small change, but the code is quite likely used in several parts of the solution. So you must perform a regression test after testing the changed code itself. First I will describe how to test the current and rewritten code with unit tests. I have identified three scenarios that occur in practice. The code that needs refactoring has:

  • no unit tests;
  • bad unit tests;
  • good unit tests.

After these scenarios I will go into the regression test and explain the importance of proper regression testing while refactoring.

Unit test the current and rewritten code

Unit tests verify small sections of the code. Ideally each test is independent, and stubs and drivers are used to get control over the environment. Since refactoring deals with small sections of code, unit tests provide the correct scope.

Refactor code that has no existing unit tests

When you work with very old code, in general you do not have unit tests. So can you just start refactoring? No, first add unit tests to the existing code. After refactoring, these unit tests should still hold. In this way you improve both the maintainability and the quality of the code. This is a complex task: first you need to find out what the functionality of the code is, then you need to think of test cases that properly cover that functionality. To discover the functionality, you provide several inputs to the code and observe the outputs. Functional equivalence is proven when the rewritten code is input/output conformant to the original code.
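
Such tests are often called characterization tests: the expected values are not taken from a specification but from observing what the existing code returns today. A minimal sketch, assuming a hypothetical LegacyPriceCalculator class and observed outputs:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical characterization tests: the expected values record the
// observed behaviour of the current implementation. After refactoring,
// the same assertions must still hold.
public class LegacyPriceCalculatorTest {

    @Test
    public void recordsCurrentBehaviourForRegularCustomer() {
        LegacyPriceCalculator calculator = new LegacyPriceCalculator();
        // Observed output of the existing code for this input.
        assertEquals(110.0, calculator.priceIncludingTax(100.0, "REGULAR"), 0.001);
    }

    @Test
    public void recordsCurrentBehaviourForGoldCustomer() {
        LegacyPriceCalculator calculator = new LegacyPriceCalculator();
        assertEquals(99.0, calculator.priceIncludingTax(100.0, "GOLD"), 0.001);
    }
}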

Refactor to increase the quality of the existing unit tests

You also come across code with badly designed unit tests, for example a unit test that verifies multiple scenarios at once. Usually this is caused by not properly decoupling the code from its dependencies (Code sample 1). This is undesired behavior; the test must not depend on the state of the environment. A solution is to refactor the code to support substitutable dependencies, which allows the test to use a test stub or mock object. As shown in Code sample 2, the unit test is split into three unit tests which test the three scenarios separately. The rewritten code has a configurable time provider, so the test now uses its own time provider and has complete control over the environment.
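
As an illustration of what the rewritten production code could look like (a sketch under my own assumptions, not the literal code from xunitpatterns.com), the time source becomes a substitutable dependency that the tests in Code sample 2 can replace with a stub:

import java.text.SimpleDateFormat;
import java.util.Calendar;

// Assumed interface behind both the real clock and the test stub.
interface TimeProvider {
    Calendar getTime();
}

// Production implementation that reads the real system clock.
class DefaultTimeProvider implements TimeProvider {
    public Calendar getTime() {
        return Calendar.getInstance();
    }
}

// Sketch of the refactored class: the time source is now substitutable,
// so the tests can inject a TimeProviderTestStub instead of the real clock.
public class TimeDisplay {

    private TimeProvider timeProvider = new DefaultTimeProvider();

    public void setTimeProvider(TimeProvider timeProvider) {
        this.timeProvider = timeProvider;
    }

    public String getCurrentTimeAsHtmlFragment() {
        Calendar time = timeProvider.getTime();
        int hour = time.get(Calendar.HOUR_OF_DAY);
        int minute = time.get(Calendar.MINUTE);
        if (hour == 0 && minute == 0) {
            return "Midnight";
        }
        if (hour == 12 && minute == 0) {
            return "Noon";
        }
        return new SimpleDateFormat("h:mm a").format(time.getTime());
    }
}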

Treat unit tests as code

The last situation deals with a piece of code which has good unit tests. Just refactor and then you are done, right? Wrong! When you refactor this code, the tests will pass if you refactor correctly. But do not forget to check the validity of the tests: you might think the tests are good, but the unit tests are code too. Every refactoring action should include a check, and possibly a refactoring, of the unit tests.

Perform a regression test

After unit testing the code, you need to verify whether the code works in the solution’s context. Remember: in Agile you must provide business value. To show that value, you need to perform a test that relates to the business. A regression test is designed to exercise the important flows through the solution, and these flows embody the business value. Do you run a complete regression test every time you refactor? That depends on the risks and on the scalability of the regression test.

Create a scalable regression test

The use case is a common way to describe small parts of functionality, and it is a great way to partition your regression test: create a small set of regression test cases to cover each use case. When you use proper version management for the code, it is easy to see which part of the code belongs to which use case. Whenever a section of code is changed, you can see to which use case it belongs and then execute the regression tests for that use case. However, when code is reused (another good practice), you target a group of use cases. I generally use mindmaps for tracking dependencies within my projects. The mindmaps provide insight into which code is used by which use cases. This requires a disciplined development team: when you reuse existing code, you need to update the mindmap!
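
To make the idea concrete, here is a hypothetical sketch of such a dependency mapping expressed in code rather than in a mindmap; the module and use case names are made up for this example.

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: a module-to-use-case map (the mindmap expressed in
// code) used to decide which regression tests to run after a refactoring.
public class RegressionScopeSelector {

    private final Map<String, List<String>> useCasesByModule = new HashMap<String, List<String>>();

    public RegressionScopeSelector() {
        // Reused modules map to several use cases.
        useCasesByModule.put("payment", Arrays.asList("Place order", "Refund order"));
        useCasesByModule.put("catalog", Arrays.asList("Browse products"));
    }

    // Returns the use cases whose regression tests should be executed
    // for the modules touched by the refactoring.
    public Set<String> useCasesToRetest(List<String> changedModules) {
        Set<String> useCases = new HashSet<String>();
        for (String module : changedModules) {
            List<String> mapped = useCasesByModule.get(module);
            if (mapped != null) {
                useCases.addAll(mapped);
            }
        }
        return useCases;
    }

    public static void main(String[] args) {
        RegressionScopeSelector selector = new RegressionScopeSelector();
        // The refactoring touched the reused payment module, so both order
        // use cases end up in the regression scope.
        System.out.println(selector.useCasesToRetest(Arrays.asList("payment")));
    }
}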

Expand the scope of the regression test

Do you test enough when you scale the regression test to the scope determined in the mindmap? No, the regression test serves a larger goal: you check whether the (in theory) unaffected areas of the solution are really unaffected. So you test the part that is affected by the refactoring and you test the main flows through the solution. The flows that provide value to the customer are the most important.

Refactoring requires testing

Every change in the code needs to be tested. Therefore testing is required when refactoring. You test the changes at different levels. Since a small section of code is changed, unit testing seems the most fitting level. But do not forget the business value! Regression testing is of vital importance for the business.

This reasoning comes full circle:

  • Refactoring requires testing.
  • Testing refactoring requires a good understanding of the code.
  • A good understanding of the code requires a disciplined development team.
  • A disciplined development team refactors.

Code sample 1: Unit test depending on the environment

From http://xunitpatterns.com/

public void testDisplayCurrentTime_whenever() {
      // Fixture setup
      TimeDisplay sut = new TimeDisplay();
      // Exercise SUT
      String result = sut.getCurrentTimeAsHtmlFragment();
      // Verify outcome: the expected value is derived from the real,
      // uncontrolled system time, so the test depends on its environment
      Calendar time = new DefaultTimeProvider().getTime();
      StringBuffer expectedTime = new StringBuffer();
      if ((time.get(Calendar.HOUR_OF_DAY) == 0)
         && (time.get(Calendar.MINUTE) <= 1)) {
         expectedTime.append("Midnight");
      } else if ((time.get(Calendar.HOUR_OF_DAY) == 12)
                  && (time.get(Calendar.MINUTE) == 0)) { // noon
         expectedTime.append("Noon");
      } else {
         SimpleDateFormat fr = new SimpleDateFormat("h:mm a");
         expectedTime.append(fr.format(time.getTime()));
      }
      assertEquals(expectedTime.toString(), result);
}

Code sample 2: Independent unit tests

From http://xunitpatterns.com/

public void testDisplayCurrentTime_AtMidnight() throws Exception {
      // Fixture setup:
      TimeProviderTestStub tpStub = new TimeProviderTestStub();
      tpStub.setHours(0);
      tpStub.setMinutes(0);
      // Instantiate SUT:
      TimeDisplay sut = new TimeDisplay();
      sut.setTimeProvider(tpStub);
      // Exercise sut
      String result = sut.getCurrentTimeAsHtmlFragment();
      // Verify outcome
      String expectedTimeString = "Midnight";
      assertEquals("Midnight", expectedTimeString, result);
}

public void testDisplayCurrentTime_AtNoon() throws Exception {
      // Fixture setup:
      TimeProviderTestStub tpStub = new TimeProviderTestStub();
      tpStub.setHours(12);
      tpStub.setMinutes(0);
      // Instantiate SUT:
      TimeDisplay sut = new TimeDisplay();
      sut.setTimeProvider(tpStub);
      // Exercise sut
      String result = sut.getCurrentTimeAsHtmlFragment();
      // Verify outcome
      String expectedTimeString = "Noon";
      assertEquals("Noon", expectedTimeString, result);
}

public void testDisplayCurrentTime_AtNonSpecialTime() throws Exception {
      // Fixture setup:
      TimeProviderTestStub tpStub = new TimeProviderTestStub();
      tpStub.setHours(7);
      tpStub.setMinutes(25);
      // Instantiate SUT:
      TimeDisplay sut = new TimeDisplay();
      sut.setTimeProvider(tpStub);
      // Exercise sut
      String result = sut.getCurrentTimeAsHtmlFragment();
      // Verify outcome
      String expectedTimeString = "7:25 AM";
      assertEquals("Non special time", expectedTimeString, result);
}

A good assessment is about improving

Improvements require an assessment, either formal or informal. A good assessment tells you what you are doing right and which areas are open for improvement. If you do not plan to act on the improvement areas, why do an assessment at all? So the target of an assessment is twofold: first you need to know where you are, next you need to know where you want to go. The road from where you are to where you want to be can only be defined by improvement actions.

So how do you do a good assessment of any situation? You need to ask questions and, more importantly, you need to listen to the answers and make sure you understand them. Every situation you want to assess has its own context, so get to know the situation and the context. The clearer the view you achieve of a situation in its context, the better your assessment. When you define improvement actions, you need to determine where you want to go. This target situation needs as much context and information as possible. The best way from A to B can only be defined if we have a common understanding of A and B.

Beware when you define improvement actions! Not every action is suitable for every situation; in other words, each problem has multiple solutions. The solution must solve the problem and it must fit the context. Only then will it prove to be valuable. The solutions that work in every context are usually abstract. These will not actually help you achieve your goals, but they are a good starting point. When making these solutions more concrete, you need to change them to fit the current context. Sometimes when you try to do this, the solution will not fit your context. So is this the right way to go? Yes, since knowing what not to do is also valuable.

At Polteq we always use assessments in support of improvements. The available improvement models contain important areas for testing, and they also provide a lot of questions to assess those areas. But we do not stop there: we keep asking questions outside the model! The areas with their questions provide nice guidelines, but a good assessment depends on the assessors. The assessors must take the context into account, and therefore questions in the model can become either more or less important. While assessing, we try to capture the situation in its context. This allows us to write specific improvement actions which fit your situation.

How do you find the right improvement actions?

How to sell software testing?

For the last two months I have been testing software for Enrise via Polteq, as part of their development team The Impediments. Testing for them has been enjoyable and instructive. The people at Enrise never stop asking questions. Most of the questions I answered immediately, but for some it took a while to find the right answer. The question that lingered the longest was: “How do we sell software testing to our customers?” I have to say that I’m very glad that Enrise asks this question! It shows me that Enrise sees the value and the importance of testing. The high-quality software that Enrise delivers can only be delivered when time is set aside for software testing.

So how do you sell software testing?

  • Sell a project (testing is part of the project).
  • Only provide a guarantee for production incidents when the customer pays for testing.
  • Make the customer aware why you value testing.

Basically, you do not sell testing, you sell quality!

Software development is practiced in teams. Testing is a set of tasks that need to be executed in the teams, so there is implicitly time for testing. You need to make clear that testing is essential to quality software and that the time needed to test has to be allocated. The software may appear more expensive to your customer, but this will pay off: in the long term you will see fewer production disturbances when the software is professionally tested. Investing in quality up front is worth it.

You’re probably thinking: “How will testing add to the quality?” Well, in many ways… I’ll give you three:

  1. If you plan to test, you will need testable requirements. Ask questions about user stories until it is clear what your customer actually wants. Developers usually make more assumptions and think in terms of the solution, whereas testers think about value for the customer. More questions at the beginning result in an easier development process and a smaller chance of defects.
  2. A tester tests software instead of merely checking whether it works. A customer will use the software to see if it works, whereas the tester tries to find the edge cases and unexpected situations. A negative age is an unlikely situation, but a typo in a birth date is quite plausible… The tester thinks of these situations, so when such typos arise in production the situation has already been covered.
  3. Testing is about mitigating risks, which implies that the risks need to be identified. Testing creates a better view of the risks, so even risks that are not mitigated can be communicated to the customer. Then it’s up to the customer to decide what to do: invest more to mitigate these risks, or live with them?

In short: if you want quality products, you really need to test!

*This blog is also posted at Enrise

Experiences at EuroSTAR 2012 (part 3)

So now my experiences on Thursday, the final day of EuroSTAR 2012. See my other posts for the Tuesday and Wednesday experiences.

What Agile Teams Can Learn From World of Warcraft – Alexandra Schladebeck

As I am a World of Warcraft (WoW) player myself and a great fan of Agile, the title alone was enough for me to decide to attend this presentation. Alexandra did a great job of pointing out the parallels between WoW and Agile, not only the benefits but also the pitfalls. WoW is a massively multiplayer online role-playing game. As in all role-playing games, we see different races, classes and professions for our characters. Each combination has its own set of skills, and when characters form groups to complete dungeons, they need characters with different skills on board. Sounds familiar if you think of multidisciplinary teams, right? A team of individuals working together to achieve a common goal… When we go one step bigger and set our goal even higher, we can do raids in WoW. To pull off such a project, we need several teams that work together.

WoW & Agile

When these WoW teams start their quests, they need to do some planning. In this process the teams need to estimate what the harder parts will be and who will be responsible for which tasks. The proper equipment for the specific quest needs to be put in place and they all need to work together. For communication most groups use a tool named TeamSpeak. However, in some respects WoW is easier, since we can use dragons for fast transportation and portals to easily get all the people to the same place.

The slide on what to learn was really interesting, and therefore I added it to this post. Additionally, it is important to learn to do more than just your specialization. Keep working in teams fun; this applies to both WoW and Agile teams. And finally, learn to rely on your team, since you can’t kill the boss on your own 😉

Testing the API Behind a Mobile App – Marc van ‘t Veer

Polteq was fortunate to have my colleague Marc selected as well, with his presentation on testing an API. Marc used all his experience at T-Mobile to guide us through testing an API. He started off by explaining why T-Mobile wanted an API behind the mobile Apps. T-Mobile has a site where you as a customer can log in and see your calling and texting bundles. A lot of independent App creators built Apps that allowed T-Mobile users to do this on their mobile phones, using screen scrapers to get the information to display. Whenever such an App malfunctioned – broken or incorrect data – the users blamed T-Mobile, and even worse, the App creators also pointed a finger at T-Mobile. So T-Mobile decided to decouple the content and make App creators use the API to get the content. This allowed T-Mobile to be more in control of the data and the meaning of the data.

So how do you test an API? Marc first showed us some risks involved with APIs:

  • It’s impossible to know up front how the API will integrate with the external Apps
  • There is a big variation in the data that will be provided by the API
  • There is no full control on the end-to-end process
  • The API may be used incorrectly

To be able to do early integration testing, T-Mobile used a prototype App and applied dogfooding during development and system test. An adapter was created to let the API communicate with the back-end, so integration with T-Mobile’s back-end could be tested. This adapter also served the My T-Mobile pages, so the data on these pages could serve as an oracle for the data in the App. During testing they noticed that caching was not working properly, because at first a single security key was used for all users. So when testing an API, make sure that you test with different users that have different authorizations. Another defect that showed up was that the HTTP statuses were not informative enough for the App; the API was then extended to supply extra information, so the application could present the right information to its users. The T-Mobile data itself also caused some difficulties, since there are multiple types of bundles and each bundle has a maximum number of units that can be used. However, the same tag was used for different entities: one time it meant minutes, another time it was the number of text messages you had left, or a combination of the two.
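
As a generic illustration of that advice (my own sketch, not T-Mobile’s actual API), an automated check could call the same endpoint with tokens of two differently authorized users and compare the HTTP status codes; the endpoint and tokens below are hypothetical.

import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical sketch: request the same resource as two different users
// and check that the authorization rules are reflected in the HTTP status.
public class ApiAuthorizationCheck {

    static int statusFor(String endpoint, String token) throws Exception {
        HttpURLConnection connection =
                (HttpURLConnection) new URL(endpoint).openConnection();
        connection.setRequestProperty("Authorization", "Bearer " + token);
        return connection.getResponseCode();
    }

    public static void main(String[] args) throws Exception {
        String endpoint = "https://api.example.com/bundles"; // hypothetical endpoint
        // A user may only see their own bundles, so another user's token
        // should be rejected.
        System.out.println(statusFor(endpoint, "token-of-user-a")); // expect 200
        System.out.println(statusFor(endpoint, "token-of-user-b")); // expect 403
    }
}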

To test the API, the testers needed a lot more technical skills, since testing involved a lot of command-line work. To actually test the API properly, automated regression testing in production was needed. Do not forget to apply, in this new context, the testing techniques that have proven valuable over the years.

In the end a good API was introduced, but people still see T-Mobile as responsible when an App malfunctions.

The testlab – Bart Knaack

The testlab cannot be absent from my experiences. How great is it to actually do some testing at a testing conference! In addition to the website and application testing, this year we got to play with Lego Mindstorms 😀 The first task was to find out what the provided car would do. It used a light sensor to read different colors, and when it read a color it would perform an action based on that color. After determining which actions related to which colors, it was our task to see if these would hold. Of course there were bugs present! I don’t want to spoil the fun for future uses of the Mindstorms in testlabs, so I won’t mention the bugs here. As you can see in the image, I earned the “I logged my bug in the testlab” button. As simple as the reward seems, it made me happy and put a smile on my face when I received it.

Testlab buttons