Learn Development Practices To Improve Your Test Automation Code

This was originally posted on testhuddle.

Test automation is a prominent part of testing. To improve your test automation code, you should look at development practices. Creating clean code is a good way to improve the maintainability and readability of your test automation.
Recognizing clean code is quite easy. If it reads like prose and you understand the intent (almost) immediately, even if you are not the author, then it is probably clean. Writing clean code, however, is difficult. It takes a lot of practice, but the following topics will help you along the way. Keep in mind that test automation is software development, so why not use good practices from the field of software development?

Naming

Let’s start with naming. Use proper names for everything that can be named, for instance variables, methods, classes, and packages. The code you create will have to be maintained and read by others, so its intention should be clear. You can improve clarity by applying proper naming. If you want your naming to be clear to all the people you work with, use names from the problem domain. These make sense to the business and not only to the development team. When iterating over rows, name your loop variable to indicate what you are looping over:


WRONG:     
for (int i = 0; i < rows.length; i++)

CORRECT:
for (int row = 0; row < rows.length; row++) 

Another example is the naming of your methods. When you write a method printRows, it should only print the rows and not, for instance, also alter them. A method should do what its name leads you to expect, no more, no less. To achieve this so-called single responsibility of a method, you usually need to apply refactoring.
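As a minimal sketch (Row is a hypothetical type standing in for whatever your rows contain), a method named printRows should be limited to printing:

public void printRows(List<Row> rows) {
  for (Row row : rows) {
    System.out.println(row); // print only: no sorting, filtering, or altering here
  }
}

If the rows also need to be altered, that belongs in a separate, properly named method.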

As a final tip on naming, use a coding convention and automate the verification of this convention. This ensures that naming is consistent throughout the complete solution. Conventions around naming have changed over the years. In the early days, the Hungarian notation (prefixing variable names with an abbreviation of the type) was useful, but current integrated development environments (IDEs) show the type of a variable, so prefixing the type is no longer necessary.

Refactoring

While creating your code, it usually evolves gradually. This evolution adds or improves functionality, but it does not mean the new version of your code is clean. Say you want to create a method login that logs into the application with a given user. When you first run your login method, you notice that you need a registered user to be able to log in. So you decide to alter your login method, and you get something like this:


public void login(String user, String password) {
  // First register a new account.
  driver.findElement(By.className("register")).click();
  driver.findElement(By.id("email")).sendKeys(user);
  driver.findElement(By.id("passwd")).sendKeys(password);
  driver.findElement(By.id("passwd2")).sendKeys(password);
  driver.findElement(By.id("SubmitRegistration")).click();

  // Then log in with the freshly registered account.
  driver.findElement(By.className("login")).click();
  driver.findElement(By.id("email")).sendKeys(user);
  driver.findElement(By.id("passwd")).sendKeys(password);
  driver.findElement(By.id("SubmitLogin")).click();
  Assert.assertTrue(driver.findElement(By.cssSelector("ul.myaccount_lnk_list"))
    .isDisplayed());
}

Obviously, you can refactor and extract the registration functionality into a separate method, as can be seen in the next piece of code. This appears to be quite nice, but now the method login also does registration! In this case you can easily separate the responsibility to register from the responsibility to log in, but the example shows how quickly a method name can become inconsistent with the method's behavior.


public void login(String user, String password) {
  register(user, password); // hidden extra responsibility: login also registers
  driver.findElement(By.className("login")).click();
  driver.findElement(By.id("email")).sendKeys(user);
  driver.findElement(By.id("passwd")).sendKeys(password);
  driver.findElement(By.id("SubmitLogin")).click();
  Assert.assertTrue(driver.findElement(By.cssSelector("ul.myaccount_lnk_list"))
    .isDisplayed());
}

The caller should decide whether registration is needed. So if we create a test that needs to register and log in, that test calls the register method followed by the login method, as in the sketch below.
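As a sketch of the end result, the registration steps from the first snippet move into their own register method, login loses the register call, and the test combines the two (the test name and credentials are just examples):

public void register(String user, String password) {
  driver.findElement(By.className("register")).click();
  driver.findElement(By.id("email")).sendKeys(user);
  driver.findElement(By.id("passwd")).sendKeys(password);
  driver.findElement(By.id("passwd2")).sendKeys(password);
  driver.findElement(By.id("SubmitRegistration")).click();
}

@Test
public void newUserCanLogIn() {
  register("user@example.com", "secret");
  login("user@example.com", "secret");
}

Now both method names tell the truth: register registers, login logs in, and the caller decides which combination a test needs.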

Conclusion

Test automation is software development! So you can improve your automation code by applying techniques that are already established in software development. Proper naming makes it easier to understand what is happening in the code. Refactoring is needed to keep the intent of the code clear. Separate responsibilities into different functions, classes, and packages. Make sure that the caller of your code gets what the naming promises, no more, no less.


A good assessment is about improving

Improvements require an assessment, either formal or informal. A good assessment tells you what you are doing right and which areas are open for improvement. If you do not plan to act on the improvement areas, then why do an assessment at all? So the target for an assessment is twofold: first you need to know where you are, then you need to know where you want to go. The road from where you are to where you want to be can only be defined by improvement actions.

So how do you do a good assessment of any situation? You need to ask questions and, more importantly, you need to listen to the answers and make sure you understand them. Every situation you want to assess has its own context, so get to know the situation and the context. The clearer the view you achieve on a situation in its context, the better your assessment. When you define improvement actions, you need to determine where you want to go. This target situation needs as much context and information as possible. The best way from A to B can only be defined if we have a common understanding of A and B.

Beware when you define improvement actions! Not every action is suitable for every situation; in other words, each problem has multiple solutions. The solution must solve the problem and it must fit the context. Only then will it prove to be valuable. The solutions that work in every context are usually abstract. These will not actually help you achieve your goals, but they are a good starting point. When making these solutions more concrete, you need to change them to fit the current context. Sometimes when you try to do this, the solution will not fit your context. Is this still the right way to go? Yes, since knowing what not to do is also valuable.

At Polteq we always use assessments in support of improvements. The available improvement models contain important areas for testing, and they also provide a lot of questions to assess those areas. But we do not stop there: we keep asking questions outside the model! The areas with their questions provide nice guidelines, but a good assessment depends on the assessors. The assessors must take the context into account, which can make questions in the model either more or less important. While assessing, we try to capture the situation in its context. This allows us to write specific improvement actions that fit your situation.

How do you find the right improvement actions?

How to sell software testing?

For the last two months I have been testing software for Enrise via Polteq, as part of their development team The Impediments. Testing for them has been nice and instructive. The people at Enrise did not stop asking questions. Most of the questions I answered immediately, but some took a while to find the right answer. The question that lingered the most was: “How do we sell software testing to our customers?” I have to say that I’m very glad that Enrise asks this question! It shows me that Enrise sees the value and the importance of testing. The high-quality software that Enrise delivers can only be delivered when there is time set aside for software testing.

So how do you sell software testing?

  • Sell a project (testing is part of the project).
  • Only provide a guarantee for production incidents when the customer pays for testing.
  • Make the customer aware why you value testing.

Basically, you do not sell testing, you sell quality!

Software development is practiced in teams. Testing is a set of tasks that need to be executed within those teams, so there is implicit time for testing. You need to recognize that testing is essential to quality software, and the time needed to test has to be allocated. The software may appear more expensive to your customer, but this will pay off! In the long term, you will see fewer production disturbances when the software is professionally tested. Investing in quality up front is worth it.

You’re probably thinking: “How will testing add to the quality?” Well, in many ways… I’ll give you three:

  1. If you plan to test, you will need testable requirements. Ask questions about user stories until it is clear what your customer actually wants. Developers usually make more assumptions and think about the solution, whereas testers think about value for the customer. More questions at the beginning will result in an easier development process and a smaller chance of defects.
  2. A tester tests software instead of just checking whether the software works. A customer will use the software to see if it works, whereas the tester tries to find the edge cases and unexpected situations. A negative age is an unlikely input, but a typo in a birthdate is quite plausible… The tester thinks of these situations, so when such typos arise in production, the situation has already been covered.
  3. Testing is about mitigating risks. This implies that the risks need to be identified, and testing will create a better view of them. So even risks that are not mitigated can be communicated to the customer. Then it’s up to the customer to decide what to do: invest more to mitigate these risks, or live with them?

In short: if you want quality products, you really need to test!

*This blog is also posted at Enrise

Experiences at EuroSTAR 2012 (part 3)

So now my experiences on Thursday, the final day of EuroSTAR 2012. See my other posts for the Tuesday and Wednesday experiences.

What Agile Teams Can Learn From World of Warcraft – Alexandra Schladebeck

As I am a World of Warcraft (WoW) player myself and a great fan of Agile, the title alone was enough for me to decide to attend this presentation. Alexandra did a great job of pointing out the parallels between WoW and Agile, not only the benefits, but also the pitfalls. WoW is a massively multiplayer online role-playing game. As in all role-playing games, we see different races, classes, and professions for our characters, and each combination has its own set of skills. When characters form groups to complete dungeons, they need characters with different skills on board. Sounds familiar if you think of multidisciplinary teams, right? A team of individuals working together to achieve a common goal… When we go one step bigger and set our goal even higher, we can do raids in WoW. When we try to do such a project, we need several teams that work together.

WoW & Agile

When these WoW teams start their quests, they need to do some planning. In this process the teams estimate what the harder parts will be and who will be responsible for which tasks. The proper equipment for the specific quest needs to be put in place, and they all need to work together. For communication most groups use a tool named TeamSpeak. In some respects WoW is easier, though, since we can use dragons for fast transportation and portals to get all the people easily to the same place.

The slide on what to learn was really interesting, and therefore I added it to this post (click on it to see a larger version). Additionally, it is important to learn to do more than just your specialization. Keep working in teams fun; this applies to both WoW and Agile teams. And finally, learn to rely on your team, since you can’t kill the boss on your own 😉

Testing the API Behind a Mobile App – Marc van ‘t Veer

Polteq was happy to have my colleague Marc also selected, with his presentation on testing an API. Marc used all his experience at T-Mobile to guide us through testing an API. He started off by explaining why T-Mobile wanted an API behind the mobile Apps. T-Mobile has a site where you as a customer can log in and see your calling and texting bundles, and a lot of independent App creators built Apps that allowed T-Mobile users to do this on their mobile phones, using screen scrapers to get the information to display. Whenever such an App malfunctions – broken or incorrect data – the users blame T-Mobile. Even worse, the App creators also point a finger towards T-Mobile. So T-Mobile decided to decouple the content and make App creators use the API to get it. This allowed T-Mobile to be more in control of the data and the meaning of the data.

So how to test an API? Marc started off by showing us some risks involved with APIs:

  • It’s impossible to know up front how the API will integrate with the external Apps
  • There is a big variation in the data that will be provided by the API
  • There is no full control on the end-to-end process
  • The API may be used incorrectly

To be able to do early integration testing, T-Mobile used a prototype App and used dogfooding during development and system test. An adapter was created to let the API communicate with the back-end, so integration with T-Mobile’s back-end could be tested. This adapter also served the My T-Mobile pages, so the data on these pages could serve as an oracle for the data in the App. In testing they noticed that caching was not working properly, since at first a single security key was used for all users. So when testing an API, make sure that you test with different users that have different authorizations. Another defect that showed up was that the HTTP statuses were not informative enough for the App. The API was then extended to supply extra information, so the application could provide the right information to its users. The T-Mobile data provided some difficulties of its own, since there are multiple types of bundles and each bundle has a maximum number of units that can be used. However, the same tag was used for different entities: one time it meant minutes, another time it was the number of text messages you had left, or a combination of the two.
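That authorization lesson is easy to turn into an automated check. The sketch below is illustrative only: the endpoint, tokens, and response shape are invented, but the idea is to request the same resource as two different users and verify they do not get each other’s (possibly cached) data:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BundleAuthorizationTest {
  private static final HttpClient CLIENT = HttpClient.newHttpClient();

  // Hypothetical endpoint; each user authenticates with their own token.
  static HttpResponse<String> getBundles(String token) throws Exception {
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://api.example.com/bundles"))
        .header("Authorization", "Bearer " + token)
        .build();
    return CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
  }

  public static void main(String[] args) throws Exception {
    String aliceBundles = getBundles("alice-token").body();
    String bobBundles = getBundles("bob-token").body();
    // With a shared security key and broken caching, both users would see the same data.
    if (aliceBundles.equals(bobBundles)) {
      throw new AssertionError("Two different users received identical data");
    }
  }
}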

To test the API, the testers needed a lot more technical skills, since testing involved a lot of command-line work. To actually test the API properly, automated regression testing in production was needed. Do not forget to apply, in this new context, the testing techniques that have proven valuable over the years.

In the end a good API was introduced, but people still see T-Mobile as responsible when an App malfunctions.

The testlab – Bart Knaack

The testlab cannot be absent from my experiences. How great is it to actually do some testing at a testing conference! In addition to the website and application testing, this year we got to play with Lego Mindstorms 😀 The first task was to find out what the provided car would do. It used a light sensor to read different colors, and when it read a color it would perform an action based on that color. After determining which actions relate to which colors, it was our task to see if these would hold. Of course there were bugs present! I don’t want to spoil the fun for future use of the Mindstorms in testlabs, so I won’t mention the bugs here. As you can see in the image, I earned the “I logged my bug in the testlab” button. As simple as the reward seems, it made me happy and put a smile on my face when I received it.

Testlab buttons

Experiences at EuroSTAR 2012 (part 2)

In the previous post I described my experiences on the Tuesday of EuroSTAR 2012. In this post I will continue my EuroSTAR experience with the Wednesday.

Changing Management Thinking – John Seddon

A nice quote from this talk about changing management thinking: “The primary cause of failing is management”. Managers tend to make decisions that are not in the interest of projects. For example, when you want to decrease costs, managers start managing on costs… This actually increases the costs in most cases. However, when you manage on value, this will more often decrease the costs. A very useful story that John told us was about chicken wings and spare ribs. Management of a large chain of restaurants decided to replace the spare ribs starter with chicken wings, since the chicken wings had a larger margin. Customers were disappointed and asked the waiters if they could get a small portion of ribs (still available as a main) as a starter. The waiters wanted to please their customers, so they said it was possible. Then the fun part started… The waiters needed to put the starter in the cash register, and since there was no starter of spare ribs listed, they chose to file it under chicken wings. Management read the registers, saw that chicken wings were sold very often, and ordered more chicken wings for their restaurants. A fine example of failing management.

Adventures in Test Automation – Breaking the Boundaries of Regression Testing – John Fodeh

John provided information on automated monkey testing. The presentation was supported by scenes from the IT Crowd to inform us about automation. Automated monkey testing proved to be an easy-to-understand concept: by randomizing each step, you are simulating monkey testing. The problems, of course, are that it is easy to miss obvious defects, it does not effectively emulate real scenarios, and debugging long test runs can be quite a pain. They felt the need to create more intelligent monkeys by creating somewhat more predictable behavior through state tables with probabilities per action, roughly as in the sketch below.
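A minimal sketch of such a state table (the states, actions, and weights are made up for illustration): per state, the monkey picks an action with a weighted random roll instead of treating every action as equally likely:

import java.util.List;
import java.util.Map;
import java.util.Random;

public class SmartMonkey {
  // One entry in the state table: an action and its relative probability.
  record WeightedAction(String action, double weight) {}

  // Hypothetical state table: the plausible actions per state, with weights.
  static final Map<String, List<WeightedAction>> STATE_TABLE = Map.of(
      "loginPage", List.of(
          new WeightedAction("submitValidCredentials", 0.7),
          new WeightedAction("submitInvalidCredentials", 0.2),
          new WeightedAction("clickForgotPassword", 0.1)),
      "homePage", List.of(
          new WeightedAction("openSettings", 0.5),
          new WeightedAction("logout", 0.5)));

  // Pick an action for the current state with a weighted random roll.
  static String pickAction(String state, Random random) {
    List<WeightedAction> actions = STATE_TABLE.get(state);
    double roll = random.nextDouble()
        * actions.stream().mapToDouble(WeightedAction::weight).sum();
    for (WeightedAction candidate : actions) {
      roll -= candidate.weight();
      if (roll <= 0) return candidate.action();
    }
    return actions.get(actions.size() - 1).action();
  }

  public static void main(String[] args) {
    System.out.println(pickAction("loginPage", new Random()));
  }
}

Compared to a fully random monkey, the weights steer the runs towards realistic scenarios while still allowing the occasional odd path.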

Evolving Agile Testing – Fran O’Hara

After a short introduction on Agile and Scrum, Fran started off with requirements. When we talk about user stories, we should try to find out the acceptance criteria for the story. This serves several goals:

  • Define the boundaries for a user story/feature
  • Help the product owner to find out what it is that delivers value
  • Help the team gain understanding of the story
  • Help developers and testers to derive tests
  • Help developers know when to stop adding functionality to a story

Fran reminded us to keep these acceptance criteria at a relatively high level, so do not lose yourself in too much detail. Detailing will be done in e.g. wireframes, mockups, or validation rules. Another place where we find detailing is in the automated acceptance tests. Try to find examples that support your acceptance criteria.

Next Fran stressed the fact that we still need a test strategy in Agile. We need to think about the minimal tests in the sprints (automated unit, automated acceptance, manual exploratory) and sometimes need to do some additional testing, e.g. for non-functionals, feature integration, or business processes. The testers themselves need to have broad knowledge (more than just testing) and deep knowledge of testing. This requires a ‘technical awareness’.

Testing of Cloud Services; The Approach: From Risks to Test Measures – Kees Blokland & Jeroen Mengerink

Kees and I presented Cloutest®, our approach to testing cloud services. We started off with an introduction to cloud computing to set the context, using the definition provided by NIST. After this we moved on to our approach. We identified 143 risks that arise when using cloud computing and grouped these risks into categories:

  • Performance
  • Security
  • Availability & Continuity
  • Functionality
  • Manageability
  • Legislation & Regulations
  • Suppliers & Outsourcing

Since 143 risks is quite a lot, we decided to give a limited set of example risks and detail those. For instance, there is a performance risk, since a cloud service usually has several customers: it is not only you who is putting load on the service, but also the users of other customers, and this will influence the performance of the service. Imagine your webshop hosted at the same hosting provider that hosts WikiLeaks… The huge amount of traffic that a new publication on WikiLeaks generates might result in your webshop not being available due to performance problems of the service.

With testing we provide methods to mitigate risks, so that is what we did here too. The good news is that we can still use a lot of what we have learned over the years. Some techniques need to be tweaked to fit the cloud context, but they are very useful. Next to the tweaked measures, we also describe some new measures that we have used at our clients. We grouped the measures too:

  • Selection
  • Performance
  • Security
  • Manageability
  • Availability & Continuity
  • Functional
  • Migration
  • Legislation & Regulations
  • Production

How do you test the scalability of a cloud service? Providers promise scalable services and customers pay per use, so if you need more, you will get more. With traditional load testing, we gradually increase the load and see how the system responds. This can be applied to the service too, but it will scale. You will see the point where the scaling starts in your response times: they will drop when more capacity is added to the service. Check around the boundary of the scaling point to see whether the billing scales as well.
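A rough sketch of such a stepped load test (the URL and step sizes are placeholders): fire batches of parallel requests at increasing concurrency and record how long each step takes; the step where the times stop climbing marks the point where the service scaled out.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class SteppedLoadTest {
  public static void main(String[] args) {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://service.example.com/ping")) // placeholder URL
        .build();

    // Gradually increase the number of parallel requests per step.
    for (int parallel = 10; parallel <= 100; parallel += 10) {
      long start = System.nanoTime();
      List<CompletableFuture<HttpResponse<String>>> calls = new ArrayList<>();
      for (int i = 0; i < parallel; i++) {
        calls.add(client.sendAsync(request, HttpResponse.BodyHandlers.ofString()));
      }
      CompletableFuture.allOf(calls.toArray(new CompletableFuture[0])).join();
      long millis = (System.nanoTime() - start) / 1_000_000;
      // A drop (or flattening) in these times suggests the service scaled out.
      System.out.printf("%d parallel requests: %d ms%n", parallel, millis);
    }
  }
}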

We see that testing starts earlier, the scope is wider, and testing will not stop in production.

Inspirational Talk: Sky is not the limit: Copenhagen Suborbitals – Peter Madsen

The inspirational talk was very nice, though not very test-related. It showed us that with the right vision and perseverance you can reach goals that seem unreachable. Peter showed us how he built a homemade submarine and a homemade rocket.