In the previous post I described my experiences on the Tuesday of EuroSTAR 2012. In this post I continue my EuroSTAR experience with the Wednesday.
Changing Management Thinking – John Seddon
A nice quote from this talk about changing management thinking: “The primary cause of failure is management”. Managers tend to make decisions that are not in the interest of their projects. For example, when you want to decrease costs and managers start managing on costs, this actually increases the costs in most cases, whereas managing on value will more often decrease the costs. A very useful story John told us was about chicken wings and spare ribs. Management of a large restaurant chain decided to replace the spare ribs starter with chicken wings, since chicken wings had a larger margin. Customers were disappointed and asked the waiters whether they could get a small portion of ribs (still available as a main course) as a starter. The waiters wanted to please their customers, so they said it was possible. Then the fun part started: the waiter had to enter the starter in the cash register, and since there was no spare ribs starter listed, they filed it under chicken wings. Management read the registers, saw that chicken wings were selling very well, and ordered more chicken wings for their restaurants. A fine example of failing management.
Adventures in Test Automation – Breaking the Boundaries of Regression Testing –
John provided information on automated monkey testing. The presentation was supported by scenes from the IT Crowd to inform us on automation. Automated monkey testing proved to be an easy concept to understand: by randomizing each step, you are simulating monkey testing. The problems, of course, are that it is easy to miss obvious defects, it does not effectively emulate real scenarios, and debugging long test runs can be quite a pain. They felt the need to create more intelligent monkeys by producing somewhat more predictable behavior via the use of state tables with probabilities per action.
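The talk did not share its actual state tables, so here is a minimal sketch of the idea under my own assumptions: each state lists its possible actions with a probability, and the monkey walks the table by weighted random choice. The states, actions and weights below are invented for illustration; logging the seed makes a long random run reproducible, which helps with the debugging pain mentioned above.

```python
import random

# Hypothetical state table: state -> list of (action, next_state, probability).
# The probabilities per state sum to 1.0; all names here are invented.
STATE_TABLE = {
    "login":    [("enter_credentials", "home", 0.8), ("cancel", "login", 0.2)],
    "home":     [("open_settings", "settings", 0.3),
                 ("search", "results", 0.5),
                 ("logout", "login", 0.2)],
    "settings": [("save", "home", 0.7), ("back", "home", 0.3)],
    "results":  [("open_item", "home", 0.6), ("back", "home", 0.4)],
}

def run_monkey(start_state, steps, seed=None):
    """Walk the state table, picking each action by its weight.

    Passing a fixed seed makes the run reproducible, so a failing
    random walk can be replayed exactly.
    """
    rng = random.Random(seed)
    state = start_state
    trace = []
    for _ in range(steps):
        rows = STATE_TABLE[state]
        weights = [p for _, _, p in rows]
        action, state, _ = rng.choices(rows, weights=weights, k=1)[0]
        trace.append((action, state))
    return trace

trace = run_monkey("login", steps=20, seed=42)
```

A uniform random monkey is the special case where every action in a state gets the same weight; skewing the weights toward realistic user behavior is what makes the monkey "more intelligent".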
Evolving Agile Testing – Fran O’Hara
After a short introduction on Agile and Scrum, Fran started off on requirements. When we start to talk about user stories, we should try to find out the acceptance criteria for the story. This serves several goals:
- Define the boundaries for a user story/feature
- Help the product owner to find out what it is that delivers value
- Help the team gain understanding of the story
- Help developers and testers to derive tests
- Help developers know when to stop adding functionality to a story
Fran reminds us to keep these acceptance criteria at a relatively high level, so do not lose yourself in too much detail. Detailing is done in e.g. wireframes, mockups or validation rules. Another place where we find detailing is in the automated acceptance tests. Try to find examples that support your acceptance criteria.
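To make the "examples support your acceptance criteria" point concrete, here is a small sketch for a hypothetical "free shipping over 50 euros" user story; the story, the threshold and the flat shipping rate are all invented for illustration. Each concrete example, especially the ones around the boundary, backs one acceptance criterion and can run as an automated acceptance test.

```python
# Hypothetical story: "As a shopper, orders of 50 euros or more ship for free."
FREE_SHIPPING_THRESHOLD = 50.00
FLAT_SHIPPING_RATE = 4.95  # assumed flat rate below the threshold

def shipping_cost(order_total):
    """Orders at or above the threshold ship for free."""
    return 0.00 if order_total >= FREE_SHIPPING_THRESHOLD else FLAT_SHIPPING_RATE

# Examples around the boundary support the acceptance criterion:
assert shipping_cost(49.99) == 4.95   # just below: customer pays shipping
assert shipping_cost(50.00) == 0.00   # exactly at the threshold: free
assert shipping_cost(125.00) == 0.00  # well above: free
```

The acceptance criterion itself stays high level ("orders of 50 euros or more ship for free"); the boundary examples are where the detailing happens.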
Next, Fran stresses that we still need a test strategy in Agile. We need to think about the minimal tests in the sprints (automated unit, automated acceptance, manual exploratory) and sometimes need to do some additional testing, e.g. for non-functionals, feature integration or business processes. The testers themselves need to have broad knowledge (more than just testing) and deep knowledge of testing. This requires a ‘technical awareness’.
Testing of Cloud Services; The Approach: From Risks to Test Measures – Kees Blokland & Jeroen Mengerink
Kees and I presented Cloutest®, our approach to testing cloud services. We started off with an introduction to cloud computing to set the context, using the definition provided by NIST to properly introduce the concept. After this we moved on to our approach. We identified 143 risks that arise when using cloud computing and grouped these risks into categories:
- Availability & Continuity
- Legislation & Regulations
- Suppliers & Outsourcing
Since 143 risks is quite a lot, we decided to give a limited set of example risks and detail these. For instance, a performance risk arises because a cloud service usually has several customers: it is not only you putting load on the service, but also the users of other customers, and this influences the performance of the service. Imagine your webshop hosted at the same hosting provider that hosts WikiLeaks… The huge amount of traffic that a new publication on WikiLeaks generates might result in your webshop not being available due to performance problems of the service.
With testing we provide measures to mitigate risks, so that is what we did too. The good news is that we can still use a lot of what we have learned over the years. Some techniques need to be tweaked to fit the cloud context, but they remain very useful. Next to the tweaked measures, we also describe some new measures that we have used at our clients. We grouped the measures too:
- Availability & Continuity
- Legislation & Regulations
How do you test the scalability of a cloud service? Providers promise scalable services and customers pay per use, so if you need more, you get more. With traditional load testing we gradually increase the load and see how the system responds. This can be applied to the cloud service too, but the service will scale. You can see the point where the scaling starts in your response times: they drop when more capacity is added to the service. Check around the boundary of the scaling point to see if the billing is also scaled.
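The check above can be sketched as follows, under the assumption that we step the load up and record one response time per step; the step where response time drops instead of rising is a candidate scale-out point, and those are the load levels to compare against the provider's bill. `measure_response_time` is a stand-in toy model here; against a real cloud service it would fire actual requests.

```python
def measure_response_time(load, scale_out_at=500):
    """Toy model (an assumption): latency grows with load, then improves
    once the provider adds capacity at `scale_out_at` concurrent users."""
    capacity = 2 if load >= scale_out_at else 1
    return 100 + (load / capacity) * 0.5  # milliseconds

def find_scaling_points(loads):
    """Return the load levels where response time dropped compared to the
    previous, lighter step -- candidate scale-out points, and the loads
    around which to verify that billing scaled as well."""
    times = [measure_response_time(l) for l in loads]
    return [loads[i] for i in range(1, len(times)) if times[i] < times[i - 1]]

loads = list(range(100, 1001, 100))   # gradually increase the load
scaling_points = find_scaling_points(loads)  # the toy model scales out at 500
```

In a real test you would then zoom in around each detected point with finer load steps, since that boundary is exactly where the pay-per-use billing should change too.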
We see that testing starts earlier, the scope is wider, and testing no longer stops when the service goes into production.
Inspirational Talk: Sky is not the limit: Copenhagen Suborbitals – Peter Madsen
The inspirational talk was very nice, though not very test-related. It showed us that with the right vision and perseverance you can reach goals that seem unreachable. Peter showed us how he built a homemade submarine and a homemade rocket.