Thursday, 18 June 2015

Test Strategy in Agile teams


Test Strategy is a systematic approach to testing a product or an application.
The test approach to be followed:
(While preparing this test strategy I have considered a Java application. The tools I propose here are based on my own requirements; of course you can decide on your own tools.)

The levels and types of tests:



1. First Level: Component / Unit level:

At this level we test the code and the service design. Code quality should be checked against the available code quality checklist. This is white box testing because it tests the structure of the code.

Tools used: JUnit (for a Java application).

We have the below types of tests at this level.

1.1 Component Test:

  •     What is it ? - A test which verifies the smallest part of the developed source code. These tests are themselves small pieces of program. A component may consist of several program units. Developers can decide to do TDD with this test type.
  •     When ? - Whenever a developer writes code for a feature, we should have these tests.
  •     How ? - We should have a test for each major component, and the focus is to test one component at a time. Each test should focus on one behavior of the component, with one test per test case, and every required behavior must be covered (see the sketch after this list).
  •     Where ? - These tests live alongside the source code.
  •     Who ? - Written by Developers
  •     Documented ? - These are documented in terms of code.
  •     Optional / Mandatory ? - It is mandatory to have these component tests.
  •     Automated / Manual ? - Automated
  •     Why we need ? - These tests confirm functional behavior and code quality at the component level. Having more component tests reduces the effort of large system testing and of finding and fixing bugs. Here we can prevent bugs rather than detecting and fixing them at a later stage.
  •     Risk - The risk and cost of finding and fixing a bug is very low at this stage. Having tests for all components (more tests) reduces the high risk of failure during the release.
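
To make this concrete, here is a minimal sketch of a component test with JUnit 4. The DiscountCalculator class and its discount rule are hypothetical examples, not part of any real application; the point is one focused test per behavior.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Minimal sketch of a component (unit) test with JUnit 4.
// DiscountCalculator is a hypothetical component used only for illustration.
public class DiscountCalculatorTest {

    // The component under test: applies a discount rate to an order amount.
    static class DiscountCalculator {
        double priceAfterDiscount(double amount, double rate) {
            return amount * (1 - rate);
        }
    }

    // One test per behavior: a 10% discount on 100 gives 90.
    @Test
    public void appliesTenPercentDiscount() {
        DiscountCalculator calculator = new DiscountCalculator();
        assertEquals(90.0, calculator.priceAfterDiscount(100.0, 0.10), 0.001);
    }

    // Boundary behavior: a zero discount leaves the amount unchanged.
    @Test
    public void zeroDiscountLeavesAmountUnchanged() {
        DiscountCalculator calculator = new DiscountCalculator();
        assertEquals(100.0, calculator.priceAfterDiscount(100.0, 0.0), 0.001);
    }
}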


1.2 Component Integration Test:

  •     What is it ? - It is a test which verifies the integration of two or more different components.
  •     When ? - When different components are ready to be integrated and there is a need to test without mocking.
  •     How ? - In the component tests we use mocking, but here we should not mock. We should test the functional flow and the data flow between different components without mocks: the integration between code and the database, the network, queues, mail servers, reading and writing files, etc. (see the sketch below).
  •     Where ? - These tests live alongside the source code.
  •     Who ? - Written by Developers
  •     Documented ? - These are documented in terms of code.
  •     Optional / Mandatory ? - It is mandatory to have integration tests.
  •     Automated / Manual ? - Automated
  •     Why we need ? - These tests confirm functional behavior and code quality at the component integration level. They also reduce the effort of large system integration testing and of finding and fixing bugs.
  •     Risk - The risk and cost of finding and fixing a bug while integrating different components is very low at this stage. Having tests for all component integrations (more tests) reduces the high risk of failure during the release.

   (Note: you can find the correct way to write integration tests here: http://zeroturnaround.com/rebellabs/the-correct-way-to-use-integration-tests-in-your-build-process/)
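
As an illustration, here is a minimal sketch of a component integration test that talks to a real (in-memory H2) database instead of a mock. The customer table and the data are hypothetical, and it assumes the H2 driver is on the test classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Minimal sketch of a component integration test: no mocks, the code talks
// to a real (in-memory H2) database. Table and data are hypothetical;
// assumes the H2 driver is on the test classpath.
public class CustomerRepositoryIT {

    @Test
    public void storesAndReadsACustomer() throws Exception {
        try (Connection connection =
                 DriverManager.getConnection("jdbc:h2:mem:testdb", "sa", "")) {
            try (Statement statement = connection.createStatement()) {
                statement.execute("CREATE TABLE customer (id INT, name VARCHAR(50))");
                statement.execute("INSERT INTO customer VALUES (1, 'Alice')");

                // Verify the data flow through the real database, not a mock.
                try (ResultSet result =
                         statement.executeQuery("SELECT name FROM customer WHERE id = 1")) {
                    result.next();
                    assertEquals("Alice", result.getString("name"));
                }
            }
        }
    }
}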
 

2. Second Level: Story / Feature Level:

At this level we validate the system against the user requirements. The test cases are written in BDD (Behavior Driven Development) format. It should be possible to send these test cases to the business or the Product Owner for review, so they can check their own requirements against different examples. This is grey box testing because test case writing is independent of the code, while the test implementation is bound to the feature code. This level is completely independent of the component level because it only considers the user requirements for verification.

Tools used: Cucumber

We have the below types of tests at this level.


2.1 Story Acceptance Test:

  •     What is it ? - This test verifies the behavior of the software product at the story level, generally expressed as an example or a usage scenario. The functionality of the system is tested against the story acceptance criteria.
  •     When ? - During the sprint, while a story is being implemented, testers should come up with different scenarios based on the story acceptance criteria, and developers implement those tests. (My assumption here is that the tester is non-technical, as implementing Cucumber tests for a Java application needs good experience in Java.)
  •     How ? - The test cases are written in Given-When-Then form in a Cucumber feature file, covering all relevant boundary cases (see the sketch after this list).
  •     Where ? - The test cases live in the Cucumber feature file and the test implementation (step definitions) is in Java code.
  •     Who ? - Tester and Developer
  •     Documented ? - These are documented in terms of cucumber feature file.
  •     Optional / Mandatory ? - It is mandatory to have acceptance tests.
  •     Automated / Manual ? - Automated
  •     Why we need ? - These tests confirm the story requirements. The scenarios are easily visible to the business/customer. They also prevent bugs which could otherwise appear during the full system test.
  •     Risk - By reviewing the scenarios with examples, the business can find flaws and change the requirement at an early stage, so change requests at a later stage are less likely.
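
A minimal sketch of what the Java step definitions behind such a scenario could look like. The discount scenario, the amounts and the class names are hypothetical, and the annotation package depends on the Cucumber version in use (older versions use cucumber.api.java.en.*).

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.Assert.assertEquals;

// Minimal sketch of step definitions for a story acceptance test.
// The matching Cucumber feature file could read:
//   Scenario: 10% discount for orders above 100 EUR
//     Given a customer with an order of 120 EUR
//     When the order is submitted
//     Then the final price is 108 EUR
public class DiscountSteps {

    private double orderAmount;
    private double finalPrice;

    @Given("a customer with an order of {double} EUR")
    public void aCustomerWithAnOrderOf(double amount) {
        orderAmount = amount;
    }

    @When("the order is submitted")
    public void theOrderIsSubmitted() {
        // In a real project this would call the application code;
        // here the discount rule is applied inline for illustration.
        finalPrice = orderAmount > 100 ? orderAmount * 0.9 : orderAmount;
    }

    @Then("the final price is {double} EUR")
    public void theFinalPriceIs(double expected) {
        assertEquals(expected, finalPrice, 0.001);
    }
}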


2.2 Functional Integration Test (Feature Test):

  •     What is it ? - This is the functional integration test which verifies the behavior of the software product at the feature level, generally expressed as an example or a usage scenario. If a feature is divided into several user stories, this test covers the functional flow between the different stories.
  •     When ? - When a feature is completed (after completion of its different stories) and there is a need for it, testers provide the test scenarios and developers implement those tests. (My assumption here is that the tester is non-technical, as implementing Cucumber tests for a Java application needs good experience in Java.)
  •     How ? - The test cases are written in Given-When-Then form in a Cucumber feature file, covering the functional flows between the parts of the system.
  •     Where ? - The test cases live in the Cucumber feature file and the test implementation (step definitions) is in Java code.
  •     Who ? - Tester and Developer
  •     Documented ? - These are documented in terms of cucumber feature file.
  •     Optional / Mandatory ? - It is optional and depends on the feature: if the feature is divided into several stories then this test should exist; if a single story covers the whole feature then the acceptance test is enough.
  •     Automated / Manual ? - Automated
  •     Why we need ? - These tests confirm the business requirement in terms of the information flow from one part of the system to another. The scenarios are easily visible to the business. They also prevent bugs which could otherwise appear during the full system test.
  •     Risk - By reviewing the scenarios with examples, the business can find flaws and change the requirement at an early stage, so change requests at a later stage are less likely.


3. Third Level: System Level:

At this level, we validate the system against very high level requirements in an end to end system test, where we check whether the system as a whole can perform the operations it is intended to do together with other systems. This is black box testing because test case writing and test implementation are not attached to the feature code. The testing activity involves running the system on some given input and verifying the system output.

The different types at this level are:


3.1 System Test:

  •     What is it ? - In this test we validate the complete system behavior by giving some input and checking the system output, without knowing the internal structure of the system. Happy path scenarios are tested. It is also an end to end test.
  •     When ? - After completion of a feature, when the system is ready to be released.
  •     How ? - The software is rolled out to a test environment (production like) and the system is run there. The test cases are written in an Excel sheet (manual) or in Robot Framework files (automated). (I found the Robot Framework can be used; people can decide on a tool based on their requirements and experience. See the sketch after this list.)
  •     Where ? - If executed manually, the test cases are in an Excel sheet; otherwise they are in Robot files.
  •     Who ? - The tester writes and implements the tests; the developer integrates the automated system tests into the continuous delivery pipeline.
  •     Documented ? - These are documented in an Excel sheet or a Robot file.
  •     Optional / Mandatory ? - It is Mandatory.
  •     Automated / Manual ? - Automation is optional, but it is always mandatory to have a system test before release. Tools used: Robot Framework. (The benefit of Robot is that it runs and tests the application in a different runtime than the application itself. It also has an easy SSH library with which it can connect to different server machines and test the application.)
  •     Why we need ? - These tests confirm that the system as a whole is working. The main goal is to accept the system for release. Bugs are not expected here.
  •     Risk - If we find bugs at this stage, the cost of fixing and retesting is high. Therefore the risk at this stage is high.
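
As an illustration, here is a minimal sketch of an automated black box system check written in Java. In my setup such checks live in Robot Framework files; this Java version only illustrates the idea, and the health endpoint URL is a hypothetical placeholder.

import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Minimal sketch of a system (end to end) smoke test: the application is
// already deployed to a production-like test environment and we only check
// the externally visible behavior. The URL is a hypothetical placeholder.
public class SystemSmokeTest {

    @Test
    public void healthEndpointAnswersWithOk() throws Exception {
        URL url = new URL("http://test-env.example.com/health");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        connection.setConnectTimeout(5000);
        connection.setReadTimeout(5000);

        // Happy path: the deployed system responds with HTTP 200.
        assertEquals(200, connection.getResponseCode());
        connection.disconnect();
    }
}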


3.2 Performance Test: 

  •     What is it ? - This test checks the performance of the application. In our case a load test is enough to check the performance.
  •     When ? - After the application is developed and when we have a performance requirement.
  •     How ? - The team can decide how to proceed, for example running the system with a specific data load and measuring the performance (see the sketch after this list).
  •     Where ? - This test is done on test environment (production like)
  •     Who ? - Developer/Performance testers are required to do this test. 
  •     Documented ? - These are documented in wiki (based on documentation tool available in organization)
  •     Optional / Mandatory ? - It is optional, based on requirements.
  •     Automated / Manual ? - Automated.
  •     Why we need ? - It is possible that the developed system fails under a certain data load.
  •     Risk - The risk of system failure under production load is very high, so we need to assure that our system can sustain a specific load.
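
As a sketch of what "running the system with a specific data load" could mean, the snippet below fires a number of concurrent requests at the test environment and reports the average response time. The URL, user count and numbers are hypothetical; a real project would normally use a dedicated load testing tool rather than hand-rolled code.

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal load test sketch: N concurrent users hit one endpoint and we
// report the average response time. URL and numbers are hypothetical.
public class SimpleLoadTest {

    public static void main(String[] args) throws Exception {
        final int concurrentUsers = 50;
        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        List<Future<Long>> timings = new ArrayList<>();

        for (int i = 0; i < concurrentUsers; i++) {
            timings.add(pool.submit(new Callable<Long>() {
                @Override
                public Long call() throws Exception {
                    long start = System.nanoTime();
                    URL url = new URL("http://test-env.example.com/orders");
                    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
                    connection.getResponseCode(); // wait for the full response
                    connection.disconnect();
                    return (System.nanoTime() - start) / 1_000_000; // milliseconds
                }
            }));
        }

        long totalMillis = 0;
        for (Future<Long> timing : timings) {
            totalMillis += timing.get();
        }
        pool.shutdown();

        System.out.println("Average response time: " + (totalMillis / concurrentUsers)
                + " ms for " + concurrentUsers + " concurrent users");
    }
}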


3.3 Exploratory Test: 

  •     What is it ? - It is the test where we manually explore the software, based on our business and system knowledge. This is an unscripted test.
  •     When ? - Before releasing an application to the production environment, this test has to be performed.
  •     How ? - We browse through the application to find bugs that are not yet covered and to get familiar with the system.
  •     Where ? - This test is done on test environment (production like)
  •     Who ? - Tester, Developer, Product Owner and Business Owner
  •     Documented ? - These are documented in wiki. (based on documentation tool available in organization)
  •     Optional / Mandatory ? - It is Optional. Based on requirement
  •     Automated / Manual ? - Manual.
  •     Why we need ? - To learn the system and to find bugs which the scripted tests could not catch.
  •     Risk - The risk is very high if the system fails or severe bugs are found at this stage.


Quality Dashboard:

We should have a quality dashboard where we display the test cases and the test results of all the above levels for each project.

Wednesday, 9 April 2014

Agile tester role during design and development in an agile team


How an agile tester can contribute towards the design and development of a software application in an agile team.

There is always a question about which tasks a tester should do during design and development in an Agile team. The simple answer we all know is that during this time the tester is busy designing test scenarios and test cases, preparing test automation, etc.
But my experience says an agile tester can also contribute towards proper design and development.
Here are my findings:

Contribution towards Design:
When the team has a story to work on, the developers first do the low level design. Either the whole team does the design together in a common room, or a pair or a single developer does it. If a single developer or a pair does the design, they should walk the whole team through it. In either case the tester should be present in this meeting. In this discussion the tester not only gains system design knowledge but can also bring forward already created test scenarios, or think about new scenarios and discuss them within the team. This activity helps the developers correct the design if necessary and avoid bugs.
This happened to me in the past. During one of our design review meetings, while the developer was explaining his design, I saw that one of my scenarios was not covered, so I asked what would happen in such cases. When my team understood the scenario they modified the design. This design modification led to a proper implementation, which not only saved the time of finding, reporting, fixing and retesting a bug, but in the end improved the quality of the deliverable.
Now you might ask: if the tester is not well versed in the development technology, how can he understand the technical discussion? My experience says this is not required; if the tester attends the design review meetings regularly, they will definitely be able to understand the discussion.

Contribution towards Development:
Once the test scenarios are ready, the agile tester should explain them to the team. This activity helps the team understand and visualize the functionality in a better way.
This leads to proper implementation and a bug free deliverable.
During this discussion the team can also identify, improve or add more scenarios, which helps the tester understand the system behavior.

Sunday, 27 October 2013

How the business and the development team benefit from agile testing.


I tried this and found it works: I pick up a test story before the implementation. Of course this should be discussed with the team and the Product Owner. If stories are big and can be divided into a test story and an implementation story, then we try to do that, and I pick up the test story one sprint before the actual implementation. As one of my friends put it, we could call this test story an analysis story, but I prefer to call it a test story.

The benefits I saw:

1. After I finished my test design (specification by example), I did a review with my development team. This helped the team to really consider all possible constraints and to understand the requirement better. Everybody brought up different questions which needed business clarification.

Now imagine: this could have been a disaster if it had only come up during implementation.

2. When we asked the business for clarification (we also had a test scenario review with the business), they had to rethink their requirement. Because of the test scenarios, the business was able to see the impact of the requirement, and they redefined it into a more polished form.

3. Finally, during the sprint I was able to help my development team and the business with my test scenarios, and to bring the business and the development team in the same direction.

4. A sprint later, when the actual implementation happened, it was smooth and we had quality output at the end. The team developed the functionality without any trouble and we had good test results.