As testers, we often get our testing end date imposed upon us. As one of the last stages in the software development lifecycle, the testing phase can get squeezed into a smaller-than-needed time frame because the organisation has committed to a release date. Sound familiar?
This blog will help you work out how and when to finish a testing life cycle, equipping you with the justification you need to push back on premature launch dates – and to make the most of the time you do have when pushing back isn’t possible.
We’ll also cover elements within the software testing life cycle that will help you prioritise – that way, if you have to cut your testing short, you can be confident that you’ve covered off the right things.
What is the software testing life cycle?
The software testing life cycle is every task and action you do to verify and validate the software prior to release. It’s an umbrella term covering the whole start-to-finish process of software testing. Read on for a breakdown of each stage, and how to mitigate issues and improve your methods.
What are the software testing life cycle phases?
When I first trained in the software testing life cycle phases, we were taught a (now very dated) acronym: Posh Spice Eats Raw Carrots, which stands for Planning, Specification, Execution, Recording and Closure. While the specifics and techniques for each of these phases have developed since I was first learning the processes, the basis of this is still sound.
So, first things first: the planning phase. This can be split into three parts: requirements analysis, risk assessment and test planning.
Requirements analysis
The first thing you need to do is understand what you’re testing against – so you’ll need to define your criteria for a verified and validated product.
- Verification – this is making sure the software has no functional bugs. It’s answering the question: Does the product fulfil the requirements the end users set out at the start of the project? If we were testing a calculator, it would be making sure that when we plug in numbers, we get the answers we were expecting.
- Validation – this is a bit more nuanced. Validation is checking that the product we’re developing is what the end user actually needs. So we’re trying to answer the question: Are the requirements we’ve set out suitable for meeting the users’ needs? Using the same example, if we’ve set out a requirement that 2+2 = 5, and our calculator does that, it would be verified. But when we checked this with user acceptance testing, we’d find it wasn’t valid for the user’s needs.
It’s worth saying here that you need to define your validation criteria based on what the user actually wants, and what’s specified by the wider project. Regardless of what you think 2+2 should equal, if the project requirements specify a calculator should say 2+2 = 5, you need to be testing for that instead.
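To make the verification side concrete, here’s a minimal sketch in Python using pytest – the add() function and the requirement value are hypothetical:

```python
# A minimal sketch of verification as an automated check, using pytest.
# The add() function and the requirement are hypothetical.

def add(a, b):
    """Hypothetical calculator function under test."""
    return a + b

def test_addition_is_verified():
    # Verification: check the output against the written requirement.
    # If the requirement (however odd) said 2 + 2 = 5, we would have
    # to assert that instead - we verify against the spec as written.
    assert add(2, 2) == 4

# Validation can't be captured in an assert so easily: it's user
# acceptance testing that confirms the requirement itself reflects
# what users actually need.
```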
At this stage you should thoroughly analyse the requirements to make sure you are verifying the right conditions. Check the paperwork and ensure what’s written is fit for purpose and not ambiguous.
A common example of where this goes wrong is performance testing. The requirements document will often list criteria like, “the system is quick.” What does “quick” mean? According to what – a snail or the speed of light?
Make sure all the system requirements you’re going to write tests for are specific, measurable and relevant to users’ needs, and that they’re understood the same way by every team throughout the software testing life cycle.
(This is a key reason why it’s important to have testers involved in the early requirements documentation process as encouraged by shift left testing. Read more about it here.)
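As a sketch of what a measurable version of “the system is quick” could look like as an automated check – the endpoint URL and the two-second threshold are assumptions for illustration:

```python
import time
import urllib.request

# Hypothetical, measurable rewrite of "the system is quick":
# "the search endpoint responds within 2 seconds to a single request".
MAX_RESPONSE_SECONDS = 2.0
SEARCH_URL = "http://test-env.example.com/search?q=widgets"  # placeholder

def test_search_meets_response_time_requirement():
    start = time.monotonic()
    with urllib.request.urlopen(SEARCH_URL, timeout=10) as response:
        response.read()
    elapsed = time.monotonic() - start
    assert elapsed <= MAX_RESPONSE_SECONDS, (
        f"Took {elapsed:.2f}s; the requirement is {MAX_RESPONSE_SECONDS}s"
    )
```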
Risk assessment
Before you start planning your tests, you need to assess your risks. Check out this blog for detailed advice on how to do it right.
Test planning
Next, you’re going to plan your tests. Before you crack on, you need to write a test plan and get sign-off from business executives or stakeholders on what, how and when you’re going to test. If you don’t, you leave the door open for misunderstanding and blame later down the line.
What your test plan looks like will depend on the kind of project you’re working on. If you’re doing a more traditional waterfall or V-model project, you might have a test plan template set out in a Word document, which includes things like:
- What’s in scope
- What’s out of scope
- What teams are involved
- Who the stakeholders are
- How you’re going to carry out the tests
Test planning in Agile
If you’re working on an agile project, test planning should be part of sprint planning. You’ll still need to consider the questions above, but it happens more regularly, and the intent and format of the planning might be a little different. For instance, it will probably be more tool-driven – set out on kanban boards within your project management software instead of in a hefty Word document!
Whatever your test planning document looks like, these are the bare bones every test plan needs:
Test levels
- Differentiate your tests across the different levels of the project: unit tests (small parts of the system in isolation), system tests (the whole system) and system integration tests (how the system integrates with other systems) – see the sketch after this list.
Non-functional testing
- This should include elements like performance testing, load testing, security testing, operational acceptance testing and so on.
Entry and exit criteria
- We’ll cover these further on, but you’ll need to plan ahead and write these criteria so you know when you’re ready to start and complete testing.
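To make the test-level distinction concrete, here’s a minimal Python sketch – the discount function and loyalty service are invented for illustration:

```python
from unittest.mock import Mock

# --- Unit test: a small part of the system in isolation ---
def calculate_discount(price, loyalty_service):
    """Hypothetical function: 10% off for loyal customers."""
    return price * 0.9 if loyalty_service.is_loyal() else price

def test_discount_unit():
    # The collaborator is mocked, so only this function's logic is tested.
    loyal_customer = Mock(is_loyal=lambda: True)
    assert calculate_discount(100, loyal_customer) == 90

# --- System / system integration tests ---
# At those levels you'd exercise the same behaviour through the deployed
# system (and its real integrations with other systems), with no mocks.
```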
Once you’ve done your test plan, you must get sign-off from stakeholders so everyone is aware of what’s going to happen.
That said, your test plan needs to be a living document. As circumstances and risk profiles change, you’ll need to revisit it and adapt the plan to the changing project. It’s good practice not to follow it dogmatically when things change – but it’s also key to update the document accordingly when they do.
Test case design and development
Next is the specification stage. The best way to approach this phase is to split it into test conditions and test cases – this will ensure your test designs actually fit the specifications you need to meet.
Design and confirm the test conditions
Start by writing your test conditions. If we go back to the calculator example, a test condition might be, ‘addition works’. When designing test conditions, we obviously can’t test everything: you can’t add every possible number combination together to confirm that all addition works. Instead, you create a representative subset of tests.
There are tonnes of different test design techniques to account for this, but two of the most common – both shown in the sketch after this list – are:
- Boundary value analysis – this is where you focus your efforts around boundaries, because in software development that’s often where issues occur. For example, if you’re testing a system that computes interest rates, and a user qualifies for a higher interest rate when they’ve got more than £1000, you’d design tests for the £999, £1000 and £1001 cases.
- Equivalence partitioning – this is dividing your inputs into groups (partitions) that the system should treat the same way, then testing one representative from each group. For instance, if 1+1 = 2 works, we can reasonably assume 2+2 = 4 (another pair of small positive integers) will also work.
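Here’s a minimal sketch of both techniques using pytest – the interest rate rule and the rate values are assumptions for illustration:

```python
import pytest

# Hypothetical rule: customers with MORE than £1000 get the higher rate.
def interest_rate(balance):
    return 0.05 if balance > 1000 else 0.02

# Boundary value analysis: test just below, on, and just above the boundary.
@pytest.mark.parametrize("balance, expected_rate", [
    (999, 0.02),   # just below the boundary
    (1000, 0.02),  # on the boundary - "more than £1000" excludes £1000
    (1001, 0.05),  # just above the boundary
])
def test_interest_rate_boundaries(balance, expected_rate):
    assert interest_rate(balance) == expected_rate

# Equivalence partitioning: one representative stands in for the whole
# group of values the system should treat identically.
@pytest.mark.parametrize("balance, expected_rate", [
    (500, 0.02),    # representative of the standard-rate partition
    (50000, 0.05),  # representative of the higher-rate partition
])
def test_interest_rate_partitions(balance, expected_rate):
    assert interest_rate(balance) == expected_rate
```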
Create your test cases
Depending on who or what will execute the test cases, this is likely to be a very detailed step-by-step guide setting out exactly how each test should be carried out. At this stage you’ll decide who will be executing the tests: will you automate them, or get a third-party team to follow the scripts? If you’re handing the scripts to a computer, or to someone who isn’t very adept with the system, you’ll need to be very specific about what you want done!
The most important thing here is that you have the expected result: what output do you need to show that the test case has passed or failed? Always focus on that objective when writing your test cases.
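As a sketch, a test case record with an explicit expected result might look like this in Python – the fields, IDs and steps are illustrative, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Hypothetical test case record - not a prescribed format."""
    case_id: str
    steps: list[str]          # precise instructions for the executor
    expected_result: str      # the pass/fail yardstick - never omit this

calc_case = TestCase(
    case_id="CALC-ADD-001",
    steps=[
        "Open the calculator",
        "Enter 2, press +, enter 2, press =",
    ],
    expected_result="The display shows 4",
)
```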
It’s also worthwhile prioritising your test cases. Often these will inherit priority from the risk or requirement associated with them, but there can be key differences when a test case is a prerequisite for another. Sometimes a case won’t itself be testing a high-priority risk, but a high-priority test case will depend on it – so the first case becomes high priority too. Considering those dependencies at this stage will help you complete testing to a better standard, because it gets the most important tests done first.
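A minimal sketch of that priority inheritance in Python – the case names, priorities and dependency are made up for illustration (1 = highest priority):

```python
# If a high-priority case depends on a lower-priority one, the
# prerequisite is promoted so it gets run first.
priorities = {"login": 3, "create_order": 1, "search": 2}
prerequisites = {"create_order": ["login"]}  # create_order needs login

def promote_prerequisites(priorities, prerequisites):
    changed = True
    while changed:  # repeat until stable, so chains of prerequisites resolve
        changed = False
        for case, deps in prerequisites.items():
            for dep in deps:
                if priorities[dep] > priorities[case]:
                    priorities[dep] = priorities[case]
                    changed = True
    return priorities

print(promote_prerequisites(priorities, prerequisites))
# {'login': 1, 'create_order': 1, 'search': 2}
```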
Test environment setup
Test environments can be a real thorn in the side when trying to keep a software testing life cycle on track.
This phase focuses on your test environment and ensuring that it’s appropriate for the tests you are required to execute. If it’s wrong, your tests will be invalid, and you’ll have to repeat work – so it’s worth spending the extra time to check it!
Here’s what you need to consider when setting up and managing your test environment:
- To avoid test environment issues, you need to be really specific about when and what you’re testing, and ensure the test environment stays the same each time you test. The easiest way to do this is to run a smoke test to check you’ve got the latest code versions, data sets and so on (see the sketch after this list).
- Test environment access management is essential. Communicate with other teams so everyone is aware of the known state that the test environment needs to be reset to.
- Back up and store data correctly in your test environment, for extra protection against invalidation of tests if the environment is shared across teams.
- Be very careful of the privacy requirements of your test data. Back in the old days people would copy chunks of live data, but now it’s vital to take account of privacy. There are loads of data obfuscation tools that replace personal information with dummy data, so use one of those if you’ve not got dummy data already.
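As a sketch of what that environment smoke check might look like in Python – the URLs, build number and sentinel record ID are placeholders, not a real API:

```python
import json
import urllib.request

# Hypothetical smoke checks run before a test cycle starts.
ENV_URL = "http://test-env.example.com"
EXPECTED_BUILD = "2.14.0"

def test_environment_runs_expected_build():
    with urllib.request.urlopen(f"{ENV_URL}/version", timeout=10) as r:
        assert json.load(r)["build"] == EXPECTED_BUILD

def test_reference_data_is_loaded():
    # A sentinel record from the known data set proves the data reset ran.
    with urllib.request.urlopen(f"{ENV_URL}/customers/SMOKE-001", timeout=10) as r:
        assert r.status == 200
```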
Finally, once you complete testing, it’s best practice to decommission your test environment if it won’t be reused. You’d be amazed how often, when consulting, we’ve found servers still running legacy test environments and costing teams money. Manage access and reallocate resources from a decommissioned test environment to increase the efficiency of your usage.
Read more about how we can help you with personalised advice on your Test Environment Management here.
Test execution
This stage is the simplest of all the software testing life cycle phases to explain – run the tests! Think through whether you’re going manual or automated and plan accordingly.
In this stage, best practice is to report defects and incidents. Once you’ve run a test, go back and compare the result with the expected outcome (set out in the test condition and test case), and if there’s a discrepancy, raise a defect.
If you’re working in an agile environment with a blended team, it can feel tempting to report the defect straight to the developer, have them fix the problem, and just run the test again. But be careful with this method: while it’s quicker, you don’t know whether the fix they made for that test condition has broken something else you’ve already tested. If you don’t have a collated record of defects, you can’t identify this in a later phase and fix it – so make sure all incidents are recorded.
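A minimal sketch of keeping a collated defect record in Python – in practice you’d use a defect tracker, and the file format and fields here are illustrative:

```python
import csv
from datetime import date

# The principle: every defect goes into one collated record,
# even when the fix is immediate.
def log_defect(path, case_id, description, severity):
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today(), case_id, description, severity])

log_defect("defects.csv", "CALC-ADD-001",
           "Display shows 5 for 2 + 2 after rounding change", "high")
```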
Recording
Even in agile projects, recording your outcomes thoroughly is very important. Mark each test as passed or failed (or, if you’re running automated tests, that should happen for you) and measure your progress against time and quality. Create a regular report for stakeholders that demonstrates your progress – though if you’re using agile tools, you’ll probably get real-time progress reporting.
When recording and reporting your test progress, you need to weigh both progress and quality attributes, and think about what gives the most accurate picture. For instance, if I’ve got 10 tests and I’ve done 9, it’s easy to say I’m 90% through my testing. But if those 9 are low priority and the last one is the most important and the longest to execute, that’s not an accurate report. So use the risk assessment outcomes from the test planning phase, as well as quantitative data, to give a true picture of your progress and outcomes.
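Here’s a sketch of that 9-out-of-10 example in Python, comparing a raw count with a priority-weighted figure – the weights are an assumption for illustration:

```python
# Nine low-priority tests done, one high-priority test outstanding.
WEIGHTS = {"high": 5, "low": 1}

tests = [{"priority": "low", "done": True} for _ in range(9)]
tests.append({"priority": "high", "done": False})

raw = sum(t["done"] for t in tests) / len(tests)
weighted = (
    sum(WEIGHTS[t["priority"]] for t in tests if t["done"])
    / sum(WEIGHTS[t["priority"]] for t in tests)
)

print(f"Raw count: {raw:.0%}")       # 90% - looks nearly finished
print(f"Weighted:  {weighted:.0%}")  # 64% - a truer picture
```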
Test cycle closure
This is the last phase of the software testing life cycle, and it centres on the report. Once you’re at the point where you think you’re ready to complete testing – or you’re told your testing period is finished – the report summarises:
- What tests you carried out
- Any deviations from the test plan
- A summary of defects you found, especially those still outstanding
- A recommendation on how fit for purpose the product is
As much as it can be frustrating, it’s not up to us testers to make the judgement call on whether or not the project goes live. So your report needs to be designed to give stakeholders all the information they need to make an informed decision. Focus on what you’re trying to communicate, and don’t put in unnecessary data just because you can!
When we work with teams that have sophisticated reporting tools like QlikView, which generate tons of data, they’ll sometimes send out a 400-page report… but no one reads it. So take a smart, qualitative and quantitative approach that focuses clearly on the data business executives need to make decisions, and ditch the irrelevant data.
What are entry and exit criteria?
Entry criteria are the requirements you must meet before you’re ready to start testing – for instance, ‘Is the test environment ready?’. Exit criteria are the requirements you must meet to signify that you’re ready to complete testing. You can use entry and exit criteria at all the different levels of the project, from system integration testing through to user acceptance testing. If you’ve got test dependencies, your entry criteria will sometimes reference the exit criteria of the level before.
Creating entry and exit criteria in software testing
When it comes to writing exit criteria, there’s a mistake we see often: as with quantitative reporting, testers will sometimes base their exit criteria on purely statistical requirements, such as ‘95% of planned tests have been run’. But this fails to take into account what that remaining 5% covers! It’s better to take a more priority- and risk-focused approach – for instance, ‘100% of priority 1 tests have been run’.
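A minimal sketch of that priority-focused exit criterion in Python – the test records and their run/passed fields are illustrative:

```python
# Illustrative test records; 'run' and 'passed' come from your results log.
tests = [
    {"id": "T1", "priority": 1, "run": True, "passed": True},
    {"id": "T2", "priority": 1, "run": True, "passed": True},
    {"id": "T3", "priority": 2, "run": False, "passed": False},
]

def exit_criteria_met(tests):
    # "100% of priority 1 tests have been run and passed."
    priority_one = [t for t in tests if t["priority"] == 1]
    return all(t["run"] and t["passed"] for t in priority_one)

print(exit_criteria_met(tests))  # True - the only unrun test is priority 2
```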
As you probably know, the most important thing with entry and exit criteria in software testing is not to go ahead if the criteria haven’t been met. If you do, you’re building on shaky foundations – it will invalidate the next tests you run and mean you’re wasting your time.
When is software testing complete?
From a practical perspective, you can complete testing when all of your exit criteria have been met – which is why it’s essential to get them right. Many testers have a set of boilerplate exit criteria in their template test plan and will just copy that across. But you need to actually think about the specifics of the project and what is and isn’t relevant to your desired outcomes. Take risk into account and take the time to write bespoke entry and exit criteria, because that will often save you time later on.
While having to finish testing before you’re ready is a common issue, testers sometimes forget that you can complete testing early if you’ve met the exit criteria! If you’re confident you’ve mitigated all the risks identified, don’t be afraid to stand up and say the testing phase is finished. Obviously use data to back up your assertion – but I’ve seen projects where resources could have been reallocated earlier if testers had felt able to report that they were finished.
And that’s the end of your guide to completing testing and the software testing lifecycle. If you want to continue expanding your testing expertise, check out our guide to shift left testing here.
Interested in getting some advice on conducting your software testing effectively and efficiently? Get in touch with us here.