With the credit crunch affecting IT budgets, Experimentus, the leading UK software quality management solutions consultancy, recently highlighted ten ways to reduce software development lifecycle costs within IT departments.
Today we look in more depth at the next of these tips: “Focus the quality approach on product risks; focus on preventing rather than detecting defects”.
Seems obvious, doesn’t it? The proactive prevention of defects before the code is built is better than the reactive search for defects afterwards. After all, testing is a process to demonstrate the level of product quality, so a good test phase will aim to demonstrate, through measurement, the level of software quality. To be clear on the role of testing and testers: testing does not itself improve software quality, testers don’t create defects and testers don’t fix defects. Testers find defects that inform the project of the level of quality of the software; developers may then improve software quality by fixing them, assuming they don’t create more defects in the process.
Of course we live in the real world, and in the real world there are many challenges to creating high-quality software straight off the workbench. But let’s not lose sight of the fact that between 50% and 65% of all defects are built into requirements and design documentation; the rest are introduced in the act of coding and managing software configuration. So it stands to reason that the software quality effort should never start with the act of testing after the code has been built; it should only end there.
So why are we so bad at early prevention and detection of defects? Well, there are many reasons, but the main one is that most development and test models, as implemented, do not effectively manage the areas of Risk, Measurement and Structured Reviews.
So why should we look at these in detail, and what has Risk got to do with Testing? Well, Risk has everything to do with testing. The measurement of software quality (testing) is a risk mitigation process and needs to be driven by a method which enables us to analyse, quantify and then mitigate risk. Reviews enable us to focus testing equally on removing defects from requirements and design documentation. And Metrics? Well, if you haven’t got metrics, how do you know where you are, or whether your testing is getting better or worse?
Risk
Testing is a risk mitigation process, so it should go without saying that it must include processes and techniques which enable effective and informative risk-based data to drive the test process. Entrance and exit meetings should in effect be mini risk assessment workshops, but of the thousands of entrance and exit meetings held each week, how many are actually used in this way?
Projects are usually instigated to meet a business need, so what better place is there to start recording risk than with the business requirements? Business requirements must be risk assessed and scored in terms of their criticality to the business; this could be anything from urgent functionality to highly sensitive functionality. In much the same way, the functional and architectural system designs can be scored using a simple set of criteria in order to determine a level of risk.
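As an illustration only, such scoring can be very simple. The criteria, weightings and thresholds below are hypothetical assumptions, not a prescribed method; the point is that each requirement ends up with a comparable risk level that can drive test prioritisation.

```python
# Hypothetical sketch of risk scoring for requirements.
# Criteria names, weights and thresholds are illustrative assumptions.

def risk_score(business_criticality, complexity, change_frequency):
    """Each input is rated 1 (low) to 5 (high); criticality is weighted double."""
    return business_criticality * 2 + complexity + change_frequency

def risk_level(score):
    """Map a raw score onto a simple three-band risk level."""
    if score >= 15:
        return "high"
    if score >= 9:
        return "medium"
    return "low"

# Example requirements with illustrative ratings
requirements = [
    {"id": "REQ-001", "criticality": 5, "complexity": 4, "changes": 3},
    {"id": "REQ-002", "criticality": 2, "complexity": 1, "changes": 1},
]

for req in requirements:
    score = risk_score(req["criticality"], req["complexity"], req["changes"])
    print(req["id"], risk_level(score))
```

The high-risk requirements would then receive the earliest and deepest test attention.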
For many people who work in a well-structured project environment, risk assessments are carried out, but unfortunately the process is often severely flawed because the risk data is not effectively applied to the test process. Testing is a risk mitigation process and must reflect the results of risk assessments at every stage, from test planning and design through to execution and reporting; in particular, the exit meeting should be used to assess the test phase results against the risk assessment.
Finally, let’s not forget that risk is not a static metric and risk assessment is not a one-off activity; the continual assessment and reassessment of risk is key to a successful test project.
Reviews
First, let’s make the assumption that the staff you employ are adequately trained to do their job. So why do we need to do reviews? Surely their work is perfect? There are three main reasons:
1. Developments can be complex
2. It is human to err
3. The language barrier
The first two are obvious, but what’s the third all about? Well, there are some serious communication challenges in IT departments: business, development and test staff often don’t meet or talk face to face, and even more importantly, business people think like business people, developers think like developers and testers try to think like both. The fact of the matter is that simple and straightforward statements or requirements can often mean different things to different people. Language and communication rely on contextual references and body language; if these are not taken into account, it’s not surprising that software fails.
When breaking a system design down into requirements, there are many rules which must be adhered to in order to ensure everyone understands them in the same way. We will cover these later, but for example, how many of you have sent a text to a friend which has been misinterpreted in terms of its intent and tone? Well, design documents suffer from the same problem. So when you consider all the communication problems we could be faced with, it’s not hard to understand why so many defects are “built in” to design specifications.
So how can we improve this? Well, the answer is testing: not the dynamic testing of software code, but the static testing of documentation through structured reviews and methods of reviewing requirements which reduce the level of misunderstanding.
First, you could build a structured review process using a RACI (Responsible, Accountable, Contributor, Informed) matrix and mandate that documentation is reviewed properly. Staff should clearly understand what they are reviewing and why.
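A RACI matrix for document reviews can be represented very simply. The documents and roles below are illustrative examples, not a recommended organisation structure; the sketch just shows how the matrix makes the reviewing responsibility for each document explicit.

```python
# Illustrative RACI matrix for document reviews.
# Documents, roles and assignments are example assumptions only.
raci = {
    "Business Requirements": {
        "Business Analyst": "A",   # Accountable
        "Test Lead": "R",          # Responsible
        "Developer": "C",          # Contributor
        "Project Manager": "I",    # Informed
    },
    "Functional Design": {
        "Designer": "A",
        "Developer": "R",
        "Test Lead": "C",
        "Business Analyst": "C",
    },
}

def reviewers(document):
    """Return the roles that must actively review a document (R or A)."""
    return [role for role, code in raci[document].items() if code in ("R", "A")]

print(reviewers("Business Requirements"))
```

With the matrix in place, no document can be signed off until every Responsible and Accountable role has reviewed it.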
There is a useful little method for reviewing the business requirements document. In effect it’s part of the static test process, often run by the test team, and it’s called the eight point check. Are the requirements:
Complete – the requirement is self-contained and not part of another requirement
Measurable – the requirement is clear and quantifiable; words like “roughly” or “approximately” would not satisfy this criterion
Unambiguous – simple, easy to understand and cannot be confused in meaning
Developable – the developers can code the requirement
Testable – the tester can write a test to test the requirement
Achievement Driven – there is a tangible benefit relating to the requirement
Business Owned – there is a business owner for the requirement
No tool is perfect, and you have to accept that some requirements cannot fulfil all of the criteria. However, if the percentage of satisfied requirements is more than 85%, you have a good set of requirements. Between 50% and 85%, you have requirements which need work. Anything below 50% would indicate a very poor set of requirements.
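These thresholds are easy to apply mechanically. A minimal sketch, using the bands given above (more than 85% good, 50–85% needing work, below 50% poor):

```python
# Classify a requirements set by the percentage that pass the check,
# using the thresholds from the article.

def classify(passed, total):
    """Return a verdict for `passed` requirements out of `total`."""
    pct = 100 * passed / total
    if pct > 85:
        return "good"
    if pct >= 50:
        return "needs work"
    return "poor"

print(classify(90, 100))   # a good set of requirements
print(classify(60, 100))   # requirements which need work
print(classify(40, 100))   # a very poor set of requirements
```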
Next, of course, once your requirements have been baselined and passed the static check, you have to ensure that the translation into functional and non-functional requirements is correct. All documents which manage the translation of requirements should be carefully reviewed by the business, development and test teams. Assuming this has been carried out correctly, you will have given yourself a much more solid basis on which to carry out your software development.
Metrics
A good test process should implement a framework able to provide analysis and measurement of product quality, resources and process (efficiency and effectiveness). This information is used both dynamically and historically. The measurement framework can also provide information on resource usage and test progress, planned versus actual, as well as informing on the effectiveness and efficiency of the (test) process.
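A planned-versus-actual progress measure can be as simple as the sketch below; the specific metrics chosen here (execution and pass percentages) are illustrative assumptions, but they are the kind of figures an exit meeting would compare against the risk assessment.

```python
# Illustrative planned-versus-actual test progress metrics.
# The metric names are example choices, not a prescribed framework.

def progress(planned, executed, passed):
    """Summarise test progress for a phase as percentages."""
    return {
        # How much of the planned test effort has actually been run
        "execution_pct": round(100 * executed / planned, 1),
        # Of the tests run, how many passed
        "pass_pct": round(100 * passed / executed, 1) if executed else 0.0,
    }

print(progress(planned=200, executed=150, passed=135))
```

Tracked over time, the same two figures also show whether the test process itself is getting better or worse.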
In the context of this article, product risks are identified and mitigating activities (invariably including prioritisation of test activities) planned. When the planned test activities are executed, the results are measured and evaluated against the risk: was the risk real, or has testing demonstrated the absence of risk?
Consider an example: the risk is that the development will result in an abnormally high level of defects because the code is being built using new technology in which the development team are not highly experienced.
One of the mitigating
actions could be to undertake structured reviews of the code and to ensure
100% statement coverage using structured test design techniques.
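The statement coverage idea can be stated concretely. In practice a coverage tool would gather this data automatically; the sketch below only illustrates the underlying calculation, and the line numbers are made-up example data.

```python
# Sketch of the statement-coverage calculation: the percentage of
# executable statements that the tests actually exercised.
# Real projects would use a coverage tool rather than hand-built sets.

def statement_coverage(executable_lines, executed_lines):
    """Return coverage as a percentage of executable lines exercised."""
    covered = executable_lines & executed_lines
    return 100 * len(covered) / len(executable_lines)

# Example: five executable statements, four of which the tests reached
executable = {1, 2, 3, 5, 8}
executed = {1, 2, 3, 5}
print(statement_coverage(executable, executed))
```

The mitigation target in the example above would be to drive this figure to 100% using structured test design techniques.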
At the end of the structured reviews phase, the measurement analysis will show whether or not high numbers of defects were found.
If high levels were found and removed, then on reassessment the risk might be judged to have been mitigated, as the product quality has been improved at this early stage of the delivery lifecycle.
However, if low levels of defects were found, then on reassessment it may be decided that the risk was not as high as expected; or it may be that testing has identified that there are more defects to be found, which may lead to significantly more tests being run.
Mike Doel, Senior Consultant