02 Feb 2023 Accelerating DevOps delivery velocity by testing smarter with a Quality Engineering approach
At Shine, we get to work as an integral part of software development teams across many organisations. From time to time we're introduced to new teams who assume that improving the quality of their software will require more testing, potentially at the expense of delivery time. That's an outdated attitude, anchored in traditional quality assurance (QA) practices where testing is typically introduced in the later stages of development.
In recent years, high-performance software development teams have adopted a Quality Engineering (QE) approach that incorporates quality processes right from the beginning, and throughout the software development life cycle (SDLC), rather than the narrower QA approach of testing to detect and remediate defects.
The objective of QE is to ensure the built-in quality of both functional features and structural integrity from the beginning of development, minimising the time and cost of remediating defects or performance shortcomings found in production.
With QE, effort invested in quality processes earlier in the cycle prevents defects and reduces later remediation effort. By building quality into software development processes, QE takes a more proactive approach than QA, focusing on defect prevention instead of defect detection. As a result, delivery velocity doesn't need to be compromised for quality, and quality doesn't need to be compromised to maintain or improve delivery velocity.
QE actually begins before development
A thorough QE approach builds quality into processes that begin before the software development life cycle, identifying testable requirements from the initial Concept and Design phases of the product development life cycle (PDLC), of which the SDLC is a subset.
The PDLC is the domain of product managers, who are oriented to the complete delivery and implementation of a valuable product that fulfils user needs. The SDLC is the domain of software developers, whose goal is to develop, as efficiently as possible, a software product that is fit for purpose.
As this post is directed to software developers, I will focus on exploring best practices for applying a QE approach to the SDLC, rather than exploring the wider approach of QE to the PDLC. If you’d like to clarify how the SDLC is a subset of the PDLC, refer to this article.
QE addresses two aspects of ensuring high-quality software:
Functional quality addresses how well software performs its expected function, and how well it conforms to design specifications.
Structural quality addresses the non-functional aspects of software: how robust, secure, available, scalable and maintainable it is.
QE assures software is fit for purpose.
QE isn’t another layer of workflows – it integrates with agile and DevOps
Quality engineering closely integrates with existing agile and DevOps processes, rather than being added as a layer on top of existing workflows. It aligns with the shift-left testing principles of DevOps, with the intention to identify issues earlier in development.
Testing smarter, not harder
With QE incorporating quality processes into development, the fundamental approach is to optimise the type and volume of testing undertaken, rather than simply testing more: testing smarter, not harder. This is achieved by adopting a risk-based approach to define and optimise testing strategies and effort. An appropriate balance between quality and velocity can then be struck by optimising the use of automated and parallel testing.
Four key QE success factors
In our QE practice at Shine, we've identified four key factors for QE success:
1. Align testing strategies to business objectives and risk
A more strategic approach to quality and testing begins with developing testing strategies at the product level, in full alignment with the organisation's overarching quality management policies and standards, and with the individual project's business objectives and quality management plans. A risk-based approach to defining testing strategies involves product management, architects and developers in a collaborative process to assess what, how, and how much to test, applying risk management principles to the business and client objectives identified in the PDLC.
2. A one-team approach
Quality needs to be the responsibility of everyone in the development team. It’s a shared responsibility where quality engineers, developers, business analysts and business users work together in close proximity at every stage of the development process to ensure built-in quality. To enable this, an optimised level of testing should be undertaken by all members of the delivery team, and story points for testing should be included in all sprint planning.
3. Shift quality left and right
Shifting left refers to conducting testing earlier in the SDLC so that defects can be detected and remedied much earlier. Shifting right means extending testing to continually monitor and perform quality checks in Production, to uncover new and unexpected scenarios and provide feedback on improving features in development.
4. Optimise continual test effort
When deployments to production are automated, and CI/CD is enabled, testing needs to be as efficient and responsive as possible. Efficiency is improved by optimising how automation is used for different levels or classes of testing. Responsiveness is improved by designing clear and concise user stories and implementing continuous and parallel testing practices. Optimising the test effort is achieved through the risk-based approach that aims to avoid excessive or wasted test effort.
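As a minimal, framework-agnostic sketch of the parallel-testing idea (real projects would typically rely on a test-runner plugin such as pytest-xdist rather than hand-rolled concurrency), independent checks with no shared state can be fanned out concurrently; the check functions here are hypothetical placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical independent checks; in practice these would be real test cases
# with no dependencies on each other's state.
def check_login_form():
    return ("login", True)

def check_search_api():
    return ("search", True)

def check_checkout_flow():
    return ("checkout", True)

def run_in_parallel(checks, workers=4):
    """Run independent checks concurrently and collect name -> pass/fail results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(lambda check: check(), checks))

results = run_in_parallel([check_login_form, check_search_api, check_checkout_flow])
```

The design point is that parallelism only pays off when tests are isolated, which is itself a benefit of the clear, concise user stories mentioned above.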
To prevent this post from becoming inordinately lengthy, rather than further detailing all four success factors, I will elaborate on some of the elements of success factors 1 & 3.
Aligning testing strategies to risk management
A risk-based approach to testing makes decisions about test scenarios, test condition selection, test effort allocation and test execution prioritisation according to degrees of risk. Every business and product has different testing requirements and risk appetite. A medical device manufacturer may be willing to sacrifice time-to-market for increased assurance there will be no defects, whereas a MarTech startup may be willing to ‘move fast and break things’.
The objective is not to eliminate risk, but to reduce it to an acceptable level with an appropriate, not excessive, volume of testing.
The key activities in the risk decision process are risk identification, evaluation, prioritisation, mitigation and management. Identification, evaluation and prioritisation must consider both the likelihood and the impact of each risk should it eventuate. This assessment should be made in collaboration with cross-functional stakeholders, to strike an appropriate balance between eliminating or mitigating most, but not all, risk, and the amount of testing effort invested in doing so. This is particularly necessary for the most complex and highest-risk areas of System Integration Testing (SIT), End-to-End (E2E) Testing and User Acceptance Testing (UAT).
Using this risk-based approach drives the testing strategies decisions about what is tested, how it’s tested and how much it’s tested. Both manual and automated testing can be expensive, so it’s important to determine an optimal level of coverage at the right level of testing, according to the risk analysis.
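One way to make the likelihood-and-impact analysis concrete is to score each product area and direct test effort to the highest scores first. The sketch below is illustrative only: the feature names, the 1-5 scales and the simple multiplicative model are assumptions, and real risk registers are richer than this:

```python
# Illustrative risk register: likelihood and impact on a 1-5 scale (assumed).
features = {
    "payment processing": {"likelihood": 4, "impact": 5},
    "user profile page":  {"likelihood": 2, "impact": 2},
    "order history":      {"likelihood": 3, "impact": 3},
}

def risk_score(entry):
    # Simple exposure model: risk = likelihood x impact.
    return entry["likelihood"] * entry["impact"]

# Prioritise test effort toward the riskiest areas first.
prioritised = sorted(features, key=lambda f: risk_score(features[f]), reverse=True)
```

Even a crude model like this gives cross-functional stakeholders a shared, explicit basis for deciding where testing effort is and isn't warranted.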
The Test Pyramid illustrates testing strategy and plan variation
How this risk-based approach guides decisions about what is tested, how it's tested and how much it's tested can be mapped onto the Test Pyramid, a model of the complexity, granularity, volume, speed and automation appropriate at different levels of testing.
The Test Pyramid illustrates how testing should vary with the level of integration and complexity: the number of tests created, and the volume executed, decreases towards the higher levels of the pyramid, where each test covers a broader, more integrated scope.
Traditionally, most organisations have focussed automated testing efforts on System Integration Testing (SIT), End-to-End (E2E) Testing or User Interface (UI) Testing, all of which are more complex, slower and more expensive to execute and maintain than Unit Testing. The Test Pyramid suggests there should be fewer automated UI and E2E tests, while Unit Testing should be automated as much as possible with the greatest volume of coverage: unit tests are faster and easier to execute, and the less error leakage from the unit level, the fewer problems surface elsewhere.
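The base of the pyramid looks something like this minimal sketch, using a hypothetical `apply_discount` function and plain assertions in the style of pytest:

```python
def apply_discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests: isolated, fast, and cheap to write, so they can exist in volume.
def test_basic_discount():
    assert apply_discount(100.0, 10) == 90.0

def test_zero_discount():
    assert apply_discount(50.0, 0) == 50.0
```

Because tests like these run in milliseconds with no environment dependencies, thousands of them can execute on every commit, which is exactly the volume the pyramid's base calls for.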
Shift quality left and right
Shifting quality to the left refers to conducting testing earlier in the agile SDLC, as well as verifying and validating user requirements and solution designs. This starts with early engagement and collaboration with business, user and executive management stakeholders, to align testing and product quality goals with organisational and product goals. Testing begins in parallel with design.
Shifting right means performing quality checks and continually monitoring Production, to uncover new and unexpected scenarios and provide feedback for improving features in development. It sees the horizon of testing extended to receive feedback after release, from end-users and the performance of live systems.
Shine’s best practices include:
- Shift left:
  - Validate both functional and non-functional requirements.
  - Review solution designs and architecture.
  - Automate unit and integration tests with high levels of coverage.
  - Review pull requests for new features and automated tests.
  - Adopt Behaviour-Driven Development (BDD) and Test-Driven Development (TDD).
  - Use static code analysis to evaluate source code against predefined standards.
  - Peer-review code to evaluate efficiency and logic, and to improve collaboration and learning between developers.
  - Automate unit testing to validate code units (components, methods and classes) against inputs.
- Shift right:
  - Constantly monitor Production to identify any functional or non-functional issues, and work towards resolving them.
  - Enable quality checks in Production post-release to ensure new features are indeed working as expected.
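As a sketch of the shift-right practices above, a post-release quality check might evaluate a monitoring payload against expected thresholds. The payload shape, field names and threshold values here are assumptions for illustration, not a real monitoring API:

```python
def evaluate_health(payload, max_error_rate=0.01, max_p95_latency_ms=500):
    """Flag production issues from a monitoring payload (illustrative shape).

    Missing fields are treated as failures, so an incomplete payload is flagged
    rather than silently passing.
    """
    problems = []
    if payload.get("error_rate", 1.0) > max_error_rate:
        problems.append("error rate above threshold")
    if payload.get("p95_latency_ms", float("inf")) > max_p95_latency_ms:
        problems.append("p95 latency above threshold")
    return problems

# Example payloads: a healthy service yields no problems, a degraded one does.
healthy = {"error_rate": 0.002, "p95_latency_ms": 320}
degraded = {"error_rate": 0.05, "p95_latency_ms": 900}
```

A check like this, run continually against live telemetry, is what turns production monitoring into feedback that can shape the next round of development.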
QE in action
Together, these four critical success factors drive Shine's approach to quality engineering in the client teams our consulting engineers augment and lead. Successfully implementing a holistic approach to quality engineering demands much more than is outlined in this introduction, however. If you'd like to explore these four key success factors, and beyond, in more detail, feel free to contact me at prashant.mohapatra (at) shinesolutions.com