Automation and Feedback: Closing the DevOps Loop
April 24, 2025

In Chapter 5: Branching and Continuous Deployment, we explored how short-lived branches and a centralized continuous deployment (CD) platform helped us accelerate delivery and reduce integration friction. These changes dramatically improved the flow of code to production, but to truly close the DevOps feedback loop, we needed more.

Chapter 6, the final segment in our DevOps for the Win series, dives into the last, and vital, pillar of our transformation: Automation and Feedback. If continuous deployment helped us release faster, then automation and feedback helped us learn faster, enabling us to continuously improve with confidence.

Why Automation and Feedback Matter

In any DevOps transformation, speed without safety is a recipe for disaster. As our deployment frequency increased, the need for automated checks and real-time feedback became critical, not just for quality assurance but for team empowerment and decision-making.

Automation and feedback systems give teams:

  • Early detection of issues before they reach production
  • Confidence in changes, through repeatable, reliable validation
  • Insightful metrics to understand system health and user impact

It’s the difference between flying blind and flying with a dashboard.

After making improvements to development velocity, our next focus was on identifying bottlenecks in the feedback loop once the development team marked work as ‘done’. Here, the main measure of flow was how quickly completed work could be deployed to production.

Lead Time for Change and Deployment Frequency are key metrics in measuring how efficiently work moves through the system. In our case, work was piling up at the system QA (SQA) stage, creating a bottleneck in deployment readiness.
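To make these metrics concrete, here is a minimal sketch, in Python with purely hypothetical data, of how Lead Time for Change and Deployment Frequency can be derived from commit and deployment timestamps exported from a CD platform:

# Hypothetical data: (commit_time, production_deploy_time) for each change
# that reached production in a 4-week observation window.
from datetime import datetime
from statistics import median

changes = [
    (datetime(2025, 3, 3, 9, 0), datetime(2025, 3, 10, 16, 0)),
    (datetime(2025, 3, 5, 11, 0), datetime(2025, 3, 18, 10, 0)),
    (datetime(2025, 3, 12, 14, 0), datetime(2025, 3, 20, 9, 0)),
]

# Lead Time for Change: how long a committed change takes to reach production.
lead_times = [deploy - commit for commit, deploy in changes]
print("Median lead time for change:", median(lead_times))

# Deployment Frequency: how often changes reach production in the window.
print("Deployments per week:", len(changes) / 4)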

The System QA Bottleneck

In most domains, including ours, regulatory requirements mandate a series of validation activities to ensure software quality. This includes:

  • Integration Testing
  • System-Level Testing
  • Non-Functional Testing (e.g., network verification, performance testing)
  • Validation Artifacts → System Testing Reports, IQ (Installation Qualification), OQ (Operational Qualification), and PQ (Performance Qualification)

These tasks must be performed in controlled environments, separate from development, to ensure compliance. Because of this, the system QA team operates independently, and testing can only begin after the development team completes their work.

Our dashboards showed that the time taken to move completed work to production-ready status was increasing, with more work waiting for SQA to begin testing. Following our fundamental approach to improving flow, this was an issue we had to address before any further DevOps optimizations could show real value.

Key Challenges in Flow from Dev to SQA

There were two major challenges in improving the flow of work from development to system QA:

  1. The delay in starting SQA tasks
  2. The speed at which system validation could be executed

If SQA testing is primarily manual, it can only start once code is fully developed and deployed in controlled environments. This determines both when testing can begin and how long it will take to complete. The best that SQA can do in this model is to prepare test scripts in advance, waiting for the development team’s Definition of Done (DoD) to be reached before executing tests.

Automating SQA: A Shift in Testing Strategy

The only way to improve this flow is to invest heavily in automation. However, this is not just about automating test execution; it requires a fundamental shift in how testing is approached. This includes:

  • Redefining and realigning the goals of system QA.
  • Reevaluating how validation is performed to meet both speed and regulatory requirements.
  • Reimagining the tools and frameworks needed to support automation effectively.

One of the key shifts in mindset was moving from testing to confirm that the developed software is working → to testing to guide development.

This meant getting test scripts created, automated, and running in an environment where development code is regularly pushed, even before the feature DoD is reached. Here, test failures do not indicate a defect but rather provide early insight into which parts of the functionality are not yet implemented or fully functional.
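As an illustration of this mindset, the sketch below assumes a pytest-based system test suite running against the shared development environment; the URL, endpoint, and feature-status map are illustrative, not our actual framework. Tests for functionality that has not yet reached its DoD are marked as expected failures, so a red result reads as "not implemented yet" rather than as a defect.

# Minimal sketch (illustrative names, not our real suite): system tests run
# continuously against code that is still in development. Features that have
# not reached their Definition of Done are marked xfail.
import pytest
import requests

BASE_URL = "https://dev-env.example.internal"  # assumed shared dev environment

# Hypothetical view of which features the development team considers done.
FEATURE_STATUS = {"order_export": True, "audit_trail": False}

def not_yet_done(feature: str):
    return pytest.mark.xfail(
        not FEATURE_STATUS.get(feature, False),
        reason=f"'{feature}' has not reached its Definition of Done yet",
        strict=False,
    )

@not_yet_done("audit_trail")
def test_audit_trail_records_user_actions():
    response = requests.get(f"{BASE_URL}/audit-trail", params={"user": "qa-bot"}, timeout=30)
    assert response.status_code == 200
    assert response.json()["entries"], "expected at least one audit entry"

Once a feature is declared done, flipping its status turns the same test from an early-warning signal into a hard quality gate, without rewriting anything.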

With continuous delivery pipelines and a refined branching strategy in place, where feature branches are merged frequently into the main branch and deployed to the cloud, we set up a dedicated environment where these automated system tests could run continuously.
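The wiring for that environment can be kept simple. The following conftest.py sketch (an assumed setup, not our exact pipeline) shows how the suite targets whichever environment the CD pipeline has just deployed, via an environment variable:

# conftest.py (assumed setup): the CD pipeline deploys the merged main branch
# to the dedicated cloud environment, exports its URL as SYSTEM_TEST_BASE_URL,
# and then runs this suite against it.
import os
import pytest
import requests

@pytest.fixture(scope="session")
def api_client():
    """HTTP session pointed at the continuously deployed test environment."""
    base_url = os.environ.get("SYSTEM_TEST_BASE_URL", "https://systest.example.internal")
    session = requests.Session()
    session.headers.update({"Accept": "application/json"})

    class Client:
        def get(self, path, **params):
            return session.get(base_url + path, params=params, timeout=30)

    try:
        yield Client()
    finally:
        session.close()

System tests then request api_client as a fixture, and a post-deployment pipeline step simply exports the variable and invokes pytest, turning every merge to main into a fresh feedback cycle.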

Faster Feedback with Automated Testing

This approach allowed system QA to start validation earlier, making testing faster and providing quicker feedback to the development team. Instead of waiting until after a feature is marked as done, testing now runs in parallel with development, giving teams real-time insights into what is working and what still needs to be completed.

At the same time, automating test cases required rethinking our testing strategy:

  • Moving away from UI-based automation → API testing became the priority.
  • Shifting to Behavior-Driven Development (BDD) → Writing test scripts alongside user stories so they become part of the acceptance criteria.

While full adoption of BDD from the start of user story creation is still a work in progress, we have improved collaboration between SQA and development teams, aligning them early in the process.
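To give a flavour of what this looks like in practice, here is a minimal BDD-style API test sketch, assuming pytest-bdd; the feature text, endpoints, and URL are illustrative only:

# Feature file (features/sample_export.feature), written alongside the user story:
#   Feature: Sample export
#     Scenario: Export completed samples as CSV
#       Given a completed sample exists
#       When the export endpoint is called
#       Then a CSV file is returned
import requests
from pytest_bdd import scenarios, given, when, then

scenarios("features/sample_export.feature")

BASE_URL = "https://systest.example.internal"  # assumed test environment

@given("a completed sample exists", target_fixture="sample_id")
def sample_id():
    response = requests.post(f"{BASE_URL}/samples", json={"status": "completed"}, timeout=30)
    return response.json()["id"]

@when("the export endpoint is called", target_fixture="export_response")
def export_response(sample_id):
    return requests.get(f"{BASE_URL}/samples/{sample_id}/export", timeout=30)

@then("a CSV file is returned")
def csv_is_returned(export_response):
    assert export_response.status_code == 200
    assert export_response.headers["Content-Type"].startswith("text/csv")

Because the scenario text mirrors the acceptance criteria, the same wording serves the product owner, the developer, and the automated system test.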

Aligning SQA with Development: The Team Topologies Influence

To achieve this, we drew on Team Topologies and set up the SQA team as an enabling team. Instead of being a separate, downstream testing function, SQA team members were aligned closely with development to ensure that automated test case creation starts early and runs continuously as code is developed.

A key prerequisite for this approach is having a cloud-based test environment to run system-level tests before moving to controlled environments. While this does introduce additional infrastructure costs, it significantly increases the flow of work from development to SQA, making it deployment-ready much faster.

Conclusion

This improvement reinforces a core DevOps principle: approach transformation by considering Tools, Architecture, and People competencies while focusing on flow. By automating system QA, integrating testing earlier in the cycle, and aligning teams more effectively, we significantly reduced the bottleneck between development and system validation, accelerating feedback loops and making deployments smoother.

Wrapping Up the Series

As we close out our DevOps for the Win series, we reflect on the journey from tooling chaos and manual processes to a more connected, automated, and empowered organization. From defining our DevOps Triangle in Chapter 1 to establishing feedback-driven learning in Chapter 6, each chapter built upon the last to drive meaningful transformation.

Here’s a quick recap of our series:

This transformation is ongoing, but with the foundation we've built, we're better equipped than ever to deliver value faster, safer, and smarter.

Thank you for following along on this journey. We hope our story inspires your own DevOps evolution.

Stay curious. Stay iterative. And most importantly, stay connected.
