Many seasoned agile project managers include a stabilisation or regression/release iteration before planning a release of their project. This has been a common norm for the last ten years. A typical regression/stabilisation phase may last a couple of iterations for every 4-6 development iterations of a release.
The reasons for regression/stabilisation sprints are to:
* Allow quality analysts to regress the system by performing end-to-end testing of the application rather than focusing on specific features or stories.
* Allow quality analysts to test the system in an actual production-like environment, with all the third-party interfaces the system interacts with.
* Allow developers to fix any regression defects arising from this kind of testing, producing a stable, production-quality build that one can go live with.
The problem with this approach is that it is highly unpredictable, for the following reasons:
* There is often no reliable way to predict the length of these sprints upfront. They frequently end up as a time-boxed effort slotted into the planning diary for fixing defects arising from regression. Managers then spend huge amounts of time generating defect reports, classifying them by severity and priority, and triaging endlessly with customers to keep the stabilisation sprints under control.
* The cost of fixing end-to-end regression defects so late in the development cycle is very high if your development iterations focused only on fixing story/feature defects. This again makes the length of a regression sprint unpredictable.
* You discover issues with production environments and your deployment scripts only when you are about to deploy to a production-like environment.
The way to end these unpredictable regression sprints is to regress the application continuously, by deploying continuously to a production-like environment. If you have invested in a good automated test suite and have maintained it over time, this is a good first step. The next step is to automate your deployment infrastructure across environments such as Dev, UAT and Production, ensuring that your configurations across these environments are well tested. Jez Humble's book Continuous Delivery has an exhaustive account of how to achieve this in projects.
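The idea of promoting the same build through Dev, UAT and Production, with automated checks gating each step, can be sketched as follows. The `deploy` and `smoke_test` hooks here are hypothetical placeholders, not part of any specific tool:

```python
# A minimal sketch of a staged deployment pipeline. The deploy() and
# smoke_test() functions are illustrative stand-ins for your real
# deployment scripts and automated regression checks.

ENVIRONMENTS = ["Dev", "UAT", "Production"]

def deploy(build, environment):
    # Placeholder: a real pipeline would push the build artifact to the
    # environment via your CI server or deployment scripts.
    return {"build": build, "environment": environment, "deployed": True}

def smoke_test(deployment):
    # Placeholder gate: run a quick end-to-end check in that environment.
    return deployment["deployed"]

def promote(build):
    """Push the same build through every environment in order, stopping
    at the first failed smoke test so defects surface long before a
    release-time regression sprint."""
    reached = []
    for env in ENVIRONMENTS:
        deployment = deploy(build, env)
        if not smoke_test(deployment):
            return reached, env  # pipeline halts; env names the failure point
        reached.append(env)
    return reached, None

reached, failed_at = promote("build-42")
```

Because every build walks the same path, a regression defect shows up within hours of the commit that caused it, rather than weeks later in a stabilisation sprint.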
It takes a lot of investment in time and effort to reach such a stage of continuous automation and deployment into production. However, you can start by taking smaller steps towards it:
1. Try moving the definition of done as far towards the right side of the card wall as possible. If you count a story as done when it is QA tested, try moving that to Showcased or Automated, or even Deployed. Count velocity only when a story reaches that state.
2. Identify every manual step on the path to deploying a story and automate them one by one. You can even plan this during your iteration planning meeting by committing to automate at least one deployment step per iteration. (Think Kaizen.)
3. Ask developers to add at least one acceptance test, in addition to the unit tests that drive a story, before they call the story development complete. Ask QAs to do something similar.
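Counting velocity only for stories that have reached the chosen done state (step 1) is simple to mechanise. The story data and column names below are illustrative:

```python
# A sketch of counting velocity only for stories at or past the
# definition of done. States mirror the card-wall columns in the text;
# the story records are made up for illustration.

STATES = ["In Dev", "QA Tested", "Showcased", "Automated", "Deployed"]
DONE_STATE = "Deployed"  # the column you have moved "done" to

stories = [
    {"id": "S-1", "points": 3, "state": "Deployed"},
    {"id": "S-2", "points": 5, "state": "QA Tested"},  # not counted yet
    {"id": "S-3", "points": 2, "state": "Deployed"},
]

def velocity(stories, done_state=DONE_STATE):
    """Sum points only for stories that have reached the done state."""
    threshold = STATES.index(done_state)
    return sum(s["points"] for s in stories
               if STATES.index(s["state"]) >= threshold)
```

With this rule, a story that is merely QA tested contributes nothing to velocity, which keeps the team honest about how much work is truly releasable.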
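A typical first candidate for step 2 is replacing a hand-edited, per-environment config file with a rendered template. The file keys and environment values here are hypothetical:

```python
# A sketch of automating one manual deployment step, Kaizen-style:
# rendering an environment-specific config from a ${KEY} template using
# the standard library. Keys and values are illustrative.

from string import Template

def render_config(template: str, values: dict) -> str:
    """Fill a ${KEY}-style template with one environment's values."""
    return Template(template).substitute(values)

template = "db_host=${DB_HOST}\ndb_port=${DB_PORT}\n"
uat_values = {"DB_HOST": "uat-db.internal", "DB_PORT": "5432"}
config = render_config(template, uat_values)
```

Once the template is the single source of truth, the "edit the UAT config by hand" step disappears from the deployment checklist, and a missing key fails loudly at render time instead of silently in production.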
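The distinction in step 3 between a unit test and an acceptance test can be shown with a small sketch. The checkout functions are hypothetical stand-ins for a real application:

```python
# A sketch contrasting a unit test with an acceptance test for the same
# story ("apply a discount at checkout"). The application code is a
# made-up stand-in, not from the text.

def apply_discount(total, percent):
    return round(total * (1 - percent / 100), 2)

def checkout(cart, discount_percent=0):
    """The end-to-end path a customer exercises: total the cart items,
    then apply any discount."""
    total = sum(price for _, price in cart)
    return apply_discount(total, discount_percent)

def test_apply_discount():
    # Unit test: one function in isolation.
    assert apply_discount(100.0, 10) == 90.0

def test_checkout_applies_discount():
    # Acceptance test: the whole user-visible flow for the story.
    cart = [("book", 40.0), ("pen", 10.0)]
    assert checkout(cart, discount_percent=10) == 45.0

test_apply_discount()
test_checkout_applies_discount()
```

The acceptance test is what catches the regression a stabilisation sprint would otherwise find: it exercises the path a user actually takes, not just one function's contract.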
As a project manager you should be able to answer reliably (with minimal ifs and buts) the question: "How long will it take us to make a simple code fix and take it all the way to production?" Getting unpredictable regression sprints out of the way as soon as possible is a big step towards being able to answer it.