Wednesday, December 14, 2011

How much does IT matter to clients (or businesses)?

As a consultant I have the privilege of working with a variety of clients. I try to help them build good quality software by applying Lean and Agile methods. I ask teams to embrace changing requirements, deliver frequently to a production-like environment, solicit feedback from end users, and feed that feedback into the next build. I always believed this to be a recipe to keep every client happy, until a client came and told me:

“I don’t think I need so much automated testing. I am also not sure if I need all the flexibility of having a production ready system with me all the time to deploy at one click.”

“I also think this is an overhead and costing us some unnecessary extra time.”

My first reaction to this was

“Maybe he does not really understand the essence of having a Continuous Delivery cadence yet!”

But then I had a long afterthought:

“Does my client really need the level of automation my team is putting in?”

“Does the organization really need the level of flexibility and value add which my team is providing?”

“Am I providing the right level of service which my client organization needs?”

To get answers to these questions, I tried to understand more about how the client organization views IT. Do they view it as a core competency, a differentiator in the market, or just another utility overhead?

IT - Utility, Enabler or Strategic

Martin Fowler talked earlier about the Utility vs. Strategic dichotomy. He quotes a prospect drawing an analogy between IT and sewage pipes: “software is like sewage pipes, I want it to work reliably and I don't want to know about the details”. This is thinking of IT as a pure utility. It is like electricity in your house: you pay the bills and you just want it to work, so that you can do your work under a working AC and a light bulb. You won’t care any more about it than that.

My view of IT has 3 main categories

Utility – A lot of IT applications that merely provide a utility such as payroll or timesheets fall into this category. An organization will just want them to work within the business and will only care about cutting costs around these systems. (Just like you would look at cutting your electricity bills). There is no encouragement or even necessity to innovate. They deal with well-known problems where a package solution can fit in well.

Enabler – IT is an enabler for businesses that use software solutions to manage core internal business needs. For example a marriage registry will use custom software to maintain its records so that it is in good searchable format. The solution is also customized to the business needs (e.g. a marriage registry might have different legal requirements for allowed marriages in a jurisdiction). Organizations care a bit more when IT is an enabler than when it is a mere utility.

Strategic – This is where IT is viewed as the organisation's core competency. It is so blended into the business that it can provide a strategic advantage over competitors. The products in the market are IT products and services such as an online bookstore or an online ticket retail site. The organization needs to care about their online user experience and also be quick in bringing new features to market. They will also need to test and adapt their features based on market validation.

Why do businesses differ in how they measure the success of IT?

When IT is viewed as a utility or enabler, the business will view success as the project delivering on time and under budget. Businesses will largely look at cutting costs on such projects. How adaptive the delivery process is, and how quickly a feature can be taken to market, is of no concern to the business; they have no competitors in that space. (E.g. no one is going to compete with the state registry, which needs the software, on writing a marriage registry system.)

In cases where an existing legacy system is being replaced by a new one, delivery on schedule may also become a major success measure. The frustration and overhead of working with the legacy system can make the business desperate to replace it quickly.

In a strategic, IT-centric business, however, success is measured differently. Time to market, adaptation and the ability to quickly reposition products become critical to the business. This is when the product is more important than the project, and this is where Agile, Continuous Delivery and techniques from the Lean Startup add value.

But for utility and enabler projects, it does not matter much. Quoting Martin Fowler from his post:

“Most agilists tend to come from a strategic mindset, and the flexibility and rapid time-to-market that characterizes agile is crucial for strategic projects. For utility projects, however, the advantages of agile don't matter that much. I'm not sure whether using an agile approach for a utility project would be the wrong choice, but I am sure that it doesn't matter that much.”

Choosing the right IT governance model

Mary Poppendieck identifies 3 main IT archetypes in her Leadership workshop. Borrowing from her, here is a suggested IT governance model that can match the client based on their view of IT.

Utility (for Utility) - Provide cost effective utility like reliability (as described earlier)

Governance model: Minimize cost/ Minimize maintenance overhead

Supplier (for Enabler) - Deliver business internal applications on time and on budget

Governance model: Manage variance to plan on the project

Partner (for Strategic) - Create differentiating competitive solutions

Governance model: Software can create differentiated product and services. Measure success by measuring business results.

Conclusion

As an IT consultant it is important to understand the client or business’s view of IT. There are times when an investment in Continuous Delivery is not best suited to a client. As Jez Humble puts it in his post: “In fact, the most important criterion for using continuous delivery isn’t concerned with the technical nature of system you deliver or even the market you work in – it’s whether the system is strategically important to your organisation.”

Wednesday, September 28, 2011

Try not to be yet another project management overhead

Context

Most of our teams in Thoughtworks India work for clients outside India, which results in “all-Thoughtworks” teams that are quite high on the Agile maturity scale.

When I used to interview Project Managers in Thoughtworks India, I often struggled to find candidates who, after a preliminary conversation, sounded worth calling in for an interview.

Something I often end up asking is, “Can you tell me about a difficult or puzzling issue you had to handle on your last project?” and the answer to this would typically be

“I had to manage dependencies across multiple vendors which was really painful”

“I had people with performance issues on my team, and we could never deliver”

“Vendors did not deliver on time”

“Scope kept changing”

“Ran out of budget”

“No one was convinced about Agile in my organization”

While these were legitimate problems, no one would actually come up and say

“There were too many defects every iteration and I had a tough time figuring out the root cause”

“Our velocity was low and we did not know how to handle a particular new technology”

“We did not know how to write automated tests around a few legacy systems”

“Our build times were quite high and we had no good solution “

“We had to coach a really difficult product owner”

This made me conclude that even though Agile, XP and Scrum have been around for so many years, project managers still only try to manage the golden triangle of Scope, Time and Budget. And when you are a bit more senior in the organization, you are given the responsibility of managing vendors and christened a program manager. A few people at least had good people management skills, which was a sign of hope.

What should you know as a Project Manager to be useful?

The reality is that in a highly collaborative and self-organized environment like an agile team, a project manager is considered an overhead. And a project manager who does not take an interest in the product the team is building, the way the team is collaborating, and some of the technical challenges on the ground is downright useless to a team.

Solving some of the project issues in an agile team needs a deeper understanding of the engineering practices used to build the product, as well as of the domain and the business into which the product fits.

If you want to be an effective project manager, you should at least have a good handle on a few things in your project, such as:

* What is a good user story on your project, and are we slicing stories correctly?

* How good is the continuous integration on your project, and how can it be improved?

* What is your functional automation strategy, and how can you achieve the maximum benefit from it?

* How much technical debt is the project carrying, and what is its impact?

* Which areas of the codebase are difficult to test, and how can you increase coverage there?

* What does the production deployment infrastructure look like, and how can you best use that information within the project to create more similar environments?

* If a QA calls out a defect in the stand-up meeting, you should be in a position to question why no test caught that defect beforehand in continuous integration.

For the typical project manager who worries only about the golden triangle of Scope, Time and Cost at a high level, this might seem weird. Some might even want to delegate it to other people on the team.

In a mature agile team, the case for a project manager can be made only when he/she has a handle on some of the things mentioned above. Estimating using points and putting stories into a release plan based on velocity can be done by anyone on the team.

The crux lies in understanding the product the team is building, the tools and techniques they employ to build it, and the collaboration required to yield the best results!

Monday, September 26, 2011

Experience design and continuous delivery

While integrating experience design into the overall agile lifecycle is a challenge, teams working in a continuous delivery mode can prove to be a boon for experience designers.

Using the release cadence

When you are working on a longer release cadence, the life of an experience designer often looks like this:

A lot of discovery and research is done upfront in the first few iterations, with usability testing happening towards the end of the release, leaving very little time to factor in any feedback coming from that testing.

However, teams delivering continuous releases to production often end up with a better release cadence, one that gives the UXers on the team many opportunities to perform research, concept testing and usability testing around these shorter releases.

This allows teams to plan fixes for feedback from previous releases into future releases, resulting in a better product built iteratively. Experience designers on the team should capitalize on these windows of opportunity to do regular usability testing, or even corridor testing, whenever possible. Even if the minor releases do not actually go live, a lot of feedback can be gathered by testing on staging environments at regular intervals.

Building some good analytics into the application will also provide valuable insights into usage patterns, which again can be used as feedback on the features delivered.



Reducing time to production to quickly deploy UI fixes

Another important thing experience designers should push for, and be cognizant of, is how long it takes to promote a checkin to production. In larger organizations there are often heavy processes involved in promoting a change, which can be quite frustrating for designers who want to push small UI fixes into production overnight.

It is important that the whole team works towards removing any potential blockers between a code checkin and deployment. The team should also constantly try to reduce the time it takes to make a deployment into production.

A lot of project teams end up taking days to deploy to production, which becomes a major release overhead cost. The sooner this time is reduced, the more people will encourage frequent changes to production, which in turn is a great opportunity for designers focusing on visual design to push in those minor tweaks to the UI.

Conclusion

I would like to keep this simple: fitting experience design into continuous delivery need not be viewed as a challenge. The many windows of opportunity are in fact a boon for UXers, and should encourage them to take more risks and iterate on their designs across releases, rather than being perfectionists before the first release.

All it needs is the iterative mindset!

Saturday, September 24, 2011

Integrating experience design in an Agile project lifecycle

Many organizations now employ experience designers to design products and services with a strong focus on the end user experience.

According to Wikipedia Experience Design is

An emerging discipline, experience design draws from many other disciplines including cognitive psychology and perceptual psychology, linguistics, cognitive science, architecture and environmental design, haptics, hazard analysis, product design, theatre, information design, information architecture, ethnography, brand strategy, interaction design, service design, storytelling, heuristics, technical communication and design thinking.

While it is still a relatively new and emerging discipline, it has its roots in psychology, design thinking and various forms of user research. Given this background, people playing such a role often face a dilemma about where they fit in the fast-paced lifecycle of an agile project.



If not resolved early, it is easy for user experience designers to start working in a silo with minimal interaction with the delivery team. This is counterproductive, as the UX people then have no idea of the project rhythm and cadence, and often end up taking huge amounts of time to deliver the relevant artifacts (wireframes, visual designs, content, etc.) for the project.

This can become very frustrating for both parties, and the best way to resolve it is to integrate the experience designers into the entire project delivery team. A Lean approach to UX leads to designers showing their work to the team at regular intervals, validating it and adapting, just like a regular agile lifecycle.


What works best during the course of a project is to have the UXers also be accountable for each user story, just like analysts, developers and testers. If you have more than one UXer on your team, ask them to sign up as a UX Owner for user stories. This means they are responsible for taking the user story or feature all the way to production from an experience design standpoint.


Accountability for a user story is quite important because it ties the work the UXers are doing into the daily cadence of the project team, which is entirely based on user stories. A UXer's typical standup update then revolves around those same stories.


An experience designer needs to work in a similar cadence to a business analyst who is trying to get the next set of stories analyzed for development.


A lot of small UI tweaks can be fixed very quickly if the UXers pair with developers and testers when the visual design is actually being integrated into the application.


So rather than working in a silo, a user experience designer has to be fully integrated with the development team during the entire project lifecycle, carrying the same responsibility as everyone else on the team to maintain the flow of stories through till they are released.



Wednesday, September 7, 2011

The case for reducing focus on estimation

Why is an estimate required ?

An estimate of a task, story or feature is really required to answer two questions:

a. How long is it going to take to build it ?

b. How much is it going to cost ?

Who really needs estimates ?

In a typical software project, there are 2 kinds of people who really ask for estimates.

1. Product Owners – They are the ones who care about what is being built in the product. When they look at an estimate, they are thinking

a. Can I really wait this long to get this feature out to the market ?

b. Is it worth building this feature given the cost ?

c. Can I come up with a cheaper option which can give me a better return on investment ?

2. Managers – They are the people who are responsible for managing the budget allocated to build a product, and also schedule dependent activities based on delivery timelines. When they look at an estimate, they are thinking

a. Will this feature be delivered within budget ?

b. Can I make commitments to other stakeholders, dependent teams, schedule other activities based on these estimates ?

Issues with estimation

1. Time taken to estimate

Teams spend a significant amount of time during the project coming up with accurate estimates so that project managers can plan and make commitments. This takes even longer when estimating in hours or days instead of lightweight metrics like points.

Some project teams have multiple reviews of these estimates, and even reduce the estimate to a complex formula worked out from over 5 parameters for every story. A lot of time is also spent deciding which estimation model to use from the variety of techniques available in the market.

2. Accuracy of estimates

The further upfront estimation is done for stories, the less accurate the estimates are. Estimating stories which will potentially be played 3 months into the project makes no sense, because by then a lot will have changed, from the codebase to the team composition, leaving the upfront estimates stale. Ultimately these stories will have to be re-estimated, which will again consume an equal amount of time, if not more.

3. Undue pressure on the team

Estimates done in hours or days especially end up putting undue pressure on the team members. Inexperienced managers push their teams to achieve the planned velocity by making them work longer hours, which ends in a team death march instead of a sustainable pace.

4. Time wasted in questioning estimates during the project

A lot of time is also spent questioning the estimates of individual stories when things are not going too well and the project is not on track. Stakeholders as well as managers start questioning the estimates of stories which have larger numbers against them. Some people start measuring actuals against estimates, which is counterproductive.

Reducing focus on estimation

1. Engage with the right stakeholders

More than the estimates themselves, it is important that the estimates are being given to the right stakeholders.

Engage with the people who 

a. Care directly about the product and the features getting built in it

b. Are directly responsible for the money being spent to build that product

It is often difficult to achieve this in large organizations that do in-house IT development with multiple vendors, such as banks and insurance companies. It is often easier to get to the right stakeholders in a product company or a start-up.

If you are reporting estimates and progress to someone who is just a middle manager, not worried about either of the above, then you are engaging with the wrong stakeholders and wasting your time.

2. Engage with stakeholders by showing working software than status reports

Once you have the right stakeholders, as discussed above, it is important that you drive the engagement by showing working software frequently, rather than status reports with estimates and burn-up charts. Show progress by showcasing new features built by the team to the product owner, rather than showing how many points were completed.

Release as often as possible. Achieve a monthly, weekly or even daily release cadence. Once the people who pay the bills see the releases coming out, they will worry less about velocity in iterations.

3. Be transparent about the processes and practices

Stakeholders start worrying a lot about estimates for each story when they are sceptical about the way the team works. Some start looking into the estimate of each story and questioning it, and the team spends hours justifying it. This often happens because stakeholders do not understand that the team works on a pull-based work schedule: when a story gets done, the team pulls in the next story to work on, and estimates largely become irrelevant during execution. (E.g. even if a story is given a questionable estimate of 4 points, if the team completes it within 2 days instead of 4, the team will simply pick up the next story from the card wall.)

When stakeholders understand this model of work distribution, they know that there is minimal waste of team capacity and stop worrying about estimates. Measuring actuals in such a scenario is also a compounding waste.

4. Use relative sizing of stories when there is a need for a rough timeline and cost

The question of “How long is it going to take and how much is it going to cost?” will always be asked of stories, features and product releases. Someone will have to sign off on a rough budget to build the product, and commitments will have to be made to people dependent on the product launch. This demands some sort of rough estimation and scheduling of work.

Given the nuances of detailed estimation, a better approach is to use relative sizing with story points, and either raw velocity or yesterday’s weather for planning.
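As a minimal sketch of what "yesterday's weather" planning amounts to (the backlog sizes and velocities below are invented for illustration):

```python
import math

# Rough release forecast from relatively sized stories, assuming
# "yesterday's weather": the team will complete about as many points
# per iteration as it averaged over its last few iterations.

def forecast_iterations(backlog_points, recent_velocities):
    """Estimate how many iterations the remaining backlog needs."""
    if not recent_velocities:
        raise ValueError("need at least one completed iteration's velocity")
    velocity = sum(recent_velocities) / len(recent_velocities)
    return math.ceil(sum(backlog_points) / velocity)

# Hypothetical backlog sized on a 1/2/3/5/8 point scale.
backlog = [3, 5, 2, 8, 5, 3, 1, 5]          # 32 points remaining
print(forecast_iterations(backlog, recent_velocities=[10, 12, 11]))  # → 3
```

The point of keeping it this crude is that anything more precise buys little: the answer is a rough timeline for stakeholders, not a commitment per story.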

5. Focus on reducing cycle time or business value delivered if a KPI is needed for progress

There might be situations where a metric is required to track progress as a team; if so, focus on reducing the cycle time of delivering stories to production. Another good indicator is the amount of business value delivered by the team in a period of time.
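Cycle time is cheap to compute from data most teams already have: the date a story was started and the date it reached production. A small sketch, with hypothetical story data:

```python
from datetime import date

# Cycle time: elapsed days from starting a story to it being live in
# production. Story dates below are invented for illustration.
stories = [
    {"id": "S-1", "started": date(2011, 9, 1), "in_production": date(2011, 9, 6)},
    {"id": "S-2", "started": date(2011, 9, 2), "in_production": date(2011, 9, 12)},
    {"id": "S-3", "started": date(2011, 9, 5), "in_production": date(2011, 9, 9)},
]

cycle_times = [(s["in_production"] - s["started"]).days for s in stories]
average = sum(cycle_times) / len(cycle_times)
print(cycle_times)  # [5, 10, 4]
print(round(average, 1))  # 6.3
```

A falling average over successive iterations is a far more honest progress signal than a velocity number, because it measures the whole path to production rather than estimation accuracy.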

Conclusion

The software industry has moved from an era where teams did upfront analysis and design for a quarter of a year before development began, to a much more iterative development model with quicker release cycles, since the wide adoption of the Agile manifesto. With the advent of cloud computing and continuous delivery, it has now become cost effective to deliver frequent releases to production every month, week or day. This has made long-term software estimation irrelevant. Estimates will still be required to give a budget and a date commitment to clients, but there is a strong case to focus less and less on estimation.

Sunday, September 4, 2011

The end of regression, stabilisation, hardening or release sprints

A lot of seasoned agile project managers have always included a stabilisation or regression/release iteration before planning a release of their project. This has been the norm for the last 10 years. A typical regression/stabilisation phase may last a couple of iterations for every 4-6 development iterations of a release.

 

The reasons for regression/stabilisation sprint(s) are to:

* Allow quality analysts to regress the system by performing end-to-end testing of the application, rather than focussing on specific features or stories.

* Allow quality analysts to test the system in an actual production-like environment, with all the 3rd party interfaces the system interacts with.

* Allow developers to fix any regression defects arising out of this kind of testing, and to get a stable, production-quality build one can go live with.

The problem with this approach is that it is highly unpredictable, for the following reasons:

* There is often no reliable way to predict the length of these sprints upfront. They often end up as a time-boxed effort slotted into the planning diary for fixing defects arising out of regression. Managers then spend huge amounts of time generating defect reports, classifying them by severity and priority, and triaging endlessly with customers to keep the stabilisation sprints under control.

* The cost of fixing end-to-end regression defects so late in the development cycle is very high if you were only fixing story/feature defects during your development iterations. This again makes the length of the regression sprint unpredictable.

* You only find issues with production environments and your deployment scripts when you are about to deploy to a production-like environment.

The way to end these unpredictable regression sprints is to regress the application continuously, by deploying continuously to a production-like environment. If you have invested in a good automated test suite and have been maintaining it, that is a good first step. The next step is to automate your deployment infrastructure across environments such as Dev, UAT and Production, and to ensure that your configurations across these environments are well tested. Jez Humble's book on Continuous Delivery has an exhaustive account of how to achieve this in projects.

It takes a lot of investment in time and effort to reach such a stage of continuous automation and deployment into production. However, you can start by taking smaller steps towards it:

1. Try moving the definition of done towards the right side of the card wall as much as possible. If you count a story as done when it is QA tested, try moving it to Showcased or Automated, or even Deployed. Count velocity only when a story reaches that state.

2. Start automating every manual step you see on the path to deploying a story, one by one. You can even plan this during your iteration planning meeting by focusing on automating at least one deployment step. (Think Kaizen.)

3. Ask developers to start adding at least one acceptance test, in addition to the unit tests which drive a story, before they call a story development complete. Ask QAs to do something similar.
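Point 3 can stay very lightweight. A minimal sketch using plain unittest, against a hypothetical registration feature (UserRegistry and its methods are illustrative names, not from any real codebase): the acceptance test exercises the story's end-to-end behaviour, not a single class.

```python
import unittest

# Hypothetical system under test, standing in for real story functionality.
class UserRegistry:
    def __init__(self):
        self._users = {}

    def register(self, email):
        if "@" not in email:
            raise ValueError("invalid email")
        self._users[email] = True

    def is_registered(self, email):
        return email in self._users

class RegistrationAcceptanceTest(unittest.TestCase):
    # Acceptance criterion 1: a registered user shows up as registered.
    def test_registered_user_can_be_found(self):
        registry = UserRegistry()
        registry.register("a@example.com")
        self.assertTrue(registry.is_registered("a@example.com"))

    # Acceptance criterion 2: an invalid email is rejected.
    def test_invalid_email_is_rejected(self):
        registry = UserRegistry()
        with self.assertRaises(ValueError):
            registry.register("not-an-email")
```

Run with `python -m unittest` in the commit build; one such test per story is enough to start building the regression safety net incrementally.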

As a project manager you should be able to answer reliably (with minimum ifs and buts) the question: "How long will it take us to make a simple code fix and take it all the way to production?" Getting the unpredictable regression sprints out of the way as soon as possible is a big step towards being able to answer it.

Tuesday, August 30, 2011

ROI on automated testing - a stepping stone for frequent releases



Every so often, I end up having to explain the benefits of automation, in terms of either tradeoffs or statistics, to people who still question the value of automated testing and the investment to be made in it on a project. This is an attempt to articulate the same for such an audience.


While most developers and QAs on a team might be convinced that automation is the backbone of continuous delivery, it might not be the same with the purely functional roles on the project, such as a Business Analyst or a Product Owner, when they look at stories and their cost and time of development individually.



Consider a BA or a Product Owner: they are mostly interested in the functional code written to ensure that the system can execute the user scenarios defined in the user story.




Once the story is delivered, it is often the developers and QAs on the team who have to worry about ensuring that the previously delivered user story still works, while also delivering new user stories.




For that purpose, developers and QAs put in a safety net by either driving or at the very least covering the functional code with unit and acceptance tests. The unit tests test a particular unit of code (e.g. a method in a class), and the acceptance tests test the end-to-end behaviour of the story, i.e. the story's acceptance criteria.




Even though this makes the developers and QAs happy, it is not of as much interest to people who look at just the purely functional side of the story. From a BA's or Product Owner's perspective, the time to actually complete and release a story has just gone up, because we are writing more tests before calling a story complete.



What they do not realize, however, is that they are looking at one particular story in isolation. The bigger picture shows that without automated tests:

The cost of a release increases exponentially with the number of user stories being delivered. This is mainly because the project will require more and more testers to ensure that stories are well tested manually. The number of defects will also increase without the safety net of tests.




Time to release increases exponentially, just like cost. At some point there is a cutoff where the project cannot take on more testers, and it takes the fixed number of testers longer and longer to release more stories into production.



The overall project cost will hence shoot up with the frequency of releases you make to production, because of the lack of automation.





Ultimately, the lack of automation manifests itself in the overall project cost, rendering the project team incapable of making frequent releases to production cheaply.
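A toy model makes the shape of this argument concrete (all numbers below are invented purely for illustration): with manual regression, every release re-tests everything delivered so far, so per-release effort grows with the story count; with automation, humans only do a roughly fixed slice of exploratory testing per release.

```python
# Toy cost model, not real project data: per-release regression effort
# with and without an automated test suite.

def manual_release_cost(stories_delivered, hours_per_story_retest=2):
    # Every release manually re-tests all previously delivered stories,
    # so effort grows linearly per release (quadratically cumulatively).
    return stories_delivered * hours_per_story_retest

def automated_release_cost(stories_delivered, fixed_exploratory_hours=8):
    # The suite re-runs itself; humans do a fixed amount of exploratory
    # testing regardless of how many stories have shipped.
    return fixed_exploratory_hours

releases = [10, 20, 40, 80]  # stories delivered so far at each release
print([manual_release_cost(n) for n in releases])     # [20, 40, 80, 160]
print([automated_release_cost(n) for n in releases])  # [8, 8, 8, 8]
```

The exact numbers do not matter; the divergence between the two curves is the point the BA or Product Owner misses when pricing one story in isolation.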

On the other hand, if your Definition of DONE for a user story includes unit and acceptance tests, and you think of the time spent on them as an investment, you will be building a safety net as you go with each story. The safety net gives developers feedback on whether new code breaks any existing functionality, and frees QAs to test more edge-case scenarios rather than repeatedly testing the happy path for every story.


It is also equally important to invest time in a Continuous Integration system which can run the complete suite of tests after every checkin.







The continuous integration system can now churn out production grade builds by running the complete suite of tests after every checkin.
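The commit stage of such a system boils down to a small loop: run each verification step after every checkin, fail fast, and only call the build a production candidate when everything passes. A sketch (the stage commands here are illustrative placeholders, not a real project's build):

```python
import subprocess
import sys

# A sketch of a CI commit stage: run each (name, command) stage in order,
# stop at the first failure, and only then treat the build as green.

def run_pipeline(stages):
    for name, cmd in stages:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Build failed at stage: {name}")
            return False
    print("Build green: this checkin is a production candidate")
    return True

# Hypothetical stages; a real setup would invoke the project's own test
# runner and packaging commands here instead of these stand-ins.
stages = [
    ("unit tests", [sys.executable, "-c", "print('unit tests pass')"]),
    ("acceptance tests", [sys.executable, "-c", "print('acceptance tests pass')"]),
]
print(run_pipeline(stages))  # → True
```

Real CI servers provide this loop for you; the design point is simply that the full suite runs on every checkin, so a red build is discovered minutes after the offending commit rather than in a regression sprint.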



Once this is done, you will see that the cost of actually doing more releases into production does not shoot up, and stays well within controllable limits.






Reaching a stage of roughly 75-90% automated test coverage is not easy. It requires being diligent about the quality of the tests being written (no side effects, clean teardown, etc.) and also about refactoring the tests and treating them as you would your production functional code.



This lays a strong platform for continuously delivering builds into production, with the only additional costs being deployment and non-functional testing (such as performance and security). Even these additional costs can be minimised by automating as much as possible.



Often the return on investment is questioned by people who look at the project within a narrow timeline and forget the long-term returns.



[My colleague Ranjan has provided a good statistical justification here of why test automation is the first step towards continuous delivery]

Tuesday, July 12, 2011

Models of a Project Manager role in an agile project

The roles a project manager can play in an agile project are endless, and a lot depends on where one’s interest lies and on the type of project one is working on. Not all of these roles are applicable to all agile projects.

Here are a few possible options

The Iteration Manager

This is the most hands-on delivery role one can play as a project manager on a team. The work largely involves planning iterations, resolving blockers from the daily stand-up, and ensuring a smooth passage of stories across the story wall.

It requires working closely with all roles within the team and understanding the core practices, from writing a user story to the essence of continuous integration.

Program/Project Manager working with Iteration Manager(s)

If a project demands a larger team with multiple parallel work streams, one might need an Iteration Manager for every decent sized (3-4 developer pairs) work stream and an overall project manager looking at work across teams. Depending on how large the project is, the role is often termed a “program” manager.

The work involves managing functional and technical dependencies across work streams, during upfront release planning as well as during regular iterations. It also means providing an overall program status to senior stakeholders with the necessary abstraction of detail.

Release Manager 

If it is a large enough project making multiple releases into production, there could be a need for a full time release manager role. Work here involves planning upcoming releases and tracking that stories and defect fixes are merged across different branches. It also requires a decent understanding of techniques and tools such as continuous integration, version control systems and branching strategies, in order to engage in an effective dialogue with senior developers on the team.

Project Manager working with a vendor PM counterpart

If a lot of work is done by vendors, either outsourced or co-sourced, there could be a need for a client project manager who works with all the vendor PM counterparts. Typical work of a client project manager involves resolving project issues that depend on decisions from stakeholders within the client organization. This could mean things as trivial as getting environments set up by the client IT team, or as complex as escalating critical issues about the product delivery to senior business stakeholders.

The vendor PM(s) here acts as a project/iteration manager working closely with the client project manager every iteration.

Client Account Manager

A client account manager can become a full time role for a vendor executing a big project for a client. A major part of the work involves managing the ongoing client relationship by being part of key conversations with senior client stakeholders, project sponsors and others. Work also includes billing and invoicing the client for the services offered.

A client account manager with a strong delivery background can assist a lot in having difficult conversations with a client when there are delivery issues within the project.

Sunday, July 10, 2011

The challenging setup of an offshore project

After managing a few agile projects from India and learning the tricks of the trade, I have been wondering why life is much easier when you are working closely with the client compared to working offshore. What worried me more was that I had to learn and use a lot more project management techniques (making trade-offs, managing risks and so on) from offshore, while at a local client what mattered was building relationships and managing client expectations.

The answer lies partly in the challenging setup of an offshore project. Here are some of the odds one needs to battle up front while delivering from offshore.

Budget is a critical constraint

There are no two ways about this. Clients try offshore when they are constrained by the cost of the project. And with that, among all the variables a project manager can play with, cost becomes a constant.

This means that if you made a release plan with certain velocity assumptions that do not hold in the first few iterations, you cannot easily increase your team size (even if enough work could be done in parallel), as the client will not be happy with the increase in cost.

A burning business backlog to be delivered

It is quite natural for clients to have their own IT teams build the software required to run their business. The in-house IT team guarantees close collaboration with the business. All is well until the in-house team cannot deliver, which is when the backlog of business features starts growing and the business slowly starts losing trust in its IT team. This is typically when an offshore vendor is called in to help the IT team climb the mountain of business backlog built up by not delivering consistently.

When you look at this from offshore, you are looking at a scope full of must-have features which, if not delivered in the next few months, will almost kill the business. Negotiating on such a backlog is never an easy conversation with the business.

Negotiating is difficult from miles away

What makes negotiation worse is that the offshore team sits thousands of miles away from where the actual business is. Building rapport with senior client stakeholders can no longer happen through hallway conversations or over a water cooler. If you need even 5 minutes with a client to discuss the priority of a single defect, you cannot just walk over to their desk. All of these become formal conference calls over phone or video.

Client IT teams end up micromanaging offshore vendors

There is nothing worse for a project than a backlog full of must-haves, a minimal budget, lack of trust from the business and an IT vendor halfway across the globe trying to deliver. Given such a scenario, it is natural for a client IT team to be more risk averse and to micromanage their IT vendors.

A slip in velocity is treated as a red alert, with a full time vendor project manager needed to provide explanations to their client IT counterpart.

Tuesday, June 21, 2011

Dev Box testing is a mindset shift for QAs

I never thought this needed a blog entry of its own, since it has been a common practice in all the Agile projects I have done at TW. But apparently it isn't so easy in other organizations.

A “Dev Box” is basically a developer machine on which active development happens. The idea of “Dev Box” testing is to get the QA on the team to do a quick sanity test of the story on a developer machine before the final checkin of the story is done and the developer moves the card to Development Complete or Ready for Test.

It is as informal as a developer pair shouting out to a QA on the team, “Hey, we think we are done with the story, can you do a quick round of Dev Box testing before we call it dev complete?”, and the QA coming over to the dev pair’s station and doing a quick test. This usually takes no more than 15 minutes.

Even though this sounds very basic, it has the following advantages

* Reduces the wait time to find defects as the QA need not wait for a build to be churned out and deployed on an environment, hence providing quick feedback to the developers.

* It provides more insight for the developers to look at how a QA is testing the application and vice versa.

* It also aligns developers and QAs towards building a better quality product by having quality discussions much earlier in the cycle, with a tangible story at hand. Sometimes the QA might have useful input on how a widget behaves on the web page, and it might be just a quick enhancement which the developers can jump on during the testing session itself.

Apparently this is not as easy as it sounds in organizations where cross functional teams are not a common practice. I have worked with clients whose QAs report to a separate quality assurance department and refuse to test on a developer machine because their policies do not allow it, even though they personally agree with the benefits of cutting down the feedback loop.

Another client I was talking to remarked that QAs in their organization were actually driven by wanting to log a huge number of defects in their defect tracking system, not by wanting to deliver a quality product. This went to the extent that their yearly appraisals were affected partly by what was logged in the defect tracking system, as their managers would only look at those reports.

By having a QA as an integral part of the development team and adopting practices like Dev Box testing, the team goes through a mindset shift after which everyone is focused on one goal: delivering business value by building a quality product.

Wednesday, June 15, 2011

Release plan checklist

When I build release plans, or even look at release plans of other projects, I end up running through a checklist of things in my mind, to determine if it is good enough. If you are an Agile PM trying to build a plan, this could be useful for you.

Iterations

* Is the length of the iteration enough to complete a medium sized story within it?
* Does the number of iterations fit well within the acceptable timeline?
* Can we assume a production quality build after every iteration?

Estimation

* Are the stories sized relatively?
* Does the team understand the estimation unit across all roles?

Velocity

* Is the planned velocity the average velocity of the last 3 iterations?
* If it is a new project, are we planning based on a raw velocity exercise?
* Are team members across roles involved in planning the velocity?

Resource ramp up

* Is there time factored in for new people to ramp up on the team?

Ordering of stories

* Are the stories ordered around the critical path functionality? (Always remember the critical path determines the schedule.)
* Are the higher priority stories slotted for earlier iterations?
* Are the stories ordered so that they meet any functional or technical dependencies?

Negotiable scope

* Are there some “nice to have” stories in the plan which can later be negotiated away if needed to bring the project back on track?

Spikes / Proof of Concepts

* For technical unknowns, are there spike stories which allow the team to explore technical solutions?

Non functional requirements

* Is there clarity on requirements for performance, security and scalability, and how they will be addressed?

Functional automation

* Will developers do functional automation as part of a story, or will this be done as part of QA?

Regression/Stabilization

* Is there a need for a separate regression/stabilization iteration once development is complete?

User Acceptance Testing

* How much time is required to UAT the set of stories the team will deliver?

Risks

* Does the team understand how much risk there is in the plan?
* Are these risks shared with the customer?

Friday, April 29, 2011

Track all development work as story cards

Managers new to agile teams are often in a dilemma about what work should be written on a story card and what should be tracked in their favourite project management tool. Since the smallest unit of work is widely known as a User Story, people are often confused about how to track all the other work the team has to do.

In short, all development work done by the team should be written on story cards. The majority of these will be User Stories, but there can also be several other categories such as


  • Infrastructure/Technical Stories – Tasks such as setting up continuous integration environments, developer machines etc…
  • Spikes – Proof of concepts / Prototypes
  • Performance Stories – Stories involving performance testing / tuning etc…
  • Technical Debt – stories to pay down the project’s technical debt.

If you are using an Agile project management tool (e.g. Mingle, Greenhopper) it should allow you to create different card types and set workflows based on them.

The important aspect is that every bit of work which is done by a development team should be part of a card which is tracked.

All the above story types should be estimated, prioritized, planned and tracked in the same way as user stories. The technical stories, however, need not be signed off by business stakeholders; it is of more interest to a technical stakeholder within the project team to sign off such stories.

Thursday, April 28, 2011

Questioning Velocity

What is velocity in an Agile project ?

Velocity is the total number of story points of stories which were completed in an iteration. Stories should be DONE by the team’s definition of DONE.

How do we account for partially completed stories in an iteration ?

In an iteration there will be stories which are partially completed. These stories should not be given partial credit towards the velocity of that iteration; they should be accounted for in the velocity of the iteration in which they are actually DONE.

How does one plan velocity when starting a new project ?

When starting a new project, since there is no previous velocity to go by, there are a couple of options one can use

a. Ensure that at least a developer pair is available for a week to do a couple of Small/Medium stories from the release scope based on the proposed technology stack. A QA can then decide roughly how complex it is to test the story manually and to write an automation script for it. This should give the team some confidence in the proposed technology and a rough idea of the stories which can be completed within a week.

b. Do a raw velocity exercise. In a raw velocity exercise the team decides how many stories it can finish in an iteration period. This is done by repeatedly picking different sample sets of already sized stories which can be done within an iteration period. The total points across the different picks are averaged, and that is taken as the velocity the team will achieve each iteration. (For example, if the result of 3 picks was 6, 8 and 10 points for a 2 week iteration, then (10+8+6)/3 = 8 points is the raw velocity for the team for 2 weeks. A schedule can then be laid out assuming the team finishes 8 points in a 2 week iteration.)
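The averaging above is simple enough to sketch in a couple of lines of Python; the pick values are the ones from the example:

```python
def raw_velocity(picks):
    """Average the story points across several trial picks of sized stories."""
    return sum(picks) / len(picks)

# Picks of 6, 8 and 10 points for a 2-week iteration
print(raw_velocity([6, 8, 10]))  # 8.0
```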

Either a, b or both can be done before planning velocity for a release, but these only provide an indicator of the team’s velocity; the actual velocity will stabilize over time as the team learns more about the domain and the technology of the project.

Should we also measure velocity in parts such as developer velocity , QA velocity etc… ?

Velocity should always be measured against the definition of DONE of a story. Measuring velocity in parts is again saying that a story is partially done, which in itself has no meaning. In an ideal world a story is done when it is in production, but for accounting velocity, one should at least count stories only when they are tested, showcased and accepted by the customer in a production like environment.

Should we plan the same velocity even if the team is ramping up with new people ?

During the release planning exercise, one should also arrive at a rough staffing plan. This should show the addition of people (Developers, BAs, QAs) staggered across various iterations in a project. Based on this, a ramp up velocity should be planned, as the new people joining the team will have a learning curve ahead of them before they start contributing as effectively as the existing team members.

As an example, if 1 pair of developers was doing 5 points in an iteration, then adding a new pair of developers to the team will not double the velocity immediately to 10 points, as the new developers will be learning more about the project in the first few iterations. So a staggered velocity of 5, 7 and 10 points for the 3 iterations may be more realistic.
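The staggering above can be sketched as a small helper, assuming the team simply holds the last ramp value once the new pair is fully productive:

```python
def ramp_up_plan(ramp_steps, iterations):
    """Spread a ramp-up velocity over a number of iterations.

    ramp_steps: the velocities assumed while new joiners climb the learning
    curve; after the ramp, the team settles at the last ramp value.
    """
    plan = list(ramp_steps[:iterations])
    while len(plan) < iterations:
        plan.append(ramp_steps[-1])
    return plan

# The example from the text, projected over a 5-iteration release
print(ramp_up_plan([5, 7, 10], 5))  # [5, 7, 10, 10, 10]
```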

What should be done if the team does not achieve the planned velocity for the iteration ?

Before making any decision to change the planned velocity, one needs to understand the root cause of the problem. If the team set out to achieve 10 points in an iteration and could do only 6 points, what was the actual reason for not completing the other 4 points? Was it poorly analysed stories, technical debt in a certain area, new technology/framework usage? If the root cause can be fixed in the subsequent iteration, such as by adding an experienced business analyst to work closely with the customer to provide well analysed stories, the team can still set out to achieve the planned velocity in the next iteration.

However, if the team is consistently able to achieve only an average of 6 points over the last 3 iterations after exploring and eliminating all possible root causes, then it is fair to say that the average velocity of the team is 6 points. In this scenario, the first thing is to let the customers know the team’s current velocity and make them understand the reasons behind it. The next thing is to look at project variables such as Scope, Time and Cost along with the trade-off sliders, and see which of the variables can be compromised without conflicting with the interests of project stakeholders. An example could be reducing the scope of the release by dropping a few nice to have stories, if the trade-off sliders indicate scope is a lesser priority than being on time and within budget.

Do technical/infrastructure stories and spikes account for velocity in an iteration ?

Especially at the start of a new project, there will be activities such as setting up basic infrastructure: build scripts, continuous integration, environments and so on. There will also be spike stories played to understand other frameworks and systems the team needs to work with. Since the team spends effort on these stories and tasks, they should be accounted for in the velocity of an iteration. Hence any technical story or spike should be estimated like all the other stories in the release and planned in the same manner.

The team also fixes defects in an iteration. How do we account for them in velocity ?

Defects can broadly be of 2 categories in an iteration.

The first category is story defects: defects on stories which have not yet been signed off by the QAs during the iteration. In this case, once the defects are fixed the story gets signed off, and the story points associated with the story count towards velocity.

The second category arises because the application has regressed over time: these defects belong to stories completed in earlier iterations. This can happen because of a missing safety net, such as a unit or acceptance test, when the story was originally completed. The team has to spend effort fixing these defects, but they do not count towards velocity separately. If a team was doing 10 points in an iteration, having 4 such defects might let the team do only 8 points in that iteration. Velocity is hence a clear indicator that regression defects are slowing down the team’s progress. Addressing the technical debt of test coverage in that area will ensure such defects are minimised.

Friday, March 25, 2011

Questioning Story Points

What is a Story Point ?

A Story Point is a subjective unit of estimation used by Agile teams to estimate User Stories.

What does a Story Point represent ?

Story points represent the amount of effort required to implement a user story. Some Agilists argue that it is a measure of complexity, but that is only true if the complexity or risk involved in implementing a user story translates into the effort involved in implementing it.

Here is an article by Mike Cohn that explains this in detail. Do make sure you read the comments.

What is included within a Story Point estimate ?

A story point estimate should include the amount of effort required to get the story done. The definition of done here should ideally include development effort as well as the testing effort required to implement a story. Once the user story is implemented, it should be usable in a production like environment.

Why are Story Points better than estimating in hours or days ?

Story point estimation is done using relative sizing by comparing one story with a sample set of already sized stories. Relative sizing across stories tends to be much more accurate over a larger sample, than trying to estimate each individual story for the effort involved.

A simple analogy: it is much easier to say that Delhi to Bangalore is probably twice the distance of Mumbai to Bangalore than to put a number such as 2,061 km against Delhi–Bangalore.

Teams are hence able to estimate much more quickly without spending too much time in nailing down the exact number of hours or days required to finish a user story.

How do we estimate in points ?

The most common way of estimating stories is to categorize them into 1, 2, 4, 8, 16 points and so on. Some teams are more comfortable using a Fibonacci series of 1, 2, 3, 5, 8 as the point scale. Once the stories are laid out on index cards, the team can start by sizing the first card which the team feels is of smaller complexity.

As an example, a team might pick the Login user story and call it a 2 point story, and subsequently pick a customer search story and call it a 4 point story as it probably involves double the effort to implement. This exercise continues until all stories have a story point attached to them.

Who should be involved in Story Point estimation ?

The team which is responsible for getting a story done should ideally be part of the estimation. If the team has QAs to test stories and do test automation, they should also be part of the estimation exercise along with the developers.

The QAs should be able to call out if a story has little development effort but a lot of testing effort. For example, building a customer search screen might be a 4 point story; supporting it on 2 new browsers might be a 1 point development effort but a lot more from a testing perspective. In this scenario the QAs who are part of the estimation exercise should call this out and size the story to reflect the adequate testing effort, which in this example might be 2 points.

How do we convert Story Points into hours or days ?

This should not be done at a story point level. Story Points go hand in hand with Velocity and hence Velocity at the end of every iteration should be measured in the number of Story Points done by the team.

Should we do a best, likely , worst case estimate even when we are estimating in points ?

Giving 3 estimates for a user story can still be done with story points by providing 3 different point values for the best, likely and worst case scenarios. This is quite effective when estimating a large sample set of stories, probably during the first release of the project when little code has been written.

Doing this provides a range across which estimates may vary depending on the outcomes of certain assumptions the team has made. For example, a best case estimate for the Login story could be 2 points assuming integration with a local LDAP server, but if that assumption changes to a 3rd party provider integration the worst case could be 8 points.

How do we plan/schedule a project using Story Points ?

To convert story points into a schedule, the team needs to calculate its velocity in terms of the number of points it can deliver in an iteration. This is typically done using yesterday’s weather: averaging the velocity achieved by the team over the last 3 iterations.

If the team is starting afresh, then a raw velocity exercise should be done. In a raw velocity exercise, the team decides how many stories it can finish in an iteration period. This is done by repeatedly picking different sample sets of already sized stories which can be done within an iteration period. The total points across the different picks are averaged and that is taken as the velocity the team will achieve each iteration.

For example if the result of 3 picks was 6,8 and 10 points for a 2 week iteration then (10+8+6)/3 = 8 points is the raw velocity for the team for 2 weeks. A schedule can then be laid out assuming the team finishes 8 points in a 2 week iteration.
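Turning a velocity into a schedule is then simple arithmetic. A minimal sketch, using the velocity from the example above and a hypothetical 64-point backlog:

```python
import math

def yesterdays_weather(last_three_velocities):
    """Plan the next iteration using the average of the last three velocities."""
    return sum(last_three_velocities) / len(last_three_velocities)

def iterations_needed(backlog_points, velocity_per_iteration):
    """Estimate how many iterations it takes to burn through a backlog
    at a steady velocity (rounded up to whole iterations)."""
    return math.ceil(backlog_points / velocity_per_iteration)

# A 64-point backlog at 8 points per 2-week iteration needs 8 iterations,
# i.e. roughly 16 weeks.
print(iterations_needed(64, yesterdays_weather([6, 8, 10])))  # 8
```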

Can Story Points be standardized across various teams ?

Different teams will end up with a different measure of a story point depending on the sample set of stories they size. Unless they are building the same system, the effort required for team A to finish a 1 point story will differ from the effort required by team B to finish a 1 point story in their system. And this difference will ultimately show up in the velocities of team A and team B.

If there is a large program of work which is split into multiple teams building certain areas of a large system, it is quite tempting to attempt to standardize the point scale across these teams. This again defeats the purpose of estimating using story points and it being a unit of measure subjective to a team.

How do we estimate spike stories in points ?

Spike stories are stories a team plays to better understand how to implement a particular feature; a spike can also be used as a proof of concept. Since very little is known about the effort involved in a spike, it is typically time boxed with an outcome the team can agree upon. This can be approximately converted into points by looking at the velocity trend. For example, if a week long spike is required, and the velocity of the team has been 16 points per 2 week iteration, then we can attach 8 points to the spike story.
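A minimal sketch of this conversion, assuming velocity is quoted per iteration and the spike time box is a fraction of an iteration:

```python
def spike_points(timebox_weeks, velocity_points, iteration_weeks):
    """Convert a time-boxed spike into points via the velocity trend:
    points = velocity * (time box as a fraction of an iteration)."""
    return velocity_points * timebox_weeks / iteration_weeks

# The example from the text: a week-long spike on a team doing
# 16 points per 2-week iteration is roughly an 8-point story.
print(spike_points(1, 16, 2))  # 8.0
```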

Is there a way we can calculate cost per point ?

Cost per point will typically be (Cost of an iteration) / (Velocity per iteration, in points). In cases where there is an additional stabilization sprint or regression iteration, the cost of that iteration should also be included when calculating the cost per point.
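One way to sketch this calculation, with hypothetical iteration costs, is to divide the total spend (including any stabilization or regression iterations) by the points actually delivered:

```python
def cost_per_point(iteration_costs, total_points_delivered):
    """Total spend across all iterations, including stabilization/regression
    iterations, divided by the story points actually delivered."""
    return sum(iteration_costs) / total_points_delivered

# Hypothetical numbers: five delivery iterations at 10,000 each plus one
# stabilization iteration, delivering 40 points in total.
print(cost_per_point([10_000] * 6, 40))  # 1500.0
```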

Are story points an excuse for teams not being able to estimate correctly in days/hours?

It is not an excuse, but a reality, that attempting to arrive at an accurate number of days or hours for a user story is wasteful. The effort and time required to arrive at such a number trades off against the benefits of estimating in days/hours.

Moreover, estimating in days/hours often puts pressure on the team to deliver within the stipulated number of days, and the team ends up burning itself out to meet such false commitments. The result is a team that never reaches a sustainable pace.

Do Story Points relate to Business Value ?

Story points are an internal measure of the effort involved in implementing a user story. They do not, in any way, reflect the amount of business value a user story provides. There may be cases where a 1 point story provides more business value than a 4 point story in the same system. Business value is best left to the product owner and business stakeholders to determine.

Here is an article which talks about measuring business value in much more detail.

How do we know if the team is getting better at estimation when it is estimating in points ?

It is a popular belief that if the team estimates in ideal days, it is much easier to track whether the estimation is good, by checking the actual days elapsed on a story against the estimate. This is however counterproductive, as the team spends hours estimating a few stories to arrive at a magic number of days and is then pressurized to deliver on that magic number.

When a team is relatively sizing stories in points, a trend slowly emerges where similar sized stories take similar time to implement. If there is a bad estimate, it bubbles up automatically as an exception.

Should developers change their story point estimation as they learn more about the system they are building ?

If a story A was classified in the 2 point bucket, a similar story B coming in months later should be classified in the same bucket. If the team has learnt more about implementing such stories between story A and story B, this will show up as an increase in the team’s velocity.

It is better to set up a relative sizing triangulation board for the team, with placeholder stories from the initial estimation session, so that later on the team can refer to it while sizing a new story.



Tuesday, February 15, 2011

Increasing predictability by moving the definition of DONE

The definition of DONE for an agile project team is quite important as that determines which stories are actually completed and can be accounted for in velocity for the iteration.

A team usually starts by treating a product owner’s approval after a showcase/UAT of a story as DONE. Though this sounds simple, it can get dirty when environments are not available to showcase on, or when the product owner is not fully engaged with the team. In such a scenario it is very tempting to go down the path of treating Dev Complete or Testing Complete as DONE and claiming velocity without fixing the root cause.


Velocity is a measure of predictability, and moving the definition of DONE to the left side of the story wall reduces a team’s ability to predict how much work it can chew. A team should always strive towards moving this definition to the rightmost lane of the story wall.



Definition of DONE – What you cannot predict:

* Development Complete – bugs in testing, bugs found by the Product Owner, deployment issues in production
* Testing Complete – bugs found by the Product Owner, deployment issues in production
* UAT Complete – deployment issues in production
* Deployed – really DONE, it’s in production!

Maintaining a predictable velocity is important for planning a project better over the long term. As the project manager of an Agile project, one should strive towards moving to a "DONE = Deployed to Production" state with each iteration.

A nice side effect which starts to emerge is reduced waste: manual deployment activities, for instance, get automated once one starts to focus on moving stories to the rightmost end of the story wall before counting them in velocity.

No fluff, just releases into production every 2 weeks

My current project is a small Ruby on Rails application which is part of a larger web portal. The fun part is that we are releasing new features into production every 2 weeks. Each new feature adds a lot of business value to end users as well as the business stakeholders.

The business we cater to is evolving at a fast pace, and as an IT team, this quicker delivery service provides them a  lot of flexibility. The dynamic nature of business means that we never have requirements which are frozen, even for a period of 2 weeks. This also means that the IT team needs to respond to the changing requirements very quickly.

An important practice that has helped us is to deploy to an exact production replica UAT environment 3-4 times a day, so that the latest application is always available for the business to provide feedback. The business is exactly in the situation Jeff Patton describes here http://www.agileproductdesign.com/blog/dont_know_what_i_want.html, not knowing exactly what they want, and the rapid feedback cycle on the latest developed application allows us to build the feature right for them.

Most of our deployment scripts to UAT and Production are automated to an extent that it is a one click deployment to production. Not only does it make deployment simple, but also rollback simpler in case of an unexpected error.

We are still managing without creating a production branch. Having a production quality application in the UAT environment almost every day helps us apply an urgent bug fix if required and deploy it to production the very next day. A good version control system like Mercurial is useful for keeping half done stories out of the production build in such cases.

Lastly we do not spend time doing estimation on stories. We tend to keep the backlog to a minimum and try to move almost all the stories in progress to production every 2 weeks.

Thursday, January 6, 2011

How much should you plan in Agile ?

Planning in Agile varies a lot given the kind of project, the size of the team, the dependencies and commitments to stakeholders. Because of this, Agilists around the world hold varied opinions on how much one should plan in an Agile project.

There is no prescriptive answer to that question; however, the level of detail that goes into a project plan can be decided based on the circumstances.


Minimal planning is working out of a prioritized backlog of stories
Prioritizing stories based on business value is almost always the first step in planning. Once we have a prioritized product backlog, the team can start working on the highest priority story from the top of the stack.

This mode of working is particularly useful when there are no strict timeline commitments for releasing stories to production. This might happen when the application is small and the product owner is comfortable doing a release whenever he feels enough has been built to be shown to users. It can also happen when the team is working on enhancements or bugs after a production release and is waiting for the business to decide the next milestone. This works well in small teams of 2-3 pairs.


Planning a Single Sprint
The next level of detail is planning at least for a Sprint (2-4 weeks). A Sprint backlog (or an Iteration Plan) creates visibility for a limited period of time, which is useful both for the Product Owner and the team. An example of a Sprint commitment to a Product Owner could be delivering 20 points of work from the product backlog, which translates to 5-6 User Stories.

Planning only for one Sprint is useful when the Product Owner does not have longer term visibility on features/stories for the team. This might also happen when we are waiting on things like user feedback to determine the next set of stories to be played.

Doing Release Planning across Sprints

A longer term view of the feature pipeline can help the team build an exhaustive release plan across Sprints. This can be done by slotting stories into iterations and even looking at parallelization of stories between development pairs. This activity is best done by putting all the stories up on a release wall and collaboratively building the release plan on it.

The number of stories slotted into an iteration is guided by the velocity the team thinks it can achieve. This release plan gives a planned burn-down of the scope the team sets out to achieve, and acts as a great metric for the team to measure "How are we doing?"
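The slotting itself is mechanical enough to sketch in a few lines of Ruby. Everything here - the story names, point values and the velocity of 13 - is invented for illustration:

```ruby
# Slot a prioritized backlog into iterations, assuming a fixed velocity.
# Story names and point values are made up for illustration.
Story = Struct.new(:name, :points)

def build_release_plan(backlog, velocity)
  iterations = [[]]
  backlog.each do |story|
    current = iterations.last
    # Close off the iteration when it cannot absorb the next story.
    if !current.empty? && current.sum(&:points) + story.points > velocity
      iterations << []
    end
    iterations.last << story
  end
  iterations
end

backlog = [Story.new("Login", 5), Story.new("Search", 8),
           Story.new("Checkout", 8), Story.new("Reports", 3)]
plan = build_release_plan(backlog, 13)
plan.each_with_index do |stories, i|
  puts "Iteration #{i + 1}: #{stories.map(&:name).join(', ')}"
end
```

Stories stay in priority order; an iteration is closed off as soon as the next story would push it past the velocity.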
Doing Release Planning for multiple work streams in a program of work

Doing a release plan for multiple workstreams in a program is extremely helpful to identify dependencies across streams. It also helps rollup a program level burn up across all teams in the entire program of work. This is particularly useful when running a large program with multiple work streams where tracking progress and dependencies on a day to day basis becomes difficult. Doing a little more of upfront planning helps in the long run.