One of the artifacts used to track work on an agile project is the release burndown chart. This chart plots functionality along the y-axis and time along the x-axis. A line is then drawn from the total desired functionality down to zero at the target release date, labeled the “ideal” line. This is the line that depicts how much functionality should be left at each point in time in order to deliver all of the functionality within the desired time.
To make this example a little more concrete, if a project is tracking functionality in story points and time in sprints, the chart below would show the release burndown for a project that had 200 story points and wanted to finish within 4 sprints. According to this chart, the team would need to finish 50 points each sprint (200 points / 4 sprints) in order to deliver all of the functionality. The number of points that the team can complete within a sprint is their velocity, so if the team could complete 50 points each sprint, their velocity would be 50.
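To make the arithmetic explicit, here is a minimal sketch (in Python, using the 200-point, 4-sprint numbers from the example above) of how the ideal line's data points could be computed:

```python
# A minimal sketch of the "ideal" burndown line, assuming 200 story points and 4 sprints.
total_points = 200
num_sprints = 4

ideal_velocity = total_points / num_sprints  # 50 points per sprint

# Points remaining at the end of each sprint if the team hits the ideal velocity every time.
ideal_line = [total_points - ideal_velocity * sprint for sprint in range(num_sprints + 1)]
print(ideal_line)  # [200.0, 150.0, 100.0, 50.0, 0.0]
```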
This is great in theory, but anyone who has ever worked on a project or in a team knows that real life is not linear. When a team first starts working, they know the least about the work and may still be in the early stages of team building. Therefore they may not be as effective. So after the first sprint, the release burndown chart looks like this.
I would say this is pretty normal for a team. They had hoped to accomplish 50 points, but were only able to accomplish 25. A single sprint does not tell us much, but after 2 (and the more the better), we will start to have a much better idea of the actual team velocity. So maybe in sprint 2 the team completes 35 points, which would give them an average velocity of 30 points a sprint. Between working together as a team, seeing how much functionality they can deliver and learning more about what needs to be delivered, the team will most likely increase what they can deliver and arrive at a more accurate velocity.
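As a rough sketch of that bookkeeping (using the hypothetical 25- and 35-point sprints above), the actual line and average velocity could be computed like this:

```python
# Sketch: tracking the actual burndown and average velocity, using the example numbers.
total_points = 200
completed_per_sprint = [25, 35]  # points finished in sprints 1 and 2

average_velocity = sum(completed_per_sprint) / len(completed_per_sprint)
print(average_velocity)  # 30.0 points per sprint

# Points remaining after each completed sprint.
remaining = total_points
actual_line = [remaining]
for points in completed_per_sprint:
    remaining -= points
    actual_line.append(remaining)
print(actual_line)  # [200, 175, 140]
```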
One of the problems I have seen with the above chart is that “management” looks at the chart and says, “Well, the ideal line says that the team was supposed to complete 50 points but they only completed 25, so they are behind schedule. What are you going to do to catch back up?”
At the start of the project, it is unrealistic to expect the team to know their velocity. It is also unrealistic to expect that the team will deliver exactly 50 points each sprint; the amount of functionality delivered will fluctuate from sprint to sprint.
One of the tools used in project management to deal with this problem is the “Cone of Uncertainty.” What this model says is that at the start of a project, whatever you think the estimate is (x), you want to multiply it by a low and a high factor to create a range that the project will fall into, given the level of uncertainty. So if we estimate at the start of the project that it will take 1,000 hours, this model says that we should provide a range of 600-1,600 hours to complete the project. As time goes on, this range should become smaller as we learn more.
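As a small sketch of that calculation (the 0.6 and 1.6 multipliers are simply the factors from the 1,000-hour example above, not fixed values prescribed by the model):

```python
def uncertainty_range(estimate, low_factor=0.6, high_factor=1.6):
    """Return the (low, high) range for an estimate given uncertainty factors."""
    return estimate * low_factor, estimate * high_factor

print(uncertainty_range(1000))  # (600.0, 1600.0) hours
```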
So I wondered … what would happen if we applied the concepts from the Cone of Uncertainty to the release burndown chart? There are 2 variables that can be estimated with the release burndown chart – functionality and time.
Estimating Functionality/Constraining Time
In this example, we know we want to release in 4 sprints. If we estimate the amount of functionality that can be completed each sprint, we multiply that estimate by the low and high factors to produce a range. So if we thought that the team could complete 50 points a sprint, we would multiply that by 0.6 and 1.6, giving a range of 30 to 80 points per sprint. This is a more accurate representation of the variation in a team's velocity. The chart still targets completing the functionality within 4 sprints, but shows the potential range of velocity. As the actual line is drawn with each sprint, we can see how it trends and how it relates to the range.
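Here is one way the range lines could be computed when the number of sprints is fixed; this is a sketch that simply applies the 0.6/1.6 factors from the example to the 50-point velocity estimate:

```python
# Sketch: upper and lower burndown lines for a fixed 4-sprint release.
total_points = 200
num_sprints = 4
estimated_velocity = 50

low_velocity = estimated_velocity * 0.6   # 30 points per sprint
high_velocity = estimated_velocity * 1.6  # 80 points per sprint

# Points remaining per sprint for each bound (never below zero).
low_line = [max(total_points - low_velocity * s, 0.0) for s in range(num_sprints + 1)]
high_line = [max(total_points - high_velocity * s, 0.0) for s in range(num_sprints + 1)]

print(low_line)   # [200.0, 170.0, 140.0, 110.0, 80.0] -> slow end leaves 80 points undone
print(high_line)  # [200.0, 120.0, 40.0, 0.0, 0.0]     -> fast end finishes during sprint 3
```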
Estimating Time/Constraining Functionality
In this example, we know that we want to release all of the functionality, regardless of how many sprints it takes. Again, if we estimate the amount of functionality that can be completed each sprint, we multiply that estimate by the low and high factors to produce a range. So if we thought that the team could complete 50 points a sprint, we would multiply that by 0.6 and 1.6, giving a range of 30 to 80 points per sprint. This is a more accurate representation of the variation in a team's velocity. The chart still targets completing all of the functionality, but shows the potential range of velocity and how that relates to how many sprints it will take. As the actual line is drawn with each sprint, we can see how it trends and how it relates to the number of sprints.
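With the scope fixed instead, the same factors translate into a range of sprint counts; a quick sketch using the example numbers:

```python
# Sketch: range of sprints needed to deliver 200 points with a 50-point velocity estimate.
import math

total_points = 200
estimated_velocity = 50

low_velocity = estimated_velocity * 0.6   # 30 points per sprint
high_velocity = estimated_velocity * 1.6  # 80 points per sprint

most_sprints = math.ceil(total_points / low_velocity)     # 7 sprints at the slow end
fewest_sprints = math.ceil(total_points / high_velocity)  # 3 sprints at the fast end
print(fewest_sprints, most_sprints)  # 3 7
```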
These are just some thoughts I have been having lately. What has your experience been? Is the ideal line on the release burndown chart beneficial as a guide? Do you think a range is a better piece of data for the chart? Or do you think both are important?
Hey Tom,
In coaching / training assignments I talk about the cone of uncertainty when explaining how the product backlog technique & user stories help reduce risks in understanding over big-upfront-requirements specifications. The above is a fantastic isomorphism; have you had any feedback from real world application on projects since writing the above?
Anthony Oden
Anthony – Thanks for the comment. My favorite real world application of this that I have used with teams is adding a few trending lines to their release burndown chart. So there is the ideal line, but in addition we start out with the range lines like above. Then after 1 or more sprints, we swap those lines out for trend lines (optimistic, pessimistic and most likely) based on the actual data. This allows the product owner/stakeholders to really see the range based on variability and lets them make better business decisions. It gets the focus away from just the ideal line and uses actual data for more accurate projections than the initial estimates from the cone.
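To make that concrete, here is a small sketch of the trend-line idea; using the min, mean and max of the completed sprints as the pessimistic, most-likely and optimistic velocities is just one simple choice for illustration, not necessarily the exact rule Tom uses:

```python
# Sketch: projecting pessimistic / most-likely / optimistic trend lines from actual data.
import math

total_points = 200
completed_per_sprint = [25, 35]  # actuals from the example sprints

remaining = total_points - sum(completed_per_sprint)  # 140 points left

pessimistic = min(completed_per_sprint)                              # 25 points/sprint
most_likely = sum(completed_per_sprint) / len(completed_per_sprint)  # 30 points/sprint
optimistic = max(completed_per_sprint)                               # 35 points/sprint

for label, velocity in [("pessimistic", pessimistic),
                        ("most likely", most_likely),
                        ("optimistic", optimistic)]:
    print(f"{label}: about {math.ceil(remaining / velocity)} more sprints")
# pessimistic: about 6 more sprints
# most likely: about 5 more sprints
# optimistic: about 4 more sprints
```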