In 2011, we adopted Scrum, an agile methodology that helps development teams work more efficiently and ensures that teams critically review how they work in an effort to continually improve. Over the past few years, through meetings called retrospectives, we have modified our process quite a bit and now operate under a mix of Scrum and Kanban (another agile development methodology).
With these methodologies, it's important to track data over time to ensure that changes to the process are actually working. There's a wide array of tools available to help with this, but we'd like to highlight one in particular: a tool we built ourselves, named Huburn.
The play on words in the name comes from our use of GitHub, the primary tool for tracking what's being worked on by the engineering team. GitHub provides a nice feature called Labels that allows you to tag items with specific attributes; labels for features, defects, technical debt, escalations and points are some examples we use.
The points label is especially helpful because we assign estimates to items not in terms of time, but in terms of effort relative to one another. To do this, we use the Fibonacci sequence: items are given an estimate of 1, 2, 3, 5, 8 or 13. This helps us project when features and releases can be completed given the team's current availability.
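To make that projection concrete, here is a minimal sketch of velocity-based forecasting. The numbers and function names are illustrative assumptions, not Huburn's actual code:

```python
import math

# Valid point estimates, drawn from the Fibonacci sequence.
FIBONACCI_ESTIMATES = (1, 2, 3, 5, 8, 13)

def velocity(points_per_iteration):
    """Average points completed per iteration (the team's velocity)."""
    return sum(points_per_iteration) / len(points_per_iteration)

def iterations_to_finish(remaining_points, recent_velocity):
    """Whole iterations needed to burn down the remaining backlog points."""
    return math.ceil(remaining_points / recent_velocity)

# e.g. the last three iterations completed 21, 18 and 24 points
v = velocity([21, 18, 24])  # 21.0
print(iterations_to_finish(remaining_points=85, recent_velocity=v))  # 5
```

With two-week iterations, a result of 5 iterations translates to roughly a 10-week projection for the remaining scope.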
The image below shows the burnup chart we created to track progress in completing items over the course of our two-week iterations. The statistics alongside it surface metrics that trigger important conversations about why certain tasks are arising.
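As a rough illustration of the data behind a burnup chart, the sketch below builds the two series such a chart plots: cumulative points completed and total scope, day by day. The names and numbers are hypothetical, not taken from Huburn:

```python
from itertools import accumulate

def burnup_series(points_completed_per_day, total_scope):
    """Pairs of (cumulative points done, total scope) for each day."""
    return [(done, total_scope) for done in accumulate(points_completed_per_day)]

# A 10-working-day (two-week) iteration with 30 points of scope:
series = burnup_series([3, 0, 5, 2, 8, 0, 3, 5, 2, 2], total_scope=30)
print(series[-1])  # (30, 30): the completed line meets the scope line
```

Plotting the first element of each pair against the constant scope line gives the familiar burnup shape: the iteration is on track when the cumulative line is trending toward the scope line.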
Below is a standard deviation chart showing how long items with each point estimate tend to take. Again, we don't initially estimate in terms of time, but at the end of the day, items do take a certain amount of time. This chart lets us know when an item is stretching into a time frame that is abnormal for its estimate, based on historical data.
We aggregate the items worked over the past 6 iterations, or 12 weeks, to get a statistical picture of where all of our items sit on that spectrum. With this, we can spot potential disruptions to our future roadmap. Say, for example, we have 4 items in an iteration with estimates of 1, 2, 3 and 5. The chart could show us that the items estimated at 3 and 5 are over the bounds of how long such items have historically taken (the black lines in the image represent different items). From there, we can discuss why this may be the case and whether we need to modify our roadmap.
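The check described above can be sketched as follows: for each point estimate, compute the mean and standard deviation of historical completion times, then flag items that exceed, say, two standard deviations above the mean. The field names, thresholds, and sample data here are assumptions for illustration, not Huburn's implementation:

```python
from statistics import mean, stdev

def time_bounds(history_days_by_estimate, num_stdevs=2):
    """Map each point estimate to an upper bound on expected days in progress."""
    return {
        estimate: mean(days) + num_stdevs * stdev(days)
        for estimate, days in history_days_by_estimate.items()
    }

def flag_outliers(items, bounds):
    """Return items whose elapsed time exceeds the bound for their estimate."""
    return [item for item in items if item["days"] > bounds[item["estimate"]]]

# Hypothetical completion times (in days) from the last 6 iterations:
history = {
    1: [0.5, 1, 1, 2],
    2: [1, 2, 2, 3],
    3: [2, 3, 4, 3],
    5: [4, 6, 5, 7],
}
bounds = time_bounds(history)
in_progress = [
    {"name": "item A", "estimate": 3, "days": 9},
    {"name": "item B", "estimate": 5, "days": 5},
]
print(flag_outliers(in_progress, bounds))  # only "item A" is over its bound
```

An item flagged this way isn't necessarily a problem; it's the trigger for the conversation about whether the roadmap needs adjusting.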
Lastly, we have a page with overall metrics on how long items are taking, broken down by category. We can watch these numbers evolve over time and discuss the trends we see.
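A minimal sketch of such a per-category breakdown, assuming each completed item records its label (feature, defect, and so on) and the days it took; the names and data are illustrative:

```python
from collections import defaultdict
from statistics import mean

def average_days_by_category(completed_items):
    """Average days-to-complete, grouped by item category."""
    grouped = defaultdict(list)
    for item in completed_items:
        grouped[item["category"]].append(item["days"])
    return {category: mean(days) for category, days in grouped.items()}

completed = [
    {"category": "feature", "days": 5},
    {"category": "feature", "days": 3},
    {"category": "defect", "days": 1},
]
print(average_days_by_category(completed))  # features average 4 days, defects 1
```

Watching these averages iteration over iteration is what surfaces the trends worth discussing, such as defects starting to take longer than features.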
Having the right tools in place for any team is a must.
For us, it's always about improving and delivering the highest-quality product we can. We are happy to have built a tool that gives us exactly the information we need and points out areas for us to improve.