
Agile Reporting at the enterprise level (Part 2) – Measuring Productivity

Those of you who know me personally know that nothing gets me on my soapbox quicker than a discussion about measuring productivity. Just in the last week I have been asked three times how to measure it in Agile. I was surprised to notice that I had not yet put my thoughts on paper (well, in a blog post). This is well overdue, so here I share my thoughts.

Let’s start with the most obvious: productivity measures output, not outcome. The business cares about outcomes first and outputs second; after all, there is no point in creating Betamax cassettes more productively than a competitor if everyone buys VHS. Understandably, it is difficult to measure the outcome of software delivery, so we end up talking about productivity. Having swallowed this pill, and being unable to give anything but anecdotal guidance on how to measure outcomes, let’s look at productivity measurements.

How not to do it! The worst possible way that I can think of is to measure literally based on output: think of widgets, Java classes or lines of code. If you measure this output, you are at best not measuring something meaningful and at worst encouraging bad behaviour. Teams that focus on creating an elegant, easy-to-maintain solution with reusable components will look less productive than the ones copying things or creating new components all the time. This is bad. And think of the introduction of technology patterns like stylesheets: all of a sudden a redesign only requires updating one stylesheet rather than all 100 web pages. On paper this looks like a huge productivity loss, updating 1 stylesheet instead of 100 pages in a similar timeframe. Innovative productivity improvements will not be accurately reflected by this kind of measure, and teams will not look for innovative ways as much, given they are measured on something different. Arguably function points are similar, but I have never dealt with them, so I will reserve judgement until I have firsthand experience.

How to make it even worse! Yes, widget- or line-of-code-based measurements are bad, but it can get even worse. If we base measurements on output, we not only fail to incentivise teams to look for reuse or componentisation of code, we are also in danger of destroying their sense of teamwork by measuring what each team member contributes. “How many lines of code have you written today?” I have worked with many teams where the best coder writes very little code, because he is helping everyone else around him. The team is more productive through this than it would be if he wrote lots of code himself. He multiplies the team’s strength rather than growing its output linearly by doing more himself.

Okay, you might say that this is all well and good, but what should we do? We clearly need some kind of measurement. I completely agree. Here is what I have used and I think this is a decent starting point:

Measure three different things:

  • Delivered Functionality – You can measure this by how many user stories or story points you deliver. If you are not working in Agile, you can use requirements, use cases or scenarios; anything that relates to what the user actually gets from the system. This is closest to measuring outcome and hence the most appropriate measure. Of course these items come in all different sizes, and you would be hard pressed to strictly compare two data points, but the trend should be helpful. If you do some normalisation of story points (another great topic for a soapbox), that will give you some comparability.
  • Waste – While it is hard to measure productivity and outcomes, it is quite easy to measure the opposite: waste! Of course you should decide contextually which elements of waste you measure, and I would be careful with composites unless you can translate them to money (e.g. “all the waste adds up to USD 3M”, not “we have a waste index of 3.6”). Composites of elements as diverse as defects, manual steps, process delays and handovers are difficult to understand. If you cannot translate these to dollars, just choose two or three main waste factors and measure them. Once those are under control, find the next one to measure and track.
  • Cycle time – This is the metric that I consider meaningful above all others. How long does it take to get a good idea implemented in production? Use the broadest definition that you can measure, then break it down into sub-components to understand where your bottlenecks are and optimise those (a sketch follows below). Many of these will be influenced by the level of automation you have implemented and the amount of lean process optimisation you have done.
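
To make cycle time concrete, here is a minimal sketch of how you could compute it from a ticket export. It assumes a CSV with illustrative column names (idea_date, production_date); adapt them to whatever your tracker actually exports, as nothing here depends on a specific tool.

```python
import csv
from datetime import datetime
from statistics import median

def cycle_times(path):
    """Read a ticket export and return cycle times in days.

    Assumes illustrative CSV columns idea_date and production_date
    holding ISO dates; map these to your own tracker's fields.
    """
    days = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            start = datetime.fromisoformat(row["idea_date"])
            end = datetime.fromisoformat(row["production_date"])
            days.append((end - start).days)
    return days

if __name__ == "__main__":
    days = sorted(cycle_times("tickets.csv"))
    print(f"median cycle time: {median(days)} days")
    # A high percentile shows the slow tail, often where the bottlenecks hide.
    print(f"85th percentile:   {days[int(len(days) * 0.85)]} days")
```

Once you have the end-to-end number, the same approach applied to intermediate timestamps (idea to backlog, code complete to production, and so on) will show where the time actually goes.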

This is by no means perfect. You can game these metrics just like many others and sometimes external factors influence the measurement, but I strongly believe that if you improve on these three measures you will be more productive.

There is one more caveat to mention. You need to measure exhaustively and in an automated fashion. The more you rely on just a subset of the work, and the more you track activities manually, the less accurate these measures will be. This also means you need to measure things that don’t lead to functionality being delivered, like paying down technical debt, analysing requests for functionality that you end up not implementing, or defect triage. There is plenty of opportunity to optimise in this space: paying technical debt down quicker, validating feature requests quicker, reducing feedback cycles to shorten defect triage times.
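
As a sketch of what “measure exhaustively” can look like in practice, the snippet below counts every type of tracked work, not just stories, so technical debt and triage show up in the numbers. The item types and points are made up for illustration; in practice they would come from your tracker.

```python
from collections import Counter

# Illustrative work items; in practice these come from your tracker,
# and the type names would match your own workflow.
items = [
    {"type": "story", "points": 5},
    {"type": "tech_debt", "points": 3},
    {"type": "defect_triage", "points": 2},
    {"type": "story", "points": 8},
    {"type": "rejected_request", "points": 1},
]

# Sum the effort per type of work so non-feature work is visible too.
totals = Counter()
for item in items:
    totals[item["type"]] += item["points"]

grand_total = sum(totals.values())
for work_type, points in totals.most_common():
    print(f"{work_type:17s} {points:3d} pts  {points / grand_total:5.1%}")
```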

For other posts in the Agile reporting series, look here: Agile Reporting at the enterprise level – where to look? (Part 1 – Status reporting)

Here is a related TED talk about productivity and the impact of too many rules and metrics by Yves Morieux from BCG.

Agile Reporting at the enterprise level – where to look? (Part 1 – Status reporting)

In this week’s post I will talk about a topic close to my heart: how do you manage Agile projects at some scale? And especially, how do you report on progress and use the lessons from your projects effectively across the enterprise? Let’s start by looking at what we want to get out of the reporting.

What should a good report show you:

  1. How has the scope changed over time?
  2. How much scope have we successfully completed?
  3. How is the team progressing towards the goal over time?
  4. How are we doing with cost?
  5. How is this team progressing compared to other teams?
  6. Which teams should I help to get them back on track? 

So here is the first example of a “traditional” Agile release burn-up that gives us information for points 1-3:
[Figure: release burn-up]

So how do you read this report? The top line shows how the target scope developed over time. In this case you can see that the number of story points required moved up and down a bit as stories were delivered; this is pretty normal for an Agile project, nothing to worry about here. We have an ideal line to show what the “ideal” amount of accepted story points would be; of course no team ever delivers along this line. If one does, I would ask a few questions, as it just looks too good to be true – perhaps the truth is being bent a bit in those cases to avoid explaining the difference…

The actual burn-up and the predictions that the team tracks lead me to the conclusion that this will be close, but that they can make it. This is a team I will keep an eye on going forward, to see whether they catch up as predicted. As the overall trend of total scope is also slightly downwards, there is a chance that it continues to a number even below 911 story points, which would be good for this team.
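
For readers who want to build such a chart themselves, here is a minimal sketch in Python with matplotlib. All the numbers are invented to resemble the shape described above: a scope line settling around 911 points and an accepted line trailing the ideal.

```python
import matplotlib.pyplot as plt

sprints = list(range(1, 11))
# Total scope moves up and down a bit over time, as in a normal Agile project.
total_scope = [880, 900, 930, 920, 915, 913, 911, 911, 911, 911]
# Cumulative accepted points; only eight of ten sprints are done so far.
accepted = [60, 140, 210, 300, 380, 470, 560, 650]
# The "ideal" line: a straight path from zero to the final scope target.
ideal = [total_scope[-1] * s / len(sprints) for s in sprints]

plt.plot(sprints, total_scope, label="Total scope")
plt.plot(sprints[:len(accepted)], accepted, label="Accepted")
plt.plot(sprints, ideal, "--", label="Ideal")
plt.xlabel("Sprint")
plt.ylabel("Story points")
plt.title("Release burn-up")
plt.legend()
plt.show()
```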

Challenges with Agile reporting (points 4 & 5) 

If you are like me and work in an enterprise environment, it is likely that your teams use different iteration schedules, don’t have an aligned story point structure and are of different sizes. If you look at their release burn-up graphs, you can hardly compare them, and looking at many across the enterprise does not necessarily enable you to compare their progress. Here is where my favourite report comes in – the relative burn-up, which answers point 5 from our list of requirements.

[Figure: relative burn-up]

The beauty is that you can show all kinds of things on this graph. In the example shown here, we track the progress of accepted scope against the budget (point 4) we have used (and yes, we are falling behind; it looks like we need more budget to deliver our scope). Of course you can do the same for scope vs. schedule. And because these reports are relative to 100% of scope, budget or schedule, you can now compare your status reports across teams. We are all aware that progress is not linear and that the ideal line is mostly to be ignored, but if, after the release has finished, you look back at the teams that were successful and those that had challenges, you can look for patterns. What percentage of scope did successful teams deliver after 50% of their schedule? How much budget did they have left after they completed 90% of the scope? Through this insight you can develop the portfolio view that I describe later.
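
A minimal sketch of the normalisation that makes this comparison possible: divide each team’s accepted scope and spent budget by its own totals, and teams of any size land on the same 0–100% axes. The team data below is invented for illustration.

```python
# Each team reports in its own units (points, millions of budget);
# dividing by the team's own totals makes them comparable.
teams = {
    "Team A": {"accepted": 650, "scope": 911, "spent": 1.9, "budget": 2.4},
    "Team B": {"accepted": 300, "scope": 400, "spent": 0.8, "budget": 1.0},
}

for name, t in teams.items():
    scope_pct = t["accepted"] / t["scope"]   # share of scope accepted
    budget_pct = t["spent"] / t["budget"]    # share of budget consumed
    flag = "falling behind" if budget_pct > scope_pct else "on track"
    print(f"{name}: {scope_pct:5.1%} of scope for {budget_pct:5.1%} of budget ({flag})")
```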

One of the biggest challenges: How accurate is the data?

Before I do that, I want to share one more challenge: real-time data. In my experience it is near impossible to get good reporting if you have to manually collate the data and/or the report is used purely for outside stakeholders. To make your reporting effective, explore ways to generate the report automatically from data that your team already provides, rather than making it a separate activity. And to avoid the challenge of stale data, make it part of your stand-up to look at the report; this will help keep people motivated to update their data. Ideally, in any status meeting, you and your stakeholders look at the tool you use and not at extracts, Excel sheets or PowerPoint slides; but I have not seen many organisations doing this successfully.
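
To illustrate the “automatic, based on data the team already provides” point: a report can read straight from the tracker on every run. The URL and JSON shape below are hypothetical stand-ins; substitute whatever export your tool actually offers.

```python
import json
import urllib.request

# Hypothetical export endpoint; replace with your tracker's real one.
EXPORT_URL = "https://tracker.example.com/api/export/release-42.json"

with urllib.request.urlopen(EXPORT_URL) as resp:
    data = json.load(resp)

# Assumed JSON shape: {"stories": [{"points": 5, "status": "accepted"}, ...]}
accepted = sum(s["points"] for s in data["stories"] if s["status"] == "accepted")
total = sum(s["points"] for s in data["stories"])
print(f"Accepted: {accepted}/{total} points ({accepted / total:.1%})")
```

Schedule something like this on the build server and the report cannot go stale, because there is no manual collation step left to skip.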

And now to the promised portfolio view (point 6):
How to keep track of many projects or teams in your organisation? In the manual for Agile it says that you should get your status by attending stand-ups. How do you do that if you have 20 teams working at the same time? I think we all understand that some kind of reporting will be required and as long as it is helpful and used for the right purpose (to provide help where required, not to call teams out where they struggle) we all want that kind of visibility. If you have the relative reporting in place that I mentioned above, you can now use this to get a good view of your teams and you can chart your learning from the past into the report. It could look like this (this is still conceptual as I have not yet implemented it on my program):

[Figure: portfolio map]

If you use the relative burn-ups, you can draw each of the teams on a portfolio map like the one above. Depending on how far they are through their scheduled iterations and how much scope has been accepted, you position them on this map. From your experience with successful and challenged teams you know which areas are green, amber and red (this will differ from organisation to organisation). Now you can quickly see which teams are worth a closer look. In the above example, the size of the team (size of bubble) and the planned release date (colour) are also shown. As an enterprise coach I would talk to the team represented by the large red bubble at 50% of schedule and the one represented by the small red bubble at 85% of their schedule. At this point you will have to understand the context of these teams to see whether they need help or not. This diagram makes a watermelon status (appearing green from the outside but really being red) less likely, as it is based purely on the Agile burn-up information and your experience in the organisation. I will let you know how well this works once I have implemented it successfully.
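
Here is a sketch of how such a portfolio map could be drawn with matplotlib. The team data is invented, and for brevity colour encodes the red/amber/green assessment directly rather than the planned release date; the thresholds would come from your own history of successful and challenged teams.

```python
import matplotlib.pyplot as plt

# (name, share of schedule elapsed, share of scope accepted, team size)
teams = [
    ("A", 0.50, 0.20, 40),  # large team, far behind at mid-schedule
    ("B", 0.85, 0.55, 8),   # small team, well behind late in the release
    ("C", 0.60, 0.58, 15),  # roughly on track
]

def rag(schedule, scope):
    """Illustrative thresholds; derive real ones from past releases."""
    if scope < schedule - 0.20:
        return "red"
    if scope < schedule - 0.05:
        return "orange"
    return "green"

plt.plot([0, 1], [0, 1], "--", color="grey", label="On-track diagonal")
for name, schedule, scope, size in teams:
    plt.scatter(schedule, scope, s=size * 30, color=rag(schedule, scope), alpha=0.6)
    plt.annotate(name, (schedule, scope))

plt.xlabel("Schedule elapsed")
plt.ylabel("Scope accepted")
plt.title("Portfolio map")
plt.legend()
plt.show()
```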

Now that one aspect of reporting is done (status), I will talk about how to report on delivery health in the next post, where we will cover a delivery health scorecard and cumulative flow diagrams. As always, reach out if you have thoughts or perhaps even good examples.