Monthly Archives: March 2016

Thoughts on the 9th State of Agile Survey

The 9th Annual State of Agile report had some confirming and some surprising results. If you have not read it, it is worth a look here. And yes, I know that number 10 is coming out soon, but there is still value in looking at the 9th one. Besides the usual statistics around how much Agile people are doing (the key number for me: only 5% work for fully traditional organisations), it provided interesting answers to some of the more common questions that I hear. The answers are often not surprising, but there was the odd exception. Let’s jump in:

What are typical benefits of Agile?

The report highlights the following three to be the top answers (which have been stable over the last few years):
– Ability to manage changing priorities
– Team productivity
– Project visibility

This sits well with me, as the usual suspects that we have to debunk are not in this list, e.g. faster time to market (only at rank 7) and cheaper delivery (not in the list at all).

How do you measure the success of Agile?

Uhhh, a tricky one, this. I have heard the question many times and honestly have struggled to give an answer that satisfies senior stakeholders. So what did the report say?
– On-time delivery
– Product Quality
– Customer Satisfaction

Good answers; I think this reflects well what success looks like. It is interesting that some of the items mentioned under top benefits show up much lower here: managing priorities obviously speaks to product quality and customer satisfaction, but team productivity (29% measure it) and visibility (30% measure it) are much further down the list. An open question for me is how people would measure productivity in the first place (see my other blog on this).
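As a thought experiment on the top success measure, on-time delivery, here is a minimal sketch of how a team might track it from committed versus actual dates. The release names and dates are entirely made up for illustration; the report itself does not prescribe any formula.

```python
# Hypothetical sketch: compute an on-time delivery rate by comparing
# committed release dates against actual delivery dates.
from datetime import date

# Illustrative data, not from the survey.
releases = [
    {"name": "R1", "committed": date(2016, 1, 15), "delivered": date(2016, 1, 14)},
    {"name": "R2", "committed": date(2016, 2, 1),  "delivered": date(2016, 2, 10)},
    {"name": "R3", "committed": date(2016, 3, 1),  "delivered": date(2016, 3, 1)},
]

def on_time_rate(rels):
    """Fraction of releases delivered on or before their committed date."""
    on_time = sum(r["delivered"] <= r["committed"] for r in rels)
    return on_time / len(rels)

print(f"On-time delivery: {on_time_rate(releases):.0%}")  # prints: On-time delivery: 67%
```

Even a crude rate like this gives senior stakeholders a trend to watch, which is usually more useful than a one-off number.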

How do you scale Agile?

As often as I come across SAFe, it is only used by 19% of the respondents, with all the other frameworks (DAD, LeSS) even lower. The largest proportion just uses Scrum of Scrums or some custom-created method.

Of the top 5 tips for scaling Agile, at least two are in my top lessons learned too: consistent processes and practices, and implementation of a common tool across teams. I agree with the other 3 tips as well: executive sponsorship, having a coach, and creating an internal support team.

What tools and practices do you use?

Wow – I was surprised, though perhaps I should not have been, that Excel and Project are the most-used tools… seriously, are we not better than that? As for the real Agile tools, Jira and VersionOne take the cake, with TFS close behind; IBM is much, much lower. This matches my experience as well: Jira is certainly the one most used, and few people complain about it, especially when integrated with Confluence.

There is also information on the practices used, and I was shocked to see that only 69% use retrospectives and only 48% have a dedicated product owner. Overall the adoption rates of the practices feel very low; perhaps there is some fundamental flaw in the data if it includes people who run Waterfall projects but use a select few Agile practices… hmmm…

What makes Agile fail?

Good information here as well: lack of experience is the main reason Agile fails, which means we should make sure experienced Scrum Masters and coaches support new projects. Lack of management support and a non-aligned company culture are the other two main reasons. Those are a bit more difficult to tackle, but it is important to be aware of them as you set off on an Agile project.

Overall I like the results in the report and it certainly helps to see some market data validating points that I keep making with my clients.

Thoughts on the State of DevOps 2015 report


So I have been re-reading the State of DevOps report (https://puppetlabs.com/2015-devops-report ) recently on the plane to my project and found a few interesting aspects that are worth highlighting – especially as they match my experience as well. It presents a much more balanced view than the more common unicorn approach to DevOps.

First of all – I am glad that the importance of deployments is highlighted. To me this is the core practice of DevOps: without reliable and fast deployments, all the other practices are not effective, whether that is test automation or environment provisioning. So this part of the report is clearly one of my favourites:

“Deployment pain can tell you a lot about your IT performance. Do you want to know how your team is doing? All you have to do is ask one simple question: ‘How painful are deployments?’ We found that where code deployments are most painful, you’ll find the poorest IT performance, organizational performance and culture.” – Page 5

And there is a whole page discussing this aspect (page 26), but here are the summary steps to take:

  • Do smaller deployments more frequently (i.e., decrease batch sizes).
  • Automate more of the deployment steps.
  • Treat your infrastructure as code, using a standard configuration management tool.
  • Implement version control for all production artifacts.
  • Implement automated testing for code and environments.
  • Create common build mechanisms to build dev, test and production environments.

And even the speed-to-market figure, with around 50% having more than a month of lead time to production, feels about right (page 11).

The two models on pages 14 and 15 are quite good. There is not much new in them, but the choice of elements speaks to changes in the DevOps discussion that I have seen over the last 18 months, and which I see on my projects too – and it highlights visualisation of data as one key ingredient. It’s such an important one, yet one that not many spend their energy (and money) on, probably because there is no obvious and easy tool choice (CapitalOne, for example, developed their own custom dashboard – https://github.com/capitalone/Hygieia & http://thenewstack.io/capital-one-out-to-display-its-geekdom-with-open-source-devops-dashboard/ ). The report goes on to advise: “Turn data into actionable, visible information that provides teams with feedback on key quality, performance, and productivity metrics” – Amen.

Here are the two models from the report:

(Model 1 and Model 2 images from the report.)
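To make the “actionable, visible information” advice concrete, here is a minimal sketch of turning raw deployment records into the kind of summary a dashboard would display. The data and field names are invented for illustration; a real dashboard (Hygieia, for instance) would pull these numbers from the CI/CD tools themselves.

```python
# Hypothetical sketch: aggregate raw deployment data into visible metrics.
from statistics import mean

# Illustrative records, not real project data.
deployments = [
    {"duration_min": 12, "failed": False},
    {"duration_min": 45, "failed": True},
    {"duration_min": 10, "failed": False},
    {"duration_min": 11, "failed": False},
]

def summarise(deploys):
    """Reduce raw records to the headline numbers a team should see."""
    failure_rate = sum(d["failed"] for d in deploys) / len(deploys)
    return {
        "deploys": len(deploys),
        "failure_rate": round(failure_rate, 2),
        "avg_duration_min": round(mean(d["duration_min"] for d in deploys), 1),
    }

print(summarise(deployments))
# prints: {'deploys': 4, 'failure_rate': 0.25, 'avg_duration_min': 19.5}
```

Even three numbers like these, refreshed automatically and put on a wall, give a team faster feedback than any quarterly report.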

And of course based on my recent discussion about DevOps for Systems of Record (Link here) I was glad to see the following statement in the report:

 “It doesn’t matter if your apps are greenfield, brownfield or legacy — as long as they are architected with testability and deployability in mind, high performance is achievable. We were surprised to find that the type of system — whether it was a system of engagement or a system of record, packaged or custom, legacy or greenfield — is not significant. Continuous delivery can be applied to any system, provided it is architected correctly.” – Page 5

Overall I think this report is really useful and represents what I see in my projects and at my clients very well. It also discusses the kind of investments required to really move forward (page 25) and provides guidance on many aspects of the DevOps journey. I am looking forward to seeing what the next year will bring.