
It’s time to make Application Management Sexy Again

A few weeks ago I was in a group of people and someone made the statement that application management (AM) is a commodity, and most people in the group agreed. I didn’t say anything, but I think application management is one of the most exciting places to be today. There are so many technology trends that apply to AM: Agile, DevOps, artificial intelligence, etc.

When I say application management I mean some level of operations (e.g. level 3 support and code fixes) and some level of application development/maintenance (smaller changes that don’t require projects). The nature of those activities is that they are frequent and small, which means we have a lot of opportunity to learn and improve. It’s the ideal testbed to showcase how successful modern engineering practices can be, assuming the volume of work is large enough to justify the investment.

Let’s look at two dimensions of this: the code changes and the platform services.

The product team making the actual code changes should be responsible for changes for both new scope and production defects. This team needs to reserve some capacity for urgent production defects, but the bulk of production defects are prioritised against new scope and simply form part of normal delivery. This is possible because the release frequency is high (at least monthly) and the team can leverage platform services like CI/CD, feature flags, automated environment provisioning and monitoring. The team collaborates with the platform services team in many ways and focuses on the application it owns. Both Kanban and Scrum are possible methodologies for this team, making this a great Agile working space.
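To make the feature flag part of this concrete, here is a minimal sketch in Python; the flag file, flag names and helper functions are illustrative assumptions rather than any specific product’s API:

```python
import json

def load_flags(path="flags.json"):
    """Read feature flags from a config file that the pipeline can update
    independently of a code release; a missing file means all flags are off."""
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def is_enabled(flag, flags=None):
    flags = load_flags() if flags is None else flags
    return bool(flags.get(flag, False))

def render_invoice(order_id, flags=None):
    # A defect fix and a new piece of scope can ship in the same frequent
    # release, each behind its own flag, and be switched on independently.
    if is_enabled("new_invoice_layout", flags):
        return f"invoice {order_id} (new layout)"
    return f"invoice {order_id} (legacy layout)"

if __name__ == "__main__":
    print(render_invoice(42, flags={"new_invoice_layout": True}))
    print(render_invoice(42))  # falls back to the legacy path if flags.json is absent
```

The point of the sketch is simply that new scope and production fixes can ride in the same frequent release and still be switched on independently, which is what lets one team own both types of work.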

The platform team provides the services which make life easier for the product teams. It also provides the first levels of support in a centralised fashion to protect the product team from too many distractions. There are two goals for this team: to provide platform services to the product team (like CI/CD, environment provisioning and monitoring) and to keep improving the experience for end-users. Fast ticket resolution is not the right measure for the latter; the real measure is how much the overall ticket volume comes down. Advanced monitoring of performance, functionality and infrastructure is used to find problems before the customer finds them, and each user-created ticket is an opportunity to improve the monitoring for next time. Chatbots and self-service provide the fastest service for standard requests, while AI/ML learns from tickets to turn more and more requests into standard ones and to route tickets to the best team or person. Analytics across all aspects of the platform makes it possible to find patterns and reconcile them against issues. The platform team collaborates with the product team to make sure the application-related services keep improving. Over time this team becomes a small team of automation experts rather than a large team of operators working through manual tickets.
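As a rough illustration of the ticket routing idea, here is a minimal Python sketch; the team names, keywords and the rule-based matching are assumptions for illustration, and a real setup would train a classifier on historical tickets instead:

```python
from collections import Counter

# Illustrative routing rules; in a real setup these would be learned from
# historical tickets (e.g. by a text classifier) rather than hand-written.
ROUTING_KEYWORDS = {
    "platform-team": ["deployment", "pipeline", "environment", "provisioning"],
    "product-team": ["invoice", "checkout", "feature", "defect"],
    "infrastructure": ["disk", "cpu", "memory", "timeout", "network"],
}

def route_ticket(text):
    """Return the team whose keywords best match the ticket text,
    falling back to centralised first-level support."""
    words = Counter(text.lower().split())
    scores = {
        team: sum(words[keyword] for keyword in keywords)
        for team, keywords in ROUTING_KEYWORDS.items()
    }
    best_team, best_score = max(scores.items(), key=lambda item: item[1])
    return best_team if best_score > 0 else "first-level-support"

if __name__ == "__main__":
    print(route_ticket("Checkout shows a defect in the invoice total"))
    print(route_ticket("Nightly pipeline failed while provisioning the test environment"))
```

Even this crude version shows the direction of travel: every resolved ticket becomes data that sharpens the routing and the monitoring, which is what shrinks the ticket volume over time.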

Seriously, if you look at the above, how are you not super excited about this space? If you do this right you get to use a lot of modern engineering practices to improve the delivery capability of your organisation. The money and capacity that free up once you have implemented the first improvements can then be used to make more and more meaningful changes to the products and services. And you can do this with a focus on the creative tasks rather than the firefighting and repetitive manual work that consumed application management teams in the past.

I am working with a couple of organisations now that are as excited as I am about this space and have a similar vision. I am very much looking forward to seeing what results we will achieve together.

The simple Agile question

A few weeks ago I published a blog post on simple DevOps questions you can ask your team to find out how mature your adoption of DevOps practices is (you can find it here). There is a case to be made that you should have a similar question for your Agile adoption. The problem is that there are so many different flavours of Agile and that no single one is better than the others (no matter what all those zealots out there are saying). The large number of variations makes it hard to analyse the maturity of Agile unless you understand the context of the organisation and can evaluate the delivery success of the team. But I have found a question that helps me differentiate between the “false” or “hybrid” Agile flavours and “real” Agile. I call it my minimum definition of Agile: the bar something has to clear before I can engage with it and believe it can be successful. Here we go, Mirco’s minimum Agile definition:

“One team produces a working and tested version of software within each strict timebox of equal length.”

You might think this is a really low bar for an Agile definition, but it rules out a lot of the work that I see being done under the name “Agile”. It rules out all of the following patterns, which are suboptimal if not dangerously risky for delivery success:

  • Separate design, build and/or test teams – often this is a remnant of existing vendor relationships or strong “fiefdoms” that the Agile change has not been able to break down. For me this is an absolute showstopper for Agile success. Pause and see how you can enable one joint team.
  • Design sprints, followed by build sprints, followed by test sprints – Well, let’s call this what it is: Waterfall delivery 😉
  • Teams being changed and swapped – Many of the Agile benefits only manifest once the team has been working together for a while; if you need to keep changing people or ramping up and down, then there is something wrong with the work preparation.
  • Agile teams building lots of stuff but being unable to validate it end-to-end – This is perhaps a more advanced problem, but I keep seeing it a lot these days. Agile teams deliver software but cannot test it sufficiently, and while the velocity is high, once the product gets tested end-to-end it falls apart. There are three root causes that I commonly see: unavailability of test environments, lack of maturity in DevOps practices and lack of up-front planning of end-to-end scenarios.
  • Sprints that have different lengths or are changed at the last minute – Agile is more successful because it is more rigorous and forces you to address problems early and often. If you weaken the conditions you are stepping onto a slippery slope that will impact you more and more. Step off the slope and fix the problems rather than changing the sprint timebox.

I have been happy with this definition for now as it weeds out the most common bad patterns; perhaps it helps you too. Let me know if you have your own question or definition that you use.