Tag Archives: devops

The Testing Centre of Excellence is dead, Long Live the Centre of Quality Engineering

“Big, centralised Test Centres of Excellence are a thing of the past” – Forrester 2013

Agile has clearly taken the IT industry by storm. In my early days trying to bring Agile to projects a few years back, I had to explain even what Agile means. Nowadays pretty much all my clients are using some kind of Agile somewhere in their organisation. Now we see the challenges of how to support delivery in an Agile fashion and how to optimise for speed to market. One common obstacle is the duration and effort of testing. Too many organisations still rely on large-scale manual inspection instead of an optimised, automated approach to testing. Of course, in most cases not everything can be automated, but it is far more common to see too little automation than too much.

“Quality comes not from inspection, but from improvement of the production process.” – Dr. W. Edwards Deming

The above picture illustrates the shift that is required to really support speed to market. Quality needs to be built into the development process from the beginning, and every opportunity for early feedback should be used to avoid the late detection that comes from manual inspection. There is plenty of good evidence showing that a delay in finding defects is very costly. People don’t necessarily remember what exactly they were thinking when they made a specific change a few weeks ago. Or worse, the fix might be given to someone completely different, who first has to understand the context of the piece of code he is trying to fix.

For many organisations this presents a challenge, as most testing is organised centrally in a Testing Centre of Excellence. This needs to change.

Rather than having this large group of centrally organised testers, we need people with a quality focus embedded in our Agile teams. The manual tester who just executes test scripts is mostly a thing of the past. The new roles require people who understand the business context and can come up with test strategies that minimise the risk in our applications. These test experts just think differently to other people and hence bring a very specific mindset that we need.

“Good testers are the kind of people who look both ways before crossing a one-way street”

Besides creating test strategies, they will also look after exploratory testing, which finds things outside of the usual boundaries. And then we need test automation developers, who should also be embedded in the Agile teams. This decentralisation of quality experts does not mean that there is no need for a central function. Test automation requires a technical framework that should be owned by the central function. Additionally, the central function should make sure that lessons learned and other experiences are shared, and that the professions of automation developer, test strategist and exploratory tester continue to evolve. Providing networking opportunities, knowledge-sharing sessions and more formal guidance on how to embed quality as early as possible in the development process should be the new focus. This also means collaborating with the development and engineering teams on coding standards, on the way static code analysis is used, and on what can be covered by automated unit testing. At the most basic level, it is a shift from testing to quality engineering.

The nature of the central function will shift from a Testing Centre of Excellence to a Centre of Quality Engineering. Will this be a difficult transformation? Yes, it will be, and I will leave you with a thought that all TCoE practitioners (and testers, for that matter) should take to heart:

“It is not necessary to change. Survival is not mandatory.” – Dr. W. Edwards Deming

A personal DevOps Journey or A Never-Ending Journey to Mastery

I spent the last few days at a technical workshop where I spoke about Agile and DevOps and while preparing my talks I did a bit of reflection. What I realised is that my story towards my current level of understanding might be a good illustration of the challenges and solutions we have come up with so far. Of course everyone’s story differs, but this is mine and sharing it with the community might help some people who are on the journey with me.

As a picture speaks more than a thousand words, here is the visual I will use to describe my journey.
(Note: The Y-axis shows success or as I like to call it the “chance of getting home on time”, the X-axis is the timeline of my career)

Personal Journey

The Waterfall Phase – a.k.a. the Recipe Book

When I joined the workforce from university, after doing some research into compilers, self-driving cars and other fascinating topics that I was allowed to explore in the IBM research labs, I was immediately thrown into project work. And of course, as was the custom, I went to corporate training and learned about our waterfall method and the associated processes and templates. I was amazed; project work seemed so simple. I had the methodology, processes and templates, and all I had to do was follow them. I set out to master this methodology, and initial success followed the better I got at it. I had discovered the “recipe book” for success that described exactly how everyone should behave. Clearly I was off to a successful career.

The Agile Phase – a.k.a. A Better Recipe Book

All was well until I took on a project for which someone else had created a project plan that saw the project completed in 12 weeks’ time. I inherited the project plan and Gantt chart and was off to deliver. Very quickly it turned out that the requirements were very unclear and that even the customer didn’t know everything we needed to know to build a successful solution. The initial 4 weeks went by, and true to form I communicated 33% completion according to the timeline, even though we clearly hadn’t made as much progress as we should have. Walking out of the status meeting, I realised that this could not end well. I set up a more informal catch-up with my stakeholders and told them about the challenge. They understood the challenge ahead and asked me what to do. Coincidence came to my rescue. On my team we had staffed a contractor who had worked with Agile before, and after a series of coffees (and beers, for that matter) he had me convinced to try this new methodology. As a German, I lived very much up to the stereotype: I found it very hard to let go of my beloved Gantt charts and project plans and the detailed percentage-complete status I had received from my team every week. Very quickly we got into a rhythm with our stakeholders and delivered increments of the solution every two weeks. I slowly let go of some of my learned behaviour as a waterfall project manager and became a scrum master. The results were incredible: the team culture changed, the client was happier, and even though we delivered the complete solution nowhere near within 12 weeks (in fact it was closer to 12 months), I was convinced that I had found a much better “recipe book” than the one I had before. Clearly, if everyone followed this recipe book, project delivery would be much more successful.

The DevOps Phase – a.k.a. the Rediscovery of Tools

And then a bit later another engagement came my way. The client wanted to get to market faster, and we had all kinds of quality and expectation-setting issues. So clearly the Agile “recipe book” would help again. And yes, our first projects were a resounding success; we quickly grew our Agile capability, and more and more teams and projects adopted Agile. It quickly became clear, however, that we could not reduce the time to market as much as we liked, and often the Agile “recipe book” created a kind of cargo cult – people stood up in the morning, used post-its and considered themselves successful Agile practitioners. Focusing on the time-to-market challenge, I put a team in place to create the right Agile tooling to support the Agile process through an Agile Lifecycle Management system, and introduced DevOps practices (well, back then we didn’t call it DevOps yet). The intention was clear: as an engineer, I thought we could solve the problem with tools and force people to follow our “recipe book”. Early results were great – we saved a lot of manual effort, tool adoption was going up, and we could derive status from our ALM. In short, my world was fine, and I went off to do something different. When I came back to this project a while later, to my surprise the solution I had put in place earlier had deteriorated. Many of the great things I had put in place had disappeared or changed. I wanted to understand what had happened and spent some time investigating. It turned out that the people involved had made small decisions along the way that slowly lost sight of the intention behind the tooling solution and the methodology we used. No big changes, just death by a thousand cuts. So how was I going to fix this one…

The Lean Phase – a.k.a. Finally I Understand (or Think I do for Now)

Something that I should have known all along became clearer and clearer to me: methodology and tools will not change your organisation. They can support change, but culture is the important ingredient that was missing. As Drucker says: “Culture eats strategy for breakfast”. It is so very true. But how do you change culture… I am certainly still on this journey, and cultural change management is clearly the next frontier for me. I have learned that I need to teach people the principles behind Agile and DevOps, which include elements of Lean, Systems Thinking, the Theory of Constraints, Product Development Flow and Lean Startup thinking. But how do I really change the culture of an organisation? How do I avoid the old saying that “to change people, you sometimes have to change (read: replace) people”? As an engineer, I am pretty good with the process, tools and methodology side, but the real challenge seems to lie in organisational change management and organisational process design. And I wonder whether this is really the last frontier, or whether there will be a next challenge right after I have mastered this one…

The good news is that many of us are on this journey together, and I am confident that on the back of the great results we achieved with tools and methodology alone, truly great things lie ahead of us still as we master the cultural transformation towards becoming DevOps organisations.

What computer games can teach us about maturity models – Choose your own DevOps Adventure

Whether or not to use maturity models is an eternal question that Agile and DevOps coaches struggle with. We all know that maturity models have weaknesses: they can easily be gamed if they are used to incentivise and/or punish people, they are very prone to the Dunning-Kruger effect, and they are often vague. On the flip side, maturity models allow you to position yourself, your team or your company against a set of increasingly good practices, and striving for the next level can provide the motivation required to push ahead and implement the next improvement.

Clearly there are different reasons behind different kinds of maturity models. For a self-assessment and for setting a roadmap, a traditional maturity model like the Accenture DevOps maturity model does the job. There are many others available on the internet, so feel free to choose the one you like best.

At one of my recent clients we performed many maturity assessments across a wide variety of teams, technologies and applications. Such a large scope meant that we did not spend a lot of time with each team to assess maturity, and not surprisingly we got very different levels of response. We heard things like “Of course we do Continuous Integration, we have Jenkins installed and it runs every Saturday” – had this team not mentioned the second part of the sentence, we would probably have ticked the Continuous Integration box on the maturity sheet.

A few months later we were back in the same situation and needed to find a way to help teams self-assess their maturity in an environment where many DevOps concepts were not well known and different vendors and client teams were involved, which meant the actual maturity rating became somewhat political. I worried about this for a while, and then one night, while playing on my PC, inspiration hit me – I remembered the good old Civilization game and its technology tree:


Now if I could come up with a technology tree just like this for DevOps, I might be able to use it with teams to document the practices they have in place and what it takes to enable the next practice. Enter the DevOps technology dependency tree (sample below):

CD Technical Dependencies Tree

In this tree we created a definition and related metrics for each leaf, and each team could then use the tree to chart where they are and how they are progressing. This way each team chooses its own DevOps adventure. We also marked capabilities that the company needed to provide centrally, so that each team could leverage common practices that are strategically aligned (like a common test automation framework or deployment framework). The tree has been hugely successful at this specific client, and we continue to update it whenever we find a better representation or believe new practices should be represented.
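As a rough illustration, such a dependency tree can be captured as plain data and queried to show a team which practices it could “unlock” next. The practice names and prerequisites below are simplified examples for the sketch, not the actual client tree:

```python
# Illustrative sketch of a DevOps practice dependency tree; the practice
# names and their prerequisites are simplified examples.
PRACTICES = {
    "version_control": [],
    "automated_build": ["version_control"],
    "unit_test_automation": ["automated_build"],
    "continuous_integration": ["automated_build", "unit_test_automation"],
    "automated_deployment": ["continuous_integration"],
    "continuous_delivery": ["automated_deployment"],
}

def unlockable(adopted):
    """Practices whose prerequisites are all met but which are not yet adopted."""
    done = set(adopted)
    return sorted(
        practice
        for practice, prereqs in PRACTICES.items()
        if practice not in done and all(p in done for p in prereqs)
    )

print(unlockable(["version_control", "automated_build"]))  # ['unit_test_automation']
```

A team that has adopted version control and automated builds would see unit test automation as the next practice it can enable, just like picking the next technology in the game.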

Who would have thought that playing hours of computer games would come in handy one day…

Agile Reporting at the enterprise level (Part 2) – Measuring Productivity

Those of you who know me personally know that nothing gets me on my soapbox quicker than a discussion about measuring productivity. Just over the last week I have been asked three times how to measure it in Agile. I was surprised to notice that I had not yet put my thoughts on paper (well, in a blog post). This is well overdue, so here are my thoughts.

Let’s start with the most obvious: Productivity measures output and not outcome. The business cares about outcomes first and outputs second, after all there is no point creating Betamax cassettes more productively than a competitor if everyone buys VHS. Understandably it is difficult to measure the outcome of software delivery so we end up talking about productivity. Having swallowed this pill and being unable to give all but anecdotal guidance on how to measure outcomes, let’s look at productivity measurements.

How not to do it! The worst possible way I can think of is to measure literally based on output – think of widgets, Java classes or lines of code. If you measure this kind of output, you are at best not measuring something meaningful and at worst encouraging bad behaviour. Teams that focus on creating an elegant and easy-to-maintain solution with reusable components will look less productive than the ones copying things or creating new components all the time. This is bad. And think of the introduction of technology patterns like stylesheets: all of a sudden, for a redesign you only have to update one stylesheet and not all 100 web pages. On paper this would look like a huge productivity loss – updating 1 stylesheet versus updating 100 pages in a similar timeframe. Innovative productivity improvements will not be accurately reflected by this kind of measure, and teams will not look for innovative approaches as much, given they are measured on something different. Arguably function points are similar, but I have never dealt with them, so I will reserve judgement until I have firsthand experience.

How to make it even worse! Yes, widget or lines-of-code based measurements are bad, but it can get even worse. If we base measurements on these, we not only fail to incentivise teams to look for reuse or componentisation of code, we are also in danger of destroying their sense of teamwork by measuring what each team member contributes. “How many lines of code have you written today?” I have worked with many teams where the best coder writes very little code, because he is helping everyone else around him. The team is more productive through his doing this than it would be if he wrote lots of code himself. He multiplies the team’s strength rather than growing its output linearly by doing more himself.

Okay, you might say that this is all well and good, but what should we do? We clearly need some kind of measurement. I completely agree. Here is what I have used and I think this is a decent starting point:

Measure three different things:

  • Delivered Functionality – You can measure this by counting how many user stories or story points you deliver. If you are not working in Agile, you can use requirements, use cases or scenarios – anything that actually relates to what the user gets from the system. This is the closest to measuring outcome and hence the most appropriate measure. Of course these items come in all different sizes and you’d be hard pressed to strictly compare two data points, but the trend should be helpful. If you have done some normalisation of story points (another great topic for a soapbox), then that will give you some comparability.
  • Waste – While it is hard to measure productivity and outcomes, it is quite easy to measure the opposite: waste! Of course you should contextually decide which elements of waste you measure, and I would be careful with composites unless you can translate them to money (e.g. “all the waste adds up to 3 million USD”, not “we have a waste index of 3.6”). Composites of such diverse elements as defects, manual steps, process delays and handovers are difficult to understand. If you cannot translate these to dollars, just choose 2 or 3 main waste factors and measure them. Once they are under control, find the next one to measure and track.
  • Cycle time – This is the metric I consider the most meaningful of all. How long does it take to get a good idea implemented in production? You should use the broadest definition that you can measure and then break it down into sub-components to understand where your bottlenecks are and optimise those. Many of these will be impacted by the levels of automation you have implemented and the amount of lean process optimisation you have done.
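As a small sketch of the cycle-time idea, here is how it could be computed from work-item timestamps exported from an ALM tool. The items, IDs and field names are made up for illustration:

```python
from datetime import datetime

# Hypothetical work items exported from an ALM tool; IDs and field names are made up.
items = [
    {"id": "ST-1", "started": "2016-03-01", "live": "2016-03-15"},
    {"id": "ST-2", "started": "2016-03-03", "live": "2016-03-10"},
    {"id": "ST-3", "started": "2016-03-05", "live": "2016-03-26"},
]

def cycle_time_days(item):
    """Days from starting work on an item until it is live in production."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(item["live"], fmt)
            - datetime.strptime(item["started"], fmt)).days

times = [cycle_time_days(i) for i in items]
avg = sum(times) / len(times)
print(times, avg)  # [14, 7, 21] 14.0
```

Breaking the overall measure into sub-components (e.g. time in analysis, build, test, deployment) is just a matter of adding more timestamps per item and computing the same difference between each pair.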

This is by no means perfect. You can game these metrics just like many others and sometimes external factors influence the measurement, but I strongly believe that if you improve on these three measures you will be more productive.

There is one more caveat to mention. You need to measure exhaustively and in an automated fashion. The more you rely on just a subset of the work, and the more you manually track activities, the less accurate these measures will be. This also means you need to measure things that don’t lead to functionality being delivered, like paying down technical debt, analysing feature requests that never get implemented, and defect triage. There is plenty of opportunity to optimise in this space – paying technical debt down quicker, validating feature requests quicker, and reducing feedback cycles to shorten defect triage times.

For other posts of the Agile reporting series look here: Agile Reporting at the enterprise level – where to look? (Part 1 – Status reporting)

Here is a related TED talk about productivity and the impact of too many rules and metrics, by Yves Morieux from BCG.

DevOps in Scaled Agile Models – Which one is best?

I have already written about the importance of DevOps practices (or, for that matter, Agile technical practices) for Agile adoption, and I don’t think many people argue the contrary. Ultimately, you want those two things to go hand in hand to maximise the outcome for your organisation. In this post I want to take a closer look at popular scaling frameworks to see whether they explicitly or implicitly include DevOps. One could of course argue that the Agile models should focus on just the Agile methodology and associated processes and practices. However, given that the technical side is often the inhibitor to achieving the benefits of Agile, I think DevOps should be reflected in these models to remind everyone that software is created first and foremost by developers.

Let’s look at a few of the more well known models:

SAFe (Scaled Agile Framework) – This one is probably the easiest, as DevOps is called out in the big picture. I would, however, consider two aspects of SAFe as relevant for the wider discussion: the DevOps team and the System Team. While the DevOps team covers the aspects that have to do with deployment into production and the automation of that process, the System Team focuses more on development-side activities like Continuous Integration and test automation. For me there is a problem here, as it feels a lot like the DevOps team is the Operations team and the System Team is the Build team. I’d rather have them as one System/DevOps team with joint responsibilities. If you consider both of them as just concepts in the model and have them working closely together, then I feel you start getting somewhere. This is how I do it on my projects.

DAD (Disciplined Agile Delivery) – In DAD, DevOps is woven into the fabric of the methodology but not as clearly spelled out as I would like. DAD is a lot more focused on the processes (perhaps an inheritance from RUP, as both are influenced/created by IBM folks). There is, however, a blog post by “creator” Scott Ambler that draws all the elements together. I still feel that a bit more focus on the technical aspects of delivery in the construction phase would have been better. That being said, there are a few good references if you go down to the detailed level. The Integrator role has an explicit responsibility to integrate all aspects of the solution, and the Produce a Potentially Consumable Solution and Improve Quality processes call out many technical practices related to DevOps.

LeSS (Large-Scale Scrum) – In LeSS, DevOps is not explicitly called out but is well covered under Technical Excellence. Here it talks about all the important practices and principles, and the description of each of them is really good. LeSS has a lot less focus on telling you exactly how to put these practices in place, so it will be up to you to define which team or who in your team should be responsible (or, in true Agile fashion, perhaps it is everyone…).

In conclusion, I have to say that I like the idea of combining the explicit structure of SAFe with the principles and ideas of LeSS to create my own meta-framework. I will certainly use both as references going forward.

What do you think? Is it important to reflect DevOps techniques in a Scaled Agile Model? And if so, which one is your favourite representation?

8 DevOps Principles that will Improve Your Speed to Market

I recently got asked about the principles that I follow for DevOps adoptions, so I thought I’d write down my list of principles and what they mean to me:

  • Test Early & Often – This principle is probably the simplest one. Test as early as you can and as often as you can. In my ideal world we find more and more defects closer to the developer by a) running tests more frequently enabled by test automation, b) providing integration points (real or mocked) as early as possible and c) minimising the variables between environments. With all this in place the proportion of defects outside of the development cycle should reduce significantly.
  • Improve Continuously – This principle is the most important and hardest to do. No implementation of DevOps practices is ever complete. You will always learn new things about your technology and solution and without being vigilant about it, deterioration will set in. Of course the corollary to this is that you need to measure what you care about, otherwise all improvements will be unfocused and you will be unable to measure the improvements. You should measure metrics that best represent the areas of concern for you, like cycle time, automation level, defect rates etc.
  • Automate Everything – Easier said than done, this is the one most people associate with DevOps. It is the automation of all processes required to deliver software to production or any other environment. For me this goes further than that, it means automating your status reporting, integrating the tools in your ecosystem and getting everyone involved to focus on things that computers cannot (yet) do.
  • Cohesive Teams – Too often I have been in projects where the silo mentality was the biggest hindrance of progress, be it between delivery partner and client, between development and test teams or between development and operations. Not working together and not having aligned goals/measures of success is going to make the technical problems look like child’s play. Focus on getting the teams aligned early on in your process so that everyone is moving in the same direction.
  • Deliver Small Increments – Complexity and interdependence are the things that make your cycle time longer than required. The more complex and interdependent a piece of software is, the more difficult it is to test and to identify the root cause of any problem. Look for ways to make the chunks of functionality smaller. This will be difficult in the beginning but the more mature you get, the easier this becomes. It will be a multiplying effect that reduces your time to market further.
  • Outside-In Development – One of the most fascinating pieces of research I have seen is by Walker Royce on the success of different delivery approaches. He shows that delays are often caused by integration defects that are identified too late. They are identified late because that is the first time the software can be tested in an integrated environment. Once you find them, you might have to change the inner logic of your systems to accommodate the required changes to the interface. Now imagine doing it the other way around: you test your interfaces first, and once they are stable you build out the inner workings. This reduces the need for rework significantly. The principle holds for integration between systems or for modules within the same application, and there are many tools and patterns that support outside-in development.
  • Experiment Frequently – Last but not least you need to experiment. Try different things, use new tools and patterns and keep learning. It’s the same for your business, by using DevOps you can safely try new things out with your customers (think of A/B testing), but you should do the same internally for your development organisation. Be curious!
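To make the outside-in idea concrete, here is a minimal Python sketch: the consumer is written and tested against the agreed interface with a mocked backend, long before the real implementation exists. The service, method and account names are hypothetical:

```python
from unittest import mock

# Outside-in sketch: agree the interface first, mock the backend, and test the
# consumer before the real system exists. All names here are hypothetical.
def format_balance(service, account_id):
    """Consumer code written purely against the agreed interface."""
    balance = service.get_balance(account_id)
    return f"Account {account_id}: {balance:.2f}"

backend = mock.Mock()
backend.get_balance.return_value = 1234.5  # stub until the real backend is built

print(format_balance(backend, "47-11"))       # Account 47-11: 1234.50
backend.get_balance.assert_called_once_with("47-11")
```

Once the interface is stable and verified from the outside, the mocked backend can be replaced with the real implementation without touching the consumer code.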

If you follow these 8 principles I am sure you are on the right track to improve your speed to market.

Working with SIs in a DevOps/Agile delivery model

In this blog post (another of my DevOps SI series), I want to explore the contractual aspects of working with an SI (System Integrator). At the highest level, there are three models that I have come across:

  • Fixed Price contract – Unfortunately these are not very flexible and usually require a level of detail in defining the outcome that is counterproductive for Agile engagements. They do, however, incentivise the SI to use best practices to reduce the cost of delivery.
  • Time and Materials – This is the most flexible model and easily accommodates any scope changes. The challenge is that the SI has no incentive to increase the level of automation, because each manual step adds to the revenue the SI makes.
  • Gain Share – This is a great model if your SI is willing to share the gains and risks of your business model. While it is the ideal model, it is often not easy to agree on the contribution that a specific application makes to the overall business.

So what is my preferred model? Well, let me start by saying that the contract itself will only ever be one aspect to consider; the overall partnership culture will likely make a bigger impact than the contract. I have worked with many different models and have made them work even when they were a hindrance to the Agile delivery approach. However, if I had to define the model I consider most suitable (ceteris paribus – all other things being equal), I would agree on a time-and-materials contract to keep the flexibility. I would make joint planning sessions mandatory, so that both staff movements and the release schedule are handled in true partnership (it does not help if the SI has staffing issues the client is not aware of, or if the client makes any ramp-ups/ramp-downs the SI’s problem). I would agree on two scorecards to evaluate the partnership. One is a Delivery Scorecard, which shows the performance of delivery: are we on track, have we delivered on our promises, is our delivery predictable? The second is an Operational Scorecard, which shows the quality of delivery, the automation levels in place, and the cycle times for key processes in the SDLC.

With these elements I feel that you can have a very fruitful partnership that truly brings together the best of both worlds.

How to support Multi-Speed IT with DevOps and Agile

These days a lot of organisations talk about Multi-Speed IT, so I thought I’d share my thoughts on it. The concept has been around for a while, but now there is a nice label to associate with the idea. Let’s start by looking at why Multi-Speed IT is important. The idea is best illustrated by a picture of two interlocking gears of different sizes and a simple example.

The smaller gear moves much faster than the larger one, but where the two gears interlock they remain aligned so as not to stop the motion. But what does this mean in reality? Think about a banking app on your mobile. Your bank might update the app on a weekly basis with new functionality like reporting and/or an improved user interface. That is a reasonably fast release cycle. The mainframe system that sits in the background and provides the mobile app with your account balance and transaction details does not have to change at the same speed. In fact, it might only have to provide a new service for the mobile app once every quarter. Nonetheless, the changes between those two systems need to align when new functionality is rolled out. That doesn’t mean, however, that both systems need to release at the same speed. In general, the customer-facing systems are the fast applications (Systems of Engagement, Digital) and the slower ones are the Systems of Record or backend systems. The release cycles should take this into consideration.

So how do you get ready for the Multi-Speed IT Delivery Model?

  • Release Strategy (Agile) – Identify functionality that requires changes in multiple systems and ones that can be done in isolation. If you follow an Agile approach, you can align every n-th release for releasing functionality that is aligned while the releases in between can deliver isolated changes for the fast moving applications.
  • Application Architecture – Use versioned interface agreements so that you can decouple the gears (read applications) temporarily. This means you can release a new version of a backend system or a front-end system without impacting current functionality of the other. Once the other system catches up, new functionality becomes available across the system. This allows you to keep to your individual release schedule, which in turn means delivery is a lot less complex and interdependent. In the picture I used above, think of this as the clutch that temporarily disengages the gears.
  • Technical Practices and Tools (DevOps) – If the application architecture decoupling is the clutch, then the technical practices and tools are the grease. This is where DevOps comes into the picture. The whole idea of the Multi-Speed IT is to make the delivery of functionality less interdependent. On the flip side, you need to spend more effort on getting the right practices and tools in place to support this. For example you want to make sure that you can quickly test the different interface versions with automated testing, you need to have good version control to make sure you have in place the right components for each application, you also want to make sure you can manage your codeline very well through abstractions and branching where required. And the basics of configuration management, packaging and deployment will become even more important as you want to reduce the number of variables you have to deal with in your environments. You better remove those variables introduced through manual steps by having these processes completely automated.
  • Testing strategies – Given that you are now dealing with multiple versions of components in the environment at the same time, you have to rethink your testing strategies. The rules of combinatorics make it very clear that it only takes a few variables before testing all permutations becomes unmanageable. So we need testing strategies that focus on valid permutations and risk profiles. After all, functionality that is not yet live requires less testing than functionality that goes live next.
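To make the combinatorics point concrete, here is a minimal sketch (the component names, version numbers and validity rule are all invented for illustration) of how quickly the permutations grow, and how filtering down to the combinations that can actually occur in production shrinks the test matrix:

```python
from itertools import product

# Hypothetical component version matrix: each component may be running
# at more than one interface version in the same environment.
versions = {
    "frontend": ["v1", "v2"],
    "backend": ["v3", "v4", "v5"],
    "billing": ["v7", "v8"],
}

# Testing every permutation grows multiplicatively: 2 * 3 * 2 = 12 here,
# and it only gets worse with each extra component or version.
all_permutations = list(product(*versions.values()))
print(len(all_permutations))  # 12

def is_valid(combo):
    """Invented constraint: billing v8 only ever runs against backend v5."""
    frontend, backend, billing = combo
    return billing != "v8" or backend == "v5"

# Restrict testing to the permutations that can actually occur.
valid = [c for c in all_permutations if is_valid(c)]
print(len(valid))  # 8
```

A real version of this would layer risk profiles on top, so that high-risk combinations get deeper test coverage than low-risk ones.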

The above points cover the technical aspects, but to get there you will also have to solve some organisational challenges. Let me highlight three of them here:

  • Partnership with delivery partners – It will be important to choose your partners wisely. Perhaps it helps to think of your partner ecosystem in three categories: Innovators (the ones who work with you in innovative spaces and with new technologies), Workhorses (the ones who support your core business applications that continue to change) and Commodities (the ones who run legacy applications that don’t require much new functionality and attention). It should be clear that you need to treat them differently in regards to contracts and incentives. I will blog later about the best way to incentivise your workhorses, the area that I see most challenges in.
  • Application Portfolio Management – Of course to find the right partner you first need to understand what your needs are. Look across your application portfolio and determine where your applications sit across the following dimensions: Importance to business, exposure to customers, frequency of change, and volume of change. Based on this you can find the right partner to optimise the outcome for each application.
  • Governance – Last but not least, governance is very important. In a multi-speed IT world you will need flexible governance; one size fits all will not be good enough. You will need lightweight, system-driven governance for your high-speed applications, while you can probably afford a more PowerPoint/Excel-driven manual governance for your slower-changing applications. If you can run status reports from live systems (like Jira, RTC or TFS) for your fast applications, you are another step closer to mastering the multi-speed IT world.
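As a small illustration of what system-driven governance means in practice, here is a hedged sketch that derives a status summary from live tracker data rather than a hand-maintained spreadsheet; the issue records below are invented stand-ins for what a query against a system like Jira, RTC or TFS would return:

```python
from collections import Counter

# Invented sample data standing in for an issue-tracker API response.
issues = [
    {"key": "APP-1", "status": "Done"},
    {"key": "APP-2", "status": "In Progress"},
    {"key": "APP-3", "status": "Done"},
    {"key": "APP-4", "status": "Blocked"},
]

def status_report(issues):
    """Aggregate issue states into a status summary, straight from the system."""
    return dict(Counter(issue["status"] for issue in issues))

print(status_report(issues))  # {'Done': 2, 'In Progress': 1, 'Blocked': 1}
```

Because the report is generated from the system of record, it is always current and costs nothing to produce, which is exactly what high-speed applications need.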

The winding road to DevOps maturity

curving road

I have noticed a trend in the evolution of teams when it comes to DevOps maturity over the years, which I now call the winding road to maturity. Recently I was using a DevOps model I designed a while ago to describe this progress over time, and realised that with the advent of cloud-based DevOps I have to update my model. So I thought I'd share my observations and see what other people think. Surprisingly, this pattern shows up in a lot of different areas of work: deployment automation, test automation and many others. I will use the deployment automation scenario here, but believe me, it applies to many other technical aspects as well.


Here is my current model, which I have shared with many clients and colleagues over time:

Curve 3

Stage 1: “Do it All Manually” – We do everything manually each time. We do all the steps in our checklist for deployments or tests or whatever it is we consider our daily job. There is not a lot of optimisation at this stage and it all feels very heavy-handed.
Stage 2: “Do the Necessary Manually” – Over time we realise that there are many steps we can skip if we do a quick impact assessment and, based on that assessment, execute only the steps that are required (e.g. not redeploying unchanged components, or not executing tests for functionality that has not changed). We are now in a world where each deployment looks different depending on our assessments – which is difficult if there is high turnover of resources, or when handing over to newcomers who don't yet have the skills and knowledge to make reliable assessments.
Stage 3: “Automate the one way” – Then we discover automation. However, automating the impact assessment is more complicated than automating one common process, so we go back to running all steps each time. This reduces the effort for deployments but might increase the overall duration.
Stage 4: “Optimise for performance” – Once we have mastered automation we start to look for ways to optimise this. We find ways of identifying only the steps that are required for each activity and dynamically create the automation playbook that gets executed. Now we have reduced effort and are continuing to reduce overall duration as well. We are an optimising organisation based on reliable automation.
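A minimal sketch of the Stage 4 idea, assuming build artifacts can be fingerprinted by checksum (the component names and contents here are invented): the deployment plan is derived dynamically from an impact assessment, so only changed components are redeployed:

```python
import hashlib

def fingerprint(artifact: bytes) -> str:
    """Identify an artifact by the hash of its contents."""
    return hashlib.sha256(artifact).hexdigest()

def plan_deployment(new_builds, deployed_fingerprints):
    """Impact assessment: keep only components whose artifacts changed."""
    return [
        name
        for name, artifact in new_builds.items()
        if fingerprint(artifact) != deployed_fingerprints.get(name)
    ]

# What is currently running in the environment (invented examples).
deployed = {"web": fingerprint(b"web-1.0"), "api": fingerprint(b"api-1.0")}
# The new builds: web changed, api is identical, worker is brand new.
builds = {"web": b"web-1.1", "api": b"api-1.0", "worker": b"worker-1.0"}

print(plan_deployment(builds, deployed))  # ['web', 'worker']
```

The automation playbook generated from this plan touches only `web` and `worker`, which is what reduces both effort and duration at this stage.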

Here is where my story usually ended, and I would have said that this is an optimising end-state and that we don't really go around another curve. However, I believe that cloud-based DevOps goes one step further, requiring me to update my model accordingly:
Curve 5

In this new model we do everything each time again. Let me explain. In the scenario of deployment automation, rather than only making the required incremental changes to an environment, we completely instantiate the environment (infrastructure as code). In the scenario of test automation, we create several environments in parallel and run tests in parallel to reduce time, rather than basing a subset of tests on an impact assessment. We can afford this luxury now because we have reached a new barrier in my model, which I call the Cloud Barrier.
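The difference from the previous stage can be sketched as follows; `provision()` is a hypothetical stand-in for a real provisioner such as Terraform or CloudFormation, and the environment spec is invented for illustration. The key point is that every deployment is rebuilt from the same declarative description, so no impact assessment is needed:

```python
# Declarative environment description (infrastructure as code).
# Invented example: two services with instance counts and images.
ENVIRONMENT_SPEC = {
    "web": {"instances": 2, "image": "web:1.4"},
    "api": {"instances": 3, "image": "api:2.1"},
}

def provision(spec):
    """Stand-in provisioner: returns the instances it would create."""
    return {
        name: [f"{name}-{i}:{cfg['image']}" for i in range(cfg["instances"])]
        for name, cfg in spec.items()
    }

# Every deployment starts from the same spec, so the resulting
# environment is identical each time by construction - no drift,
# no incremental patching, no per-deployment assessment.
env = provision(ENVIRONMENT_SPEC)
print(env["web"])  # ['web-0:web:1.4', 'web-1:web:1.4']
```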

This update to my model was long overdue, to be honest, and it just goes to show that when you work with a model for a long time you don't realise when it is outdated and you keep trying to make it fit reality. Hopefully my updated model helps to chart your organisational journey as well. It is possible to short-cut the winding road to maturity, but as with cycling up a mountain, a short-cut will be much steeper and will require some extra effort. See you at the top of the mountain!

Picture: Hairpin Curve – Wikimedia
License: Creative commons

Distributed Agile – An oxymoron?

It is time for me to put on paper (well, not really paper… but you know what I mean) my thoughts on distributed Agile. I have worked with both distributed and co-located Agile teams. Distributed Agile is a reality, but there are a lot of myths surrounding it. Over the last few months I have had several queries from people trying to compare co-located teams with distributed teams.

network

Let's start by talking about one of the things that is brought up again and again: “Agile works best with a small number of clever people in a room together”. Now, I agree that such a team will be one of the best performing, but I would argue that in that situation you don't need a methodology at all – the team will perform very well regardless. The power of Agile is the rigour it brings to your delivery when you either don't have very experienced people in the team or when you are a distributed team.

Now why would you choose a distributed Agile model?

  • Scaling up. It can be very difficult to quickly find enough people in one location.
  • Access to skills. It can also be difficult to find people with the right skills locally.
  • Follow-the-sun development. By working in different regions of the world, you can work around the clock which means you can bring functionality to market quicker.
  • You are already distributed. Well if your teams are already in different locations, you don’t really have a choice, do you?
  • You DO NOT choose distributed Agile because it is inherently cheaper or better. That is a misconception.

The goal of distributed Agile is to get as close as possible to the performance the same team would be capable of if they were co-located.

distributed Agile

All things being equal, I don't believe a distributed team will ever outperform a co-located team. But then, all things are never equal. The best distributed teams outperform 80% of co-located teams (see graph on the left). So it is important in both co-located and distributed Agile to put together a well-functioning team. Improving the performance of your team matters more than the location of its members.

There are, however, factors that help you achieve an experience close to co-location.

  • Invest in everything that has to do with communication: webcams, instant messengers, video conferencing… And really make this one count: spending 10 minutes each time just to establish a connection costs a lot of productive time over a sprint.
  • Physical workspace – where team members are co-located, they need the right physical environment and shouldn't sit among hundreds of other people who disturb them.
  • Invest in virtual tooling like wikis, discussion boards etc.
  • Find ways to get to know each other. This happens naturally for co-located teams, but requires effort for distributed teams. Spend 10 minutes in each sprint review introducing a team member, or create virtual challenges or social events in Second Life or World of Warcraft.
  • Don’t save a few dollars on travel. Get key stakeholders or team members to travel to the other location so that you at least for a short period of time can enjoy the richness of communication that comes with co-location.
  • Agree on virtual etiquette – what each team member should do on calls or in forums. Retrospectives and sprint reviews require some additional thought to really hear from everyone.

If you do all that, you will have a team that operates almost as if it were co-located. And if you really want to push your team's performance further, I have an answer for that as well:

  • Look at your engineering practices
  • Look at your tools
  • Implement DevOps / Continuous Delivery / Continuous Integration

That will really push your productivity, much more than any methodology or location choice could possibly do.
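To make that last point a little more concrete, here is a minimal sketch of a continuous-integration gate, where each stage must pass before the next one runs; the stage commands below are illustrative placeholders, not a real pipeline:

```python
import subprocess
import sys

def run_pipeline(stages):
    """Run stages in order; stop at the first failure (the CI gate)."""
    executed = []
    for name, cmd in stages:
        executed.append(name)
        if subprocess.run(cmd).returncode != 0:
            return executed, False  # gate closed: later stages never run
    return executed, True

# Placeholder stages; a real pipeline would invoke your test runner,
# packager and deployment tooling here.
stages = [
    ("unit tests", [sys.executable, "-c", "print('tests pass')"]),
    ("package", [sys.executable, "-c", "print('packaged')"]),
]

print(run_pipeline(stages))  # (['unit tests', 'package'], True)
```

The value is in the gate itself: broken changes are caught minutes after commit, regardless of where in the world the team member who made them is sitting.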