
The DevOps Silver Bullet

If you work in the DevOps space long enough you will have been offered many “Silver Bullets” over the years: everything from a specific tool, to simply putting the Dev and Ops teams together, to specific practices like Continuous Delivery. Of course the truth is, there is no “Silver Bullet”. Last year at the DevOps Enterprise Summit I sat together with some of the brightest in this space and we spoke about what the potential “Silver Bullet” could be. It was surprising how quickly we all agreed on the one thing that predicts success for DevOps in an organisation. So let me reveal the “Silver Bullet” we cast in that room. It is unfortunately one that requires a lot of discipline and work: Continuous Improvement.

When we surveyed the room to find the one characteristic of companies that end up being successful with their DevOps transformation, we all agreed that the ability to drive a mature continuous improvement program is the best indicator of success.

But what does this mean?

For starters it means that these companies know what they are optimizing for. There is plenty of anecdotal evidence that optimizing for speed will improve quality and reduce cost in the long run (as both of those otherwise influence speed negatively). On the flip side, trying to optimize for cost does not improve quality or speed. Neither does a focus on quality, which often introduces additional expensive and time-consuming steps. If you want to be fast you cannot afford rework and will remove any unnecessary steps and automate where possible. Hence speed is the prime directive for your continuous improvement program.

In my view it’s all about visibility and transparency. To improve we need to know where we should start improving. After all, systems thinking and the theory of constraints have taught us that we only improve when we improve the bottleneck and nowhere else. Hence the first activity should be a value stream mapping exercise where representatives from across IT and the business come together to visualize the overall process of IT delivery (and run).

I like to use the 7 wastes of software engineering when doing value stream mapping to highlight areas of interest. This map with its “hotspots” creates a target-rich environment of bottlenecks you can improve.

The key to improvement is what basically comes down to the scientific method: make a prediction of what will improve when you make a specific change, make the change, then measure to see whether you have in fact improved. Too often people don’t apply the right rigor in their continuous improvement program. As a result, changes won’t get implemented, or if they do get implemented no one can say whether they were successful. Avoid this by building in the right rigor up front.
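To make that rigor concrete, here is a minimal sketch of the predict-change-measure loop; the lead-time numbers and the predicted threshold are purely illustrative assumptions, not data from any real program:

```python
from statistics import mean

def evaluate_improvement(prediction, baseline, after):
    """Compare measurements before and after a change against a prediction.

    prediction: expected relative improvement, e.g. 0.2 for "20% faster".
    baseline / after: lead-time measurements (hours) before and after the change.
    Returns the observed relative improvement and whether the prediction held.
    """
    observed = (mean(baseline) - mean(after)) / mean(baseline)
    return observed, observed >= prediction

# Example: we predicted a 20% reduction in lead time from automating deployments.
observed, success = evaluate_improvement(0.20, baseline=[48, 52, 50], after=[30, 34, 32])
print(f"observed improvement: {observed:.0%}, prediction met: {success}")
```

The point is not the arithmetic but the discipline: the prediction is written down before the change, and the verdict comes from measurements, not from gut feel.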

The other challenge with continuous improvement is that it unfortunately is not linear. It follows a slightly adjusted J-Curve. Initially you will find easy wins and things will look splendid. But then things get harder and often regress a little. You could jump ship here, but that would be a bad choice. If you stick with it you will see real improvements above and beyond the easy wins later.



As a result the goal of continuous improvement needs to be to find many smaller J-Curves rather than working on one huge Transformation-type J-Curve. When I saw this concept explained at the DevOps Enterprise Summit by Damon Edwards, it was quite a revelation: such a simple graphical representation that explains so well why many DevOps adoptions struggle. It is too easy to assume a straight line and then get distracted when the initial downturn occurs after the easy wins.

I can tell you from experience that the curve is real and that no matter how much experience I gather, there are always context specific challenges that mean we have that dip that requires us to stay strong and stick with it. You need grit to be successful in the DevOps world.

So the Silver Bullet I promised you does exist, but unfortunately it requires discipline and hard work. I was encouraged to see some of the most experienced enterprise coaches and change agents agree on the need for a rigorous continuous improvement culture.

Look around and see whether you are in a place with such a culture and if not, what can you do tomorrow to start changing the culture for the better?

6 Questions to Test whether You are Working with your System Integrator in a DevOps way

If you have been following my blog, you will know that I am disappointed by how little the cultural relationship between companies and their systems integrators is discussed in blogs, articles and conference talks. As I work for an SI I find this surprising. Most large organisations work with SIs, so why are we not talking about it? If we are serious about DevOps we should also have a DevOps culture with our SIs, shouldn’t we?

When I speak to CIOs and have a discussion about DevOps and how to improve going forward, I often get a comment at some stage – “Mirco you seem to get this. Why is it then that not all projects with your company leverage the principles you talk about?”.

A good question, and one that a few years ago I didn’t have an answer to, which made me a bit unsure how to respond. I have spent a lot of time analyzing it in the years since. And the truth is that often the relationship does not allow us to work in the way most of us would like to work.

The other week I had a workshop with lawyers from my company and lawyers from a firm that represents our clients, to discuss the best way to structure contracts. Finally we all seem to understand that there is a lot of room for improvement. We need to do more of this so that we can create constructs that work for all parties. I am looking forward to continuing to work with them – and how often do you hear someone say that about lawyers 😉

Coming back from yet another conference where this topic was suspiciously absent, I thought I would write down this checklist for you to test whether you have the right engagement culture with your system integrator – one that enables working to the benefit of both organisations:

  • Are you using average daily rate (ADR) as an indicator of productivity, value for money, etc.?
    +1 if you said No. I have written before about why, all things being equal, ADR is a really bad measure.
  • Do you have a mechanism in place that allows your SI to share the benefits with you when they improve through automation or other practices?
    +1 if you said Yes. You can’t really expect the SI to invest in new practices if there is no upside for them. And yes, there is the “morally right thing to do” argument, but let’s be fair: we all have economic targets, and not discussing this with your SI to find a mutually agreeable answer is just making it a bit too easy for yourself, I think.
  • Do you give your SI the “wiggle room” to improve and experiment and do you manage the process together?
    +1 if you said Yes. You want to know how much time the SI spends on improving things, on experimenting with new tools or practices. If they have just enough budget from you to do exactly what you ask them to do, then start asking for this innovation budget and manage it with them.
  • Do you celebrate or at least acknowledge failure of experiments?
    +1 if you said Yes. If you have innovation budget, are you okay when the SI comes back and one of the improvements didn’t work? Or are you just accepting successful experiments? I think you see which answer aligns with a DevOps culture.
  • Do you know what success looks like for your SI?
    +1 if you said Yes. Understanding what the goals are that your SI needs to achieve is important. Not just financially but also for the people that work for the SI. Career progression and other aspects of HR should be aligned to make the relationship successful.
  • Do you deal with your SI directly?
    +1 if you said Yes. If there is another party like your procurement team or an external party involved then it’s likely that messages get misunderstood. And there is no guarantee the procurement teams know the best practices for DevOps vendor management. Are you discussing any potential hindrance in the contracting space directly with your SI counterpart?

A lot is being said about moving from vendor relationships to partnerships in the DevOps world. I hope this little self-test helped you find a few things you can work on with your systems integrator. I live on the other side and often have to be creative to do the right thing for my customers. It is encouraging to see that many companies are at least aware of these challenges. If we can have open discussions about the items above, we will accelerate the adoption of DevOps together. I promise that on the SI side you will find partners who want to walk this path with you. Find the right partner, be open about the aspects I described above and identify a common strategy going forward. I am looking forward to this journey together. Let’s go!

Impressions from DOES 2016

And now it’s over again, the annual DevOps family gathering a.k.a. the DevOps Enterprise Summit. Another year goes by and we were able to check in with some of our favorite DevOps leaders and got to know some new family members. The event was full of energy and, as every year, I will try to summarize what I have seen.

First of all, some overall trends of things I heard coming up again and again:

  • Attracting people – the DevOps space continues to be a hot spot and we are all competing for rare talent. I think the transformational nature makes it harder to find the right people, who need both the right technical skills and the right mindset to be in continuous change along the journey.
  • Platforms as enabler and answer to the team structure question – At the first DOES the discussion about “DevOps teams” was still heated; should you or should you not have a dedicated team. Having an internal platform team to run and operate the DevOps platform seems to be the most common solution. The idea that the platform provides self-service capabilities to the product teams and uses this to abstract away the org structure problem was mentioned several times.
  • Open Source / Open IP – more companies are now talking about open sourcing some of their tooling, including Accenture. This is a good sign for an industry that has too long focused on internal IP. I think DevOps has done great things to open IT up for sharing and to provide an ecosystem where we all work together on the big challenges ahead of us.

Let’s look at some of the highlight talks below:

Heather Mickman from Target

We got to check in with Heather Mickman from Target to see how she has progressed. Hers was widely seen as one of the best talks of the conference. Some gems from this talk were:

  • How speaking externally about what Target has been doing, has enabled them to attract talent
  • How they moved more work in-house to control the culture and outcomes better
  • How they built their own platform to manage public and private cloud platforms
  • Key metrics she uses are: Number of incidents / deployment and the onboarding time
  • Heather pretty much addressed all three of the main themes I mentioned above

Scott Prugh from CSG

Another favourite from previous years provided an update on their journey and what a more Ops-focused view looks like. The numbers he mentioned are still impressive, with 10x quality and half the time to market achieved through the adoption of DevOps. Their deployment quality is close to perfect, with near zero incidents post deployment (the same metric Heather mentioned). He also highlighted the self-service platform as a key enabler. Another aspect I liked was his focus on automated reporting and making work visible. His colleague Erica then brought The Phoenix Project to life by comparing her world to the book. I love this.

Ben and Susanna from American Airlines

I am writing this summary while waiting for my 17-hour-delayed AA flight, so I assume there is still some room for improvement on the DevOps front 😉 Their talk focused on the “opportunities” provided by the merger of two airlines: what to do with two very different stacks initially and how to slowly merge them. They also highlighted the common challenges with test automation and with measuring success of DevOps. It feels better, but how do we really measure it?

Gene and John’s fireside chat

I mean, what can you say about this one… it was fascinating and a geek-out for all of us. So many threads to follow, it felt like Alice in Wonderland for DevOps folks. When you watch the replay you will feel the urge to buy books and keep googling things. Hold your fire and buy the Beyond the Phoenix Project audiobook when it comes out. I surely will!

Mark Schwartz on Business Value

Mark Schwartz was back and spoke about his book – a great exploration of the concept of business value. He did not provide the answer, but some interesting things to consider:

  • ROI misses the point – profit is a fiction, flexibility and agility and options are not reflected
  • ROI does not easily work to derive decisions, too far away or too much work
  • Not each item in the backlog can feasibly be assigned a value
  • It is so important to have a conversation about business value to decide how Agile teams will use it to derive priorities

Keith Pleas from Accenture

Another exploratory talk, this one about the Automation of Automation: we focus our attention on automating applications, but are we using the same ideas for our own DevOps architecture? Like Gene and John’s talk, there were many breadcrumb trails to follow with this one. Accenture has also open sourced its DevOps platform.

The main themes of Open IP, Platform teams and attracting talent were hit on by Keith.

There were so many more great talks, check them out when the recordings are available. I will choose a few more quick highlights below:

  • Pivotal’s talk added the product orientation as organisational mechanism to the discussion on platform teams.
  • My good friend Sam Guckenheimer from Microsoft had the guts to do a live demo on stage, which worked out really well and showed some very interesting insights into Microsoft’s developer platform.
  • Carmen DeArdo from Nationwide had one of the best slides in the conference in my view. I really like the cycle transformation picture, what do you think?
  • Topo Pal from CapitalOne had some of the best nuggets of the conference:
    • “It takes an army to manage a pipeline”
    • 16 gates of quality or as he calls it, 10 commandments in Hex 😉
  • We got a really good introduction to Site Reliability Engineering from David Blank-Edelman, covering the concepts of error budgets, blameless post mortems and much more. He also noted that “you can’t fire your way to reliability” and that maturity models should be there to determine the right help, not to punish someone.

The best thing of course is the hallway talks, the opportunity to talk to old friends and to make new ones. Another great event gone by…

See you all at DOES17, Nov 13–15 2017, back in San Francisco. I will be there and look forward to meeting you all. Come join us at the family gathering next year!

Why Develop-Operate-Transition projects need to get the DevOps treatment

The DevOps movement has focused the IT industry on breaking down silos and increasing collaboration across the different IT functions. There are popular commercial constructs that did a great job in the past but which are not appropriate for DevOps-aligned delivery models. A while ago I talked about the focus on Average Daily Rate; in this post I want to discuss how to change the popular Develop-Operate-Transition (DOT) construct.

Let’s look at the intent behind the DOT construct. The profile of work for a specific system usually changes over time and, idealised, looks like this:

  • During Development the team needs to be large and deal with complex requirements, new problems need to be solved as the solution evolves
  • During Operate the changes are smaller, the application is relatively stable and changes are not very complex
  • At some stage the application stabilises and changes are rare and of low complexity
  • And then the lifecycle finishes when the application is decommissioned (and yes, we are not really good at decommissioning applications; somehow we hang onto old systems for way too long. But for the sake of argument let’s assume we do decommission systems)

As an organisation it is quite common to respond to this with a rather obvious answer:

  • During development we engage a team of highly skilled IT workers who can deal with the complexity of building a new system from scratch and we will pay premium rates for this service
  • During Operate we are looking for a ‘commodity’ partner as the work is less complex now and cost-effective labour can be leveraged to reduce the cost profile
  • As the application further stabilises or usage reduces we prefer to take control of the system to use our in-house capabilities

So far so obvious.

If we look at this construct from a DevOps perspective it becomes clear that this construct is sub-optimal as we have two handover points and in the worst case these are between different organisations with different skills and culture. I have seen examples where applications stopped working once one vendor left the building because some intangible knowledge did not get transitioned to the new vendor. It is also understandable if the Develop vendor focuses on aspects that are required to deliver the initial version and less focused on how to keep it running and how to change it after go-live. While the operate vendor would care a lot about those aspects and rather compromise on scope. Now we could try to write really detailed contracts to prevent this from happening. I doubt that we can cover it completely in a contract or at least the contract would become way too extensive and complicated.

What is the alternative you ask? Let’s look at a slight variation:


Here the company is involved from the beginning and builds up some level of internal IP as the solution is built out. In a time where IT is critical for business success, I think it is important to build some internal IP about the systems that support your business. In this new type of arrangement the partner provides significant additional capability at the beginning, yet the early involvement of both the company itself and the application outsourcing partner makes sure all things are considered and everyone is across the trade-offs made during delivery of the initial project. Once the implementation is complete, part of the team continues on to operate the system in production, making any necessary changes and smaller enhancements. If and when the company chooses to take the application completely back in-house, it can do so, as the existing people can continue and the capability can be augmented in-house as required. While there will still be some handover activities, the continuous involvement of people makes the process a lot less risky and provides continuity across the different transitions.

Of course, having the same partner for both implementation and operation is an even better proposition, as this will further reduce the friction. I have now worked on a couple of deals like that and really like the model, as it allows for long-term planning and partnership between the two organisations.

Most people I speak to find this quite an intuitive model, so hopefully we will see more of these engagements in the future.

How to Fail at Test Automation

(This post was first published on

Let me start by admitting that I am not a test automation expert. I have done some work with test automation and have supervised teams who practiced it, but when it comes to the intricacies of it, I have to call a friend. It is from such friends that I have learned why so many test automation efforts fail. Talking to people about test automation validates my impression that this is the DevOps-related practice that people most often fail at.

Let me share the four reasons why test automation fails, in the hope that it will help you avoid these mistakes in your test automation efforts.

Before I go into the four reasons, allow me one more thought: test automation is actually a bad choice of words. You are not automating testing, you are automating quality assessment. What do I mean by that? It is a mistake to think of test automation as automating what you would otherwise do manually. You are finding ways to assess the quality of your product in automated ways, and you will execute them far more often than you would manual testing. This conceptual difference explains to a large degree the four reasons for test automation failure below.

Reason 1: Underestimating the Impact on Infrastructure and the Ecosystem

There is a physical limit to how much pressure a number of manual testers can put on your systems. Automation will put very different stress on your system: what you otherwise do once a week manually you might now do 100 times a day. Add into the mix an integrated environment, which means external systems need to respond that frequently, too. So you really have to consider two different aspects: can the infrastructure in your environments support 100 times the volume it currently supports, and are your external systems set up to support this volume? Of course, you can always choose to reduce the stress on external systems by limiting the real-time interactions and stubbing out a certain percentage of transactions or using virtual services.
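As a minimal sketch of that last point, here is what stubbing an external system can look like; the payment gateway and its interface are invented for illustration and do not represent any specific virtualization product:

```python
class RealPaymentGateway:
    """The real external system: shared, rate-limited, and slow."""
    def authorize(self, amount):
        raise RuntimeError("not reachable from the test environment")

class StubPaymentGateway:
    """Virtual service: canned responses, so tests can run at automation volume."""
    def authorize(self, amount):
        # Mimic the contract of the real system without putting load on it.
        return {"approved": amount < 10_000, "ref": "STUB-0001"}

def checkout(gateway, amount):
    """Application code under test; it only cares about the gateway contract."""
    response = gateway.authorize(amount)
    return "OK" if response["approved"] else "DECLINED"

# Hundreds of automated runs hit the stub, not the shared external system.
print(checkout(StubPaymentGateway(), 250))     # OK
print(checkout(StubPaymentGateway(), 50_000))  # DECLINED
```

The design choice is to depend on the external system’s contract rather than its availability, which is exactly what lets you run the suite 100 times a day.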

Reason 2: Underestimating the Data Hunger

Very often test automation is used in the same system where manual testing takes place. Test automation is data-hungry, as it needs data for each run of test execution and, remember, this is much more frequent than manual testing. This means you cannot easily refresh all test data whenever you want to run test automation and have to wait until manual testing reaches a logical refresh point. This obviously is not good enough; instead, you need to be able to run your test automation at any time. There are a few different strategies you can use (and you will likely use a combination):

  • Finish the test in the same state of data that you started with;
  • Create the data as part of the test execution;
  • Identify a partial set of data across all involved applications that you can safely replace each time; or
  • Leverage a large base of data sets to feed into your automation to last until the next logical refresh point.

Reason 3: Not Thinking About the System

Test automation is often an orchestration exercise, as the overall business process under test flows across many different applications. If you require manual steps in multiple systems, then your automation will depend on orchestrating all of those. By building automation for just one system you might get stuck if your test automation solution cannot be orchestrated across different solutions. Also, some walled-garden test automation tools might not play well together, so think about your overall system of applications and the business processes first, before heavily investing in one specific solution for one application.

Reason 4: Not Integrating it into the Software Development Life Cycle

Test automation is not a separate task; to be successful it needs to be part of your development efforts. From the people I have spoken to there is general agreement that a separate test automation team usually doesn’t work for several reasons:

  • They are “too far away” from the application teams to influence “ability to automate testing,” which you want to build into your architecture to be able to test the services below the user interface;
  • Tests often are not integrated in the continuous delivery pipeline, which means key performance constraints are not considered (tests should be really fast to run with each deployment);
  • Tests often are not executed often enough, which means they become more brittle and less reliable. Tests need to be treated at the same level as code and should be treated with the same severity. This is much easier when the team has to run them to claim success for any new feature and is much harder to do when it is a separate team who does the automation. It also will take much longer to understand where the problem lies.

Of course, absence of failure does not mean success. But at least I was able to share the common mistakes I have seen and, as they say, “Learning from others’ mistakes is cheaper.” Perhaps these thoughts can help you avoid some mistakes in your test automation journey. I do have some positive guidance on test automation, too, but will leave this for another post.

And in the case you found your own ways of failing, please share it in the comments to help others avoid those in the future. Failures are part of life and even more so part of DevOps life (trust me, I have some scars to show). We should learn to share those and not just the rosy “conference-ready” side of our stories.

Test automation is for me the practice that requires more attention and more focus. Between some open-source solutions and very expensive proprietary solutions, I am not convinced we in the IT industry have mastered it.

One bonus thought: If you cannot automate testing, automate the recording of your testing.

If you cannot automate testing, find a way to record the screen with each test by default. Once you identify a defect you can use the recording to provide much richer context, making it a lot easier to find and solve the problem. Verbal descriptions of errors are very often lacking and don’t provide all the context of what was done. I keep being surprised how long triage takes because of the lack of context and detail in defect descriptions. There is really no excuse for not doing this. Record first, discard if successful, attach it to the defect record if you find a problem.
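The record-first, discard-on-success rule can be sketched as a small wrapper around test execution; the “recorder” below is a placeholder for whatever screen-capture tool you actually use:

```python
import os
import tempfile

def fake_start_recording(path):
    # Placeholder for a real screen recorder: just create the file.
    with open(path, "w") as f:
        f.write("recording")

def run_with_recording(test_fn, recording_dir):
    """Record every test run; keep the file only if the test fails."""
    path = os.path.join(recording_dir, f"{test_fn.__name__}.mp4")
    fake_start_recording(path)
    try:
        test_fn()
    except AssertionError:
        return path        # keep: attach this file to the defect record
    os.remove(path)        # discard: the test passed
    return None

def passing_test():
    assert 1 + 1 == 2

def failing_test():
    assert "total" == "tota1", "price total rendered incorrectly"

with tempfile.TemporaryDirectory() as d:
    kept_pass = run_with_recording(passing_test, d)
    kept_fail = run_with_recording(failing_test, d)
    print(kept_pass)                    # None – recording discarded
    print(os.path.basename(kept_fail))  # failing_test.mp4 – kept for triage
```

Whoever triages the failing test now gets a recording of exactly what happened instead of a one-line verbal description.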

Is there such a thing as Hybrid Agile?

I recently wrote an article about Hybrid Agile for InfoQ because the term has been misused too often. Feel free to read the whole article. Here is my conclusion:

After many discussions, people convinced me that “Hybrid-Agile” is what is otherwise called Water-Scrum-Fall and after some deliberation I admit that this makes sense to me. Of course the term Water-Scrum-Fall or similar phrases are often used with contempt or smiled upon, but when you look closer, this is reality in many places and for good reasons.

Managing the Plethora of DevOps tools

I have been thinking about DevOps tools a lot, and discussions about tools often distract from the real problems. But what are the right DevOps tools? Well, I will not go into specific tools; instead I will tell you what I look for in DevOps tools beyond the functionality they provide. In my experience you can build good DevOps toolchains with just about any tool, but some tools take more effort to integrate than others. I will also provide some guidance on how to manage tools at the enterprise level.

It seems that new DevOps tools appear on the market every month. This is exacerbated by the fact that it is difficult to classify all the tools in the DevOps toolbox. One of the best reference guides is the XebiaLabs Periodic Table of DevOps Tools, which is well worth checking out.

Before I go into the details of what characteristics a good DevOps tool should have, I want to address one other aspect: Should you have one tool or many in the organization?

In general, in a large organization it makes sense to support a minimal set of tools, for several reasons:

  • Optimise license cost
  • Leveraging skills across the organization
  • Minimising complexity of integration

Yet on the other side some tools are much better for specific contexts than others (e.g. your .NET tooling might be very different to your mainframe tooling). And then there are new tools coming out all the time. So how do you deal with this? Here is my preferred approach:

  • Start with small set of standard tools in your organization
  • Allow a certain percentage of teams to diverge from the standard for a period of time (3-6 months perhaps)
  • At the end of the “trial period” gather the evidence and decide what to do with the tool in question:
    • Replace the current standard tool
    • Get it added as an additional tool for specific contexts
    • Discard the tool and the team transitions back to the standard tool

Obviously DevOps tools should support DevOps practices and promote the right culture. This means a tool should not be a “fenced garden” that only works within its own ecosystem. It is very unlikely anyway that a company uses tools from only one vendor or ecosystem. Hence the most important quality of a tool is the ability to integrate it with other tools (and yes, possibly to replace it, which is important in such a fast-moving market).

  1. The first check is how well APIs are supported. Can you trigger all functionality that is available through the UI via an API (command line or programming-language based)?
  2. We should treat our tools just like any other application in the organization, which means we want to version control them. The second check is hence whether all configuration of the tool can be version controlled in an externalised configuration file (not just in the application itself).
  3. Related to the second point is the ability to support multiple environments for the tool (e.g. Dev vs Prod). How easy is it to promote configuration? How can you merge configuration of different environments (code lines)?
  4. We want everyone in the company to be able to use the same tool. This has implications for which license model is appropriate. Open source obviously works in this case, but what about commercial tools? They are not necessarily bad. What is important is that they don’t discourage usage. For example, tools that require agents should not be priced per agent, as this means people will be tempted not to use them everywhere. Negotiate an enterprise-wide license or ‘buckets of agents’ so that not every usage requires a business case.
  5. Visualization and analytics are important aspects of every DevOps toolchain. To make this work we need easy access to the underlying data, which means we likely want to export or query it. If the data is stored in an obscure data model, or you have no way to access and export it for analysis and visualization, then getting good data will require additional overhead. Dashboards and reports within the tool are no replacement, as you likely want to aggregate and analyse across tools.
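As a minimal sketch of criteria 2 and 3: configuration lives in an externalised, version-controllable file per environment, and promotion starts with diffing those files. All keys and values here are invented for illustration:

```python
import json
import os
import tempfile

def export_config(config, path):
    """Criterion 2: configuration as an externalised file you can version control."""
    with open(path, "w") as f:
        json.dump(config, f, indent=2, sort_keys=True)

def config_diff(env_a, env_b):
    """Criterion 3: see what differs between two environments before promoting."""
    keys = set(env_a) | set(env_b)
    return {k: (env_a.get(k), env_b.get(k)) for k in keys if env_a.get(k) != env_b.get(k)}

# Hypothetical tool settings for two environments of the same tool.
dev = {"agent_pool": "dev-pool", "timeout_s": 30, "notify": "team-channel"}
prod = {"agent_pool": "prod-pool", "timeout_s": 30, "notify": "ops-channel"}

path = os.path.join(tempfile.gettempdir(), "tool-config.dev.json")
export_config(dev, path)  # this file goes into version control

print(sorted(config_diff(dev, prod)))  # ['agent_pool', 'notify'] differ; timeout_s is aligned
```

Once configuration is a plain file, your normal code-review and merge workflow handles promotion between environments, which is exactly what criterion 3 asks for.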

I hope these criteria are all relatively clear. What is surprising is how few tools adhere to these. I hope tool vendors will start to realize that if they want to provide DevOps tools they need to adhere to the cultural values of DevOps to be accepted in the community.

Hopefully the tools you are using are adhering to many of these points. Let me know what you think in the comments section of this blogpost. I am very curious to learn how you perceive DevOps tools.

Thoughts on the State of DevOps 2015 report


So I have been re-reading the State of DevOps Report recently, on the plane to my project, and found a few interesting aspects that are worth highlighting – especially as they match my experience as well. It shows a much more balanced view than the more common unicorn approach to DevOps.

First of all, I am glad that the importance of deployments is highlighted. To me this is the core practice of DevOps; without reliable and fast deployments all the other practices are not effective, whether that is test automation or environment provisioning. So this part of the report is clearly one of my favourites:

“Deployment pain can tell you a lot about your IT performance. Do you want to know how your team is doing? All you have to do is ask one simple question: “How painful are deployments?” We found that where code deployments are most painful, you’ll find the poorest IT performance, organizational performance and culture.” – Page 5

And there is a whole page discussing this aspect (page 26), but here are the summary steps to take:

  • Do smaller deployments more frequently (i.e., decrease batch sizes).
  • Automate more of the deployment steps.
  • Treat your infrastructure as code, using a standard configuration management tool.
  • Implement version control for all production artifacts.
  • Implement automated testing for code and environments.
  • Create common build mechanisms to build dev, test and production environments.
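As an illustration of the “infrastructure as code” and “common build mechanisms” points in the list above, here is a hedged sketch (all parameter names and values are invented) of a single environment definition used to build dev, test and production. The same mechanism builds every environment; only the declared overrides differ, which is what keeps the environments comparable and the deployments boring.

```python
# Sketch: one declarative definition, built by one common mechanism,
# for all environments. Values are illustrative only.
BASE = {"app_version": "1.4.2", "jvm_heap_mb": 512, "replicas": 1}

OVERRIDES = {
    "dev":  {},                                   # dev runs the plain baseline
    "test": {"replicas": 2},                      # test adds a second replica
    "prod": {"jvm_heap_mb": 2048, "replicas": 4}, # prod scales up the same definition
}

def environment(name):
    # Build the effective configuration for one environment.
    if name not in OVERRIDES:
        raise ValueError(f"unknown environment: {name}")
    return {**BASE, **OVERRIDES[name], "name": name}
```

Because every environment is derived from the same baseline, a drift between dev and prod is a visible diff in the overrides, not a surprise at deployment time.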

And even the speed to market feels about right, with around 50% reporting more than a month of lead time to production (page 11).

The two models on pages 14 and 15 are quite good. There is not much new in them, but I think the choice of elements speaks to changes in the DevOps discussion over the last 18 months, which I also see on my projects – and it highlights visualisation of data as one key ingredient. It is such an important one, yet one that few spend their energy (and money) on, probably because there is no obvious and easy tool choice (CapitalOne, for example, developed their own custom dashboard – & ). The report goes on to advise: “Turn data into actionable, visible information that provides teams with feedback on key quality, performance, and productivity metrics” – Amen.

Here are the two models from the report:

Model 1 | Model 2

And of course based on my recent discussion about DevOps for Systems of Record (Link here) I was glad to see the following statement in the report:

 “It doesn’t matter if your apps are greenfield, brownfield or legacy — as long as they are architected with testability and deployability in mind, high performance is achievable. We were surprised to find that the type of system — whether it was a system of engagement or a system of record, packaged or custom, legacy or greenfield — is not significant. Continuous delivery can be applied to any system, provided it is architected correctly.” – Page 5

Overall I think this report is really useful and represents what I see in my projects and at my clients very well. It also discusses the kind of investments required to really move forward (page 25) and provides guidance on many aspects of the DevOps journey. I am looking forward to seeing what next year will bring.

Microservices 101 – What I have learned so far

It is kind of difficult to walk in DevOps circles without hearing the word Microservices mentioned again and again. I have sat through a bunch of conference talks on the topic and only recently came across a couple that were practical enough that I took away things with applicability in my normal project work. The below is heavily influenced by the talks at this year’s YOW conference and the talks at the DevOps Enterprise Summit of the last two years.

What are Microservices?

Microservices are the other extreme of monolithic applications. So far, so obvious. But what does this mean? Monolithic applications look nice and neat from the outside; they behave very well in architecture diagrams, where they are placeholders for “magic happens here” and some of the complexity is absorbed into that “black box”. I have seen enough Siebel and some SAP code to tell me that this perceived simplicity is just hidden complexity. Microservices make the complexity more visible. As far as catchy quotes go, I like Randy Shoup’s from YOW15: “Microservices are nothing more than SOA done properly.” Within this lies most of the definition of a good Microservice: it is a service (application) that serves one purpose, is self-contained and independent, has a clearly defined interface, and has isolated persistence (even to the point of having a database per service).
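A toy sketch of that definition may help. The service name and methods below are invented for illustration; in a real system this would sit behind an HTTP interface with its own database. The point is the shape: one purpose, a clearly defined interface, and persistence that no other service touches.

```python
# Illustrative sketch of the definition above, not a production design.
class ProductPricingService:
    """Does exactly one thing: answers price queries for products."""

    def __init__(self):
        # Isolated persistence: only this service ever reads or writes
        # this store (in reality, a database owned by the service).
        self._prices = {}

    # These two methods are the entire contract other services depend on;
    # nothing outside this class reaches into its data.
    def set_price(self, product_id: str, cents: int) -> None:
        self._prices[product_id] = cents

    def get_price(self, product_id: str) -> int:
        return self._prices[product_id]
```

Everything else – scaling it, versioning it, replacing its storage – can then happen behind that interface without any other team noticing.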

An Analogy to help:

“This, milord, is my family’s axe. We have owned it for almost nine hundred years, see. Of course, sometimes it needed a new blade. And sometimes it has required a new handle, new designs on the metalwork, a little refreshing of the ornamentation . . . but is this not the nine hundred-year-old axe of my family? And because it has changed gently over time, it is still a pretty good axe, y’know. Pretty good.”

This is what Microservices do to your architecture…

What are the benefits of Microservices?

Over time everyone in IT has learned that there is no “end-state architecture”. The architecture of your systems always evolves, and as soon as one implementation finishes, people are already thinking about the next change. In the past, iterating on the architecture was quite difficult, as you had to replace large systems. With Microservices you create an architecture ecosystem that allows you to change small components all the time and avoid big-bang migrations. This flexibility means you are much faster in evolving your architecture. Additionally, the structure of Microservices means that teams have a larger level of control over their service, and this ownership will likely make your teams more productive and responsible while developing your services. The deployment architecture and release mechanism also become significantly simpler, as you don’t have to worry about dependencies that need to be reflected in the release and deployment of the services. This of course comes with increased complexity in testing, as you have many possible permutations of services to deal with, so automation and intelligent testing strategies are very important.

When should you use Microservices?

In my view Microservices are relevant in areas that you know your company will invest in over time. Areas where speed to market is especially important are a good starting point, as speed is one of the key benefits of Microservices where dependency-ridden architectures get bogged down. This is due to many reasons, from developers having to learn about all the dependencies of their code to the increasing risk of a component being delayed in the release cycle. Microservices won’t have those issues. Another area to look for is applications that can no longer scale vertically in an economic fashion; the ability to scale individual Microservices horizontally increases the chances of finding an economic scaling model. And of course a move towards Microservices requires investment, so go for an area that can afford the investment and where the challenges mentioned above provide the burning platform to start your journey.

What does it take to be successful with Microservices?

This will not surprise you, but the level of extra complexity that comes with independently deployable services – which might also exist in production in multiple versions – means you need to really know your stuff. By this I mean you need to be mature in your engineering practices and have a well-oiled deployment pipeline with “Automated Everything” (Continuous Integration, Deployment, Testing). Otherwise the effort and complexity of trying to maintain this manually will quickly outweigh the benefits of Microservices. Conway’s Law says that systems resemble the organisational structure they were built in. To build Microservices we hence need to have mastered the Agile and DevOps principle of cross-functional teams (ideally aligned to your value streams). These teams have full ownership of the Microservices they create (from cradle to grave). This makes sense if the services are small and self-contained, as having multiple teams (DBAs, .NET developers, …) involved would just add overhead to small services. As you can see, my view is that Microservices are the next step of maturity beyond DevOps and Agile, as they require organisations to have already mastered those (or at least be close to it).

How can you get started?

If your organisation is ready (which, similarly to Agile and DevOps, is a prerequisite for the adoption of Microservices), go ahead and choose a pilot: create a Microservice that adheres to the definition above and is of real business value (e.g. something that is used a lot, is customer facing and is in a suitable technology stack). Your first pilot is likely not going to be a runaway success, but you will learn from the experience. Microservices will require investment from the organisation, and the initial value might not be clear cut (as just adding the functionality to the monolith might be cheaper initially), but in the long term the flexibility, speed and resilience of your Microservice architecture will change your IT landscape. Will you end up with a pure Microservice architecture? Most likely not. But your core services might just migrate to an architecture that is built and designed to evolve, and hence serve you better in an ever-changing marketplace.

Now – over to you. Let me know what you have learned by using Microservices, and whether the above rings true or your experiences differ. I am looking forward to hearing what the current state of play is in regards to Microservices.

Last but not least some references to good Microservices talks:

Randy Shoup

James Lewis

Jez Humble

Why our DevOps maturity has not improved as much as I expected 15 years ago

One thing that continues to fascinate me is how, after so many years, I can still walk into IT organisations and see basic practices not being followed (e.g. Software Configuration Management or Continuous Integration or …). Shouldn’t we all know better?!? I recently came across some slides from the late 90s and early 2000s that pretty much read like something from a DevOps conference today. So I wonder what it is that has prevented us from being better today – and I think I have found a feasible answer to my question.

For illustrative purposes I have used the Accenture DevOps maturity model, but feel free to use any other model here, as the tendency would be similar. I believe that as an industry we used to be better than we are today (of course all generalisations are dangerous, including this one). While it is hard to find solid evidence for this, let me try to justify my perspective.

DevOps Journey

More than 10 years ago we had much more custom development to solve the big problems, and most work was done onshore in collocated teams. The only way to be more productive and reduce cost was to have better engineering practices and leverage more automation, so companies invested in this. We used to be on the mature end of “consistent” in many places. Then two trends caused us to regress as far as engineering practices are concerned:

  • Offshoring – With the introduction and mastering of offshore delivery, the industry had an alternative to automation and good engineering practices, as labour arbitrage provided a quick fix for cost reduction. It was potentially cheaper to do something manually offshore than to maintain or introduce automation onshore. Of course this couldn’t last, as labour costs offshore increased steadily. This is similar to the trend in manufacturing, where manual labour became so cheap that automation was no longer a critical element of cost reduction. Some organisations stayed the course on engineering practices and automation; others lost maturity along the way, as those who went offshore did not sufficiently focus on automation and good practices – it was harder to oversee this across locations, and cost reductions were easy to achieve anyway.
  • Packaged and proprietary software – As we leveraged more and more packaged and proprietary software, we relied on those vendors to solve our productivity challenges. Over time this led to island solutions, which supported specific packages but were not open enough to integrate into the IT ecosystem. How do you baseline an enterprise-wide software package when configuration management is hidden inside the package, or worse still, when configuration is only possible through a graphical user interface with no access to the underlying source code/text files? The pre-packaged software was a quick fix to get functionality, but again good engineering practices were lost from sight. Many organisations make so many customisations that they should really treat the original “COTS” product just like any other custom application, with the same rigor. The other problem packaged software introduced is that the people customising it are often considered, or consider themselves, not to be programmers (which supposedly means they don’t have to worry about the practices custom application developers use) – a dangerous mistake in my view.
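One hedged workaround for the baselining problem described above: if a packaged product only exposes its configuration through an API or GUI, export that configuration to canonical text and keep the text under version control, so baselines and diffs become possible again. The sketch below assumes a `fetch_package_config` function as a stand-in for whatever export mechanism the vendor actually offers; the configuration keys are invented.

```python
# Sketch: turning opaque package configuration into version-controllable text.
import json

def fetch_package_config():
    # Hypothetical stand-in: in reality this would call the package's
    # export API or read its configuration store.
    return {"timeout": 30, "modules": ["billing", "crm"], "locale": "en_AU"}

def to_baseline_text(config):
    # Canonical form (sorted keys, stable formatting) so that diffs
    # between baselines are meaningful, not formatting noise.
    return json.dumps(config, indent=2, sort_keys=True)

def config_changed(old_text, config):
    # Compare the live configuration against a stored baseline.
    return to_baseline_text(config) != old_text
```

The baseline text file can then live in the same version control system as the custom code, which is exactly the rigor the “COTS” customisations deserve.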

“Speed is the new currency of IT and businesses”

This was all well and good when cost, as expressed by the cost of a day of labor, was the main driver for optimisation in organisations. That paradigm has shifted; nowadays speed to market is driving incentives and behaviour. Organisations across the globe are trying to find ways to speed up their delivery processes without risking quality. But how do you revert the trend that many organisations followed, when your IT organisation is mostly outsourced and/or offshored, your application architecture is a mix of packaged software, proprietary solutions and custom elements, and you lack the engineering experience and skills to fix it?

This is where we need to acknowledge that DevOps is a journey that is different for every organisation. Of course there are the engineering-driven organisations like Etsy, Amazon and Facebook that have engineers/IT people in their C-suite who intuitively understand the importance of application architecture and good engineering. I believe they won’t ask you for a business case for what is ultimately an optimisation for speed and flexibility – they understand how important good IT is, or at least I believe that to be the case. They will also consider which applications to introduce into their ecosystem, making sure those are open products for which good engineering practices can be employed. In other organisations the adoption of DevOps practices needs to be supported by business cases that are often difficult to create (especially at the middle maturity levels, where the pain is not at mission-critical levels and the full benefit of the highest maturity is not yet within arm’s reach). Organisations that are on this journey with the majority of us in the industry need to acknowledge the sometimes intangible nature of improvements. Speed to market and other cycle-time measures should be the guides for your journey.

“Everyone is on the DevOps journey, whether you know it or not”

The question of whether “you are doing DevOps or not” is not valid in my view. We are all on the DevOps journey; it’s the journey of improving the delivery of IT solutions. DevOps is a broad field and provides us with an ever-growing toolkit of practices and tools. All of us IT executives are searching for ways to improve our organisations and our delivery capabilities. What label we use for that doesn’t matter, as long as we keep moving forward and keep experimenting. Is there a goal to this journey? No, I don’t think there is a concrete goal, a state where we declare we are done. I think the best of us will keep pushing forward no matter how good we get at it, and from what I have seen during my travels there is still plenty of road in front of me – I don’t need to worry about getting “there” anytime soon in any case. Join me on the journey, and may our paths cross to share stories of our adventures on the road.

Note: This post was first published on