Category Archives: DevOps

How to Structure Agile Contracts with System Integrators

As you know, I work for a systems integrator and spend a lot of my time responding to proposals for projects. I also spend time consulting with CIOs and IT leadership to help them define strategies and guide DevOps/Agile transformations. An important part of that work is defining successful partnerships. When you look around, it is quite difficult to find guidance on how to structure the relationship between vendor and company well. In this post I want to provide three things to look out for when engaging a systems integrator or other IT delivery partner. Engagements should consider these elements to arrive at a mutually beneficial commercial construct.

A focus on day rates is dangerous

We all know that more automation is better, so why do many companies evaluate the ‘productivity’ of a vendor by its day rates? Organisations are normally organised roughly in a pyramid shape (but the model works for other structures as well).

It is quite easy to do the math when it comes to more automation. The activities we automate are usually low-skilled or at least highly repeatable, and they tend to be performed by people with lower costs to the company. If we automate more of these tasks, our ‘pyramid’ becomes smaller at the bottom. What does this do to the average day rate? It brings it up, of course: the overall cost goes down but the average day rate goes up.

[Figure: pyramid-shaped organisation, narrowing at the bottom as low-level work is automated]
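
To make the arithmetic concrete, here is a minimal sketch (headcounts and rates are made-up illustrations, not client data) showing how automating away part of the lowest-cost tier lowers the total cost while raising the average day rate:

```python
# Hypothetical delivery pyramid: (role, headcount, day rate in $)
before = [("senior", 2, 1200), ("mid", 5, 800), ("junior", 20, 400)]
# Automation removes half of the junior-level, highly repeatable work.
after = [("senior", 2, 1200), ("mid", 5, 800), ("junior", 10, 400)]

def cost_and_avg_rate(pyramid):
    total_cost = sum(count * rate for _, count, rate in pyramid)
    headcount = sum(count for _, count, _ in pyramid)
    return total_cost, total_cost / headcount

for label, shape in [("before", before), ("after", after)]:
    cost, avg = cost_and_avg_rate(shape)
    print(f"{label}: daily cost {cost}$, average day rate {avg:.0f}$")
# before: daily cost 14400$, average day rate 533$
# after: daily cost 10400$, average day rate 612$
```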

You should therefore look for contracts that work on overall cost, not day rates. A drive for lower day rates incentivises manual activities rather than automation. Beyond that, it is also beneficial to incentivise automation even further by sharing its upside (e.g. gain sharing on the savings from automation, so that the vendor makes automation investments on its own initiative).
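
As a sketch of what such a gain-sharing clause could compute (the 50/50 split and the numbers are illustrative assumptions, not a recommendation):

```python
def gain_share(baseline_cost, actual_cost, vendor_share=0.5):
    """Split verified savings against an agreed baseline between vendor and client."""
    savings = max(baseline_cost - actual_cost, 0)
    return savings * vendor_share, savings * (1 - vendor_share)

# Vendor invests in automation and brings a 1,000,000$ baseline down to 800,000$:
vendor_bonus, client_saving = gain_share(1_000_000, 800_000)
print(vendor_bonus, client_saving)  # 100000.0 100000.0
```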

Deliverables are overvalued

To this day many organisations structure contracts around deliverables. This is not in line with modern delivery. In Agile or iterative projects we are potentially never fully done with a deliverable, and we certainly shouldn’t encourage payments for things like design documents. We should focus on the functionality that is going live (and is documented) and structure the release schedule so that frequent releases coincide with regular payments to the vendor. There are many ways to ‘measure’ functionality that goes live, such as story points, value points, or function points; each of them is better than deliverable-based payments.

Here is an example payment schedule:

  • We have 300 story points to be delivered in 3 iterations and 1 release to production; 1000$ total price.
  • 10%/40%/30%/20% payment schedule (first payment at kick-off, the second as stories are done in iterations, the third once stories are released to production, the last after a short warranty period).
  • 10% = 100$ on signing the contract
  • Iteration 1 (50 pts done): 50/300 * 0.4 * 1000 ≈ 67$
  • Iteration 2 (100 pts done): 100/300 * 0.4 * 1000 ≈ 133$
  • Iteration 3 (150 pts done): 150/300 * 0.4 * 1000 = 200$
  • Hardening & go-live: 30% = 300$
  • Warranty complete: 20% = 200$
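
The same schedule expressed as a minimal script (a sketch of the example above; points per iteration and percentages match the list):

```python
TOTAL_PRICE = 1000   # $
TOTAL_POINTS = 300
SPLIT = {"kick-off": 0.10, "iterations": 0.40, "go-live": 0.30, "warranty": 0.20}
POINTS_PER_ITERATION = [50, 100, 150]  # points completed in each iteration

payments = [("kick-off", SPLIT["kick-off"] * TOTAL_PRICE)]
for i, pts in enumerate(POINTS_PER_ITERATION, start=1):
    payments.append((f"iteration {i}",
                     pts / TOTAL_POINTS * SPLIT["iterations"] * TOTAL_PRICE))
payments.append(("go-live", SPLIT["go-live"] * TOTAL_PRICE))
payments.append(("warranty", SPLIT["warranty"] * TOTAL_PRICE))

for milestone, amount in payments:
    print(f"{milestone}: {amount:.2f}$")
# The three iteration payments sum to exactly 40% of the total price.
```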

BlackBox development is a thing of the past

In the past it was a selling point for a vendor to take care of things for you in a more or less “blackbox” model: you trusted them to use their methodology, their tools and their premises to deliver a solution for you. Nowadays, understanding your systems and your business well is an important factor in remaining relevant in the market. You should therefore ask your vendor to work closely with people in your company, so that you keep key intellectual property in house and bring the best of both parties together: your business knowledge and knowledge of your application architecture, combined with the delivery capabilities of the systems integrator. A strong partner will be able to help you deliver beyond your internal capability and should be able to do so in a transparent fashion. It will also reduce your risk of nasty surprises. And last but not least, one of the most important things in Agile is for the delivery team to work closely with the business. That is just not possible if vendor and company are not working together closely and transparently. A contract should reflect the commitment from both sides to work together as it relates to making people, technology and premises available to each other.

One caveat to this guidance is that for applications that are due for retirement you can opt for a more traditional contractual model, but for systems critical to your business you should be strategic about choosing your delivery partner in line with the above.

I have posted some related articles in the past; feel free to read further:

https://notafactoryanymore.com/2014/10/27/devops-and-outsourcing-are-we-ready-for-this-a-view-from-both-sides/

https://notafactoryanymore.com/2015/01/30/working-with-sis-in-a-devopsagile-delivery-model/

https://notafactoryanymore.com/2015/02/26/agile-reporting-at-the-enterprise-level-part-2-measuring-productivity/

Thoughts on State of DevOps report 2016

And there it is – the most recent State of DevOps report, the 2016 edition. If you have read my previous blog posts on these kinds of reports, you will expect a full summary. Sorry to disappoint. This time I focus on the things that stood out to me – we all know DevOps is a good thing, and this report can give you ammunition if you need to convince someone else, but I don’t see the point of reiterating that. Let’s focus on the things we didn’t know or that surprise us.

It is great to see that the report continues to highlight the importance of people and processes in addition to automation. That is very much in line with my practical experience at clients. High-performance organisations have higher Employee Net Promoter Scores (ENPS), which makes sense to me. I think there is an argument to be made that you can use ENPS to identify teams or projects with problems. I would love to test this hypothesis as an alternative to self-assessments or other more involved tools that might cost more, might not be more accurate, and are harder to deploy.

Another key finding is the impact of building quality into your pipeline rather than having it as a separate activity (e.g. no testing as a dedicated phase – I wrote about modern testing here) – but the numbers didn’t really convince me on second look. It’s difficult to get this right, especially as the report has to work with guesstimates from people at all levels of the organisation. But I agree with the sentiment behind it, and there is anecdotal evidence that it holds true. I would love to have more reliable data on this from real studies of work in organisations; it could be very powerful.

This year is the first time DevOpsSec is reflected in the report, and the results are positive, which is great. I have always argued that with the level of automation and logging in DevOps, security should be a lot easier. The report has some very useful advice on how to integrate security all through the lifecycle on page 28.

We continue to see a good proportion of respondents coming from DevOps teams, which is not surprising, as that is the organisational form most larger organisations choose for practical reasons (at least as a transition state), and which flies in the face of the claim that a “DevOps team” is an anti-pattern. Glad to see reality reflected.

On the results side the report presents some pretty impressive numbers on what high performers can do versus low performers. That’s great information, but I would like to see it compared between companies of similar size and complexity – otherwise we compare the proverbial DevOps unicorns with large enterprises, and that is not really a fair comparison, as the difference is then not just in DevOps. The more detailed data shows, in my view, the limitations of the comparison and some “kinks” in the data that are not easy to explain. I am glad they printed this data, as it shows that the researchers don’t massage data to fit their purpose.

I really like how the researchers tried to find evidence for the positive effects of trunk-based development, but I am not convinced this has been fully achieved yet. The same goes for visualising work – I see the point, but the report does not give me more reason and ammunition than I had before.

Similarly, the ROI calculation is a good start, but nothing revolutionary. It’s worth a read, but you will likely not find much new here – reduction in downtime, reduction in outages, increase in value through faster delivery.

Overall a good report, but without much that is revolutionary or new. It is great to see the trending over the years and that the data remains consistent. Looking forward to next year’s edition. And yes, I am writing this against the high expectations set in previous years; it’s difficult to have revolutionary news every year…

Guide to the Guide to Continuous Delivery Vol 3

I am not really objective when I say that I hope you have read the most recent Guide to Continuous Delivery Vol. 3, as I had the honour of contributing an article to it. My article is about mapping out a roadmap for your DevOps journey, and I have an extended and updated blog article on that topic in draft that I will publish soon. There is a lot of really good insight in this guide, and for those with little time or who just prefer the “CliffsNotes”, I want to provide my personal highlights. I won’t go through every article, but I will cover many of them. Besides the articles, the guide provides a lot of information on tooling that can help in your DevOps journey.

Key Research Findings

The first article covers the CD survey that was put together for this guide. Fewer people said they use CD, which might indicate that more people now understand what it really takes to do CD; I take this as a positive sign for the community. Unsurprisingly Docker is very hot, but the survey results make clear that there is a long way to go to make it really work.

Five Steps to Automating Continuous Delivery Pipelines

Very decent guidance on how to create your CD pipeline. The two things that stood out for me are “measure your pipeline”, which is absolutely critical for continuous improvement and potentially crucial for measuring the benefits in your CD business case, and the point that you sometimes need to include manual steps, which is where many tools fall down a bit. Gradually moving from manual to full automation by enabling a mix of automated and manual steps is a very good way to move forward.
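
A minimal sketch of that idea (the step names and timing approach are my own illustration, not from the article): a pipeline runner that measures every step and treats a not-yet-automated step as a manual gate:

```python
import time

def manual_gate(name):
    # Placeholder for a manual step: block until a human confirms completion.
    input(f"[manual] {name} - press Enter when done: ")

def run_pipeline(steps):
    timings = {}
    for name, action in steps:
        start = time.time()
        if callable(action):
            action()            # automated step
        else:
            manual_gate(name)   # manual step, to be automated later
        timings[name] = time.time() - start
    return timings  # feed these numbers into your pipeline metrics / business case

steps = [
    ("build", lambda: time.sleep(0.1)),
    ("deploy to test", lambda: time.sleep(0.1)),
    ("sign-off", None),  # still manual
]
print(run_pipeline(steps))
```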

Adding Architectural Quality Metrics to Your CD Pipeline

An interesting article on measuring more than just functional tests in your pipeline. It stresses the point of including performance and stress testing in the pipeline: even without full-scale early environments, you can get good insights from measuring performance there and using the relative change between builds to investigate areas of concern.
Other information can provide valuable insight into architectural weaknesses as well, such as the number of calls to external systems, the response time and size of those calls, the number of exceptions, and CPU time.
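
As a sketch of how those numbers could be collected (the wrapper and counters are my own illustration, not the article’s mechanism):

```python
import time
from collections import defaultdict

metrics = defaultdict(lambda: {"calls": 0, "total_seconds": 0.0, "exceptions": 0})

def track_external_call(system):
    """Record call count, latency and exceptions for calls to an external system."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            m = metrics[system]
            m["calls"] += 1
            start = time.time()
            try:
                return fn(*args, **kwargs)
            except Exception:
                m["exceptions"] += 1
                raise
            finally:
                m["total_seconds"] += time.time() - start
        return wrapper
    return decorator

@track_external_call("billing-service")  # hypothetical downstream system
def fetch_invoice(invoice_id):
    time.sleep(0.05)  # stand-in for a real remote call
    return {"id": invoice_id}

fetch_invoice(42)
print(dict(metrics))  # compare the relative change between builds, not absolutes
```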

How to Define Your DevOps Roadmap – well, read the whole article 😉

Four Keys to Successful Continuous Delivery

Three of the keys are quite common: culture, automation and cloud. What I was happy to see was the point about open and extensible tools. Hopefully over time more vendors will realise that this is the right way to go.

A scorecard for measuring ROI of Continuous Delivery Investment

An interesting short model for measuring ROI; it uses lots of research-based numbers as inputs to the calculations. It could come in handy for anyone who wants a high-level business case.

Continuous Delivery & Release Automation for Microservices

I really liked this article, which offers quite a few handy tips for managing Microservices that match my own ideas to a large degree. For example, you should only get into Microservices if you already have decent CI and CD skills and capabilities. Microservices require more governance than traditional architectures, as you will likely deal with more technology stacks, additional testing complexity, and a different ops model. To deal with this you need a real-time view of the status and dependencies of your Microservices. The article goes into quite some detail and provides a nice checklist.

Top CD resources

No surprise here to see the State of DevOps report, The Phoenix Project and the Continuous Delivery book on this list.

Make sure to check out the DevOps checklist at devopschecklist.com – there are lots of good questions on it that can make you think about possible next steps for your own adoption.

Continuous Delivery for Containerized Applications

A lot of common ground gets revisited in this article, such as the need for immutable Microservices/containers, canary launches and A/B testing. What I found great about this article is the description of a useful tagging mechanism to govern your containers through the CD pipeline.
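
The article describes its own mechanism; as a rough sketch of the general idea (stage names and registry path are my assumptions), promotion through the pipeline can be recorded by re-tagging the same immutable image, identified by its digest:

```python
import subprocess

STAGES = ["built", "tested", "staged", "production"]

def promote(image, digest, from_stage, to_stage):
    """Re-tag an immutable image to record that it passed a pipeline stage."""
    assert STAGES.index(to_stage) == STAGES.index(from_stage) + 1, "no skipping stages"
    subprocess.run(["docker", "tag", f"{image}@{digest}", f"{image}:{to_stage}"], check=True)
    subprocess.run(["docker", "push", f"{image}:{to_stage}"], check=True)

# promote("registry.example.com/shop/cart", "sha256:...", "tested", "staged")
```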

Securing a Continuous Delivery Pipeline

Some practical guidance on leveraging the power of CD pipelines to increase security, a topic that was also just discussed at the DevOps Forum in Portland, which means we should see more guidance coming out later in the year. The article highlights that tools alone will not solve all your problems, but they can provide real insights. When starting with tools like SonarQube, be aware that the initial output can be confusing and it will take a while to make sense of all the information. Using the tools right will free up time for more meaningful manual inspections where required.

Executive Insights on Continuous Delivery

This article gathers insights from interviews with 24 executives. Not surprisingly, they mention that it is much easier to start in a greenfield environment than in a brownfield one. Even though everyone agrees that tools have significantly improved, the state of CD tooling is still not where people would like it to be, and many organisations still have to create a lot of homemade automation. The “elephant in the room” raised at the end is that people in general still rely on intuition for the ROI of DevOps; there is no obvious recommendation for how to measure it scientifically.

Is DevOps the new black?

(This article was first published in the JAXmagazine)

DevOps is everywhere: you can buy DevOps tools from vendors that used to sell ALM tools, you can buy DevOps from cloud vendors who used to sell you virtual infrastructure, and you can buy DevOps from consulting companies who used to sell you IT strategies… How come, on close inspection, a lot of the DevOps practices and tools look eerily similar to those earlier products they tried to sell you?

I have been working in what used to be called Development Architecture for all of my career – developing IDE extensions and compilers in the beginning, and later setting up tooling solutions to support delivery. The reality is that tooling and methodology will only ever be part of the answer. The hard truth is that engineering skills are important both in your DevOps team (yes, I dare call the team by this name, but feel free to call it tools team, platform team, technical service team, system team or any other name you feel is most appropriate and will offend the fewest people) and in your development and operations teams.

DevOps is both the best and the worst thing that could have happened to people who work in this space. On the one hand, all of a sudden the work we do has become sexy. For a long time, looking for labour arbitrage through offshoring or investing in proprietary or commercial off-the-shelf products was the answer to increasing complexity and cost pressure in projects. Good old engineering practices and supporting developers with the right tools were not sexy. Now DevOps is the new black, and people want to talk to me about supporting high-performance delivery through engineering practices and the right tooling for developers. Making this important aspect of IT delivery more visible is certainly great.

But… there is the dangerous flip side. All of a sudden everyone is doing it. In my consulting role I spend quite a bit of time performing assessments for clients, and I come across the Dunning-Kruger effect far more often than I expected. For those of you who don’t know it, check it out on Wikipedia (https://en.wikipedia.org/wiki/Dunning–Kruger_effect) – in short, it is the common pattern that people who don’t know much about a certain area believe themselves to be better at it than they really are. In my case the most common symptom of Dunning-Kruger involves Continuous Integration. I walk into an organisation, start working through my assessment framework, and ask the following question: “Do you practice Continuous Integration?” The answer is “yes we do”. Here I could move on, tick the box for Continuous Integration and ask about the next practice. But in my experience Continuous Integration is actually quite difficult, so I dig a bit deeper: “How do you know you are doing Continuous Integration?” The answer: “We have Jenkins as our Continuous Integration server.” Okay, so they use a common tool for CI. One more question can’t hurt: “What do you do with Jenkins and how often does it run?” And here Dunning-Kruger hits me: “We run it weekly for our development branch”. Ah yes, here we are again, I think, and my assessment turns into an education exercise. Truth be told, I think this is what good assessments actually do: they are an educational tool. For some it is a tool for self-reflection; for others it serves as a helpful guide for external discussions with a coach. But all too often exactly the described contradiction between self-perception (“We practice Continuous Integration”) and reality (“We have a Jenkins server and run it occasionally”) leads to people, teams or organisations saying that they are doing DevOps.

Of course I am not free of blame. Using the term DevOps is often a handy shortcut for the large set of practices that underpin DevOps, as well as the cultural shift required for it. And when are you allowed to say you are doing DevOps anyway? For me the best way to deal with this is to say that we are on the DevOps journey. And we all are. Everyone who is involved in the delivery of IT solutions is on the DevOps journey. It’s hardly ever a straight line; often people wander off the path or get lost on the way and drift further and further from the goal, but we are all on the DevOps journey to improve IT delivery. Because after all, that is what DevOps is all about. And yes, I don’t mind it being the new black, and I accept the negatives that come with the hype if it means we can have the discussion about improving delivery; not just because it matters to our businesses and clients, but because it makes IT delivery a more humane place to be, removes stress from people’s lives, makes work more enjoyable, and provides all of us in society with better solutions for all our problems, from the mundane (how to better post pictures on Facebook, for example) to the impactful (how to support families in disaster areas with better information through crowdsourcing, for example).

Join me on the journey, it might be a long one, but one that is worth taking the next step on…

A personal DevOps Journey or A Never-Ending Journey to Mastery

I spent the last few days at a technical workshop where I spoke about Agile and DevOps, and while preparing my talks I did a bit of reflecting. I realised that the story of how I reached my current level of understanding might be a good illustration of the challenges and the solutions we have come up with so far. Of course everyone’s story differs, but this is mine, and sharing it with the community might help some people who are on the journey with me.

As a picture speaks more than a thousand words, here is the visual I will use to describe my journey.
(Note: the Y-axis shows success, or as I like to call it, the “chance of getting home on time”; the X-axis is the timeline of my career.)

[Figure: my personal journey through the Waterfall, Agile, DevOps and Lean phases]

The Waterfall Phase – a.k.a. the Recipe Book

When I joined the workforce from university, after doing some research into compilers, self-driving cars and other fascinating topics I was allowed to explore in the IBM research labs, I was immediately thrown into project work. As was the custom, I went to corporate training and learned about our waterfall method and the associated processes and templates. I was amazed: project work seemed so simple. I had the methodology, processes and templates, and all I had to do was follow them. I set out to master this methodology, and initial success followed the better I got at it. I had discovered the “recipe book” for success that described exactly how everyone should behave. Clearly I was off to a successful career.

The Agile Phase – a.k.a. A Better Recipe Book

All was well until I took on a project for which someone else had created a project plan that saw the project completed in 12 weeks’ time. I inherited the project plan and Gantt chart and was off to deliver. Very quickly it turned out that the requirements were very unclear and that even the customer didn’t know everything we needed to know to build a successful solution. The initial 4 weeks went by, and true to form I communicated 33% completion according to the timeline, even though we clearly hadn’t made as much progress as we should have. Walking out of the status meeting I realised that this could not end well. I set up a more informal catch-up with my stakeholders and told them about the challenge. They understood the challenge ahead and asked me what to do. Coincidence came to my rescue: on my team we had staffed a contractor who had worked with Agile before, and after a series of coffees (and beers, for that matter) he had me convinced to try this new methodology. As a German I lived up to the stereotype, finding it very hard to let go of my beloved Gantt charts, project plans and the detailed percent-complete status I had received from my team every week. We quickly got into a rhythm with our stakeholders and delivered increments of the solution every two weeks. I slowly let go of some of the learned behaviour of a waterfall project manager and became a scrum master. The results were incredible: the team culture changed, the client was happier, and even though we delivered the complete solution nowhere close to the 12 weeks (in fact it was closer to 12 months), I was convinced that I had found a much better “recipe book” than I had before. Clearly, if everyone followed this recipe book, project delivery would be much more successful.

The DevOps Phase – a.k.a. the Rediscovery of Tools

And then a bit later another engagement came my way. The client wanted to get to market faster, and we had all kinds of quality and expectation-setting issues. So clearly the Agile “recipe book” would help again. And yes, our first projects were a resounding success; we quickly grew our Agile capability, and more and more teams and projects adopted Agile. It quickly became clear, however, that we could not reduce the time to market as much as we would have liked, and the Agile “recipe book” often created a kind of cargo cult: people stood up in the morning, used post-its, and considered themselves successful Agile practitioners. Focusing on the time-to-market challenge, I put a team in place to create the right tooling to support the Agile process through an Agile Lifecycle Management (ALM) system, and introduced DevOps practices (well, back then we didn’t call it DevOps yet). The intention was clear: as an engineer I thought we could solve the problem with tools and force people to follow our “recipe book”. Early results were great: we saved a lot of manual effort, tool adoption was going up, and we could derive status from our ALM. In short, my world was fine, and I went off to do something different. When I came back to this project a while later, to my surprise the solution I had put in place had deteriorated. Many of the great things I had set up had disappeared or changed. I wanted to understand what had happened and spent some time investigating. It turned out that the people involved had made small decisions along the way that slowly lost sight of the intention behind the tooling solution and the methodology. No big changes, just death by a thousand cuts. So how am I going to fix this one…

The Lean Phase – a.k.a. Finally I Understand (or Think I do for Now)

Something I should have known all along became clearer and clearer to me: methodology and tools will not change your organisation. They can support change, but culture is the important ingredient that was missing. As Drucker says: “Culture eats strategy for breakfast”. It is so very true. But how do you change culture… I am certainly still on this journey, and cultural change management is clearly the next frontier for me. I have learned that I need to teach people the principles behind Agile and DevOps, which include elements of Lean, Systems Thinking, the Theory of Constraints, Product Development Flow and Lean Startup thinking. But how do I really change the culture of an organisation? How do I avoid the old saying that “to change people, you sometimes have to change (read: replace) people”? As an engineer I am pretty good with the process, tools and methodology side, but the real challenge seems to lie in organisational change management and organisational process design. And I wonder whether this is really the last frontier, or whether there will be a next challenge right after I have mastered this one…

The good news is that many of us are on this journey together, and I am confident that on the back of the great results we achieved with tools and methodology alone, truly great things lie ahead of us still as we master the cultural transformation towards becoming DevOps organisations.

The winding road to DevOps maturity

[Image: hairpin curve on a mountain road]

I have noticed a trend in the evolution of teams when it comes to DevOps maturity over the years, which I now call the winding road to maturity. Recently I was using a DevOps model I designed a while ago to describe progress over time, and I realised that with the advent of cloud-based DevOps I have to update it. So I thought I’d share my observations and see what other people think. Surprisingly, this pattern appears in a lot of different work areas: deployment automation, test automation and many others. I will use the deployment automation scenario here, but believe me, it applies to many other technical aspects as well.

 

Here is my current model, which I have shared with many clients and colleagues over time:

[Figure: the four-stage maturity curve]
Stage 1: “Do it All Manually” – We do everything manually each time. We execute all the steps in our checklist for deployments, tests, or whatever we consider our daily job. There is not a lot of optimisation at this stage, and it all feels very heavy-handed.
Stage 2: “Do the Necessary Manually” – Over time we realise there are many steps we can skip if we do a quick impact assessment and, based on that assessment, only execute the steps that are required (e.g. not redeploying unchanged components or not executing tests for functionality that has not changed). We are now in a world where each deployment looks different based on our assessments – which is difficult with a high turnover of resources or when transitioning to newcomers, as they won’t have the skills and knowledge to do reliable assessments.
Stage 3: “Automate the one way” – Then we discover automation. However, automating the impact assessments is more complicated than automating one common process, so we go back to running all steps each time. This reduces the effort for deployments but might increase the overall duration.
Stage 4: “Optimise for performance” – Once we have mastered automation, we start to look for ways to optimise it. We find ways of identifying only the steps required for each activity and dynamically create the automation playbook that gets executed. Now we have reduced effort and continue to reduce overall duration as well. We are an optimising organisation built on reliable automation.
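
A minimal sketch of the Stage 4 idea (component names and the step catalogue are illustrative assumptions):

```python
ALL_STEPS = {
    "web":      ["build web", "deploy web", "smoke-test web"],
    "services": ["build services", "deploy services", "run service tests"],
    "database": ["apply db migrations", "verify schema"],
}

def build_playbook(changed_components):
    """Automated impact assessment: include only the steps for components that changed."""
    playbook = []
    for component, steps in ALL_STEPS.items():
        if component in changed_components:
            playbook.extend(steps)
    return playbook

# The changed components could be derived from the commits in this release:
print(build_playbook({"web", "database"}))
# ['build web', 'deploy web', 'smoke-test web', 'apply db migrations', 'verify schema']
```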

Here is where my story usually ended, and I would have said that this is an optimising end state and that we don’t go around another curve again. However, I believe that cloud-based DevOps goes one step further, requiring me to update my model accordingly:
[Figure: the extended maturity curve with the Cloud Barrier]
In this new model we do everything each time again. Let me explain. In the deployment automation scenario, rather than only making the required incremental changes to an environment, we completely instantiate the environment (infrastructure as code). In the test automation scenario, we create several environments in parallel and run tests in parallel to reduce time, rather than basing a subset of tests on an impact assessment. We can afford this luxury because we have passed a new threshold in my model, which I call the Cloud Barrier.

This update to my model was long overdue, to be honest, and it just goes to show that when you work with a model for a long time, you don’t notice when it is outdated; you just try to make it fit reality. Hopefully my updated model helps to chart your organisational journey as well. It is possible to short-cut the winding road to maturity, but as with cycling up a mountain, a short-cut will be much steeper and will require extra effort. See you at the top of the mountain!

Picture: Hairpin Curve – Wikimedia
License: Creative commons

DevOps and Outsourcing: Are we ready for this? – A view from both sides

At the recent DevOps Enterprise Summit #DOES14 (http://devopsenterprisesummit.com) I was surprised to hear so little about the challenges and opportunities of working with systems integrators. The reality is that most large organisations work with systems integrators, and their DevOps journey needs to include them. With this blog post I want to start the conversation and let you in on my world, because I think this is an important discussion if we want DevOps to become the new normal, the “traditional”…

Let’s start with terminology – I think you will struggle with the culture change if you call the other party a system integrator, outsourcer, vendor or 3rd party. I prefer the term delivery partner. Anything other than a partnership mindset will not achieve the culture that you need to establish on both sides. I will talk about the culture aspect later in the post, but terminology can make a real difference; consider the difference between the terms tester and quality engineer.

A bit of my personal history to provide some context – feel free to skip to the next paragraph if you are after the meat of this blog post.
I have been working for a large systems integrator for many years now and have been part of DevOps journeys on both sides: as the delivery partner (notice the choice of words ;-)) and in staff augmentation roles dealing with my own company and other delivery partners as a client. To use the metaphor from #DOES14: I am more of a horse whisperer than a unicorn handler. And I wouldn’t want it any other way. That means my DevOps journeys deal mostly with systems of record (think mainframe, Siebel, etc.), and yes, once in a while I get to play with the easier stacks like Java and .NET. So my perspective is really from a large enterprise context, and while the speed we move at is sometimes tiring, it is a fascinating place to be and gives you great satisfaction when you succeed. Together with passionate people on both the client side and on my team, we have saved one client over 5000 days of manual effort per year, and reduced deployment times from over 2 days to less than 3 hours at another client. This is amazing to see, and I cannot wait to tackle each new challenge. One item on my bucket list is to drill open SAP and “DevOps-ify” it; I just need to find the right organisation that is willing to go there. But enough about myself.

Working with Delivery Partners who develop applications for you
The elephant in the room is probably setting up the right contract with your delivery partner. This is not easy – Agile coaches will tell you that fixed price is evil, but if you go for a T&M model, your delivery partner has no incentive to reduce manual labour, as they get paid for each manual step. I have seen both types of contract work and fail, so clearly it is not the type of contract that makes the difference. What is it then? The relationship and the alignment of incentives and priorities. I will talk about culture separately, so let’s look at incentives. One concept that works reasonably well is a performance-based incentive (e.g. baseline the effort for the “DevOps service” and then create an incentive for improvement, such as sharing the benefit across both organisations: the SI is happy to increase its margin and the client saves money; a true win-win commercial construct).

Another important aspect is culture. In your DevOps journey, don’t forget to consider the culture of your delivery partners and the way your own organisation treats them. Too often outsourcing partners are not involved in the cultural shift, don’t receive the communications, and aren’t invited to culture-building activities. Try to understand what drives them, connect their DevOps champions with yours, and give them the opportunity to provide input. And last but not least, celebrate together!

The third aspect to consider is technical skills. It is not necessarily true that your delivery partner has the required technical skills available. Remember that you probably incentivised your partner for a long time to find staffing models that support a very low daily rate. This doesn’t change quickly, and if you want to shift it you will have to address the need for technical skills and either create a structured upskilling programme or provide technical coaching from your side. Don’t just make it their problem; make it a joint problem and address it together, including any required updates to the commercial arrangements. And as is true for managing your own team: assume positive intent and work from that basis.

Of course, if you don’t think the culture of the SI is DevOps-aligned (and you as one client will not be able to change that easily, trust me), then you should look for a partner who is in it with you. Going in the DevOps direction is not always easy, so you should choose the right partner for the tricky path ahead. This is true for your Agile adoption and certainly for your DevOps adoption as well.

When to work with an SI in the DevOps team
Besides working with SIs who develop and maintain applications, there is also a case to be made for getting help from an SI to implement DevOps in your organisation. This is what I do for a living, and I do think we can add real value. First of all, I don’t think you can outsource the DevOps implementation completely (at least I would advise against it), but you can create truly mutually beneficial partnerships. What I enjoy about being an SI (okay, that sounds weird), about working for an SI (that’s better), is that I have a huge internal network of people with similar challenges and with solutions for them. If I want to find the best way to automate Siebel deployments, I have many colleagues who have been there before or who are doing it right now. Having access to this network and the associated skills can be very beneficial for clients. And if you set up the partnership right, both organisations benefit. I have helped organisations set up the team, the processes and the platform, and enabled them to operate it going forward. And nowadays, with offshoring, we can also be a long-term part of the team to help with ongoing improvements. The reality is that not everyone has the in-house capability to build this up, and a bit of external help can go a long way. If you want to do it all in-house you can grab a couple of coaches to augment your team, but if you want someone with skin in the game, find a really good SI partner.

I will stop here, although there is more to be said. In one of my next posts I will focus on the inside view of an SI: what does it take to transition towards DevOps if you are fully dependent on your client with regard to process, flow of work, etc.? Is there something that can be done? I will tell you about one of my projects to give you an idea and to further the understanding of the role of an SI in the DevOps world.

Update 2016:
I have seen a bit more conversation about this now; the links below are worth reading if you want a few more perspectives:
Will DevOps kill outsourcing?
The Year of Insourcing
DevOps – The new outsourcing