Category Archives: DevOps

A personal DevOps Journey or A Never-Ending Journey to Mastery

I spent the last few days at a technical workshop where I spoke about Agile and DevOps, and while preparing my talks I did a bit of reflection. What I realised is that the story of how I reached my current level of understanding might be a good illustration of the challenges and solutions we have come up with so far. Of course everyone’s story differs, but this is mine, and sharing it with the community might help some people who are on the journey with me.

As a picture speaks more than a thousand words, here is the visual I will use to describe my journey.
(Note: The Y-axis shows success or as I like to call it the “chance of getting home on time”, the X-axis is the timeline of my career)

Personal Journey

The Waterfall Phase – a.k.a. the Recipe Book

When I joined the workforce from university, after doing some research into compilers, self-driving cars and other fascinating topics that I was allowed to explore in the IBM research labs, I was immediately thrown into project work. And of course, as was custom, I went to corporate training and learned about our waterfall method and the associated process and templates. I was amazed; project work seemed so simple. I got the methodology, processes and templates, and all I had to do was follow them. I set out to master this methodology, and initial success followed the better I got at it. I had discovered the “recipe book” for success that described exactly how everyone should behave. Clearly I was off to a successful career.

The Agile Phase – a.k.a. A Better Recipe Book

All was well, until I took on a project for which someone else had created a project plan that saw the project completed in 12 weeks’ time. I inherited the project plan and Gantt chart and was off to deliver this project. Very quickly it turned out that the requirements were very unclear and that even the customer didn’t know everything we needed to know to build a successful solution. The initial 4 weeks went by, and true to form I communicated 33% completion according to the timeline, even though we clearly hadn’t made as much progress as we should have. Walking out of the status meeting I realised that this could not end well. I set up a more informal catch-up with my stakeholders and told them about the challenge. They agreed, understood the challenge ahead and asked me what to do. Coincidence came to my rescue. On my team we had staffed a contractor who had worked with Agile before, and after a series of coffees (and beers, for that matter) he had me convinced to try this new methodology. As a German I lived very much up to the stereotype, as I found it very hard to let go of my beloved Gantt charts and project plans and the detailed percentage-complete status that I had received from my team every week. Very quickly we got into a rhythm with our stakeholders and delivered increments of the solution every two weeks. I slowly let go of some of the learned behaviour of a waterfall project manager and slowly became a scrum master. The results were incredible: the team culture changed, the client was happier, and even though we delivered the complete solution nowhere close to the 12 weeks (in fact it was closer to 12 months), I was convinced that I had found a much better “recipe book” than I had before. Clearly, if everyone followed this recipe book, project delivery would be much more successful.

The DevOps Phase – a.k.a. the Rediscovery of Tools

And then a bit later another engagement came my way. The client wanted to get to market faster, and we had all kinds of quality and expectation-setting issues. So clearly the Agile “recipe book” would help again. And yes, our first projects were a resounding success, and we quickly grew our Agile capability as more and more teams and projects adopted Agile. It quickly became clear, however, that we could not reduce the time to market as much as we liked, and often the Agile “recipe book” created a kind of cargo cult – people stood up in the morning, used post-its and considered themselves successful Agile practitioners. Focusing on the time-to-market challenge, I put a team in place to create the right Agile tooling to support the Agile process through an Agile Lifecycle Management system and introduced DevOps practices (well, back then we didn’t call it DevOps yet). The intention was clear: as an engineer I thought we could solve the problem with tools and force people to follow our “recipe book”. Early results were great: we saved a lot of manual effort, tool adoption was going up, and we could derive status from our ALM. In short, my world was fine. I went off to do something different. Then a while later I came back to this project, and to my surprise the solution I had put in place earlier had deteriorated. Many of the great things I had put in place had disappeared or changed. I wanted to understand what had happened and spent some time investigating. It turned out that the people involved had made small decisions along the way that slowly lost sight of the intention of the tooling solution and the methodology we used. No big changes, just death by a thousand cuts. So how am I going to fix this one…

The Lean Phase – a.k.a. Finally I Understand (or Think I do for Now)

Something that I should have known all along became clearer and clearer to me: methodology and tools will not change your organisation. They can support it, but culture is the important ingredient that was missing. As Drucker says: “Culture eats strategy for breakfast”. It is so very true. But how do you change culture… I am certainly still on this journey, and cultural change management is clearly the next frontier for me. I have quickly learned that I need to teach people the principles behind Agile and DevOps, which includes elements of Lean, Systems Thinking, the Theory of Constraints, Product Development Flow and Lean Startup thinking. But how do I really change the culture of an organisation? How do I avoid the old saying that “to change people, you sometimes have to change (read: replace) people”? As an engineer I am pretty good with the process, tools and methodology side, but the real challenge seems to lie in organisational change management and organisational process design. And I wonder whether this is really the last frontier, or whether there will be a next challenge right after I have mastered this one…

The good news is that many of us are on this journey together, and I am confident that on the back of the great results we achieved with tools and methodology alone, truly great things lie ahead of us still as we master the cultural transformation towards becoming DevOps organisations.

The winding road to DevOps maturity


I have noticed a trend in the evolution of teams when it comes to DevOps maturity over the years, which I now call the winding road of maturity. Recently I was using a DevOps model, which I designed a while ago, to describe progress over time, and realised that with the advent of cloud-based DevOps I have to update my model. So I thought I’d share my observations and see what other people think. Surprisingly, this pattern is seen in a lot of different work environments: deployment automation, test automation and many others. I will use the deployment automation scenario here, but believe me, it applies in many other technical aspects as well.


Here is my current model, which I have shared with many clients and colleagues over time:

Stage 1: “Do it All Manually” – We do everything manually each time. We do all the steps in our checklist for deployments or tests or whatever it is that we consider to be our daily job. There is not a lot of optimisation at this stage, and it all feels very heavy-handed.
Stage 2: “Do the Necessary Manually” – Over time we realise that there are many steps we can skip if we do a quick impact assessment and, based on that assessment, only execute the steps that are required (e.g. not redeploying unchanged components or not executing tests for functionality that has not changed). We are now in a world where each deployment looks different based on our assessments – this is difficult if there is a high turnover of resources or when transitioning to newbies, as they wouldn’t have the skills/knowledge to do reliable assessments.
Stage 3: “Automate the One Way” – Then we discover automation. However, automating the impact assessment is more complicated than automating one common process, so we go back to running all steps each time. This reduces the effort for deployments but might increase the actual duration.
Stage 4: “Optimise for performance” – Once we have mastered automation we start to look for ways to optimise this. We find ways of identifying only the steps that are required for each activity and dynamically create the automation playbook that gets executed. Now we have reduced effort and are continuing to reduce overall duration as well. We are an optimising organisation based on reliable automation.
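
The Stage 4 idea of a dynamically generated playbook can be sketched in a few lines. This is a minimal illustration, not a real tool: all component names, file conventions and step lists below are invented for the example.

```python
# Sketch of Stage 4: derive only the required deployment steps from a
# change set, then assemble the execution "playbook" dynamically.
# Components, steps and file conventions are illustrative assumptions.

ALL_STEPS = {
    "web":      ["stop-web", "deploy-web", "start-web", "smoke-test-web"],
    "database": ["backup-db", "run-migrations", "verify-db"],
    "batch":    ["deploy-batch", "rerun-failed-jobs"],
}

def impact_assessment(changed_files):
    """Map changed files to the components they impact (simplified)."""
    impacted = set()
    for f in changed_files:
        if f.endswith(".sql"):
            impacted.add("database")
        elif f.startswith("web/"):
            impacted.add("web")
        elif f.startswith("batch/"):
            impacted.add("batch")
    return impacted

def build_playbook(changed_files):
    """Assemble only the steps needed for this change set."""
    playbook = []
    impacted = impact_assessment(changed_files)
    for component in ALL_STEPS:          # keep a stable execution order
        if component in impacted:
            playbook.extend(ALL_STEPS[component])
    return playbook

print(build_playbook(["web/index.html", "migrations/001_add_table.sql"]))
```

The key property is that an unchanged component contributes no steps at all, which is exactly what reduces both effort and duration at this stage.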

Here is where my story usually ended and I would have said that this is an optimising end-state and that we don’t really move around another curve again. However, I believe that Cloud-based DevOps goes one step further requiring me to update my model accordingly:
In this new model we do everything each time again. Let me explain. In the scenario of deployment automation, rather than only making the required incremental changes to an environment, we completely instantiate the environment from scratch (infrastructure as code). In the scenario of test automation, we create several environments in parallel and run tests in parallel to reduce time, rather than basing a subset of tests on an impact assessment. We can afford this luxury now because we have reached a new barrier in my model, which I call the Cloud Barrier.
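
To make the “do everything each time” idea concrete, here is a toy sketch of the infrastructure-as-code pattern. The `destroy`/`provision` functions are stand-ins for whatever real IaC tooling you use; nothing here is a real API.

```python
# Sketch of the post-Cloud-Barrier stage: instead of computing an
# incremental change, tear the environment down and rebuild it entirely
# from a declarative definition. Names and specs are invented examples.

ENVIRONMENT_DEFINITION = {
    "web-server": {"count": 2, "size": "small"},
    "db-server":  {"count": 1, "size": "large"},
}

def destroy(environment):
    environment.clear()          # tear everything down, every time

def provision(definition):
    # Build the environment purely from the definition; configuration
    # drift cannot survive, because nothing from before is kept.
    return {name: dict(spec) for name, spec in definition.items()}

def deploy(environment):
    destroy(environment)
    return provision(ENVIRONMENT_DEFINITION)

# Even a badly drifted environment comes back matching the definition.
env = deploy({"web-server": {"count": 1, "size": "tiny", "drifted": True}})
print(env)
```

The design point is that correctness comes from the definition alone, so the “impact assessment” of the earlier stages simply disappears.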

This update to my model was long overdue, to be honest, and it just goes to show that when you work with a model for a long time you don’t realise when it is outdated and you just try to make it fit reality. Well, hopefully my updated model helps to chart out your organisational journey as well. It is possible to short-cut the winding road to maturity, but as is the case with cycling up a mountain, a short-cut will be much steeper and will require some extra effort. See you at the top of the mountain!

Picture: Hairpin Curve – Wikimedia
License: Creative commons

DevOps and Outsourcing: Are we ready for this? – A view from both sides

At the recent DevOps Enterprise Summit #DOES14 (http://devopsenterprisesummit.com) I was surprised to hear so little about the challenges and opportunities of working with systems integrators. The reality is that most large organisations work with systems integrators, and their DevOps journey needs to include them. With this blog post I want to start the conversation and let you in on my world, because I think this is an important discussion if we want DevOps to become the new normal, the “traditional”…

Let’s start with terminology – I think you will struggle with the culture change if you call the other party a system integrator, outsourcer, vendor or 3rd party. I prefer the term delivery partner. Anything other than a partnership mindset will not achieve the right culture that you need to establish on both sides. I will talk about the culture aspect later in the post, but terminology can make such a difference; consider the difference between the terms tester and quality engineer.

A bit of my personal history to provide some context – feel free to skip to the next paragraph if you are after the meat of this blog post.
I have been working for a large systems integrator for many years now and have been part of DevOps journeys on both sides – as the delivery partner (notice the choice of words ;-)) and in staff augmentation roles dealing with my own company and other delivery partners as a client. To use the metaphor from #DOES14 – I am more of a horse whisperer than a unicorn handler. And I wouldn’t want to have it any other way. That means my DevOps journeys deal mostly with Systems of Record (think Mainframe, Siebel, etc.), and yes, once in a while I get to play with the easier stacks like Java and .NET. So my perspective is really from a large enterprise context, and while it is sometimes tiring to see the speed we are moving at, it is a fascinating place to be and gives you such satisfaction when you have success. Together with passionate people on both the client side and on my team, we have saved one client over 5000 days of manual effort per year and reduced deployment times from over 2 days to less than 3 hours at another client. This is amazing to see, and I cannot wait to tackle each new challenge. One item on my bucket list is to drill open SAP and “DevOps-ify” it; I just need to find the right organisation that is willing to go there. But enough about myself.

Working with Delivery Partners who develop applications for you
The elephant in the room is probably setting up the right contract with your delivery partner. This is not easy – Agile coaches will tell you that Fixed Price is evil, but if you go for a T&M model your delivery partner has no incentive to reduce manual labour, as they get paid for each manual step. I have seen both types of contract work and not work, so clearly it’s not the type of contract that makes the difference. But what is it then? The relationship and the alignment of incentives and priorities are what matter. I will talk about culture separately, so let’s look at incentives. One concept that works reasonably well is a performance-based incentive: for example, baseline the effort for the “DevOps service” and then create an incentive for improvement, like sharing the benefit across both organisations. The SI is happy to increase margin and the client saves money – a true win-win commercial construct.
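
To illustrate the gain-share idea with numbers: suppose an agreed baseline and a 50/50 split of the benefit. All figures below are invented purely for illustration.

```python
# Toy gain-share calculation for a performance-based incentive.
# Baseline, rates and split are assumptions, not real contract terms.

baseline_effort_days = 1000   # agreed "DevOps service" baseline per year
actual_effort_days = 600      # effort after automation improvements
day_rate = 500                # blended daily rate

saving = (baseline_effort_days - actual_effort_days) * day_rate
client_share = saving * 0.5   # e.g. a 50/50 split of the benefit
partner_share = saving * 0.5

print(saving, client_share, partner_share)  # 200000 100000.0 100000.0
```

The point of the construct is visible in the numbers: both parties only gain when the manual effort actually goes down, so the incentive to automate is aligned.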

Another important aspect is culture. In your DevOps journey, don’t forget to consider the culture of your delivery partners and the way your own organisation treats them. Too often outsourcing partners are not involved in the cultural shift, don’t get the communications and are not invited to culture-building activities. Try to understand what drives them, connect their DevOps champions with yours, and give them the opportunity to provide input. And last but not least, celebrate together!

The third aspect to consider is technical skills. It is not necessarily true that your delivery partner has the required technical skills available. Remember that you probably incentivised your partner for a long time to find staffing models that support a very low daily rate. This doesn’t change quickly, and if you want to shift it you will have to address the need for technical skills and either create a structured up-skilling program or provide technical coaching from your side. Don’t just make it their problem; make it a joint problem and address it together, including any required updates to the commercial arrangements. And as is true for managing your own team: assume positive intent and work from that basis.

Of course, if you don’t think the culture of the SI is DevOps-aligned (and you as one client will not be able to change that easily, trust me), then you should look for a partner who is in it with you. Going in the DevOps direction is not always easy, so you had better choose the right partner for the tricky path ahead of you. This is true for your Agile adoption and certainly for your DevOps adoption as well.

When to work with an SI in the DevOps team
Besides working with SIs who develop and maintain applications, there is also a case to be made for getting help from an SI to implement DevOps in your organisation. This is what I do for a living, and I do think we can add real value. First of all, I don’t think you can outsource the DevOps implementation completely (at least I would advise against it), but you can create truly mutually beneficial partnerships. What I enjoy about being an SI (okay, that sounds weird) – about working for an SI (that’s better) – is that I have a huge internal network of people with similar challenges and with solutions for them. If I want to find the best way to automate Siebel deployments, I have many colleagues who have been there before or who are doing it right now. Having access to this network and the associated skills can be very beneficial for clients. And if you set up the partnership right, both organisations can benefit. I have helped organisations set up the team, the processes and the platform, and enabled them to operate it going forward. And nowadays, with offshoring, we can also be a long-term part of the team to help with ongoing improvements. The reality is that not everyone has the in-house capability to build this capability, and getting a bit of external help can go a long way. If you want to do it all in-house you can grab a couple of coaches to augment your team, but if you want someone with skin in the game, find a really good SI partner.

I will stop here, although there is more to be said. In one of my next posts I will focus on the inside view of an SI. What does it take to transition towards DevOps if you are fully dependent on your client in regards to process, flow of work, etc.? Is there something that can be done? I will tell you about one of my projects to give you an idea and to further the understanding of the role of an SI in the DevOps world.

Update 2016:
I have seen a bit more conversation about this now; the links below are worth reading if you want a few more perspectives:
Will DevOps kill outsourcing?
The Year of Insourcing
DevOps – The new outsourcing

Impressions from the DevOps Enterprise Summit 2014

I spent the last 3 days at the DevOps Enterprise Summit here in San Francisco and wanted to share my thoughts with those who couldn’t come over here. Overall it was a great conference, especially if you consider that this was the first time it was organised. A few glitches, but that just made it more likeable. And I am sure it will be even better next year. And I have to admit that I hope not to hear about horses and unicorns for a few days…

So what did I take away from the conference? Here are a few of the themes that were pretty common through the 3 days:

  • DevOps Teams – While there were certainly exceptions (most notably from Barclays), most organisations that spoke seemed to have a dedicated shared-services DevOps team to focus on the tooling, governance and support of their DevOps platform. This is certainly my preferred approach as well, and it was good to see that many organisations have had positive experiences with it. But it was also good to hear positive stories from organisations that have chosen a more federated approach and to learn how they approached it successfully.
  • DevOpsSec – While obvious in hindsight, the frequent mention of information security as a critical element in the DevOps journey really brought this home for me. As some of the speakers highlighted, the ability to automate compliance with regulations and policies is so powerful that information security can actually be your ally in the DevOps journey and not a blocker. A great change of perspective for me personally.
  • Balance of Culture and Technical Practices – Not surprisingly, a lot was said about culture change and also about technical practices for DevOps. I think this balance is important and is good for us to keep in mind in our day-to-day work, as we sometimes get too focused on only one side of the equation.
  • Internal Conferences – So many companies use internal conferences to spread the word and share experiences across the organisation. This is fantastic to hear, and I am glad they are able to get the support for it, as it can be hard to make a quantifiable business case for these events – as I know from experience.
  • Servers are cattle, not pets – A lot was said about the importance of having environments that are commodity and consistent, so that you can replace servers easily and reliably. Quite a few of the tool vendors were from the server monitoring and configuration-drift detection space as well. This clearly deserves more focus going forward.
  • Tooling – A completely non-scientific impression is that certain tools are much more prolific than others in the DevOps toolkit; examples are Jenkins, the Atlassian tools and Git.
  • Measuring everything – Not really a new thought, but interesting to see how many of the organisations had good data to support their story. It is so important to get this right and to use it to drill down on bottlenecks and cost sources.
  • Scaled Agile Framework – SAFe got a lot of positive mentions from the speakers and seems to be widely adopted at large enterprises.
  • A few smaller takeaways:
    • Impact Score of Releases – I like the idea of measuring the impact of releases as the sum of (number of defects × severity). Brilliant.
    • Inverse Taylor Manoeuvre – Such a good name for self-enabled teams
    • Inverse Conway Manoeuvre – A great name for addressing the architectural challenges that many of us face with existing architecture
    • Release notes as a blog – Such a good idea not to send notes around but rather use a blog to document all release changes
    • Sprint Plan review meeting – A meeting after sprint planning to get all relevant stakeholders across the plan (like Ops, InfoSec, Business). A great idea to test.
  • Favourite Quotes:
    • “Branches are evil”
    • “There is a right way to develop software (and DevOps is it)”
    • “We geeks don’t just like SkyNet – we want to build it”
    • “Cease dependence on mass inspection to achieve quality”
    • “Just talking nicely to each other does not deliver software”
    • “Time does not make software better”
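
The release impact score from the takeaways above is simple enough to express directly. The severity weights below are my own illustrative assumption; use whatever scale your organisation has agreed on.

```python
# Impact score of a release: sum over severities of
# (number of defects at that severity x the severity's weight).

def impact_score(defect_counts, severity_weights):
    """defect_counts: {severity: number of defects found post-release}."""
    return sum(count * severity_weights[sev]
               for sev, count in defect_counts.items())

weights = {"critical": 5, "major": 3, "minor": 1}   # illustrative scale
print(impact_score({"critical": 1, "major": 2, "minor": 4}, weights))  # 15
```

Tracked release over release, a falling score is a simple signal that quality at the point of release is improving.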

A few things could be improved going forward in my opinion:

  • Many of the enterprise-scale organisations talked about their Web presence or Digital space, and only a few talked about the DevOps transformation for their Systems of Record. A better balance would be nice, so that we not only hear the positive stories but also learn from the really difficult cases
  • One aspect that was only mentioned as a side note, and only by 2 or 3 speakers, is the reality of working with many different vendors and systems integrators. How do you enable this multi-party setup for DevOps practices? Having been on both sides of that story, perhaps I should share my experiences next year…
  • The sessions were pretty much back-to-back, and there was little time for Q&A and informal questions. Perhaps a short break between sessions or a more formal way to socialise with the speaker right after the session would be good. I have seen this work very successfully at other conferences.

And last but not least, a shout-out to some of the outstanding speakers from the conference; if you get a chance, check out the recordings later in the week when they are available on the conference website at http://devopsenterprisesummit.com.
– Gary Gruver
– Em Campbell-Pretty
– Jason Cox
– Mark Schwartz
– Owen Gardner
– Carmen DeArdo
– to highlight just a few, there were many more that are worth listening to if you have the time

Continuous Everything in DevOps… What is the difference between CI, CD, CD, …?

Okay, so training was more work than expected, hence I will now slowly make my way through the backlog of topics. We will start with some of the different techniques being used in DevOps. I will move the definitions to my definitions page as well, as I am sure I will refer to them again and again over time.

Continuous Integration (the practice)
This is probably the most widely known practice in this list. It is about compiling/building/packaging your software on a continuous basis. With every check-in, a system triggers the compilation process, runs the unit tests, runs any static analysis tools you use and any other quality-related checks that you can automate. I would also add the automated deployment into one environment, so that you know that the system can be deployed. It usually means that you have all code merged into the mainline or trunk before triggering this process. Working from the mainline can be challenging, and often concepts like feature toggles are used to differentiate between features that are ready for consumption and features that are still in progress. This leads to variants where you run continuous integration on specific code branches only, which is not ideal, but better than not having continuous integration at all.
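
The sequence of checks described above can be sketched as a simple ordered pipeline. The step functions here are placeholders for whatever your real build server (Jenkins etc.) would invoke; nothing below is a real CI API.

```python
# Minimal sketch of a CI run: on every check-in, run each quality gate
# in order and stop at the first failure. Steps are illustrative stubs.

def run_pipeline(steps):
    """Run each (name, step) gate in order; stop at the first failure."""
    for name, step in steps:
        if not step():
            return f"FAILED at {name}"
    return "GREEN"

pipeline = [
    ("compile",         lambda: True),   # e.g. invoke the build tool
    ("unit-tests",      lambda: True),   # e.g. run the test suite
    ("static-analysis", lambda: True),   # e.g. linters, code metrics
    ("deploy-to-dev",   lambda: True),   # prove the system deploys
]
print(run_pipeline(pipeline))  # GREEN
```

Failing fast at the first broken gate is the design choice that keeps the feedback loop short: the team hears about a broken build minutes after the check-in, not days later.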

Continuous Integration (the principle)
I like to talk about Continuous Integration in a broader sense that aims at integrating the whole system/solution as often and as early as possible. To me, Continuous Integration means that I want to integrate my whole system, while I could have a Continuous Integration server running on individual modules of the system. This also means I want to run integration tests early on and deploy my system into an environment. It also means “integrating” test data early with the system, to test as close as possible to the final integration. Really, to me it means: test as far left as possible, and don’t leave integration until the Integration Test phase at the end of the delivery life-cycle.

Continuous Delivery vs. Continuous Deployment
What could be more confusing than having two different practices that are called CD: Continuous Delivery and Continuous Deployment? What is the difference between CD and CD? Have a look at the summary picture:

As you can see, the main practices are the same, and the difference is mainly in where to apply them. In Continuous Delivery you aim to have the full SDLC automated up until the last environment before production, so that you are ready at any time to deploy automatically to production. In Continuous Deployment you go one step further: you actually deploy to production automatically. The difference is really just whether there is an automatic or a manual trigger. Of course, this kind of practice requires really good tooling across the whole delivery supply chain: everything that was already mentioned under continuous integration, but you will also need more sophisticated test tooling that allows you to test all the different aspects of the system (performance, operational readiness, etc.). And to be honest, I think there will often be cases where you require some human inspection for usability or other non-automatable aspects, but the goal is to minimise this as much as possible.
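
The trigger difference between the two CDs can be reduced to a single flag, which this sketch tries to show; the function and its states are invented for illustration.

```python
# The difference between the two CDs, reduced to one decision: in
# Continuous Delivery the production deployment waits for a manual
# approval; in Continuous Deployment the automated pipeline keeps going.

def release(build_is_green, auto_deploy_to_prod, manually_approved=False):
    if not build_is_green:
        return "stop: fix the build"
    if auto_deploy_to_prod or manually_approved:
        return "deployed to production"
    return "staged: ready to deploy on approval"

# Continuous Delivery: automated up to the gate, a human pulls the trigger.
print(release(True, auto_deploy_to_prod=False))
# Continuous Deployment: the same pipeline, but no gate.
print(release(True, auto_deploy_to_prod=True))
```

Everything before the gate is identical in both practices, which is exactly the point the summary picture makes.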

Continuous Testing
Last but not least, Continuous Testing. To me this means that during the delivery of a system you keep running test batteries. You don’t wait until later phases of delivery to execute testing; rather, you keep running tests on the latest software build, and hence you have a real-time status of the quality of your software – and if you use Test-Driven Development, a real-time status of progress. This is not terribly different from the others mentioned before, but I like the term because it reflects the diffusion of testing from a distinct phase to an ongoing, continuous activity.

I hope this post was helpful for those of you who were a bit confused with the terms. Reach out with your thoughts.

Technical Debt or the Tyranny of the Short Term

This week you will see more content than the usual once-per-week posting. I am delivering training on technology architecture, and as part of that I will include a daily post as the basis for discussions in the course. After the discussion in class I will update the post with a summary of the discussion. I hope you find this an interesting concept, as I am experimenting with the blog medium. On to the first topic of four: Technical Debt.

Much has been said about technical debt and how hard it is to pay it down. In the last week I had a few discussions about this, and I thought I’d put my thoughts down on paper (or really the keyboard).

What is technical debt?
To get everyone on the same page, let’s define what technical debt actually means. Technical debt is best explained by what it causes. Like financial debt, it accrues interest over time. And as you all know, interest is basically paying money for which you don’t really get anything in return. You are paying for an earlier decision (e.g. a purchase you made when you could not yet afford it), and if you don’t start paying down the debt, you will be able to afford less and less with the same amount of money as the interest takes over.
In IT, what happens is that you set out to implement a new solution, and of course you try to deliver the best solution possible. Over time, decision points come up where you could implement something that costs a bit more but would provide better maintainability later on, like automated unit testing, or separating out a function that you might reuse later rather than keeping it within a different function. You now need to decide whether to invest the extra time/money in this non-functional aspect or to focus purely on the functionality that your business stakeholder requires. Every time you choose the short-term solution you increase your technical debt, as the next time you want to change something or use the functionality that you could have split out, you now require additional effort. Of course there are many more ways to incur technical debt than just the lack of automation or modularisation, but these serve as examples.

Why is it so hard to avoid technical debt?
The crux of the matter is that by making all the right decisions (according to some criteria), you can still incur an increasing amount of technical debt. Imagine you are working on an application, you have exactly one project, and you don’t know whether there will be any other projects after you. Should you still make the investment in good automation and modularisation practices? What if you know there are other projects, but you don’t know whether they will impact the same areas of the code or use the same automation? …
You can see it’s a slippery slope.
Look at the graph on the left. It shows the total cost of change over time. Initially it is cheaper to just implement the functionality without investing in good practices, but over time the cost of change increases, as the technical debt makes it more costly to make changes. At some stage, each change is more expensive than it would have been if you had implemented all the good practices from the beginning, but now you have to pay down all that debt, and it is costly to jump back to the other cost line. You also see that even with great practices the cost of change generally increases a bit over time, although there are people arguing that great modularisation and reuse can actually reduce the total cost of change over time, as you recombine existing services to create new ones – but that is for another post.
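
The two cost-of-change curves can be sketched as a toy model. The numbers below are invented; the only claim is the shape: cutting corners is cheaper for the first few changes, but the debt “interest” compounds until each change costs more than it would have with good practices.

```python
# Toy model of the two cost-of-change curves. The constants are
# illustrative assumptions, not measurements.

def cost_of_change(change_number, good_practices):
    base = 10.0
    if good_practices:
        # Higher up-front cost, but changes stay cheap over time.
        return base * 1.5 + change_number * 0.2
    # Cheaper at first, but each change pays "interest" on the debt.
    return base + change_number * 1.5

early = (cost_of_change(1, False), cost_of_change(1, True))
late = (cost_of_change(20, False), cost_of_change(20, True))
print(early)  # cutting corners is cheaper for change #1
print(late)   # by change #20 the debt-laden codebase costs more
```

The crossover point between the two lines is exactly the moment the article describes: from there on, every short-term decision you made is charging you interest.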

What does it take to pay it down?
The challenge with paying down technical debt is that it usually takes a lot of time, and while you can accelerate it through a dedicated program, the only long-term solution is to leave the software in a better state every time you make a change. Otherwise you run a “paydown” project to reduce technical debt, but then increase it with each subsequent functional project until you do the next “paydown” project. If you do it little by little, you will have a much more sustainable model, and the cultural shift that is required to do this will be beneficial for your organisation in any case. If your Agile implementation can help by making the technical debt more visible and by visually tracking each time you pay a bit of debt down, then you are onto a model that gets you to the total cost of change curve that you aspire to. And my personal view is that you need to make sure that the PMO organisation and the project managers are clear about their responsibility in all this. They should be evaluated not only on the on-time and on-budget delivery of functionality but also on how much their projects have done to pay down technical debt; otherwise PMs are too often likely to choose the short-term answer to achieve on-time, on-budget delivery at the cost of technical debt for the next project – or, in not so kind terms, to “kick the can a bit down the road”.

How can you measure it?
Here I am still without a really good answer. Theoretically I think it is the sum of the additional cost of each change at the moment minus what it would have cost if you had all the good practices in place. But that is really hard to calculate. An alternative that I have seen is to create a backlog of all the things you should have done and to add to it every time you make a short-term decision; the size of this backlog is your technical debt. These are not yet great answers, but I keep looking. Please reach out if you know of a better way.
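The two approaches above can be sketched as follows (a hypothetical illustration; every item name and cost figure is invented):

```python
# Two rough ways to put a number on technical debt, as described above.
# Both are illustrative sketches with made-up figures.

# Approach 1: sum of (actual cost of each change - what it would have cost
# with all the good practices in place).
changes = [
    {"name": "add pricing rule", "actual_cost": 8, "ideal_cost": 3},
    {"name": "fix tax calc",     "actual_cost": 5, "ideal_cost": 2},
]
debt_as_cost_delta = sum(c["actual_cost"] - c["ideal_cost"] for c in changes)

# Approach 2: keep a backlog of every short-term decision; its total size
# (here in estimated days to remediate) is your technical debt figure.
debt_backlog = [
    {"shortcut": "no automated tests for module X", "payoff_days": 10},
    {"shortcut": "hard-coded configuration",        "payoff_days": 2},
]
debt_as_backlog_size = sum(item["payoff_days"] for item in debt_backlog)

print(f"Debt (cost delta): {debt_as_cost_delta}")
print(f"Debt (backlog):    {debt_as_backlog_size}")
```

The first number is the one that is hard to obtain in practice, because you never observe the "ideal" cost; the second only requires the discipline of writing each shortcut down as you take it.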

Picture: 3D Shackled Debt by Chris Potter from http://www.stockmonkeys.com
License: Creative Commons

The Software Factory Model Analogy – Appropriate or Not?

I felt compelled after some recent discussions to provide another blog post about the analogy I have been using in the title of this blog: The Software Delivery Factory model.

Let’s talk about the traditional way of thinking about Factories:
We start from the Wikipedia definition of factory system characteristics

  • Unskilled Labor – While labor arbitrage has certainly been a factor in the move towards a software factory model, I think we can all agree that we are unlikely to move away from at least a mix of experience levels and that we cannot sustain good software delivery without the right skills. In this model people are usually referred to as resources, but that's for another post. Inappropriate analogy!
  • Economies of Scale – By bringing together everyone involved in the delivery process and by centralising some of the functions like the PMO, we do see some economies of scale. Appropriate analogy!
  • Location – In the past this was about factories being close to infrastructure like rivers, roads and railways; these days it is about being close to the right talent. This continues to be important, as you can see in the move to India and China to get closer to large talent pools there, and also in Silicon Valley, where a lot of top talent is located these days. Appropriate analogy!
  • Centralisation – In a factory, means of production that individuals could not afford (e.g. an assembly line or a weaving machine) were brought together. In software delivery we see heaps of small competitors taking on the big guys with sometimes more advanced open-source technology. We also see a lot of distributed teams across the globe who work from different offices or even from home. Inappropriate analogy!
  • Standardisation and Uniformity – How often do we produce the same piece of software many times over? Not really that often. There are some cases where the same pattern is required, for example for pricing changes, but more often than not each project requires a unique solution and is contextual to the client and technology used. Inappropriate analogy!
  • Guarantee of Supply – In a factory the work flows relatively smoothly, with few hiccups if any in the production process. Looking at data from the CHAOS Report and at my own experience, the smoothness of flow in software delivery is an illusion. And to be honest, if I see a burn-up or burn-down graph that is smooth, I suspect some gaming is going on. Inappropriate analogy!

So in summary the vote is 4:2 against it being an appropriate analogy. It conjures up images of people sitting in their cubicles actioning work packages,

  • one person translating a requirement into a design handing it over to
  • the next person writing a bit of code
  • then to the next one testing it
  • and in all this no one talks to each other; it's all done mechanically, like people on an assembly line

In bad times, software delivery in a factory model can feel a bit like Charlie Chaplin in Modern Times.

Some of my colleagues talked to me about a new factory model, so let’s talk about the characteristics of this alternative model that people point out to me:

  • Orchestration of a complex production process – software delivery today does require the delivery of many individual components, very similar to the complex production process required, for example, to build a Boeing Dreamliner. Most systems are built of many components that are developed by many different teams, sometimes even across many locations and organisations thanks to offshoring and outsourcing models. This example of a modern factory does apply to software delivery. Appropriate analogy!
  • Automated Tool chains – If you look at modern factories from Toyota or BMW, you see very few workers and a lot of automation, very much like a Continuous Delivery tool chain. In that regard I agree that software delivery should be like these modern factories. Appropriate analogy!
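The fail-fast "assembly line" idea behind such a tool chain can be sketched in a few lines (stage names and behaviour are hypothetical, not a real pipeline definition):

```python
# A minimal, fail-fast delivery pipeline: each stage runs automatically
# and the chain stops at the first failure - the assembly-line idea.
# Stage names and contents are illustrative only.

def build():
    return True   # e.g. compile and package the application

def run_tests():
    return True   # e.g. execute the automated test suite

def deploy():
    return True   # e.g. push the package to an environment

pipeline = [("build", build), ("test", run_tests), ("deploy", deploy)]

def run_pipeline(stages):
    """Run each stage in order; stop the line at the first failure."""
    for name, stage in stages:
        if not stage():
            print(f"Pipeline stopped at stage: {name}")
            return False
    print("Pipeline completed: change delivered")
    return True

run_pipeline(pipeline)
```

Real pipelines are of course declared in a CI/CD tool rather than hand-rolled like this, but the principle is the same: no manual hand-offs between stages, and a red light stops the line.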

I guess a modern BMW factory is the right analogy for this model:

Overall we end up with a 4:4 vote on this list. In my head the image of a factory is not one of empowerment and of people working together to achieve great unique outcomes; it's one of mass production, and that just doesn't work well with my understanding of software delivery. I guess I will keep the name of my blog as it is and just look forward to many more interesting discussions about this topic.

Here are some thoughts from others on Software Delivery Factory models (and yes of course it is more likely I come across things that confirm rather than oppose my view – please call out references to opposing views and I will post them here):

Agile, DevOps and Design Thinking – How do they relate?

This week I had several discussions in which I had to explain how I see the relationship between Agile, DevOps and Design Thinking. Of course there is no clear differentiation between, or for that matter definition of, these terms (or if there is, then certainly not everyone is referring to it in the same way).

Let me start by defining these terms in my context:

Agile (in the context of Agile software delivery)
A set of adaptive methods to deliver software-based solutions, grounded in the Agile Manifesto. It is an umbrella term for delivery methods like SCRUM and Kanban and for engineering methods like XP. From a cultural perspective these methods are meant to bring the customer or business stakeholder closer to the IT organisation through closer collaboration and by making delivery less black box and more white box.

DevOps
I personally like the following definition:
“Using governance and automation techniques to optimise collaboration across development and operations to enable faster, more predictable and more frequent deployments to market”.
From a cultural perspective this is bringing down the barriers between the development and operations parts of the organisation to achieve the right balance between stability or reliability and the required changes to deliver to the end-customer.

Design Thinking
This is the process of identifying and defining what a solution should look like by empathizing with the subjects in question, by creatively solving the problem at hand, and by analytically testing whether the solution is feasible from a technical perspective and in the problem context (viable for the customer and the business).

Now that we have the definitions out of the way – which I will also use to start my definition page here on the blog – let's look at how all this relates. I will start with a picture that I started to use a while ago:
House of Agile
For me this explains the relationship quite nicely, but not comprehensively. It shows how the three 'pillars' that I described above work together to hold up the IT operating model, and how it is all based in the cloud. Clearly there is more to it than this picture shows, and what is missing are the overlaps and differences. This all sounds very esoteric, so let's dig a bit deeper.

So let’s look at the first two pillars: Design Thinking & Agile.
If you pick up the original book by Ken Schwaber, it pretty much starts by describing what happens when the Product Owner comes to the team with a prioritized list of items to implement. When I first read the book I had the same reaction that probably many of you had: "If Agile tells me 'how' to implement something, how do I find out 'what' to implement?"
Design Thinking can be an answer to this question; there are some groups that are really good at this, like d.School or Fjord. In my travels it has often been a slightly less elaborate activity, like a 1-2 week Discovery phase where IT and business come together to define the solution, but ideally you use Design Thinking to come up with great solutions. In my MBA I had the pleasure to work with an expert in this field and to go through a design thinking workshop, and it certainly provided a very different perspective on the problem at hand. In a later post I will describe another aspect of this phase, which can be called Design Slicing – the ability to define logical slices that provide value by themselves.

Let’s move to the second set of pillars: Agile and DevOps.
Here it becomes more complicated. The previous comparison we could simplify to Design Thinking = What, Agile = How; for Agile and DevOps we won't be able to make such a clear differentiation. Really good Agile adoptions focus on the cultural change required, on the methodology changes that come with SCRUM and Kanban among others, and on technical practices like the ones XP describes. DevOps in a similar vein talks about cultural change and technical practices. This is where I take a pragmatic approach: for me the methodology aspects sit within the Agile space, and seeing Post-It notes, burn-up graphs and stand-ups is an indication that someone is adopting Agile – whether successfully or not requires more than Post-Its, by the way. If someone is changing the way software is being coded and deployed, and the change is much less visible in the offices (perhaps you can see green and red build lights), then we are talking about DevOps. This differentiation also allows me to break with the conventional wisdom that DevOps and Agile have to go together. They are definitely better together (like a good meal and a good wine – great individually, even better combined), but you can get value from one without the other – just not as much as you could from both of them working together. If I force myself to simplify the difference between the pillars, I would say Agile = bringing business and IT together supported by methodology; DevOps = bringing development and operations together supported by technical practices.

I have not spoken about the IT Operating Model and the Cloud.
Let me spend a few words on this as well, starting with the IT operating model. One of the things I notice when I speak to clients is that no matter how well their Agile and/or DevOps adoptions go, there is a lurking problem that requires addressing as well: the IT operating model. Again, this is a term that can mean many things to many people, so I will highlight a few aspects of what I mean. If you really change the way you deliver software-based solutions by using Agile, DevOps and Design Thinking, you will likely run into challenges with your existing IT operating model in the following ways:
Funding – Your funding and budgeting process might not allow you to progressively learn and adapt, but rather requires locking things down early and measuring against that plan.
Workforce management – How can you change from assembling people for projects and disbanding the team at completion to standing teams towards which work flows? And what should these teams look like – teams representing value flows, a central DevOps team or federated accountability, release trains a la SAFe, …
Incentives and Commercial constructs – How do you make sure that all your employees, contractors and delivery partners/systems integrator share your goals and can support the new way of working?
Roles and Responsibilities – How do you need to change role descriptions to make the new way of working stick?
All these are aspects that are not necessarily covered in your Agile methodology or DevOps practices, but that require thinking about and adjusting. And I like to consider this a change of the IT operating model.

And of course we should talk about the cloud – there is lots to say here, but let's leave it at one sentence for now: to achieve the ultimate flexibility and speed to market that many aspire to, you will have to make effective use of the cloud (private, public or a mix).

This is the longest post so far, so I will say goodbye for now. Post your comments – I am looking forward to a controversial discussion of the above.

Are we Agilists in danger of making the perfect the enemy of the good?

Over the last year or so I have had lots of good, robust discussions with other Agile coaches, and one thing has started to worry me. I heard "But that is not Agile" or "But that is not REAL DevOps" more and more frequently. While I agree that we should always strive for better and better performance, the absolutes seem counterproductive to me. Two topics close to my heart seem to cause this kind of reaction more often than others.

 

SAFe – The Scaled Agile Framework
There is lots of discussion on the internet about SAFe and why it is or is not really that Agile. Most organisations that I talk to are nowhere close to the maturity that SAFe assumes. I am sure there are companies who are further down their Agile path and think that SAFe is very restrictive and old-school, but I have to say that most people I talk to would be extremely happy if they could achieve the agility that SAFe can provide. And it is a framework after all – a bit like a scaffold that you can use to move forward from the old waterfall ways into a more Agile enterprise without throwing everything out. And yes, once you think that SAFe is no longer challenging your organisation and you see opportunities to become even more effective, go ahead – push yourself further. For now I am quite happy to use the SAFe framework within large organisations to help me speak a language my clients understand and to push them a bit further on their journey. I will admit that I probably have not spent enough time with the other scaled Agile ideas to judge them all – perhaps writing this blog can be a motivation to do that. For now SAFe is my go-to framework, and I think even those who argue it's not really Agile would agree that many organisations would be more Agile if they had implemented SAFe than they are now – and that's good enough: for now!

DevOps
So many organisations and projects I encounter do not have the right technical practices in place to deliver solutions effectively. Practices like configuration management; automation of build, deployment and test; and environment management belong, I think, under the big headline of DevOps. So when I talk about DevOps practices with peers at conferences and describe that in large organisations I often recommend starting with a DevOps team, I hear "But that is not what DevOps is about". The so-often-quoted cultural barriers between operations and development in large organisations make it simply impossible, in my view, to embed an operations person in each development team. And to be honest, there are often many more development teams than there are operations folks who could be embedded. So why wouldn't I create a team with representation from both sides to begin with, and get the best people into that team to solve the difficult technical problems? After all, that is what Google has done with their Engineering Tools team. I think that is a valid step, and yes, perhaps afterwards we push this further, but for now most organisations I have been working with can gain a lot from good practices being implemented through a DevOps team. Having a DevOps team does not mean we don't want to change the culture; it just means we want to do this one step at a time.

Picture: Perfect by Bruce Berrien
taken from https://www.flickr.com/photos/bruceberrien/with/384207390/
under Creative Commons license