Category Archives: Agile and DevOps

Managing an evolving transformation – A Scorecard approach

I have been speaking over the last couple of years about the nature of DevOps and Agile transformations. In my view it is not possible to manage them with a simple As-Is/To-Be approach, as your knowledge of the As-Is situation is usually incomplete and the possible To-Be state keeps evolving. You need to be flexible in your approach and apply agile concepts to the transformation itself. For successful agility you need to know what your success measures are, so that you can see whether your latest release has made progress or not (something way too many Agile efforts forget, by the way). So what could these success measures look like for your transformation?

Well there is not one metric, but I feel we can come up with a pretty good balanced scorecard. There are 4 high level areas we are trying to improve: Business, Delivery, Operations and Architecture. Let’s double click on each:

  • Business: We are trying to build better solutions for our business and we know we can only do this through experimentation with new features and better insights. So what measures can we use to show that we are getting better at it?
  • Delivery: We want to streamline our delivery and get faster. To do that we will automate and simplify the process wherever possible.
  • Operations: The focus has moved from control to reaction, as operations deals with an ever more complex architecture. So we should measure our ability to react quickly and effectively.
  • Architecture: Many businesses are struggling with their highly monolithic architectures or with their legacy platforms. Large scale replacements are expensive, so evolution is preferred. We need some measures to show our progress in that evolution.

With that in mind here is a sample scorecard design:
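As one illustration, such a scorecard could be sketched as a simple data structure. The concrete metrics, targets and numbers below are hypothetical examples of my own, not a prescription – yours will depend on your company and will evolve:

```python
# Illustrative balanced scorecard for a DevOps/Agile transformation.
# Each of the four areas holds a few example metrics with a target and the
# latest measured value; all names and numbers are made up for illustration.
scorecard = {
    "Business": {
        "experiments run per quarter": {"target": 10, "actual": 6, "higher_is_better": True},
        "features validated with real users (%)": {"target": 80, "actual": 55, "higher_is_better": True},
    },
    "Delivery": {
        "lead time for change (days)": {"target": 5, "actual": 12, "higher_is_better": False},
        "deployments per month": {"target": 8, "actual": 3, "higher_is_better": True},
    },
    "Operations": {
        "mean time to recovery (hours)": {"target": 2, "actual": 6, "higher_is_better": False},
        "incidents resolved within SLA (%)": {"target": 95, "actual": 88, "higher_is_better": True},
    },
    "Architecture": {
        "capabilities decoupled from the monolith (%)": {"target": 50, "actual": 20, "higher_is_better": True},
        "applications left on legacy platforms": {"target": 4, "actual": 9, "higher_is_better": False},
    },
}

def area_status(area: str) -> str:
    """Very naive roll-up: 'on track' if at least half the metrics meet their target."""
    def meets(m):
        if m["higher_is_better"]:
            return m["actual"] >= m["target"]
        return m["actual"] <= m["target"]
    metrics = list(scorecard[area].values())
    met = sum(1 for m in metrics if meets(m))
    return "on track" if met >= len(metrics) / 2 else "needs attention"
```

In a real dashboard each metric would of course be fed automatically from your delivery and operations tooling rather than typed in by hand.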

I think with these 4 areas we can drive and govern a transformation successfully. The actual metrics used will a) depend on the company you are working in and b) evolve over time as you find specific problems to solve. But keeping an eye on all 4 areas will make sure we are not forgetting anything important, and that we notice when we over-optimize in one area and something else drops.

Next time I get the chance I will use a scorecard like this, of course implemented in a dashboarding tool so that it’s real time. 😉

Segregation of Duties in a DevOps world

This scene could be from a spy movie: Two people enter the room where release management is coordinated for a major release. The person from the operations team takes out a folded piece of paper, looks at it and types half of the password on the keyboard. Then the person from the development team does the same for the second half, and deployment to production begins. A couple of years ago I was working for a client where the dev and ops teams each had half the password for production. It was probably the most severe segregation-of-duties setup I have experienced. The topic of segregation of duties comes up frequently when organisations move towards DevOps ways of working. After all, how can you have segregation of duties when you are breaking down all the barriers and silos?!?

Let’s explore this together in this post, but first let’s acknowledge a few things. First and foremost, I am not an auditor or lawyer; the approaches below have been accepted to different degrees by organisations I have worked with. Secondly, there are several concerns related to segregation of duties. I will cover the three most common ones that I have encountered, and hopefully the principles can be applied to further aspects accordingly. Let’s dive in!

Segregation of Development and Test

Problem statement: In a cross-functional team wouldn’t the developer “mark his own homework” if testing is done by the same team?!? To avoid this in the traditional waterfall world, a separate testing team performs an “objective” quality control.

Resolution approach: Even in a DevOps or Agile delivery team, more than one person is involved in defining, developing and testing a piece of work. The product owner or her delegate helps define the acceptance criteria. A developer writes the code to fulfill those and a quality engineer writes test automation code to validate the acceptance criteria. Additionally, team members with specific skills like penetration testing or UX test the work as well. And often business users perform additional testing. Agile ceremonies like the sprint demo and the acceptance by the product owner create additional scrutiny by someone other than the developer. In summary, segregation of duties between Dev and Test is achieved as long as people are working well across the team.

Segregation of Development and Release

Problem Statement: A developer should not be able to release software into production without an independent quality check, to make sure no low-quality or malicious software can be deployed. Traditional approaches have the operations or release management team validate quality through inspection and governance.

Resolution approach: In a DevOps world, teams should be able to deploy to production automatically, without intervention by another team. This is true whether we use traditional continuous delivery or more modern cloud-native deployment mechanisms. But how can we create segregation of duties in those release scenarios? Modern release mechanisms include high levels of automated quality controls, meaning functionality, security, performance and other aspects of the software are assessed automatically before it is deployed, and we can use this to create independent assurance. A separate group like a “Platform Engineering” team governs the quality gates of the release mechanisms, the standards for them and the access to them. This team provides the independent assurance, and all changes to the release pipeline are audited. The art here is to get the balance right, so that teams can work independently without relying on the Platform Engineering team for day-to-day changes to the quality gates, while still making sure the quality gates are effective.
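To make this concrete, here is a minimal sketch of what an automated quality-gate evaluation could look like. The gate names and thresholds are hypothetical; the point is that the gate definitions are owned (and changes to them audited) by a separate team from the one deploying:

```python
# Hypothetical quality gates for a release pipeline. The delivery team
# cannot deploy unless every gate passes; the gate definitions themselves
# are governed by a separate Platform Engineering team, which is what
# provides the independent assurance.
GATES = {
    "unit_test_pass_rate": lambda r: r["tests_passed"] / r["tests_total"] >= 0.99,
    "critical_vulnerabilities": lambda r: r["critical_vulns"] == 0,
    "performance_p95_ms": lambda r: r["p95_latency_ms"] <= 300,
}

def evaluate_release(results: dict) -> tuple[bool, list[str]]:
    """Return (may_deploy, list of failed gate names)."""
    failed = [name for name, check in GATES.items() if not check(results)]
    return (len(failed) == 0, failed)

# Example build results (made-up numbers):
build = {"tests_passed": 995, "tests_total": 1000,
         "critical_vulns": 0, "p95_latency_ms": 250}
ok, failed = evaluate_release(build)
```

In practice these checks would run inside your CI/CD tool of choice; the sketch only shows the shape of the control, not a specific product.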

Segregation of Development and Production

Problem Statement: A developer should not be able to make changes to production or see confidential data from production, while a production engineer shouldn’t be able to use his knowledge of production to deploy malicious code that can cause harm. Traditionally, access to production and non-production systems is only granted to mutually exclusive development and operations teams.

Resolution approach: This is the most complicated of the three scenarios: people should get the best possible data to resolve issues, yet we want to avoid a proliferation of confidential data that could be exploited. The mechanisms here are very contextual, but the principles are similar across organisations. Give developers access to “clean” data and logs through a mechanism that masks data. When the masked data is insufficient for analysis and resolution, escalated access should be granted based on the incident that needs to be resolved. Automated access systems can tie the temporary access escalation to the ticket and remove it automatically once the ticket is resolved. This of course requires good ticket hygiene, as tickets that stay open for a long time create extended periods of escalated access. Contextual analysis is required to identify the exact mechanisms, but in most organisations masked data should cover most scenarios, so that access to confidential data can be very limited. Root access to production systems should be very limited in any case, as automation takes over the traditional tasks that used to require such access, so the risk is more limited in a DevOps world. And automation also increases the auditability of changes, as everything gets logged.
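The ticket-bound access escalation described above can be sketched roughly like this. It is a simplified model with made-up names; a real implementation would sit on top of your identity provider and ticketing system:

```python
# Sketch of temporary production access tied to an incident ticket.
# Access is granted only against an open ticket and revoked automatically
# when the ticket is resolved -- which is why ticket hygiene matters.
class AccessManager:
    def __init__(self):
        self.grants = {}  # user -> ticket_id currently justifying their access

    def grant(self, user: str, ticket_id: str, ticket_status: str):
        if ticket_status != "open":
            raise ValueError("escalated access requires an open incident ticket")
        self.grants[user] = ticket_id

    def on_ticket_resolved(self, ticket_id: str):
        # Revoke every grant tied to the resolved ticket.
        self.grants = {u: t for u, t in self.grants.items() if t != ticket_id}

    def has_access(self, user: str) -> bool:
        return user in self.grants
```

The key design choice is that revocation is driven by the ticket lifecycle, not by a human remembering to remove access.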

Summary of Segregation of Duties in a DevOps world

Hopefully this gives you a few ideas on how to approach segregation of duties in the DevOps world. Keep in mind that you achieve the best results by bringing the auditors and governance stakeholders to the table early and exploring how to make their lives better with these approaches as well. This should be a win-win situation, and in my experience it usually is, once you get to the bottom of the actual concern to be addressed.

Learning at DevOps speed – How Running a DevOps Simulation can help to change your Culture

A short while ago my team and I ran a pilot of a DevOps simulation with our friends from G2G3. The idea of learning from a simulation (not unlike the business simulations I used to play as PC games – does anyone remember “Oil Imperium”?) appealed to me, and I set this up for my team.

Let me be honest, I wasn’t sure what to expect. Boy was I in for a treat. Although we had a room full of people who know DevOps principles and practices we learned a lot from this one day. Let me quickly explain how the simulation runs to give you an idea.

The simulation runs in three rounds and in each round you try to make money for the company. The attendees are split into traditional roles like developer, tester, operations, service desk, product owner, scrum master etc. You get precious little guidance and off you go building features and serving customer needs. Not surprisingly you initially struggle. After the first round you talk about what to improve and have another go. And then you do the same for the third round. The real power comes from the activities being non-technical which means everyone can contribute – think of Tetris-style puzzles you have to solve to implement a feature for example. And without worrying too much about specific DevOps practices the team “discovers” better ways of working that are aligned with DevOps principles – collaboration, visual management of work, looking for patterns.

Most of the other DevOps trainings I have been part of have been pretty technical, which is great for the techies among us. But what about the project managers, the defect coordinators, the change management people, the PMO? They either have to sit through some “foreign” material in a DevOps course or often don’t attend DevOps training at all. How can we then change the culture of the organisation and be inclusive of everyone? I think this simulation will get us a step closer to everyone understanding what DevOps and Agile are about, and that there is a lot that can be done in addition to automation and tech practices.

I believe this simulation can be super powerful if you get your project team or leadership to attend. In a safe environment people can take on roles they don’t usually play, and hence empathise better with those roles after the simulation. The whole team works on improvements together, and it is easy to see how the learnings will bleed into their day-to-day delivery experience. If you leave the training without thinking about how you can use Kanban boards better and how to improve the quality of communication associated with your service management tickets, I will be very surprised.

You experience the power of simple things like visual management and of improving processes by looking at them end-to-end. Everyone in the simulation gets the chance to redesign delivery processes and tools like the ticket system and the Kanban boards. Nothing beats experiential learning, and this is the best thing I have seen for DevOps and Agile ideas. We all left the room exhausted from the full-on day, but we also agreed that even though we all knew DevOps and Agile well, we learned a lot about practical application. Just imagine how powerful this is with a group of people who have less previous knowledge. I cannot wait to run this again!!! And I cannot wait to run a simulation with our most experienced Delivery people to see how it changes their perspective.

After running the pilot I got a group together to become trainers for this simulation as I have so many ideas on how we can use this to improve organisations and delivery. Of course I want to run this internally as frequently as possible, but I also want to make this available for our clients. If you are intrigued, reach out to me and we can see how we can get something going for you.

3 things I learned from “The Art of Business Value”

I am trying something new with this blog post – providing a mix of book review and a summary of what I learned from a book I really like. While waiting for Mark Schwartz to release his latest book “War and Peace and IT”, I thought I would re-read his earlier works. And as I was reading “The Art of Business Value” again, I noticed that I was reading it with fresh eyes and that I appreciate this book even more than a few years ago.

Spoiler alert – the answer the book provides is a bit of a cop-out: “…Business value is what the business values, and that is that.” But reducing the book to this final response does not do it justice. The book artfully explores different angles on business value – ROI, NPV, shareholder value – and why each is challenging as a basis for driving Agile delivery.

This is a must-read for all product owners, to understand why there is not one answer to the question of business value and why Mark’s final response is unsatisfying yet completely appropriate. The book is also small enough that you don’t have to feel bad recommending it to the product owners you work with. I have recommended my favorite book of all time, “Goedel, Escher, Bach”, to many people, fully knowing that only very few would work their way through that fascinating but challenging book. “The Art of Business Value” is a book you can recommend without such thoughts on your conscience.

What I found even more useful in Mark’s book is that he explores the space around business value and three key learnings stand out for me:

  1. That the language of Agile can lead to a new command and control paradigm – this time by the product owner or Agile coach as Only Person you can listen to (OPYCLT)
  2. That the product owner as interface to the business requires a special kind of organisation, and that X-teams are a better approach
  3. That the bureaucracy and governance we encounter is codified business value of the past

Let’s explore each of these a little further:

Agile as a new command and control paradigm

This one hits close to home. For a while I have been complaining about the Agile coaches out there who evangelise their methods without being able to explain why calling something a “PBI” is better than “User Story”, or why we will only provide documentation in code. Mark adds another interesting dimension to this: if the product owner is the Only Person You Can Listen To for the team, how is this different from the project manager assigning work? Mark argues in a similar vein that the prescription of technical practices is a similar command-and-control rule – I recently spoke to an organisation that does not do automated testing and rather does production monitoring, in full knowledge that they can respond quickly enough if something breaks. So we all need to be vigilant not to let Agile drift into just another command-and-control world, this time run by the agilists instead of the project managers.

Product Owner vs. X-Team

In traditional Agile thinking the product owner represents the business: he presents the business problems to be solved to the Agile team, which goes off and solves them. Mark compares this with a loosely coupled system where the details of implementation are up to the team, as long as they fulfil the contract the product owner has made with the business. I am with Mark that this is too simplistic. We have plenty of experience showing that the product owner needs help to manage the backlog and to work with the rest of the business. Mark introduces the term “X-Team” from another book for the guiding principle that teams need to work both internally and externally. It is amazing how much more productive teams can be when they have rich context. For one of my Agile teams we arranged recordings of customer calls and visits to call centers, so that the team got a better understanding of the business problem rather than relying on the product owner. The level of innovation immediately increased when we started doing this.

The value of bureaucracy

This one is probably the one that requires the most consideration, and it came out of nowhere for me in re-reading the book; I think I had dismissed this point last time. Mark argues that the processes you encounter were, in many cases, codified business value at some stage, and that we would be at risk of losing tacit knowledge of that value if we just threw it all out. Rather, we should understand what the underlying value was and whether it is still applicable. You can then decide whether there is an alternative way to create that value or whether you continue with the established process. A good example is transparency as a value, which might require you to do certain things – like additional documentation or reviews – that do not at first view provide value in themselves.

There you go – I really enjoyed the second time I read his book and hope you will too.

The Anti-Transformation Transformation of Agile and DevOps

Organisational transformations have been part of organisational life for many years. There are reorganisations, big IT transformations and nowadays Agile, Cloud, Digital or DevOps transformations. These transformations used to follow a familiar pattern: an organisation goes through a major transformation and invests significant amounts of money over a 3-5 year horizon. At the end of the transformation, when the “end-state” was achieved, the level of investment was reduced and focus shifted to stabilisation and cost reduction. Over time the requirements changed more than the reduced level of investment allowed the organisation to adapt to. Technical debt and the gap between needs and system functionality increased until they reached a level that required significant reinvestment, or a new transformation to the next trend.

The cycle repeated every few years. While far from ideal, it seemed to work okay: it was good business for technology companies and consultancies, and it provided a level of comfort for organisations as they executed their 3-5 year transformation roadmaps. The duration was not really a problem, as the environment changed slowly enough for organisations to catch up with each cycle. Since then, however, the level of change in the environment has increased and competitors increasingly come from digital startups that move very quickly. The traditional transformation cycle is too slow to react. We cannot afford 3-5 year cycles any longer; we need instead to create an organisational capability to continuously adapt to the environment. If you do one more transformation in your organisation, it needs to be the anti-transformation transformation. The idea of this transformation is to transform not with a specific technical capability in mind, but rather into an ever-improving, learning organisation, and to build the organisational capability that lets you drive this ongoing process at a sustainable pace.

There are obviously a few things different about this transformation, and the most obvious yet confusing one is that there is no end-state. There is no end-state technology architecture, there is no end-state organisational structure and there is no end-state delivery methodology. But if there is no end-state, how do we know when we are done? That is the bad news: we will never be done. We have to create capabilities that make it easier and easier to adapt incrementally, and we need mechanisms to guide each improvement even in the absence of an end-state.

Having this discussion with my clients makes me feel like a GP telling a patient that there is no pill to reduce his blood pressure and shortness of breath – rather, the patient needs to eat healthier and exercise more. It is not going to be easy, and each day will present a new challenge. Furthermore, as his consultant I cannot do this work for him; I can only guide and support, the patient has to do most of the work himself. The exact same is true for organisations: neither Cloud, Robotic Process Automation, AI nor any other technology will magically solve the problems. We need our organisations to change to a healthier lifestyle to remain fit and survive.

Enough of the analogy, but I hope you get the point. So what can we do to guide the anti-transformation transformation? First of all, our view of technology architecture needs to change: as highlighted in this blog post, there are 3 architectures we are dealing with, and each one of them needs to be adaptable – our business systems architecture, our IT tools architecture and our QA and data architecture. We also need a guiding system to show us where our technical debt is and where systems are highly coupled – both need to be reduced to remain adaptable. Last but not least, we need to find ways to continue evolving organisational structure and methodology without disrupting the business – it is not about moving from the Spotify model to SAFe or vice versa, but rather about running small experiments with your own contextual methodology and org structure so that you can evolve and continuously improve. If you are still at the beginning of the anti-transformation, you might want to adopt one of the more common methodology frameworks to get yourself started; but if in 2-3 years you are either still doing the same things or feel the need to adopt another model, then you have a problem. Neither of those two extremes should occur – you should feel like you are working with a methodology and org structure that is truly your own and that has been optimised for your context over time.

One last thing to note – larger disruptions in business or technology will still cause more challenging needs for change and require you to increase investment, but it should not require another transformation. It should rather require a larger incremental change that is easier to manage because we decoupled our architectures and methods.

The transformation is dead, long live the anti-transformation transformation.

The need to change our mental models – A core idea behind DevOps for the Modern Enterprise

I always loved this quote: “Nothing is more dangerous than using yesterday’s logic for today’s problems” which shows you that you just cannot afford to get lazy and do the same thing again and again. This causes larger problems when you scale it up. Gary Hamel summarizes the problem our organisations face as follows: “Right now, your company has 21st century Internet enabled business processes, Mid 20th century management processes all built atop 19th century management principles.”

One of the main reasons for me to write “DevOps for the Modern Enterprise” was to help address this mismatch between the work we need to do, creative IT based problem solving, and the management mindset many managers still have, that of IT being managed just like manufacturing.

I like to use the term mental model to describe what having the wrong mindset means for the everyday job of managers and other executives. Let’s take a very practical example to show how your mental model shapes your view of reality. Look at the vase in the picture below. What do you see?

Bottle1

Depending on how your brain has been shaped up to this day, you will see different things in the vase. Children predominantly see 9 dolphins (see further below for help seeing them). I guess that you saw something different, didn’t you? What does that say about your mental model of reality and your preferences 😉 What this exercise hopefully shows is that each person’s view of reality is not exactly the same, and that the mental model you use makes an important difference in how you perceive reality and act.

Perceiving IT as similar to manufacturing leads to inappropriate management processes: you look for productivity measures where there are none (more about that here), you expect people to be replaceable resources, you think that fixing the process will fix the end product and that you can plan projects upfront. Pretty much all of those have been shown to be incorrect.

As a starting exercise for changing your mental model, I recommend watching Dan Pink’s video on motivation (watch it here). I leverage his ideas extensively in my book and think they are a perfect match for Agile delivery: we provide purpose by giving the agile team the context of the problem they are solving, we allow them to achieve mastery through quick feedback cycles, and we create cross-functional teams that are reasonably autonomous. Once you understand Dan Pink’s mental model, you can easily diagnose some of the common problems with Agile projects that don’t provide those three motivators.

This shift in mental model is exciting stuff and goes much further in areas of operations and working with vendors/partners, you can read more about it in my book. For now I hope I was able to motivate you to look further into the topic and for you to try to be more conscious of your own mental model. It is worth challenging the model you have and perhaps you are then able to see those dolphins too 😉

bottle2

Don’t waste a perfectly fine Transformation for your Agile and DevOps Change efforts

Over the last couple of months I have been speaking to project teams and organisations that are undergoing major technology transformations, having set out on this course with traditional, more or less waterfall approaches. Changing course during such a transformation is risky, and any changes are usually smaller in nature as the risk appetite is low when so much money is on the line. I understand: while I personally think that Agile is less risky in any case, the organisation's maturity with Agile and the required change energy probably prevent making a change in-flight.

But here is the thing, once you get to the end of the current transformation your whole delivery process is tuned for the big change that you are currently undergoing. If you use the same governance approach and delivery method for the smaller changes that come after the transformation you will be really inefficient. You will wish you had used the transformation to not only set you up with a new technology but also with a delivery mechanism that supports you effectively after the transformation is over when change is smaller and more frequent.

This is where a bit of planning ahead can go a long way. If you realise the above, you can use the time while your transformation is still under way to prepare yourself for post-transformation agile delivery. You can build DevOps practices into your ways of working, because they support waterfall delivery as well as Agile delivery. All the automation and process improvements will make the transformation effort less risky, and the cultural shift can gain momentum through changed behaviour. If you have a staged go-live over multiple releases, you can start to embed Agile into your production support and maintenance processes, so that your organisation starts to learn Agile methods of working.

In my book “DevOps for the Modern Enterprise” I talk about transaction costs in IT, and this is another case where the concept helps explain the situation. If your transaction cost for a release (all the effort for regression testing, deployments, release planning, go-live support etc.) is 100 units for your transformation, which is a large development effort of 10,000 units, then using the same processes will still cost you close to 100 units for the smaller changes after the transformation (let's say 1,000 units each). This makes delivery of small changes really inefficient, and you might start to bundle them up again into larger, less frequent releases. What you should do instead is take a close, hard look at all the transaction costs and invest during the transformation to reduce them, so that you are ready for the time after. Otherwise the post-transformation blues will come quickly and you will soon find yourself in the next transformation cycle to improve the delivery process.
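The arithmetic behind this is easy to check. Using the illustrative unit figures from the example above:

```python
# Transaction cost as a share of total release effort (all units illustrative).
transaction_cost = 100       # regression testing, deployments, release planning, ...

big_transformation = 10000   # development effort of the transformation release
small_change = 1000          # a typical post-transformation release

overhead_big = transaction_cost / (big_transformation + transaction_cost)
overhead_small = transaction_cost / (small_change + transaction_cost)

# Roughly 1% overhead for the transformation release, but roughly 9% for
# each small release -- which is exactly the pressure that pushes teams
# back towards large, infrequent release bundles.
```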


Another reason to invest during the transformation is that once there is less work to be done on the functional side there is probably also less money around to make the required investment in changing the way you work and the automation and tooling that is required to support it. It is much easier to justify the bit of extra investment while the transformation is still under way and use the attention of the leadership team and the change energy already in the company during transformation to set yourself up for success post transformation. Don’t let a perfectly fine transformation go to waste for your Agile change effort.

From Factory to Labs – is that the better metaphor?

As you probably know, this blog was partly inspired by my frustration with managers and leadership who compared IT delivery with factories. This year at Agile Australia I was very positively surprised that the topic of the factory metaphor came up in a few talks. I am really glad we finally talk about the problems that stem from management using manufacturing thinking for IT delivery. Given I have spoken about this before, I don't want to revisit the reasons here, and would rather spend a bit of time on an alternative model that was put forward at the conference by Dom Price from Atlassian – it's not a factory, it's a lab.

Look at this slide from the talk for a summary of why the Labs model is more appropriate.


There is a lot I like about the Labs metaphor that could inspire better management – the inherent uncertainty around IT delivery, the data-driven nature supported by the scientific method, building in failure as a normal occurrence whose impact we try to minimise instead of assuming we could prevent it. That being said, I feel the Labs model might be taking it a step too far, as there is a level of predictability that is required by management and by business stakeholders. A delivery roadmap highlighting features to be delivered often underpins the business case. I might be too removed from scientific labs, and the right examples might exist, but my impression is that such roadmaps are less common in labs than we would want in IT. My experience with labs has been that timelines are full of unknowns – more than we would accept in IT delivery.

At this point there are three mental models that I am aware of: the factory, the design studio and the lab. I believe the first one is the dangerous one to use as inspiration for management principles; for the last two, I am hopeful that combined they might make for the right inspiration for management going forward. I have to think a bit more about this on the back of Agile Australia. Stay tuned, as I will be coming back to this topic.

How to Structure Agile Contracts with System Integrators

As you know, I work for a Systems Integrator and spend a lot of my time responding to proposals for projects. I also spend time as a consultant with CIOs and IT leadership, helping them define strategies and guide DevOps/Agile transformations. An important part of this is defining successful partnerships. When you look around, it is quite difficult to find guidance on how to better structure the relationship between vendor and company. In this post I want to provide three things to look out for when engaging a systems integrator or other IT delivery partner. Engagements should consider these elements to arrive at a mutually beneficial commercial construct.

A Focus on Day Rates is Dangerous

We all know that more automation is better, so why do many companies evaluate the 'productivity' of a vendor by their day rates? Most organisations are roughly organised in a pyramid shape (but the model works for other structures as well).

It is quite easy to do the math when it comes to more automation. The activities we automate are usually either low skilled or at least highly repeatable, and are usually performed by people with lower costs to the company. If we automate more of these tasks, our 'pyramid' becomes smaller at the bottom. What does this do to the average day rate? It brings it up, of course: the overall cost goes down, but the average day rate goes up.
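The arithmetic is easy to check with a small sketch; the headcounts and rates below are made-up numbers purely for illustration:

```python
def cost_profile(team):
    """team: list of (headcount, day_rate) tuples. Returns (total cost, average day rate)."""
    total = sum(n * rate for n, rate in team)
    heads = sum(n for n, _ in team)
    return total, total / heads

# A junior-heavy pyramid, then the same team after automation removes
# six of the low-cost, repeatable roles at the bottom.
before = [(10, 300), (5, 600), (2, 1000)]
after = [(4, 300), (5, 600), (2, 1000)]

b_total, b_avg = cost_profile(before)   # 8000 total, ~471 average
a_total, a_avg = cost_profile(after)    # 6200 total, ~564 average

# Total cost goes down, average day rate goes up.
assert a_total < b_total and a_avg > b_avg
```

A contract judged purely on average day rate would score this outcome as a step backwards, even though the client pays less overall.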

[Figure: pyramid-shaped organisation before and after automation]

You should therefore look for contracts that work on overall cost, not day rates. A drive for lower day rates incentivises manual activities rather than automation. Beyond that, it is also beneficial to incentivise automation further by sharing its upside (e.g. gain sharing on savings from automation, so that the vendor makes automation investments on their own).

Deliverables are overvalued

To this day, many organisations structure contracts around deliverables. This is not in line with modern delivery. In Agile or iterative projects we are potentially never fully done with a deliverable and certainly shouldn't encourage payments for things like design documents. We should focus on the functionality that goes live (and is documented) and should structure the release schedule so that frequent releases coincide with regular payments for the vendor. There are many ways to 'measure' the functionality that goes live, such as story points, value points or function points; each of them is better than deliverable-based payments.

Here is an example payment schedule:

  • We have 300 story points to be delivered in 3 iterations and 1 release to production; 1000$ total price
  • 10%/40%/30%/20% payment schedule (first payment at kick-off, second as stories are done in iterations, third once stories are released to production, last payment after a short warranty period)
  • 10% = 100$ on signing the contract
  • Iteration 1 (50 pts done): 50/300 * 0.4 * 1000 = 67$
  • Iteration 2 (100 pts done): 100/300 * 0.4 * 1000 = 133$
  • Iteration 3 (150 pts done): 150/300 * 0.4 * 1000 = 200$
  • Hardening & go-live: 30% = 300$
  • Warranty complete: 20% = 200$
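The schedule above generalises easily; here is a minimal sketch of the same 10/40/30/20 construct, where the 40% delivery tranche is paid out proportionally to story points done per iteration:

```python
TOTAL_PRICE = 1000
TOTAL_POINTS = 300

def iteration_payment(points_done):
    """Payment for one iteration, as a share of the 40% delivery tranche."""
    return points_done / TOTAL_POINTS * 0.4 * TOTAL_PRICE

schedule = {
    "signing": 0.10 * TOTAL_PRICE,                                  # 100 at kick-off
    "iterations": [iteration_payment(p) for p in (50, 100, 150)],   # ~67, ~133, 200
    "go_live": 0.30 * TOTAL_PRICE,                                  # 300 at go-live
    "warranty": 0.20 * TOTAL_PRICE,                                 # 200 after warranty
}

# The iteration payments always sum to the full 40% tranche.
assert round(sum(schedule["iterations"])) == 400
```

Because the tranche is proportional to points delivered, the vendor is paid for working software rather than for documents, and a slow iteration simply shifts payment to a later one.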

Black-box development is a thing of the past

In the past, it was a quality of a vendor to take care of things for you in a more or less "black-box" model: you trusted them to use their methodology, their tools and their premises to deliver a solution for you. Nowadays, understanding your systems and your business well is an important factor in remaining relevant in the market. You should therefore ask your vendor to work closely with people in your company, so that you keep key intellectual property in house and bring the best of both parties together: your business knowledge and knowledge of your application architecture, combined with the delivery capabilities of the systems integrator. A strong partner will be able to help you deliver beyond your internal capability and should be able to do so in a transparent fashion. This also reduces your risk of nasty surprises. And last but not least, in Agile one of the most important things for the delivery team is to work closely with the business; that is just not possible if vendor and company are not working together closely and transparently. A contract should reflect the commitment from both sides to make people, technology and premises available to each other.

One caveat to this guidance is that for applications that are due for retirement you can opt for a more traditional contractual model, but for systems critical to your business you should be strategic about choosing your delivery partner in line with the above.

I have posted on related topics in the past; feel free to read further:

https://notafactoryanymore.com/2014/10/27/devops-and-outsourcing-are-we-ready-for-this-a-view-from-both-sides/

https://notafactoryanymore.com/2015/01/30/working-with-sis-in-a-devopsagile-delivery-model/

Agile Reporting at the enterprise level (Part 2) – Measuring Productivity

Guide to the Guide to Continuous Delivery Vol 3

I am not really objective when I say I hope you have read the most recent Guide to Continuous Delivery Vol. 3, as I had the honor to contribute an article to it. My article is about mapping out a roadmap for your DevOps journey, and I have an extended and updated blog article on that topic in draft that I will push out sometime soon. There is a lot of really good insight in this guide, and for those with little time or who just prefer the "CliffsNotes", I want to provide my personal highlights. I won't go through every article, but will cover many of them. Besides articles, the guide provides a lot of information on tooling that can help in your DevOps journey.

Key Research Findings

The first article covers the CD survey that was put together for this guide. While fewer people said they use CD, this might indicate that more people understand what it really takes to do CD; I take it as a positive sign for the community. Unsurprisingly, Docker is very hot, but the survey results make clear there is a long way to go to make it really work.

Five Steps to Automating Continuous Delivery Pipelines

Very decent guidance on how to create your CD pipeline. Two things stood out for me. The first is "measure your pipeline", which is absolutely critical for enabling continuous improvement and potentially crucial for measuring the benefits in your CD business case. The second is that you sometimes need to include manual steps, which is where many tools fall down a bit. Gradually moving from manual to full automation by enabling a mix of automated and manual steps is a very good way forward.
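Both points can be combined in one simple model. The sketch below is hypothetical (step names and structure are my own, not from the article): every step is timed so the pipeline measures itself, and steps can be flagged as manual until they are automated:

```python
import time

def run_pipeline(steps):
    """steps: list of (name, automated, fn) tuples. Returns per-step durations in seconds."""
    timings = {}
    for name, automated, fn in steps:
        start = time.monotonic()
        if automated:
            fn()
        else:
            # Placeholder for a manual gate: a real tool would pause here
            # and wait for a human sign-off before continuing.
            print(f"waiting for manual approval: {name}")
        timings[name] = time.monotonic() - start
    return timings

steps = [
    ("build", True, lambda: None),
    ("unit-tests", True, lambda: None),
    ("security-signoff", False, None),   # still manual for now
    ("deploy", True, lambda: None),
]
timings = run_pipeline(steps)
```

Because manual steps are timed like any other, the measurements show exactly where automating next would save the most elapsed time.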

Adding Architectural Quality Metrics to Your CD Pipeline

An interesting article on measuring more than just functional tests in your pipeline. It stresses including performance and stress testing in the pipeline: even without full scale in early environments, you can get good insights by measuring performance there and using the relative change to investigate areas of concern.
Other information can provide valuable insight into architectural weaknesses too, such as the number of calls to external systems, the response time and size of those calls, the number of exceptions, and CPU time.

How to Define your DevOps roadmap – Well read the whole article 😉

Four Keys to Successful Continuous Delivery

Three of the keys are quite common: culture, automation and cloud. What I was happy to see was the point about open and extendable tools. Hopefully over time more vendors realise that this is the right way to go.

A scorecard for measuring ROI of Continuous Delivery Investment

An interesting short model for measuring ROI; it uses research-based numbers as inputs to the calculations. It could come in handy for anyone who wants a high-level business case.

Continuous Delivery & Release Automation for Microservices

I really liked this article, which offers quite a few handy tips for managing microservices that match my own ideas to a large degree. For example, you should only get into microservices if you already have decent CI and CD skills and capabilities. Microservices require more governance than traditional architectures: you will likely deal with more technology stacks, additional testing complexity, and a different ops model. To deal with this, you need a real-time view of the status and dependencies of your microservices. The article goes into quite some detail and provides a nice checklist.

Top CD resources

No surprise here to see the State of DevOps report, Phoenix Project and the Continuous Delivery book on this list.

Make sure to check out the DevOps checklist on devopschecklist.com – there are lots of good questions on it that can make you think about possible next steps for your own adoption.

Continuous Delivery for Containerized Applications

A lot of common ground gets revisited in this article, like the need for immutable microservices/containers, canary launches and A/B testing. What I found great about this article is the description of a useful tagging mechanism to govern your containers through the CD pipeline.

Securing a Continuous Delivery Pipeline

Some practical guidance on leveraging the power of CD pipelines to increase security, a topic that was also just discussed at the DevOps Forum in Portland, which means we should see some guidance coming out later in the year. The article highlights that tools alone will not solve all your problems, but they can provide real insights. When starting with tools like SonarQube, be aware that the initial information can be confusing and it will take a while to make sense of it all. Using the tools right will free up time for more meaningful manual inspections where required.
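One common pattern for putting such tools into the pipeline is a severity-based gate. The sketch below is my own illustration, not from the article, and the report format and thresholds are assumptions: fail the build only on the findings you have decided are blocking, and leave the rest for manual review.

```python
def security_gate(findings, max_high=0, max_medium=10):
    """findings: list of dicts with a 'severity' key. Returns (passed, counts)."""
    counts = {"high": 0, "medium": 0, "low": 0}
    for f in findings:
        sev = f.get("severity", "low")
        counts[sev] = counts.get(sev, 0) + 1
    passed = counts["high"] <= max_high and counts["medium"] <= max_medium
    return passed, counts

# A hypothetical scan report with one blocking finding.
report = [{"severity": "medium"}, {"severity": "low"}, {"severity": "high"}]
passed, counts = security_gate(report)
assert not passed and counts["high"] == 1
```

Starting with generous thresholds and tightening them over time avoids the initial flood of confusing findings blocking every build, while still ratcheting security upwards.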

Executive Insights on Continuous Delivery

Based on interviews with 24 executives, this article gathers their insights. Not surprisingly, they mention that it is much easier to start in a greenfield environment than a brownfield one. Even though everyone agrees that tools have significantly improved, the state of CD tooling is still not where people would like it to be, and many organisations still have to create a lot of homemade automation. The "elephant in the room" raised at the end is that people in general still rely on intuition for the ROI of DevOps; there is no obvious recommendation for how to measure it scientifically.