Category Archives: Agile and DevOps

From Factory to Labs – is that the better metaphor?

As you probably know, this blog was partly inspired by my frustration with managers and leaders who compare IT delivery with factories. This year at Agile Australia I was pleasantly surprised that the factory metaphor came up in a few talks. I am really glad we are finally talking about the problems that stem from management applying manufacturing thinking to IT delivery. Given I have spoken about this before, I don't want to revisit the reasons here and would rather spend a bit of time on an alternative model that was put forward at the conference by Dom Price from Atlassian – it's not a factory, it's a lab.

Look at this slide from the talk for a summary of why the Labs model is more appropriate.


There is a lot I like about the Labs metaphor that could inspire better management – the inherent uncertainty around IT delivery, the data-driven nature supported by the scientific method, and treating failure as a normal occurrence whose impact we try to minimise instead of assuming we could prevent it. That being said, I feel the Labs model might be taking it a step too far, as there is a level of predictability required by management and by business stakeholders. A delivery roadmap highlighting the features to be delivered often underpins the business case. I might be too far removed from scientific labs and the right examples might exist, but my impression is that such roadmaps are less common in labs than we would want in IT. My experience with labs has been that timelines are full of unknowns, more than we would accept in IT delivery.

At this point there are three mental models that I am aware of: the factory, the design studio and the lab. I believe the first one is dangerous to use as inspiration for management principles; for the last two I am hopeful that, combined, they might provide the right inspiration for management going forward. I have to think a bit more about this on the back of Agile Australia. Stay tuned as I will be coming back to this topic.

How to Structure Agile Contracts with System Integrators

As you know I work for a systems integrator and spend a lot of my time responding to proposals for projects. I also spend time as a consultant with CIOs and IT leadership to help them define strategies and guide DevOps/Agile transformations. An important part of that is defining successful partnerships. When you look around, it is quite difficult to find guidance on how to structure the relationship between vendor and company better. In this post I want to provide three things to look out for when engaging a systems integrator or other IT delivery partner. Engagements should consider these elements to arrive at a mutually beneficial commercial construct.

A focus on dayrates is dangerous

We all know that more automation is better, so why do many companies evaluate the 'productivity' of a vendor by their dayrates? Most organisations are roughly organised in a pyramid shape (but the argument works for other structures as well).

It is quite easy to do the math when it comes to more automation. The activities we automate are usually low-skilled or at least highly repeatable, and they tend to be performed by people with lower cost to the company. If we automate more of these tasks, our 'pyramid' becomes smaller at the bottom. What does this do to the average dayrate? It goes up, of course. The overall cost goes down, but the average dayrate goes up.
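
To make the arithmetic concrete, here is a small sketch with made-up rates and headcounts (the numbers are purely illustrative, not from any real engagement):

```python
# Hypothetical pyramid before and after automation (illustrative numbers only).
# Each tuple: (daily rate in $, number of people at that level).
before = [(1200, 2), (800, 8), (400, 30)]   # wide base of lower-rate roles
after = [(1200, 2), (800, 8), (400, 10)]    # automation shrinks the base

def totals(pyramid):
    cost = sum(rate * count for rate, count in pyramid)
    people = sum(count for _, count in pyramid)
    return cost, cost / people  # total daily cost, average dayrate

for label, pyramid in [("before", before), ("after", after)]:
    cost, avg = totals(pyramid)
    print(f"{label}: total daily cost ${cost:,}, average dayrate ${avg:,.0f}")

# before: total daily cost $20,800, average dayrate $520
# after:  total daily cost $12,800, average dayrate $640
# The total cost drops, yet the average dayrate goes up.
```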


You should therefore look for contracts that work on overall cost, not dayrates. A drive for lower dayrates incentivises manual activities rather than automation. Beyond dropping dayrates as a measure, it is also beneficial to incentivise automation even further by sharing its upside (e.g. gain sharing on savings from automation, so that the vendor makes automation investments of their own accord).

Deliverables are overvalued

To this day many organisations structure contracts around deliverables. This is not in line with modern delivery. In Agile or iterative projects we are potentially never fully done with a deliverable, and we certainly shouldn't encourage payments for things like design documents. We should focus on the functionality that goes live (and is documented) and should structure the release schedule so that frequent releases coincide with regular payments to the vendor. There are many ways to 'measure' functionality that goes live – story points, value points, function points, etc. – and each of them is better than deliverable-based payments.

Here is an example payment schedule:

  • We have 300 story points to be delivered in 3 iterations and 1 release to production, at a total price of $1,000.
  • 10%/40%/30%/20% payment schedule (first payment at kick-off, second one as stories are done in iterations, third one once stories are released to production, last payment after a short period of warranty)
  • 10% = $100 on signing the contract
  • Iteration 1 (50 pts done): 50/300 * 0.4 * $1,000 ≈ $67
  • Iteration 2 (100 pts done): 100/300 * 0.4 * $1,000 ≈ $133
  • Iteration 3 (150 pts done): 150/300 * 0.4 * $1,000 = $200
  • Hardening & go-live: 30% = $300
  • Warranty complete: 20% = $200
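
If you want to play with the numbers, here is a minimal sketch of the same calculation; the figures match the example above and are obviously simplified:

```python
# Story-point-based payment schedule (numbers match the example above).
TOTAL_PRICE = 1000   # total contract price in $
TOTAL_POINTS = 300   # story points in scope
SPLIT = {"signing": 0.10, "iterations": 0.40, "go_live": 0.30, "warranty": 0.20}

def iteration_payment(points_done):
    """Pay the iteration tranche proportionally to the story points completed."""
    return points_done / TOTAL_POINTS * SPLIT["iterations"] * TOTAL_PRICE

print(f"Signing: ${SPLIT['signing'] * TOTAL_PRICE:.0f}")
for i, pts in enumerate([50, 100, 150], start=1):
    print(f"Iteration {i} ({pts} pts done): ${iteration_payment(pts):.2f}")
print(f"Hardening & go-live: ${SPLIT['go_live'] * TOTAL_PRICE:.0f}")
print(f"Warranty complete: ${SPLIT['warranty'] * TOTAL_PRICE:.0f}")
# The iterations pay out ~$66.67, $133.33 and $200.00 - together the 40% tranche.
```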

Black-box development is a thing of the past

In the past it was considered a strength of a vendor to take care of things for you in a more or less "black box" model: you trusted them to use their methodology, their tools and their premises to deliver a solution for you. Nowadays, understanding your systems and your business well is an important factor in remaining relevant in the market. You should therefore ask your vendor to work closely with people in your company, so that you keep key intellectual property in house and bring the best of both parties together: your business knowledge and knowledge of your application architecture, combined with the delivery capabilities of the systems integrator. A strong partner will be able to help you deliver beyond your internal capability and should be able to do so in a transparent fashion. It will also reduce your risk of nasty surprises. And last but not least, in Agile one of the most important things for the delivery team is to work closely with the business. That is just not possible if vendor and company are not working together closely and transparently. A contract should reflect the commitment from both sides to work together as it relates to making people, technology and premises available to each other.

One caveat to this guidance: for applications that are due for retirement you can opt for a more traditional contractual model, but for systems critical to your business you should be strategic about choosing your delivery partner in line with the above.

I have written related posts on this in the past; feel free to read further:

https://notafactoryanymore.com/2014/10/27/devops-and-outsourcing-are-we-ready-for-this-a-view-from-both-sides/

https://notafactoryanymore.com/2015/01/30/working-with-sis-in-a-devopsagile-delivery-model/

https://notafactoryanymore.com/2015/02/26/agile-reporting-at-the-enterprise-level-part-2-measuring-productivity/

Guide to the Guide to Continuous Delivery Vol 3

I am not really objective when I say that I hope you have read the most recent Guide to Continuous Delivery Vol. 3, as I had the honor of contributing an article to it. My article is about mapping out a roadmap for your DevOps journey, and I have an extended and updated blog article on that topic in draft that I will push out soon. There is a lot of really good insight in this guide, and for those with little time or who just prefer the "CliffsNotes", I want to provide my personal highlights. I won't go through every article but will cover many of them. Besides articles, the guide provides a lot of information on tooling that can help in your DevOps journey.

Key Research Findings

The first article covers the CD survey that was put together for this guide. While fewer people said they use CD, this might indicate that more people understand what it really takes to do CD – I take this as a positive sign for the community. Unsurprisingly, Docker is very hot, but looking at the survey results it's clear that there is a long way to go to make it really work.

Five Steps to Automating Continuous Delivery Pipelines

Very decent guidance on how to create your CD pipeline. Two things stood out for me: first, "measure your pipeline", which is absolutely critical to enable continuous improvement and potentially crucial for measuring the benefits in your CD business case. Second, it highlights that you sometimes need to include manual steps, which is where many tools fall down a bit. Gradually moving from manual to full automation by enabling a mix of automated and manual steps is a very good way to move forward.
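
To illustrate both points – measuring the pipeline and mixing manual with automated steps – here is a rough sketch; the stage names, timings and the manual prompt are invented placeholders, not a specific tool's API:

```python
import time

def manual_step(description):
    """Placeholder for a step a human still performs (e.g. exploratory testing)."""
    input(f"MANUAL: {description} - press Enter when done... ")

def run_pipeline(stages):
    """Run each stage and record how long it took, whether automated or manual."""
    timings = {}
    for name, action in stages:
        start = time.time()
        action()
        timings[name] = time.time() - start
    return timings  # feed these into your metrics store to spot trends

stages = [
    ("build", lambda: time.sleep(0.1)),               # stand-in for the real build
    ("unit tests", lambda: time.sleep(0.1)),          # stand-in for the test run
    ("exploratory testing", lambda: manual_step("run exploratory tests on QA")),
    ("deploy to staging", lambda: time.sleep(0.1)),   # stand-in for the deployment
]

print(run_pipeline(stages))
```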

Adding Architectural Quality Metrics to Your CD Pipeline

An interesting article on measuring more than just functional tests in your pipeline. It stresses the point of including performance and stress testing in the pipeline: even without full scale, you can get good insights from measuring performance in early environments and use the relative change between builds to investigate areas of concern.
Other information can also provide valuable insights into architectural weaknesses, such as the number of calls to external systems, the response time and size of those calls, the number of exceptions, and CPU time.
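
As a hedged sketch of the "relative change" idea: compare the current build's measurements in an early environment against a baseline build and flag anything that drifts beyond a threshold. The metric names, numbers and the 20% threshold below are assumptions for illustration:

```python
# Compare architectural metrics of the current build against a baseline build
# taken in the same (small-scale) environment; the absolute numbers matter less
# than the relative change between builds.
baseline = {"avg_response_ms": 180, "external_calls": 42, "exceptions": 3, "cpu_seconds": 95}
current = {"avg_response_ms": 240, "external_calls": 44, "exceptions": 3, "cpu_seconds": 140}

THRESHOLD = 0.20  # flag anything that degrades by more than 20% vs. baseline

def regressions(baseline, current, threshold=THRESHOLD):
    flagged = {}
    for metric, base in baseline.items():
        change = (current[metric] - base) / base
        if change > threshold:
            flagged[metric] = round(change, 2)
    return flagged

print(regressions(baseline, current))
# {'avg_response_ms': 0.33, 'cpu_seconds': 0.47} -> worth investigating before production
```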

How to Define Your DevOps Roadmap – Well, read the whole article 😉

Four Keys to Successful Continuous Delivery

Three of the keys are quite common: Culture, Automation and Cloud. What I was happy to see was the point about open and extensible tools. Hopefully over time more vendors realise that this is the right way to go.

A scorecard for measuring ROI of Continuous Delivery Investment

An interesting short model for measuring ROI; it uses a lot of research-based numbers as inputs into the calculations. It could come in handy for those who want a high-level business case.

Continuous Delivery & Release Automation for Microservices

I really liked this article, which has quite a few handy tips for managing Microservices that match my own ideas to a large degree. For example, you should only get into Microservices if you already have decent CI and CD skills and capabilities. Microservices require more governance than traditional architectures, as you will likely deal with more technology stacks, additional testing complexity and a different ops model. To deal with this you need a real-time view of the status and dependencies of your Microservices. The article goes into quite some detail and provides a nice checklist.

Top CD resources

No surprise here to see the State of DevOps report, Phoenix Project and the Continuous Delivery book on this list.

Make sure to check out the DevOps checklist on devopschecklist.com – there are lots of good questions on it that can make you think about possible next steps for your own adoption.

Continuous Delivery for Containerized Applications

A lot of common ground gets revisited in this article, like the need for immutable Microservices/containers, canary launches and A/B testing. What I found great about this article is the description of a useful tagging mechanism to govern your containers through the CD pipeline.
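
The article describes its own tagging scheme in detail; as a generic illustration of the underlying idea – build the image once and record each stage it passes as an additional tag instead of rebuilding – here is a sketch. The registry name, image name and stage names are hypothetical:

```python
import subprocess

REGISTRY = "registry.example.com/myapp"   # hypothetical registry and image
STAGES = ["built", "tested", "staging-approved", "production"]

def promote(build_id, stage):
    """Tag the existing, immutable image with the stage it has reached."""
    assert stage in STAGES
    source = f"{REGISTRY}:{build_id}"           # the image is built exactly once
    target = f"{REGISTRY}:{build_id}-{stage}"   # the stage is recorded as a tag
    subprocess.run(["docker", "tag", source, target], check=True)
    subprocess.run(["docker", "push", target], check=True)

# e.g. after the automated test suite passes for build 1234:
# promote("1234", "tested")
```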

Securing a Continuous Delivery Pipeline

Some practical guidance on leveraging the power of CD pipelines to increase security – a topic that was also discussed at the DevOps Forum in Portland, which means we should see more guidance coming out later in the year. The article highlights that tools alone will not solve all your problems but can provide real insights. When starting to use tools like SonarQube, be aware that the initial information can be confusing and it will take a while to make sense of it all. Using the tools right will allow you to free up time for more meaningful manual inspections where required.
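
As an example of wiring such a tool into the pipeline, here is a sketch that checks a SonarQube-style quality gate before allowing a release; the server URL, project key and environment variables are assumptions, and you would adapt the call to your own setup:

```python
import os
import requests  # third-party HTTP client

SONAR_URL = os.environ.get("SONAR_URL", "https://sonar.example.com")  # hypothetical server
PROJECT_KEY = "my-project"                                            # hypothetical project key

def quality_gate_passed(project_key):
    """Ask the SonarQube server whether the project's quality gate is green."""
    resp = requests.get(
        f"{SONAR_URL}/api/qualitygates/project_status",   # quality-gate status endpoint
        params={"projectKey": project_key},
        auth=(os.environ["SONAR_TOKEN"], ""),             # token passed as the username
    )
    resp.raise_for_status()
    return resp.json()["projectStatus"]["status"] == "OK"

# In the pipeline, fail the build if the gate is red, e.g.:
# if not quality_gate_passed(PROJECT_KEY):
#     raise SystemExit("Quality gate failed - blocking the release")
```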

Executive Insights on Continuous Delivery

This article gathers insights from interviews with 24 executives. Not surprisingly, they mention that it is much easier to start in a greenfield environment than in a brownfield one. Even though everyone agrees that tools have significantly improved, the state of CD tooling is still not where people would like it to be, and many organisations still have to create a lot of homemade automation. The "elephant in the room" raised at the end is that people in general still rely on intuition for the ROI of DevOps; there is no obvious recommendation for how to measure it scientifically.

DevOps Leadership Culture – Staying Cool When it is Getting Tough

For many organizations, the move to DevOps is more complicated than simply putting Agile methodologies, tools, and techniques into practice – it requires a cultural shift. This is especially true when running into the inevitable roadblocks that occur along the path to disruption. This is when IT leaders must stay the course and have faith in their DevOps vision.

In this post, I would like to talk about how IT leaders can create a culture to enable DevOps to thrive, and what the future of IT organisations might look like if they successfully stay the course.

How DevOps and Agile have evolved over the years

I find that the industry seems to have moved through the same phases of focus as I have (but perhaps that is a case of confirmation bias). Let me describe what I mean. Coming from some form of waterfall development, at a time when the best answer to productivity improvement was going offshore or using packaged software, Agile provided an alternative way to deliver projects successfully. The initial focus was on small teams of highly focused individuals, and the success of those teams showed what is possible. Early successes meant that many more organizations wanted to adopt Agile, and so it was adopted in larger and more complex environments.

At this stage, Agile projects got into trouble, as the relatively simple recipes and the tendency toward offshoring and packaged software worked against the ideal of small, co-located teams for Agile delivery. This is where I saw the next two trends picking up: scaled Agile frameworks (like SAFe) and DevOps with its cultural and technical aspects. While there is a lot more to be done in this space, I can already see the broader organizational change as the next frontier. Otherwise successful Agile/DevOps teams currently run into problems with funding cycles and other organizational practices. While Agile and DevOps were used in small pockets of organizations, it was easy to fly under the radar; with mainstream adoption we will now have to solve these other, more complex problems in the organization, and do so while shifting the overall organizational culture.

Cultural transformation needed to become truly Agile and adopt DevOps: What IT leaders need to do

Over time I came to realize that methodology and technical practices can only get you so far. Staying the course in tough times is not easy, and the reality is that it is likely going to get worse before it gets better. Leaders need to believe in their mission and support the team in times when it does not look like there will be quick wins.

There is a story about Toyota and how they introduced a cord in their overseas factories. The cord is pulled whenever there is a problem with the production system. Of course this is disruptive at first, and some factories stopped using the cord because of the disruption. The ones who kept using it took an initial hit to productivity, while the others continued to produce the same results as before. Management could easily have given up on the cord, but they stuck with it and over time improved their production system so much that they significantly outperformed the other factories. There was no chance for the other factories to catch up afterwards, because the improvements were systematic rather than just fixing defects as they appeared, as the other factories had done. To me this serves as a worthwhile example for management adopting DevOps. Management needs to find ways to measure the progress of improvements and to stay the course of systematic improvement even when productivity takes an initial hit. I have seen many transformational efforts that start well and then get stuck when disruption is necessary, which might mean some steps backwards in some regards. This is where management can show what it means to support a vision and to stay the course. The ones who do, and have the right vision, will win this race.

Let me share one more piece of personal advice on cultural change. I subscribe to Dan Pink's sources of motivation at work: autonomy, mastery and purpose. Management should look for opportunities to create a workplace where each team member can increase their satisfaction along those three dimensions. We are all knowledge workers in IT, and the best way to get the best out of us is for us to be highly motivated and working in line with the company vision. From talking to people in the IT industry, I often find that we have optimized work in a way that does not consider the relevant characteristics of knowledge workers, and this is likely the next area that will increase productivity significantly if addressed correctly.

A look at the Lean Enterprise of the future

Honestly, I think Agile and DevOps will be part of every organization in the next few years. So far, very few have really transformed their whole organization to become as lean as possible; after all, Agile and DevOps are both ways to become leaner. I think that Agile and DevOps practitioners and change agents will join forces with organizational change management practitioners to examine organizational processes. While I don't know what the end state looks like in detail, I have a few things in mind that I hope to see in organizations over the next few years, and I will hopefully play my part in some of those transformations. Here is what the organization of the future looks like to me:

  • HR practices have been transformed to recognize the team-based nature of work and that outcomes of the organization matter the most.
  • Financial governance has found a way to decouple funding cycles so that Agile teams can continue working as long as certain organizational results (financial and otherwise) are achieved by teams.
  • Project-based teams are a thing of the past. Teams exist as persistent entities with stable members that transcend traditional role definitions and even organizational boundaries where vendors and system integrators are involved.
  • Stakeholders across the organization have access to real-time information from both business and IT systems to steer the organization.

This post has been adapted from an interview I gave to "The Enterprisers" project in the lead-up to the DevOps Enterprise Summit 2015 – you can find the full interview here: https://enterprisersproject.com/article/2015/10/creating-culture-devops-thrive

Picture: Leadership vs management by Olivier Carré-Delisle
Taken from Flickr under Creative Commons license

Agile Reporting at the enterprise level (Part 2) – Measuring Productivity

Those of you who know me personally know that nothing gets me on my soapbox quicker than a discussion about measuring productivity. Just over the last week I have been asked three times how to measure it in Agile. I was surprised to notice that I had not yet put my thoughts on paper (well, in a blog post). This is well overdue, so here are my thoughts.

Let's start with the most obvious: productivity measures output, not outcome. The business cares about outcomes first and outputs second; after all, there is no point creating Betamax cassettes more productively than a competitor if everyone buys VHS. Understandably, it is difficult to measure the outcome of software delivery, so we end up talking about productivity. Having swallowed this pill, and being unable to give more than anecdotal guidance on how to measure outcomes, let's look at productivity measurements.

How not to do it! The worst possible way that I can think of is to measure literally based on output. Think of widgets or Java classes or lines of code. If you measure this kind of output, you are at best not measuring something meaningful and at worst encouraging bad behaviour. Teams that focus on creating an elegant and easy-to-maintain solution with reusable components will look less productive than the ones just copying things or creating new components all the time. This is bad. And think of the introduction of technology patterns like stylesheets: all of a sudden, for a redesign you only have to update one stylesheet and not all 100 web pages. On paper this would look like a huge productivity loss – updating 1 stylesheet versus updating 100 pages in a similar timeframe. Innovative productivity improvements will not be accurately reflected by this kind of measure, and teams will not look for innovative approaches as much, given they are measured on something different. Arguably function points are similar, but I have never dealt with them, so I will reserve judgement until I have firsthand experience.

How to make it even worse! Yes, widget- or line-of-code-based measurements are bad, but it can get even worse. If we base measurements on this, we do not incentivise teams to look for reuse or componentisation of code, and we are also in danger of destroying their sense of teamwork by measuring what each team member contributes. "How many lines of code have you written today?" I have worked with many teams where the best coder writes very little code, because he is helping everyone else around him. The team is more productive with him doing this than with him writing lots of code himself. He multiplies the team's strength rather than linearly growing its output by doing more himself.

Okay, you might say that this is all well and good, but what should we do? We clearly need some kind of measurement. I completely agree. Here is what I have used, and I think it is a decent starting point:

Measure three different things:

  • Delivered functionality – You can do this by measuring how many user stories or story points you deliver. If you are not working in Agile, you can use requirements, use cases or scenarios – anything that actually relates to what the user gets from the system. This is closest to measuring outcome and hence the most appropriate measure. Of course these items come in all different sizes and you'd be hard pressed to strictly compare two data points, but the trend should be helpful. If you have done some normalisation of story points (another great topic for a soapbox), that gives you some comparability.
  • Waste – While it is hard to measure productivity and outcomes, it is quite easy to measure the opposite: waste! Of course you should decide contextually which elements of waste you measure, and I would be careful with composites unless you can translate them into money (e.g. "all the waste adds up to 3M USD", not "we have a waste index of 3.6"). Composites of such diverse elements as defects, manual steps, process delays and handovers are difficult to understand. If you cannot translate these into dollars, just choose 2 or 3 main waste factors and measure them. Once those are under control, find the next one to measure and track.
  • Cycle time – This is the metric I would consider meaningful above all others: how long does it take to get a good idea implemented in production? You should use the broadest definition you can measure and then break it down into sub-components to understand where your bottlenecks are and optimise those (see the sketch after this list). Many of these will be influenced by the level of automation you have implemented and the amount of lean process optimisation you have done.
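
Here is the sketch referred to above: a minimal cycle-time calculation, assuming you can export an "idea accepted" and a "live in production" timestamp per work item from your tracking tool (the item IDs and dates are made up):

```python
from datetime import datetime
from statistics import mean, median

# Hypothetical export from your work-tracking tool:
# (work item id, idea accepted into backlog, live in production)
items = [
    ("STORY-101", "2017-03-01", "2017-03-20"),
    ("STORY-102", "2017-03-05", "2017-04-02"),
    ("STORY-103", "2017-03-10", "2017-03-24"),
]

def cycle_time_days(start, end):
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

times = [cycle_time_days(start, end) for _, start, end in items]
print(f"mean cycle time: {mean(times):.1f} days, median: {median(times)} days")
# Break the same measurement down per pipeline stage to find your bottlenecks.
```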

This is by no means perfect. You can game these metrics just like many others and sometimes external factors influence the measurement, but I strongly believe that if you improve on these three measures you will be more productive.

There is one more caveat to mention. You need to measure exhaustively and in an automated fashion. The more you rely on just a subset of the work, and the more you track activities manually, the less accurate these measures will be. This also means you need to measure work that doesn't lead to functionality being delivered, like paying down technical debt, analysing requests for functionality that never gets implemented, or defect triage. There is plenty of opportunity to optimise in this space – paying technical debt down quicker, validating feature requests quicker, reducing feedback cycles to shorten defect triage times.

For other posts in the Agile reporting series, look here: Agile Reporting at the enterprise level – where to look? (Part 1 – Status reporting)

Here is a related TED talk about productivity and the impact of too many rules and metrics by Yves Morieux from BCG

Distributed Agile – An oxymoron?

It is time for me to put on paper (well not really paper…but you know what I mean) my thoughts on distributed Agile. I have worked with both distributed and co-located Agile. Distributed Agile is a reality, but there are a lot of myths surrounding it. I had some queries over the last few months where people were trying to compare co-located teams with distributed teams.

Let's start by talking about one of the things that comes up again and again: "Agile works best with a small number of clever people in a room together". Now, I agree that this will be one of the best performing teams, but I would argue that in that situation you don't need a methodology at all and the team will still perform very well. The power of Agile is the rigour it brings to your delivery when you either don't have very experienced people in the team or when you are a distributed team.

Now why would you choose a distributed Agile model?

  • Scaling up. It can be very difficult to quickly find enough people in one location.
  • Access to skills. It can also be difficult to find people with the right skills locally.
  • Follow-the-sun development. By working in different regions of the world, you can work around the clock, which means you can bring functionality to market quicker.
  • You are already distributed. If your teams are already in different locations, you don't really have a choice, do you?
  • You DO NOT choose distributed Agile because it is inherently cheaper or better. That is a misconception.

The goal of distributed Agile is to get as close as possible to the performance the same team would be capable of if they were co-located.

All things being equal, I don't believe a distributed team will ever outperform a co-located team. But then, all things are never equal. The best distributed teams perform better than 80% of co-located teams (see graph on the left). So in both co-located and distributed Agile it is important to get a well-functioning team together. Improving the performance of your team matters more than the location of its members.

There are, however, factors that help you get close to the co-located experience.

  • Invest in everything that has to do with communication: webcams, instant messengers, video conferencing, … And really make this one count – spending 10 minutes each time just to establish a connection costs a lot of productive time over a sprint.
  • Physical workspace – where team members are co-located they need the right physical environment and shouldn't sit among hundreds of other people who disturb them.
  • Invest in virtual tooling like wikis, discussion boards, etc.
  • Find ways to get to know each other. This happens naturally for co-located teams but requires effort for distributed teams. Spend 10 minutes in each sprint review introducing a team member, or create virtual challenges or social events in Second Life or World of Warcraft.
  • Don't save a few dollars on travel. Get key stakeholders or team members to travel to the other location so that, at least for a short period of time, you can enjoy the richness of communication that comes with co-location.
  • Agree on virtual etiquette – what should each team member do on calls or in forums? Retrospectives and sprint reviews require some additional thought to really hear from everyone.

If you do all that, you will have a team that operates nearly as if it were co-located. And if you really want to push the performance of your team further, then I have an answer for that as well:

  • Look at your engineering practices
  • Look at your tools
  • Implement DevOps/Continuous Delivery/Continuous Integration

That will really push your productivity, much more than any methodology or location choice could possibly do.

Continuous Everything in DevOps… What is the difference between CI, CD, CD, …?

Okay, so training was more work than expected, hence I will now slowly make my way through the backlog of topics. We will start with some of the different techniques being used in DevOps. I will move the definitions to my definitions page as well, as I am sure I will refer to them again and again over time.

Continuous Integration (the practice)
This is probably the most widely known practice in this list. It is about compiling/building/packaging your software on a continuous basis. With every check-in, a system triggers the compilation process, runs the unit tests, runs any static analysis tools you use and any other quality-related checks that you can automate. I would also add automated deployment into one environment, so that you know the system can be deployed. It usually means that you have all code merged into the mainline or trunk before triggering this process. Working from the mainline can be challenging, and concepts like feature toggles are often used to differentiate between features that are ready for consumption and features that are still in progress. This leads to variants where you run continuous integration on specific code branches only, which is not ideal, but better than not having continuous integration at all.
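
As a rough sketch of what a commit-triggered check could look like – the build, test and analysis commands below are placeholders for your real toolchain, not a prescription:

```python
import subprocess
import sys

# Commands are placeholders - substitute your real build, test and analysis tools.
CHECKS = [
    ("build", ["make", "build"]),
    ("unit tests", ["make", "test"]),
    ("static analysis", ["make", "lint"]),
    ("deploy to CI environment", ["make", "deploy-ci"]),
]

def run_checks():
    for name, command in CHECKS:
        print(f"Running {name}...")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"FAILED: {name} - breaking the build")
            sys.exit(result.returncode)
    print("All checks passed - the build is green")

if __name__ == "__main__":
    run_checks()  # typically triggered by the CI server on every check-in
```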

Continuous Integration (the principle)
I like to talk about Continuous Integration in a broader sense that aims at integrating the whole system/solution as often and as early as possible. To me, Continuous Integration means that I want to integrate my whole system, whereas I could have a Continuous Integration server running on individual modules of the system only. This also means I want to run integration tests early on and deploy my system into an environment. It also means "integrating" test data early with the system, to test as close as possible to the final integration. Really, to me it means testing as far left as possible and not leaving integration until the integration test phase at the end of the delivery life cycle.

Continuous Delivery vs. Continuous Deployment
What could be more confusing than having two different practices that are both called CD: Continuous Delivery and Continuous Deployment? What is the difference between CD and CD? Have a look at the summary picture.

As you can see, the main practices are the same and the difference lies mainly in how far you apply them. In Continuous Delivery you aim to have the full SDLC automated up to the last environment before production, so that you are ready at any time to deploy to production automatically. In Continuous Deployment you go one step further: you actually deploy to production automatically. The difference is really just whether the final deployment to production has an automatic or a manual trigger. Of course this kind of practice requires really good tooling across the whole delivery supply chain: everything already mentioned under Continuous Integration, plus more sophisticated test tooling that allows you to test all the different aspects of the system (performance, operational readiness, etc.). And to be honest, I think there will often be cases where you require some human inspection for usability or other non-automatable aspects, but the goal is to minimise this as much as possible.
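
To illustrate that the difference boils down to the production trigger, here is a sketch; the environment names and the deploy function are placeholders:

```python
AUTO_DEPLOY_TO_PROD = False  # Continuous Delivery; set True for Continuous Deployment

def deploy(environment):
    print(f"Deploying the tested build to {environment}...")  # placeholder for the real deployment

# The pipeline is identical all the way to the last pre-production environment.
for env in ["dev", "test", "staging"]:
    deploy(env)

# The only difference: who pulls the trigger for production.
if AUTO_DEPLOY_TO_PROD:
    deploy("production")                                # Continuous Deployment
else:
    answer = input("Release to production? [y/N] ")     # Continuous Delivery: a human decides
    if answer.lower() == "y":
        deploy("production")
```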

Continuous Testing
Last but not least, Continuous Testing. To me this means that during the delivery of a system you keep running test batteries. You don't wait until later phases of delivery to execute testing; rather, you keep running tests against the latest software build, so you have a real-time view of the quality of your software – and if you use Test-Driven Development, a real-time view of progress. This is not terribly different from the practices mentioned before, but I like the term because it reflects the diffusion of testing from a distinct phase into an ongoing, continuous activity.

I hope this post was helpful for those of you who were a bit confused by the terms. Reach out with your thoughts.