The Most Controversial Concept in Agile Delivery – Estimating in Story Points

This blog post is another one of those I should have written a while ago, as the topic of story-point-based estimation keeps coming up again and again. To really understand why story-point-based estimation is important for Agile delivery, I think I need to explain the idea behind it.

The purpose of estimates is to get a good idea of how much work needs to be done to achieve a certain outcome. To do that, the estimate should be accurate and reasonably precise. This is where the crux of the problem is: precision. If I asked you how long it takes to fly from Sydney to Los Angeles, you would not respond with an estimate that includes minutes and seconds, because you know that that level of precision would be meaningless. The more precise we get in our estimates, the more we pretend to be able to do something we cannot do: work at that level of precision. The other downside of precision is that each additional level of it requires more work to be put into the estimation process. I have done many IT projects and can tell you that my estimates for individual tasks are easily off by as much as +/- 100%, but in aggregate my estimates are pretty good.

Let’s explore the difference between accuracy and precision a bit further:

[Image: accurate vs. precise]

It should be clear that we care more about accuracy than about precision, and that is exactly what story points give me. I spend just the necessary amount of time estimating to be reasonably accurate without trying to become too precise. The usual Fibonacci sequence (1, 2, 3, 5, 8, …) helps to avoid false precision as well. Now, to be honest, we could call it 1, 2, 3, 5, 8 days and be done with it, as that would probably achieve the same outcome as story points. The problem is that for some reason we are a lot more tempted to use the in-between numbers when we talk about days. We are also tempted to equate days of effort with schedule, and most of us can attest that a day of effort is hardly ever done in a day of schedule, as we get distracted, multi-task or attend meetings. The story point concept provides a nice abstraction that prevents these mental shortcuts and keeps us focused on the relative nature of the estimate.

The other thing that should be obvious is that a day of effort for one person is not the same as a day of effort for another. More experienced people need less time than more junior people, so any estimate in hours or days is flawed unless you know who will do the work. Story points do not suffer from this problem, as they are relative to other stories and independent of the person performing the tasks associated with them.

The other nice thing about Agile estimation is that it is usually a lot closer to the often-recommended Delphi technique, which asks multiple independent experts to estimate tasks and then aggregates their estimates. Planning poker is a pretty close approximation of the Delphi technique and is therefore much more accurate than estimates done by individuals.
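
To make the planning-poker idea concrete, here is a minimal sketch of a single estimation round in Python; the Fibonacci snapping, the divergence rule and the names are my own illustrative assumptions rather than part of any particular planning-poker tool.

```python
# Minimal, illustrative planning-poker helper (hypothetical names and thresholds).
FIBONACCI = [1, 2, 3, 5, 8, 13, 21]

def snap_to_fibonacci(value):
    """Round an estimate to the nearest allowed Fibonacci card."""
    return min(FIBONACCI, key=lambda card: abs(card - value))

def poker_round(votes):
    """Return (consensus_points, needs_discussion) for one round of votes.

    votes: dict of estimator name -> story points played.
    If the highest and lowest cards are far apart, the outliers explain
    their reasoning and the team votes again (Delphi-style).
    """
    lowest, highest = min(votes.values()), max(votes.values())
    needs_discussion = highest >= 2 * lowest   # assumed divergence rule
    average = sum(votes.values()) / len(votes)
    return snap_to_fibonacci(average), needs_discussion

# Example round with a wide spread of estimates
votes = {"Ana": 3, "Ben": 5, "Chi": 13}
points, discuss = poker_round(votes)
print(points, discuss)  # -> 8 True: the spread is wide, so talk before re-voting
```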

But why do we need a point system at all? Why not just do relative sizing in t-shirt sizes or something similar? As I have explored in another blogpost (link), teams need a goal line whenever there is a certain outcome to be achieved. The easiest way to provide one is by tracking progress on a numerical scale (see Agile reporting post). And if you work in a larger organisation, you probably want a common currency to be able to measure throughput (see productivity blog) and to swap stories between teams. Here I will go with the guidance that SAFe provides: start with a point being roughly a full day of work and estimate everything else relative to that. On a regular basis, bring members of the team together to estimate an example set of stories and use this process to recalibrate the shared understanding of relative size.

So what if things change? One thing people are always concerned about is scope creep or inaccurate estimates. For an individual story I don’t have strong opinions on whether or not you update the size once you realise there is more or less work than expected. However, if you use larger buckets for your initial estimates (e.g. a feature that should roughly take 100 points), then I think it is important to measure how many points the stories of that feature actually add up to – if that differs from the 100 points, you have some real scope change that will impact your timelines.
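
To make that feature-level check concrete, here is a minimal sketch that compares a feature’s original bucket estimate with what its stories actually add up to; the feature, story names and numbers are purely illustrative.

```python
# Illustrative check: do a feature's stories add up to its original bucket estimate?
feature_bucket_estimate = 100  # the rough up-front estimate in story points

stories = {  # hypothetical stories carved out of the feature, with their estimates
    "Login form": 8,
    "Password reset": 13,
    "SSO integration": 21,
    "Audit logging": 13,
    "Error handling": 8,
    "Admin screens": 21,
    "Performance hardening": 34,
}

actual_total = sum(stories.values())
scope_change = actual_total - feature_bucket_estimate
print(f"Estimated {feature_bucket_estimate} pts, stories add up to {actual_total} pts "
      f"({scope_change:+d} pts scope change)")
# -> Estimated 100 pts, stories add up to 118 pts (+18 pts scope change)
```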

To close off, I will provide a few helpful links to other comments/blogs about story points which you can use to learn more about this topic:

http://www.scruminc.com/story-points-why-are-they-better-than/

http://collaboration.csc.ncsu.edu/laurie/Papers/ESEM11_SCRUM_Experience_CameraReady.pdf

http://www.mountaingoatsoftware.com/blog/seeing-how-well-a-teams-story-points-align-from-one-to-eight

https://www.scrum.org/Forums/aft/564

http://www.mlcarey321.com/2013/08/normalized-story-points.html

http://blogs.versionone.com/agile_management/2013/10/14/scalable-agile-estimation-and-normalization-of-story-points-introduction-and-overview-of-the-blog-series-part-1-of-5/

Mirco’s Advice for supervisors – Feedback, the best way to improve your team

After discussing one-on-ones in my last blog about tools for supervisors, this blog will focus on feedback. Feedback is your opportunity to improve the performance of your team on a daily basis. Here I am talking about ongoing feedback you can give your team every day, not the kind of feedback you give once per performance year. Before I get to feedback, let me remind you that this post is one in a series of six posts on tools for supervisors:

  1. One on Ones (link)
  2. Feedback
  3. Coaching (TBD)
  4. Delegation (TBD)
  5. Principles for success (TBD)
  6. Pitfalls for supervisors (TBD)

While one-on-ones are probably the most important practice given the impact they have on your team, feedback is what moves your team forward. The most important point about feedback is to never give it when you are angry. The way you communicate and what you say when you are angry will undermine the whole idea behind feedback – to make your team more effective in the future. There is nothing you can do about the past; move on and provide constructive feedback. You should only give feedback when you are able to smile while doing so. This will make sure that you are in a positive frame of mind.

There is a good reason to be in a positive frame of mind when giving feedback. If you think about it, how often is a mistake made on purpose? – it hardly ever is. So assume good intent. There are very few people who make mistakes on purpose. This of course means that there is no reason to get angry, just try to help your direct-report become more effective by pointing out the ineffective behaviour and discussing alternatives. I will tell you how to do that further below.

Let’s not forget that you should also give positive feedback. Some tend to forget this even though it is so much easier to give positive feedback. For it to really make an impact, make sure to be specific about positive feedback as well. Don’t just praise “Well done”, but rather give specific positive feedback like “When you prepare meeting notes and send them out before I even ask you, it helps everyone stay on top of their assigned tasks. Thank you.”

Purpose: The purpose of feedback is to encourage effective behaviour in the future. There is no “why” in feedback, which means you are not trying to understand why something happened, but instead trying to encourage effective behaviour in the future. This is not a root cause analysis. If you take this purpose to heart you will see that there is little difference between positive and negative feedback, you simply state what your direct-report did, the impact it had and what to do in the future (either continue or change behaviour).

How to do it: Feedback is not easy to give, especially negative feedback. The key is to focus on the behaviour and not on implied traits: e.g. "When you raise your voice and make sarcastic comments" is much better than "When you act like a jerk". Make sure the person you give feedback to understands the implications, e.g. "When you send me your status on time, it allows me to collate the report quickly and be on time with my report to my boss".

Don’t argue with your direct report about either the reason for his behaviour or the validity of your feedback. Remember, the purpose of feedback is to influence future behaviour. If he argues with your feedback, walk away; he will either do the same thing again and you can give him the same feedback again (this time with one more piece of evidence), or he won’t do it again (which means your feedback has achieved its purpose). The guys at manager-tools.com refer to this as a "shot across the bow", and this piece of insight was eye-opening for me and saved me from many unnecessary and ineffective discussions with my direct reports.

There is a specific format you can use to give feedback: ask first "Can I give you some feedback?", then focus on behaviour "When you do x, this is what happens", and then either thank and encourage him "Thank you, keep doing this" or ask for an improvement "Can you do this better/differently next time?" Timing of feedback is also important: don’t give feedback on things that happened more than a week ago. Think of feedback like breathing: many supervisors hold their breath and then blast it all out after a while (or even only at the end of the year). Try to breathe regularly. Small bits of regular feedback will allow you to keep correcting course rather than trying to turn the whole ship around twice a year.

One last piece of advice: find a way to encourage yourself to give feedback frequently. Put a reminder in your calendar every day, put a note on your one-on-one tracking sheet, or do what the guys at manager-tools recommend: put 3 coins in your left pants pocket and every time you give feedback move one to the right pocket. At the end of each day you know whether you gave 3 pieces of feedback, and each time you put your hands in your pockets you get a little reminder of how you are tracking.

Sure ways to fail #2 – Not knowing what the goal is

“If you don’t know where you’re going, any road will take you there” – The Cheshire Cat in Alice in Wonderland

This quote from Alice in Wonderland is very insightful and I have quoted it many times when talking about strategies and plans. In this case I want to use it as an illustration of a sure way an Agile project will fail. If you don’t know what the goal line is for a specific release or sprint, you have no means of understanding how you are travelling and whether you are on the right track. I have seen many teams post great velocities in their first couple of sprints, but when asked whether they can achieve the release goal they have no idea. This is painful and leads to a lot of anxiety for the team and stakeholders.

How does this happen? I believe it is because in Agile we don’t want to spend too much time planning and estimating too far into the future, and hence we kick off projects really quickly. We do this with just the first one or two sprints’ worth of stories ready for implementation and then keep grooming the backlog. This is great, but if you choose to do so, I think you need to spend a little bit of extra time to give the team a final goal. You don’t need all the stories for the release (after all, we want to be flexible in Agile), but you should have some idea of the things you need to deliver. These can be stories, themes, epics, features or whatever you choose. At this stage you should do a quick estimation to provide the overall scope for the release and a goal line that the team can use in their burn-up graphs (read more on reporting here). You can then track changes to the goal line if epics require more stories than expected or if any new scope is introduced. This allows the team to have meaningful discussions with the product owner and the stakeholders about required changes to the release. If you don’t have such a goal line, you could get a nasty surprise towards the end of your release, and surprises are not something we want in Agile delivery.
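
To illustrate what a goal line buys you, here is a minimal sketch that forecasts how many sprints the remaining release scope is likely to take, given the points completed so far; the numbers are made up and the forecast is only as good as the goal line and velocity behind it.

```python
import math

# Illustrative burn-up projection against a release goal line (made-up numbers).
release_scope = 240                        # current goal line in story points
completed_per_sprint = [18, 22, 20, 24]    # points finished in sprints 1..4

done = sum(completed_per_sprint)
velocity = done / len(completed_per_sprint)          # average points per sprint
remaining = release_scope - done
sprints_left = math.ceil(remaining / velocity)

print(f"Done: {done} pts, velocity: {velocity:.1f} pts/sprint, "
      f"forecast: {sprints_left} more sprints to reach the goal line")
# -> Done: 84 pts, velocity: 21.0 pts/sprint, forecast: 8 more sprints to reach the goal line
```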

The one caveat is a project where the scope is genuinely unknown or undefined. In those cases you can either work Kanban-style or with an assumed target velocity. If you work in Kanban, you only ever have to worry about the first few items in your backlog: set expectations with your stakeholders on how long they will take, and then do the same for the next set of stories on a regular cadence. This requires a lot of trust between the team, the product owner and the stakeholders. Alternatively, you can set yourself a certain target velocity, work towards it, and fill it with stories that are groomed on an ongoing basis.

Employee Engagement – The magic potion?

I am sure by now most people understand that there is a strong correlation between employee satisfaction and business results. If you need more convincing, have a read of these two articles: Forbes, Research Paper

So how do you best go about measuring it?

On my current project I have decided to go with the following 4 questions:

  • I would recommend this account and my project as a good place to work
  • I have the tools and resources to do my role well
  • I rarely think about rolling off this account or project
  • My role makes good use of my skills and abilities

For those of you who have read Jez Humble’s "Lean Enterprise", these questions will look familiar. I have adapted them to the project setting I work in. We have just set out on a cultural transformation to become truly Agile and adopt DevOps in a large, complex legacy environment. To me, measuring the above gives the best indicator that we are doing the right thing. Of course there will be other measures that track the quality of the outcomes and the levels of automation, among others, but changing the culture of an organisation is critical if your Agile and DevOps adoption is to be successful. I will report back throughout that journey to tell you what my experience is with the above questions.
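
To make the measurement concrete, here is a minimal sketch of how such survey responses could be tallied, assuming answers are collected on a 1 to 5 agreement scale and counting a 4 or 5 as favourable; the data and the favourability rule are my own assumptions, not from "Lean Enterprise".

```python
# Illustrative tally of the four engagement questions, assuming a 1-5 agreement
# scale (5 = strongly agree) and treating 4 or 5 as a favourable response.
responses = {  # hypothetical answers from six team members
    "Would recommend this account/project": [5, 4, 4, 3, 5, 4],
    "Have the tools and resources to do my role": [3, 4, 2, 4, 3, 4],
    "Rarely think about rolling off": [4, 5, 4, 4, 3, 5],
    "Role makes good use of my skills": [5, 5, 4, 4, 4, 3],
}

for question, scores in responses.items():
    favourable = sum(1 for s in scores if s >= 4) / len(scores)
    print(f"{question}: {favourable:.0%} favourable")
```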

IT delivery is complex and it is not always clear what the right solution is. I have found in the past that it is near impossible to create processes and tools that work by themselves; you need people to use the processes and tools with the right intent and mindset. It’s very frustrating when you implement great automation only to see a few months later that the solution has degraded. It is with hindsight that I understand that the answer is not just to implement process and tools but to instill the right culture and mindset for progression,
a culture where we blamelessly identify a way to avoid the same mistake again rather than looking for the person at fault,
a culture where we strive for automation and lean processes and are not concerned about the size of our teams or budgets,
a culture where you don’t have to protect your fiefdom and where you are happy to collaborate with others to solve problems no matter where the root cause lies.

I think all of us in IT need to understand this dynamic between employee satisfaction and outcomes better. I certainly believe that I have come across a magic potion that I aim to bring to all my future projects.

What computer games can teach us about maturity models – Choose your own DevOps Adventure

Whether or not to use maturity models is an eternal question that Agile and DevOps coaches struggle with. We all know that maturity models have weaknesses: they can easily be gamed if they are used to incentivise and/or punish people, they are prone to the Dunning-Kruger effect, and they are often vague. On the flip side, maturity models allow you to position yourself, your team or your company against a set of increasingly good practices, and striving for the next level can provide the motivation to push ahead and implement the next improvement.

Clearly there are different reasons behind different kinds of maturity models. For a self-assessment and for setting a roadmap, a traditional maturity model like the Accenture DevOps maturity model does the job. There are many others available on the internet, so feel free to choose the one you like best.

At one of my recent clients we performed many maturity assessments across a wide variety of teams, technologies and applications. Of course such a large scope meant that we did not spend a lot of time with each team to assess maturity, and not surprisingly we got very different levels of response. We heard things like "Of course we do Continuous Integration, we have Jenkins installed and it runs every Saturday". Had this team not mentioned the second part of the sentence, we would probably have ticked the Continuous Integration box on the maturity sheet.

A few months later we were back in the same situation and needed to find a way to help teams self-assess their maturity in an environment where many DevOps concepts are not well known and where different vendors and client teams are involved, which means the actual maturity rating becomes somewhat political. I worried about this for a while, and then one night while playing on my PC, inspiration hit me – I remembered the good old Civilisation game and its technology tree:

[Image: the original Civilisation technology tree]

Now, if I could come up with a technology tree just like this for DevOps, I might be able to use it with teams to document the practices they have in place and what it takes to enable the next practice. Enter the DevOps technology dependency tree (sample below):

[Image: CD Technical Dependencies Tree (sample)]

In this tree, we created a definition and related metrics for each leaf, and each team can now use it to chart where they are and how they are progressing. This way each team chooses their own DevOps adventure. We also marked capabilities that the company needed to provide so that each team could leverage common practices that are strategically aligned (like a common test automation framework or deployment framework). This tree has been hugely successful at this specific client, and we continue to update it whenever we find a better representation or believe new practices should be represented.
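
A tree like this is easy to capture as data so that teams can see which practices they have "unlocked" and what comes next. The snippet below is a tiny, hypothetical slice of such a dependency tree, not the actual tree we built for this client; the practice names and prerequisites are illustrative only.

```python
# A tiny, hypothetical slice of a DevOps dependency tree: each practice lists
# the practices that must already be in place before it becomes achievable.
PREREQUISITES = {
    "version control": [],
    "automated build": ["version control"],
    "unit test automation": ["automated build"],
    "continuous integration": ["automated build", "unit test automation"],
    "automated deployment": ["automated build"],
    "continuous delivery": ["continuous integration", "automated deployment"],
}

def unlockable(practices_in_place):
    """Return the practices a team could adopt next, given what it already does."""
    have = set(practices_in_place)
    return [p for p, deps in PREREQUISITES.items()
            if p not in have and all(d in have for d in deps)]

# Example: a team with version control and an automated build in place
print(unlockable({"version control", "automated build"}))
# -> ['unit test automation', 'automated deployment']
```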

Who would have thought that playing hours of computer games would come in handy one day…

One on Ones or “Get to Know Your Team”

After blogging about advice for individual contributors and performance rating preparation, I will spend the next few months blogging about good practices for supervisors. This is close to my heart, as I fully subscribe to what a former boss once told me: "There is no point complaining about our work culture. Start with your part of the organisation and make it a great place to work. The further you get up the ladder and the more people you influence, the more you can influence the work culture for the better." I think this is a fantastic point of view, and over the next few months I will share with you what I think makes a good supervisor and how certain supervisor behaviours can help make your company a great place to work for their direct reports. Hopefully these posts will reach many supervisors and help them be more effective while improving their team’s performance and satisfaction.

Here is a quick overview of the upcoming blog posts:
1. One on Ones (this post)
2. Feedback (TBD)
3. Coaching (TBD)
4. Delegation (TBD)
5. Principles for success (TBD)
6. Pitfalls for supervisors (TBD)

All these posts are heavily influenced by my friends over at www.manager-tools.com. Go check them out for more details on how to be a great supervisor.

Okay, let’s get into one-on-ones. In my view this is one of the most important practices for a supervisor. Having a good relationship with the people in your team is very important. One-on-ones give you the chance to get to know your team, to learn about them as individuals and to help them get better on a regular basis. This regular touch point also means you will have a better chance of noticing when something is not right, perhaps a problem at home or at work that impacts the individual. Because you speak to them one on one, you will be better able to recognise if their behaviour changes or if they start talking differently about work. Take an interest in the person you are working with; it will pay back many times during your career. People don’t leave companies, people leave supervisors. Reflect on this and understand how important your relationship is.

The purpose of the one-on-one is to get to know your team better and to build strong relationships with the individuals in the team. It is NOT a status meeting. Yes, you will often talk about work; after all, this is what you both have in common. But the one-on-ones you spend investing in the relationship will be the ones that really count. Remember that the focus is on your direct report and what they want to talk about, not on you and what you would like to tell them. This can be difficult at times, and hence the format should start with the individual, which will force you to focus on their concerns first.

The format is 30 minutes every week – 10 minutes for your direct report, 10 minutes for you, and 10 minutes for coaching/feedback or in general looking forward to what is happening next. In reality you will spend most one-on-ones on your direct report’s topics and your own section and won’t get to the coaching/feedback part.

So how do you do it? Schedule a recurring weekly 30-minute meeting with each of your direct reports so that it is always in your calendar. Take notes during the meeting. These notes will come in handy between meetings to follow up on action items, and they also become helpful at performance evaluation time as a record of achievements and the feedback you have given. I sometimes struggle to focus on the non-work aspects, so for me it is a conscious effort to make small talk. I use a feedback form to take notes, which also acts as a reminder in these sessions to focus on the individual and not the work. I also write reminders for myself to focus on listening over talking. It does help! (If you are interested, reach out to me directly and I will send you my form.) Make sure you remember the names of your direct report’s family members and any other personal details they share. The agenda should always start with your direct report being given the chance to talk about whatever they want to talk about. You get the second half of the one-on-one, and if your direct report runs over time, that’s okay. It’s important that you make this time regularly. Don’t cancel it often and don’t move it around too much; failing to make time for these meetings sends a clear message about their priority. Show them that these one-on-ones are important to you.

Frequently asked questions:
Q: What if my direct report just keeps talking and I don’t get the chance to tell him the things I need to tell him?
A: Don’t sweat it, let him have the time. You are his supervisor after all, so you can always grab him another time during the day to talk to him about what you want.

Q: What if my direct report treats the one-on-ones as a status meeting and does not share anything personal?
A: That’s okay, give it some time. Over time he might get more comfortable in these meetings and start becoming more informal. If not, this is still a meeting that improves your relationship and shows that you are there to listen.

Q: What if I am not in the same city as my direct report, or I am travelling intermittently?
A: You can definitely do this over the phone or even better over video conference.

Continue reading more about my advice to improve your performance and the performance of your team:
20 principles for a successful start to a career

Agile Reporting at the enterprise level (Part 2) – Measuring Productivity

Those of you who know me personally know that nothing can get me on my soapbox quicker than a discussion about measuring productivity. Just over the last week I have been asked three times how to measure it in Agile. I was surprised to notice that I had not yet put my thoughts on paper (well, in a blog post). This is well overdue, so here are my thoughts.

Let’s start with the most obvious: productivity measures output, not outcome. The business cares about outcomes first and outputs second; after all, there is no point producing Betamax cassettes more productively than a competitor if everyone buys VHS. Understandably, it is difficult to measure the outcome of software delivery, so we end up talking about productivity. Having swallowed this pill, and being unable to give anything but anecdotal guidance on how to measure outcomes, let’s look at productivity measurements.

How not to do it! The worst possible way I can think of is to measure literally based on output. Think of widgets or Java classes or lines of code. If you measure this output, you are at best not measuring something meaningful and at worst encouraging bad behaviour. Teams that focus on creating an elegant, easy-to-maintain solution with reusable components will look less productive than the ones just copying things or creating new components all the time. This is bad. And think of the introduction of technology patterns like stylesheets: all of a sudden, for a redesign you only have to update a stylesheet and not all 100 web pages. On paper this would look like a huge productivity loss, updating 1 stylesheet instead of 100 pages in a similar timeframe. Innovative productivity improvements will not be accurately reflected by this kind of measure, and teams will not look as hard for innovative approaches given they are measured on something else. Arguably function points are similar, but I have never dealt with them, so I will reserve judgement until I have firsthand experience.

How to make it even worse! Yes, widget- or line-of-code-based measurements are bad, but it can get even worse. If we base measurements on this, we do not incentivise teams to look for reuse or componentisation of code, and we are also in danger of destroying their sense of teamwork by measuring what each team member contributes individually. "How many lines of code have you written today?" I have worked with many teams where the best coder writes very little code, because he is helping everyone else around him. The team is more productive by him doing this than by him writing lots of code himself. He multiplies the team’s strength rather than linearly growing its productivity by doing more himself.

Okay, you might say that this is all well and good, but what should we do? We clearly need some kind of measurement. I completely agree. Here is what I have used and I think this is a decent starting point:

Measure three different things:

  • Delivered Functionality – You can measure either how many user stories or how many story points you deliver. If you are not working in Agile, you can use requirements, use cases or scenarios: anything that actually relates to what the user gets from the system. This is closest to measuring outcome and hence the most appropriate measure. Of course these items come in all different sizes and you’d be hard-pressed to strictly compare two data points, but the trend should be helpful. If you do some normalisation of story points (another great topic for a soapbox), that will give you some comparability.
  • Waste – While it is hard to measure productivity and outcomes, it is quite easy to measure the opposite: waste! Of course you should decide contextually which elements of waste you measure, and I would be careful with composites unless you can translate them into money (e.g. "all the waste adds up to 3M USD", not "we have a waste index of 3.6"). Composites of such diverse elements as defects, manual steps, process delays and handovers are difficult to understand. If you cannot translate them into dollars, just choose two or three main waste factors and measure those. Once they are under control, find the next one to measure and track.
  • Cycle time – This is the metric I consider the most meaningful of all. How long does it take to get a good idea implemented in production? Use the broadest definition you can measure, then break it down into sub-components to understand where your bottlenecks are and optimise those. Many of them will be influenced by the level of automation and lean process optimisation you have in place (see the sketch after this list).
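
As mentioned in the cycle time bullet above, here is a minimal sketch of how cycle time can be derived from work-item timestamps; the item names, dates and field names are made up, and in practice you would pull them automatically from your tracking tool.

```python
from datetime import date
from statistics import median

# Illustrative work items with the dates an idea was accepted and deployed.
items = [
    {"id": "FEAT-1", "accepted": date(2016, 3, 1), "in_production": date(2016, 3, 24)},
    {"id": "FEAT-2", "accepted": date(2016, 3, 7), "in_production": date(2016, 4, 12)},
    {"id": "FEAT-3", "accepted": date(2016, 3, 14), "in_production": date(2016, 3, 30)},
]

cycle_times = [(i["in_production"] - i["accepted"]).days for i in items]
print(f"Median cycle time: {median(cycle_times)} days "
      f"(min {min(cycle_times)}, max {max(cycle_times)})")
# Break the same measurement down per stage (analysis, build, test, deploy)
# to find the bottleneck worth optimising first.
```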

This is by no means perfect. You can game these metrics just like many others and sometimes external factors influence the measurement, but I strongly believe that if you improve on these three measures you will be more productive.

There is one more thing to mention as a caveat. You need to measure exhaustively and in an automated fashion. The more you rely on just a subset of the work and the more you track activities manually, the less accurate these measures will be. This also means you need to measure things that don’t lead to functionality being delivered, like paying down technical debt, analysing new requests for functionality that never gets implemented, or defect triage. There is plenty of opportunity to optimise in this space: paying technical debt down quicker, validating feature requests quicker, and reducing feedback cycles to reduce defect triage times.

For other posts of the Agile reporting series look here: Agile Reporting at the enterprise level – where to look? (Part 1 – Status reporting)

Here is a related TED talk about productivity and the impact of too many rules and metrics by Yves Morieux from BCG

Waterfall or Agile – Reflections on Winston Royce’s original paper

If you are like me, at some stage you learned about the Waterfall methodology. Its origin is often attributed to Winston Royce and his paper "Managing the Development of Large Software Systems". Recently I have heard many people speak about this paper and imply that it has been misunderstood by the general audience: rather than prescribing Waterfall, it was actually recommending an iterative (or shall we call it Agile?) approach. I just had to read it myself to see what was behind these claims.

I think there is some truth to both interpretations. I will highlight four points of interest and provide a summary afterwards:

  • Fundamentals of Software Development – I like the way he starts by saying that fundamentally all value in software delivery comes from a) analysis and b) coding. Everything else (documentation, testing, etc.) is required to manage the process; customers would ideally not pay for those activities and most developers would prefer not to do them. This is such a nice and simple way to describe the problem. It speaks to the developer in me.
  • Problems with Waterfall Delivery – He then goes on to describe how the Waterfall model is fundamentally flawed and how in reality the stage containment is never successful. This picture and its caption are what most Agile folks use as evidence: "Unfortunately, for the process illustrated, the design iterations are never confined to the successive steps." So I think he again identifies the problem correctly based on his experience with delivery at NASA.
  • Importance of Documentation – Now he starts to describe his solution to the waterfall problem in five steps. I will spare you the details, but one important point he raises is documentation. To quote his paper: "How much documentation? My own view is quite a lot, certainly more than most programmers, analysts or program designers are willing to do…" He basically uses documentation to drive the software delivery process and has some elaborate ideas on how to use documentation correctly, a lot of which makes complete sense in a waterfall delivery method.
  • Overall solution – At the end of the paper he provides his updated model, and I have to say it looks quite complicated. To be honest, many of the other delivery frameworks like DAD or SAFe look similarly complicated, so we should not discount it just for that reason. I did not try to fully understand the model, but it is basically a waterfall delivery with a few Agile ideas sprinkled in: early customer involvement, having two iterations of the software to get it right, and a focus on good testing.

Summary – Overall I think Winston identifies the problems and starts to think in an Agile direction (okay, Agile didn’t exist then, but you know what I mean). I think his approach is still closer to the Waterfall methodology we all know, but he is moving in the right direction with iterations and customer involvement. As such, I think his paper is neither the starting point of the Waterfall model nor the starting point of an Agile methodology. A software archaeologist would see it as an in-between model that came before its time.

DevOps in Scaled Agile Models – Which one is best?

I have already written about the importance of DevOps practices (or, for that matter, Agile technical practices) for Agile adoption, and I don’t think many people argue the contrary. Ultimately, you want those two things to go hand in hand to maximise the outcome for your organisation. In this post I want to take a closer look at popular scaling frameworks to see whether these models explicitly or implicitly include DevOps. One could of course argue that the Agile models should focus on just the Agile methodology and its associated processes and practices. However, given that the technical side is often the inhibitor to achieving the benefits of Agile, I think DevOps should be reflected in these models to remind everyone that software is created first and foremost by developers.

Let’s look at a few of the more well-known models:

SAFe (Scaled Agile Framework) – This one is probably the easiest, as DevOps is called out in the big picture. I would, however, consider two aspects of SAFe relevant to the wider discussion: the DevOps team and the System Team. While the DevOps team covers the aspects that have to do with deployment into production and the automation of that process, the System Team focuses more on development-side activities like Continuous Integration and test automation. For me there is a problem here, as it feels a lot like the DevOps team is the Operations team and the System Team is the Build team. I’d rather have them as one System/DevOps team with joint responsibilities. If you consider both of them as just concepts in the model and have them working closely together, then I feel you start getting somewhere. This is how I do it on my projects.

DAD (Disciplined Agile Delivery) – In DAD, DevOps is woven into the fabric of the methodology but not spelled out as nicely as I would like. DAD is a lot more focused on the processes (perhaps an inheritance from RUP, as both are influenced/created by IBM folks). There is, however, a blog post by "creator" Scott Ambler that draws all the elements together. I still feel that a bit more focus on the technical aspects of delivery in the construction phase would have been better. That being said, there are a few good references if you go down to the detailed level. The Integrator role has an explicit responsibility to integrate all aspects of the solution, and the "Produce a Potentially Consumable Solution" and "Improve Quality" processes call out many technical practices related to DevOps.

LeSS (Large-Scale Scrum) – In LeSS, DevOps is not explicitly called out but is well covered under Technical Excellence. Here it talks about all the important practices and principles, and the description of each of them is really good. LeSS has a lot less focus on telling you exactly how to put these practices in place, so it will be up to you to define which team or who in your team should be responsible for them (or, in true Agile fashion, perhaps it is everyone…).

In conclusion, I have to say that I like the idea of combining the explicit structure of SAFe with the principles and ideas of LeSS to create my own meta-framework. I will certainly use both as references going forward.

What do you think? Is it important to reflect DevOps techniques in a Scaled Agile Model? And if so, which one is your favourite representation?

8 DevOps Principles that will Improve Your Speed to Market

I recently got asked about the principles that I follow for DevOps adoptions, so I thought I’d write down my list of principles and what they mean to me:

  • Test Early & Often – This principle is probably the simplest one. Test as early as you can and as often as you can. In my ideal world we find more and more defects closer to the developer by a) running tests more frequently enabled by test automation, b) providing integration points (real or mocked) as early as possible and c) minimising the variables between environments. With all this in place the proportion of defects outside of the development cycle should reduce significantly.
  • Improve Continuously – This principle is the most important and the hardest to follow. No implementation of DevOps practices is ever complete. You will always learn new things about your technology and solution, and without being vigilant about it, deterioration will set in. The corollary is that you need to measure what you care about; otherwise your improvement efforts will be unfocused and you will be unable to tell whether you are actually improving. Measure the metrics that best represent your areas of concern, like cycle time, automation level, defect rates, etc.
  • Automate Everything – Easier said than done, this is the one most people associate with DevOps. It is the automation of all processes required to deliver software to production or any other environment. For me this goes further than that, it means automating your status reporting, integrating the tools in your ecosystem and getting everyone involved to focus on things that computers cannot (yet) do.
  • Cohesive Teams – Too often I have been on projects where silo mentality was the biggest hindrance to progress, be it between delivery partner and client, between development and test teams, or between development and operations. Not working together and not having aligned goals and measures of success will make the technical problems look like child’s play. Focus on getting the teams aligned early in your process so that everyone is moving in the same direction.
  • Deliver Small Increments – Complexity and interdependence are the things that make your cycle time longer than required. The more complex and interdependent a piece of software is, the more difficult it is to test and to identify the root cause of any problem. Look for ways to make the chunks of functionality smaller. This will be difficult in the beginning but the more mature you get, the easier this becomes. It will be a multiplying effect that reduces your time to market further.
  • Outside-In Development – One of the most fascinating pieces of research I have seen is by Walker Royce about the success of different delivery approaches. He shows that delays are often caused by integration defects that are identified too late. They are identified late because an integrated environment is the first place they can be tested. Once you find them, you might have to change the inner logic of your systems to accommodate the required changes to the interface. Now imagine doing it the other way around: you test your interfaces first, and once they are stable you build out the inner workings. This reduces the need for rework significantly. The principle holds for integration between systems as well as for modules within the same application, and there are many tools and patterns that support outside-in development (see the sketch after this list).
  • Experiment Frequently – Last but not least you need to experiment. Try different things, use new tools and patterns and keep learning. It’s the same for your business, by using DevOps you can safely try new things out with your customers (think of A/B testing), but you should do the same internally for your development organisation. Be curious!
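
To make the outside-in idea from the list above concrete, here is a minimal, hypothetical sketch: the interface contract is agreed and tested against a stub first, and the real implementation is built out behind it later. The class and function names are illustrative only.

```python
# Hypothetical outside-in sketch: the interface contract is tested against a stub
# before the real implementation exists, so integration surprises show up early.

class CustomerLookup:
    """The agreed interface between two systems."""
    def get_customer(self, customer_id: str) -> dict:
        raise NotImplementedError

class StubCustomerLookup(CustomerLookup):
    """Stand-in used until the real service is built."""
    def get_customer(self, customer_id: str) -> dict:
        return {"id": customer_id, "name": "Test Customer", "status": "active"}

def contract_test(lookup: CustomerLookup):
    """The consumer's expectations of the interface, runnable against stub or real service."""
    customer = lookup.get_customer("42")
    assert {"id", "name", "status"} <= customer.keys()
    assert customer["id"] == "42"

contract_test(StubCustomerLookup())   # passes today against the stub
# Later, the real implementation must pass the exact same contract test, e.g.:
# contract_test(RealCustomerLookup(connection))
```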

If you follow these 8 principles I am sure you are on the right track to improve your speed to market.