Tag Archives: agile

Sure ways to fail #2 – Not knowing what the goal is

“If you don’t know where you’re going, any road will take you there” – The Cheshire Cat in Alice in Wonderland

This quote from Alice in Wonderland is very insightful and I have quoted it many times when talking about strategies and plans. In this case I want to use it as an illustration of a way an Agile project will surely fail. If you don’t know what the goal line is for a specific release or sprint, you have no means to understand how you are travelling and whether you are on the right track. I have seen many teams post great velocities in their first couple of sprints, but when asked whether they can achieve the release goal they have no idea. This is painful and leads to a lot of anxiety for the team and stakeholders.

How does this happen? I believe it is because in Agile we don’t want to spend too much time planning and estimating too far into the future, and hence we kick off projects really quickly. We do this with just the first one or two sprints’ worth of stories ready for implementation and then keep grooming the backlog. This is great, but if you choose to do so, I think you need to spend a little extra time to give the team a final goal. You don’t need to have all the stories for the release, after all we want to be flexible in Agile, but you should have some idea of the things you need to deliver. These can be stories, themes, epics, features or whatever you choose. At this stage you should do a quick estimation to provide the overall scope for the release and a goal line that the team can use in their burn-up graphs (read more on reporting here). You can then track changes to the goal line if epics require more stories than expected or if any new scope is introduced. This allows the team to have meaningful discussions with the product owner and the stakeholders about required changes to the release. If you don’t have such a goal line, you could get a nasty surprise towards the end of your release, and surprises are not something we want in Agile delivery.
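To make this concrete, here is a minimal sketch of the kind of check a goal line enables. It assumes you track the estimated total scope of the release (the goal line) and the story points completed per sprint; the function name and all numbers are purely illustrative, not from any specific tool.

```python
# Minimal sketch: given a goal line and the velocity so far, project whether
# the team will reach the release goal. All numbers are illustrative.

def release_forecast(goal_line_points, completed_per_sprint, sprints_remaining):
    """Return a simple burn-up style projection for the release."""
    done = sum(completed_per_sprint)
    remaining = max(goal_line_points - done, 0)
    avg_velocity = sum(completed_per_sprint) / len(completed_per_sprint)
    sprints_needed = remaining / avg_velocity if avg_velocity else float("inf")
    return {
        "completed": done,
        "remaining": remaining,
        "average_velocity": round(avg_velocity, 1),
        "sprints_needed": round(sprints_needed, 1),
        "on_track": sprints_needed <= sprints_remaining,
    }

# Example: a 200-point goal line, three sprints done, four sprints left.
print(release_forecast(200, [25, 30, 28], 4))
```

In this made-up example the team posts a respectable velocity but is still not on track, which is exactly the conversation a goal line lets you have with the product owner early rather than at the end of the release.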

The one caveat is a project where the scope is genuinely unknown or undefined. In those cases you can either work in a Kanban style or with an assumed target velocity. If you work in Kanban, you only ever have to worry about the first few items in your backlog: set expectations with your stakeholders on how long those will take, and then do the same for the next set of stories on a regular cadence. This requires a lot of trust between the team, product owner and stakeholders. Alternatively you can set yourself a certain target velocity and work towards that, filling the velocity with stories that are groomed on an ongoing basis.

Agile Reporting at the enterprise level (Part 2) – Measuring Productivity

Those of you who know me personally know that nothing can get me on my soapbox quicker than a discussion on measuring productivity. Just over the last week I have been asked three times how to measure this in Agile. I was surprised to notice that I had not yet put my thoughts on paper (well, in a blog post). This is well overdue, so here I share my thoughts.

Let’s start with the most obvious: productivity measures output and not outcome. The business cares about outcomes first and outputs second; after all, there is no point creating Betamax cassettes more productively than a competitor if everyone buys VHS. Understandably it is difficult to measure the outcome of software delivery, so we end up talking about productivity. Having swallowed this pill and being unable to give more than anecdotal guidance on how to measure outcomes, let’s look at productivity measurements.

How not to do it! The worst possible way that I can think of is to do it literally based on output. Think of widgets or Java classes or lines of code. If you measure this output you are at best not measuring something meaningful and at worst encouraging bad behaviour. Teams that focus on creating an elegant and easy-to-maintain solution with reusable components will look less productive than the ones just copying things or creating new components all the time. This is bad. And think of the introduction of technology patterns like stylesheets: all of a sudden, for a redesign you only have to update a stylesheet and not all 100 web pages. On paper this would look like a huge productivity loss: updating 1 stylesheet instead of 100 pages in a similar timeframe. Innovative productivity improvements will not be accurately reflected by this kind of measure, and teams will not look for innovative ways as much, given they are measured on something different. Arguably function points are similar, but I have never dealt with them so I will reserve judgement until I have firsthand experience.

How to make it even worse! Yes, widget- or line-of-code-based measurements are bad, but it can get even worse. If we base measurements on this, we not only fail to incentivise teams to look for reuse or componentisation of code, we are also in danger of destroying their sense of teamwork by measuring what each team member contributes. “How many lines of code have you written today?” I have worked with many teams where the best coder writes very little code, because he is helping everyone else around him. The team is more productive that way than if he wrote lots of code himself. He multiplies the team’s strength rather than linearly growing its productivity by doing more himself.

Okay, you might say that this is all well and good, but what should we do? We clearly need some kind of measurement. I completely agree. Here is what I have used and I think this is a decent starting point:

Measure three different things:

  • Delivered Functionality – You can do this by measuring how many user stories or story points you deliver. If you are not working in Agile, you can use requirements, use cases or scenarios – anything that actually relates to what the user gets from the system. This is closest to measuring outcome and hence the most appropriate measure. Of course these items come in all different sizes and you’d be hard pressed to strictly compare two data points, but the trend should be helpful. If you do some normalisation of story points (another great topic for a soapbox) then that will give you some comparability.
  • Waste – While it is hard to measure productivity and outcomes, it is quite easy to measure the opposite: waste! Of course you should decide contextually which elements of waste you measure, and I would be careful with composites unless you can translate them to money (e.g. “all the waste adds up to 3M USD”, not “we have a waste index of 3.6”). Composites of such diverse elements as defects, manual steps, process delays and handovers are difficult to understand. If you cannot translate these to dollars, just choose 2 or 3 main waste factors and measure them. Once those are under control, find the next one to measure and track.
  • Cycle time – This is the metric that I consider the most meaningful of all. How long does it take to get a good idea implemented in production? You should use the broadest definition that you can measure and then break it down into sub-components to understand where your bottlenecks are and optimise those. Many of these will be impacted by the levels of automation you have implemented and the level of lean process optimisation you have done. (A small measurement sketch follows below.)
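As a small illustration of the cycle time point, here is a sketch of how you might compute it from work-item timestamps. The field names, IDs and dates are assumptions; map them to whatever your tracking tool actually records.

```python
# Minimal sketch: cycle time from idea (item created) to production deployment.
from datetime import datetime
from statistics import median

ITEMS = [  # illustrative data, e.g. exported from your tracking tool
    {"id": "ST-101", "created": "2015-01-05", "live": "2015-01-21"},
    {"id": "ST-102", "created": "2015-01-07", "live": "2015-02-02"},
    {"id": "ST-103", "created": "2015-01-12", "live": "2015-01-30"},
]

def days_between(start, end, fmt="%Y-%m-%d"):
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

cycle_times = [days_between(i["created"], i["live"]) for i in ITEMS]
print("median cycle time (days):", median(cycle_times))
print("longest cycle time (days):", max(cycle_times))
```

Running the same calculation per stage (analysis, build, test, deploy) is what reveals the bottlenecks worth optimising.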

This is by no means perfect. You can game these metrics just like many others and sometimes external factors influence the measurement, but I strongly believe that if you improve on these three measures you will be more productive.

There is one more thing to mention as a caveat. You need to measure exhaustively and in an automated fashion. The more you rely on just a subset of work and the more you manually track activities, the less accurate these measures will be. This also means that you need to measure things that don’t lead to functionality being delivered, like paying down technical debt, analysing requests for functionality that never gets implemented, or defect triage. There is plenty of opportunity to optimise in this space – paying technical debt down quicker, validating feature requests quicker, reducing feedback cycles to reduce triage times of defects.

For other posts of the Agile reporting series look here: Agile Reporting at the enterprise level – where to look? (Part 1 – Status reporting)

Here is a related TED talk about productivity and the impact of too many rules and metrics by Yves Morieux from BCG

Waterfall or Agile – Reflections on Winston Royce’s original paper

If you are like me, at some stage you learned about the Waterfall methodology. Often the source of the waterfall methodology is attributed to Winston Royce and his paper: “Managing the Development of Large Software Systems”. Recently I have heard many people speak about this paper and imply that it has been misunderstood by the general audience. Rather than prescribing Waterfall it was actually recommending an iterative (or shall we call it Agile?) approach. I just had to read it myself to see what is behind these speculations.

I think there is some truth to both interpretations. I will highlight four points of interest and provide a summary afterwards:

  • Fundamentals of Software Development – I like the way he starts by saying that fundamentally all value in software delivery comes from a) analysis and b) coding. Everything else (documentation, testing, etc.) is required to manage the process; customers would ideally not pay for those activities and most developers would prefer not to do them. This is such a nice and simple way to describe the problem. It speaks to the developer in me.
  • Problems with Waterfall Delivery – He then goes on to describe how the Waterfall model is fundamentally flawed and how in reality the stage containment is never successful. This picture and its caption are what most Agile folks use as evidence: “Unfortunately, for the process illustrated, the design iterations are never confined to the successive steps.” So I think he again identifies the problem correctly, based on his experience with delivery at NASA.
  • Importance of Documentation – Now he starts to describe his solution to the waterfall problem in five steps. I will spare you the details, but one important point he raises is documentation. To quote his paper “How much documentation? My own view is quite a lot, certainly more than most programmers, analysts or program designers are willing to do…” He basically uses documentation to drive the software delivery process and has some elaborate ideas on how to use documentation correctly. A lot of which makes complete sense in a waterfall delivery method.
  • Overall solution – At the end of the paper he provides his updated model and I have to say it looks quite complicated. To be honest, many of the other delivery frameworks like DAD or SAFe look similarly complicated, so we should not discount it just for that reason. I did not try to fully understand the model, but it is basically a waterfall delivery with a few Agile ideas sprinkled in: early customer involvement, having two iterations of the software to get it right, and a focus on good testing.

Summary – Overall I think Winston identifies the problems and starts to think in an Agile direction (okay, Agile didn’t exist then, but you know what I mean). I think his approach is still closer to the Waterfall methodology we all know, but he is going in the right direction of iterations and customer involvement. As such, I think his paper is neither the starting point of the Waterfall model nor the starting point of an Agile methodology. I think a software archaeologist would see this as an in-between model that came before its time.

DevOps in Scaled Agile Models – Which one is best?

I have already written about the importance of DevOps practices (or for that matter Agile technical practices) for Agile adoption, and I don’t think there are many people arguing the contrary. Ultimately, you want those two things to go hand in hand to maximise the outcome for your organisation. In this post I want to have a closer look at popular scaling frameworks to see whether these models explicitly or implicitly include DevOps. One could of course argue that the Agile models should really focus on just the Agile methodology and associated processes and practices. However, given that the technical side is often the inhibitor to achieving the benefits of Agile, I think DevOps should be reflected in these models to remind everyone that software is created first and foremost by developers.

Let’s look at a few of the more well known models:

SAFe (Scaled Agile Framework) – This one is probably the easiest, as DevOps is called out in the big picture. I would consider two aspects of SAFe as relevant for the wider discussion: the DevOps team and the System Team. While the DevOps team covers the aspects that have to do with deployment into production and the automation of that process, the System Team focuses more on development-side activities like continuous integration and test automation. For me there is a problem here, as it feels a lot like the DevOps team is the Operations team and the System Team is the Build team. I’d rather have them in one System/DevOps team with joint responsibilities. If you consider both of them as just concepts in the model and have them working closely together, then I feel you start getting somewhere. This is how I do it on my projects.

DAD (Disciplined Agile Delivery) – In DAD, DevOps is woven into the fabric of the methodology but not as clearly spelled out as I would like. DAD is a lot more focused on the processes (perhaps an inheritance from RUP, as both are influenced/created by IBM folks). There is however a blog post by “creator” Scott Ambler that draws all the elements together. I still feel that a bit more focus on the technical aspects of delivery in the construction phase would have been better. That being said, there are a few good references if you go down to the detailed level. The Integrator role has an explicit responsibility to integrate all aspects of the solution, and the Produce a Potentially Consumable Solution and Improve Quality processes call out many technical practices related to DevOps.

LeSS (Large-Scale Scrum) – In LeSS, DevOps is not explicitly called out but is well covered under Technical Excellence. Here it talks about all the important practices and principles, and the descriptions for each of them are really good. LeSS has a lot less focus on telling you exactly how to put these practices in place, so it will be up to you to define which team or who in your team should be responsible for this (or, in true Agile fashion, perhaps it is everyone…).

In conclusion, I have to say that I like the idea of combining the explicit structure of SAFe with the principles and ideas of LeSS to create my own meta-framework. I will certainly use both as references going forward.

What do you think? Is it important to reflect DevOps techniques in a Scaled Agile Model? And if so, which one is your favourite representation?

Working with SIs in a DevOps/Agile delivery model

In this blog post (another of my DevOps SI series), I want to explore the contractual aspects of working with an SI (System Integrator). At the highest level there are three models that I have come across:

  • Fixed Price contract – Unfortunately these are not very flexible and usually require a level of detail in defining the outcome that is counterproductive for Agile engagements. It does however incentivise the SI to use best practices to reduce the cost of delivery.
  • Time and Material – This is the most flexible model that easily accommodates any scope changes. The challenge for this one is that the SI does not have an incentive to increase the level of automation because each manual step adds to the revenue the SI makes.
  • Gain Share – This is a great model if your SI is willing to share the gains and risks of your business model. While this is the ideal model, often it is not easy to agree on the contribution that the specific application makes to the overall business.

So what is my preferred model? Well, let me start by saying that the contract itself will only ever be one aspect to consider; the overall partnership culture will likely make a bigger impact than the contract itself. I have worked with many different models and have made them work even when they were a hindrance to the Agile delivery approach. However, if I had to define the model I consider most suitable (ceteris paribus – all other things being equal), I would agree on a time and materials contract to keep the flexibility. I would make it mandatory to do joint planning sessions so that both staff movements and the release schedule are handled in true partnership (it does not help if the SI has staffing issues the client is not aware of, or if the client makes any ramp-ups/ramp-downs the SI’s problem). I would agree on two scorecards to evaluate the partnership. One is a Delivery Scorecard, which shows the performance of delivery: are we on track, have we delivered on our promises, is our delivery predictable. The second is an Operational Scorecard, which shows the quality of delivery, the automation levels in place, and the cycle times for key processes in the SDLC.

With these elements I feel that you can have a very fruitful partnership that truly brings together the best of both worlds.

How to support Multi-Speed IT with DevOps and Agile

These days a lot of organisations talk about Multi-Speed IT, so I thought I’d share my thoughts on this. I think the concept has been around for a while, but now there is a nice label to associate with the idea. Let’s start by looking at why Multi-Speed IT is important. The idea is best illustrated by a picture of two interlocking gears of different sizes and a simple example.

The smaller gear moves much faster than the larger one, but where the two gears interlock they remain aligned so as not to stop the motion. But what does this mean in reality? Think about a banking app on your mobile. Your bank might update the app on a weekly basis with new functionality like reporting and/or an improved user interface. That is a reasonably fast release cycle. The mainframe system that sits in the background and provides the mobile app with your account balance and transaction details does not have to change at the same speed. In fact it might only have to provide a new service for the mobile app once every quarter. Nonetheless the changes between those two systems need to align when new functionality is rolled out. However, it doesn’t mean both systems need to release at the same speed. In general, the customer-facing systems are the fast applications (Systems of Engagement, Digital) and the slower ones are the Systems of Record or backend systems. The release cycles should take this into consideration.

So how do you get ready for the Multi-Speed IT Delivery Model?

  • Release Strategy (Agile) – Identify functionality that requires changes in multiple systems and functionality that can be done in isolation. If you follow an Agile approach, you can reserve every n-th release for functionality that must be aligned across systems, while the releases in between deliver isolated changes for the fast-moving applications.
  • Application Architecture – Use versioned interface agreements so that you can decouple the gears (read: applications) temporarily. This means you can release a new version of a backend system or a front-end system without impacting the current functionality of the other. Once the other system catches up, the new functionality becomes available across the system. This allows each system to keep to its individual release schedule, which in turn makes delivery a lot less complex and interdependent. In the picture I used above, think of this as the clutch that temporarily disengages the gears. (See the sketch after this list.)
  • Technical Practices and Tools (DevOps) – If the application architecture decoupling is the clutch, then the technical practices and tools are the grease. This is where DevOps comes into the picture. The whole idea of Multi-Speed IT is to make the delivery of functionality less interdependent; on the flip side, you need to spend more effort on getting the right practices and tools in place to support this. For example, you want to be able to quickly test the different interface versions with automated testing, you need good version control to make sure you have the right components in place for each application, and you want to manage your codeline well through abstractions and branching where required. The basics of configuration management, packaging and deployment become even more important, because you want to reduce the number of variables you have to deal with in your environments. You had better remove the variables introduced through manual steps by having these processes completely automated.
  • Testing strategies – Given that you are now dealing with multiple versions of components in the environment at the same time, you have to rethink your testing strategies. The rules of combinatorics make it very clear that it only takes a few different variables before it becomes unmanageable to test all permutations. So we need testing strategies that focus on valid permutations and risk profiles. After all, functionality that is not yet live requires less testing than functionality that will go live next.
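To illustrate the versioned interface agreement mentioned in the Application Architecture point, here is a minimal sketch. The payload shapes and function names are hypothetical; the point is simply that the back end keeps honouring v1 while v2 rolls out, so the fast-moving front end and the slower back end can release independently.

```python
# Minimal sketch of a versioned interface agreement between a fast mobile
# front end and a slow back end. Payload shapes are illustrative only.

def get_balance_v1(account_id):
    return {"account": account_id, "balance": 1042.50}

def get_balance_v2(account_id):
    # v2 adds the currency field the new mobile release needs;
    # v1 consumers are unaffected.
    return {"account": account_id, "balance": 1042.50, "currency": "AUD"}

HANDLERS = {"v1": get_balance_v1, "v2": get_balance_v2}

def get_balance(account_id, version="v1"):
    """Dispatch by contract version; unknown versions fail loudly so a
    consumer never silently binds to the wrong contract."""
    if version not in HANDLERS:
        raise ValueError(f"Unsupported interface version: {version}")
    return HANDLERS[version](account_id)

print(get_balance("12345", "v1"))  # the current mobile release keeps working
print(get_balance("12345", "v2"))  # the next release consumes the new contract
```

The old contract is only retired once every consumer has caught up, which is what lets the two “gears” disengage temporarily without breaking anything.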

The above points cover the technical aspects but to get there you will also have to solve some of the organisational challenges. Let me just highlight 3 of them here:

  • Partnership with delivery partners – It will be important to choose your partners wisely. Perhaps it helps to think of your partner ecosystem in three categories: Innovators (the ones who work with you in innovative spaces and with new technologies), Workhorses (the ones who support your core business applications that continue to change) and Commodities (the ones who run legacy applications that don’t require much new functionality and attention). It should be clear that you need to treat them differently in regards to contracts and incentives. I will blog later about the best way to incentivise your workhorses, the area in which I see the most challenges.
  • Application Portfolio Management – Of course to find the right partner you first need to understand what your needs are. Look across your application portfolio and determine where your applications sit across the following dimensions: Importance to business, exposure to customers, frequency of change, and volume of change. Based on this you can find the right partner to optimise the outcome for each application.
  • Governance – Last but not least, governance is very important. In a multi-speed IT world you will need flexible governance; one size fits all will not be good enough. You will need lightweight, system-driven governance for your high-speed applications, and you can probably afford a more PowerPoint/Excel-driven manual governance for your slower-changing applications. If you can run status reports straight from live systems (like Jira, RTC or TFS) for your fast applications, you are another step closer to mastering the multi-speed IT world.
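As a small illustration of system-driven governance, here is a sketch that pulls a sprint status snapshot straight from Jira rather than compiling slides by hand. The URL, project key and credentials are placeholders; the /rest/api/2/search endpoint and the JQL are standard Jira REST, but check the API version and authentication model of your own instance.

```python
# Minimal sketch: a live status snapshot from Jira for lightweight governance.
import requests

JIRA_URL = "https://jira.example.com"                    # placeholder instance
JQL = 'project = "MOBILE" AND sprint in openSprints()'   # placeholder query

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": JQL, "fields": "status", "maxResults": 200},
    auth=("report.bot", "app-password"),                 # placeholder credentials
    timeout=30,
)
resp.raise_for_status()

counts = {}
for issue in resp.json()["issues"]:
    name = issue["fields"]["status"]["name"]
    counts[name] = counts.get(name, 0) + 1

print("Sprint status snapshot:", counts)
```

A report like this can be generated on demand, which is exactly the difference between light-weight governance for fast applications and manually maintained status decks for slow ones.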

Manifesto-Mania – thoughts on the Half-Arsed Agile and the Anti-Agile manifestos

I came across the following two websites recently:

http://www.halfarsedagilemanifesto.org/

http://antiagilemanifesto.com/

And as is so often the case, when something makes you smile it is because it hits too close to home. So I wanted to provide my perspective on these.

Let’s start with the Anti-Agile manifesto (http://antiagilemanifesto.com/) – this one plainly ignores the fact that Agile operates in a different cadence and requires a different structure to achieve its outcome. However, many organisations do follow some of the points mentioned in this manifesto, where stand-ups turn into status meetings and stories are full use cases. Clearly someone who had to suffer through a bad Agile project put pen to paper here. Take this as a sniff test of whether your Agile project is really agile: it is not if this manifesto rings true for you.

The Half-Arsed manifesto (http://www.halfarsedagilemanifesto.org/) is different, as it describes the challenges that many project teams are exposed to. It’s the tension between local optimisation of teams and organisation-wide requirements. Let’s explore this in a bit more detail.

We have heard about new ways of developing software by
paying consultants and reading Gartner reports. Through
this we have been told to value:

Point taken – the Agile space is full of consultants, but hey so is every other space in IT. Let’s move to the next:

Individuals and interactions over processes and tools
and we have mandatory processes and tools to control how those
individuals (we prefer the term ‘resources’) interact

It is important to acknowledge that, unfortunately, in large organisations you do have mandatory tools and processes. Some of these you should challenge as an Agile team to get a better outcome (e.g. being allowed to use walls to post things even though it might not be “pretty”). Others are required for the sake of the organisation and should be respected. Imagine you are a senior product owner and each team is using a different tool to manage their sprints. You get pictures, Excel sheets, links, etc. from your teams to understand their progress. Remember, this stakeholder is actually paying for your team and is making decisions about the direction of the company; don’t you want him to spend time doing that rather than digging through lots of different data sources? The term resources, though, is terrible – I don’t think we should call people resources, full stop.

Working software over comprehensive documentation
as long as that software is comprehensively documented

I think this is nitpicking. It really comes down to what comprehensively means. In my view Agile teams need to create all the documentation that is required to use and maintain the software. What Agile projects don’t require is transition documentation, which exists only to hand work over to someone else (e.g. detailed technical designs); it provides no value if the team sits together and/or talks every day.

Customer collaboration over contract negotiation
within the boundaries of strict contracts, of course, and subject to rigorous change control

Oh dear – this one clearly speaks to me. As someone working for a systems integrator I do depend on contracts and change control. I always aim to find a better solution, but there are commercial realities and this point is simply true. Let’s find the most creative way to solve our problem while still having the required commercial framework in place.

Responding to change over following a plan
provided a detailed plan is in place to respond to the change, and it is followed precisely

Honestly, I am struggling with this one. Isn’t all of SCRUM based on a plan for how to manage change? And isn’t SCRUM about rigorously following the few mandatory ceremonies (at least until you are experienced enough to know why you are breaking the rules)?

That is, while the items on the left sound nice
in theory, we’re an enterprise company, and there’s
no way we’re letting go of the items on the right.

Yes – I am working in the enterprise world. And while everything here is a bit harder, I enjoy the challenge and bit by bit we can hopefully change the environment together. If we only accept the purest of Agile forms, we are making the perfect the enemy of the better. Don’t tell people that they are not Agile, help them get a bit better and a bit more agile.

Can you do Agile development with Packaged Software?

I have heard this question many times and only recently realised that the person asking the question and I, hearing it, had different understandings of what the question means. Let me try to explain. The question posed by this blog post can be interpreted in two ways:

  1. If you use packaged software like SAP or Siebel, can you use an Agile methodology?
  2. If you want to be Agile, can you use packaged software?

I know the difference is subtle, but the impact on the understanding and the response is remarkable. I had this realisation when talking to a colleague of mine who is delivering an Agile training course with me next week. My response to both questions would be yes; the difference being that to the first question I would say “YES”, while to the second question I would say “yes, but…” It is easiest to see the difference if you consider the alternatives in question: in the first question the alternatives are Agile or Waterfall, in the second question the alternatives are Packaged Software or Custom Software.

 

Let me explain – if you want to be agile, there are a couple of things you want to achieve, so let’s look at some of them when considering packaged software:

  • Faster time to market – my experience tells me that in many cases you will not be as fast with packaged software as you would be with a custom solution (the sheer size of the software alone is often significantly larger)
  • Decoupling of dependencies – this is much harder in packaged solutions
  • Use a lot of automation and DevOps practices – this is a little bit trickier with packaged software
  • Want a first release quickly – if you already know which product to choose you might be able to, but usually packaged software requires more requirements analysis and fit-gap analysis up front to choose the right product

So if you want to be really Agile and use DevOps practices and autonomous, self-directed teams with few dependencies, then packaged software might be harder to use, hence the “yes, but…” You should ideally choose a custom solution or perhaps a SaaS solution in that case (but be careful, many SaaS products are not DevOps-friendly either).

Now let’s look at the other question: assume you already use Siebel or SAP, can you use an Agile methodology? Here the answer is a loud and clear “YES”. Think about all the good things about Waterfall – they all still exist in Agile, just better, if you do it right. You have the rigour of change control (just for a shorter period – the iteration), you have extensive testing (just automated and within the iteration), and you have as much or as little time for analysis as you require in the discovery phase. You will likely not see the same results in regards to time to market as you would with custom software, but I bet you get better results than if you do Waterfall with packaged software.

So yes – I think I (and many others) have responded to the wrong question.

To give you an idea about my experience: I have supported Agile projects for Siebel at many different levels of scale and usually with 2-3 week iterations. I have implemented some of the core practices of DevOps around configuration management and automatic deployments for Siebel and we are currently in the process of implementing automated testing as well (hopefully including TDD down the road). It is absolutely possible and I think most of my colleagues would agree that using Agile is preferred over Waterfall for Siebel if you do it right (perhaps packaged software is a little bit less forgiving than custom if you don’t do it right – but that’s for another discussion). Could we be faster and better if we could use a custom solution? – absolutely. But that is a different question now, isn’t it? 😉

Picture: Question Mark by Marco Bellucci
Licence: Creative Commons

Distributed Agile – An oxymoron?

It is time for me to put on paper (well not really paper…but you know what I mean) my thoughts on distributed Agile. I have worked with both distributed and co-located Agile. Distributed Agile is a reality, but there are a lot of myths surrounding it. I had some queries over the last few months where people were trying to compare co-located teams with distributed teams.

Let’s start by talking about one of the things that is brought up again and again: “Agile works best with a small number of clever people in a room together”. Now I agree that this will be one of the best performing teams, but I would argue that in that situation you don’t need a methodology at all and the team will still perform very well. The power of Agile is the rigour it brings to your delivery when you either don’t have very experienced people in the team or when you are a distributed team.

Now why would you choose a distributed Agile model?

  • Scaling up. It can be very difficult to quickly find enough people in your location
  • Access to skills. It’s also difficult to find people with the right skills.
  • Follow-the-sun development. By working in different regions of the world, you can work around the clock which means you can bring functionality to market quicker.
  • You are already distributed. Well if your teams are already in different locations, you don’t really have a choice, do you?
  • You DO NOT choose distributed Agile because it is inherently cheaper or better. That is a misconception.

The goal of distributed Agile is to get as close as possible to the performance the same team would be capable of if they were co-located.

All things being equal, I don’t believe a distributed team will ever outperform a co-located team. But then, all things are never equal. The best distributed teams perform better than 80% of co-located teams (see graph on the left). So it is important in both co-located and distributed Agile to get a well-working team together. Improving the performance of your team is more important than the location of its members.

There are however factors that help you achieve an experience close to co-location.

  • Invest in everything that has to do with communication: webcams, instant messengers, video conferencing,… And really make this one count: spending 10 minutes each time just to establish a connection costs a lot of productive time over a sprint.
  • Physical workspace – where team members are co-located they need to have the right physical environment and shouldn’t sit among hundreds of other people who disturb them
  • Invest in virtual tooling like wikis, discussion boards etc.
  • Find ways to get to know each other. This happens naturally for co-located teams, but requires effort for distributed teams. Spend 10 minutes in each sprint review introducing a team member, or create virtual challenges or social events in Second Life or World of Warcraft.
  • Don’t save a few dollars on travel. Get key stakeholders or team members to travel to the other location so that, at least for a short period of time, you can enjoy the richness of communication that comes with co-location.
  • Agree on virtual etiquette – what should each team member do on calls or in forums. Retrospectives and Sprint reviews require some additional thought to really hear from everyone.

If you do all that you have a team that operates nearly as if co-located. And if you really want to push the performance of your team further then I have an answer as well:

  • Look at your engineering practices
  • Look at your tools
  • Implement DevOps/Continuous Delivery/Continuous Integration

That will really push your productivity, much more than any methodology or location choice could possibly do.

Impressions from the DevOps Enterprise Summit 2014

I spent the last 3 days at the DevOps Enterprise Summit here in San Francisco and wanted to share my thoughts with those who couldn’t come over. Overall it was a great conference, especially if you consider that this was the first time it was organised. A few glitches, but that just made it more likeable. And I am sure it will be even better next year. I also have to admit that I hope not to hear about horses and unicorns for a few days…

So what did I take away from the conference? Here are a few of the themes that were pretty common through the 3 days:

  • DevOps Teams – While there were certainly exceptions (most notably Barclays), most organisations that spoke seemed to have a dedicated shared-services DevOps team to focus on the tooling, governance and support of their DevOps platform. This is certainly my preferred approach as well, and it was good to see that many organisations have had positive experiences with it. But it was also good to hear positive stories from organisations that have chosen a more federated approach and to learn how they made it work.
  • DevOpsSec – While obvious in hindsight, the frequent mention of information security as a critical element in the DevOps journey really brought this home for me. As some of the speakers highlighted, the ability to automate compliance with regulations and policies is so powerful that information security can actually be your ally in the DevOps journey and not a blocker. A great change of perspective for me personally.
  • Balance of Culture and Technical Practices – Not surprisingly, a lot was said about culture change and also about technical practices for DevOps. I think this balance is important and good for us to keep in mind in our day-to-day, as we sometimes get too focused on only one side of the equation.
  • Internal Conferences – So many companies use internal conferences to spread the word and share experiences across the organisation. This is fantastic to hear, and I am glad they are able to get support for it; as I know from experience, it can be hard to make a quantifiable business case for these.
  • Servers are cattle, not pets – A lot was said about the importance of having environments that are commodity and consistent, so that you can replace servers easily and reliably. Quite a few of the tool vendors came from the server monitoring and configuration-drift-detection space as well. This clearly deserves more focus going forward.
  • Tooling – A completely non-scientific impression is that certain tools are much more prolific than others in the DevOps toolkit; examples are Jenkins, the Atlassian tools and Git.
  • Measuring everything – Not really a new thought, but interesting to see how many of the organisations had good data to support their story. So important to get this right and use it to drill down on bottlenecks and cost sources.
  • Scaled Agile Framework – SAFe got a lot of positive mentions from the speakers and seems to be widely adopted at large enterprises.
  • A few smaller takeaways:
    • Impact Score of Releases – I like the idea of measuring the impact of releases as the sum of (number of defects x severity). Brilliant. (A small worked example follows after this list.)
    • Inverse Taylor Manoeuvre – Such a good name for self-enabled teams
    • Inverse Conway Manoeuvre – A great name for addressing the architectural challenges that many of us face with existing architecture
    • Release notes as blog – Such a good idea to not send notes around but rather use a blog to document all release changes
    • Sprint Plan review meeting – A meeting after the sprint plan to get all relevant stakeholder across the plan (like Ops, InfoSec, Business). Great idea to test.
  • Favourite Quotes:
    • “Branches are evil”
    • “There is a right way to develop software (and DevOps is it)”
    • “We geeks don’t just like Skynet – we want to build it”
    • “Cease dependence on mass inspection to achieve quality”
    • “Just talking nicely to each other does not deliver software”
    • “Time does not make software better”
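Here is the small worked example of the release impact score promised above: the sum of (number of defects x severity) per release. The severity weights and defect counts are illustrative assumptions, not data from the conference.

```python
# Minimal sketch of a release impact score: sum of (defect count x severity).

def impact_score(defects_by_severity):
    """defects_by_severity maps a numeric severity (1 = low, 4 = critical)
    to the number of defects attributed to the release."""
    return sum(severity * count for severity, count in defects_by_severity.items())

release_a = {4: 1, 3: 2, 2: 5, 1: 10}  # one critical, two high, five medium, ten low
release_b = {4: 0, 3: 1, 2: 3, 1: 4}

print("Release A impact:", impact_score(release_a))  # 4 + 6 + 10 + 10 = 30
print("Release B impact:", impact_score(release_b))  # 0 + 3 + 6 + 4 = 13
```

Tracked release over release, a falling score is a simple signal that quality is improving even while delivery speed goes up.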

A few things could be improved going forward in my opinion:

  • Many of the enterprise-scale organisations were talking about their web presence or digital space, and only a few talked about the DevOps transformation for their Systems of Record. A better balance would be nice, so that we not only hear the positive stories but also learn from the really difficult cases
  • One aspect that was only mentioned as a sidenote and by 2 or 3 speakers is the reality of working with many different vendors and systems integrators. How do you enable this multi-party setup for DevOps practices? Having been on both sides of that story, perhaps I should share my experiences next year…
  • The sessions were pretty much back-to-back and there was little time for Q&A and informal questions. Perhaps a short break between sessions or a more formal way to socialise with the speaker right after the session would be good. I have seen this work very well at other conferences.

And last but not least, a shout-out to some of the outstanding speakers from the conference. If you get a chance, check out the recordings later in the week when they are available on the conference website at http://devopsenterprisesummit.com.
– Gary Gruver
– Em Campbell-Pretty
– Jason Cox
– Mark Schwarz
– Owen Gardner
– Carmen DeArdo
– to highlight just a few, there were many more that are worth listening to if you have the time