Category Archives: Uncategorized

Impressions from DevOps Enterprise Summit 2015

The theme of the conference was nicely summarised by Jody Mulkey from Ticketmaster already on the morning of Day 1: “DevOps is not a technology problem”. Most talks covered the cultural aspects of DevOps, what good looks like and what it takes to transform organisational culture. Some of my takeaways to try at home to shift culture are:

  • Blameless Culture
  • Value Stream mapping
  • Minimal Viable Process
  • “Red Tape Removal teams”
  • Internal Conferences/Hack Days and other avenues for informal learning
  • ChatOps as communication channel

The other topic that came up again and again was metrics. Everyone is looking for ways to measure progress and justify improvement investment. This certainly validated the efforts of a working group before the conference, which put together a paper on measuring DevOps that you can access here: http://devopsenterprise.io/media/DOES_forum_metrics_102015.pdf. I was part of the team and have to thank everyone involved for a great experience and the amazing support from the team at IT Revolution.

As part of the Unconference there was a very lively discussion about metrics. I think there was some level of consensus about a few points:

  • There is not one measure to rule them all, you need multiple dimensions for steerage
  • The metrics in focus shift over time
  • You should use the scientific method and drive improvements through hypothesis and validation

There was consensus that DevOps is not a goal – “are we there yet?” is not a valid question. Jason Cox summarised it nicely in his talk: “Keep moving forward with curiosity”. That is the spirit of the conference and the overall DevOps movement.

Technology-wise, I think the practice/tool that stuck with me most was ChatOps. It seems to have broken into many organisations and is a tool to change culture and the way collaboration works.

Some highlight talks for me were – and there were so many more (!):

Jason Cox – Disney: Okay, as a geek it would always be hard to trump a talk that is full of Star Wars references and shows the latest trailer. But besides this, he is just great at giving insights into the Disney org and how to leverage the unique company culture for the DevOps movement.

Adrian Cockcroft – Battery Ventures: A fantastic talk about systems thinking applied to organisations. Fascinating insights into the cultures of companies like Netflix. So many ideas in the talk that it requires hearing it a few times to catch it all.

Damon Edwards – DTO Solutions: This talk brought together everything we heard throughout the conference. The summary slide shows the learning organisation, the measures, and the idea of a journey rather than a goal to reach.

Jez Humble: This talk offered one of the most interesting metrics for DevOps progress and success: the number of man-hours required outside of business hours for a production deployment – the goal needs to be 0. Overall a great talk about the architecture concerns for DevOps. It’s an unsung truth that architecture plays a huge role in DevOps and is difficult to change.

Steve Spear – High Velocity Edge: About leading a learning organisation. He demonstrated very well with examples that leaders have to have the courage to lead from the front when it comes to learning, and to acknowledge that they don’t know everything. The other takeaway was the rigorous adoption of the scientific method in all aspects of the organisation; the example of Toyota showcased how such an organisation destroys the competition.

Josh Corman & John Willis: An insightful talk about vulnerabilities in software and the importance of taking this seriously – not just to save money, but to save lives!

Topo Pal – Capital One: Topo’s point of Capital One contributing their tools to open source for the community was a powerful statement. We can all benefit from being more open and sharing more with the community. Thank you Topo.

Rosalind Radcliffe – IBM: Educating us on what is possible with the Mainframe. One of the few talks about “legacy”, and really insightful in showing that modern delivery methods are absolutely possible on the mainframe.

Mark Rendell – Accenture: Talking about the role platform applications play in solving the organisational structure challenge of scaling DevOps.

What did I get the most value out of, you wonder?

100% the discussions with people in the hallway, with fellow speakers, with all the people who are on the journey with me. Looking forward to making the next steps together and sharing our stories again next year.

Favourite Quotes:

  • “Motion != Work” Jason Cox & Jez Humble
  • “We can’t copy Netflix because it has all those superstar engineers, we don’t have the people” – Fortune 100 CTO. “We hired them from you, and got out of their way” – Adrian Cockcroft
  • “You are either building a learning organization… or you are losing to someone who is” – Andrew Clay Shafer, requoted in the USPTO session
  • “We are open sourcing our tools because it is the right thing to do.” Topo Pal, Capital One
  • “Buoys, not boundaries.” Ralph Loura, HP Enterprise
  • “Too much planning means you miss out on opportunities to learn and adjust.” Gary Gruver, co-author, “Leading the Transformation”
  • “We have horrible hygiene in our software supply chain” – Josh Corman, Sonatype
  • “I am in this (DevOps) to save lives” – Josh Corman, Sonatype
  • “We had Schroedinger releases – until it was in production we didn’t know whether it was dead or not” – Elisabeth Hendrickson
  • “Project based teams accrue technical debt, product or platform based teams pay it down” – Adrian Cockcroft

What I want to see more of next year:

  • People talking about managing vendors and systems integrators
  • Experiences with learning cultures and how the experiments went
  • Even more discussion on bridging the chasm to the organisational majority

Cultural references/Running Gags:

  • Lots of Star Wars references – aren’t we all looking forward to Christmas? How fitting that we had Jason speak again.
  • Volkswagen gags were also popular

DevOps for Systems of Record – A new hope (Preview of DOES talk)

(This post first appeared on DevOps.com – Link)

I am speaking at the DevOps Enterprise Summit later this week and thought I’d give people a teaser of what I am going to talk about, so that you know what to expect. I am looking forward to sharing my story – a story of adversity but also of hope – with many of you. (A recording of it is available here.)

So we have all heard the DevOps and Continuous Delivery stories from our favourite web companies (Google, Netflix, …). It is much harder to come across successful case studies with enterprise-grade Systems of Record. While I cannot provide the 50-times-a-day deployments for SAP (yet ;-)), I wanted to share my story so far and what I have learned on the journey (both solved and unsolved issues).

Hopefully this contribution to the discussion helps some of those out there who are looking for a first step. But why should we even deal with COTS and other Systems of Record? Ultimately for me this comes back to the two-gear analogy that I keep using. If you can just ditch your “legacy” applications completely, then congratulations – you don’t have to deal with this and can probably stop reading this blog post. For all of you out there who cannot do that, you will be confronted with the reality that even though your digital and custom applications can deliver at amazing speed, you are still somehow constrained by your “legacy” applications. Hence speeding those up will help you achieve the ultimate terminal velocity of your delivery organisation. So how do I approach Systems of Record and their DevOps adoption?

3 Key steps:
1. Look under the hood to find the ingredients
2. Recreate the IKEA manual
3. Understand the path to production


  1. Initial Discovery – Let’s open the hood

Ultimately the two things that I am always looking for are the moving parts of the application and the processes to build and deploy the application. Sounds easy, right? Well, it often isn’t, especially as COTS applications try to abstract those processes away. Look at SAP, where you make configuration changes in one environment that then somehow get transported to other environments, but you have little visibility into the underlying process (OK – my SAP experience is 10 years old, so it might have changed, but you get the gist).

Why do we look for moving parts? Well, we want to use common software engineering practices as much as we can, and don’t necessarily want to trust the built-in features for version control, for example. So we look for ways to find the source code files, or ways to extract the relevant information in text format (well, SQL, XML, etc. count as text in my world view). This is sometimes relatively easy (Siebel .sif files are just a type of XML, for example) and sometimes pretty difficult or impossible. These underlying files are what we store in source control to make sure we understand the configuration of the application at each point in time.

Once you have found the source code, you will have 3 things to do:
a) Get it out of the proprietary solution
b) Integrate it with your IDE
c) Solve for Merges
(I will explore this further in another blog post later)
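To make the version-control idea concrete, here is a minimal Python sketch – the export format is made up, loosely modelled on a Siebel-style .sif/XML file – of normalising an extracted export into a stable text form before committing it, with sorted attributes so diffs stay readable:

```python
import xml.etree.ElementTree as ET

def normalise_export(xml_text: str) -> str:
    """Parse a proprietary XML export (hypothetical .sif-style file) and
    re-serialise it with sorted attributes so that diffs in source
    control are stable and reviewable."""
    root = ET.fromstring(xml_text)
    lines = []

    def walk(elem, depth=0):
        # Sort attributes so an unchanged object always serialises the same way.
        attrs = " ".join(f'{k}="{v}"' for k, v in sorted(elem.attrib.items()))
        lines.append("  " * depth + f"<{elem.tag}{(' ' + attrs) if attrs else ''}>")
        for child in elem:
            walk(child, depth + 1)

    walk(root)
    return "\n".join(lines)
```

A file produced this way can simply be committed after every extraction; the version history then shows exactly which configuration objects changed between releases.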

Part 2. Recreate the IKEA manual
Once you have all the moving parts, you need to understand how it all fits together. Custom applications usually require explicit instructions, which makes it much easier to define and automate the instruction set. For Systems of Record, packaging and deployment is often done through UIs, and there is usually more than one way of doing it. Experienced operators have found their own shortcuts, and the reliability of the system depends on having the right guy in the operating centre (I have some stories to tell about not having the right guy on go-live weekend).

Your goal needs to be an “IKEA manual” that creates the system from the moving parts we identified earlier, without requiring complex interactions or decisions from human operators. Ideally all instructions work from the command line or some programming API, but when all else fails, a UI driver like Selenium can solve the really tricky parts of creating and automating the manual. Every black-box step should be drilled open so that you understand what is happening within the system and the variations that could cause you trouble down the path.
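As an illustration of what such an “IKEA manual” can look like in code, here is a small sketch – the step names and commands are purely hypothetical – where every step is explicit, ordered and fails fast instead of relying on an operator’s shortcuts:

```python
import subprocess

# Hypothetical "IKEA manual": every step is explicit, ordered and
# repeatable -- no tribal knowledge, no optional branches.
MANUAL = [
    ("extract repository", ["echo", "export-repo"]),
    ("compile objects",    ["echo", "compile-srf"]),
    ("deploy artefact",    ["echo", "deploy"]),
]

def run_manual(steps, runner=subprocess.run):
    """Execute every step in order; check=True fails fast so a broken
    step is visible immediately rather than surfacing later in the
    environment."""
    completed = []
    for name, cmd in steps:
        runner(cmd, capture_output=True, text=True, check=True)
        completed.append(name)
    return completed
```

The point is not the (trivial) commands but the shape: once the manual is data, the same sequence runs identically on every environment and on every go-live weekend.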

Part 3. Understand the path to production
Speaking of paths… Of course for Systems of Record the same rules apply as for any other system. You should practice the exact things that you will do to establish the system in production in all previous environments, and minimise the difference between configurations. The challenge for Systems of Record is often the cost associated with production-like environments. So you might have to look for creative ways to minimise the difference, and to parameterise configurations so that you can at least automate for the differences.
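A minimal sketch of what parameterising for the differences can look like – the environment names and settings below are invented for illustration – with one deployment definition where only the declared variables differ per environment:

```python
from string import Template

# One deployment definition; only the declared variables differ per
# environment, everything else stays identical.
DEPLOY_TEMPLATE = Template(
    "db_host=$db_host\nthreads=$threads\napp_version=$version"
)

ENVIRONMENTS = {
    "test": {"db_host": "test-db01", "threads": "2",  "version": "1.4.2"},
    "prod": {"db_host": "prod-db01", "threads": "16", "version": "1.4.2"},
}

def render_config(env: str) -> str:
    """substitute() raises if a variable is missing, so an incomplete
    environment definition fails at render time, not at go-live."""
    return DEPLOY_TEMPLATE.substitute(ENVIRONMENTS[env])
```

With the differences confined to a small, declared set of variables, the rest of the path to production can be practiced identically in every environment.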

So is all good in Mirco’s land of Systems of Record?
Unfortunately not; there are still many challenges that I am working on. The progress we have made so far allowed for a step change in velocity and reliability, but there are still areas where I want to find better solutions:

  • Unit test automation
  • Unsupported activities or no API
  • Performance
  • Configuration Management skills – the curse of the configurators
  • Common Objects
  • Operations – the last mile…

I will speak in much more detail about my experiences at the DevOps Enterprise Summit – if you are attending, come say hi. I am always happy to show you the scars of working with Systems of Record and share war stories.

And for two “legacy” technology stacks that I have done extensive work on (Siebel and Mainframe) I promise to write more in-depth blog posts in the future.

The Testing Centre of Excellence is dead, Long Live the Centre of Quality Engineering

“Big, centralised Test Centres of Excellence are a thing of the past” – Forrester 2013

Agile clearly took the IT industry by storm. In my early days of trying to bring Agile to projects a few years back, I had to explain what Agile even means. Nowadays pretty much all my clients are using some kind of Agile somewhere in their organisation. Now we see the challenges of how to support delivery in an Agile fashion and how to optimise for speed to market. One common obstacle is the duration and effort of testing. Too many organisations are still relying on large-scale manual inspection instead of an optimised and automated approach to testing. Of course, in most cases not everything can be automated, but it is far more common to see too little automation than too much.

“Quality comes not from inspection, but from improvement of the production process.“ – Dr W. Edward Deming

This is the shift that is required to really support speed to market: quality needs to be built into the development process from the beginning, and every opportunity for early feedback should be used to avoid late detection through manual inspection. There is plenty of good evidence showing that the temporal delay in finding defects is very costly. People don’t necessarily remember what exactly they were thinking when they made a specific change a few weeks ago. Or worse, the fix might be given to someone completely different, who first has to understand the context of the piece of code he is trying to fix.

For many organisations this presents a challenge, as most testing is organised centrally from a Testing Centre of Excellence. This needs to change.

Rather than having a large group of centrally organised testers, we need people with a quality focus embedded in our Agile teams. The manual tester who just executes test scripts is mostly a thing of the past. The new roles require people who understand the business context and can come up with test strategies that minimise the risk in our applications. These test experts just think differently to other people and hence have a very specific mindset that we need.

“Good testers are the kind of people who look both ways before crossing a one-way street”

Besides creating test strategies, they will also look after exploratory testing, which finds things outside the usual boundaries. And then we need the test automation developers, who should also be embedded in the Agile teams. This decentralising of quality experts does not mean there is no need for a central function. Test automation requires a technical framework that should be owned by the central function. Additionally, the central function should make sure that lessons learned and other experiences are shared, and that the professions of automation developer, test strategist and exploratory tester continue to evolve. Providing networking opportunities, knowledge-sharing sessions and more formal guidance on how to embed quality as early as possible in the development process should be the new focus. This also means collaborating with the development and engineering teams on coding standards, on the way static code analysis is used, and on what can be covered by automated unit testing. On the most basic level it is a shift from testing to quality engineering.
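To make the “build quality in early” point concrete, here is a tiny, entirely hypothetical example of the kind of unit-level check that runs on every commit in seconds, catching a regression minutes after the change rather than weeks later in manual inspection:

```python
def valid_iban_length(iban: str) -> bool:
    """Hypothetical business rule: a UK IBAN is exactly 22 characters
    once formatting spaces are removed."""
    return len(iban.replace(" ", "")) == 22

# A unit-level check like this is cheap to write and runs on every
# commit -- the feedback arrives while the developer still remembers
# exactly what they changed.
def test_valid_iban_length():
    assert valid_iban_length("GB29 NWBK 6016 1331 9268 19")
    assert not valid_iban_length("GB29")
```

Thousands of checks of this granularity, run automatically, are what replaces the late, large-scale manual inspection phase.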

The nature of the central function will shift from a Testing Centre of Excellence to a Centre of Quality Engineering. Will this be a difficult transformation? Yes it will, and I will leave you with this thought that all TCOE practitioners (and testers for that matter) should take to heart:

“It is not necessary to change. Survival is not mandatory.“ – Dr. W. Edward Deming

What computer games can teach us about maturity models – Choose your own DevOps Adventure

To use maturity models or not is an eternal question that Agile and DevOps coaches struggle with. We all know that maturity models have weaknesses: they can easily be gamed if they are used to incentivise and/or punish people, they are very prone to the Dunning-Kruger effect, and they are often vague. On the flipside, maturity models allow you to position yourself, your team or your company across a set of increasingly good practices, and striving for the next level can be the required motivation to push ahead and implement the next improvement.

Clearly there are different reasons behind different kinds of maturity models. For a self-assessment and for setting a roadmap, a traditional maturity model like the Accenture DevOps maturity model does the job. There are many others available on the internet, so feel free to choose the one you like best.

At one of my recent clients we performed many maturity assessments across a wide variety of teams, technologies and applications. Of course, such a large scope means we did not spend a lot of time with each team to assess maturity, and not surprisingly we got very different levels of response. We heard things like “Of course we do Continuous Integration, we have Jenkins installed and it runs every Saturday” – had this team not mentioned the second part of the sentence, we would probably have ticked the Continuous Integration box on the maturity sheet.

A few months later we were back in the same situation and needed to find a way to help teams self-assess their maturity in an environment where a lot of DevOps concepts are not well known and different vendors and client teams are involved which means the actual maturity rating becomes somewhat political. I was worrying about this for a while and then one night while playing on my PC, inspiration hit me – I remembered the good old Civilisation game and the technology tree:


Now if I could come up with a technology tree just like this for DevOps I might be able to use this with teams to document the practices they have in place and what it takes to get the next practice enabled. Enter the DevOps technology dependency tree (sample below):

CD Technical Dependencies Tree

In this tree we created a definition and related metrics for each leaf, and now each team can go off and use the tree to chart where they are and how they progress. This way each team chooses its own DevOps adventure. We also marked capabilities that the company needed to provide, so that each team could leverage common practices that are strategically aligned (like a common test automation framework or deployment framework). This tree has been hugely successful at this specific client, and we continue to update it whenever we find a better representation or believe new practices should be represented.
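For illustration, a dependency tree like this can be captured as simple data – the practices and prerequisites below are just a made-up slice of such a tree – so each team can check which practice is “unlockable” next, exactly like a game tech tree:

```python
# A tiny, hypothetical slice of a DevOps technology tree: each practice
# lists the practices it depends on.
TECH_TREE = {
    "version control": [],
    "automated build": ["version control"],
    "unit test automation": ["version control"],
    "continuous integration": ["automated build", "unit test automation"],
}

def unlockable(practice: str, achieved: set) -> bool:
    """A practice can be attempted once all its prerequisites are in place."""
    return all(dep in achieved for dep in TECH_TREE[practice])
```

A team that has only version control in place can see immediately that automated build and unit test automation are the next adventures available, while continuous integration stays locked until both are done.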

Who would have thought that playing hours of computer games would come in handy one day…

DevOps in Scaled Agile Models – Which one is best?

I have already written about the importance of DevOps practices (or, for that matter, Agile technical practices) for Agile adoption, and I don’t think many people argue the contrary. Ultimately, you want those two things to go hand in hand to maximise the outcome for your organisation. In this post I want to have a closer look at popular scaling frameworks to see whether these models explicitly or implicitly include DevOps. One could of course argue that the Agile models should really focus on just the Agile methodology and associated processes and practices. However, given that the technical side is often the inhibitor to achieving the benefits of Agile, I think DevOps should be reflected in these models to remind everyone that software is created first and foremost by developers.

Let’s look at a few of the more well known models:

SAFe (Scaled Agile Framework) – This one is probably the easiest, as DevOps is explicitly called out in the big picture. I would however consider two aspects of SAFe as relevant for the wider discussion: the DevOps team and the System Team. While the DevOps team covers the aspects that have to do with deployment into production and the automation of that process, the System Team focuses more on development-side activities like Continuous Integration and test automation. For me there is a problem here, as it feels a lot like the DevOps team is the Operations team and the System Team is the Build team. I’d rather have them in one System/DevOps team with joint responsibilities. If you consider both of them as just concepts in the model and have them working closely together, then I feel you start getting somewhere. This is how I do it on my projects.

DAD (Disciplined Agile Delivery) – In DAD, DevOps is woven into the fabric of the methodology, but not as clearly spelled out as I would like. DAD is a lot more focused on the processes (perhaps an inheritance from RUP, as both are influenced/created by IBM folks). There is however a blog post by “creator” Scott Ambler that draws all the elements together. I still feel a bit more focus on the technical aspects of delivery in the construction phase would have been better. That being said, there are a few good references if you go down to the detailed level. The Integrator role has an explicit responsibility to integrate all aspects of the solution, and the Produce a Potentially Consumable Solution and Improve Quality processes call out many technical practices related to DevOps.

LeSS (Large-Scale Scrum) – In LeSS, DevOps is not explicitly called out, but it is well covered under Technical Excellence. Here it talks about all the important practices and principles, and the description of each of them is really good. LeSS has a lot less focus on telling you exactly how to put these practices in place, so it will be up to you to define which team, or who in your team, should be responsible (or, in true Agile fashion, perhaps it is everyone…).

In conclusion, I have to say that I like the idea of combining the explicit structure of SAFe with the principles and ideas of LeSS to create my own meta-framework. I will certainly use both as references going forward.

What do you think? Is it important to reflect DevOps techniques in a Scaled Agile Model? And if so, which one is your favourite representation?

8 DevOps Principles that will Improve Your Speed to Market

I recently got asked about the principles that I follow for DevOps adoptions, so I thought I’d write down my list of principles and what they mean to me:

  • Test Early & Often – This principle is probably the simplest one. Test as early as you can and as often as you can. In my ideal world we find more and more defects closer to the developer by a) running tests more frequently enabled by test automation, b) providing integration points (real or mocked) as early as possible and c) minimising the variables between environments. With all this in place the proportion of defects outside of the development cycle should reduce significantly.
  • Improve Continuously – This principle is the most important and the hardest to do. No implementation of DevOps practices is ever complete. You will always learn new things about your technology and solution, and without being vigilant about it, deterioration will set in. Of course the corollary to this is that you need to measure what you care about, otherwise all improvements will be unfocused and you will be unable to demonstrate them. You should measure the metrics that best represent your areas of concern, like cycle time, automation level, defect rates etc.
  • Automate Everything – Easier said than done, this is the one most people associate with DevOps. It is the automation of all processes required to deliver software to production or any other environment. For me this goes further than that, it means automating your status reporting, integrating the tools in your ecosystem and getting everyone involved to focus on things that computers cannot (yet) do.
  • Cohesive Teams – Too often I have been in projects where the silo mentality was the biggest hindrance of progress, be it between delivery partner and client, between development and test teams or between development and operations. Not working together and not having aligned goals/measures of success is going to make the technical problems look like child’s play. Focus on getting the teams aligned early on in your process so that everyone is moving in the same direction.
  • Deliver Small Increments – Complexity and interdependence are the things that make your cycle time longer than required. The more complex and interdependent a piece of software is, the more difficult it is to test and to identify the root cause of any problem. Look for ways to make the chunks of functionality smaller. This will be difficult in the beginning but the more mature you get, the easier this becomes. It will be a multiplying effect that reduces your time to market further.
  • Outside-In Development – One of the most fascinating pieces of research I have seen is by Walker Royce, about the success of different delivery approaches. He shows that delays are often caused by integration defects that are identified too late. They are identified late because that’s the first time you can test them in an integrated environment. Once you find them, you might have to change the inner logic of your systems to accommodate the required changes to the interface. Now imagine doing it the other way around: you test your interfaces first, and once they are stable you build out the inner workings. This reduces the need for rework significantly. This principle holds for integration between systems or for modules within the same application, and there are many tools and patterns that support outside-in development.
  • Experiment Frequently – Last but not least you need to experiment. Try different things, use new tools and patterns and keep learning. It’s the same for your business, by using DevOps you can safely try new things out with your customers (think of A/B testing), but you should do the same internally for your development organisation. Be curious!
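As a small illustration of Outside-In Development (and of Test Early & Often), here is a hedged Python sketch – the service interface and field names are invented – where the agreed interface is coded and exercised against a mock long before the real backend exists:

```python
from unittest.mock import Mock

def account_balance(service, account_id):
    """Front-end logic written against the agreed interface only; the
    real backend can be swapped in later without changing this code."""
    response = service.get_balance(account_id)
    if response["status"] != "ok":
        raise RuntimeError("backend unavailable")
    return response["balance"]

# A mocked backend lets the interface be tested from day one, so
# integration defects surface while the interface is still cheap to change.
backend = Mock()
backend.get_balance.return_value = {"status": "ok", "balance": 125.50}
```

Once the real system of record is ready, it replaces the mock behind the same interface; if the interface held up under these early tests, the inner workings can change freely on both sides.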

If you follow these 8 principles I am sure you are on the right track to improve your speed to market.

Working with SIs in a DevOps/Agile delivery model

In this blog post (another in my DevOps SI series), I want to explore the contractual aspects of working with an SI (System Integrator). At the highest level, there are three models that I have come across:

  • Fixed Price contract – Unfortunately these are not very flexible and usually require a level of detail to define the outcome, which is counterproductive for Agile engagements. It does however incentivise the SI to use best practices to reduce the cost of delivery.
  • Time and Material – This is the most flexible model that easily accommodates any scope changes. The challenge for this one is that the SI does not have an incentive to increase the level of automation because each manual step adds to the revenue the SI makes.
  • Gain Share – This is a great model if your SI is willing to share the gains and risks of your business model. While this is the ideal model, often it is not easy to agree on the contribution that the specific application makes to the overall business.

So what is my preferred model? Well, let me start by saying that the contract itself will only ever be one aspect to consider; the overall partnership culture will likely make a bigger impact than the contract itself. I have worked with many different models and have made them work even when they were a hindrance to the Agile delivery approach. However, if I had to define the model I consider most suitable (ceteris paribus – all other things being equal), I would agree on a time-and-materials contract to keep the flexibility. I would make joint planning sessions mandatory, so that both staff movements and the release schedule are handled in true partnership (it does not help if the SI has staffing issues the client is not aware of, or the client makes any ramp-ups/ramp-downs the SI’s problem). I would agree on two scorecards to evaluate the partnership. One is a Delivery Scorecard, which shows the performance of delivery: are we on track, have we delivered on our promises, is our delivery predictable? The second is an Operational Scorecard, which shows the quality of delivery, the automation levels in place, and the cycle times for key processes in the SDLC.

With these elements I feel that you can have a very fruitful partnership that truly brings together the best of both worlds.

How to support Multi-Speed IT with DevOps and Agile

These days a lot of organisations talk about Multi-Speed IT, so I thought I’d share my thoughts on this. I think the concept has been around for a while, but now there is a nice label to associate with the idea. Let’s start by looking at why Multi-Speed IT is important. The idea is best illustrated by two interlocking gears of different sizes, and by a simple example.

The smaller gear moves much faster than the larger one, but where the two gears interlock they remain aligned so as not to stop the motion. But what does this mean in reality? Think about a banking app on your mobile. Your bank might update the app on a weekly basis with new functionality like reporting and/or an improved user interface. That is a reasonably fast release cycle. The mainframe system that sits in the background and provides the mobile app with your account balance and transaction details does not have to change at the same speed. In fact, it might only have to provide a new service for the mobile app once every quarter. Nonetheless, the changes between those two systems need to align when new functionality is rolled out. However, that doesn’t mean both systems need to release at the same speed. In general, the customer-facing systems are the fast applications (Systems of Engagement, Digital) and the slower ones are the Systems of Record or backend systems. The release cycles should take this into consideration.

So how do you get ready for the Multi-Speed IT Delivery Model?

  • Release Strategy (Agile) – Identify functionality that requires changes in multiple systems and ones that can be done in isolation. If you follow an Agile approach, you can align every n-th release for releasing functionality that is aligned while the releases in between can deliver isolated changes for the fast moving applications.
  • Application Architecture – Use versioned interface agreements so that you can decouple the gears (read applications) temporarily. This means you can release a new version of a backend system or a front-end system without impacting current functionality of the other. Once the other system catches up, new functionality becomes available across the system. This allows you to keep to your individual release schedule, which in turn means delivery is a lot less complex and interdependent. In the picture I used above, think of this as the clutch that temporarily disengages the gears.
  • Technical Practices and Tools (DevOps) – If the application architecture decoupling is the clutch, then the technical practices and tools are the grease. This is where DevOps comes into the picture. The whole idea of Multi-Speed IT is to make the delivery of functionality less interdependent. On the flip side, you need to spend more effort on getting the right practices and tools in place to support this. For example, you want to make sure that you can quickly test the different interface versions with automated testing; you need good version control to make sure you have the right components in place for each application; and you want to make sure you can manage your codeline well through abstractions and branching where required. The basics of configuration management, packaging and deployment become even more important as you want to reduce the number of variables you have to deal with in your environments. You are better off removing the variables introduced through manual steps by automating these processes completely.
  • Testing strategies – Given that you are now dealing with multiple versions of components in the environment at the same time, you have to rethink your testing strategies. The rules of combinatorics make it very clear that it only takes a few variables before testing all permutations becomes unmanageable. So we need testing strategies that focus on valid permutations and risk profiles. After all, functionality that is not yet live requires less testing than functionality that goes live next.
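The versioned-interface idea ("the clutch") can be sketched in a few lines. This is a minimal illustration of the pattern, not code from the post; all class and function names (`CustomerServiceV1`, `resolve`, etc.) are hypothetical.

```python
# Sketch of versioned interface agreements: the back-end ships v2 while v1
# stays available, so the front-end can upgrade on its own release schedule.
# All names and data are illustrative assumptions.

class CustomerServiceV1:
    """Interface version the front-end currently depends on."""
    def get_customer(self, customer_id):
        return {"id": customer_id, "name": "Jane Doe"}

class CustomerServiceV2(CustomerServiceV1):
    """New back-end release: adds fields without breaking v1 callers."""
    def get_customer(self, customer_id):
        record = super().get_customer(customer_id)
        record["loyalty_tier"] = "gold"  # new functionality in v2
        return record

REGISTRY = {"v1": CustomerServiceV1(), "v2": CustomerServiceV2()}

def resolve(version):
    """Each consumer picks the contract version it was tested against."""
    return REGISTRY[version]

# A front-end still pinned to v1 is unaffected by the v2 release:
legacy = resolve("v1").get_customer(42)
modern = resolve("v2").get_customer(42)
```

Because v2 extends v1 rather than replacing it, both gears can turn at their own speed until the consumer catches up.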

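The permutation-pruning testing strategy from the last bullet can be illustrated with a small sketch. The component names, version labels and the set of supported combinations below are invented for illustration only.

```python
# Sketch of a permutation-pruning test strategy: instead of testing every
# combination of component versions, test only the combinations that can
# actually occur in production. All versions here are hypothetical.
from itertools import product

versions = {
    "frontend": ["v1", "v2"],
    "backend": ["v1", "v2"],
    "billing": ["v1"],
}

# (frontend, backend) pairs covered by a versioned interface agreement.
supported = {("v1", "v1"), ("v2", "v1"), ("v2", "v2")}

all_permutations = list(product(*versions.values()))
valid = [p for p in all_permutations if (p[0], p[1]) in supported]

print(f"{len(valid)} of {len(all_permutations)} permutations need testing")
```

With only three components the saving is small, but the full product grows multiplicatively with each new component and version, while the set of agreed-upon combinations stays manageable.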
The above points cover the technical aspects, but to get there you will also have to solve some organisational challenges. Let me just highlight three of them here:

  • Partnership with delivery partners – It will be important to choose your partners wisely. Perhaps it helps to think of your partner ecosystem in three categories: Innovators (the ones who work with you in innovative spaces and with new technologies), Workhorses (the ones who support your core business applications that continue to change) and Commodities (the ones who run legacy applications that don’t require much new functionality and attention). It should be clear that you need to treat them differently in regards to contracts and incentives. I will blog later about the best way to incentivise your workhorses, the area where I see the most challenges.
  • Application Portfolio Management – Of course to find the right partner you first need to understand what your needs are. Look across your application portfolio and determine where your applications sit across the following dimensions: Importance to business, exposure to customers, frequency of change, and volume of change. Based on this you can find the right partner to optimise the outcome for each application.
  • Governance – Last but not least, governance is very important. In a multi-speed IT world you will need flexible governance; one size fits all will not be good enough. You will need lightweight, system-driven governance for your high-speed applications, and you can probably afford a more PowerPoint/Excel-driven manual governance for your slower-changing applications. If you can run status reports from live systems (like Jira, RTC or TFS) for your fast applications, you are another step closer to mastering the multi-speed IT world.
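The portfolio and governance points above can be combined into a simple routing rule: classify each application by its change profile and criticality, then assign a governance tier. The thresholds, tier names and sample portfolio below are illustrative assumptions, not a standard.

```python
# Hedged sketch of "flexible governance": route each application to a
# governance tier based on change frequency and business importance.
# Thresholds and tier names are assumptions for illustration only.

def governance_tier(changes_per_month, business_critical):
    """Pick a governance style matching the application's speed."""
    if changes_per_month >= 4:
        # High-speed apps: automated, dashboard-driven governance.
        return "lightweight, system-driven (live dashboards)"
    if business_critical:
        # Slower but important apps: regular stage-gate reviews.
        return "standard stage-gate"
    # Low-change commodity apps: occasional manual reporting suffices.
    return "manual, document-driven"

portfolio = {
    "web-shop": (12, True),          # fast-moving, customer-facing
    "erp-core": (2, True),           # workhorse, changes slowly
    "legacy-reporting": (0, False),  # commodity, barely changes
}

for app, (freq, critical) in portfolio.items():
    print(app, "->", governance_tier(freq, critical))
```

The same classification can drive partner selection: the fast tier goes to your Innovators, the middle tier to your Workhorses, and the bottom tier to your Commodity partners.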

The need for an Onshore Technical Apprenticeship model to counter the falling ladder

This post is a little bit different to my other posts. It highlights a challenge that most of us in the western world are facing, now that offshoring and nearshoring are just a reality of life. I think companies need to think about an onshore technical IT apprenticeship program that allows them to build more technical skills onshore. And it does not matter whether you are a consulting company, an IT shop or a manufacturing business: software is an important part of your business, and you had better keep building the skills required to run it.

When I started working for my current employer over 10 years ago, everyone did some kind of technical work as a new joiner. I personally did some Cobol programming; many of my friends worked with Java. As we became more and more senior, we had a good understanding of what it takes to develop a solution.

Nowadays, if you are trying to staff a technical onshore role, it is actually not easy to find people, and you often look to colleagues from offshore centres to help out. A lot of project and program management is still being done onshore, and many of the current leaders still come from my generation of “been there, done that”.
Unfortunately, in many onshore IT teams today, only exactly three technical skills seem to matter: Excel, PowerPoint and MS Project…

A bit of background:
The problem I describe is what I call the “Falling Ladder”, or what a TED talk called the “sinking skill ladder” (around 10:50 into this video http://www.ted.com/talks/nirmalya_kumar_india_s_invisible_entrepreneurs/transcript?language=en#t-698501). Once you have handed all the on-the-ground technical work to delivery centres, you soon realise that the people onshore no longer have the knowledge to design solutions really well, so you start handing over design as well. Soon you don’t have the skills to really project manage anymore, as all you can do is manage the spreadsheet, so you hand over project management too. Before long, all that remains onshore is demand creation, either through sales externally or internally. And would you trust a Technical Architect who has never seen code? Okay, this is a slight exaggeration, but you get the gist.
A friend of mine recently left the IT company he was working for, and after two months of job search we caught up. Here is what he said: “It is actually quite hard to find a job if you don’t have any skills beyond Excel and MS Project; the market is looking for people who understand the technology you are dealing with.” You end up with two choices: you get some of the young guns for the newest set of technologies, or someone much older who still has the technical skills of well-established technologies. But is that sustainable?

So what should you do:
I think you need to find a way to create the next generation of technical leaders. Technology, and specifically software, will remain an important part of your business. It is unrealistic to expect to do all IT work in-house and onshore, but to really drive technology to the extent that you need to, it is important to retain technical talent. Create some technical roles in your team that are onshore, even if it is unpopular. The investment you make here is small and will pay back easily if you can keep good technical people around and grow them as the next technology leaders in your company.

Picture: The Fall by Celine Nadeau
License: Creative Commons

DevOps for Systems of Record – DevOps Summit Melbourne 21st of November

Update – The DevOps summit has been postponed until early next year.

This time I just want to provide a quick teaser for my upcoming talk at the DevOps summit in Melbourne about DevOps for Systems of Record. In this talk I will highlight my experiences from two case studies, one with Mainframe and one with Siebel, and share the lessons learned.

The talk will highlight four principles for dealing with Systems of Record in your DevOps adoption:
Principle 1 – Overcoming Culture
Principle 2 – Tenacity over Vendor Advice
Principle 3 – A word about Maturity Models
Principle 4 – DevOps changes everything

After the talk I will provide a more detailed post about the topic, but for now you will have to come over to the Melbourne Convention and Exhibition Centre to hear me speak about it. Check this link for details about the summit: http://www.devops-summit.org/Melbourne