Digital transformations: the most efficient path is not doing everything, everywhere, all at once
Executing multiple streams of work at the same time will be much faster than performing activities in a sequential fashion.
The idea is totally logical. It makes perfect sense, doesn’t it?
We’ve all been there. There’s a huge programme of work that needs to be completed in a ridiculously short period of time. Whether this is a digital transformation, a programme to deliver on a significant piece of regulation, a change activity to improve the business/generate more revenue or any other large change programme, the solution that is presented is always the same.
To achieve the organisation’s goals within the ever-shortening time constraints, we will need to run multiple, concurrent, interrelated streams of work, or put another way: we’ll need to do everything, everywhere, all at once.
Ask anyone how much more efficient they are when they multitask (the answer is invariably “much more efficient”).
Ask any computer programmer whether multithreaded programming – a technique that allows a software application to execute multiple tasks concurrently – always results in improved performance (the answer here is normally “it depends”).
However, there’s plenty of research out there telling us that multitasking on a human level is counterproductive. Shifting focus between different tasks or activities is detrimental to overall efficiency, reduces productivity and has a negative impact on quality.
For the computer programmer, multithreading may not generate any performance gains due to shared resource contention or other technical factors.
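To make the programmer’s point concrete, here is a minimal, illustrative sketch (my example, not from the article): splitting a simple summation across threads that all contend for one shared lock. Under CPython’s global interpreter lock, and with every update serialised on the shared lock, the threaded version gains no throughput over the sequential one; it only adds coordination cost.

```python
import threading

def sequential_sum(n):
    """Sum 0..n-1 in a single stream of work."""
    total = 0
    for i in range(n):
        total += i
    return total

def threaded_sum(n, workers=4):
    """Sum 0..n-1 across several threads sharing one lock-protected total.
    Every update contends for the same lock, so the parallelism buys
    nothing -- the work is serialised anyway, plus thread overhead."""
    total = 0
    lock = threading.Lock()

    def worker(start, stop):
        nonlocal total
        for i in range(start, stop):
            with lock:  # shared-resource contention on every single update
                total += i

    step = n // workers
    threads = [threading.Thread(target=worker, args=(w * step, (w + 1) * step))
               for w in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total
```

Both functions produce the same answer; the concurrent one simply pays extra for the coordination, which is the same dynamic the rest of this article describes at the organisational level.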
So, what’s this got to do with digital transformation?
The human factor
At its core, digital transformation is a human-centric change, and as such we can extend the above principle to any digital transformation initiative.
Organisations are unlikely to run into significant issues if transformation initiatives are small, discrete and involve two parallel streams where the interdependencies between these streams are kept to a minimum.
However, increasing the number of streams or the dependencies between these streams, even by a small amount, increases the complexity, cost and risk profile of the delivery significantly.
Managing interdependencies relating to resources, schedules, technical requirements or any of the other myriad factors that affect delivery becomes extremely complex. The lines of communication also increase significantly resulting in more delay, more scope for confusion and more complexity.
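The growth in lines of communication can be made precise with the familiar pairwise-channels formula, n(n−1)/2: a quick sketch (my illustration, not from the article) showing that coordination overhead grows quadratically as streams are added.

```python
def communication_lines(n_streams: int) -> int:
    """Pairwise communication channels between n interdependent streams:
    n * (n - 1) / 2 -- quadratic growth, not linear."""
    return n_streams * (n_streams - 1) // 2

# Doubling the number of streams far more than doubles the coordination:
# 2 streams -> 1 channel, 4 -> 6, 8 -> 28, 12 -> 66.
for n in (2, 4, 8, 12):
    print(n, communication_lines(n))
```

So moving from two streams to eight does not quadruple the coordination burden; it multiplies it twenty-eight-fold, which is why adding “just one more stream” is never a marginal cost.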
To be clear, not all large change initiatives that run multiple, parallel, interrelated streams of work are destined to fail. However, with each new stream of work or interdependency that is added, the risk of failure and cost overruns increase significantly.
This push for efficiency ignores the fact that change occurs within a complex, constantly evolving and multifaceted landscape. When a programme management function must plan, monitor and coordinate multiple, interrelated streams of work, the result is a steeply rising, non-linear cost curve.
I like GeePaw Hill’s Many More Much Smaller Steps (MMMSS) approach to change, which challenges the common attitude centred around making everything more efficient. In GeePaw’s words: “Parallelism is never free, and the cost isn’t linear as the size of the problem goes up. It doesn’t just get harder when we add more streams, it gets significantly more difficult, quickly swamping any benefit it might deliver.”
So, how should we approach large and complex transformational change initiatives?
Make them smaller
With most digital transformations being undertaken to improve efficiencies or to meet the ever-increasing pace of changing customer expectations, it is crucial to success that we not only improve the pace but also effectively manage the risks associated with this type of change.
Rather than tackling any digital transformation as a large, complex, multi-headed beast, we should improve one discrete area of the business quickly and then move onto the next by adopting an iterative process that allows organisations to deliver and demonstrate continual progress at speed.
Simplifying the change and progressing quickly through a number of smaller transformations drives behavioural change and delivers intrinsic human and organisational benefits beyond improving delivery (e.g., improved morale and reduced transformation fatigue).
As the organisation becomes more adept at transformational change, it may look at running two transformations in parallel (again minimising the interdependencies between the initiatives). However, for the reasons outlined above, we should not increase the number of parallel, interrelated streams above two.
Validation, innovation and risk management
So, rather than investing significant time and effort in up-front analysis and decision making and then taking a bet that those decisions will still hold true at some point (possibly 12 months or more) in the future, the above process of small iterative transformations allows organisations to validate their assumptions and the feasibility of their ideas much more quickly.
The outcomes here are much more favourable and allow for better risk management around the execution of conceptual ideas. Either the bets and assumptions are correct – which allows another set of risks to be taken and drives more innovation – or the organisation fails fast and corrects quickly. No matter which of these results is achieved, both will, ultimately, improve delivery velocity.
Fundamentally, the organisation is not risking everything on a single event or set of assumptions that were taken a long time ago at the start of the transformation. It is betting on a series of smaller initiatives that are delivering results over shorter timeframes which in turn improves how the organisation manages risk.
What’s wrong with the status quo?
As mentioned in my previous article, we’ve all seen the well-reported statistic that 70% of transformations fail to deliver successful outcomes, but the issue being addressed here is broader than that. It touches on all large change programmes.
It is a very rare occurrence that, when a large transformation or other change initiative (with many heavily interconnected, parallel streams of work) completes, everyone whoops for joy at the success of the delivery. More often, the expectation that everything will be better once the initiative is delivered results in a drop in morale when the outcomes fail to match or exceed expectations.
By adopting a (more or less) sequential, iterative and incremental approach, we see a situation where everyone is pleased when something works, but if it does not, we course correct quickly and move on.
Against the delivery pressure of an ever-closing time window, the idea that we should do one thing at a time is seemingly counterintuitive, but doing everything, everywhere, all at once is not the answer.
About the author
Brian Harkin is the CTO of Kynec and a visiting lecturer at Bayes Business School (City, University of London).
He is passionate about the intersection of people, technology, and innovation and has developed the Galapagos Framework to help leaders and organisations transform the way they direct digital change.
All opinions are his own and he welcomes debate and comment!
Follow Brian on Twitter @DigitalXformBH and LinkedIn.