Myth-busting the cloud

It sometimes feels that a week doesn’t go by without new prominence being given to myths about how we should now approach “The Cloud”.

Invariably, the advice is biased towards one of two core positions: either that full public cloud consumption is inevitable, or the opposite stance…

So, I have taken some of these core “myths” and put the Neil Fagan perspective on them – but rather than call them myths, I have called them “challenges”.

Challenge one: Hybrid is a permanent destination

Quick definition: hybrid means an organisation has a mix of on- and off-premise cloud-based solutions.

I am quite sure that, from a public cloud provider’s perspective, the entire business vision, mission and strategy is based on removing any data centre that is not their own – and the concept of hybrid only devalues that goal.

A further subtext to hybrid is that it allows workloads to be shifted between cloud providers – akin to spot pricing. In my opinion, this is where the orchestration products have over-sold themselves: unless an organisation chooses the lowest common denominator every time (and hence never uses any advanced features), there will always be constraints.

There is also the “why would you run your own data centre?” question, if all the variables are stacked up against it. The answer is that you wouldn’t – if the cloud reconciled all your requirements.

Time is the key dimension here. As an organisation matures and more of the non-standard workloads can be migrated, the on-premise data centre becomes very much a declining asset. Will it decline to absolute zero? I don’t believe so. Will it decline to only marginal use cases? I would say “yes”; but whether this happens in 3 or 15 years is debatable.

Most organisations will have a mantra of keeping vendor lock-in to a minimum in order to reduce the risk of overzealous exploitation. Whilst a hybrid strategy would help offset this, the market pressures are somewhat different given, for example, the market share AWS currently commands. I see hybrid much more as the reality of a deployment timeline, accompanied by the technical and cultural change of moving to a public cloud provider. Will it disappear completely? Probably, but I couldn’t predict how long that will take.

The other core dimension of a hybrid cloud is the workload itself. It may well be that some workloads only ever use one provider (on or off premise) and therefore never need to move between providers or between on and off premise – other workloads will. The latter will never be seamless, but building for portability should be a key principle in application development.
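Building for portability usually means hiding provider-specific services behind an interface the application owns. A minimal sketch of that idea, with purely illustrative names (no real provider SDK is used here):

```python
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Provider-neutral interface the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(ObjectStore):
    """Stand-in for an on-premise store (or for tests)."""

    def __init__(self) -> None:
        self._items: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._items[key] = data

    def get(self, key: str) -> bytes:
        return self._items[key]


# A public cloud implementation (e.g. an adapter wrapping an S3 client)
# would satisfy the same interface; swapping providers then means
# swapping one adapter, not rewriting the application.
def archive_report(store: ObjectStore, name: str, body: bytes) -> None:
    store.put(f"reports/{name}", body)
```

The trade-off noted above still applies: an interface like this can only expose the features all providers share, so advanced provider-specific capabilities sit outside it.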

What is key, though, is that the AWS/Azure ecosystem is one which every organisation should exploit as fully and as quickly as possible. Without this ecosystem, technical and business advancement will be severely impacted. Using it, testing it and piloting it in every way possible is a “must do” activity. But, at the same time, don’t assume that within a 12-month period your entire workload will suddenly be there.

Challenge two: Bimodal is the answer

Quick definition: Gartner “created” it. Bimodal is the practice of managing two separate but coherent styles of work: one focused on predictability, the other on exploration. 

There are as many advocates as there are detractors of bimodal. The detractors’ view is that, firstly, bimodal is just too simple a concept to model an organisation on. Secondly, it creates an A-team and B-team culture. Probably most importantly (thanks to Simon Wardley), there is a trimodal alternative – pioneers, settlers and town planners – the critical element being the settlers, who ensure work is taken from the pioneers and turned into mature products before the town planners can turn it into industrialised commodities or utility services. To quote Simon: “Without this middle component then, yes, you cover the two extremes (e.g., agile vs six sigma) but new things built never progress or evolve. You have nothing managing the ‘flow’ from one extreme to another.”

Isn’t convergence a better strategy?

In the practical, real world, any organisation with multiple development programmes will need to manage their maturity, from “legacy” through in-flight developments to those which are emerging. From an infrastructure deployment perspective, this may well be bimodal (e.g. mode 1, traditional VM infrastructure; mode 2, webscale open-source-oriented infrastructure); however, this does not mean an organisation can (or should) model itself on it. So it’s good that Gartner have raised the topic, as it has created a discussion that had been brewing for some time.

Challenge three: If I have deployed everything into AWS (or other service providers), whenever they have a problem, I will blame the provider

The recent AWS S3 outage is a good example of this. Immediate comments are usually along the lines of “This is what happens when you give the keys to the kingdom to one company.”

What this actually allows an organisation to do is focus on “Enterprise Cloud Adoption”. In other words, moving workloads to the public cloud doesn’t remove the organisation’s original responsibilities: application management, resilient architectures, security, disaster recovery, governance and financial management. So, whilst AWS may have, for example, very impressive availability SLAs and statistics, relying on those alone would be like buying a car without breakdown cover simply because you believe you will never need it.

The management process an organisation needs to develop is often termed a “Shared Responsibility Model” – AWS have published one (here is an example) which highlights how to build a resilient and available architecture, ensuring every role, every function and every activity has a clear line of demarcation. Every organisation that moves workloads to the cloud has a duty to ensure the designed infrastructure supports the business need.
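On the customer side of that model, planning for a provider outage can be as simple as coding the failover path up front. A minimal sketch, assuming the application can read the same object from replicas in more than one region (the replica callables are illustrative – in practice each might wrap a per-region S3 client):

```python
def read_with_failover(key, replicas):
    """Try each replica in turn; raise only if every one fails.

    `replicas` is an ordered list of callables, each taking a key and
    returning bytes (e.g. partials wrapping a storage client per region).
    """
    errors = []
    for fetch in replicas:
        try:
            return fetch(key)
        except Exception as exc:  # in practice, catch the SDK's error types
            errors.append(exc)
    raise RuntimeError(f"all {len(replicas)} replicas failed: {errors}")


# Hypothetical example: the primary region is down, the secondary serves.
def primary(key):
    raise ConnectionError("region outage")

def secondary(key):
    return b"payload for " + key.encode()
```

The point is not the dozen lines of code but the design decision behind them: the failover path exists, and is exercised, before the outage happens rather than after.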

The next time there is an outage (and there will be one) at a significant cloud provider, the answer from the enterprise should be more like “that is a shame, but thank goodness we planned for it…”

This post first appeared in Neil’s blog.

Neil Fagan

Neil Fagan is CTO of the UK Government Security and Intelligence Account in Global Infrastructure Services. He is an enterprise architecture expert, leading teams of architects who work on solutions from initial concept through delivery and support.

See Neil’s full bio.


  1. Tim Coote says:

    Sane words from a large SI. There do seem to be a lot of enterprises in denial, though: I went to a Cloud Expo and could only find owned infrastructure approaches. Are enterprises frightened, or just unsure how to identify applications/systems to migrate and how to get the devops/CD working properly first?


    • Neil Fagan says:

Hi Tim – I think you are right – it’s never quite as easy as the glossy brochures suggest – and of course we all know that fundamentally this is not a technical project but a business transformation, in order to realise the business value and benefits…




