[Trends] Multi-Cloud
May 17, 2021
Jens Bontinck


Data Architecture

Analysts forecast that, within a few years, companies will deploy most workloads in multiple public clouds and use multiple SaaS services to run their business.

On-premise deployments will only be used when cloud or SaaS is not feasible from a legal, budget, latency or data-volume point of view.

For example, IDC expects 2021 to be "the Year of Multi-Cloud": "By 2022, over 90% of enterprises worldwide will be relying on a mix of on-premises/dedicated private clouds, multiple public clouds, and legacy platforms to meet their infrastructure needs."


At ML6 we clearly see a move to the cloud, but we notice that multi-cloud remains controversial. As soon as a cloud platform has been selected (or added to the "enterprise agreement"), all kinds of rules pop up that restrict internally and externally developed workloads to a subset of services offered by a single cloud provider.


In this blog post, our Head of Labs, Jens Bontinck, outlines the benefits of multi-cloud, looks at the risks and complexity, and shares his top three tips that make multi-cloud feasible.


Reference: Hybrid and multi-cloud patterns and practices

The problem

We understand that companies try to reduce complexity.

On the other hand, if your cloud strategy is only based on lift & shift to a limited number of “on-premise lookalike” services, you might not see a lot of benefits. If you restrict the set of services on a specific cloud, or the use of other clouds and SaaS services, too strictly, you are probably not using the new features that are the key advantages of modern cloud-native tooling and SaaS.

A key advantage of cloud and SaaS is fully managed services: you can focus on your business cases instead of infrastructure, backed by a pay-per-use model that scales to zero, so operational costs scale with the use of your application or data volume.

Let’s summarize the benefits of multi-cloud in four topics.

  1. If you adopt a multi-cloud and SaaS approach, it’s possible to select the most appropriate services for each workload.
    Basic IaaS and some PaaS services are very similar in pricing and functionality on all clouds. Examples are object storage, virtual machines and managed relational databases. More specific services such as IoT services, cloud data warehouses and AI tools can be very different from a functionality, scalability and pricing point of view.
    A good example is Google BigQuery. If you want to store a lot of data in a cloud data warehouse for analytics and AI, and you load the data, as you should, append-only into well-defined partitions, the storage costs for Google BigQuery are typically half of those of other cloud data warehouse solutions.
  2. Mitigate risks.
    Depending on your industry, you may be legally required to have an exit strategy and disaster recovery plan for a number of workloads.
    A partial solution is storing essential data on multiple (public) clouds. Public clouds, well-managed private data centres and highly available on-premise hardware offer excellent SLAs. However, outages, disasters and attacks by hackers can occur, so make sure that essential data sets and backups are stored in multiple locations. All cloud providers offer excellent life-cycle management and various levels of encryption to make this happen at a manageable cost. From an application point of view, if you take disaster recovery and resilience into account in the design of the application, using cloud-native frameworks and data services that automatically run in different regions can ensure maximum uptime or portability with no or limited refactoring. Check out the IDC white paper “How a multi-cloud strategy can help regulated organizations mitigate risks in the cloud” for more examples.
  3. Design greenfield applications with multi-cloud in mind.
    Modern applications are often designed as a range of micro-services, (managed) data services and message queues.
    Multi-cloud or hybrid deployment options, for example low-cost development/acceptance environments in the cloud, are more and more feasible thanks to container orchestrators such as Kubernetes. The features of Kubernetes, and of less configuration-intensive container solutions, vary across cloud and on-premise providers, so it’s recommended to select a solution that works for your team and your view on managing infrastructure.
  4. Keep track of differences in costs and features.
    Public clouds and SaaS services are all trying to increase market share. This can be an opportunity from an OPEX point of view. If you have a good view of the current and future workloads, the current OPEX, the portability of certain workloads, the features you are looking for and the skills in your teams, it’s feasible to negotiate a great long-term commitment deal with multiple providers, or to selectively move or deploy new workloads to a competitor cloud or SaaS solution.
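The append-only, partitioned loading pattern from point 1 can be sketched in BigQuery DDL; the dataset, table and column names below are purely illustrative:

```sql
-- Hypothetical events table, partitioned by day on event_date.
-- Partitions that are not modified for 90 consecutive days move to
-- BigQuery's cheaper long-term storage automatically, which is why
-- append-only loading into well-defined partitions keeps costs low.
CREATE TABLE analytics.events (
  event_date DATE NOT NULL,
  user_id    STRING,
  payload    JSON
)
PARTITION BY event_date
OPTIONS (require_partition_filter = TRUE);
```

Requiring a partition filter is optional, but it prevents accidental full-table scans, which also keeps query costs predictable.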

The risks


Multi-cloud is unfortunately not as straightforward as the Cloud Native Computing Foundation and cloud vendors claim. It definitely adds complexity.

Let’s take a look at containers.
Managed Kubernetes clusters are available on every major cloud, but the Kubernetes version, the level of automatic infrastructure management (with or without downtime) and the surrounding cloud infrastructure, such as API managers and software-defined networking, differ.
Services such as Cloud Run and Knative, which abstract away the complexity of Kubernetes, are available, but with different features and APIs.
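To illustrate where the portability actually sits: the core Kubernetes manifest below (names and image are placeholders) runs unchanged on any managed cluster, while the surrounding pieces, such as ingress, identity and networking, are what differ per cloud:

```yaml
# Minimal Deployment that is identical on GKE, EKS or AKS.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-api
  template:
    metadata:
      labels:
        app: demo-api
    spec:
      containers:
        - name: demo-api
          image: registry.example.com/demo-api:1.0.0
          ports:
            - containerPort: 8080
```

The cloud-specific complexity starts as soon as you expose this service: load balancer annotations, TLS certificates and IAM integration all look different on each provider.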

Another reality check is using Terraform to define infrastructure as code.
All the major clouds and on-premise tools are supported, but you still need a detailed understanding of each cloud’s specific services.
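As a small illustration (resource names are made up), provisioning the “same” object storage bucket on two clouds uses two different Terraform resource types with different required arguments, so a single tool does not remove the need for per-cloud knowledge:

```hcl
# Google Cloud Storage bucket: the location is a required argument
# on the resource itself.
resource "google_storage_bucket" "backups" {
  name     = "example-backups-eu"
  location = "EU"
}

# AWS S3 bucket: the region is inherited from the provider
# configuration instead of being set on the resource.
resource "aws_s3_bucket" "backups" {
  bucket = "example-backups-eu"
}
```

The syntax is shared, but lifecycle rules, encryption settings and access policies are modelled differently again on each provider.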

On the other hand, we’ve had great results, with minimal effort, that bring value to our customers.

These are our top three tips to make multi-cloud feasible.

  1. Assess the scope and impact of using another public cloud or SaaS service in a realistic way.
    In some cases, we hear concerns such as: “The roll-out of the cloud strategy took us two years because of the impact on on-premise networking, authentication, the roll-out of Office applications, migrations of several ERP applications… This is preventing us from adopting another public cloud or SaaS application.”

    We notice that as soon as workloads have been implemented on one public cloud, the team can quickly get up to speed with another cloud. The services will be different, but a lot of the tooling, such as infrastructure as code, object storage and container registries, is very similar.
    Setting up VPN connections or interconnects between clouds is easier than configuring on-premise hardware. All major public cloud providers have managed services to keep LDAP services and object storage in sync between public clouds and on-premise environments.
    It’s also important to think about the types of workloads you want to move. If you, for example, move large-scale data analytics and AI to another cloud, you don’t need to upskill your entire IT department but only one specific team and a number of people in supporting roles.
  2. Make sure the architecture team is aware of the features and pricing of different cloud providers and of the challenges teams face in day-to-day operations.
    We recommend that a modern architecture team follows market trends and regularly aligns with internal and external development teams about the challenges they are facing and the operational costs.
    With this input, it’s feasible to work towards a tech stack with a good balance between technology used in production, technology that has to be phased out, and technology that can be tested and adopted incrementally.
  3. Take a balanced view on portability and vendor lock-in.
    Not everybody needs the scale and extreme agility of large cloud-native companies.
    Large technology companies develop their own technology, often backed or externalised as open source, because the tools available on the market at the time didn’t meet their specific needs or budget.
    If you have a smaller team and budget, and your core business is not running a technology business, we recommend focusing on your core business.
    Align your technology stack with your business goals. Invest in skills and custom-developed technology that improve your core business.
    Leverage managed infrastructure and SaaS services for most of the other supporting processes outside your core.
    A lot of companies for example don’t need the extreme customizability of a full-blown self-managed Kubernetes environment. A managed Kubernetes or serverless solution can be just enough with a lot less infrastructure and security management. Portability will be restricted but possible with some refactoring. This is also the case for AI and advanced analytics. Interoperability between cloud data warehouses and more customized server-based Hadoop/Spark platforms is mature enough these days to use the best of both worlds.
    We recommend keeping most of your data in a managed cloud solution so you can focus on developing valuable data products (dashboards, alerts, ML models) instead of infrastructure, while you temporarily spin up more specific technology for the 10% ad-hoc or R&D cases.

Summary

We hope this article demystifies multi- and hybrid cloud.
Get in touch if you have any comments or are interested in our cloud, data and architecture advisory services.

