Mohamed Wadie Nsiri
on 10 February 2023
There has been a lot of talk in recent years about companies leaving the cloud. With looming macroeconomic uncertainties affecting growth, companies are trying to control spending by downsizing their staff and cutting infrastructure costs. One way to reduce infrastructure costs is to repatriate workloads from public clouds, a move we refer to as “cloud repatriation”. According to a 2021 survey by 451 Research, 48% of respondents repatriated some of their workloads from cloud providers, a figure that echoes Gartner’s prediction that “more than 50% of data migration initiatives will exceed their budget and timeline—and potentially harm the business—because of flawed strategy and execution”.
We can avoid many cloud repatriations with proper planning and design, as detailed in this whitepaper. Looking more closely at the drivers pushing organisations away from public clouds also helps avoid repatriation. This article is the first in a series that aims to help you avoid costly mistakes and misconceptions around cloud migration and repatriation. In this blog, we will analyse the main reasons for cloud repatriation.
Introduction
There are several surveys and studies around repatriation. Most of the data converge on a few common drivers for cloud repatriation, even if their relative importance is rated differently. The following reasons are often in the top three:
Higher costs
Many organisations are finding that cloud costs end up being higher than initially projected. According to CloudZero’s 2022 report, only 40% of respondents estimated that their “cloud costs are about where they should be or lower”. This figure is consistent with the 50% of IT executives who reported that “it’s difficult to get cloud costs under control” in Anodot’s 2022 report.
Some studies, such as HashiCorp’s 2022 survey, point out that 94% of organisations are wasting money in the cloud. Statista estimated that waste at around 30% of total cloud spending, and Gartner estimated total cloud spending at around 500 billion dollars in 2022. Taken together, these figures imply roughly 150 billion dollars of cloud waste per year – more than the annual GDP of nearly two-thirds of the world’s countries!
Unsurprisingly, 451 Research’s 2021 survey ranked cost as the #1 reason to repatriate workloads. It was also ranked third by IDC’s 2018 survey.
The gap between actual and projected costs is often due to a lack of:
- Realistic business plans
- Proper hardware planning and sound technical design choices
- Controls and organisation-wide policies over the cloud services in use, leading to the “shadow IT” issue
- Cloud skills to leverage cloud services optimally
- A global view of the cost structure of cloud spending, especially in hybrid or multi-cloud setups
Let’s now look at another common reason for cloud repatriation.
Growing security concerns
According to IBM’s 2022 Transformation Index, 54% of respondents agree that “Public Cloud is not secure enough”. IDC’s 2018 survey confirms this perception, reporting security as the leading driver for repatriation.
The concerns around cloud security are fuelled by:
- The recurring vulnerabilities found in different cloud providers (e.g. Azure, AWS, GCP)
- The wide impact of some cloud failures on the internet (e.g. the AWS outage in late 2021)
It is becoming clear that a significant security hole in a major cloud provider might lead to data leakage at an unprecedented scale. According to Statista, 60% of corporate data is already stored in public clouds, which explains the growing concern over concentrating so much data in the infrastructure of a limited number of providers.
Besides security concerns, there’s a third and often surprising reason organisations choose to repatriate their workloads: performance issues.
Degraded performance
According to IDC’s 2018 survey, performance is the second most common reason for cloud repatriation. Virtana’s 2021 survey indicates that 72% of repatriations were due to “performance or cost reasons, or a combination of both”.
Performance degradation after a migration to the cloud is rarely due to underperforming hardware. Public clouds offer the same, and sometimes better, hardware than what you can buy for a private cloud setup. Yet there are several scenarios that might lead to degraded performance after a migration to the cloud:
- Your private cloud setup leveraged persistent local flash storage, whereas public clouds may offer only ephemeral local storage or persistent remote storage.
- Your on-premises setup benefited from low network latency between your components, whereas public clouds introduce intra- and inter-availability zone latency (a quick way to measure this is sketched below).
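To make the latency point concrete, here is a minimal sketch of how you could compare connection round-trip times between two components before and after a migration. The peer endpoint below is a hypothetical placeholder; components on the same on-premises network segment often see sub-millisecond round trips, while traffic crossing availability zones typically takes noticeably longer.

```python
# A minimal latency probe (illustrative only): it measures TCP connection
# round-trip times to a peer service, e.g. a database replica that may sit
# in another availability zone. The endpoint below is a hypothetical placeholder.
import socket
import statistics
import time

PEER = ("db-replica.internal.example", 5432)  # hypothetical host and port
SAMPLES = 20

rtts_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    # Opening and closing a TCP connection approximates the network round-trip cost.
    with socket.create_connection(PEER, timeout=5):
        pass
    rtts_ms.append((time.perf_counter() - start) * 1000)

print(f"median connection RTT: {statistics.median(rtts_ms):.2f} ms")
print(f"max connection RTT:    {max(rtts_ms):.2f} ms")
```

Running the same probe in both environments gives you a like-for-like baseline before committing to an architecture.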
In this whitepaper, we explain how those scenarios might arise and how to mitigate them.
Summary
We have shared a good amount of data around cloud repatriation and gone over the most common reasons, according to these studies, for moving workloads back from cloud providers. In the next blog, we will dive into the misconceptions that can lead to cloud migration failures and to cloud repatriation in particular. We will also provide recommendations to help you avoid these traps.
At Canonical, we have the expertise and the products that can help you plan, execute and optimise your cloud migration:
- We can help you secure your assets. Our Ubuntu Pro offering provides you with 10 years of maintenance and security coverage for more than 23,000 packages. It helps you harden your operating system and acquire the certifications you need for various compliance regimes, such as FedRAMP and HIPAA.
- We can manage some of your critical applications at a predictable and transparent cost.
- Our Juju platform helps you operate your workloads the same way on the major public clouds and in your private cloud. It therefore lowers the risk of any cloud migration, as you can move workloads back and forth without changing your model of operations.
- We can help you build your own private cloud through a wide variety of products:
- MAAS for setting up and operating bare-metal servers.
- LXD for managing containers and virtual machines.
- OpenStack and Canonical Kubernetes for modernising your data centre operations through a public cloud-like paradigm.
- Canonical Ceph for centralising and automating your storage management.