How Container Mobility & Workload Mobility Unlock Hybrid Cloud
Porting a legacy application to a new cloud or virtualization platform is notoriously risky, expensive, and challenging. These older applications carry a lot of technical baggage and typically require extensive reconfiguration for the “lift and shift” approach to work, adding considerable time and expense to migration projects. Fortunately, the advent of containerization and container cluster managers like Kubernetes (originally developed at Google) is poised to deliver an unprecedented level of workload and container mobility that could change the way we migrate applications forever.
Workload Mobility = Cloud Migration
The infrastructure landscape is ever-diversifying, with new technologies and vendors emerging on what seems like a daily basis. Increasing levels of abstraction have allowed for unprecedented efficiency and flexibility. More companies than ever are leveraging a multi-cloud or hybrid environment. What a time to be alive.
Despite all of these rapid advancements, there still isn’t a great way to move legacy applications to these new platforms. With a “lift and shift” approach, you’re likely to end up replicating your hypervisor in a cloud environment, which of course defeats the purpose of cloud value-adds like elastic scaling.
And while the idea of a quick fix to ‘port’ applications sounds alluring, the reality is that mature IT workloads are very complex. The more data your application workloads contain, the greater the chance that a number of services use them, or are used by them. Unless you capture and translate this configuration data as well, you’re better off manually migrating the applications.
- Mature IT workloads and environments are complex, with many interdependencies.
- Mobility shifts the mindset from one-time migration to a recurring, ongoing process.
- It restores power to end users from vendors: you can switch vendors whenever you want, reducing your organization’s vendor lock-in.
- It creates the ability to move resources around to best balance price and performance.
The Portability of Containers
The rapid growth and widespread adoption of containers is no surprise given the benefits and advantages they deliver to business applications. Containers provide lightweight, high-level platform abstraction without resorting to full machine virtualization. This isolates applications within containers and makes it possible to port applications between public and private cloud providers that support the container standard.
Containers are also more efficient at creating workload bundles that are portable from cloud to cloud. As such, they provide a stable foundation for moving workloads across multi-cloud and hybrid cloud infrastructures without major rewrites of an application’s codebase.
It takes knowledge, skills, and enabling technology to successfully execute a cloud-to-cloud migration using containers, but such an approach makes portability between clouds relatively straightforward.
Essentially, containers break applications up into smaller code packages, each bundling the libraries and runtime dependencies it needs to run independently of the environment where it is hosted, while sharing the host’s operating system (OS) kernel. Container orchestration software like Kubernetes provides the programming layer that enables containers to knit themselves together to form enterprise solutions.
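To make the orchestration layer concrete, here is a minimal sketch of a Kubernetes Deployment. The names (`web`) and the image reference (`example.com/web:1.0`) are placeholders, not from the original article; the point is that the orchestrator is told declaratively what to run and how many copies to keep alive.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # hypothetical application name
spec:
  replicas: 3              # Kubernetes keeps three container instances running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web:1.0   # placeholder image reference
        ports:
        - containerPort: 8080
```

If a container crashes or a node fails, Kubernetes recreates the missing replicas from this specification automatically.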
With containers, programmers do not need to rewrite their application code to run on new cloud platforms and operating systems. They can code applications just once, as containerization imbues them with the ability to run virtually anywhere.
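As a hedged illustration of “code once, run anywhere,” a minimal Dockerfile packages an application together with its dependencies. The file names (`requirements.txt`, `app.py`) are assumptions for a hypothetical Python service, not details from the article:

```dockerfile
# Sketch: package a hypothetical Python app once; the resulting image
# runs unchanged on any platform with a container runtime.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

The same image built from this file can be deployed to any cloud that supports the container standard, with no code changes.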
Because migrating containers between cloud platforms or providers can be achieved by pulling their images from a registry onto the new servers, containers allow for easy scaling of applications as well as seamless cloud portability.
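In practice, moving an image between providers is often just a retag-and-push with the standard Docker CLI. A sketch, where the registry hostnames and image name are placeholders:

```shell
# Pull the image from the old provider's registry (hostnames are placeholders)
docker pull registry.old-cloud.example/web:1.0

# Retag it for the new provider's registry
docker tag registry.old-cloud.example/web:1.0 registry.new-cloud.example/web:1.0

# Push to the new registry; servers in the new cloud can now pull and run it
docker push registry.new-cloud.example/web:1.0
```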
Container Security Risks
Containers come with security-related limitations that organizations should investigate before building applications that leverage their capabilities. Containers share the host’s kernel, so they do not provide the same isolation boundaries as VMs.
Hackers can exploit weak points and vulnerabilities in the underlying OS to gain access to a container, and can also exploit containers to access the underlying servers. Fortunately, there are ways to limit the impact of such attacks on the enterprise.
Protect Container Data
Just like other data sources, containers need to be protected. Organizations that increasingly rely on containerization technology for critical IT functions must put in place appropriate safeguards to minimize downtime and disruptions to business operations and ensure business continuity.
Containers are instances of short-lived microservices or applications and can be scaled to meet application load. In the event of a failure, a container can be restarted and reconfigured from code, so there is no need to back up the containers themselves.
Ideally, containers should be shut down once they are no longer in use. Many developers launch container-based applications and forget to go back and scale down the number of containers in operation. This ties up cloud-based resources and can cost organizations a lot for resources that deliver no benefit.
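On Kubernetes, releasing those idle resources is a one-line operation. A sketch, assuming a hypothetical deployment named `web`:

```shell
# Scale the hypothetical 'web' deployment down to zero replicas
# when it is no longer needed, releasing its cloud resources
kubectl scale deployment/web --replicas=0

# Or delete it entirely if it will not be needed again
kubectl delete deployment/web
```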
While containers were originally intended to be short-lived, the reality of enterprise data management means that application data must persist beyond the lifetime of any single container. Data persistence was therefore added to container infrastructures, creating the need for backup mechanisms to protect container data.
Some implementations of container technology store application data on the same host as the container, usually in a directory associated with the container. If the container is lost, the application data is lost as well.
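The usual remedy is to move application data off the container’s host filesystem onto an external volume that outlives any single container. A hedged Kubernetes sketch, where the claim name, pod name, image, and mount path are all placeholders:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-data            # placeholder claim name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi          # storage provisioned independently of any container
---
# Pod snippet: mount the claim so writes land on the volume, not the host
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: example.com/web:1.0     # placeholder image reference
    volumeMounts:
    - name: data
      mountPath: /var/lib/app      # assumed data directory
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: web-data
```

Because the data lives on the claimed volume rather than the container’s host, the container can be destroyed and recreated anywhere in the cluster without losing application data, and the volume itself becomes the target for backups.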