VMware costs are climbing. Broadcom’s acquisition reshuffled licensing terms, and IT teams everywhere are looking for a way out. SUSE Virtualization is one of the options getting serious attention, and for good reason. It’s an open-source, Kubernetes-native platform that runs virtual machines and containers in a single environment instead of forcing you to manage two separate stacks.
This guide covers what SUSE Virtualization actually does, how it works under the hood, where it fits in your infrastructure, and what its limitations look like in practice. We also walk through what you need to consider for protecting production workloads once you’re running on it.
What Is SUSE Virtualization and Why Does It Matter?
SUSE Virtualization is an open-source, Kubernetes-native hyper-converged infrastructure (HCI) platform designed to run virtual machines and containers side by side on the same cluster. It’s built from the ground up on Kubernetes, KubeVirt, and KVM, which means your infrastructure team manages everything through a single control plane rather than juggling separate tools for VMs and containerized applications.
The Problem With Legacy Hypervisors
For years, most enterprise virtualization strategies revolved around one vendor. VMware dominated the hypervisor market, and teams built entire operational workflows around vSphere, vCenter, and vSAN. Then Broadcom acquired VMware, licensing models shifted, and costs started climbing in ways that caught many organizations off guard. Suddenly, the platform that was supposed to be a safe bet became a financial liability.
Legacy hypervisors were designed before Kubernetes existed. They handle VMs well, but they were never intended to orchestrate containers. Organizations that adopted both VMs and containers ended up running two completely separate infrastructure stacks, each with its own management tools, networking configurations, storage backends, and operational overhead. That means two sets of skills your team needs to maintain, two licensing agreements to negotiate, and two failure domains to monitor. For teams already thinking about the risks involved in VM migration, this dual-stack burden makes planning even harder.
The core issue with legacy hypervisors isn’t just rising cost. It’s the architectural mismatch between VM-only platforms and the container-driven direction most enterprises are heading.
How SUSE Virtualization Brings VMs and Containers Together
Instead of treating VMs and containers as separate concerns, SUSE Virtualization uses Kubernetes as the common orchestration layer for both. Virtual machines run through KubeVirt, which extends the Kubernetes API to manage VM lifecycle operations the same way you’d manage pods. Containers run natively on the same cluster: one API, one set of access controls, one operational model.
This matters because most enterprises aren’t going to migrate every legacy workload into containers overnight. You still have databases, appliances, and vendor-packaged software that need a full VM. SUSE Virtualization gives your team a path to consolidate those workloads onto a single platform without forcing a disruptive rewrite. You can run a Windows Server VM right next to a containerized microservice, managed through the same Kubernetes tooling your DevOps team already knows. And when it comes to protecting those workloads, having a solid virtual machine backup strategy becomes much simpler when everything lives on one platform.
Under the Hood: How SUSE Virtualization Works
Let’s break down the three core layers that make this platform work: the compute engine, the storage layer, and the management interface.
KVM, KubeVirt, and the Kubernetes Control Plane
At the compute level, SUSE Virtualization relies on Kernel-based Virtual Machine (KVM), the same hypervisor technology behind most Linux-based virtualization deployments. KVM turns each bare-metal node into a hypervisor capable of running fully isolated virtual machines with near-native performance. It’s battle-tested, open source, and already trusted across thousands of production environments.
KVM on its own, however, knows nothing about Kubernetes scheduling or APIs. That’s where KubeVirt comes in. KubeVirt is a Kubernetes add-on that extends the Kubernetes API so VMs become first-class citizens alongside pods. You define a VM using a YAML manifest, just like you would for any other Kubernetes resource. Kubernetes then handles scheduling, networking, and lifecycle operations. Your team doesn’t need to learn a completely new toolset. If they already work with kubectl and Helm, they can manage VMs through the same workflows.
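To make this concrete, here is a minimal KubeVirt `VirtualMachine` manifest. The VM name, resource sizes, and containerdisk image are illustrative placeholders, not values from a specific SUSE Virtualization deployment:

```yaml
# Minimal KubeVirt VirtualMachine definition (illustrative values).
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true               # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio   # paravirtualized disk bus for performance
        resources:
          requests:
            memory: 2Gi
            cpu: "2"
      volumes:
        - name: rootdisk
          containerDisk:      # boot image packaged as a container image
            image: quay.io/containerdisks/fedora:latest
```

Applying it with `kubectl apply -f vm.yaml` and checking it with `kubectl get vms` works exactly like any other Kubernetes resource, which is the point: the VM is just another declarative object in the cluster.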
Here’s a simple way to think about it: Kubernetes is the operating system for your cluster, KVM is the engine running each virtual machine, and KubeVirt is the translation layer that lets Kubernetes speak “VM.” The result is a single control plane that schedules containers and virtual machines on the same nodes, governed by the same RBAC policies and monitored through the same observability stack.
Storage With Longhorn and Third-Party CSI Support
Every VM needs persistent storage, and SUSE Virtualization ships with Longhorn as its default storage backend. Longhorn is a lightweight, distributed block storage system built specifically for Kubernetes. It replicates data across nodes, supports snapshots, and handles volume management without requiring a separate storage array. For smaller deployments or teams just getting started, Longhorn covers the basics well.
For enterprises with existing storage investments, SUSE Virtualization also supports third-party Container Storage Interface (CSI) drivers. That means you can plug in storage from vendors like Dell, NetApp, or HPE and use their arrays as the persistent layer for your VMs and containers. Of course, once you’re running production VMs at scale, having a solid backup and restore strategy for your Kubernetes workloads becomes just as important as the storage layer itself.
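Because both Longhorn and third-party arrays are consumed through standard Kubernetes storage APIs, a VM disk is requested the same way either route: via a PersistentVolumeClaim against a StorageClass. A minimal sketch, assuming the default `longhorn` StorageClass that a Longhorn install creates (swap in your vendor's CSI StorageClass name otherwise):

```yaml
# PVC backed by the default Longhorn StorageClass (name may differ
# in your environment; a third-party CSI driver would expose its own).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-data
spec:
  accessModes:
    - ReadWriteOnce          # block volume attached to one node at a time
  storageClassName: longhorn
  resources:
    requests:
      storage: 50Gi
```

The VM or pod then references `vm-data` as a volume; switching storage backends later is a matter of changing the StorageClass, not rewriting workloads.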
Storage Options in SUSE Virtualization
Here’s how the two storage approaches compare across key criteria.
| Feature | Longhorn (Built-In) | Third-Party CSI (Dell, NetApp, HPE, etc.) |
| --- | --- | --- |
| Deployment complexity | Minimal: included out of the box | Requires driver installation and array configuration |
| Performance ceiling | Dependent on local node disks | Scales with enterprise array capabilities |
| Replication | Built-in cross-node replication | Handled by the external array |
| Best fit | Small-to-mid deployments, edge sites | Large-scale production with existing storage infrastructure |
Unified Management With SUSE Rancher
Running one SUSE Virtualization cluster is straightforward enough. Running five across different data centers is where Rancher becomes essential. According to the official SUSE Virtualization documentation, Rancher integrates with SUSE Virtualization by default to centrally manage virtual machines and containers, with built-in support for authentication providers and multi-tenancy through RBAC.
From Rancher’s dashboard, your team gets a single pane of glass for fleet operations:
- Import multiple SUSE Virtualization clusters into one view.
- Manage hosts and VM images across environments.
- Provision new Kubernetes clusters on top of SUSE Virtualization nodes.
- Control access across teams with granular role-based policies.
It’s the operational glue that ties individual clusters into a manageable fleet, which is exactly what enterprise IT teams need when standardizing on a new platform.
Key Capabilities IT Teams Should Know About
Understanding the architecture is helpful, but what actually matters to your team on a day-to-day basis are the operational capabilities. Here’s what SUSE Virtualization brings to the table once it’s running in your environment.
Live Migration and VM Lifecycle Management
If you can’t move a running VM from one node to another without downtime, you can’t do rolling upgrades, hardware maintenance, or capacity rebalancing. SUSE Virtualization supports live migration natively through KubeVirt, which means you can evacuate a node, patch it, and bring it back while workloads keep running on neighboring hosts. The experience is comparable to what teams expect from vMotion, just handled through Kubernetes primitives instead of vCenter.
Beyond migration, the full VM lifecycle is managed through standard Kubernetes resources. Creating, starting, stopping, cloning, and deleting VMs all happens through the API or the web UI (SUSE Virtualization is the rebranded Harvester project). Your team can template VM configurations, version-control them in Git, and deploy them through CI/CD pipelines. That’s a significant operational advantage over point-and-click hypervisor consoles that don’t lend themselves to automation.
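Live migration itself is also driven declaratively. KubeVirt exposes a `VirtualMachineInstanceMigration` resource; creating one tells the scheduler to move a running VM (here a placeholder named `demo-vm`) to another eligible node:

```yaml
# Trigger a live migration of a running VM to another node.
# The scheduler picks the destination; no downtime for the guest.
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-demo-vm
spec:
  vmiName: demo-vm   # name of the running VirtualMachineInstance
```

The same operation is available as `virtctl migrate demo-vm` for teams using KubeVirt's CLI, and node drains can trigger migrations automatically during maintenance.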
Snapshots, Backup, and Data Protection Gaps
SUSE Virtualization includes volume snapshots and basic VM backup through Longhorn. You can take a snapshot before a maintenance window and roll back if something goes wrong. This is perfectly adequate for dev/test environments or single-cluster setups.
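A pre-maintenance snapshot is just a standard Kubernetes `VolumeSnapshot` against the VM's volume. A sketch, assuming a PVC named `vm-data` and a Longhorn-backed VolumeSnapshotClass (the class name is environment-specific, so treat it as a placeholder):

```yaml
# Point-in-time snapshot of a VM's persistent volume before maintenance.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pre-maintenance-snap
spec:
  volumeSnapshotClassName: longhorn   # placeholder: use your cluster's class
  source:
    persistentVolumeClaimName: vm-data
```

Restoring means creating a new PVC whose `dataSource` points at this snapshot, which makes rollback quick for single-cluster scenarios.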
Built-in snapshots are not a disaster recovery strategy: they typically live on the same storage as the original data, they don’t provide cross-cluster or offsite recovery, and they lack the application-level consistency guarantees that databases demand. If your MySQL instance is mid-transaction when a snapshot fires, you could restore to a corrupted state.
This gap between “we have snapshots” and “we have actual data protection” is one of the most common miscalculations teams make when adopting Kubernetes-based virtualization.
Snapshots give you a safety net for quick rollbacks. They don’t give you disaster recovery, offsite copies, or application-consistent backups, and confusing the two is how production incidents turn into data loss events.
Who Is SUSE Virtualization Built for?
Not every organization needs this platform, but the ones that do tend to share a common profile. Here’s a practical assessment process to help you evaluate whether SUSE Virtualization fits your situation:
- Audit your current hypervisor costs and contracts: If renewal pricing has jumped significantly or you’re locked into per-CPU licensing that doesn’t scale, you have a financial reason to evaluate alternatives.
- Inventory your workload mix: Count how many workloads run as VMs versus containers. If you’re operating both and maintaining two separate infrastructure stacks, consolidation onto a single Kubernetes-based platform removes real operational overhead.
- Assess your team’s Kubernetes maturity: SUSE Virtualization assumes familiarity with kubectl, YAML manifests, and Kubernetes concepts. If your infrastructure team already manages Kubernetes clusters, the learning curve is manageable. If they don’t, factor in training time.
- Map your data protection requirements: Determine your RPO and RTO targets for each workload class. If any VM carries production data that needs offsite backup, application-consistent snapshots, or cross-cluster disaster recovery, plan for a dedicated backup solution from day one.
- Run a proof of concept on non-critical workloads first: Migrate a handful of test VMs, validate networking and storage performance, and confirm that your existing monitoring tools integrate before committing production workloads.
Protecting SUSE Virtualization Workloads in Production
Why Built-In Snapshots Aren’t Enough
Longhorn snapshots and the basic VM backup functionality included with SUSE Virtualization cover simple rollback scenarios. They work for reverting a bad config change or recovering a test environment. However, they fall short in several critical areas that production workloads demand.
Snapshots stored on the same cluster as the original data don’t protect you from node failures, storage corruption, or site-level disasters. They also lack application-aware consistency, meaning a database mid-write could end up in a broken state after restore. And there’s no built-in mechanism for scheduling automated backups to offsite targets, enforcing retention policies, or performing cross-cluster recovery.
How Trilio for Kubernetes Fills the Gap
This is exactly the problem Trilio for Kubernetes was designed to solve. Rather than treating backup as an afterthought bolted onto a general-purpose tool, Trilio provides application-centric data protection built specifically for Kubernetes-native environments, including SUSE Virtualization.
| Capability | Longhorn Snapshots (Built-In) | Trilio for Kubernetes |
| --- | --- | --- |
| Application-consistent backups | No (crash-consistent only) | Yes: pre/post hooks for MySQL, PostgreSQL, Redis, and others |
| Offsite/remote backup targets | Not supported natively | S3-compatible storage, NFS, and cloud-native backends |
| Cross-cluster recovery | No | Yes: restore to a different cluster entirely |
| Incremental backups | No | Yes: reduces storage consumption and backup windows |
| Immutable backups (ransomware protection) | No | Yes: write-once storage support |
| Policy-driven automation | Manual or basic scheduling | Automated policies with retention management |
Trilio captures entire Kubernetes applications (persistent volumes, metadata, configurations, and Helm releases) as a single recoverable unit. That means when you restore, you get the complete application back, not just a disk image you have to reassemble manually. It integrates through native Kubernetes APIs, so your team manages backup the same way they manage everything else on the cluster: through YAML, kubectl, or the Trilio UI. If you’re exploring similar protection strategies for other virtualization platforms, the approach Trilio takes for OpenShift Virtualization backup follows the same application-centric model.
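As a rough illustration of that YAML-driven workflow, Trilio's protection is configured through custom resources such as a backup `Target` and a `BackupPlan`. The sketch below is a hypothetical outline, not copied from a live deployment; exact field names and API versions vary by Trilio release, so verify against the Trilio for Kubernetes documentation for your version:

```yaml
# Hypothetical sketch: offsite S3 target plus a nightly backup plan.
# Field names are assumptions; consult your Trilio release's CRD reference.
apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: s3-offsite
spec:
  type: ObjectStore
  objectStoreCredentials:
    url: https://s3.example.com    # example endpoint, not a real target
    bucketName: vm-backups
---
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: mysql-vm-plan
spec:
  backupConfig:
    target:
      name: s3-offsite
    schedulePolicy:
      incrementalCron:
        schedule: "0 2 * * *"      # nightly incremental at 02:00
```

The operational win is that backup policy lives in Git alongside the rest of the cluster configuration, reviewed and versioned like any other manifest.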
If your team is planning a production SUSE Virtualization deployment with stateful VMs, building data protection into the architecture from the start saves you from painful retrofitting later. Schedule a demo to see how Trilio handles backup and recovery for Kubernetes-native virtualization environments.
Getting Started With SUSE Virtualization
SUSE Virtualization offers IT teams a realistic, open-source alternative to legacy hypervisor lock-in, and it does so without requiring a complete shift to containers overnight. It brings VMs and containers together on a single Kubernetes-based stack, which reduces operational overhead and gives infrastructure teams a platform they can extend and automate on their own terms. That said, the platform alone only gets you halfway there. Stateful workloads in production demand that you think about data protection before your first incident forces the conversation.
A solid starting point is to run a proof of concept with non-critical VMs, test your storage and networking assumptions against real conditions, and define your backup and recovery requirements from the outset. That preparation is what makes the difference between a clean rollout and an expensive restart.
FAQs
Is SUSE Virtualization a direct replacement for traditional hypervisors like VMware?
It can replace VMware for many workloads, but it takes a different architectural approach by using Kubernetes as the orchestration layer instead of a traditional hypervisor management stack. Teams should run a proof of concept with representative workloads to confirm feature parity for their specific use cases before committing to a full migration.
Do I need to be a Kubernetes expert to use SUSE Virtualization?
You don’t need to be an expert, but your team should be comfortable with core Kubernetes concepts like YAML manifests, kubectl, and RBAC policies. Organizations without existing Kubernetes experience should plan for a learning curve and consider hands-on training before moving production workloads onto the platform.
Can I run Windows and Linux VMs on the same cluster?
The platform supports both Windows and Linux virtual machines running side by side with containerized workloads on the same Kubernetes cluster. This flexibility is one of the key reasons teams evaluating SUSE Virtualization find it practical for mixed environments with legacy applications.
How does storage performance compare to enterprise SAN arrays?
The built-in Longhorn storage works well for small and mid-sized deployments, but its performance depends on local node disks rather than dedicated storage hardware. For workloads that demand higher throughput or lower latency, you can integrate enterprise storage arrays from vendors like Dell or NetApp through CSI drivers.
Can I migrate existing VMware VMs to SUSE Virtualization?
Yes, but it requires planning rather than a simple lift-and-shift. You’ll need to convert VM disk formats and verify guest OS compatibility before migrating production workloads. Most teams start with non-critical VMs to validate networking and storage behavior on the new platform before touching anything business-critical.