Key Concepts and Best Practices for OpenShift Virtualization

How to Migrate from VMware to OpenShift Virtualization – Step by Step Instructions


This blog provides detailed steps to show you how to move from VMware to Red Hat OpenShift Virtualization using Red Hat’s Migration Toolkit for Virtualization (MTV) Operator.

To further help the reader, you can see a video of Trilio for OpenShift here, and a whitepaper about Trilio VMware migration to OpenStack.

In the remainder of this post we will detail the steps involved in migrating a test Ubuntu 20.04 LTS Server from VMware vCenter to OpenShift Virtualization.

VMware to OpenShift Migration Steps

The following are the detailed steps for migrating a VM from VMware vCenter to OpenShift using the Migration Toolkit for Virtualization (MTV) Operator from Red Hat.

The test migration setup we are using is as follows:

Prerequisites

Before we start installing the MTV Operator, this guide assumes that you already have the following installed and working:

Preparation

1. Install the Migration Toolkit for Virtualization (MTV) Operator

Install the MTV Operator from OperatorHub. You can use all the default values.
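If you prefer the CLI, the same install can be sketched declaratively. The openshift-mtv namespace, the mtv-operator package name, and the release-v2.6 channel are assumptions; check the channels actually available in your OperatorHub catalog before applying.

```yaml
# Sketch of an OperatorHub install of MTV via the CLI.
# Channel and package names are assumptions; verify in your catalog.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-mtv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: migration
  namespace: openshift-mtv
spec:
  targetNamespaces:
    - openshift-mtv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: mtv-operator
  namespace: openshift-mtv
spec:
  channel: release-v2.6
  installPlanApproval: Automatic
  name: mtv-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Apply it with "oc apply -f" and wait for the operator pods in openshift-mtv to come up.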

Now you will have new console items on the left side for Virtualization and Migration.

In order to start a VMware migration, we need to configure the MTV Operator for the specifics of the VMware environment. When you open up the Migration side panel, you will see 4 items: Providers, Plans, NetworkMaps and StorageMaps. All of these must be configured to perform a successful migration.

  1. Providers – details the platform integrations. You can migrate from VMware, Red Hat RHV, Red Hat OpenStack and other OpenShift clusters (version 4.x only)

  2. Plans – the migration plan detailing the source and destination platforms and options

  3. NetworkMaps – details how the networking translation is handled, i.e. mapping a VM Network to the OpenShift network layers

  4. StorageMaps – details how the individual ESXi datastores map to the cluster CSI storage classes

2. Create a Provider

Click to create a new provider. We add a provider for VMware:
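Providers can also be created declaratively instead of through the console. A hedged sketch follows, based on the upstream Forklift project that MTV is built on; the resource names, secret keys, and credentials shown are assumptions to adapt, and the thumbprint value is the SHA-1 fingerprint generated in the next step.

```yaml
# Sketch of a vSphere Provider and its credentials Secret for MTV.
# Names, secret keys and credential values are assumptions; verify
# against your MTV version's documentation.
apiVersion: v1
kind: Secret
metadata:
  name: vcenter-credentials
  namespace: openshift-mtv
type: Opaque
stringData:
  user: administrator@vsphere.local
  password: <vcenter-password>
  thumbprint: <sha1-fingerprint-from-the-next-step>
---
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vcenter
  namespace: openshift-mtv
spec:
  type: vsphere
  url: https://vcenter7-1.infra.trilio.io/sdk
  secret:
    name: vcenter-credentials
    namespace: openshift-mtv
```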

3. Create SHA-1 fingerprint

For the SHA-1 fingerprint, you will need to run the following openssl command from a Linux machine that has connectivity to vCenter:

    
     openssl s_client -connect vcenter7-1.infra.trilio.io:443 </dev/null | openssl x509 -in /dev/stdin -fingerprint -sha1 -noout
    
   

4. Create Network Map

Click on “Create NetworkMap”. Give it a name and fill in the “Source provider” and “Target provider” using the drop-downs; there should be only one option for each. Then click “Add” to add a network mapping. The “Source networks” dropdown should present you with a list of the VMware networks from vCenter. Select the network(s) that are attached to your source VM. There should be only one option for “Target namespaces/networks”, which is “Pod network”. Finally, click “Create”.
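The same mapping can be sketched as a NetworkMap custom resource. The forklift.konveyor.io API group comes from the upstream Forklift project; the map names, the "host" destination provider (the name MTV typically gives the local cluster), and the "VM Network" source name are assumptions to replace with your own values.

```yaml
# Sketch of a NetworkMap mapping a vCenter network to the pod network.
# Provider and network names are assumptions from this walkthrough.
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: demo-network-map
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: vcenter
      namespace: openshift-mtv
    destination:
      name: host
      namespace: openshift-mtv
  map:
    - source:
        name: VM Network
      destination:
        type: pod
```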

Check that the network map is ready.

5. Create StorageMaps

Click on “Create StorageMap”. Give it a name and fill in the “Source provider” and “Target provider” using the drop-downs; there should be only one option for each. Then click “Add” to add a storage mapping. The “Source datastores” dropdown should present you with a list of the VMware datastores from vCenter. Select the datastore(s) that are attached to your source VM. The “Target storage classes” dropdown should already have your default storage class selected, which is fine. Finally, click “Create”.
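A StorageMap custom resource follows the same shape. As with the NetworkMap sketch, the API group comes from the upstream Forklift project, and the datastore and storage class names below are placeholders for your own environment.

```yaml
# Sketch of a StorageMap mapping an ESXi datastore to a CSI storage class.
# "datastore1" and the storage class name are example values.
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: demo-storage-map
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: vcenter
      namespace: openshift-mtv
    destination:
      name: host
      namespace: openshift-mtv
  map:
    - source:
        name: datastore1
      destination:
        storageClass: ocs-storagecluster-ceph-rbd
```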

Check that the storage map is ready.

Operation

1. Create a migration Plan

Click “Create plan”. There are 7 steps:

1. General settings

Create a new project for the imported VM to be placed.

    
     oc new-project demo-ubuntu-migration
    
   

2. VM selection

Choose your VM by navigating down through your VMware inventory list of datacenter, cluster, folder.

3. Network mapping

4. Storage mapping

5. Type

Here you select your type of migration, Cold (offline) or Warm (online).

NOTE – In order to use Warm migration, Change Block Tracking (CBT) has to be enabled on the VM. If you would like to enable this on a VM, you first have to shut down the VM in order to edit its settings.

Click “Advanced”, then scroll down and click “Edit Configuration”.

Click “Add Configuration Parameters”.

Add these settings (if you only have one disk, then you only need one of them):

    
     scsix:x.ctkEnabled = "TRUE"
    
   
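For reference, VMware's guidance on enabling CBT typically pairs a VM-level parameter with one entry per virtual disk. A sketch for a VM with a single disk on the first SCSI controller follows; the scsi0:0 key is an example, so match it to the controller and unit number of your own disk.

```
ctkEnabled = "TRUE"
scsi0:0.ctkEnabled = "TRUE"
```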

6. Hooks - optional

Here you can enter any hooks for Ansible scripts to run post migration automation. We are going to skip this step, click Next.

7. Review

Check that everything has been entered correctly and then click “Finish”.
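The whole plan built in the wizard above can also be expressed as a Plan custom resource. This is a hedged sketch based on the upstream Forklift project: the provider, map, and VM names are the example values used in this walkthrough, and the field layout should be verified against your MTV version.

```yaml
# Sketch of a Plan tying together providers, maps and the VM to migrate.
# All names are example values from this walkthrough.
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: demo-ubuntu-migration
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: vcenter
      namespace: openshift-mtv
    destination:
      name: host
      namespace: openshift-mtv
  targetNamespace: demo-ubuntu-migration
  warm: false        # set true for a Warm migration (requires CBT)
  map:
    network:
      name: demo-network-map
      namespace: openshift-mtv
    storage:
      name: demo-storage-map
      namespace: openshift-mtv
  vms:
    - name: jeff-ubuntu-1
```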

Start a migration Plan

Click the Start button to begin the migration. You can click the arrow on the left to expand the details to watch the migration process in stages.

To see even more detail, you can look at the migration pod logs in the target namespace.

The process can take some time. This test VM in my lab took 2 hours to complete. There are a lot of factors that play into the migration speed: network latency, storage latency, etc.

When the plan is complete, all the stages will show green and the VM will be started in the target namespace.

You can now look at the details of the VM and login to the console on the Virtualization tab of the OpenShift console.

Each type of operating system that you import will potentially have some post-migration steps necessary to get the VM working properly in the OpenShift environment. In the case of Ubuntu, networking will be broken when the VM is migrated. The reason is that the network config is tied to the Ethernet device name, and that device name changes between VMware and OpenShift. In Ubuntu, the Ethernet device naming convention is based on what the OS sees in the PCI hardware of the NIC. When the VM is migrated, the virtual NIC hardware that the OS detects can differ between VMware’s virtual NIC and OpenShift’s – hence “ens160” becomes “enp1s0”.

The place where you change the networking configuration in Ubuntu is the file

    
     /etc/netplan/00-installer-config.yaml
    
   

You first need to confirm the new virtual NIC name in OpenShift. From the console, run "ip addr" and make note of the eth device name, e.g. enp1s0.

Now modify the

    
     /etc/netplan/00-installer-config.yaml
    
   

file (using sudo) and change the eth device name.

Save the file, then run "sudo netplan apply" and check "ip addr" again. You should now see an IP address assigned, and network connectivity should be working again.
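The manual edit above can also be scripted. A minimal sketch follows; the helper name is hypothetical, and the ens160/enp1s0 device names are the examples from this walkthrough, so substitute the names from your own "ip addr" output.

```shell
# Hypothetical helper: swap the old NIC name for the new one in a
# netplan config, keeping a backup copy alongside it.
fix_netplan_nic() {
  cfg=$1; old=$2; new=$3
  cp "$cfg" "$cfg.bak"           # keep a backup before editing
  sed -i "s/$old/$new/g" "$cfg"  # rename the device in place
}

# Inside the migrated VM you would run (with sudo):
#   fix_netplan_nic /etc/netplan/00-installer-config.yaml ens160 enp1s0
#   netplan apply
```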

Allow SSH Access: Expose network access to the VM

Next you will likely want to open up some type of network access to the VM depending on what type of services or applications you have running on the VM. Let’s use a simple example of enabling ssh access to the VM from outside of the cluster.

First, find the pod name of the pod for your running migrated VM.

    
     % oc get all
NAME                                        READY   STATUS      RESTARTS   AGE
pod/ubuntu-demo-migration-vm-451395-sbqwc   0/1     Completed   0          23h
pod/virt-launcher-ubuntu-demo-4qb2p         1/1     Running     0          21h
NAME                                             AGE   PHASE     IP             NODENAME                 READY
virtualmachineinstance.kubevirt.io/ubuntu-demo   21h   Running   10.131.0.159   dev-86m78-worker-bwghd   True
NAME                                     AGE   STATUS    READY
virtualmachine.kubevirt.io/ubuntu-demo   21h   Running   True
    
   

Then use the “oc expose” command to create a service for that pod.

    
     % oc expose pod virt-launcher-ubuntu-demo-4qb2p --port 22 --name ssh
service/ssh exposed
% oc get all
NAME                                        READY   STATUS      RESTARTS   AGE
pod/ubuntu-demo-migration-vm-451395-sbqwc   0/1     Completed   0          23h
pod/virt-launcher-ubuntu-demo-4qb2p         1/1     Running     0          21h
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/ssh   ClusterIP   172.30.21.24   <none>        22/TCP    8s
NAME                                             AGE   PHASE     IP             NODENAME                 READY
virtualmachineinstance.kubevirt.io/ubuntu-demo   21h   Running   10.131.0.159   dev-86m78-worker-bwghd   True
NAME                                     AGE   STATUS    READY
virtualmachine.kubevirt.io/ubuntu-demo   21h   Running   True
    
   

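One caveat with exposing the launcher pod by name: that pod name changes whenever the VM restarts, which leaves the Service pointing at nothing. A more durable sketch selects the launcher pod by the kubevirt.io/domain label that KubeVirt sets to the VM name (ubuntu-demo here, matching the output above; namespace and Service name are this walkthrough's example values).

```yaml
# Sketch of a Service that survives VM restarts by selecting the
# launcher pod via the kubevirt.io/domain label instead of its name.
apiVersion: v1
kind: Service
metadata:
  name: ssh
  namespace: demo-ubuntu-migration
spec:
  selector:
    kubevirt.io/domain: ubuntu-demo
  ports:
    - name: ssh
      port: 22
      protocol: TCP
      targetPort: 22
```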
Next, expose the service; OpenShift will create a route using the DNS domain of the cluster.

    
      % oc expose service ssh
route.route.openshift.io/ssh exposed
% oc get routes
NAME   HOST/PORT                                                       PATH   SERVICES   PORT   TERMINATION   WILDCARD
ssh    ssh-demo-ubuntu-migration.apps.dev.staging.presales.trilio.io          ssh        22                   None
    
   

You can now ssh to the VM using the route provided by OpenShift.

    
     % ssh jeff@ssh-demo-ubuntu-migration.apps.dev.staging.presales.trilio.io
The authenticity of host 'ssh-demo-ubuntu-migration.apps.dev.staging.presales.trilio.io (172.31.6.9)' can't be established.
ED25519 key fingerprint is SHA256:zLaG17PVqc/BimP9Q25vdgK7x2ppv3+sHL12ZN6FfpM.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'ssh-demo-ubuntu-migration.apps.dev.staging.presales.trilio.io' (ED25519) to the list of known hosts.
Welcome to Ubuntu 20.04.6 LTS (GNU/Linux 5.4.0-176-generic x86_64)
 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro
  System information as of Thu 11 Apr 2024 09:51:43 PM UTC
  System load:  0.02              Processes:               240
  Usage of /:   48.2% of 8.87GB   Users logged in:         1
  Memory usage: 13%               IPv4 address for ens160: 172.31.6.25
  Swap usage:   0%
 * Strictly confined Kubernetes makes edge and IoT secure. Learn how MicroK8s
   just raised the bar for easy, resilient and secure K8s cluster deployment.
   https://ubuntu.com/engage/secure-kubernetes-at-the-edge
Expanded Security Maintenance for Applications is not enabled.
0 updates can be applied immediately.
Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status
New release '22.04.3 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Thu Apr 11 19:38:47 2024 from 172.16.174.8
jeff@jeff-ubuntu-1:~$ 
    
   

Conclusion

We have now shown how you can use the Red Hat Migration Toolkit for Virtualization (MTV) Operator to migrate VMs from VMware vCenter into OpenShift.

Now that VMs have been migrated into OpenShift, the next question is: how will you protect the important data on the VM? How do you back up the VM to protect its running state and the application data it contains?

This is where Trilio comes in. Trilio is a native data protection solution for OpenShift; it is even built right into the OpenShift console UI.

Here is a video on how you can backup your OpenShift Virtualization VMs directly from the OpenShift Console with Trilio:

We can back up this VM and then restore it to the original namespace, or restore it to a new namespace, essentially making a copy of the VM on the same cluster.

Trilio can also be used across multiple clusters for application migration or disaster recovery use cases.

Here is a video on how you can restore your OpenShift Virtualization VMs directly from the OpenShift Console with Trilio:

As mentioned earlier, Trilio already offers software to assist with VMware migration to Red Hat OpenStack, and we welcome the opportunity to discuss options and help you select the appropriate strategy.