Reference Guide: Optimizing Backup Strategies for Red Hat OpenShift Virtualization

Container orchestration platforms such as OpenShift provide built-in developer tooling for automated deployment, infrastructure provisioning, scaling, and management of containerized applications, allowing developers to deliver applications faster. Understanding the different OpenShift offerings—especially Red Hat OpenShift, which is widely used in enterprises—along with their components and how they are deployed and managed helps administrators, developers, and DevOps professionals run clusters and application deployments efficiently.

This tutorial provides a comprehensive overview of OpenShift and demonstrates how to install it locally and how to deploy and manage applications on it for learning and development purposes.

Summary of key OpenShift concepts

The following table summarizes the key terminology used in this article and briefly introduces each concept.

| Concept | Description |
|---|---|
| Project | A project is used to logically group applications and related resources, provide access, and achieve isolation. |
| Routes | A route is used to access a service externally using a hostname and is similar to an ingress in Kubernetes. |
| Operators | An operator is an application-specific controller used to deploy and manage an application's lifecycle and associated services. |
| Source-to-image | S2I is an application-building approach that takes source code and creates a container image from it using a builder image. |
| Build config | A build config is a configuration file that outlines the process of building a container image from source code or a Dockerfile. |

Introduction to OpenShift

This section briefly introduces OpenShift, its use as a container orchestration platform, and its value proposition for developers, DevOps, and the related ecosystem. It also discusses OpenShift’s relationship with Kubernetes, some of its key features, and its developer friendliness. 

OpenShift vs. Kubernetes

The table below lists the key differences between OpenShift and Kubernetes, including the benefits of OpenShift over Kubernetes.

| Factor | Kubernetes | OpenShift |
|---|---|---|
| Nature/type of product | Upstream open-source container orchestration platform | Enterprise Kubernetes that includes security, developer tools, RBAC, and more |
| Cost/licensing | Open-source and free | OpenShift OKD (the upstream open-source distribution) is free; Red Hat OpenShift is license-based |
| Operating system optimization/support | Supports multiple Linux distributions | Optimized for Red Hat Enterprise Linux CoreOS (RHCOS) and Red Hat Enterprise Linux; OKD uses Fedora CoreOS |
| Role-based access control (RBAC) capability | Limited RBAC capabilities with manual configuration and a lack of fine-grained access control | Full-fledged RBAC support with projects, predefined roles, and the ability to create groups and users |
| Security features | User responsible for implementing security | Comes with enterprise security features such as policies that disallow root containers, security context constraints (SCCs), and OAuth |
| Integrated features | Does not include built-in tools but supports integrating them | Built-in features include image registry, CI/CD pipelines, and S2I |
| Enterprise support | No enterprise support | Includes enterprise support |

OpenShift offerings

OpenShift is available as a community-driven open-source distribution called OKD (Origin community distribution), previously called OpenShift Origin. It is the upstream project for Red Hat OpenShift, an enterprise-licensed distribution available in both managed and self-managed options.

Managed options 

Managed Red Hat OpenShift is available on popular cloud providers through a partnership between Red Hat and the cloud providers. 

The following managed cloud service options are available at the moment:

  • Red Hat OpenShift Service on AWS: This offering is hosted on AWS infrastructure and billed by AWS. Red Hat and AWS jointly manage and support this service.
  • Microsoft Azure Red Hat OpenShift: This product is hosted on Azure and billed by Microsoft. Red Hat and Microsoft jointly manage and support it.
  • Red Hat OpenShift Dedicated on Google Cloud: This cloud service is hosted on Google Cloud and billed separately for OpenShift and infrastructure by Red Hat and Google Cloud, respectively. Red Hat manages and supports this service. 
  • Red Hat OpenShift on IBM Cloud: IBM hosts, bills, and manages the service, and Red Hat and IBM jointly provide support. 

Self-managed options

The self-managed approach allows you to install Red Hat OpenShift (or OKD) on a supported platform of your choice, such as VMware vSphere, Nutanix, OpenStack, or even cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).

Red Hat offers a tiered portfolio of OpenShift solutions to meet various enterprise needs:

  • Red Hat OpenShift Kubernetes Engine represents the foundational, entry-level license, providing enterprise Kubernetes capabilities. It allows users to run containers on the Red Hat Enterprise Linux CoreOS (RHCOS) platform and includes basic security features.
  • Red Hat OpenShift Container Platform builds upon the Kubernetes Engine and includes features such as a developer console, OpenShift Serverless, Service Mesh, pipelines for Continuous Integration/Continuous Delivery (CI/CD), and GitOps, alongside all the standard functionalities of the OpenShift Kubernetes Engine.
  • Red Hat OpenShift Platform Plus includes advanced solutions such as advanced cluster management, advanced cluster security, OpenShift Data Foundation, and Red Hat Quay, in addition to all Red Hat OpenShift Container Platform capabilities.

Red Hat OpenShift license options (source)

In addition to the above core platforms, Red Hat provides specialized engines for specific functionalities. Red Hat OpenShift Virtualization Engine is one such solution for running virtual machines using OpenShift Virtualization.

Red Hat OpenShift concepts

The Red Hat OpenShift Container Platform (RHOCP) provides many enterprise-grade features that make it a comprehensive platform for cloud-native applications. Some of these features are:

  • An integrated developer experience and toolset that includes capabilities like a feature-rich and intuitive built-in web console with separate perspectives for administrators and developers, direct code-to-container image creation using source-to-image (S2I), developer workflows that support DevOps practices, application templates, etc.
  • A purpose-built, secure, and minimal container-optimized operating system called Red Hat Enterprise Linux CoreOS (RHCOS)
  • Integrated OAuth-based authentication and role-based access control (RBAC)
  • Default security features, such as security policies and full encryption
  • An operator-driven approach, including the OperatorHub, providing a catalog of certified operators
  • Simpler and more automated installation and update management of the entire platform
  • Enterprise support

Architecture

Red Hat OpenShift’s architecture closely resembles that of Kubernetes. A typical OpenShift deployment consists of control plane nodes and compute (worker) nodes.

The control plane nodes run the usual Kubernetes services such as the API server, controller manager, scheduler, and etcd. In addition to these, OpenShift-specific services—including certain networking components and the Cluster Version Operator—are also in the mix.

The following image shows a typical Red Hat OpenShift deployment architecture.

Red Hat OpenShift architecture (source)

Concepts

Although OpenShift is built on the foundations of Kubernetes, certain important concepts, components, and terms extend and enhance it beyond basic container orchestration. Let’s look at the ones that matter most for effective platform administration and operation:

  • A project is the fundamental organizational unit in OpenShift. Projects are built on top of Kubernetes namespaces, with additional annotations that provide extended isolation and, most importantly, control which users and teams can access the resources within them. An OpenShift cluster contains default projects, starting with openshift- or kube-, that hold cluster components.
  • Operators are extensions to OpenShift that enable applications to be packaged, deployed, and managed using native APIs. They use custom resource definitions (CRDs) to introduce new custom object types that let users deploy Kubernetes-native applications.
  • The embedded OperatorHub is a marketplace where you can browse and install operators from multiple providers. It contains operators from Red Hat, Red Hat-certified operators from ISVs, and community operators.
  • The Operator Lifecycle Manager (OLM) helps manage the lifecycle of operators, including installing and updating them, giving projects access to them, and making configuration changes.
  • Source-to-image (S2I) is a framework and tool, commonly used with OpenShift, that uses builder images to create container images directly from application code. The builder images contain the necessary runtime and build environment for various languages and frameworks. With S2I, compiling, packaging, and configuring applications in a containerized format is automated.
  • A build configuration is a definition that describes how a container image should be created. It includes information such as the location of the code, the build strategy (e.g., S2I), where to push the resulting image, and sometimes a Dockerfile. A minimal example follows this list.
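
To make the build config concept concrete, here is a minimal sketch of an S2I BuildConfig created with the oc CLI. The project name (blog-demo), application name (blog-site), and output image tag are hypothetical placeholders; the builder image reference assumes the standard python image stream that ships in the openshift namespace.

# Sketch only: blog-demo, blog-site, and the output tag are illustrative names.
oc new-project blog-demo
oc create imagestream blog-site
oc apply -f - <<'EOF'
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: blog-site
  namespace: blog-demo
spec:
  source:
    type: Git
    git:
      uri: https://github.com/openshift-instruqt/blog-django-py
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: python:latest
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: blog-site:latest
EOF
# Start a build from this configuration and stream its logs
oc start-build blog-site --follow

This is essentially what the oc new-app command generates for you in the deployment walkthrough later in this article.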

Red Hat OpenShift deployment options

There are different ways to test drive and deploy OpenShift, with options such as Red Hat OpenShift Service on AWS (ROSA), the Developer Sandbox, Red Hat OpenShift Dedicated on GCP, and the self-managed Red Hat OpenShift Container Platform. For self-managed installations, infrastructure can be provisioned in two primary ways: user-provisioned infrastructure (UPI) and installer-provisioned infrastructure (IPI).

As of this writing, you can install RHOCP on the following platforms: 

  • Amazon Web Services (AWS) on 64-bit x86/ARM instances
  • Microsoft Azure on 64-bit x86/ARM instances
  • Microsoft Azure Stack Hub
  • Google Cloud Platform (GCP) on 64-bit x86/ARM instances
  • Red Hat OpenStack Platform (RHOSP)
  • IBM Cloud
  • IBM Z or IBM LinuxONE with z/VM or Red Hat Enterprise Linux (RHEL) KVM or LPAR
  • IBM Power
  • IBM Power Virtual Server
  • Nutanix
  • VMware vSphere
  • Bare metal or other platform-agnostic infrastructure

An OpenShift deployment process typically requires a bastion host/workstation, a bootstrap node, and one or more cluster nodes (i.e., control plane nodes and worker nodes).

There are four installation methods available, which are described below.

Assisted installer

The assisted installer is a web-based installation method that is ideal for connected environments. Based on the selection in the web console, a discovery image is created that is used to boot the cluster machines. This image installs RHCOS and an agent that registers the cluster hosts, gathers inventory, and performs various installation and configuration steps. A REST API for the assisted installer is also available to automate this process.

The assisted installer method does not require an installer program to run locally, does not use a separate bootstrap node, and does not involve manual processes such as creating manifest and Ignition files. These features make it very user-friendly for new administrators.

Agent-based installer

The agent-based installer method is ideal for disconnected or air-gapped environments. In this process, you create an ISO that contains all the information about the nodes. This bootable ISO also includes an assisted discovery agent and the assisted service. One of the control plane hosts runs the assisted service and becomes the temporary bootstrap node during cluster setup. Once all other non-bootstrap nodes are set up, the bootstrap node reboots and joins the cluster as an OpenShift node.  

You will need a bastion host or a workstation to run the openshift-install program and generate the config files and the ISO.
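
As a rough sketch (assuming OpenShift 4.12 or later, where the agent-based installer is available), generating the bootable ISO from the workstation looks something like this; the directory name and its contents are placeholders you supply yourself:

# Sketch only: ./agent-assets is a hypothetical directory that must already contain
# your install-config.yaml and agent-config.yaml describing the cluster and its hosts.
./openshift-install agent create image --dir ./agent-assets --log-level=info
# The resulting agent ISO (written into the assets directory) is used to boot each node.
# Optionally monitor installation progress from the same workstation:
./openshift-install agent wait-for install-complete --dir ./agent-assets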

User-provisioned infrastructure (UPI)

In the UPI method, you are responsible for creating the infrastructure on which OpenShift will run, which means you provision the bootstrap node and all the cluster nodes. You then use an install-config file to generate manifest files, followed by the Ignition files—one each for the bootstrap, control plane (master), and worker roles—which are then made available through a web server. Each infrastructure node is booted from an ISO that installs RHCOS; on reboot, the nodes use their respective Ignition files to start, configure themselves, and complete the cluster setup process.
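
As a minimal sketch of the preparatory steps on the bastion host (the directory name is a placeholder, install-config.yaml must be written beforehand, and this is not the complete UPI procedure):

# Sketch only: ./upi-assets is a placeholder directory containing your install-config.yaml.
# Each command consumes files generated by the previous step.
./openshift-install create manifests --dir ./upi-assets
./openshift-install create ignition-configs --dir ./upi-assets
# The generated bootstrap.ign, master.ign, and worker.ign files must be served over HTTP
# so the RHCOS nodes can fetch them at first boot, for example:
python3 -m http.server 8080 --directory ./upi-assets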

This method provides more control and flexibility over infrastructure provisioning; the downside is the higher degree of manual effort. The UPI method also requires a bastion host or workstation for preparatory steps and the hosting of the ignition files. 

Installer-provisioned infrastructure (IPI)

The IPI method automatically creates and configures the infrastructure required to deploy the OpenShift cluster, providing a turnkey solution with fewer manual steps. In this method as well, you use a bastion host/workstation to download the openshift-install and oc programs; however, the rest of the steps are automated. You simply run the openshift-install create cluster command, provide the target platform details (such as the vCenter server address, username, and password for vSphere), and let the installer do the rest.

The IPI method is supported on many popular cloud and virtualization platforms, such as AWS, Azure, GCP, vSphere, and Nutanix. 
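
The flow looks roughly like the sketch below; the asset directory name is a placeholder, and the installer either prompts interactively for platform details or reads them from an install-config.yaml you provide:

# Sketch only: ./ipi-assets is a hypothetical directory for the generated cluster assets.
./openshift-install create cluster --dir ./ipi-assets --log-level=info
# When the installer finishes, it prints the console URL and kubeadmin password;
# the kubeconfig is written to ./ipi-assets/auth/kubeconfig for use with oc.
export KUBECONFIG=./ipi-assets/auth/kubeconfig
oc get nodes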

Installing Red Hat OpenShift Local

In the following sections, we discuss the steps to install Red Hat OpenShift Local on a machine and explore how to deploy an application to it and operate it. Note that this installation method is not meant for production scenarios. 

This demonstration uses a Windows machine for the installation, but the setup process is the same on Linux and macOS.

Download prerequisites

Start by navigating to https://developers.redhat.com/download-manager/link/3868678 and signing in. This will take you to the Create an OpenShift cluster page, where you will be asked to select the cluster type. Local should already be selected. Choose your local operating system and click Download OpenShift Local. You must also download the pull secret by clicking the Download pull secret button. This will be required during the installation. 

Download the crc binary and pull secret

Extract the downloaded file to reveal the crc binary that you will run to install Red Hat OpenShift Local.

If you are on Linux or macOS, simply select the operating system from the drop-down menu to download the correct archive and extract it to use the crc binary. The setup and start procedures demonstrated here are the same.

Set up and start OpenShift Local

From a command prompt, run the following command to configure the operating system for OpenShift Local:

crc setup

This performs a few system-requirement checks and then downloads, installs, and configures the necessary components to set up crc. For example, on a Windows host, the output looks similar to the screenshot below:

Output of crc setup command on a Windows host
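
Optionally, you can adjust the resources the local cluster will use before starting it. The values below are illustrative; the crc config subcommands themselves are part of the standard tool:

# Example resource tuning for OpenShift Local (values are illustrative)
crc config set cpus 6          # number of vCPUs for the CRC virtual machine
crc config set memory 16384    # memory in MiB
crc config set disk-size 80    # disk size in GiB
crc config view                # confirm the settings before starting the cluster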

Next, run crc start to start OpenShift Local.

crc start

When prompted, enter the pull secret downloaded earlier. Wait until the cluster is successfully started. The console will print the web console URL along with the admin and developer credentials to log in.

Output of crc start command on a Windows host
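
If you need the console URL or the credentials again later, crc can reprint them:

crc status                # shows whether the cluster and its VM are running
crc console --credentials # reprints the kubeadmin and developer login credentials
crc console --url         # prints the web console URL without opening a browser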

Launch the web console

The web console is available locally at https://console-openshift-console.apps-crc.testing by default. Open this URL in a browser and log in using the kubeadmin credentials displayed in the output earlier. If you are testing user capabilities, you should log in using the developer credentials. 

If the browser shows a certificate warning, click Advanced and choose Accept the Risk and Continue.

The dashboard should look similar to the screenshot below:

The Red Hat OpenShift web console administrator view

CLI authentication and access

You can also use the oc command line tool to access and interact with the cluster. The client utilities for your operating system can be downloaded from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/.

Run the following command to verify that the oc command works and also to ensure that the client and server versions match.

oc version

Next, run the oc login command as shown below to log in. In this example, we are logging in as the developer user.

oc login https://api.crc.testing:6443 -u developer -p developer

The following screenshot shows the output of the commands above.

Output of oc login command on a Windows host
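
To confirm which user and cluster your CLI session is using, a few quick checks are available:

oc whoami               # prints the current user, e.g., developer
oc whoami --show-server # prints the API server URL the session points at
oc project              # shows the currently selected project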

To learn about production installation methods, visit the Red Hat OpenShift installation documentation section and select your platform. 

Deploying an application on OpenShift

In this hands-on section, we will see how to deploy an application to OpenShift. You can deploy an application directly from a source repository such as GitHub or by using a YAML manifest, as in Kubernetes. We will cover both of these approaches here.

From source/Git repository

Start by creating a new project:

oc new-project blog

For this demonstration, we will use a sample Python and Django blog application at https://github.com/openshift-instruqt/blog-django-py. Run the following command to create the app from the command line:

oc new-app python:latest~https://github.com/openshift-instruqt/blog-django-py --name openshift

Important note: When building an application from source (as in this example), the oc new-app command relies on OpenShift’s internal image registry to store the resulting image. While OpenShift Local (formerly CodeReady Containers) typically ships with the internal image registry enabled by default, many other OpenShift cluster setups do not enable it by default for security and operational reasons.

If the internal image registry is not configured and accessible in your OpenShift cluster, the oc new-app command for source-to-image (S2I) builds will fail when it attempts to push the newly created image at the end of the build. Ensure that your OpenShift environment has an accessible image registry configured, and that you have the necessary permissions to use it, for this command to succeed.

The command triggers the application build process: it creates a build configuration for the container image and runs the build. In the command above, we also specify the build approach (S2I) by prefixing the repository URL with python:latest, the builder image to use.
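
To see what oc new-app generated, you can inspect the project. The label selector below assumes the default app=openshift label that oc new-app applies based on the --name flag:

oc status                                                # summarizes the app, its build, and its deployment
oc get buildconfig,imagestream,deployment,service -l app=openshift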

You can view the logs and the build process by running the following command: 

oc logs -f buildconfig/openshift

Note that this process could take over 5 minutes.

Next, we will expose the application so we can access it from outside the cluster with the command:

oc expose service/openshift

Retrieve the URL with the command:

oc get route openshift --template '{{ .spec.host }}'

Finally, navigate to the link in a browser to view the blog. It should appear as in the screenshot below:

Application deployed in OpenShift

From a YAML file

You can also deploy a workload using a YAML file, similar to how it is done in Kubernetes. 

Let’s create a separate project for this with the command:

oc new-project vote

Then we’ll use the oc apply command as below:

oc apply -f https://raw.githubusercontent.com/Azure-Samples/azure-voting-app-redis/refs/heads/master/azure-vote-all-in-one-redis.yaml

This will create multiple deployments and services as defined in the YAML file. You should see a console output confirming that the resources were created. 
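
To verify that the workloads come up and to reach the front end from outside the cluster, you can create a route for the front-end service. At the time of writing, that service is named azure-vote-front in the sample manifest; adjust the name if the upstream file changes:

oc get deployment,pods,svc -n vote                       # wait until the deployments report ready
oc expose service azure-vote-front -n vote
oc get route azure-vote-front -n vote --template '{{ .spec.host }}'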

Managing OpenShift

You can operate and manage OpenShift using the web console, the oc CLI utility, or both. Let’s take a look at each from both the administrator’s and the developer’s perspectives.

Using the web console

To view all projects across the entire cluster, select Projects under Home. This is also where you can create a new project as an administrator.

Projects view of the web console as an administrator

As an administrator, you can view workloads such as pods, deployments, secrets, config maps, and more for the entire cluster. 

Similarly, you can view and manage networking, storage, and compute-specific objects. You can also view builds, build configs, and image streams across the entire cluster.  

For example, the Details tab under Administration > Cluster Settings is where you, as an administrator, can view cluster details such as the current OpenShift version and the cluster ID, view and apply available updates, and manage your Red Hat OpenShift subscription.

Cluster Settings view of the web console as an administrator

The ClusterOperators tab in Cluster Settings lists the cluster operators along with their status and version. The status of these operators reflects the overall health of the cluster, so monitoring them is an essential administrative responsibility.

Cluster settings showing ClusterOperators
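
The same information is available from the CLI, which is handy for scripting health checks:

oc get clusteroperators            # lists each operator with its availability, progressing, and degraded status
oc describe clusteroperator dns    # detailed conditions for a single operator (dns is just an example)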

The Configuration tab in Cluster Settings allows you to manage and modify the configuration of cluster resources.

Cluster Settings showing the Configuration section in the web console

To change to the developer view or perspective, click the Administrator drop-down from the menu and select Developer.

Changing from the administrator to developer perspective in the web console

The navigation menu for the developer view is different and focuses on developer tasks such as building, deploying, and managing applications and workloads. The following image shows the developer view for the kubeadmin user with the topology of the project blog (created earlier) selected.

Developer view of the web console

Using the CLI

You can also manage the OpenShift cluster using the CLI. The oc adm command is used for cluster administration tasks such as managing nodes, security policies, certificates, and other low-level operations requiring elevated privileges. Let’s take a look at some of them.

Administrators are often tasked with upgrading a cluster. You can use the following command to view update status and the available cluster updates.

oc adm upgrade

Important note: Commands like oc adm upgrade are designed for managing production-grade, multi-node OpenShift clusters and typically require cluster-admin privileges. These commands are generally not applicable or functional within single-node, local development environments like OpenShift Local (formerly CodeReady Containers), as OpenShift Local manages its own lifecycle and updates independently. For OpenShift Local, updates are typically handled by upgrading the application itself.

To update to a specific version, use this command:

oc adm upgrade --to='4.18.17'

To disable scheduling of new pods on a node, cordon it using:

oc adm cordon worker1

And to uncordon or enable scheduling, run:

oc adm uncordon worker1

To drain a node of its pods before maintenance, use the following command (the extra flags are commonly needed to evict daemonset-managed pods and pods using emptyDir volumes):

oc adm drain worker1 --ignore-daemonsets --delete-emptydir-data

Certificate signing requests can be approved with this command:

oc adm certificate approve csr-abcde

Administrators can collect debugging data for one or more cluster operators using this command:

oc adm inspect clusteroperator/openshift-apiserver

Use this to gather debug information for support:

oc adm must-gather

Node logs can be collected with the following command (here, the kubelet journal unit on worker1):

oc adm node-logs worker1 -u kubelet

You can also access a complete list of OpenShift CLI administrator commands and a list of OpenShift CLI developer commands.

Conclusion

Red Hat OpenShift is a robust container orchestration platform that extends Kubernetes with enterprise features such as developer tooling, comprehensive RBAC, security policies, a purpose-built container OS, and streamlined operations. It includes a rich and intuitive web console for developers, administrators, and virtualization administrators, a CLI, integrated CI/CD pipelines, image-building tools, and more, making it a comprehensive platform for deploying and managing containerized applications. With a trusted enterprise operating system as its base, it is a versatile platform for building, deploying, and managing cloud-native applications across industries, and it can be deployed as a public, hybrid, or private containerization platform.

Trilio for OpenShift is a cloud-native backup and restore application specifically designed to protect your workloads within Red Hat OpenShift environments. It’s easy to get started with and is available as a Certified Operator in the OpenShift OperatorHub for seamless installation and management. 

Try the demo to see how Trilio integrates seamlessly with the OpenShift Console and helps you manage all your backup and recovery operations directly within the UI.
