OpenStack and OpenShift are powerful platforms that have redefined the implementation of cloud-native applications over the last few years. OpenStack, born out of the joint efforts of NASA and Rackspace in 2010, is an open-source cloud operating system that provides comprehensive services for building and operating public and private clouds. OpenShift, developed by Red Hat, is a container platform built on Kubernetes, the widely adopted open-source orchestration engine that simplifies the deployment and management of cloud-native applications. While OpenStack offers the foundational infrastructure for building cloud environments, OpenShift provides a higher-level platform for deploying and managing cloud-native applications.
While traditionally seen as distinct entities, these platforms can be integrated to complement one another. The recent release of Red Hat OpenStack Services on OpenShift (RHOSO) represents this integration in its most advanced form: You manage the cloud infrastructure (OpenStack) like a modern application, using the power of containers and Kubernetes (OpenShift). This means easier setup, smoother scaling, and simpler updates.
This article delves into the practical aspects of configuring and running OpenStack on OpenShift. We also highlight how RHOSO addresses longstanding OpenStack complexity challenges and walk through the various stages, from initial setup to ongoing management and optimization.
Summary of key stages when deploying OpenStack on OpenShift
The following table outlines the essential stages for integrating OpenStack services into your OpenShift environment.
| Stage | Description |
| --- | --- |
| Understanding traditional OpenStack setups | Learn about the typical architecture of an OpenStack deployment, including the roles of control nodes, compute nodes, and storage nodes. |
| Preparing the OpenShift platform for hosting OpenStack services | Prepare your OpenShift cluster for OpenStack deployment by verifying software prerequisites and ensuring sufficient hardware resources. |
| Deploying OpenStack on OpenShift | Configure OpenStack services on OpenShift, including key touch points like operator installation, network configuration, and storage setup. |
| Optimizing OpenStack services on OpenShift | Fine-tune services to optimize the performance and resource utilization of your OpenStack deployment on OpenShift. |
| Backing up virtual machines in OpenStack | Set up and configure a solution like TrilioVault to protect your OpenStack virtual machines with efficient backup and recovery capabilities. |
Traditional OpenStack setup
OpenStack’s infrastructure as a service (IaaS) platform is modular, allowing you to select and install individual services based on your application’s needs. Each service offers APIs for interoperability and integration.
OpenStack deployments have traditionally been complex for reasons that go beyond simple software installation: The platform must be designed to leverage existing infrastructure while preparing for future growth and adaptability. OpenStack’s scalable architecture relies on distributed components connected via a shared message bus and database. Typically, a deployment includes control nodes for managing the cluster, orchestrating resources, and handling networking; compute nodes for executing virtual machines; and storage nodes for managing persistent data.
Each service (compute, storage, networking, etc.) is typically deployed as a standalone component on individual physical or virtual servers. This approach provides flexibility but can also introduce complexity in terms of management and scalability. Configuration is often done manually, requiring expertise and time-consuming processes. Scaling OpenStack services in a traditional setup can be challenging: It involves adding or removing physical or virtual machines. Managing a conventional OpenStack environment can also be complex, especially for large-scale deployments.
In addition to choosing the right hardware for computing, storage, and networking requirements, a supported operating system must be selected. The chosen operating system must be compatible with the desired OpenStack version and configured to meet the specific requirements and dependencies for running OpenStack. Furthermore, each OpenStack release may have different installation instructions and requirements, making manual deployment challenging.
Red Hat OpenStack Services on OpenShift (RHOSO)
Red Hat recently announced the release of Red Hat OpenStack Services on OpenShift (RHOSO) to directly address the longstanding complexity challenges that have characterized traditional OpenStack environments. In addition, RHOSO delivers a significantly faster and more efficient OpenStack experience. It streamlines the deployment of both traditional VMs and modern containerized applications, unifying operations across the entire infrastructure, from core to edge. Compared to Red Hat OpenStack Platform 17.1, RHOSO deploys compute nodes 4x faster.
RHOSO is also designed to seamlessly support organizations where they are today. Organizations can maintain their current OpenStack compute nodes unchanged, while the control plane migrates to OpenShift containers, enabling operators to leverage OpenShift’s robust orchestration capabilities for automated scaling and simplified lifecycle management.
Note that while RHOSO is a commonly used acronym, it is also referred to as Red Hat OpenStack Platform 18 in some of Red Hat’s developer documentation.

Preparing the OpenShift platform for hosting OpenStack services
To successfully host OpenStack services on OpenShift, the OpenShift cluster must be configured with the appropriate hardware and software resources, which will provide the necessary foundation for a stable and efficient deployment.
Hardware prerequisites
The minimum hardware requirements for running OpenStack services on OpenShift are as follows:
- An OpenShift cluster with at least three nodes
- The following resources for each worker node:
- 64 GB RAM
- 16 CPU cores
- 120 GB storage (NVMe/SSD) for the root disk, plus an additional 250 GB
- Two physical NICs per worker node
- 150 GB PV space for storing service logs, databases, file import conversion, and metadata
- 5 GB space for control plane services
- Two preprovisioned nodes built with Red Hat Enterprise Linux (RHEL) 9.4 for the data plane
Software prerequisites
The minimum software requirements are as follows:
- An OpenShift cluster running version 4.16
- OpenShift cluster support for Multus CNI
- The following installed operators:
- Kubernetes NMState
- MetalLB
- Cert-manager
- Cluster Observability
- Cluster Baremetal
- A backend storage class configured either using the LVM Storage Operator or OpenShift Data Foundation, ready to provision PVs
- The oc utility and podman tool installed on the cluster workstation
- No network policies in place that restrict communication between the openstack-operators and the openstack projects
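A quick way to sanity-check several of these prerequisites from your workstation is to query the cluster directly. The following commands are illustrative and assume a cluster-admin context; the exact operator names in the output depend on the bundles installed in your cluster:
$ oc version
$ oc get nodes
$ oc get storageclass
$ oc get csv -A | grep -Ei 'nmstate|metallb|cert-manager|observability|metal'
$ oc get crd network-attachment-definitions.k8s.cni.cncf.io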
Red Hat’s RHOSO documentation provides detailed guidance on planning your installation and configuring the data plane and compute nodes. The commands used in the tutorial below can also be found in our Git repo.
Setting up the network infrastructure for OpenStack
An OpenStack deployment typically requires the implementation of these physical data center networks:
- Control plane network: This network serves as a communication channel for the OpenStack Operator, allowing it to access and manage data plane nodes securely using Ansible SSH. It’s also used to migrate instances between data plane nodes.
- External network: An external network is optional and can be configured to meet specific environmental needs, such as providing internet access to virtual machines or configuring VLAN provider networks.
- Internal API network: This network facilitates internal communication among the various OpenStack components.
- Storage network: This network is dedicated to storage operations, supporting block storage, RBD, NFS, FC, and iSCSI.
- Tenant (project) network: This network enables data communication among virtual machine instances within the cloud deployment.
- Storage management network: This optional network can be used by storage components for providing features such as data replication.
The worker nodes in the cluster are connected to the isolated networks using the NMState Operator. NetworkAttachmentDefinition custom resources (CRs) attach the service pods to these networks, the MetalLB Operator exposes internal service endpoints, and public service endpoints are exposed as OpenShift routes.
The table below lists the default networks for OpenStack deployments. You can customize these networks if necessary.
| Network name | VLAN | CIDR | NetConfig allocationRange | MetalLB IPAddressPool range | Network attachment definition ipam range | OCP worker node network config range |
| --- | --- | --- | --- | --- | --- | --- |
| ctlplane | N/A | 192.168.122.0/24 | 192.168.122.100 – 192.168.122.250 | 192.168.122.80 – 192.168.122.90 | 192.168.122.30 – 192.168.122.70 | 192.168.122.10 – 192.168.122.20 |
| external | N/A | 10.0.0.0/24 | 10.0.0.100 – 10.0.0.250 | N/A | N/A | |
| internalapi | 20 | 172.17.0.0/24 | 172.17.0.100 – 172.17.0.250 | 172.17.0.80 – 172.17.0.90 | 172.17.0.30 – 172.17.0.70 | 172.17.0.10 – 172.17.0.20 |
| storage | 21 | 172.18.0.0/24 | 172.18.0.100 – 172.18.0.250 | N/A | 172.18.0.30 – 172.18.0.70 | 172.18.0.10 – 172.18.0.20 |
| tenant | 22 | 172.19.0.0/24 | 172.19.0.100 – 172.19.0.250 | N/A | 172.19.0.30 – 172.19.0.70 | 172.19.0.10 – 172.19.0.20 |
| storageMgmt | 23 | 172.20.0.0/24 | 172.20.0.100 – 172.20.0.250 | N/A | 172.20.0.30 – 172.20.0.70 | 172.20.0.10 – 172.20.0.20 |
The rest of this guide assumes that you have configured your OpenShift network to host OpenStack services. To plan your network configuration, refer to Red Hat’s network planning documentation.
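As an illustration of what that configuration can look like, the following is a minimal NetworkAttachmentDefinition sketch for the internalapi network from the table above. It assumes the worker nodes already expose a VLAN 20 interface (named enp6s0.20 here, configured through NMState); adjust the master interface and the whereabouts range to match your environment:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: internalapi
  namespace: openstack
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "internalapi",
      "type": "macvlan",
      "master": "enp6s0.20",
      "ipam": {
        "type": "whereabouts",
        "range": "172.17.0.0/24",
        "range_start": "172.17.0.30",
        "range_end": "172.17.0.70"
      }
    }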
Operator installation
To install the OpenStack Operator in OpenShift, log into the web console as a cluster administrator. Navigate to OperatorHub, search for the OpenStack operator from Red Hat, and click Install. Choose openstack-operators as the installed namespace and confirm the installation. The operator is ready when its status shows as Succeeded.
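If you prefer the CLI, roughly the same result can be achieved with an OLM Subscription. The manifest below is a sketch: The package name, channel, and catalog source are assumptions, so verify them first with oc get packagemanifests -n openshift-marketplace | grep openstack before applying:
apiVersion: v1
kind: Namespace
metadata:
  name: openstack-operators
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openstack-operators
  namespace: openstack-operators
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openstack-operator
  namespace: openstack-operators
spec:
  name: openstack-operator              # package name from the packagemanifest (verify)
  channel: stable-v1.0                  # assumed channel; check your catalog
  source: redhat-operators
  sourceNamespace: openshift-marketplace
After a few minutes, confirm the installation with oc get csv -n openstack-operators and wait for the phase to show Succeeded.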
Deploying OpenStack on OpenShift
Log into the cluster with a user having cluster-admin privileges and create the OpenStack project. Label this project to enable the OpenStack Operators to create privileged pods.
$ oc new-project openstack
$ oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite
$ oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite
Securing access to OpenStack services
Create a Secret custom resource that holds the passwords used to access the OpenStack service pods securely. Start by creating a base64-encoded password of your choice:
$ echo -n '0P&n$T@<Kr0x' | base64
MFAmbiRUQDxLcjB4
The command above base64-encodes the 12-character example password into a 16-character string. Create a secret in the openstack project using the following manifest, saved as openstack_secret.yaml. Make sure to replace the password values with your own base64-encoded values.
apiVersion: v1
data:
  AdminPassword: MFAmbiRUQDxLcjB4
  AodhPassword: MFAmbiRUQDxLcjB4
  AodhDatabasePassword: MFAmbiRUQDxLcjB4
  BarbicanDatabasePassword: MFAmbiRUQDxLcjB4
  BarbicanPassword: MFAmbiRUQDxLcjB4
  BarbicanSimpleCryptoKEK:
  CeilometerPassword: MFAmbiRUQDxLcjB4
  CinderDatabasePassword: MFAmbiRUQDxLcjB4
  CinderPassword: MFAmbiRUQDxLcjB4
  DatabasePassword: MFAmbiRUQDxLcjB4
  DbRootPassword: MFAmbiRUQDxLcjB4
  DesignateDatabasePassword: MFAmbiRUQDxLcjB4
  DesignatePassword: MFAmbiRUQDxLcjB4
  GlanceDatabasePassword: MFAmbiRUQDxLcjB4
  GlancePassword: MFAmbiRUQDxLcjB4
  HeatAuthEncryptionKey: MFAmbiRUQDxLcjB4
  HeatDatabasePassword: MFAmbiRUQDxLcjB4
  HeatPassword: MFAmbiRUQDxLcjB4
  IronicDatabasePassword: MFAmbiRUQDxLcjB4
  IronicInspectorDatabasePassword: MFAmbiRUQDxLcjB4
  IronicInspectorPassword: MFAmbiRUQDxLcjB4
  IronicPassword: MFAmbiRUQDxLcjB4
  KeystoneDatabasePassword: MFAmbiRUQDxLcjB4
  ManilaDatabasePassword: MFAmbiRUQDxLcjB4
  ManilaPassword: MFAmbiRUQDxLcjB4
  MetadataSecret: MFAmbiRUQDxLcjB4
  NeutronDatabasePassword: MFAmbiRUQDxLcjB4
  NeutronPassword: MFAmbiRUQDxLcjB4
  NovaAPIDatabasePassword: MFAmbiRUQDxLcjB4
  NovaAPIMessageBusPassword: MFAmbiRUQDxLcjB4
  NovaCell0DatabasePassword: MFAmbiRUQDxLcjB4
  NovaCell0MessageBusPassword: MFAmbiRUQDxLcjB4
  NovaCell1DatabasePassword: MFAmbiRUQDxLcjB4
  NovaCell1MessageBusPassword: MFAmbiRUQDxLcjB4
  NovaPassword: MFAmbiRUQDxLcjB4
  OctaviaDatabasePassword: MFAmbiRUQDxLcjB4
  OctaviaPassword: MFAmbiRUQDxLcjB4
  PlacementDatabasePassword: MFAmbiRUQDxLcjB4
  PlacementPassword: MFAmbiRUQDxLcjB4
  SwiftPassword: MFAmbiRUQDxLcjB4
kind: Secret
metadata:
  name: osp-secret
  namespace: openstack
type: Opaque
Now create the secret:
$ oc create -f openstack_secret.yaml
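As an optional check, confirm that the secret exists and that a stored value decodes back to your original password:
$ oc get secret osp-secret -n openstack
$ oc get secret osp-secret -n openstack -o jsonpath='{.data.AdminPassword}' | base64 -d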
Creating the data plane network
To create the data plane networks, define a NetConfig custom resource and specify all of the subnets for the data plane networks; the definition must include at least one control plane network. You can also define VLAN networks to provide network isolation for composable networks, such as InternalApi, Storage, and External. Each network definition must include its IP address assignment.
To use the default OpenStack networks, define a specification for each network in a manifest file, along with the topology for each data plane network. Ensure that the NetConfig allocationRange does not overlap with the MetalLB IPAddressPool range or the network attachment definition ipam range.
Create the openstack_netconfig.yaml file and populate it as follows. This manifest contains the networks and VLANs according to the details in the table above. The subnet definition for the InternalApi network includes a sample excludeAddresses list, which prevents specific IP addresses from being allocated to data plane nodes.
apiVersion: network.openstack.org/v1beta1
kind: NetConfig
metadata:
  name: openstacknetconfig
  namespace: openstack
spec:
  networks:
  - name: CtlPlane
    dnsDomain: ctlplane.rhoso.local
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 192.168.122.120
        start: 192.168.122.100
      - end: 192.168.122.200
        start: 192.168.122.150
      cidr: 192.168.122.0/24
      gateway: 192.168.122.1
  - name: InternalApi
    dnsDomain: internalapi.rhoso.local
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.17.0.250
        start: 172.17.0.100
      excludeAddresses:
      - 172.17.0.10
      - 172.17.0.12
      cidr: 172.17.0.0/24
      vlan: 20
  - name: External
    dnsDomain: external.rhoso.local
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 10.0.0.250
        start: 10.0.0.100
      cidr: 10.0.0.0/24
      gateway: 10.0.0.1
  - name: Storage
    dnsDomain: storage.rhoso.local
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.18.0.250
        start: 172.18.0.100
      cidr: 172.18.0.0/24
      vlan: 21
  - name: Tenant
    dnsDomain: tenant.rhoso.local
    subnets:
    - name: subnet1
      allocationRanges:
      - end: 172.19.0.250
        start: 172.19.0.100
      cidr: 172.19.0.0/24
      vlan: 22
Create the data plane network by applying the manifest and verify that the data plane network has been created:
$ oc create -f openstack_netconfig.yaml
$ oc get netconfig/openstacknetconfig -n openstack
Creating the control plane
The control plane is the central authority that oversees and manages the various OpenStack services; in RHOSO, it runs as a set of service pods on OpenShift, so creating it is fundamental to OpenStack deployment on OpenShift. The following sections explain the configuration required to deploy and set up an OpenStack control plane on your OpenShift cluster.
The respective service sections included in the manifest are explained as follows:
- Cinder: Cinder services allow you to create, delete, and manage block storage volumes in your OpenStack environment. This section defines the configuration for Cinder services used for block storage management in OpenStack. It includes the number of replicas for each Cinder service, the required network attachments, and other configuration options.
- Nova: This section configures the Nova services that manage compute resources in OpenStack. It sets the number of replicas for each Nova service and configures internal load balancing using MetalLB.
- DNS: This section defines the configuration of DNS services for the data plane. It specifies two DNS servers with IP addresses 192.168.122.1 and 192.168.122.2. It also configures them for internal load balancing using MetalLB on the ctlplane network with a shared IP address of 192.168.122.80. Finally, it sets the number of deployed DNS server replicas to 2 for redundancy.
- Galera: This section defines the configuration for Galera, a high-availability database cluster often used for OpenStack services. It includes two Galera clusters: openstack (for all services) and openstack-cell1 (for compute services).
- Keystone: This section configures Keystone services that are responsible for user authentication and authorization in OpenStack. It sets up three scalable replicas, uses MetalLB for internal load balancing, references the OpenStack database, and links to a secret containing the credentials necessary for authentication and authorization.
- Glance: The Glance services manage disk images in OpenStack. This section specifies storage requirements, references the OpenStack database and a secret for credentials, and requires network attachment to the storage network.
- Barbican: This section configures Barbican services for secret management in OpenStack. It sets up multiple replicas for Barbican API and worker services, uses MetalLB for internal load balancing, references the OpenStack database and a secret, and deploys a single replica for the Barbican Keystone listener.
- Memcached: This section defines the configuration for Memcached services used to cache data in OpenStack.
- Neutron: This section configures Neutron services for network management in OpenStack. It sets up three replicas, uses MetalLB for internal load balancing, references the OpenStack database and a secret, and attaches to the internalapi network.
- Swift: This section configures Swift services for object storage in OpenStack. It defines one replica for each Swift proxy and storage service. Both require network attachment to the storage network. Internal load balancing is configured on the internalapi network with a shared IP using MetalLB.
- OVN: This section configures OVN for network virtualization in OpenStack. It defines two OVN database clusters with three replicas each and a storage request of 10 GB. Both clusters require attachment to the internalapi network. Additionally, it configures OVN Northd for network management, which is also attached to the internalapi network.
- Placement: This section configures the OpenStack Placement service, which is responsible for managing resource placement for workloads. It sets up three replicas for scalability, utilizes MetalLB for internal load balancing on the internalapi network, references the OpenStack database, and links a secret containing credentials.
- RabbitMQ: This section sets up two RabbitMQ clusters with three replicas each, using internal load balancing with dedicated IPs. The first cluster is for the main OpenStack services, while the second (optional) might be used for a separate OpenStack cell.
- Telemetry: This section enables telemetry services in OpenStack, including metric storage, alerting, and autoscaling. It configures the monitoring stack with persistent storage, a scrape interval of 30 seconds, and a retention period of 24 hours. It configures Aodh (alarm service), Ceilometer (metering service), and logging settings, enabling basic telemetry functionality within the OpenStack environment.
- Horizon: This section sets up two Horizon service instances. Horizon is the standard web-based interface for accessing and managing OpenStack services.
The following is a complete control plane manifest file that includes definitions for all of the services explained above. Before applying the manifest, make sure to update the storageClass field with the name of your desired storage class.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  secret: osp-secret
  storageClass: lvms-vg1
  cinder:
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      cinderAPI:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      cinderScheduler:
        replicas: 1
      cinderBackup:
        networkAttachments:
        - storage
        replicas: 0
      cinderVolumes:
        volume1:
          networkAttachments:
          - storage
          replicas: 0
  nova:
    apiOverride:
      route: {}
    template:
      apiServiceTemplate:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      metadataServiceTemplate:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      schedulerServiceTemplate:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      cellTemplates:
        cell0:
          cellDatabaseAccount: nova-cell0
          cellDatabaseInstance: openstack
          cellMessageBusInstance: rabbitmq
          hasAPIAccess: true
        cell1:
          cellDatabaseAccount: nova-cell1
          cellDatabaseInstance: openstack-cell1
          cellMessageBusInstance: rabbitmq-cell1
          noVNCProxyServiceTemplate:
            enabled: true
            networkAttachments:
            - internalapi
            - ctlplane
          hasAPIAccess: true
      secret: osp-secret
  dns:
    template:
      options:
      - key: server
        values:
        - 192.168.122.1
      - key: server
        values:
        - 192.168.122.2
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: ctlplane
              metallb.universe.tf/allow-shared-ip: ctlplane
              metallb.universe.tf/loadBalancerIPs: 192.168.122.80
          spec:
            type: LoadBalancer
      replicas: 2
  galera:
    templates:
      openstack:
        storageRequest: 5000M
        secret: osp-secret
        replicas: 3
      openstack-cell1:
        storageRequest: 5000M
        secret: osp-secret
        replicas: 3
  keystone:
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
      replicas: 3
  glance:
    apiOverrides:
      default:
        route: {}
    template:
      databaseInstance: openstack
      storage:
        storageRequest: 10G
      secret: osp-secret
      keystoneEndpoint: default
      glanceAPIs:
        default:
          replicas: 0
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          networkAttachments:
          - storage
  barbican:
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      barbicanAPI:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      barbicanWorker:
        replicas: 3
      barbicanKeystoneListener:
        replicas: 1
  memcached:
    templates:
      memcached:
        replicas: 3
  neutron:
    apiOverride:
      route: {}
    template:
      replicas: 3
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
      networkAttachments:
      - internalapi
  swift:
    enabled: true
    proxyOverride:
      route: {}
    template:
      swiftProxy:
        networkAttachments:
        - storage
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        replicas: 1
      swiftRing:
        ringReplicas: 1
      swiftStorage:
        networkAttachments:
        - storage
        replicas: 1
        storageRequest: 10Gi
  ovn:
    template:
      ovnDBCluster:
        ovndbcluster-nb:
          replicas: 3
          dbType: NB
          storageRequest: 10G
          networkAttachment: internalapi
        ovndbcluster-sb:
          dbType: SB
          storageRequest: 10G
          networkAttachment: internalapi
      ovnNorthd:
        networkAttachment: internalapi
  placement:
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      replicas: 3
      secret: osp-secret
  rabbitmq:
    templates:
      rabbitmq:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.85
            spec:
              type: LoadBalancer
      rabbitmq-cell1:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.86
            spec:
              type: LoadBalancer
  telemetry:
    enabled: true
    template:
      metricStorage:
        enabled: true
        monitoringStack:
          alertingEnabled: true
          scrapeInterval: 30s
          storage:
            strategy: persistent
            retention: 24h
            persistent:
              pvcStorageRequest: 20G
      autoscaling:
        enabled: false
        aodh:
          databaseAccount: aodh
          databaseInstance: openstack
          passwordSelector:
            aodhService: AodhPassword
          rabbitMqClusterName: rabbitmq
          serviceUser: aodh
          secret: osp-secret
        heatInstance: heat
      ceilometer:
        enabled: true
        secret: osp-secret
      logging:
        enabled: false
        ipaddr: 172.17.0.80
  horizon:
    apiOverride: {}
    enabled: true
    template:
      customServiceConfig: ""
      memcachedInstance: memcached
      override: {}
      preserveJobs: false
      replicas: 2
      resources: {}
      secret: osp-secret
      tls: {}
Create the control plane and wait until OpenShift finishes setting up all the resources:
$ oc create -f control_plane.yaml
You can use the following command to keep track of the control plane’s progress. Once all resources have been created by OpenShift, the status changes to Setup complete:
$ oc get openstackcontrolplane -n openstack
NAME                      STATUS    MESSAGE
openstack-control-plane   Unknown   Setup started

$ oc get openstackcontrolplane -n openstack
NAME                      STATUS   MESSAGE
openstack-control-plane   True     Setup complete
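If you would rather block until the control plane reports ready instead of polling, an oc wait sketch such as the following works; the 60-minute timeout is arbitrary, so adjust it to your environment. Listing the pods is useful if setup stalls:
$ oc wait openstackcontrolplane/openstack-control-plane -n openstack --for=condition=Ready --timeout=60m
$ oc get pods -n openstack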
Control plane creation includes an OpenStackClient pod, which can be accessed via rsh to execute OpenStack CLI commands:
$ oc rsh -n openstack openstackclient
$ openstack endpoint list -c 'Service Name' -c Interface -c URL --service glance
+--------------+-----------+-------------------------------------------------------------------------+
| Service Name | Interface | URL                                                                     |
+--------------+-----------+-------------------------------------------------------------------------+
| glance       | internal  | https://glance-internal.openstack.svc                                   |
| glance       | public    | https://glance-default-public-openstack.apps.ostest.test.metalkube.org |
+--------------+-----------+-------------------------------------------------------------------------+
Retrieve the URL for the Horizon dashboard:
$ oc get horizons horizon -o jsonpath='{.status.endpoint}'
Obtain the credentials for the admin user to log into the dashboard:
$ oc get secret osp-secret -o jsonpath='{.data.AdminPassword}' | base64 -d
Creating data plane nodes
The data plane nodes host virtual machines and provide compute resources within the OpenStack environment. The number of data plane nodes required in an OpenStack deployment depends on the desired scale and performance of the cloud infrastructure. OpenStack supports both preprovisioned and unprovisioned nodes: Preprovisioned nodes have a preinstalled OS, while unprovisioned nodes are provisioned by the Cluster Baremetal Operator during data plane creation. The following guide uses preprovisioned nodes and assumes that they are already configured in your infrastructure.
To manage RHEL nodes on the data plane using Ansible, you must generate SSH keys and create corresponding SSH key secret CRs. This allows Ansible to execute commands with the specified user and key. Additionally, generating SSH keys and creating secret CRs are necessary for certain tasks, such as migrating between compute nodes, assigning Red Hat subscriptions, and storing credentials to register the data plane node to Red Hat’s customer portal.
Generate the SSH key pair and create the secret CR for Ansible:
$ ssh-keygen -f my_ansible_key -N "" -t rsa -b 4096
$ oc create secret generic ansible-dp-private-key-secret \
    --save-config --dry-run=client \
    --from-file=ssh-privatekey=my_ansible_key \
    --from-file=ssh-publickey=my_ansible_key.pub \
    -n openstack -o yaml | oc apply -f -
Generate the SSH key pair for instance migration and create the secret CR for migration:
$ ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''
$ oc create secret generic nova-migration-ssh-key \
    --save-config \
    --from-file=ssh-privatekey=nova-migration-ssh-key \
    --from-file=ssh-publickey=nova-migration-ssh-key.pub \
    -n openstack \
    -o yaml | oc apply -f -
Generate base64-encoded strings for the credentials you use to authenticate to Red Hat’s customer portal:
$ echo -n '<username>' | base64
$ echo -n '<password>' | base64
Create a manifest file called subscription.yaml with subscription-manager credentials to register unregistered nodes with the Red Hat Customer Portal:
apiVersion: v1
kind: Secret
metadata:
  name: subscription-manager
  namespace: openstack
data:
  username: <base64_username>
  password: <base64_password>
Apply the manifest:
$ oc create -f subscription.yaml -n openstack
Create a secret CR that contains the Red Hat registry credentials. Replace the username and password fields with your Red Hat registry username and password credentials:
$ oc create secret generic redhat-registry --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'
Generate the base64-encoded string for your libvirt password and create the libvirt secret:
$ echo -n '<libvirt_password>' | base64
Then define the secret manifest as libvirt_secret.yaml:
apiVersion: v1
data:
  LibvirtPassword: <base64_libvirt_password>
kind: Secret
metadata:
  name: libvirt-secret
  namespace: openstack
type: Opaque
$ oc create -f libvirt_secret.yaml
Before proceeding to create the data plane, create a small PVC on OpenShift using your preferred storage class; you will reference this PVC in the manifest for storing Ansible logs (a minimal PVC sketch follows the list below). The major sections of the manifest are explained below:
- The initial section defines the configuration for the node template. It specifies the Ansible user, port, and variables, and it adds the SSH key secret created earlier to enable Ansible to connect to the data plane nodes. It references an existing PVC to store Ansible logs, connects the data plane to the control plane network, uses subscription-manager to register the node with Red Hat, sets the release version to 9.4, and enables the required repositories.
- The network section defines the network configuration for the nodes. It specifies the network interface (nic1), the physical bridge (br-ex), and the public interface (eth0). It also includes a template for configuring the network interfaces with the appropriate MTU, IP addresses, routes, and DNS settings. The template uses variables from the nodeset_networks list to dynamically configure the network interfaces based on the specific network requirements. Replace the MAC address with the address assigned to the NIC for network configuration on the compute node.
- The next section defines individual compute nodes within the OpenStack environment. Each node has a unique identifier and hostname and connects to multiple networks for control plane, internal API, storage, and tenant traffic. Ansible configuration is provided for node provisioning and management, including the Ansible host address, user, and an FQDN variable for the node’s internal API. This defines the compute nodes and their network connectivity for the OpenStack deployment.
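The node set manifest below mounts a PVC named logs-pvc for Ansible logs. A minimal sketch for creating it is shown here; the size and storage class are assumptions, so substitute values appropriate for your cluster:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logs-pvc
  namespace: openstack
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: lvms-vg1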
The following is a complete manifest file that includes definitions for all the fields required for a successful deployment of preprovisioned data plane nodes. Make sure to replace the PVC name and the MAC addresses based on your environment.
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane
  namespace: openstack
spec:
  env:
  - name: ANSIBLE_FORCE_COLOR
    value: "True"
  networkAttachments:
  - ctlplane
  preProvisioned: true
  nodeTemplate:
    ansibleSSHPrivateKeySecret: ansible-dp-private-key-secret
    extraMounts:
    - extraVolType: Logs
      volumes:
      - name: ansible-logs
        persistentVolumeClaim:
          claimName: logs-pvc
      mounts:
      - name: ansible-logs
        mountPath: "/runner/artifacts"
    managementNetwork: ctlplane
    ansible:
      ansibleUser: cloud-admin
      ansiblePort: 22
      ansibleVarsFrom:
      - prefix: subscription_manager_
        secretRef:
          name: subscription-manager
      - prefix: registry_
        secretRef:
          name: redhat-registry
      ansibleVars:
        edpm_bootstrap_command: |
          subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }}
          subscription-manager release --set=9.4
          subscription-manager repos --disable=*
          subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms
        edpm_bootstrap_release_version_package: []
        edpm_network_config_os_net_config_mappings:
          edpm-compute-0:
            nic1: 52:54:04:60:55:22
        neutron_physical_bridge_name: br-ex
        neutron_public_interface_name: eth0
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: interface
              name: nic1
              mtu: {{ min_viable_mtu }}
              # force the MAC address of the bridge to this interface
              primary: true
          {% for network in nodeset_networks %}
          - type: vlan
            mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
            vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
            addresses:
            - ip_netmask:
                {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
            routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
          {% endfor %}
  nodes:
    edpm-compute-0:
      hostName: edpm-compute-0
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.100
      - name: internalapi
        subnetName: subnet1
        fixedIP: 172.17.0.100
      - name: storage
        subnetName: subnet1
        fixedIP: 172.18.0.100
      - name: tenant
        subnetName: subnet1
        fixedIP: 172.19.0.100
      ansible:
        ansibleHost: 192.168.122.100
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-0.rhoso.local
    edpm-compute-1:
      hostName: edpm-compute-1
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.101
      - name: internalapi
        subnetName: subnet1
        fixedIP: 172.17.0.101
      - name: storage
        subnetName: subnet1
        fixedIP: 172.18.0.101
      - name: tenant
        subnetName: subnet1
        fixedIP: 172.19.0.101
      ansible:
        ansibleHost: 192.168.122.101
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-1.rhoso.local
Now create the data plane nodes from this manifest:
$ oc create --save-config -f openstack_preprovisioned_node_set.yaml -n openstack
You can use the following command to keep track of the progress. Once all resources have been created by OpenShift, the status will change to DataPlane ready.
$ oc get openstackdataplanenodeset -n openstack
NAME                   STATUS    MESSAGE
openstack-data-plane   Unknown   Setup started

$ oc get openstackdataplanenodeset -n openstack
NAME                   STATUS   MESSAGE
openstack-data-plane   True     DataPlane ready
Deploying the data plane
To deploy data plane services, create a manifest file and define the OpenStackDataPlaneDeployment resource. The nodeSets field contains the name of the OpenStackDataPlaneNodeSet CR created in the previous section.
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: data-plane-deploy
  namespace: openstack
spec:
  nodeSets:
  - openstack-data-plane
Apply the manifest and monitor the progress. Upon successful completion, you will see the NodeSet Ready message.
$ oc create -f data_plane_deploy.yaml

$ oc get openstackdataplanedeployment -n openstack
NAME                STATUS   MESSAGE
data-plane-deploy   True     Setup Complete

$ oc get openstackdataplanenodeset -n openstack
NAME                   STATUS   MESSAGE
openstack-data-plane   True     NodeSet Ready
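The deployment runs a series of Ansible jobs in the openstack namespace, so you can also follow progress at the job level while it executes. The job name below is a placeholder; list the jobs first and then tail the one you are interested in:
$ oc get jobs -n openstack
$ oc logs -f job/<ansible-job-name> -n openstack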
Connect to the OpenStack client pod using a remote shell:
$ oc rsh -n openstack openstackclient
Inside the remote shell, run the following to verify that the deployed compute nodes are accessible on the control plane:
$ openstack hypervisor list
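From the same openstackclient shell, a couple of additional standard OpenStack CLI checks help confirm that the compute services and networking agents on the new nodes are up and enabled:
$ openstack compute service list
$ openstack network agent list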
Best practices for running OpenStack services on OpenShift
Here are some best practices to consider when hosting OpenStack services on OpenShift:
- Make use of OpenShift’s networking capabilities: Use network policies to isolate OpenStack services and improve security. Employ OpenShift’s load balancing features to distribute traffic across multiple instances of OpenStack services.
- Monitor and optimize performance: Utilize OpenShift’s built-in monitoring capabilities to track resource usage, identify performance bottlenecks, and proactively address issues.
- Employ hybrid cloud strategies: Combine OpenShift’s flexibility and scalability with on-premises infrastructure’s control and cost-efficiency to achieve the optimal balance for your workload. Carefully assess which workloads suit migration to OpenShift based on cost, performance, and compliance requirements.
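For the monitoring point above, you can get a quick read on resource usage and recent events from the CLI even before building dashboards. These commands are illustrative and assume the cluster metrics API is available:
$ oc adm top nodes
$ oc adm top pods -n openstack --sort-by=memory
$ oc get events -n openstack --sort-by=.lastTimestamp | tail -n 20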
Back up your OpenStack on OpenShift environment with Trilio
Trilio has maintained Premier Partner status with Red Hat for years, supporting multiple generations of Red Hat OpenStack Platform—versions 10, 13, 16, and 17—both with and without OpenShift integration.
Trilio as a certified OpenShift operator
As a certified Red Hat OpenShift Operator, Trilio is now also extending support to the recently released RHOSO platform to provide comprehensive data protection specifically designed for it.
Suggested read: Trilio Announces Support for Red Hat OpenStack Services on OpenShift
Trilio as a certified OpenStack (RHOSO) operator
Through application-centric backups, Trilio helps you capture all necessary components for complete application recovery, including disk volumes, network topology, and security groups. Based on your environment, you can choose a suitable target, such as NFS, Amazon S3, or Ceph.
Defining Workloads in Trilio for RHOSO
To get started with Trilio on RHOSO (OpenStack 18 with its control plane on OpenShift), follow the steps outlined below.
Once you’ve deployed and configured Trilio, define your workloads to protect specific virtual machines. A workload is a backup task that safeguards VMs based on predefined policies. While there’s no limit to the number of workloads, each VM can belong to only one workload.
To create a workload in your RHOSO environment:
- Access the Trilio dashboard through the OpenShift console or directly via the Trilio Operator interface
- Navigate to the Workloads section and select “Create Workload”
- On the Details tab, provide a workload name, description, and optional policy selection
- Choose whether to enable encryption for the workload (if encrypted, provide the appropriate encryption key)
- Select your target VMs to protect on the Workload Members tab
- Configure the backup schedule and retention policy
- Set the full backup interval on the Policy tab
- Enable VM pause during backup if needed for consistency
- Click Create to establish the workload
The system will initialize your workload within moments. Backups will automatically execute according to your defined schedule and policy. View and manage all workloads through the Workloads section of the Trilio dashboard.
Conclusion
Despite its advantages, many organizations discover a critical gap when hosting OpenStack services on OpenShift. Some IT leaders mistakenly assume their existing OpenShift backup tools will adequately protect OpenStack workloads within this environment. Others believe that OpenStack’s built-in redundancy would eliminate the need for comprehensive backup protection. Sadly, both assumptions can lead to significant data loss and extended recovery times when incidents occur.
Trilio can be your partner in this transition to ensure your RHOSO deployment remains resilient against data loss, configuration errors, and disaster scenarios. Our certified solution, coupled with Trilio’s tenant-empowering backup-as-a-service capabilities, provides the necessary recovery tools for a smooth transition and ongoing data protection.
To learn more, schedule your personalized RHOSO backup consultation and request a demo here.