OpenShift, developed by Red Hat, is a container platform built on the foundation of Kubernetes, the powerful and widely adopted open-source orchestration engine that simplifies the deployment and management of cloud-native applications. However, even though OpenShift offers immense flexibility, on-premises deployments can feel like second-class citizens in the broader OpenShift landscape: you get control, but you must also manage physical infrastructure, and all of that “invisible work” eats into innovation time.
That’s where Azure Red Hat OpenShift (ARO) changes the game. Managed solutions like ARO free organizations from these infrastructure headaches, allowing them to prioritize application development. Unlike its on-premises cousin, ARO lets Microsoft and Red Hat shoulder the infrastructure burden—no more midnight fire drills for certificate renewals or panic-scaling clusters. Instead, your team can focus on the fun part: writing code that moves the needle.
In this article, we compare managed OpenShift offerings against on-premises deployments, with a spotlight on Azure Red Hat OpenShift.
Summary of key benefits of Azure OpenShift
The following table provides an overview of the key benefits of using Azure Red Hat OpenShift that are covered in this article.
| Benefit | Description |
| --- | --- |
| Simplified cluster operations | Managed OpenShift services such as Azure Red Hat OpenShift handle the underlying infrastructure complexities—including provisioning, scaling, and monitoring—allowing teams to dedicate their efforts to developing and managing applications. |
| Reduced operational expenses | Leveraging the managed services of Azure Red Hat OpenShift allows organizations to decrease the costs associated with managing and maintaining OpenShift infrastructure. |
| Greater focus on application development | With the burden of infrastructure management lifted, development and operations teams can dedicate their energy and expertise to building, deploying, and scaling innovative applications. |
| Integration with the Azure ecosystem | Azure Red Hat OpenShift offers seamless integration with a wide range of Azure infrastructure services. |
| Comprehensive enterprise support | The joint support model ensures that organizations readily have access to the right expertise to resolve any issues related to their Azure Red Hat OpenShift environments. |
Understanding traditional OpenShift deployment
The OpenShift container platform provides several options for deploying a cluster on any infrastructure. Four primary deployment methods are available, each of which provides a highly available infrastructure; the right choice depends on the specific use case:
- Assisted installation: This is the easiest way of deploying a cluster because it offers a web-based and user-friendly interface and is ideal for networks with access to the public Internet. It also offers smart defaults, pre-flight checks, and a REST API for automation. The assisted installer generates a discovery image, which is used to boot the cluster machines.
- Agent-based installation: This approach requires setting up a local agent and configuration via the command line and is better suited to disconnected or restricted networks.
- Automated installation: This method deploys an installer-provisioned infrastructure using the baseboard management controller on each cluster host. It works in both connected and disconnected environments.
- Full control installation: This approach is ideal if you want complete control of the underlying infrastructure hosting the cluster nodes. It supports both connected and disconnected environments and provides maximum customization by deploying user-prepared and maintained infrastructure.
The automated installer approach is usually associated with installer-provisioned infrastructure (IPI), while the other methods are usually associated with user-provisioned infrastructure (UPI).
Deploying OpenShift via installer-provisioned infrastructure
The IPI approach offers a straightforward way to set up an OpenShift cluster by automating infrastructure provisioning, though it provides less flexibility than manual methods. The installer handles all the major tasks of provisioning the underlying infrastructure and configuring the cluster. For API and Ingress VIPs (virtual IPs), the installer automatically assigns them via the cloud provider’s load balancer service. However, for on-premises deployments, administrators must manually provide two IP addresses for these VIPs. Once deployed, the keepalived service manages these VIPs, ensuring that the API VIP runs on a control plane node and the Ingress VIP runs on a compute or infrastructure node.
With a standard cluster, the installer needs minimal details for installation. With a customized cluster, you can specify more information about the platform, such as the number of machines in the control plane, the virtual machine types the cluster deploys, or the CIDR ranges for the pod and service networks. This flexibility makes customized clusters better suited for environments with specific requirements, while standard clusters offer a faster, opinionated deployment.
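To make this concrete, here is a minimal sketch of a customized install-config.yaml for an Azure IPI deployment. The field names follow the install-config schema, but every concrete value (domain, cluster name, VM size, CIDRs) is hypothetical and should be adapted to your environment:

```yaml
apiVersion: v1
baseDomain: example.com            # hypothetical base DNS domain
metadata:
  name: demo-cluster               # hypothetical cluster name
controlPlane:
  name: master
  replicas: 3                      # number of control plane machines
compute:
- name: worker
  replicas: 3
  platform:
    azure:
      type: Standard_D4s_v3        # VM size for compute nodes
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14            # pod network CIDR
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16                  # service network CIDR
platform:
  azure:
    region: eastus
pullSecret: '...'                  # obtained from the Red Hat console
```

Passing this file to the installer (for example, in the directory given to `openshift-install create cluster`) replaces the interactive prompts of a standard deployment with these explicit choices.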
The following table summarizes the supported IPI methods for major platforms.
| | AWS | Azure | GCP | Nutanix | RHOSP | Bare metal | vSphere | IBM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Default | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| Custom | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| Network Customization | ✓ | ✓ | ✓ | ✓ | ✓ | | | |
| Restricted Network | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Private Cluster | ✓ | ✓ | ✓ | ✓ | | | | |
| Existing Virtual Private Network | ✓ | ✓ | ✓ | ✓ | | | | |
Installer-provisioned infrastructure options
Deploying OpenShift via user-provisioned infrastructure
The UPI approach requires more effort on the part of the administrator but offers much more flexibility than IPI. This flexibility results in greater control over specific infrastructure choices, such as the operating system on compute nodes and the use of external load-balancing options. With IPI, the control plane and compute nodes can only use Red Hat Enterprise Linux CoreOS (RHCOS) as the underlying operating system; the UPI approach also allows the use of RHEL servers as compute nodes in the cluster.
The following table summarizes the supported UPI methods for major platforms.
| | AWS | Azure | GCP | Nutanix | RHOSP | Bare metal | vSphere | IBM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Custom | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | | |
| Network Customization | ✓ | ✓ | | | | | | |
| Restricted Network | ✓ | ✓ | ✓ | ✓ | | | | |
| Shared VPC hosted outside of cluster project | | | ✓ | | | | | |
User-provisioned infrastructure options
Cluster installation flow
The installation flow is shown in the figure below.

Installation flow in traditional OpenShift deployment (source)
The OpenShift installer produces Ignition configuration files, which are essential for creating the cluster. A temporary bootstrap machine uses its Ignition config to initiate the cluster deployment. This bootstrap node then orchestrates the establishment of the control plane nodes, which subsequently deploy the compute nodes for application workloads.
Challenges with traditional OpenShift deployments
Traditional OpenShift deployments (“unmanaged OpenShift”) require that all aspects of the underlying infrastructure be managed by the same teams responsible for deploying and running applications on the platform. This approach offers cluster administrators the freedom to configure the cluster according to the organization’s requirements and may even result in a cost-effective solution. However, while the traditional approach offers more control, it also means that extended effort is required to manage the cluster. This effort spans several areas:
- Provisioning and managing the infrastructure: In unmanaged OpenShift deployments, application teams assume responsibility for managing the entire infrastructure. This requires them to handle all aspects of the underlying infrastructure, which can entail multiple tasks, such as setting up bare-metal hosts and virtual machines, configuring networking (including CNI plugins), setting up ingress and egress controls, integrating authentication systems, configuring load balancers, and managing storage.
- Sustained operational overhead: The operational workload never truly allows teams to focus on application development. Teams face an endless cycle of critical security patches that demand immediate attention, version upgrades requiring weeks of preparation and testing, and round-the-clock performance monitoring. These are not one-off tasks but continuous responsibilities that pull resources away from strategic development work.
- Security and compliance: Without dedicated security personnel, engineering teams get stretched dangerously thin. They’re forced to harden cluster configurations—often learning security best practices through trial and error—maintain granular access controls, and produce compliance documentation. All this must happen while tracking new vulnerabilities and constantly emerging regulatory changes. What often gets labeled as a “shared responsibility” in theory actually becomes an overwhelming distraction from core engineering work.
- Organizational resource requirements: Organizations with smaller teams often lack the necessary infrastructure, resources, and expertise in networking, security, troubleshooting, and system administration to manage their OpenShift clusters effectively. Every layer adds complexity, and each component demands specialized knowledge beyond typical application development skills. The entire operation becomes vulnerable to disruption if the team lacks these skills.
- Scalability limitations: Scaling infrastructure isn’t a straightforward process and presents its own set of challenges. While cloud platforms can provision resources on demand, expanding an on-premises OpenShift cluster can pose a significant financial and technical challenge and require manual intervention.
These factors can contribute to a higher total cost of ownership. Organizations must weigh these factors against the benefits of greater control and flexibility offered by traditional OpenShift deployments.
Understanding managed OpenShift services
Because of the challenges discussed above, managed OpenShift has emerged as a compelling alternative that offers a more efficient approach to deploying and operating OpenShift clusters. Managed OpenShift is a service that public cloud providers offer to simplify the deployment and scaling of containerized applications and to reduce the administrative burden on application teams.
The major cloud platforms providing this service are shown in the table below.
| Cloud Platform | Infrastructure Provider | Management | Support |
| --- | --- | --- | --- |
| Red Hat OpenShift Service on AWS | AWS | Red Hat and AWS | Red Hat and AWS |
| Microsoft Azure Red Hat OpenShift | Azure | Red Hat and Azure | Red Hat and Azure |
| Red Hat OpenShift on IBM Cloud | IBM | IBM | Red Hat and IBM |
Cloud providers offering managed OpenShift services
In this article, we focus on Azure Red Hat OpenShift (ARO), a managed OpenShift platform that allows organizations to deploy and scale applications quickly in a native Azure environment.
Overview of Azure Red Hat OpenShift
OpenShift has been available on Azure as a “self-managed” offering for several years, i.e., it can be deployed on Azure much as on any on-premises infrastructure. However, the setup, deployment, and day-2 operations involved in managing OpenShift require significant technical expertise and can be time-consuming. Azure Red Hat OpenShift removes this complexity by offering a fully managed OpenShift service that leverages the capabilities of the Azure cloud platform while simplifying deployment, management, and operations.
ARO is a fully managed application platform jointly engineered and supported by Red Hat and Microsoft. It simplifies OpenShift deployments on Azure by eliminating the need to manage the underlying infrastructure. A dedicated site reliability engineering (SRE) team automates, scales, and secures your clusters, handling all patching, updates, and monitoring. This lets you focus on application development while benefiting from a robust and secure platform. Azure Red Hat OpenShift clusters are deployed within your Azure subscription and billed through your existing Azure account.
High-level architecture of Azure Red Hat OpenShift
ARO leverages Azure’s infrastructure, including virtual machines, network security groups, and storage accounts, as the foundation for its deployments. This makes integrating with other services already running within your Azure account easier.
Azure Red Hat OpenShift deployments utilize a single Azure virtual network with two dedicated subnets: one for control plane nodes and another for worker nodes. Each subnet requires a minimum of a /27 subnet mask (32 addresses), though a larger subnet is recommended to accommodate future cluster scaling needs.
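When sizing these subnets, keep in mind that Azure reserves five addresses in every subnet, so not all 32 addresses of a /27 are assignable. As a quick sanity check, a short Python helper (illustrative, using only the standard library) can show the usable capacity of a candidate CIDR:

```python
import ipaddress

def usable_hosts(cidr: str, reserved: int = 5) -> int:
    """Return the number of assignable addresses in an Azure subnet.

    Azure reserves 5 addresses in every subnet: the network address,
    the broadcast address, and three platform-internal addresses.
    """
    net = ipaddress.ip_network(cidr)
    return net.num_addresses - reserved

# A /27 meets the documented minimum of 32 addresses per subnet,
# but only 27 of them remain assignable to nodes.
print(usable_hosts("10.0.0.0/27"))  # → 27
print(usable_hosts("10.0.2.0/24"))  # → 251
```

This is why a larger subnet (e.g., /24 or bigger) is recommended if you expect the cluster to scale out its worker pool later.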
The following diagram illustrates a high-level architecture of an Azure Red Hat OpenShift cluster that integrates Azure’s native services with the OpenShift platform. At its core, Azure Active Directory provides identity management, authenticating users and applications accessing the OpenShift API and administration console. The control plane consists of master nodes running critical components (API server, controller manager, etcd) as Azure VMs.
As shown in the figure, two Azure Load Balancers are employed:
- Azure Load Balancer (Master): This load balancer handles ingress traffic for OpenShift API users and provides outbound connectivity for control plane nodes.
- Azure Load Balancer (Router): This load balancer directs ingress traffic to applications running within OpenShift via the OpenShift router (ingress controller) and provides outbound connectivity for worker nodes.
Networking is handled through an Azure Virtual Network with OpenShift’s SDN managing internal pod communication. DNS resolution is achieved via Azure DNS service, while public/private IP addressing enables external access and internal cluster communication. Worker nodes run as VM scale sets hosting application pods. Azure Premium SSD Managed Disks provide persistent storage for both the control plane and applications.

Azure Red Hat OpenShift architecture (Source)
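The Azure Active Directory integration shown above is wired into OpenShift through the cluster’s OAuth resource. The sketch below follows OpenShift’s documented OpenID identity provider shape; the provider name, secret name, and tenant placeholder are illustrative and must match your own Azure AD app registration:

```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: AAD                          # display name shown on the login page (illustrative)
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: <app-registration-client-id>
      clientSecret:
        name: openid-client-secret     # Secret in openshift-config holding the client secret
      extraScopes:
      - email
      - profile
      claims:
        preferredUsername:
        - preferred_username
        email:
        - email
      issuer: https://login.microsoftonline.com/<tenant-id>/v2.0
```

Applying a resource like this lets users sign in to the console and `oc` CLI with their Azure AD identities instead of cluster-local credentials.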
Although the customer also has full cluster-admin access, Azure Red Hat OpenShift manages everything from the underlying infrastructure (the data center) up to the cluster operators. These operators, running on the control plane, are responsible for vital tasks like monitoring, updating, and maintaining cluster health. Microsoft and Red Hat jointly oversee these components, ensuring continuous cluster availability.
The support policy outlines the boundaries of support for your cluster. It’s essential to familiarize yourself with this policy and to review the responsibility matrix to ensure that you understand what actions are supported and which may not be covered.
Integration with Azure services
As a native Azure service, Azure Red Hat OpenShift offers seamless integration with a wide range of Azure infrastructure services. The following provides a high-level overview of some of them.

Integration of Azure Red Hat OpenShift with different services (Source)
It is quite possible that some components of the application stack exist outside Azure Red Hat OpenShift. For instance, an application might rely on a database provisioned directly in Azure rather than inside the cluster. In such cases, Azure Service Operator can simplify the integration of these components. Azure Service Operator facilitates an infrastructure-as-code approach by allowing definitions of Azure service dependencies to be included within the Kubernetes YAML files that describe OpenShift applications. When the operator detects new custom resources (instances of its custom resource definitions) that correspond to Azure services, it automatically translates these YAML definitions into the necessary Azure API calls to provision the requested resources.
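As a hedged illustration of this pattern, the manifests below declare an Azure resource group and a storage account alongside the application. The resource kinds follow Azure Service Operator v2 conventions, but the exact apiVersion strings vary by operator release, and all names and locations here are hypothetical:

```yaml
# Illustrative Azure Service Operator (v2) resources; verify the
# apiVersion strings against the operator release you have installed.
apiVersion: resources.azure.com/v1api20200601
kind: ResourceGroup
metadata:
  name: app-rg
  namespace: my-app
spec:
  location: eastus
---
apiVersion: storage.azure.com/v1api20210401
kind: StorageAccount
metadata:
  name: appstorage001
  namespace: my-app
spec:
  location: eastus
  kind: StorageV2
  sku:
    name: Standard_LRS
  owner:
    name: app-rg          # ties the account to the resource group above
```

Once applied with `oc apply`, the operator reconciles these objects into real Azure resources, so the application’s cloud dependencies live in the same Git repository and deployment pipeline as its Kubernetes manifests.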
Challenges of using a managed OpenShift service
A managed solution like Azure Red Hat OpenShift undoubtedly removes much of the operational burden of managing and maintaining an OpenShift cluster, allowing organizations to focus on their applications and business objectives. However, it’s essential to acknowledge that, like any managed service, it comes with inherent challenges that organizations need to consider. Here are some of them:
- Reduced control: Platforms like Azure Red Hat OpenShift handle the heavy lifting of infrastructure management, but this convenience comes at a cost. Since Microsoft and Red Hat manage the underlying components, you lose some of the granular control you’d get with a self-managed deployment. You might hit limitations when you need to tweak kernel parameters for extreme performance or customize security policies. The trade-off is worth it for most organizations, though—fewer headaches in exchange for slightly less fine-tuning.
- Vendor lock-in: Relying on a fully managed service can increase vendor lock-in. If the company needs to migrate infrastructure to a different platform, migrating OpenShift might be more challenging and costly than a self-managed deployment.
- Support scope: Support isn’t always straightforward, even with a managed support provider. The provider’s scope may exclude custom configurations or complex integration, meaning that you could be on your own if something breaks outside standard deployments. And resolving issues isn’t always quick. Since Azure Red Hat OpenShift involves both Microsoft and Red Hat, troubleshooting a cluster means:
- Your team diagnosing the issue at the application layer
- Microsoft checking the Azure layer
- Red Hat verifying OpenShift’s behavior
This back-and-forth can slow things down, especially when you need a fix in a hurry.
- Shared responsibility model: While the managed service provider secures the underlying OpenShift infrastructure, customers must remember one critical detail: Security doesn’t stop at the platform level. You’re still responsible for safeguarding your applications and data. This means ensuring that workloads are patched, access controls are correctly configured, and sensitive data remains protected—tasks that fall squarely on the customer, not the platform provider.
- Learning curve: A managed solution simplifies many aspects of deploying OpenShift, but it may also introduce new concepts and workflows associated with the managed service provider. For instance, Azure Red Hat OpenShift takes a different approach to networking than traditional deployments, requiring teams to understand how ARO interacts with Azure Virtual Networks. Similarly, authentication flows might use Azure Active Directory in ways that differ from standard OpenShift implementations.
- Cost considerations: While a managed solution simplifies management, it typically comes with a higher operational cost than managing your infrastructure. Furthermore, the level of cost optimization available within a fully managed service might be more restricted compared to a self-managed environment where you have greater control over resource allocation and scaling.
Summary comparison of managed and unmanaged OpenShift
The following table offers a summarized comparison of the two approaches.
| Feature | Managed OpenShift | Unmanaged OpenShift |
| --- | --- | --- |
| Infrastructure management | Managed by the provider | Managed by the customer |
| Control and customization | Limited control over the underlying infrastructure | High level of control over infrastructure and configurations |
| Scalability | Easier to scale | Scalability requires careful planning and effort |
| Support | Comprehensive support for both software and infrastructure | Support is required from multiple vendors |
| Operational overhead | Minimal operational overhead | High operational overhead |
| Security | Managed security updates provided by the vendor | Requires proactive security management |
| Use cases | Suitable for organizations with existing cloud infrastructures that want to minimize operational overhead and have limited technical expertise | Suitable for organizations that require high control and customization and have strong technical teams |
Whether a managed solution is better for a particular organization depends on several factors. Ultimately, the choice has to be made while considering factors such as control, security, operational overhead, and financial costs.
Backing up application data on Azure Red Hat OpenShift
Managed OpenShift platforms like Azure Red Hat OpenShift significantly simplify infrastructure management and reduce the operational burden, but they do not inherently provide a comprehensive data protection strategy. As mentioned earlier, Microsoft institutes a “shared responsibility” model for Azure Red Hat OpenShift, emphasizing that the ultimate responsibility lies with the customer to safeguard the application data. The customer must implement mature data protection solutions that go beyond basic snapshots.
While products such as Velero provide a foundational backup and restore solution for Kubernetes and OpenShift, they lack several enterprise features. This is where solutions like Trilio come into play. Trilio is engineered as an enterprise-grade platform designed to address modern organizations’ complex data protection and disaster recovery needs. Trilio distinguishes itself by offering a comprehensive suite of advanced features, including application-consistent backups, granular restores, robust role-based access control (RBAC), and the ability to transform backups for cross-cluster restores.
Trilio offers the following capabilities for Azure Red Hat OpenShift:
- Backup: Trilio enables the automated scheduling of point-in-time backups and flexible recovery options, ensuring consistent data protection according to defined policies.
- Continuous restore: Trilio’s continuous restore capabilities enable building effective disaster recovery strategies. Regardless of your cloud provider, they ensure that applications can be quickly restored after a disaster. While continuous restore requires a secondary cluster, Trilio’s innovative architecture allows for a more efficient setup: multiple primary application clusters can be continuously restored to one dedicated disaster recovery (DR) cluster. This approach cuts down on infrastructure costs compared to a traditional one-to-one primary-to-DR cluster model and simplifies management for organizations operating multiple production clusters. The result is significant cost savings and a streamlined recovery process.
- Cost-effective regional DR: Traditional disaster recovery solutions for OpenShift typically involve deploying two active clusters across different regions, requiring complex configurations at the application and network levels. This approach can quickly get expensive and complicated. Trilio offers a more agile and cost-efficient alternative: By storing backups in geographically redundant storage, such as Azure Blob Storage with cross-region replication, Trilio enables recovery to a secondary region without the overhead of a constant active-active setup. Given the rapid deployment of ARO clusters, many customers prefer Trilio’s backup and restore method over traditional DR setups, significantly reducing costs while maintaining strong disaster recovery capabilities.
- Migration and platform portability: Another significant benefit of using Trilio is platform portability, achieved via its robust transform capabilities, which effectively eliminate vendor lock-in for applications running in OpenShift. This feature empowers organizations to seamlessly migrate applications between diverse environments, such as Azure Red Hat OpenShift and Red Hat OpenShift Service on AWS (ROSA), or even on-premises clusters. For instance, customers can easily migrate applications back to on-premises infrastructure if cloud costs become prohibitive. Trilio allows customers to adapt to evolving business needs and cost structures by enabling on-the-fly modifications to backups. This portability ensures business continuity and future-proofs application deployments, allowing organizations to choose the optimal platform.
- Multi-cluster management: The integration of Trilio with Red Hat Advanced Cluster Management for Kubernetes (RHACM) facilitates the definition and orchestration of policy-driven data protection across a diverse range of Kubernetes deployments, including hybrid, multi-cloud, and edge environments.
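To give a flavor of the policy-driven workflow described above, the sketch below pairs a BackupPlan with a Backup using the custom resource kinds documented for Trilio for Kubernetes. The field names are sketched from that documentation and should be verified against your installed Trilio version; all names and the target reference are hypothetical:

```yaml
# Illustrative Trilio for Kubernetes resources; verify field names
# against the version of Trilio installed on your cluster.
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: app-backup-plan
  namespace: my-app
spec:
  backupConfig:
    target:
      name: azure-blob-target      # a pre-created Target pointing at Azure Blob Storage
  backupPlanComponents:
    helmReleases:
    - my-app                       # hypothetical Helm release to protect
---
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: app-backup-1
  namespace: my-app
spec:
  type: Full
  backupPlan:
    name: app-backup-plan
```

Creating the Backup resource triggers a point-in-time, application-aware capture according to the plan, and the same declarative objects can be managed across fleets of clusters through RHACM policies.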
Conclusion
Choosing between managed and unmanaged OpenShift deployments requires careful consideration of several factors. While managed solutions like Azure Red Hat OpenShift offer significant advantages in terms of reduced operational overhead, simplified management, and enhanced security, they may have limitations in terms of control and customization. Conversely, unmanaged deployments provide greater control but require significant in-house expertise and ongoing maintenance. Organizations must carefully evaluate their specific needs, resources, and priorities to determine the most suitable deployment model.