
TrilioVault for Kubernetes: Operator Development



Originally published on Red Hat OpenShift blog on May 21, 2020

Trilio is a native data protection solution within the Open Hybrid Cloud landscape. Over a year ago, we started fielding requests from our customers and prospects to protect their Kubernetes-based applications both in their existing on-prem environments and in the public cloud.

When we decided to take on the cloud-native challenge, Trilio had to make an important strategic decision about how to package our data-protection solution. While Kubernetes provides primitives such as StatefulSets, ReplicaSets, and DaemonSets, we needed a more programmatic way to package and deploy the product.

Helm was one option: it was heavily used in Kubernetes environments for packaging applications and has a huge developer community along with backing from major enterprises, which made it a strong candidate. However, while we were building our approach, Red Hat briefed us on Operators, a technology developed by CoreOS (which Red Hat acquired in 2018). We learned how Operators simplify the lifecycle management of an application and the management benefits they provide for customers.

Technically, because our architecture had already decoupled the application from the operator, adopting the Operator Lifecycle Manager (OLM) was easy. The OLM framework for developing and building Operators, along with its catalog-based application management, was extremely attractive and aligned with Trilio’s strategy and objective of providing a simple, self-service approach to data protection in a Kubernetes environment. As a result, we decided to leverage both deployment models to serve our customers and provide them with options.

As part of its Upstream Operator, Trilio offers a single Custom Resource Definition (CRD) that enables customers to install the correct version of the application and update it when newer versions of the software are released. Because our modular design keeps the Operator code and the application code separate, Trilio was able to take full advantage of the OLM framework by exposing our application CRDs directly, providing a better customer experience. With this approach, Trilio leverages our Upstream Operator for upstream environments. For OLM-based environments such as OpenShift, the OLM framework does the job of our Operator and serves the following functions:

  • Managing installation
  • Managing the lifecycle of the application
  • Ensuring application availability

With this two-pronged strategy, we were able to publish the application CRDs that customers would leverage to “operate” our new cloud-native data-protection product, TrilioVault for Kubernetes. This approach also gave us the opportunity to provide our customers with an integrated experience for managing Trilio’s custom resources: a single UI in which the customer manages not only their applications but also their data protection. While this strategy of exposing CRDs was more time-consuming for Trilio, it was the correct route for our customers and reflected Kubernetes design principles.
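To make the CRD-based approach concrete, below is a minimal sketch of how an application CRD might be defined in Go with the Operator SDK. The BackupPlan kind, its API group, and its fields are hypothetical illustrations for this article, not TrilioVault’s published API.

```go
// Package v1 sketches a hypothetical application CRD of the kind an
// Operator SDK project exposes; the names and fields are illustrative only.
package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// BackupPlanSpec declares what to protect and how often (hypothetical fields).
type BackupPlanSpec struct {
	// ApplicationNamespace is the namespace of the application to protect.
	ApplicationNamespace string `json:"applicationNamespace"`
	// Schedule is a cron-style schedule for recurring backups.
	Schedule string `json:"schedule,omitempty"`
	// Target names the backup target, e.g. an object-storage location.
	Target string `json:"target"`
}

// BackupPlanStatus reports the observed state of the plan.
type BackupPlanStatus struct {
	Phase string `json:"phase,omitempty"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// BackupPlan is the Schema for the hypothetical backupplans API.
type BackupPlan struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   BackupPlanSpec   `json:"spec,omitempty"`
	Status BackupPlanStatus `json:"status,omitempty"`
}
```

Once a type like this is registered and its CRD is installed, customers declare backups the same way they declare any other Kubernetes object, which is what enables the integrated, single-UI experience described above.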

One of the best aspects of the OLM framework that customers will enjoy is how updates are delivered to the TrilioVault for Kubernetes application. Not only are Role-Based Access Control (RBAC) policies adhered to (only the cluster-admin role can manage Operators), but the user can also set the approval policy for updates to “automatic” or “manual.” This process of delivering updates to customers is completely automated and managed through the Operator certification program.
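The approval policy lives on the OLM Subscription that tracks the Operator. As a hedged sketch, the snippet below builds a Subscription using the Go types from the operator-framework API and sets the install-plan approval to Manual; the package, channel, and catalog names are placeholders, not Trilio’s published values.

```go
package main

import (
	"encoding/json"
	"fmt"

	operatorsv1alpha1 "github.com/operator-framework/api/pkg/operators/v1alpha1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Subscription pointing at an Operator package in a catalog; the names
	// below are placeholders for illustration only.
	sub := &operatorsv1alpha1.Subscription{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "operators.coreos.com/v1alpha1",
			Kind:       "Subscription",
		},
		ObjectMeta: metav1.ObjectMeta{
			Name:      "triliovault-operator", // placeholder name
			Namespace: "openshift-operators",
		},
		Spec: &operatorsv1alpha1.SubscriptionSpec{
			Package:                "triliovault-operator", // placeholder package
			Channel:                "stable",               // placeholder channel
			CatalogSource:          "certified-operators",
			CatalogSourceNamespace: "openshift-marketplace",
			// Manual approval: a cluster admin must approve each update's
			// install plan; ApprovalAutomatic applies updates as released.
			InstallPlanApproval: operatorsv1alpha1.ApprovalManual,
		},
	}

	out, _ := json.MarshalIndent(sub, "", "  ")
	fmt.Println(string(out))
}
```

Switching InstallPlanApproval to Automatic lets OLM apply new versions from the channel as soon as they are published to the catalog.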

Overall, the tools and engines provided by Red Hat for developing and delivering an Operator made the entire process painless. The Operator Software Development Kit (SDK) provides a CLI-based tool to create, build, and deploy the Operator. The CLI was extremely valuable in terms of efficiency and saved time during development.
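For context, the code the SDK scaffolds centers on a reconcile loop. The sketch below shows one possible shape of such a controller using controller-runtime; it reuses the hypothetical BackupPlan type from the earlier sketch, the module path in the import is made up, and generated helpers such as DeepCopyObject are omitted.

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"

	v1 "example.com/triliovault-sketch/api/v1" // hypothetical module path
)

// BackupPlanReconciler reconciles the hypothetical BackupPlan objects.
type BackupPlanReconciler struct {
	client.Client
}

// Reconcile drives the cluster toward the state declared in a BackupPlan.
func (r *BackupPlanReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)

	var plan v1.BackupPlan
	if err := r.Get(ctx, req.NamespacedName, &plan); err != nil {
		// The object may have been deleted; nothing left to do.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Illustrative placeholder: ensure the backup machinery that plan.Spec
	// describes (schedule, target) actually exists, then record status.
	logger.Info("reconciling", "backupplan", req.NamespacedName)

	return ctrl.Result{}, nil
}

// SetupWithManager registers the controller with the manager.
func (r *BackupPlanReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&v1.BackupPlan{}).
		Complete(r)
}
```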

Running validation tests locally also helped with certification. The Operator certification process was valuable not only in ensuring we followed Red Hat best practices but also in validating our code from a customer’s perspective. The entire customer journey, from the OperatorHub listing through the install and deployment experience, was tested. The Red Hat team was great to work with; their attention to detail was on display when they ensured the metadata and cosmetics around the Operator were styled correctly.

TrilioVault for Kubernetes has been launched for Early Access with a Red Hat OpenShift Certified Operator, but our journey has just begun. Today, TrilioVault offers basic installation, seamless upgrades, and lifecycle management capabilities for its Operator. Going forward, we will focus on metrics and monitoring to provide data intelligence, and on achieving massive scale and parallelism through deep insights and autopilot capabilities, while continuing to deliver superior and innovative data-protection features.

You can get your hands on our Operator directly within the embedded OperatorHub in OpenShift.

You can also watch videos and test-drive TrilioVault for Kubernetes here.


Author

Stefan Kroll
