
How (and Why) to Use Ansible with TrilioVault for Kubernetes

When enterprise data centers started delivering applications and services on distributed, complex, and heterogeneous systems, developers needed a way to manage them automatically via code. This practice is known as infrastructure as code (IaC): automating infrastructure deployment and orchestration.

IaC is key to DevOps engineering and an essential part of successfully managing environments, large or small. This is especially true in cloud-native environments, where management is even more complicated.

Let’s take a look at some of the challenges of management in containerized environments, the tools you need to solve them, and how to make the process easier using Ansible and TrilioVault for Kubernetes.

Management Challenges in Cloud-Native Environments

In Kubernetes applications, what used to be a single virtual machine becomes thousands of objects in a cluster: pods, services, config maps, secrets, and more. This makes management even more complex and time-consuming. As a result, managing your environments via code is no longer a nice-to-have, but a must-have.

But the challenges don’t stop there.

Even if you’re managing infrastructure and related components via code, different systems within your environment still rely on disparate code methodologies and tools.

The Solution: Ansible from Red Hat + TrilioVault for Kubernetes

To automate deployment and management of all of your infrastructure, you need a single, unified automation language. Ansible from Red Hat and TrilioVault for Kubernetes are here to help.

Ansible from Red Hat

Ansible is one of the best IaC tools available, no matter how big or small your organization. It helps you manage configurations, deploy and deliver applications, and automate your IT operations overall.

According to IDC:

  • The 5-year return on investment is 667% 
  • IT infrastructure management is 30% more efficient
  • Network infrastructure management is 29% more efficient
  • New storage resources are deployed 75% faster

If you’re using Kubernetes, Ansible integrates easily through its Kubernetes collection, which supports kubectl-style operations (get, apply, and so on) against a cluster. There’s also a Helm module that streamlines the deployment and management of Helm charts via Ansible.
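As a quick sketch of what this looks like in a playbook (illustrative only; the manifest file name and namespace here are made up, and the `k8s`/`k8s_info` short module names match the Ansible 2.9 style used later in this guide):

```yaml
---
- name: Manage Kubernetes objects from Ansible
  hosts: localhost
  tasks:
    # Roughly equivalent to `kubectl apply -f deployment.yaml -n demo`
    - name: Apply a manifest
      k8s:
        state: present
        namespace: demo
        src: deployment.yaml   # hypothetical manifest file

    # Roughly equivalent to `kubectl get pods -n demo`
    - name: Gather pod information
      k8s_info:
        kind: Pod
        namespace: demo
      register: pod_list
```

The same declarative pattern (state: present/absent) covers most day-to-day kubectl operations.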

Now, you’re ready to bring your cloud-native applications and tools into the automation fold.

TrilioVault for Kubernetes

TrilioVault for Kubernetes (TVK) is a cloud-native application designed for data protection and data management use cases. Rather than maintaining a separate CLI, TVK integrates natively with kubectl and other standard Kubernetes concepts.

This way, TVK can be fully managed and operated through kubectl. For example, TVK ships plugins distributed via Krew, the kubectl plugin manager.

TVK is a truly Kubernetes-native application, built on Kubernetes concepts such as CSI, secrets, and annotations, as well as the standard mechanisms for integrating with systems outside the cluster.

Ansible + TVK: IaC Made Simple

By combining TVK with Ansible, you can manage your data protection processes in an automated way. 

With Ansible’s Kubernetes (k8s) module and TVK’s native integration into the kubectl framework, TVK works with Ansible out of the box. You don’t need to do any configuration or module installation.

Because TVK is packaged as a Helm chart for any Cloud Native Computing Foundation (CNCF)-certified distribution, the Ansible Helm module supports a one-command install-and-configure experience and can remotely deploy the solution to several clusters at once.
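As an illustrative sketch of that multi-cluster rollout (the chart repository URL, chart name, and the cluster_kubeconfigs variable below are assumptions, not Trilio-documented values; check the TVK install docs for the real ones):

```yaml
---
- name: Deploy TVK to several clusters at once
  hosts: localhost
  tasks:
    # Repository URL is a placeholder assumption
    - name: Add the Trilio Helm chart repository
      community.kubernetes.helm_repository:
        name: triliovault
        repo_url: "https://charts.example.com/trilio"

    # Chart and release names are assumptions; one loop iteration per cluster
    - name: Install the TVK operator chart on each cluster
      community.kubernetes.helm:
        name: triliovault-operator
        chart_ref: triliovault/k8s-triliovault-operator
        release_namespace: trilio-system
        kubeconfig: "{{ item }}"
      loop: "{{ cluster_kubeconfigs }}"   # hypothetical list of kubeconfig paths
```

Looping over kubeconfig files is one simple way to fan the same install out to a fleet of clusters from a single controller node.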

Ready to get started? Below, you’ll find step-by-step instructions and a video tutorial.

Getting Started with Ansible and TrilioVault

In this section, you’ll learn how to create a target, a backup plan, and a backup of a MySQL app, using the Ansible k8s module and TVK.

Environment

  • Kubernetes cluster
  • Ansible v2.9
  • TVK v2.7

Objective

The objective is to leverage the Kubernetes Ansible module to interact with TVK resources: creating objects such as targets and backup plans, and launching backups through automation.

This is an initial guide to setting up the basics of interacting with TVK via Ansible. The end goal is to manage your data protection platform in a completely automated way.

Prerequisites

This guide was written on a Linux laptop running Fedora 34.

1. How you install the following tools on your Ansible controller node depends on your operating system; since this guide uses Fedora, yum or dnf may be used. If the Python version used in your shell is not the correct one and you experience errors installing the tools, you might have to modify your PATH. You can check which Python your Ansible installation uses as follows:

[st@rtrek ~]$ ansible --version
ansible 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/st/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.9 (main, Nov 19 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)]
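If the reported interpreter is wrong, one way to adjust it is sketched below (the /usr/bin directory is only an example; substitute the directory that holds your intended Python):

```shell
# Prepend the directory containing the desired Python to PATH so it is
# resolved first. /usr/bin is just an example path.
export PATH="/usr/bin:$PATH"

# Alternatively, pin the interpreter for Ansible explicitly instead of
# editing PATH (per-run or per-host inventory variable):
# ansible localhost -m ping -e ansible_python_interpreter=/usr/bin/python3
```

Pinning ansible_python_interpreter is usually the less invasive option, since it does not affect the rest of your shell session.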

2. An Ansible controller node is needed, which will be used to connect to the K8s cluster. Python 3.6 or later must be installed on this node. Check which version you have installed by executing the following command:

[st@rtrek ~]$ python -V
Python 3.9.9

3. The Python openshift library, version 0.6 or later, is required. On Fedora, the package manager can install it. Check which version you have installed by executing the following command:

[st@rtrek ~]$ rpm -qa | grep openshift
python3-openshift-0.11.2-3.fc34.noarch

4. PyYAML version 3.11 or later is required. Check which version you have installed by executing the following command:

[st@rtrek ~]$ rpm -qa | grep pyyaml
python3-pyyaml-5.4.1-2.fc34.x86_64

5. Install the kubectl command on the Ansible controller node, using the following instructions for guidance: Install and Set Up kubectl on Linux.

6. This is the list of files required to automate everything, which can be cloned from GitHub.

[st@rtrek ansiblemysql]$ ls -lsa
total 72
4 drwxr-xr-x. 3 st st 4096 Mar 10 14:12 .
4 drwxr-xr-x. 5 st st 4096 Mar 9 13:26 ..
4 -rw-r--r--. 1 st st 215 Mar 9 13:26 ansible.cfg
4 -rw-r--r--. 1 st st 2430 Mar 9 13:26 app.yaml
4 -rw-r--r--. 1 st st 267 Mar 9 13:26 backupplan.yaml
4 -rw-r--r--. 1 st st 168 Mar 9 13:26 backup.yaml
4 -rw-r--r--. 1 st st 387 Mar 9 13:26 createbackupplan.yaml
4 -rw-r--r--. 1 st st 348 Mar 9 13:26 createbackup.yaml
4 -rw-r--r--. 1 st st 316 Mar 9 13:26 createnamespace.yaml
4 -rw-r--r--. 1 st st 670 Mar 9 13:26 createtarget.yaml
4 -rw-r--r--. 1 st st 297 Mar 9 13:26 delete.yaml
4 drwxr-xr-x. 8 st st 4096 Mar 9 13:29 .git
0 -rw-r--r--. 1 st st 0 Mar 9 13:26 inventory
4 -rw-r--r--. 1 st st 327 Mar 9 13:26 launchapp.yaml
4 -rw-r--r--. 1 st st 202 Mar 9 13:26 namespace.yaml
4 -rw-r--r--. 1 st st 49 Mar 9 13:26 README.md
4 -rw-r--r--. 1 st st 164 Mar 9 13:27 secret.yaml
4 -rw-------. 1 st st 833 Mar 10 14:13 stagingconfig
4 -rw-r--r--. 1 st st 345 Mar 9 13:26 target.yaml

7. For security reasons, the secret.yaml file should be modified. This file holds the access and secret keys for the AWS S3 bucket being used as a backup target. Change accessKey and secretKey to the credentials that you use for your bucket:

[st@rtrek ansiblemysql]$ cat secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
type: Opaque
stringData:
  accessKey: AS3BDRXYCVQMCQJPWAAA
  secretKey: C6yR79d6WXXXXX8y5KtfewopUM3MWRDDS

8. The bucketName value in the target.yaml file must be modified to reflect the correct bucket:

[st@rtrek ansiblemysql]$ cat target.yaml
apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: hopper-demo-s3-target
spec:
  type: ObjectStore
  vendor: AWS
  objectStoreCredentials:
    url: "https://s3.amazonaws.com"
    credentialSecret:
      name: demo-secret
      namespace: hopper
    bucketName: "bhtvault-bucket"
    region: "us-east-1"
  thresholdCapacity: 5Gi

9. In certain playbooks, you’ll see a reference to a kubeconfig file named “stagingconfig”. You must generate your own kubeconfig file and reference it wherever the kubeconfig key appears, as in the next code snippet. In every file where this kubeconfig file is referenced, replace it with whatever name you have given to yours. In this example configuration, the kubeconfig file is in the same folder, which is why an absolute or relative path is not required.

[st@rtrek ansiblemysql]$ cat createtarget.yaml
---
- name: Ensure a backup target is available
  hosts: localhost
  tasks:
    - name: Copy secret yaml file
      become: true
      copy:
        src: secret.yaml
        dest: /tmp/secret.yaml

    - name: Ensure Secret is present
      k8s:
        kubeconfig: "stagingconfig"
        state: present
        namespace: hopper
        src: /tmp/secret.yaml
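The play recap shown later in step c also lists two further tasks, “Copy backup target yaml file” and “Ensure backup target is available”. A sketch of those remaining tasks, continuing the same copy-then-apply pattern (the repository file may differ in detail):

```yaml
    - name: Copy backup target yaml file
      become: true
      copy:
        src: target.yaml
        dest: /tmp/target.yaml

    - name: Ensure backup target is available
      k8s:
        kubeconfig: "stagingconfig"
        state: present
        namespace: hopper
        src: /tmp/target.yaml
```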

If you don’t know how to configure this kubeconfig file, one way of doing it on an OpenShift cluster is to execute the following two commands. This creates a file named stagingconfig, which has the credentials needed by the playbooks:

[st@rtrek ansiblemysql]$ export KUBECONFIG=<stagingconfig>
[st@rtrek ansiblemysql]$ oc login --username=kubeadmin --password=<PASSWORD> --server=https://<YOUR-K8S-CLUSTER>:6443

10. Finally, a namespace and application are required before running all the playbooks and launching the backup. To facilitate this, three files have been added to the GitHub repository:

  • createnamespace.yaml – This creates a namespace.
  • launchapp.yaml – This launches an app inside the namespace.
  • app.yaml – This holds all the information related to the app (it is used by the launchapp.yaml playbook).
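As an illustration, a minimal namespace manifest for the “hopper” namespace used in this lab might look like the following sketch (the repository copy may differ):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: hopper
```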

11. Before deploying everything, you must first check that you are in the proper namespace (in the example lab below, this is named “hopper”). Then launch the application that you wish to back up later in the process:

a.

[st@rtrek ansiblemysql]$ oc project hopper
Now using project "hopper" on server "https://api.dc4.demo.presales.trilio.io:6443".
[st@rtrek ansiblemysql]$ ansible-playbook launchapp.yaml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [Launch an app] **************************************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************************************************************************************************************************
ok: [localhost]

TASK [Copying the app yaml file] **************************************************************************************************************************************************************************************************************************
ok: [localhost]

TASK [Launching the app] **********************************************************************************************************************************************************************************************************************************
changed: [localhost]

PLAY RECAP ************************************************************************************************************************************************************************************************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

[st@rtrek ansiblemysql]$

b. Check that everything is deployed correctly:

[st@rtrek ansiblemysql]$ oc get all
NAME READY STATUS RESTARTS AGE
pod/k8s-demo-app-frontend-7c4bdbf9b-25k8w 1/1 Running 0 75s
pod/k8s-demo-app-frontend-7c4bdbf9b-nw776 1/1 Running 0 75s
pod/k8s-demo-app-frontend-7c4bdbf9b-vn5dk 1/1 Running 0 75s
pod/k8s-demo-app-mysql-754f46dbd7-w56xt 1/1 Running 0 76s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/k8s-demo-app-frontend ClusterIP 172.30.186.239 <none> 80/TCP 76s
service/k8s-demo-app-mysql ClusterIP 172.30.128.64 <none> 3306/TCP 77s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/k8s-demo-app-frontend 3/3 3 3 76s
deployment.apps/k8s-demo-app-mysql 1/1 1 1 77s

NAME DESIRED CURRENT READY AGE
replicaset.apps/k8s-demo-app-frontend-7c4bdbf9b 3 3 3 76s
replicaset.apps/k8s-demo-app-mysql-754f46dbd7 1 1 1 77s
[st@rtrek ansiblemysql]$

c. Next, run the first Ansible playbook, which creates the backup target (in this example, an AWS S3 bucket). Once the STATUS field displays Available, proceed to the next step:

[st@rtrek ansiblemysql]$ ansible-playbook createtarget.yaml 
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [Ensure a backup target is available] ***************************************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************************************************************
ok: [localhost]

TASK [Copy secret yaml file] *****************************************************************************************************************************************************************
changed: [localhost]

TASK [Ensure Secret is present] **************************************************************************************************************************************************************
changed: [localhost]

TASK [Copy backup target yaml file] **********************************************************************************************************************************************************
ok: [localhost]

TASK [Ensure backup target is available] *****************************************************************************************************************************************************
changed: [localhost]

PLAY RECAP ***********************************************************************************************************************************************************************************
localhost : ok=5 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

[st@rtrek ansiblemysql]$ oc get target
NAME TYPE THRESHOLD CAPACITY VENDOR STATUS BROWSING ENABLED
hopper-demo-s3-target ObjectStore 5Gi AWS Available 
[st@rtrek ansiblemysql]$

d. Now launch the playbook that creates the backup plan:

[st@rtrek ansiblemysql]$ ansible-playbook createbackupplan.yaml 
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [Ensure a backup plan is ready] *********************************************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************************************************************
ok: [localhost]

TASK [Copy the backup plan yaml file] ********************************************************************************************************************************************************
changed: [localhost]

TASK [Ensure the backup plan is available] ***************************************************************************************************************************************************
changed: [localhost]

PLAY RECAP ***********************************************************************************************************************************************************************************
localhost : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

[st@rtrek ansiblemysql]$ oc get backupplan
NAME TARGET RETENTION POLICY INCREMENTAL SCHEDULE FULL BACKUP SCHEDULE STATUS
backupplan-demo hopper-demo-s3-target Available
[st@rtrek ansiblemysql]$
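For reference, the BackupPlan resource applied here might look like the sketch below. The names come from the output above; the field layout follows TVK’s BackupPlan CRD, but the actual repository file may differ, and the matchLabels selector is an assumption based on the label-based backup that appears in the next step:

```yaml
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: backupplan-demo
  namespace: hopper
spec:
  backupConfig:
    target:                       # the target created in step c
      name: hopper-demo-s3-target
      namespace: hopper
  backupPlanComponents:
    custom:
      - matchLabels:              # assumed selector for the MySQL app
          app: k8s-demo-app-mysql
```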

e. Execute the playbook that creates the backup of the namespace. As the backup proceeds, you can monitor progress until 100 is displayed in the PERCENTAGE COMPLETED field:

[st@rtrek ansiblemysql]$ ansible-playbook createbackup.yaml 
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [Launch a backup] ***********************************************************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************************************************************
ok: [localhost]

TASK [Copy the backup plan yaml file] ********************************************************************************************************************************************************
changed: [localhost]

TASK [Ensure the backup runs] ****************************************************************************************************************************************************************
changed: [localhost]

PLAY RECAP ***********************************************************************************************************************************************************************************
localhost : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

[st@rtrek ansiblemysql]$ oc get backups
NAME BACKUPPLAN BACKUP TYPE STATUS DATA SIZE CREATION TIME START TIME END TIME PERCENTAGE COMPLETED BACKUP SCOPE DURATION
mysql-label-backup backupplan-demo Full InProgress 0 2022-04-19T18:17:14Z 2022-04-19T18:17:14Z App 
[st@rtrek ansiblemysql]$ oc get backups
NAME BACKUPPLAN BACKUP TYPE STATUS DATA SIZE CREATION TIME START TIME END TIME PERCENTAGE COMPLETED BACKUP SCOPE DURATION
mysql-label-backup backupplan-demo Full InProgress 0 2022-04-19T18:17:14Z 2022-04-19T18:17:14Z 20 App 
[st@rtrek ansiblemysql]$ oc get backups
NAME BACKUPPLAN BACKUP TYPE STATUS DATA SIZE CREATION TIME START TIME END TIME PERCENTAGE COMPLETED BACKUP SCOPE DURATION
mysql-label-backup backupplan-demo Full Available 126156800 2022-04-19T18:17:14Z 2022-04-19T18:17:14Z 2022-04-19T18:19:25Z 100 App 2m11.160153441s
[st@rtrek ansiblemysql]$
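The Backup object driving this run (named mysql-label-backup, type Full, tied to backupplan-demo, per the output above) can be sketched as follows; the repository’s backup.yaml may differ slightly in detail:

```yaml
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: mysql-label-backup
  namespace: hopper
spec:
  type: Full                # a full backup, as shown in the BACKUP TYPE column
  backupPlan:
    name: backupplan-demo   # the plan created in step d
    namespace: hopper
```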

f. To verify that the backup is in progress in the TVK UI, check the Last Backup Status field:

[Screenshot: Last Backup Status field in the TrilioVault for Kubernetes user interface]

Want to see it in action? The video below shows you the process, including how to:

  • Create a backup target using an Ansible playbook.
  • Create a backup plan with Ansible.
  • Launch a backup and show progress through to completion.

Resources to Help You Implement 

For more help, check out Trilio’s documentation and resources.

Benefits of Managing TVK through Ansible

IaC is the key to successfully managing your environments, whether large or small. TVK and Ansible make it easy to automate application management and increase your resiliency.

Here are some of the high-level benefits of managing TVK through Ansible:

  1. No new skills needed: There’s nothing new to learn beyond existing knowledge of kubectl. 
  2. Zero configuration required: TVK works with Ansible’s Kubernetes module out of the box.
  3. Install from anywhere: Remote installation of TVK into Kubernetes clusters (serial/parallel).
  4. All TVK functionality works: for example, backup and restore policy creation via kubectl commands.
  5. Increase your application resiliency as part of application deployment.
  6. Create a disaster recovery plan to restore your minimum viable business. Make sure to validate it consistently and activate it when you need it.
  7. Automatic support with Ansible for any new operation added by TVK as part of future releases.