Kubernetes FAQ
- Which T4K version am I running?
# Upstream
kubectl version   # alternative: kubectl get nodes -o yaml | grep -w kubeletVersion
kubectl describe deployment k8s-triliovault-control-plane | grep RELEASE_TAG
# RHOCP/OKD
oc version   # RHOCP/OKD version. Alternative: oc get nodes -o yaml | grep -w containerRuntimeVersion  # rhaos<ocp_version>.<commit>.<rhel_version>
oc get nodes  # upstream K8s version running on each node. Alternative: oc get nodes -o yaml | grep -w kubeletVersion
oc describe deployment k8s-triliovault-control-plane -n trilio-system | grep RELEASE_TAG        # v3.x or higher
oc describe deployment k8s-triliovault-control-plane -n openshift-operators | grep RELEASE_TAG  # Pre-3.x
- How can I install T4K Log Collector?
1. Run:
( set -x; cd "$(mktemp -d)" && OS="$(uname | tr '[:upper:]' '[:lower:]')" && ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" && KREW="krew-${OS}_${ARCH}" && curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" && tar zxvf "${KREW}.tar.gz" && ./"${KREW}" install krew )
2. Add the following line to ~/.bashrc:
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
3. Install T4K Log Collector:
kubectl krew index add tvk-plugins https://github.com/trilioData/tvk-plugins.git
kubectl krew update
kubectl krew install tvk-plugins/tvk-log-collector
#kubectl krew upgrade tvk-log-collector    # upgrade the tvk-log-collector plugin
#kubectl krew uninstall tvk-log-collector  # uninstall
4. Use it:
kubectl tvk-log-collector --clustered --log-level debug
More information here: https://docs.trilio.io/kubernetes/v/3.0.x/krew-plugins/tvk-log-collector
- How to activate debug logs in T4K?
# Upstream
kubectl patch triliovaultmanager triliovault-manager -p '{"spec":{"logLevel":"Debug","datamoverLogLevel":"Debug"}}' --type merge
#kubectl edit triliovaultmanager triliovault-manager
#kubectl get configmap k8s-triliovault-config -o yaml | grep -i loglevel
#kubectl patch triliovaultmanager triliovault-manager -p '{"spec":{"logLevel":null,"datamoverLogLevel":null}}' --type merge  # revert to standard level
# RHOCP/OKD (v3.x or higher)
oc patch triliovaultmanager triliovault-manager -n trilio-system -p '{"spec":{"logLevel":"Debug","datamoverLogLevel":"Debug"}}' --type merge
#oc edit triliovaultmanager triliovault-manager -n trilio-system
#oc get configmap k8s-triliovault-config -n trilio-system -o yaml | grep -i loglevel
#oc patch triliovaultmanager triliovault-manager -n trilio-system -p '{"spec":{"logLevel":null,"datamoverLogLevel":null}}' --type merge  # revert to standard level
# RHOCP/OKD (Pre-3.x)
oc patch configmap k8s-triliovault-config -n openshift-operators -p '{"data":{"tvkConfig":"name: tvk-instance\nlogLevel: Debug\ndatamoverLogLevel: Debug"}}'
#oc edit configmap -n openshift-operators k8s-triliovault-config
#oc patch configmap k8s-triliovault-config -n openshift-operators -p '{"data":{"tvkConfig":"name: tvk-instance"}}'  # revert to standard level
- How is the backup data stored inside the Trilio target?
The paths inside the target have the following format:
./<target_path>/76eef5b5-c47e-4a0e-a678-2d5c07e14cc1/263a4aae-bde1-49cd-8685-b3869328f0b4
# First UID is from the BackupPlan:  kubectl get backupplan <bakplan> -o jsonpath='{.metadata.uid}'
# Second UID is from the Backup:     kubectl get backup <bak> -o jsonpath='{.metadata.uid}'
# When using ClusterBackup, there will be three directories under <target_path> (assuming 2 namespaces are backed up):
# ./<target_path>/<ClusterBackupPlan_UID>/<ClusterBackup_UID>
# ./<target_path>/<BackupPlan_NS1_UID>/<Backup_NS1_UID>
# ./<target_path>/<BackupPlan_NS2_UID>/<Backup_NS2_UID>
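As a sketch of that layout, the full on-target path can be assembled from the two UIDs. The target path below is a hypothetical mount point, and the UID values are the placeholders from the example above; on a live cluster they would come from the kubectl jsonpath queries shown:

```shell
# Sketch: assemble the on-target backup path from the two UIDs.
# TARGET_PATH is a hypothetical mount point; both UIDs are placeholder values.
# On a real cluster, obtain them with:
#   kubectl get backupplan <bakplan> -o jsonpath='{.metadata.uid}'
#   kubectl get backup <bak> -o jsonpath='{.metadata.uid}'
TARGET_PATH="/mnt/trilio-target"
BACKUPPLAN_UID="76eef5b5-c47e-4a0e-a678-2d5c07e14cc1"
BACKUP_UID="263a4aae-bde1-49cd-8685-b3869328f0b4"

# The backup data for this Backup lives under:
echo "${TARGET_PATH}/${BACKUPPLAN_UID}/${BACKUP_UID}"
```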
- How can I test an S3 bucket before installing any Trilio-related software?
Connecting to and testing an S3 bucket:
sudo apt install -y awscli  # Ubuntu
sudo yum install -y awscli  # RHEL
#curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && unzip awscliv2.zip && sudo ./aws/install  # if no repos are available
#sudo pip3 install awscli  # this command can also be used if running from a TVM. Using sudo is MANDATORY so we don't touch the virtual environment!
export AWS_ACCESS_KEY_ID=Z3GZYLQN7Jaaaaaaaaab
export AWS_SECRET_ACCESS_KEY=abcdefghvlkdNzvvzmrkFpd1R5pKg4aoME7IhSXp
export AWS_DEFAULT_REGION=default
export AWS_REGION=${AWS_DEFAULT_REGION}
#export AWS_CA_BUNDLE=./ca-bundle.crt  # to specify a CA bundle. To completely bypass the SSL check, add --no-verify-ssl to aws commands
export S3_ENDPOINT_URI=http://ceph.trilio.demo/  # or http://172.22.0.3/
export S3_TEST_BUCKET_URI=bucket1
dd if=/dev/urandom of=./testimage.img bs=1K count=102400 iflag=fullblock  # create a 100MB test image
aws s3 --endpoint-url $S3_ENDPOINT_URI mb s3://${S3_TEST_BUCKET_URI}
aws s3 --endpoint-url $S3_ENDPOINT_URI ls | grep ${S3_TEST_BUCKET_URI}  # confirm the bucket exists
aws s3 --endpoint-url $S3_ENDPOINT_URI cp ./testimage.img s3://${S3_TEST_BUCKET_URI}/
aws s3 --endpoint-url $S3_ENDPOINT_URI ls s3://${S3_TEST_BUCKET_URI} | grep testimage.img  # confirm the image is in the bucket
aws s3 --endpoint-url $S3_ENDPOINT_URI rm s3://${S3_TEST_BUCKET_URI}/testimage.img
rm -f ./testimage.img
#aws s3 --endpoint-url $S3_ENDPOINT_URI rb s3://${S3_TEST_BUCKET_URI}  # only if this bucket was created only for this purpose. Add --force to forcefully delete all contents
# Note: if any issues are found while running the aws commands above, get more detail by adding the --debug flag
- How do I enable logging for Trilio in the UI on Kubernetes?
During the installation of Trilio, you can enable observability to view logs directly in the UI instead of using the CLI. For detailed instructions, please refer to the Trilio Observability Integration documentation.
- Does T4K support backup of hotplug disks connected to Virtual Machines running on OpenShift?
No. T4K cannot back up hotplug disks connected to Virtual Machines running on OpenShift.