Kubernetes, often written as K8s for short, is one of the leading container orchestration platforms. It provides a robust framework for developers and IT teams to manage large and complex containerized applications.
The platform exposes a RESTful API through the kube-apiserver component of the control plane for managing and operating cluster objects. The API can be accessed via the kubectl command-line utility or the K8s dashboard, both of which are provided as part of the Kubernetes distribution. The kubectl utility offers a simple yet powerful interface to the K8s control plane for creating, updating, and deleting K8s resources.
In this article, we cover the most common use cases of kubectl and its command-line syntax. We discuss best practices for interacting with the K8s cluster and walk through troubleshooting steps for an application deployment. We also cover how to perform backup and recovery of your Kubernetes cluster.
Kubectl commonly used commands
Before we discuss the best practices for using kubectl, let’s familiarize ourselves with some commonly used kubectl commands and the tasks they perform in managing Kubernetes resources.
Note that a few commands presented in a marked section at the end of the table use plugins installed via Krew. Krew is a plugin manager for kubectl; its plugin repository offers various tools that enhance the kubectl experience by simplifying the discovery, installation, and management of plugins.
Command | Description |
Kubectl basic commands – getting information | |
kubectl cluster-info | View Cluster Information |
kubectl get nodes | View All Nodes |
kubectl get pods | View All Pods |
kubectl get services | View All Services |
kubectl get deployments | View All Deployments |
Pod Management | |
kubectl run <pod-name> --image=<image-name> | Create a Pod |
kubectl delete pod <pod-name> | Delete a Pod |
kubectl describe pod <pod-name> | Describe a Pod |
kubectl logs <pod-name> | Log output from a Pod |
kubectl exec -it <pod-name> -- /bin/sh | Start an interactive shell in a pod |
Deployment Management | |
kubectl create deployment <deployment-name> --image=<image-name> | Create a Deployment |
kubectl scale deployment <deployment-name> --replicas=<number-of-replicas> | Scale a Deployment |
kubectl set image deployment/<deployment-name> <container-name>=<new-image-name> | Update a Deployment |
kubectl delete deployment <deployment-name> | Delete a Deployment |
Service Management | |
kubectl expose deployment <deployment-name> --type=<service-type> --port=<port> | Create a Service |
kubectl describe service <service-name> | View Details of a Service |
kubectl delete service <service-name> | Delete a Service |
Namespaces | |
kubectl get namespaces | List all Namespaces |
kubectl create namespace <namespace-name> | Create a Namespace |
kubectl delete namespace <namespace-name> | Delete a Namespace |
kubectl config set-context --current --namespace=<namespace-name> | Switch context to a Namespace |
Configuration and Management | |
kubectl apply -f <file-name>.yaml | Apply a configuration from a file |
kubectl delete -f <file-name>.yaml | Delete Resources from a Configuration File |
kubectl get all | List common resources (pods, services, deployments, etc.) in the current namespace |
kubectl get <resource> <resource-name> -o yaml | View Configuration of a Resource |
Troubleshooting Commands | |
kubectl get events | View Events |
kubectl describe pod <pod-name> | View Pod Status |
kubectl exec -it <pod-name> -- <command> | Execute a Command in a Pod |
kubectl logs -f <pod-name> | Follow the logs of a pod in real-time |
kubectl top pod | Display resource usage for pods |
kubectl top node | Display resource usage for nodes |
Node Management | |
kubectl get nodes | View All Nodes |
kubectl cordon <node-name> | Mark a node as unschedulable |
kubectl drain <node-name> | Evict all pods from a node |
kubectl uncordon <node-name> | Mark a node as schedulable |
Krew plugins | |
kubectl krew install neat | Format kubectl get output for better readability. |
kubectl krew install tree | Visualize resource hierarchies and relationships |
kubectl tree deployment <deployment-name> | Display a tree view of a deployment, showing its associated pods, replica sets, and other related objects |
kubectl krew install ctx | Manage and switch between multiple Kubernetes contexts. |
kubectl krew install resources | View resource requests and limits for pods and containers. |
Summary of kubectl best practices
Best practice | Description |
Use namespaces | Isolate different environments, teams, or applications within the same cluster. |
Label and annotate resources | Apply labels across resources for easier management, filtering, and grouping (see the example after this table). |
Use declarative configurations | Define your desired state in YAML files and use the kubectl apply command to create or update resources. |
Set resource limits | Set resource requests and limits on your containers to avoid overprovisioning or resource starvation. |
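As a brief illustration of the labeling practice above, the commands below attach a label to a deployment and then filter resources by label; the team=frontend label is an arbitrary example, and the placeholders follow the same convention as the table:
$ kubectl label deployment <deployment-name> team=frontend
$ kubectl get deployments -l team=frontend
$ kubectl get pods -l <label-key>=<label-value>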
Introduction to Kubernetes components
Before going into the details of kubectl, here is a list of the K8s components and objects that the command-line utility interacts with via the Kubernetes API:
- Cluster: A Kubernetes cluster is a group of interconnected computers called nodes that manage and run containerized applications.
- Node: A node can be a physical or virtual machine managed by the K8s control plane. A node hosts components like kubelet, a container runtime, and the kube-proxy, which are necessary for running pods.
- Pod: A pod is a set of containers that act as a single unit and serve a single application. Containers share the pod's resources, such as storage volumes and the IP address assigned to the pod (a minimal pod manifest is shown after this list).
- Deployment: A deployment is a declarative object that manages replicas and the state of an application. A deployment defines the number of replicas of an application and the image used by the application containers.
- Service: A service is a method to expose applications running in pods over a network. A service object provides the IP address and port for clients to access the services and can also perform load balancing for an application running across multiple pods.
- Namespace: A namespace provides a logical separation of resources within a K8s cluster. It can divide resources among multiple users or teams working on different projects.
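To make the pod object concrete, here is a minimal, illustrative pod manifest; the names are arbitrary examples, and the deployment manifest later in this article wraps the same kind of container spec in a template:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # illustrative name
  labels:
    app: example
spec:
  containers:
  - name: web                # a single nginx container serving HTTP on port 80
    image: nginx:latest
    ports:
    - containerPort: 80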
Kubectl command and basic syntax
The kubectl utility requires a configuration file to interact with the Kubernetes APIs. By default, the file is named config and is placed in the path $HOME/.kube/config. Using kubectl, you can create new resources (like pods), get the list of existing resources (like the list of services), describe a specific resource (like a deployment), or delete a K8s resource (like a namespace).
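If your configuration file lives somewhere else, you can point kubectl at it explicitly; the alternate path below is only an illustrative example:
$ kubectl config view --minify                                # show the configuration used by the current context
$ kubectl --kubeconfig=$HOME/.kube/staging-config get nodes   # use an alternate config for a single command
$ export KUBECONFIG=$HOME/.kube/staging-config                # use an alternate config for the whole shell session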
Kubectl command syntax
The kubectl utility has the following command line syntax:
kubectl [command] [TYPE] [NAME] [flags]
- [command] is the name of the operation you want to perform, like create, delete, get, or describe.
- [TYPE] is the resource type you want to operate on, like pod, deployment, or service.
- [NAME] is the name of the resource you want to operate on.
- [flags] are used for additional arguments to the command line, like specifying the output format of the command or filtering resources based on labels, as shown in the examples below.
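Putting these pieces together, here are a few illustrative invocations (the pod name my-pod is a placeholder):
$ kubectl get pods                         # command=get, TYPE=pods
$ kubectl describe pod my-pod              # command=describe, TYPE=pod, NAME=my-pod
$ kubectl get pods -n kube-system -o wide  # flags select a namespace and an output format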
Getting help from the command line
There are many resources available on the Internet to help you understand kubectl and its various options. There is also some brief command line help available:
$ kubectl --help
kubectl controls the Kubernetes cluster manager.

Find more information at: https://kubernetes.io/docs/reference/kubectl/

Basic Commands (Beginner):
  create        Create a resource from a file or from stdin
  expose        Take a replication controller, service, deployment
.
.
You can get help with a specific command as follows:
$ kubectl get --help
Display one or many resources.
.
.
Examples:
  # List all pods in ps output format
  kubectl get pods
.
.
Enable kubectl autocomplete
If you are using a Linux bash shell, you can enable autocompletion for the kubectl command as follows:
$ kubectl completion bash | \
    sudo tee /etc/bash_completion.d/kubectl > /dev/null
$ source /etc/bash_completion.d/kubectl
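If you use zsh rather than bash, kubectl can generate a completion script for that shell as well:
$ echo 'source <(kubectl completion zsh)' >> ~/.zshrc
$ source ~/.zshrc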
Kubectl basic commands: getting information
Let’s start with some basic commands on how to get information about the Kubernetes cluster and the resources running on the platform.
Display the address of the control plane and cluster services:
$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
Display all nodes and their statuses:
$ kubectl get nodes
NAME    STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   60d   v1.29.5
node2   Ready    <none>          21d   v1.29.5
View all pods:
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7854ff8877-q9bxc   1/1     Running   0          21d
View all services:
$ kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.233.0.1   <none>        443/TCP   60d
View all deployments:
$ kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           60d
Webserver deployment steps
We will now list the steps and commands to create an NGINX web server deployment on Kubernetes, including creating a separate namespace, deploying three replicas, exposing the service, and performing some troubleshooting.
Deployment process steps
Create a namespace
As a best practice, create a separate namespace for each application.
$ kubectl create namespace nginx-namespace
namespace/nginx-namespace created
Switch context to a namespace
By default, the kubectl command interacts with resources in the default namespace. To interact with resources in other namespaces, you can either use the command-line switch -n <namespace> with every command or switch the context so that you do not need to specify the -n parameter each time.
$ kubectl config set-context --current --namespace=nginx-namespace
Context "<current-context>" modified.
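You can confirm which namespace the current context now points to:
$ kubectl config view --minify | grep namespace:
    namespace: nginx-namespace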
Create a deployment
Next, create a deployment YAML file that defines the required state of the application. Create a file named nginx-deployment.yaml with the container spec and the number of replicas, and specify resource requests and limits for the containers:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx-namespace
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
Next, apply the deployment as follows:
$ kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
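You can watch the rollout until all replicas are available; the output below is representative:
$ kubectl rollout status deployment/nginx-deployment -n nginx-namespace
deployment "nginx-deployment" successfully rolled out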
Scale the deployment
You can scale the number of replicas up or down after the deployment has been created:
$ kubectl scale deployment nginx-deployment --replicas=4
deployment.apps/nginx-deployment scaled
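As an alternative to manual scaling, you can create a HorizontalPodAutoscaler that scales the deployment based on CPU usage. This assumes the metrics server is installed in the cluster, and the thresholds below are arbitrary examples:
$ kubectl autoscale deployment nginx-deployment --min=3 --max=6 --cpu-percent=80
horizontalpodautoscaler.autoscaling/nginx-deployment autoscaled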
Create a service
Create a file named nginx-service.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: nginx-namespace
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
Now create the service using kubectl apply:
$ kubectl apply -f nginx-service.yaml
service/nginx-service created
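A LoadBalancer service may take a little while to receive an external IP from the underlying infrastructure; you can watch for it with the --watch flag:
$ kubectl get service nginx-service -n nginx-namespace --watch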
Troubleshooting steps
If the application is not working as expected on the K8s cluster, you can use the following commands to check the status of resources created on the cluster and review the logs for errors.
Check the status of the deployment. The following command shows the state of the replicas along with how long the deployment has been running.
$ kubectl get deployments -n nginx-namespace
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           21h
Get the state of the nginx deployment, along with its history, configuration, and any events associated with the deployment.
$ kubectl describe deployment nginx-deployment -n nginx-namespace
Name:                   nginx-deployment
Namespace:              nginx-namespace
CreationTimestamp:      Sat, 31 Aug 2024 06:08:00 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
.
.
Check the status of replica sets within a namespace.
$ kubectl get replicasets -n nginx-namespace
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-7c79c4bf97   3         3         3       21h
Display the status and age of all the pods in a namespace.
$ kubectl get pods -n nginx-namespace
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7c79c4bf97-c8rkj   1/1     Running   0          21h
nginx-deployment-7c79c4bf97-xz49p   1/1     Running   0          21h
nginx-deployment-7c79c4bf97-zg292   1/1     Running   0          21h
Check the current status, configuration, and events related to a specific pod.
$ kubectl describe pod nginx-deployment-7c79c4bf97-c8rkj -n nginx-namespace
Name:             nginx-deployment-7c79c4bf97-c8rkj
Namespace:        nginx-namespace
Priority:         0
Service Account:  default
Node:             node4/10.20.20.67
Start Time:       Sat, 31 Aug 2024 06:08:00 +0000
Labels:           app=nginx
                  pod-template-hash=7c79c4bf97
.
.
Check the logs of a pod:
$ kubectl logs nginx-deployment-7c79c4bf97-c8rkj -n nginx-namespace
2024/08/31 06:08:04 [notice] 1#1: using the "epoll" event method
2024/08/31 06:08:04 [notice] 1#1: nginx/1.27.1
2024/08/31 06:08:04 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14)
2024/08/31 06:08:04 [notice] 1#1: OS: Linux 5.15.0-107-generic
2024/08/31 06:08:04 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 65535:65535
2024/08/31 06:08:04 [notice] 1#1: start worker processes
2024/08/31 06:08:04 [notice] 1#1: start worker process 29
2024/08/31 06:08:04 [notice] 1#1: start worker process 30
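If a container has crashed and restarted, the logs of the previous instance are often more useful than the current ones; you can also stream logs in real time:
$ kubectl logs nginx-deployment-7c79c4bf97-c8rkj -n nginx-namespace --previous
$ kubectl logs -f nginx-deployment-7c79c4bf97-c8rkj -n nginx-namespace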
List the services in the namespace along with their type, internal and external IPs, and exposed ports. You can then get more details about a specific service.
$ kubectl get services -n nginx-namespace
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-service   LoadBalancer   10.233.54.48   <pending>     80:31455/TCP   21h

$ kubectl describe service nginx-service -n nginx-namespace
Name:                     nginx-service
Namespace:                nginx-namespace
Labels:                   <none>
Annotations:              <none>
Selector:                 app=nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
.
.
Check events for errors:
$ kubectl get events -A
NAMESPACE     LAST SEEN   TYPE      REASON                    OBJECT                     MESSAGE
default       116s        Normal    NodeHasSufficientMemory   node/node3                 Node node3 status is now: NodeHasSufficientMemory
kube-system   35m         Warning   Unhealthy                 pod/kube-apiserver-node1   Readiness probe failed: HTTP probe failed with statuscode: 500
.
.
Run interactive commands on pods and check connectivity:
$ kubectl exec -it nginx-deployment-7c79c4bf97-zg292 \
    -n nginx-namespace -- curl http://nginx-service
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
</head>
<body>
<h1>Welcome to nginx!</h1>
</body>
</html>
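If the service has not been assigned an external IP yet, you can still reach it from your workstation by forwarding a local port to it (port 8080 below is an arbitrary choice) and then browsing to http://localhost:8080:
$ kubectl port-forward service/nginx-service 8080:80 -n nginx-namespace
Forwarding from 127.0.0.1:8080 -> 80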
Cleaning up
Once we are done with the application, we can clean up all the resources created above to free up capacity for other applications. (Deleting the namespace alone would also remove everything it contains, but here we delete each resource explicitly.)
Delete the service:
$ kubectl delete service nginx-service -n nginx-namespace
service "nginx-service" deleted
Delete the deployment:
$ kubectl delete deployment nginx-deployment -n nginx-namespace
deployment.apps "nginx-deployment" deleted
Delete the namespace:
$ kubectl delete namespace nginx-namespace
namespace "nginx-namespace" deleted
Kubernetes backup and restore using Trilio
What is Trilio for Kubernetes (T4K)?
Trilio is a cloud-native backup and data protection solution for Kubernetes environments. Trilio provides custom resource definitions (CRDs) that extend the K8s API to manage and automate backup and restore operations for applications and infrastructure. You can create backups and restores using the management console or define the workflows in YAML files.
In this section, we will use kubectl to create a backup target, backup plan, backup, and restore for an upstream K8s environment. For brevity, we assume that T4K is already installed and its license is verified.
Create a backup target
In the example, we will use AWS S3 storage as the backup target. We need to define the access credentials for AWS.
Create a file called sample-secret.yaml with the required secrets:
apiVersion: v1
kind: Secret
metadata:
  name: sample-secret
type: Opaque
stringData:
  accessKey: AKIAS5B35DGFSTY7T55D
  secretKey: xWBupfGvkgkhaH8ansJU1wRhFoGoWFPmhXD6/vVDcode
Use kubectl to create the secrets resource:
$ kubectl apply -f sample-secret.yaml
Create a demo-s3-target.yaml file that defines the AWS S3 bucket as a backup target using the above-created secret:
apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: demo-s3-target
spec:
  type: ObjectStore
  vendor: AWS
  objectStoreCredentials:
    region: us-east-1
    bucketName: trilio-browser-test
    credentialSecret:
      name: sample-secret
      namespace: TARGET_NAMESPACE
  thresholdCapacity: 5Gi
Now create the target resource using kubectl:
$ kubectl apply -f demo-s3-target.yaml
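Since the target is an ordinary custom resource, it can be listed with kubectl once the Trilio CRDs are installed. Assuming the CRD uses the conventional plural name targets, the command looks like this; the exact columns shown depend on the T4K version:
$ kubectl get targets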
Creating a backup plan and backup
The next step is to define a backup plan that uses the S3 target created above. Create a file called ns-backupplan.yaml:
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: ns-backupplan
spec:
  backupConfig:
    target:
      namespace: default
      name: demo-s3-target
Create the backup plan resource:
$ kubectl apply -f ns-backupplan.yaml
To verify the backup plan was created successfully and check its status:
$ kubectl get backupplans
The command confirms that the backup plan was created correctly with output like the following:
NAME            STATUS      AGE
ns-backupplan   Completed   2h
You can define a policy to schedule backups at a specific frequency. Create a file called sample-schedule.yaml:
kind: "Policy" apiVersion: "triliovault.trilio.io/v1" metadata: name: "sample-schedule" spec: type: "Schedule" scheduleConfig: schedule: - "0 0 * * *" - "0 */1 * * *" - "0 0 * * 0" - "0 0 1 * *" - "0 0 1 1 *"
Now apply the schedule policy:
$ kubectl apply -f sample-schedule.yaml
After creating the schedule policy, we define the retention period of the backups using the file sample-retention.yaml:
apiVersion: triliovault.trilio.io/v1
kind: Policy
metadata:
  name: sample-retention
spec:
  type: Retention
  default: false
  retentionConfig:
    latest: 2
    weekly: 1
    dayOfWeek: Wednesday
    monthly: 1
    dateOfMonth: 15
    monthOfYear: March
    yearly: 1
And now create the retention policy:
$ kubectl apply -f sample-retention.yaml
Verify the retention policy was created successfully:
$ kubectl get policies
The output returns the list of policies that are active:
NAME               AGE
sample-retention   1m
Finally, create the backup resource for a sample application using the file sample-backup.yaml:
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: sample-backup
spec:
  type: Full
  backupPlan:
    name: sample-application
    namespace: default
And create the backup resource using kubectl:
$ kubectl apply -f sample-backup.yaml
Verify the backup was created successfully and check its status:
$ kubectl get backups
An example of the output might look like:
NAME            STATUS      AGE
sample-backup   Completed   5m
Creating a restore
For restore operations, use the Trilio Management Console or create a custom restore resource using YAML files. Let’s create a file called sample-restore.yaml for the backup we created above:
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: sample-restore
spec:
  source:
    type: Backup
    backup:
      name: sample-backup
      namespace: default
Create the restore resource using kubectl:
$ kubectl apply -f sample-restore.yaml
Verify the restore was created successfully and check its status:
$ kubectl get restores
An example of the output might look like:
NAME             STATUS      AGE
sample-restore   Completed   2m
Last thoughts
In Kubernetes, kubectl is the Swiss army knife that provides its users with the power and utility to perform a wide array of tasks, from inspecting K8s cluster resources to deploying applications to troubleshooting issues. Kubectl can be used to execute interactive commands as well as to apply operations via declarative code that aligns with the best practices of cloud operations. Kubectl provides both developers and operators with the capabilities to manage K8s with simplicity and efficiency.