Monitoring Kubernetes pod resource usage is crucial for maintaining a healthy, efficient, and well-performing cluster.
By keeping an eye on resource utilization, you gain insights into workload requirements for the future, manage overused or underutilized resources effectively, and make informed choices when using tools like Horizontal Pod Autoscaler (HPA) or Vertical Pod Autoscaler (VPA).
kubectl provides a built-in top subcommand. Unlike the top command in Linux, it reports only CPU and memory utilization. But that's all we need here, right?
You can get the resource usage of pods in the default namespace with:
kubectl top pods
And if you want to see resource utilization for pods in some other namespace, use:
kubectl top pods --namespace=demo-namespace
However, the above kubectl top commands won’t work if Metrics Server is not installed first.
Let me show you how it works by providing a hands-on example scenario.
I’ll create two pods: one in the default namespace and another in a newly created namespace. After setting up the pods, we’ll explore how to monitor resource usage for Kubernetes pods across both the default and custom namespaces.
Prerequisites for this scenario
Before getting started, make sure the following requirements are met:
- A running Kubernetes cluster (e.g., Minikube installed for local development)
- The kubectl command-line tool is configured to communicate with the cluster
- A shell environment, such as Bash or PowerShell
To monitor Kubernetes pod resource usage, I’ll first install the Metrics Server, create a Kubernetes pod, and then use the kubectl top command to check its resource utilization.
Install Metrics Server
Metrics Server is an essential add-on for Kubernetes that gathers resource metrics from the Kubelet running on each node in your cluster. It compiles this data and makes it available through the Kubernetes API server via the Metrics API. This allows you to easily access real-time resource usage stats using the kubectl top command.
To install the Metrics Server on Kubernetes, you have two options: using a YAML manifest file or deploying via a Helm chart.
In this guide, I am opting for installing the Metrics Server with the components.yaml manifest.
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
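If you prefer the Helm route instead, the project publishes an official chart. A minimal sketch (the repository URL below is the one published by the kubernetes-sigs project; verify it against the Metrics Server docs before use):

```shell
# Add the Metrics Server chart repository and refresh the index
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm repo update

# Install (or upgrade) Metrics Server into the kube-system namespace
helm upgrade --install metrics-server metrics-server/metrics-server \
  --namespace kube-system
```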
After installing the Metrics Server, you can confirm that it has been installed correctly and is in a ready state:
kubectl get deployment metrics-server -n kube-system
Output:
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
metrics-server   1/1     1            1           4h9m
The Metrics Server add-on has now been successfully installed on the Kubernetes cluster.
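A quick note for local development: on clusters such as Minikube, the stock manifest sometimes fails to become ready because the kubelet serves a self-signed certificate. Two common workarounds, sketched here as suggestions rather than a definitive fix:

```shell
# Option 1: on Minikube, enable the bundled addon instead of the manifest
minikube addons enable metrics-server

# Option 2: tell Metrics Server to skip kubelet TLS verification
# (acceptable for local development, not recommended in production)
kubectl patch deployment metrics-server -n kube-system --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'
```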
Create Kubernetes Pods
To test the setup, I’ll deploy two Kubernetes pods, each running an Nginx container and a PHP container. One pod will be created in the default namespace, while the other will be set up in a new namespace called demo-namespace.
Here’s the YAML configuration to set up the namespace and define the two pods:
apiVersion: v1
kind: Namespace
metadata:
  name: demo-namespace
---
apiVersion: v1
kind: Pod
metadata:
  name: first-pod
  namespace: default
spec:
  containers:
    - name: nginx
      image: nginx:latest
    - name: php
      image: php:7.4-fpm
---
apiVersion: v1
kind: Pod
metadata:
  name: second-pod
  namespace: demo-namespace
spec:
  containers:
    - name: nginx
      image: nginx:latest
    - name: php
      image: php:7.4-fpm
Save the YAML configuration to a file called demo-pods.yml, then apply it with the following command:
kubectl apply -f demo-pods.yml
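Before querying metrics, it's worth confirming that both pods have reached the Running state:

```shell
# Pods in the default namespace
kubectl get pods

# Pods in the custom namespace
kubectl get pods -n demo-namespace
```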
I’ve successfully launched two active Kubernetes pods, each operating within its own namespace.
Monitor Pod Resource Usage
To retrieve resource usage data for Kubernetes pods, you can utilize the built-in kubectl top command. This command provides essential metrics for each pod, including:
- CPU Usage (cores): The amount of CPU resources consumed by the pod, measured in CPU cores
- Memory Usage (bytes): The amount of memory consumed by the pod, measured in bytes
This data helps us to identify pods that are either over-utilizing or under-utilizing their allocated resources, which can later be used for capacity planning, performance optimization, and efficient autoscaling.
Now, let’s monitor the resource usage of pods in the default namespace:
kubectl top pods
Output:
NAME        CPU(cores)   MEMORY(bytes)
first-pod   1m           9Mi
The above output will appear once the Metrics Server has gathered enough resource data.
🚧
If you see an error message indicating that metrics are not available yet, this means the Metrics Server is still in the process of collecting data. In this case, it’s advisable to check back in a few seconds.
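If the error persists, you can also check whether the Metrics API itself has been registered and is available. Metrics Server exposes its data through the v1beta1.metrics.k8s.io APIService:

```shell
# The AVAILABLE column should read "True" once Metrics Server is serving data
kubectl get apiservice v1beta1.metrics.k8s.io
```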
Next, track the resource usage of pods within a specific namespace by utilizing the --namespace flag:
kubectl top pods --namespace=demo-namespace
Output:
NAME         CPU(cores)   MEMORY(bytes)
second-pod   1m           9Mi
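Since each pod here runs two containers (Nginx and PHP), you can break the usage down per container with the --containers flag:

```shell
# Adds a per-container breakdown alongside the pod-level totals
kubectl top pods --containers --namespace=demo-namespace
```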
📋
Unlike the top command in Linux, kubectl top doesn’t provide a continuous output. You only get a single snapshot of the pod’s resource usage at that moment.
Finally, to monitor resource utilization for pods across all namespaces in your cluster, simply use the --all-namespaces option:
kubectl top pods --all-namespaces
Output:
NAMESPACE              NAME                                         CPU(cores)   MEMORY(bytes)
default                first-pod                                    1m           9Mi
demo-namespace         second-pod                                   1m           9Mi
kubernetes-dashboard   kubernetes-dashboard-api-74bf7cb4bb-n9fmp    0m           10Mi
kubernetes-dashboard   kubernetes-dashboard-auth-65b56d647f-ngvrh   0m           8Mi
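When scanning many namespaces, sorting helps surface the heaviest consumers first. kubectl top accepts a --sort-by flag with cpu or memory as values:

```shell
# List the most memory-hungry pods first across all namespaces
kubectl top pods --all-namespaces --sort-by=memory

# Or rank by CPU consumption instead
kubectl top pods --all-namespaces --sort-by=cpu
```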
Conclusion
In this quick guide, you’ve learned how to use kubectl top to monitor resource usage for Kubernetes pods, both in specific namespaces and across the whole cluster.
We started by installing the Metrics Server and configuring Kubernetes pods across two different namespaces, and then learned how to track CPU and memory resource usage for pods in each namespace.
That gives you a solid base to start applying these commands in real-world scenarios. Enjoy 😄
Vivek Y
FOSS enthusiast with extensive experience in system administration and a background in multiple industry-standard programming languages. I also enjoy writing technical content for leading publications in my spare time.