We hope you have read Kubernetes Cost Optimization Best Practices – Part 1. In this article, we cover more practices that can help you reduce your Kubernetes costs.
Try nodeSelector & Affinity
Nodes should be able to accommodate the Pods’ resource requests so that the critical applications running inside the containers run smoothly. The right pod-to-node mapping also assures administrators that they are spending on the right nodes, and that no money is wasted on high-capacity nodes running pods that host low-priority applications.
The Kubernetes scheduler handles matching pods to nodes. By default, the scheduler verifies that the collective resource requests (vCPU and memory) of the Pods do not exceed the Node’s CPU and RAM capacity. But for specific needs where the user prefers a pod to be deployed on a particular node (for example, when two highly interdependent applications in different pods should stay on the same Node), the default behavior wouldn’t suffice.
nodeSelector: This Kubernetes scheduling feature allows the user to select a preferred node. The user just adds nodeSelector with a Node label (Node labels are key/value pairs manually attached to nodes for identification) to the pod specification. The pod is then deployed only to nodes that carry the specified label.
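As a minimal sketch, a pod spec using nodeSelector might look like this (the disktype: ssd label and the nginx image are illustrative; any key/value label attached to your nodes works):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
  # The pod is scheduled only onto nodes labeled disktype=ssd,
  # e.g. applied with: kubectl label nodes <node-name> disktype=ssd
  nodeSelector:
    disktype: ssd
```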
Affinity: This feature makes scheduling rule-based. Use it when you want a pod to be placed on a node only if the node satisfies a given rule (for example, a specific label or region). This is done with the field “requiredDuringSchedulingIgnoredDuringExecution” under “nodeAffinity” in the pod specification. Another field, “preferredDuringSchedulingIgnoredDuringExecution”, lets scheduling fall back to default placement when the rule is not met. Assigning a “weight” under “preferredDuringSchedulingIgnoredDuringExecution” makes the scheduler favor the node that accumulates the highest total weight.
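A sketch combining both forms, assuming illustrative region and zone values (adjust the keys and values to your own cluster’s labels):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity-demo
spec:
  affinity:
    nodeAffinity:
      # Hard rule: schedule only onto nodes in this region.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/region
            operator: In
            values:
            - us-east-1
      # Soft rule: among eligible nodes, prefer this zone;
      # if no node matches, default scheduling still proceeds.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 80
        preference:
          matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - us-east-1a
  containers:
  - name: app
    image: nginx
```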
Users can also combine nodeSelector and nodeAffinity for optimal scheduling; when both are specified, a node must satisfy both for the pod to be scheduled there.
podAffinity: If the Kubernetes cluster is no larger than a few hundred nodes, podAffinity is a good choice for scheduling (inter-pod affinity is processing-intensive at larger scale). It is based on pod labels rather than node labels: podAffinity schedules a pod onto a node only when pods matching the specified rule are already running on that node.
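A sketch of podAffinity, assuming a hypothetical app=cache label on the pods we want to co-locate with (the hostname topologyKey means “same node”):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  affinity:
    podAffinity:
      # Schedule this pod only onto a node that already runs
      # at least one pod labeled app=cache.
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - cache
        topologyKey: kubernetes.io/hostname
  containers:
  - name: web
    image: nginx
```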
Size them right
Kubernetes offers autoscaling features that help administrators dynamically adjust the number of pods and the size of resource requests and limits based on workload demand. HPA (Horizontal Pod Autoscaler) scales the number of pods in or out as the workload varies. For example, usage of a shopping application in production may be high during festival time and low or negligible during off-peak days. Similarly, enabling VPA (Vertical Pod Autoscaler) dynamically adjusts the resource requests and limits within the pods. This approach removes unneeded pods and resources when they are not in use, saving Kubernetes costs.
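As an illustrative HPA manifest (the Deployment name “shop” and the thresholds are assumptions; a metrics source such as metrics-server must be installed):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: shop-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: shop
  # Scale between 2 and 10 replicas, targeting 70% average
  # CPU utilization across the pods.
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

During a festival-season spike the HPA adds replicas up to the maximum, and it scales back down to the minimum when traffic subsides, so you pay for extra capacity only while it is needed.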
Adopt the right tool for visibility
Following the above practices when writing specifications helps administrators optimize a substantial share of costs. But this doesn’t ensure complete Kubernetes cost optimization. Administrators often miss a key optimizing factor – utilization. Nodes and pods that are created but not fully utilized are a trench that drains the cloud budget unnecessarily.
Bills from cloud service providers shouldn’t land cloud administrators in shock with negative ROI. Yet cloud service providers don’t give us metrics that show how well nodes and pods are utilized. All we can see is the cost of a node per hour, whether it is utilized to its fullest or not.
Tools like Cloud CADI come to the aid here. Cloud CADI provides a unified dashboard that precisely shows node-level and pod-level utilization of the Kubernetes cluster. (Yes, you read that right: a pod-level breakdown.) Along with utilization, you can also see the pod-level cost for any selected period. We also give you customized views through multiple filters, tailored to each team’s needs. With role-based dashboards (engineers, finance, DevOps), teams aren’t crowded with information beyond what they need.
With Cloud CADI, your Kubernetes cluster is completely under your watch. In a matter of clicks, utilization and expense data is on your table.
“I now have visibility into my Kubernetes clusters. Are there any recommendations for effective optimization?”
If you ask, we are here to provide. Call us now.
Spend right on the cloud with Cloud CADI.