Fine-Grained Scaling Control for DaemonSet/Deployment #8227
Labels: sig/apps, sig/autoscaling, sig/scalability
Describe the issue
What would you like to be added?
DaemonSets/Deployments should support a strategy for controlling how pods are scaled, similar to the existing RollingUpdate strategy for updates.
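As a rough sketch of what this could look like, the manifest below adds a hypothetical `scalingStrategy` stanza alongside the existing `strategy` field. The `scalingStrategy`, `maxCreatingPods`, and `batchIntervalSeconds` names are invented for illustration and do not exist in the Kubernetes API today:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 500
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # existing: rate control for updates
  # Hypothetical stanza: rate control for scaling, analogous to
  # rollingUpdate. None of these fields exist in Kubernetes today.
  scalingStrategy:
    maxCreatingPods: 50        # cap on pods being created concurrently
    batchIntervalSeconds: 30   # pause between creation batches
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9
```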
Why is this needed?
Currently, DaemonSets and Deployments (via ReplicaSets) offer some strategy control for rolling updates, but provide almost nothing for large-scale scaling beyond limiting request pressure on the API server. As a result, a large number of newly created pods compete simultaneously for the same resources, leading to repeated failures and retries, and very slow scaling.
In terms of pod scaling for DaemonSets/Deployments, the current solutions are based on various autoscalers, but using an autoscaler to reach a specific replica count can be inconsistent and awkward:
• For Deployments, scaling relies on the Horizontal Pod Autoscaler (HPA); the behavior example after this list shows the kind of rate control it offers.
• For DaemonSets, scaling effectively means node scaling, typically done with the Cluster Autoscaler using Node Pools/Groups and a HorizontalNodeScalingPolicy.
However, Node Pools/Groups depend on cloud provider support and are unusable in self-built on-premises Kubernetes clusters, where manual node scaling by adding/removing labels/taints is common.
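For comparison, the closest thing to scaling rate control that exists today is the `behavior` field of the autoscaling/v2 HorizontalPodAutoscaler, which caps how fast an HPA may scale a Deployment, but it only applies when scaling is driven by metrics rather than by a direct change to `spec.replicas`:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example
  minReplicas: 10
  maxReplicas: 500
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
      - type: Pods        # add at most 50 pods...
        value: 50
        periodSeconds: 60 # ...per 60-second window
```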
Additionally, the rolling update strategy for DaemonSets/Deployments is not flexible: it supports only a single fixed update rate, whereas production environments often need a phased strategy that starts slowly and accelerates later, as sketched below.
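A phased strategy could, for instance, let the update rate ramp up in steps once early pods prove healthy. The `phases` and `untilReady` fields below are purely illustrative and not part of any existing API:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      # Hypothetical: replace the single fixed maxUnavailable with
      # phases that start slowly and accelerate later.
      phases:
      - maxUnavailable: 1     # canary: one node at a time
        untilReady: 5%        # advance once 5% of pods are updated and ready
      - maxUnavailable: 10%
        untilReady: 30%
      - maxUnavailable: 30%   # full speed for the remainder
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9
```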
A related support request is also available: English Version / Chinese Version (中文版本).