Description
Which component are you using?: cluster-autoscaler
What version of the component are you using?:
Component version: v1.34.2
What k8s version are you using (kubectl version)?: v1.33.2
What environment is this in?: Cluster API with VMware
What did you expect to happen?: multiple nodes are scaled up in a single autoscaler run
What happened instead?: only a single node is scaled up per run, so multiple runs are needed
How to reproduce it (as minimally and precisely as possible): Create two node pools that are identical (size, "tier" label, taint) except for their zone label. Deploy a workload that selects those node pools via the "tier" label and has topologySpreadConstraints configured with the topology key "topology.kubernetes.io/zone", maxSkew of 1, and whenUnsatisfiable set to DoNotSchedule (see the sketch below).
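A minimal sketch of such a workload manifest; the name "demo", the "tier: app" label value, the taint key, and the resource requests are placeholder assumptions, not values from the actual cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo                      # hypothetical name
spec:
  replicas: 6
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      nodeSelector:
        tier: app                 # assumed value of the shared "tier" label
      tolerations:
      - key: tier                 # assumed taint key; both node pools carry the same taint
        operator: Exists
        effect: NoSchedule
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: demo
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9
        resources:
          requests:
            cpu: "2"              # assumed request, sized so new replicas need new nodes
```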
Anything else we need to know?: balance-similar-node-groups and parallel-scale-up are enabled, and balancing-label is set to "tier" (see the flag sketch below). When whenUnsatisfiable is changed to ScheduleAnyway, multiple nodes are scaled up in a single run.
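For reference, a sketch of the relevant cluster-autoscaler container args implied by the settings above (all other flags omitted):

```yaml
# cluster-autoscaler container args (sketch; only the flags mentioned in this report)
args:
- --balance-similar-node-groups=true
- --parallel-scale-up=true
- --balancing-label=tier
```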