
Conversation

@bpineau (Contributor) commented Jan 27, 2026

The ScaleDownCandidatesDelayProcessor uses three maps (scaleUps, scaleDowns, scaleDownFailures) that are accessed concurrently without synchronization, causing occasional "fatal error: concurrent map read and map write" panics like the one logged below:

```
I0125 23:53:49.284467      94 aws_manager.go:169] DeleteInstances was called: scheduling an ASG list refresh for next main loop evaluation
fatal error: concurrent map read and map write

goroutine 64 [running]:
internal/runtime/maps.fatal({0x6a1d287?, 0x358cfec?})
    /usr/local/go/src/runtime/panic.go:1058 +0x20
k8s.io/autoscaler/cluster-autoscaler/processors/scaledowncandidates.(*ScaleDownCandidatesDelayProcessor).GetScaleDownCandidates.func1(0x4004791290, 0x0, {0x69a3ae1, 0xb})
    /build/cluster-autoscaler/processors/scaledowncandidates/scale_down_candidates_delay_processor.go:64 +0x74
k8s.io/autoscaler/cluster-autoscaler/processors/scaledowncandidates.(*ScaleDownCandidatesDelayProcessor).GetScaleDownCandidates(0x40047a2150, 0x4000644008, {0x4041b70008, 0x3bc, 0x22f?})
    /build/cluster-autoscaler/processors/scaledowncandidates/scale_down_candidates_delay_processor.go:77 +0x2dc
k8s.io/autoscaler/cluster-autoscaler/processors/scaledowncandidates.(*combinedScaleDownCandidatesProcessor).GetScaleDownCandidates(0xad47200?, 0x4000644008, {0x40405b8008?, 0x0?, 0x0?})
    /build/cluster-autoscaler/processors/scaledowncandidates/scale_down_candidates_processor.go:59 +0x6c
k8s.io/autoscaler/cluster-autoscaler/core.(*StaticAutoscaler).RunOnce(0x4001211860, {0x19035c?, 0x1d091a7098197?, 0xad46400?})
    /build/cluster-autoscaler/core/static_autoscaler.go:575 +0x1f20
k8s.io/autoscaler/cluster-autoscaler/loop.RunAutoscalerOnce({0xe351712f7158, 0x4001211860}, 0x4000858230, {0xad46400?, 0x39fcb2bac404?, 0xad46400?})
    /build/cluster-autoscaler/loop/run.go:36 +0x80
main.run(0x4000858230, {0x7478858, 0x400052d170})
    /build/cluster-autoscaler/main.go:324 +0x2f0
main.main.func2({0x0?, 0x0?})
    /build/cluster-autoscaler/main.go:433 +0x28
created by k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run in goroutine 1
    /go/pkg/mod/k8s.io/client-go@v0.34.2/tools/leaderelection/leaderelection.go:220 +0xe4

....

goroutine 272269381 [runnable]:
k8s.io/autoscaler/cluster-autoscaler/core/scaledown/deletiontracker.(*NodeDeletionTracker).EndDeletion(0x4004e7c550, {0x4002dd7a80, 0x34}, {0x407b582d80, 0x1d}, {{0x0?, 0x0?}, 0x404f3cbeb0?, 0x0?})
    /build/cluster-autoscaler/core/scaledown/deletiontracker/nodedeletiontracker.go:99 +0x26c
k8s.io/autoscaler/cluster-autoscaler/core/scaledown/actuation.RegisterAndRecordSuccessfulScaleDownEvent(0x4000644008, {0x74675e0, 0x4002ec4740}, 0x40532b9808, {0x74905e0, 0x404e8f4440}, 0x0, 0x4004e7c550)
    /build/cluster-autoscaler/core/scaledown/actuation/delete_in_batch.go:227 +0x2c4
k8s.io/autoscaler/cluster-autoscaler/core/scaledown/actuation.(*NodeDeletionBatcher).deleteNodesAndRegisterStatus(0x40011df8c0, {0x40920dcd10, 0x1, 0x4090e84ba0?}, {0x4002dd7a80, 0x34}, 0x0)
    /build/cluster-autoscaler/core/scaledown/actuation/delete_in_batch.go:95 +0x118
created by k8s.io/autoscaler/cluster-autoscaler/core/scaledown/actuation.(*NodeDeletionBatcher).AddNodes in goroutine 272269380
    /build/cluster-autoscaler/core/scaledown/actuation/delete_in_batch.go:75 +0xd4
```
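
The fix implied by the release note is to serialize access to those maps. Below is a minimal, self-contained sketch of that approach, assuming a `sync.Mutex` held around every read and write; the `delayProcessor` type, its fields, and its methods are illustrative stand-ins, not the actual ScaleDownCandidatesDelayProcessor API or this PR's diff.

```go
// Sketch only: guard per-node-group timestamp maps with a mutex so the
// main RunOnce loop (reader) and node-deletion goroutines (writers) can
// touch them concurrently without tripping the Go runtime's
// "fatal error: concurrent map read and map write".
package main

import (
	"fmt"
	"sync"
	"time"
)

// delayProcessor mimics a processor that records scale-up/scale-down
// events per node group and later reads them to filter candidates.
type delayProcessor struct {
	mu                sync.Mutex
	scaleUps          map[string]time.Time
	scaleDowns        map[string]time.Time
	scaleDownFailures map[string]time.Time
}

func newDelayProcessor() *delayProcessor {
	return &delayProcessor{
		scaleUps:          map[string]time.Time{},
		scaleDowns:        map[string]time.Time{},
		scaleDownFailures: map[string]time.Time{},
	}
}

// RegisterScaleDown is the writer path, called from deletion goroutines.
func (p *delayProcessor) RegisterScaleDown(nodeGroup string, at time.Time) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.scaleDowns[nodeGroup] = at
}

// recentEvent is the reader path, called from the main loop. Without the
// lock, these map reads race with RegisterScaleDown's writes.
func (p *delayProcessor) recentEvent(nodeGroup string, within time.Duration, now time.Time) bool {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, m := range []map[string]time.Time{p.scaleUps, p.scaleDowns, p.scaleDownFailures} {
		if t, ok := m[nodeGroup]; ok && now.Sub(t) < within {
			return true
		}
	}
	return false
}

func main() {
	p := newDelayProcessor()
	var wg sync.WaitGroup

	// Writer: simulates a NodeDeletionBatcher goroutine recording scale-downs.
	wg.Add(1)
	go func() {
		defer wg.Done()
		for i := 0; i < 1000; i++ {
			p.RegisterScaleDown("ng-1", time.Now())
		}
	}()

	// Reader: simulates the RunOnce loop filtering scale-down candidates.
	for i := 0; i < 1000; i++ {
		_ = p.recentEvent("ng-1", time.Minute, time.Now())
	}

	wg.Wait()
	fmt.Println("no race: every map access happens under the mutex")
}
```

Running the same code with the mutex removed under `go run -race` (or the real processor's tests under `go test -race`) reports the data race explicitly, before it can escalate into the fatal runtime error captured in the trace above.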

What type of PR is this?

/kind bug

/area cluster-autoscaler

What this PR does / why we need it:

Special notes for your reviewer:

Does this PR introduce a user-facing change?

Protect concurrent map access in ScaleDownCandidatesDelayProcessor

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

@k8s-ci-robot added the do-not-merge/release-note-label-needed, cncf-cla: yes, do-not-merge/needs-area, and area/cluster-autoscaler labels on Jan 27, 2026
@k8s-ci-robot (Contributor)

Hi @bpineau. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the needs-ok-to-test label on Jan 27, 2026
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: bpineau
Once this PR has been reviewed and has the lgtm label, please assign x13n for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the size/M and release-note labels and removed the do-not-merge/needs-area and do-not-merge/release-note-label-needed labels on Jan 27, 2026