Protect concurrent map access in ScaleDownCandidatesDelayProcessor #9130
Conversation
The ScaleDownCandidatesDelayProcessor uses three maps (scaleUps, scaleDowns,
scaleDownFailures) that are accessed concurrently without synchronization,
causing occasional "fatal error: concurrent map read and map write" panics
like the one logged below:
```
I0125 23:53:49.284467 94 aws_manager.go:169] DeleteInstances was called: scheduling an ASG list refresh for next main loop evaluation
fatal error: concurrent map read and map write
goroutine 64 [running]:
internal/runtime/maps.fatal({0x6a1d287?, 0x358cfec?})
/usr/local/go/src/runtime/panic.go:1058 +0x20
k8s.io/autoscaler/cluster-autoscaler/processors/scaledowncandidates.(*ScaleDownCandidatesDelayProcessor).GetScaleDownCandidates.func1(0x4004791290, 0x0, {0x69a3ae1, 0xb})
/build/cluster-autoscaler/processors/scaledowncandidates/scale_down_candidates_delay_processor.go:64 +0x74
k8s.io/autoscaler/cluster-autoscaler/processors/scaledowncandidates.(*ScaleDownCandidatesDelayProcessor).GetScaleDownCandidates(0x40047a2150, 0x4000644008, {0x4041b70008, 0x3bc, 0x22f?})
/build/cluster-autoscaler/processors/scaledowncandidates/scale_down_candidates_delay_processor.go:77 +0x2dc
k8s.io/autoscaler/cluster-autoscaler/processors/scaledowncandidates.(*combinedScaleDownCandidatesProcessor).GetScaleDownCandidates(0xad47200?, 0x4000644008, {0x40405b8008?, 0x0?, 0x0?})
/build/cluster-autoscaler/processors/scaledowncandidates/scale_down_candidates_processor.go:59 +0x6c
k8s.io/autoscaler/cluster-autoscaler/core.(*StaticAutoscaler).RunOnce(0x4001211860, {0x19035c?, 0x1d091a7098197?, 0xad46400?})
/build/cluster-autoscaler/core/static_autoscaler.go:575 +0x1f20
k8s.io/autoscaler/cluster-autoscaler/loop.RunAutoscalerOnce({0xe351712f7158, 0x4001211860}, 0x4000858230, {0xad46400?, 0x39fcb2bac404?, 0xad46400?})
/build/cluster-autoscaler/loop/run.go:36 +0x80
main.run(0x4000858230, {0x7478858, 0x400052d170})
/build/cluster-autoscaler/main.go:324 +0x2f0
main.main.func2({0x0?, 0x0?})
/build/cluster-autoscaler/main.go:433 +0x28
created by k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run in goroutine 1
/go/pkg/mod/k8s.io/client-go@v0.34.2/tools/leaderelection/leaderelection.go:220 +0xe4
....
goroutine 272269381 [runnable]:
k8s.io/autoscaler/cluster-autoscaler/core/scaledown/deletiontracker.(*NodeDeletionTracker).EndDeletion(0x4004e7c550, {0x4002dd7a80, 0x34}, {0x407b582d80, 0x1d}, {{0x0?, 0x0?}, 0x404f3cbeb0?, 0x0?})
/build/cluster-autoscaler/core/scaledown/deletiontracker/nodedeletiontracker.go:99 +0x26c
k8s.io/autoscaler/cluster-autoscaler/core/scaledown/actuation.RegisterAndRecordSuccessfulScaleDownEvent(0x4000644008, {0x74675e0, 0x4002ec4740}, 0x40532b9808, {0x74905e0, 0x404e8f4440}, 0x0, 0x4004e7c550)
/build/cluster-autoscaler/core/scaledown/actuation/delete_in_batch.go:227 +0x2c4
k8s.io/autoscaler/cluster-autoscaler/core/scaledown/actuation.(*NodeDeletionBatcher).deleteNodesAndRegisterStatus(0x40011df8c0, {0x40920dcd10, 0x1, 0x4090e84ba0?}, {0x4002dd7a80, 0x34}, 0x0)
/build/cluster-autoscaler/core/scaledown/actuation/delete_in_batch.go:95 +0x118
created by k8s.io/autoscaler/cluster-autoscaler/core/scaledown/actuation.(*NodeDeletionBatcher).AddNodes in goroutine 272269380
/build/cluster-autoscaler/core/scaledown/actuation/delete_in_batch.go:75 +0xd4
```
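
For context, here is a minimal sketch of the kind of fix the PR title describes: guarding the three maps with a mutex so that reads from the main RunOnce loop and writes from the node-deletion goroutines do not race. The struct, field, and method names below are assumptions based on the description above, not the actual cluster-autoscaler source, and the real change in this PR may differ.

```go
package scaledowncandidates

import (
	"sync"
	"time"
)

// Hypothetical sketch: names are assumptions based on the PR description.
// The main loop reads these maps while deletion goroutines write to them,
// so every access must hold the same mutex.
type scaleDownCandidatesDelayProcessor struct {
	mutex sync.Mutex

	scaleUps          map[string]time.Time
	scaleDowns        map[string]time.Time
	scaleDownFailures map[string]time.Time
}

// RegisterScaleDown records when a node group was last scaled down (writer side).
func (p *scaleDownCandidatesDelayProcessor) RegisterScaleDown(nodeGroupID string, now time.Time) {
	p.mutex.Lock()
	defer p.mutex.Unlock()
	p.scaleDowns[nodeGroupID] = now
}

// inCooldown reports whether a node group saw a scale-up, scale-down, or failed
// scale-down within the given delay (reader side, called from the main loop).
func (p *scaleDownCandidatesDelayProcessor) inCooldown(nodeGroupID string, delay time.Duration, now time.Time) bool {
	p.mutex.Lock()
	defer p.mutex.Unlock()
	for _, m := range []map[string]time.Time{p.scaleUps, p.scaleDowns, p.scaleDownFailures} {
		if t, ok := m[nodeGroupID]; ok && now.Sub(t) < delay {
			return true
		}
	}
	return false
}
```

Races like this can also be surfaced deterministically in tests by running them with Go's race detector (go test -race ./...), rather than waiting for the runtime's fatal concurrent-map check to fire in production.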
Hi @bpineau. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here.

Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: bpineau

The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
What type of PR is this?
/kind bug
/area cluster-autoscaler

What this PR does / why we need it:
The ScaleDownCandidatesDelayProcessor uses three maps (scaleUps, scaleDowns, scaleDownFailures) that are accessed concurrently without synchronization, causing occasional "fatal error: concurrent map read and map write" panics like the one logged above.

Special notes for your reviewer:

Does this PR introduce a user-facing change?

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: