Minor refactor to scale-up orchestrator for more re-usability #7649

Open · wants to merge 1 commit into master
@@ -50,7 +50,8 @@ type AsyncNodeGroupInitializer struct {
     atomicScaleUp bool
 }
 
-func newAsyncNodeGroupInitializer(
+// NewAsyncNodeGroupInitializer creates a new AsyncNodeGroupInitializer instance.
+func NewAsyncNodeGroupInitializer(
     nodeGroup cloudprovider.NodeGroup,
     nodeInfo *framework.NodeInfo,
     scaleUpExecutor *scaleUpExecutor,
5 changes: 3 additions & 2 deletions cluster-autoscaler/core/scaleup/orchestrator/executor.go
@@ -138,7 +138,7 @@ func (e *scaleUpExecutor) executeScaleUpsParallel(
             failedNodeGroups[i] = result.info.Group
             scaleUpErrors[i] = result.err
         }
-        return combineConcurrentScaleUpErrors(scaleUpErrors), failedNodeGroups
+        return CombineConcurrentScaleUpErrors(scaleUpErrors), failedNodeGroups
     }
     return nil, nil
 }
@@ -188,7 +188,8 @@ func (e *scaleUpExecutor) executeScaleUp(
     return nil
 }
 
-func combineConcurrentScaleUpErrors(errs []errors.AutoscalerError) errors.AutoscalerError {
+// CombineConcurrentScaleUpErrors returns a combined scale-up error to report after multiple concurrent scale-ups have failed.
+func CombineConcurrentScaleUpErrors(errs []errors.AutoscalerError) errors.AutoscalerError {
Member: Wouldn't it make more sense as a part of the errors package?

     if len(errs) == 0 {
         return nil
     }
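
To illustrate the reviewer's suggestion above, here is a minimal, self-contained sketch of what moving the combining logic into an errors-style package could look like. The package name, the reduced AutoscalerError interface, and the message format are assumptions for illustration only; they are not the real cluster-autoscaler/utils/errors API nor the actual body of CombineConcurrentScaleUpErrors.

```go
// Package scaleuperrors is a hypothetical location; the real suggestion is
// cluster-autoscaler/utils/errors, whose AutoscalerError also carries an
// error type. This is a simplified sketch, not the actual implementation.
package scaleuperrors

import (
	"fmt"
	"strings"
)

// AutoscalerError is a reduced stand-in for the real interface.
type AutoscalerError interface {
	error
}

type combinedError struct{ msg string }

func (e combinedError) Error() string { return e.msg }

// Combine merges the errors produced by concurrent scale-ups into a single
// error, mirroring the role CombineConcurrentScaleUpErrors plays today.
func Combine(errs []AutoscalerError) AutoscalerError {
	switch len(errs) {
	case 0:
		return nil
	case 1:
		return errs[0]
	}
	msgs := make([]string, 0, len(errs))
	for _, err := range errs {
		msgs = append(msgs, err.Error())
	}
	return combinedError{
		msg: fmt.Sprintf("%d concurrent scale-ups failed: %s", len(errs), strings.Join(msgs, "; ")),
	}
}
```

With a helper like this living in a shared errors package, executor.go could simply import it instead of exporting its own combining function.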
@@ -120,7 +120,7 @@ func TestCombinedConcurrentScaleUpErrors(t *testing.T) {

     for _, testCase := range testCases {
         t.Run(testCase.desc, func(t *testing.T) {
-            combinedErr := combineConcurrentScaleUpErrors(testCase.errors)
+            combinedErr := CombineConcurrentScaleUpErrors(testCase.errors)
             assert.Equal(t, testCase.expectedErr, combinedErr)
         })
     }
7 changes: 4 additions & 3 deletions cluster-autoscaler/core/scaleup/orchestrator/orchestrator.go
@@ -222,7 +222,9 @@ func (o *ScaleUpOrchestrator) ScaleUp(
         return buildNoOptionsAvailableStatus(markedEquivalenceGroups, skippedNodeGroups, nodeGroups), nil
     }
     var scaleUpStatus *status.ScaleUpStatus
-    createNodeGroupResults, scaleUpStatus, aErr = o.CreateNodeGroup(bestOption, nodeInfos, schedulablePodGroups, podEquivalenceGroups, daemonSets, allOrNothing)
+    oldId := bestOption.NodeGroup.Id()
+    initializer := NewAsyncNodeGroupInitializer(bestOption.NodeGroup, nodeInfos[oldId], o.scaleUpExecutor, o.taintConfig, daemonSets, o.processors.ScaleUpStatusProcessor, o.autoscalingContext, allOrNothing)
Member: Creation of the initializer used to be flag-guarded and here it is no longer the case - is that intentional? If not, can you keep the flag guard?

Contributor (Author): It may not be ideal, but I preferred this over the alternatives:

  • passing around a nil
  • creating a dummy initializer implementation for the case when the flag is not flipped

Overall, creation of the initializer doesn't really do anything yet.

One obvious option that might make more sense (PLMK WDYT) is to split off orchestrator's CreateNodeGroupAsync method.

+    createNodeGroupResults, scaleUpStatus, aErr = o.CreateNodeGroup(bestOption, nodeInfos, schedulablePodGroups, podEquivalenceGroups, daemonSets, initializer)
     if aErr != nil {
         return scaleUpStatus, aErr
     }
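
As a rough illustration of the split the author floats above (a separate CreateNodeGroupAsync entry point, so the AsyncNodeGroupsEnabled guard can stay at the call site and the initializer is only built on the async path), here is a toy sketch. All type and method names below are simplified stand-ins, not the real orchestrator signatures.

```go
package main

import "fmt"

// Simplified stand-ins for the real cluster-autoscaler types; only the shape
// of the control flow is meant to be illustrative.
type asyncInitializer struct{ allOrNothing bool }

type orchestrator struct {
	asyncNodeGroupsEnabled bool
}

// createNodeGroupAsync owns the initializer-based path.
func (o *orchestrator) createNodeGroupAsync(init asyncInitializer) error {
	fmt.Println("async node group creation, allOrNothing =", init.allOrNothing)
	return nil
}

// createNodeGroup keeps the synchronous path free of the initializer.
func (o *orchestrator) createNodeGroup() error {
	fmt.Println("synchronous node group creation")
	return nil
}

func main() {
	o := &orchestrator{asyncNodeGroupsEnabled: true}
	// The flag guard stays at the call site, so the initializer is only
	// constructed when the async path is actually taken.
	if o.asyncNodeGroupsEnabled {
		_ = o.createNodeGroupAsync(asyncInitializer{allOrNothing: true})
	} else {
		_ = o.createNodeGroup()
	}
}
```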
@@ -501,15 +503,14 @@ func (o *ScaleUpOrchestrator) CreateNodeGroup(
     schedulablePodGroups map[string][]estimator.PodEquivalenceGroup,
     podEquivalenceGroups []*equivalence.PodGroup,
     daemonSets []*appsv1.DaemonSet,
-    allOrNothing bool,
+    initializer nodegroups.AsyncNodeGroupInitializer,
 ) ([]nodegroups.CreateNodeGroupResult, *status.ScaleUpStatus, errors.AutoscalerError) {
     createNodeGroupResults := make([]nodegroups.CreateNodeGroupResult, 0)
 
     oldId := initialOption.NodeGroup.Id()
     var createNodeGroupResult nodegroups.CreateNodeGroupResult
     var aErr errors.AutoscalerError
     if o.autoscalingContext.AsyncNodeGroupsEnabled {
-        initializer := newAsyncNodeGroupInitializer(initialOption.NodeGroup, nodeInfos[oldId], o.scaleUpExecutor, o.taintConfig, daemonSets, o.processors.ScaleUpStatusProcessor, o.autoscalingContext, allOrNothing)
         createNodeGroupResult, aErr = o.processors.NodeGroupManager.CreateNodeGroupAsync(o.autoscalingContext, initialOption.NodeGroup, initializer)
     } else {
         createNodeGroupResult, aErr = o.processors.NodeGroupManager.CreateNodeGroup(o.autoscalingContext, initialOption.NodeGroup)