HYPERFLEET-722 - fix: update adapter configs to follow new standard #18
rh-amarin wants to merge 1 commit into openshift-hyperfleet:main from
Conversation
Walkthrough

This PR flattens and renames fields across multiple Helm adapter configs and task-configs for adapter1, adapter2, and adapter3. Top-level Kubernetes-style wrappers (apiVersion/kind/metadata/spec) are removed and replaced by a single root adapter object; many keys are converted from camelCase to snake_case (e.g., hyperfleetApi → hyperfleet_api, subscriptionId → subscription_id, kubeConfigPath → kube_config_path). Task configs are restructured: params and preconditions are promoted, preconditions/validationCheck CEL expressions and api_call captures are introduced or formalized, resources/manifests and discovery are reorganized, and post/post_actions payloads and status extraction are consolidated with dynamic adapter name templates.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 checks passed
Actionable comments posted: 6
🧹 Nitpick comments (1)
helm/adapter2/adapter-task-config.yaml (1)
39-41: Replace hardcoded placement target before multi-cluster rollout.

Line 40 hardcodes placementClusterName to "cluster1", so every run resolves to the same target_cluster (Line 60). This should come from placement output/event data or an explicit deploy-time input.

If helpful, I can draft the param/capture wiring once the placement field contract is finalized.
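As a minimal sketch of what that wiring could look like, assuming the placement adapter publishes a clusterName field on its event (the field path here is an assumption, not the finalized contract):

```yaml
# Hypothetical wiring: replace the literal with a reference to placement output.
- name: "placementClusterName"
  expression: "event.placement.clusterName"  # assumed field; align with the real placement contract
  description: "Target cluster resolved from the placement adapter output"
```

The downstream target_cluster param would then consume this value unchanged, so each ManifestWork is routed per placement result.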
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@helm/adapter2/adapter-task-config.yaml` around lines 39 - 41, The placementClusterName entry is hardcoded to "\"cluster1\"" causing target_cluster to always resolve to the same cluster; update the placementClusterName expression to pull from the placement output or an explicit deploy-time input instead of a literal string (e.g., reference the placement event field or a chart/value parameter), and ensure target_cluster (the consumer of placementClusterName) uses that value; locate the placementClusterName key in adapter-task-config.yaml and replace the literal expression with the appropriate placement-output or parameter reference (or a placeholder variable that your deployment tooling will populate) so multi-cluster rollouts receive the correct target.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@helm/adapter1/adapter-config.yaml`:
- Line 7: The merged-config debug flag debug_config is currently enabled by
default; change its default value to false to prevent verbose merged-config
logging and reduce risk of sensitive-data exposure, i.e., update the
debug_config setting in adapter-config.yaml to false and ensure any chart/values
templates or README mention that debug_config is opt-in rather than enabled by
default.
In `@helm/adapter1/adapter-task-config.yaml`:
- Around line 133-136: The accessor paths in the post data extraction are
incorrect: replace the erroneous repeated segment
"resources.?resources.resource0" with the correct "resources.?resource0"
wherever it appears (e.g., the metadata name expression and the
creationTimestamp expression) so they match the pattern used in
adapter2/adapter3 and resolve correctly.
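For illustration only, the corrected captures would read as follows (a sketch based on the adapter2/adapter3 pattern described above; the surrounding key names are assumptions):

```yaml
# Sketch: the duplicated "resources." segment removed from both accessor paths.
name:
  expression: |
    resources.?resource0.?metadata.?name.orValue("")
creation_timestamp:
  expression: |
    resources.?resource0.?metadata.?creationTimestamp.orValue("")
```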
In `@helm/adapter2/adapter-config.yaml`:
- Line 38: The Helm chart currently sets the Maestro transport flag insecure:
true which disables TLS verification; change insecure to false (or remove the
key) in the Maestro/transport configuration so TLS verification is enforced, and
ensure corresponding TLS settings/certificates are provided (e.g., enable TLS
certs/ca/tls.enabled entries) so the chart can establish verified connections;
locate the insecure: true entry in the adapter-config.yaml under the Maestro
transport section and update it to false and verify related TLS keys are
present.
- Line 7: The merged-config debug flag is enabled (debug_config: true) which
should be off by default; change the value of debug_config to false in
adapter-config.yaml (or remove the true setting so defaults disable
merged-config logging) and, if applicable, add a brief comment or documentation
note explaining how to enable it when needed.
- Around line 33-37: The current source_id and client_id values
("hyperfleet-adapter" and "hyperfleet-adapter-client") are generic and risk
collisions; update the values for the symbols source_id and client_id in
adapter-config.yaml to unique, adapter-specific identifiers (for example include
the adapter name/namespace or a short unique suffix) and ensure client_id either
matches the new source_id or is a distinct unique value; also update the
associated HYPERFLEET_MAESTRO_CLIENT_ID environment usage if you change
client_id so runtime configuration matches the new identifier.
In `@helm/adapter3/adapter-config.yaml`:
- Line 7: The merged-config debug flag is enabled (debug_config: true) but
should be disabled by default; update the configuration to set debug_config:
false (or remove the key to rely on documented default) so merged runtime config
is not logged, ensuring the change targets the debug_config entry in the
adapter-config.yaml.
---
Nitpick comments:
In `@helm/adapter2/adapter-task-config.yaml`:
- Around line 39-41: The placementClusterName entry is hardcoded to
"\"cluster1\"" causing target_cluster to always resolve to the same cluster;
update the placementClusterName expression to pull from the placement output or
an explicit deploy-time input instead of a literal string (e.g., reference the
placement event field or a chart/value parameter), and ensure target_cluster
(the consumer of placementClusterName) uses that value; locate the
placementClusterName key in adapter-task-config.yaml and replace the literal
expression with the appropriate placement-output or parameter reference (or a
placeholder variable that your deployment tooling will populate) so
multi-cluster rollouts receive the correct target.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: cc85da2f-0bfc-4afa-b520-1286192b0baa
📒 Files selected for processing (6)
helm/adapter1/adapter-config.yaml
helm/adapter1/adapter-task-config.yaml
helm/adapter2/adapter-config.yaml
helm/adapter2/adapter-task-config.yaml
helm/adapter3/adapter-config.yaml
helm/adapter3/adapter-task-config.yaml
log:
  level: debug
  # Log the full merged configuration after load (default: false)
  debug_config: true
Disable merged-config debug logging by default.
Line 7 enables verbose merged-config logging. Keep this disabled by default to reduce accidental sensitive-data exposure in logs.
🔧 Suggested change
-debug_config: true
+debug_config: false

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@helm/adapter1/adapter-config.yaml` at line 7, The merged-config debug flag
debug_config is currently enabled by default; change its default value to false
to prevent verbose merged-config logging and reduce risk of sensitive-data
exposure, i.e., update the debug_config setting in adapter-config.yaml to false
and ensure any chart/values templates or README mention that debug_config is
opt-in rather than enabled by default.
log:
  level: debug
  # Log the full merged configuration after load (default: false)
  debug_config: true
Disable merged-config debug logging by default.
Line 7 enables full merged-config logging. Keep this off by default to avoid leaking env-injected configuration details.
🔧 Suggested change
-debug_config: true
+debug_config: false
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@helm/adapter2/adapter-config.yaml` at line 7, The merged-config debug flag is
enabled (debug_config: true) which should be off by default; change the value of
debug_config to false in adapter-config.yaml (or remove the true setting so
defaults disable merged-config logging) and, if applicable, add a brief comment
or documentation note explaining how to enable it when needed.
  source_id: "hyperfleet-adapter"

  # Client identifier (defaults to source_id if not specified)
  # Environment variable: HYPERFLEET_MAESTRO_CLIENT_ID
  client_id: "hyperfleet-adapter-client"
Use adapter-specific Maestro identifiers.
Line 33 and Line 37 use generic IDs, while the inline comment says source_id must be unique across adapters. Generic IDs increase routing/attribution collision risk.
🔧 Suggested change
- source_id: "hyperfleet-adapter"
+ source_id: "adapter2"
@@
- client_id: "hyperfleet-adapter-client"
+ client_id: "adapter2-client"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@helm/adapter2/adapter-config.yaml` around lines 33 - 37, The current
source_id and client_id values ("hyperfleet-adapter" and
"hyperfleet-adapter-client") are generic and risk collisions; update the values
for the symbols source_id and client_id in adapter-config.yaml to unique,
adapter-specific identifiers (for example include the adapter name/namespace or
a short unique suffix) and ensure client_id either matches the new source_id or
is a distinct unique value; also update the associated
HYPERFLEET_MAESTRO_CLIENT_ID environment usage if you change client_id so
runtime configuration matches the new identifier.
  # Client identifier (defaults to source_id if not specified)
  # Environment variable: HYPERFLEET_MAESTRO_CLIENT_ID
  client_id: "hyperfleet-adapter-client"
  insecure: true
Avoid insecure Maestro transport by default.
Line 38 sets insecure: true, which disables TLS verification and weakens connection security.
🔧 Suggested change
- insecure: true
+ insecure: false
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@helm/adapter2/adapter-config.yaml` at line 38, The Helm chart currently sets
the Maestro transport flag insecure: true which disables TLS verification;
change insecure to false (or remove the key) in the Maestro/transport
configuration so TLS verification is enforced, and ensure corresponding TLS
settings/certificates are provided (e.g., enable TLS certs/ca/tls.enabled
entries) so the chart can establish verified connections; locate the insecure:
true entry in the adapter-config.yaml under the Maestro transport section and
update it to false and verify related TLS keys are present.
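As an illustrative sketch only, a verified transport block might look like the following; the tls key names and certificate paths are assumptions and must be checked against the adapter's actual Maestro transport schema:

```yaml
# Hypothetical keys - verify against the real Maestro transport configuration.
insecure: false
tls:
  enabled: true
  ca_file: /etc/maestro/certs/ca.crt
  cert_file: /etc/maestro/certs/tls.crt
  key_file: /etc/maestro/certs/tls.key
```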
log:
  level: debug
  # Log the full merged configuration after load (default: false)
  debug_config: true
Disable merged-config debug logging by default.
Line 7 sets debug_config: true even though the comment documents default: false; this can expose merged runtime config (including sensitive env-derived values) in logs.
🔧 Suggested change
-debug_config: true
+debug_config: false
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@helm/adapter3/adapter-config.yaml` at line 7, The merged-config debug flag is
enabled (debug_config: true) but should be disabled by default; update the
configuration to set debug_config: false (or remove the key to rely on
documented default) so merged runtime config is not logged, ensuring the change
targets the debug_config entry in the adapter-config.yaml.
Force-pushed: f796f8d to d7e26fe (compare)
Actionable comments posted: 2
♻️ Duplicate comments (2)
helm/adapter1/adapter-config.yaml (1)
6-9: ⚠️ Potential issue | 🟠 Major

Disable merged-config logging by default.
Line 7 turns full merged-config dumping back on. Once deploy-time values inject credentials or tokens, this is an easy way to leak them into logs.
🔧 Suggested change
-debug_config: true
+debug_config: false

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@helm/adapter1/adapter-config.yaml` around lines 6 - 9, The config currently enables full merged-config dumping via the debug_config key and sets log.level to debug, which risks leaking secrets; change debug_config to false and lower log.level from debug to info (or warn) so merged-config is not logged by default and verbose debug logs are avoided.

helm/adapter1/adapter-task-config.yaml (1)

133-136: ⚠️ Potential issue | 🟠 Major

Fix the status data extractor path.
Lines 133 and 136 still use resources.?resources.resource0..., so these fields walk a non-existent intermediate key and the payload data comes back empty.
🔧 Suggested change
- resources.?resources.resource0.?metadata.?name.orValue("")
+ resources.?resource0.?metadata.?name.orValue("")
@@
- resources.?resources.resource0.?metadata.?creationTimestamp.orValue("")
+ resources.?resource0.?metadata.?creationTimestamp.orValue("")

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@helm/adapter1/adapter-task-config.yaml` around lines 133 - 136, The extractor paths mistakenly include a duplicate intermediate key ("resources.?resources.resource0...") causing empty payloads; update both occurrences so they reference the correct path ("resources.?resource0.?metadata.?name.orValue(\"\")" and the creationTimestamp expression to use "resources.?resource0.?metadata.?creationTimestamp.orValue(\"\")") by removing the extra "resources" segment in the path.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@helm/adapter2/adapter-task-config.yaml`:
- Around line 39-41: The config currently hardcodes placementClusterName to
"\"cluster1\"" which forces target_cluster for ManifestWork to the same cluster;
change placementClusterName to reference the dynamic placement result (e.g.,
remove the literal string and use the proper templating/expression to read the
placement adapter output) so target_cluster uses that variable at runtime;
update the entry for placementClusterName (and any usages of
target_cluster/ManifestWork generation) to interpolate or reference the
placement adapter's value instead of the fixed "cluster1" string.
In `@helm/adapter3/adapter-task-config.yaml`:
- Around line 122-135: The current Health, reason and message expressions only
check for resources.resource0.data.nodepoolId and dump the whole resource JSON;
update them to derive health from the runtime status and surface execution
errors: change the Health expression to require both
has(resources.resource0.data.nodepoolId) and resources.resource0.status.state ==
"Succeeded" (or not equals "Failed"), update reason to pick a success/failure
text based on resources.resource0.status.state, and replace message to emit
resources.resource0.status.error or resources.resource0.status.message (falling
back to the nodepoolId) instead of toJson(resources.resource0) so runtime errors
are shown.
---
Duplicate comments:
In `@helm/adapter1/adapter-config.yaml`:
- Around line 6-9: The config currently enables full merged-config dumping via
the debug_config key and sets log.level to debug, which risks leaking secrets;
change debug_config to false and lower log.level from debug to info (or warn) so
merged-config is not logged by default and verbose debug logs are avoided.
In `@helm/adapter1/adapter-task-config.yaml`:
- Around line 133-136: The extractor paths mistakenly include a duplicate
intermediate key ("resources.?resources.resource0...") causing empty payloads;
update both occurrences so they reference the correct path
("resources.?resource0.?metadata.?name.orValue(\"\")" and the creationTimestamp
expression to use
"resources.?resource0.?metadata.?creationTimestamp.orValue(\"\")") by removing
the extra "resources" segment in the path.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 77ebfba6-c75b-4960-80af-01b427fcc61b
📒 Files selected for processing (7)
helm/adapter1/adapter-config.yaml
helm/adapter1/adapter-task-config.yaml
helm/adapter2/adapter-config.yaml
helm/adapter2/adapter-task-config.yaml
helm/adapter3/adapter-config.yaml
helm/adapter3/adapter-task-config.yaml
helm/sentinel-nodepools/values.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
- helm/adapter3/adapter-config.yaml
- helm/adapter2/adapter-config.yaml
- name: "placementClusterName"
  expression: "\"cluster1\""  # TBC coming from placement adapter
  description: "Unique identifier for the target maestro"
Don't hardcode the Maestro target cluster.
Lines 39-41 pin placementClusterName to "cluster1", and Line 60 then uses it as target_cluster. That routes every ManifestWork to the same cluster regardless of the actual placement result.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@helm/adapter2/adapter-task-config.yaml` around lines 39 - 41, The config
currently hardcodes placementClusterName to "\"cluster1\"" which forces
target_cluster for ManifestWork to the same cluster; change placementClusterName
to reference the dynamic placement result (e.g., remove the literal string and
use the proper templating/expression to read the placement adapter output) so
target_cluster uses that variable at runtime; update the entry for
placementClusterName (and any usages of target_cluster/ManifestWork generation)
to interpolate or reference the placement adapter's value instead of the fixed
"cluster1" string.
- type: "Health"
  status:
    expression: |
      has(resources.resource0.data.nodepoolId)
      ? "True"
      : "False"
  reason:
    expression: |
      has(resources.resource0.data.nodepoolId)
      ? "ConfigMap data available"
      : "ConfigMap data not yet available"
  message:
    expression: |
      toJson(resources.resource0)
Restore runtime-based health reporting.
Lines 124-135 only check whether the discovered ConfigMap contains data.nodepoolId, so a failed run can still emit Health=True. The message also becomes raw resource JSON instead of the execution error.
🔧 Suggested change
  - type: "Health"
    status:
      expression: |
-       has(resources.resource0.data.nodepoolId)
-       ? "True"
-       : "False"
+       adapter.?executionStatus.orValue("") == "success"
+       ? "True"
+       : (adapter.?executionStatus.orValue("") == "failed" ? "False" : "Unknown")
    reason:
      expression: |
-       has(resources.resource0.data.nodepoolId)
-       ? "ConfigMap data available"
-       : "ConfigMap data not yet available"
+       adapter.?errorReason.orValue("") != ""
+       ? adapter.?errorReason.orValue("")
+       : "Healthy"
    message:
      expression: |
-       toJson(resources.resource0)
+       adapter.?errorMessage.orValue("") != ""
+       ? adapter.?errorMessage.orValue("")
+       : "All adapter operations completed successfully"
Verify each finding against the current code and only fix it if needed.
In `@helm/adapter3/adapter-task-config.yaml` around lines 122 - 135, The current
Health, reason and message expressions only check for
resources.resource0.data.nodepoolId and dump the whole resource JSON; update
them to derive health from the runtime status and surface execution errors:
change the Health expression to require both
has(resources.resource0.data.nodepoolId) and resources.resource0.status.state ==
"Succeeded" (or not equals "Failed"), update reason to pick a success/failure
text based on resources.resource0.status.state, and replace message to emit
resources.resource0.status.error or resources.resource0.status.message (falling
back to the nodepoolId) instead of toJson(resources.resource0) so runtime errors
are shown.
https://issues.redhat.com/browse/HYPERFLEET-722
Update the adapters to follow the new standard
I also updated adapter3 to make use of the now available field event.owner_references.id in the events for nodepools.

Summary by CodeRabbit
Refactor
New Features