
HYPERFLEET-722 - fix: update adapter configs to follow new standard#18

Open

rh-amarin wants to merge 1 commit into openshift-hyperfleet:main from rh-amarin:update-adapter-configs

Conversation


rh-amarin (Contributor) commented Mar 6, 2026

https://issues.redhat.com/browse/HYPERFLEET-722

Update the adapters to follow the new standard

I also updated adapter3 to make use of the newly available field `event.owner_references.id` in nodepool events.

Summary by CodeRabbit

  • Refactor

    • Consolidated and flattened adapter and task configuration schemas for consistent, simplified layouts.
    • Standardized naming to snake_case and unified logging/debug blocks across adapters.
    • Streamlined resource discovery and status reporting structures for clearer operational flow.
  • New Features

    • Added richer parameter support (e.g., cluster and namespace identifiers) and expanded message payload fields.
    • Introduced structured preconditions/validation checks and manifest-style resource orchestration.
    • Enhanced post-action/status payloads for more detailed reporting.

coderabbitai bot commented Mar 6, 2026

Walkthrough

This PR flattens and renames fields across multiple Helm adapter configs and task-configs for adapter1, adapter2, and adapter3. Top-level Kubernetes-style wrappers (apiVersion/kind/metadata/spec) are removed and replaced by a single root adapter object; many keys are converted from camelCase to snake_case (e.g., hyperfleetApi → hyperfleet_api, subscriptionId → subscription_id, kubeConfigPath → kube_config_path). Task configs are restructured: params and preconditions are promoted, preconditions/validationCheck CEL expressions and api_call captures are introduced or formalized, resources/manifests and discovery are reorganized, and post/post_actions payloads and status extraction are consolidated with dynamic adapter name templates.
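As an illustrative sketch of the restructure (the URLs and values below are placeholders, and any key not named in the walkthrough is an assumption, not the repo's actual schema):

```yaml
# Before (illustrative): Kubernetes-style wrapper, camelCase keys
apiVersion: hyperfleet.example.com/v1   # hypothetical group/version
kind: AdapterConfig
metadata:
  name: adapter1
spec:
  hyperfleetApi: "https://api.example.com"   # placeholder URL
  kubeConfigPath: "/etc/kube/config"

# After (illustrative): single flat root object, snake_case keys
adapter:
  hyperfleet_api: "https://api.example.com"
  kube_config_path: "/etc/kube/config"
```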

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant EventSource as External Event
    participant Adapter as Adapter (helm config)
    participant API as Hyperfleet API
    participant K8s as Kubernetes API
    participant Maestro as Maestro/ManifestWork
    Note over EventSource,Adapter: 1) Event triggers adapter task
    EventSource->>Adapter: provide params (cluster_id, namespace, etc.)
    Adapter->>API: precondition api_call (GET /clusters/{{cluster_id}}/nodepools/{{nodepool_id}})
    API-->>Adapter: capture fields (readyConditionStatus, generation, name)
    Adapter->>Adapter: evaluate validationCheck (CEL)
    alt validation passes
        Adapter->>Maestro: create/submit manifestWork / resources
        Maestro->>K8s: apply manifests (Namespace, ConfigMap, etc.)
        K8s-->>Maestro: resource status callbacks
        Maestro-->>Adapter: discovery data / observed status
        Adapter->>API: post_action api_call (POST status payload)
        API-->>Adapter: ack
    else validation fails
        Adapter->>API: post_action api_call (validation failure payload)
        API-->>Adapter: ack
    end
```
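A minimal sketch of how a precondition with an `api_call` capture and a CEL `validationCheck` might look in a task config; the field names and endpoint shape here are assumptions based on the walkthrough above, not the repo's actual schema:

```yaml
preconditions:
  - name: "nodepoolReady"
    api_call:
      method: GET
      # Path parameters come from task params; the endpoint shape is assumed
      path: "/clusters/{{cluster_id}}/nodepools/{{nodepool_id}}"
      capture:
        - name: "readyConditionStatus"
          # CEL: pull the Ready condition's status out of the API response
          expression: "response.status.conditions.filter(c, c.type == 'Ready')[0].status"
        - name: "generation"
          expression: "response.metadata.generation"
    validationCheck:
      # Proceed only when the nodepool reports Ready
      expression: 'readyConditionStatus == "True"'
```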

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • ciaranRoche
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check (✅ Passed): Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title directly reflects the main change: updating adapter configs to follow a new standard, which is the primary focus across all six modified files.
  • Docstring Coverage (✅ Passed): No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



coderabbitai bot left a comment

Actionable comments posted: 6

🧹 Nitpick comments (1)
helm/adapter2/adapter-task-config.yaml (1)

39-41: Replace hardcoded placement target before multi-cluster rollout.

Line 40 hardcodes placementClusterName to "cluster1", so every run resolves to the same target_cluster (Line 60). This should come from placement output/event data or an explicit deploy-time input.

If helpful, I can draft the param/capture wiring once the placement field contract is finalized.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helm/adapter2/adapter-task-config.yaml` around lines 39 - 41, The
placementClusterName entry is hardcoded to "\"cluster1\"" causing target_cluster
to always resolve to the same cluster; update the placementClusterName
expression to pull from the placement output or an explicit deploy-time input
instead of a literal string (e.g., reference the placement event field or a
chart/value parameter), and ensure target_cluster (the consumer of
placementClusterName) uses that value; locate the placementClusterName key in
adapter-task-config.yaml and replace the literal expression with the appropriate
placement-output or parameter reference (or a placeholder variable that your
deployment tooling will populate) so multi-cluster rollouts receive the correct
target.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: cc85da2f-0bfc-4afa-b520-1286192b0baa

📥 Commits

Reviewing files that changed from the base of the PR and between 15ec000 and f796f8d.

📒 Files selected for processing (6)
  • helm/adapter1/adapter-config.yaml
  • helm/adapter1/adapter-task-config.yaml
  • helm/adapter2/adapter-config.yaml
  • helm/adapter2/adapter-task-config.yaml
  • helm/adapter3/adapter-config.yaml
  • helm/adapter3/adapter-task-config.yaml

```yaml
log:
  level: debug
  # Log the full merged configuration after load (default: false)
  debug_config: true
```

⚠️ Potential issue | 🟠 Major

Disable merged-config debug logging by default.

Line 7 enables verbose merged-config logging. Keep this disabled by default to reduce accidental sensitive-data exposure in logs.

🔧 Suggested change
-debug_config: true
+debug_config: false
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helm/adapter1/adapter-config.yaml` at line 7, The merged-config debug flag
debug_config is currently enabled by default; change its default value to false
to prevent verbose merged-config logging and reduce risk of sensitive-data
exposure, i.e., update the debug_config setting in adapter-config.yaml to false
and ensure any chart/values templates or README mention that debug_config is
opt-in rather than enabled by default.

```yaml
log:
  level: debug
  # Log the full merged configuration after load (default: false)
  debug_config: true
```

⚠️ Potential issue | 🟠 Major

Disable merged-config debug logging by default.

Line 7 enables full merged-config logging. Keep this off by default to avoid leaking env-injected configuration details.

🔧 Suggested change
-debug_config: true
+debug_config: false
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helm/adapter2/adapter-config.yaml` at line 7, The merged-config debug flag is
enabled (debug_config: true) which should be off by default; change the value of
debug_config to false in adapter-config.yaml (or remove the true setting so
defaults disable merged-config logging) and, if applicable, add a brief comment
or documentation note explaining how to enable it when needed.

Comment on lines +33 to +37

```yaml
source_id: "hyperfleet-adapter"

# Client identifier (defaults to source_id if not specified)
# Environment variable: HYPERFLEET_MAESTRO_CLIENT_ID
client_id: "hyperfleet-adapter-client"
```
⚠️ Potential issue | 🟠 Major

Use adapter-specific Maestro identifiers.

Line 33 and Line 37 use generic IDs, while the inline comment says source_id must be unique across adapters. Generic IDs increase routing/attribution collision risk.

🔧 Suggested change
-    source_id: "hyperfleet-adapter"
+    source_id: "adapter2"
@@
-    client_id: "hyperfleet-adapter-client"
+    client_id: "adapter2-client"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helm/adapter2/adapter-config.yaml` around lines 33 - 37, The current
source_id and client_id values ("hyperfleet-adapter" and
"hyperfleet-adapter-client") are generic and risk collisions; update the values
for the symbols source_id and client_id in adapter-config.yaml to unique,
adapter-specific identifiers (for example include the adapter name/namespace or
a short unique suffix) and ensure client_id either matches the new source_id or
is a distinct unique value; also update the associated
HYPERFLEET_MAESTRO_CLIENT_ID environment usage if you change client_id so
runtime configuration matches the new identifier.

```yaml
# Client identifier (defaults to source_id if not specified)
# Environment variable: HYPERFLEET_MAESTRO_CLIENT_ID
client_id: "hyperfleet-adapter-client"
insecure: true
```
⚠️ Potential issue | 🟠 Major

Avoid insecure Maestro transport by default.

Line 38 sets insecure: true, which disables TLS verification and weakens connection security.

🔧 Suggested change
-    insecure: true
+    insecure: false
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helm/adapter2/adapter-config.yaml` at line 38, The Helm chart currently sets
the Maestro transport flag insecure: true which disables TLS verification;
change insecure to false (or remove the key) in the Maestro/transport
configuration so TLS verification is enforced, and ensure corresponding TLS
settings/certificates are provided (e.g., enable TLS certs/ca/tls.enabled
entries) so the chart can establish verified connections; locate the insecure:
true entry in the adapter-config.yaml under the Maestro transport section and
update it to false and verify related TLS keys are present.

```yaml
log:
  level: debug
  # Log the full merged configuration after load (default: false)
  debug_config: true
```
⚠️ Potential issue | 🟠 Major

Disable merged-config debug logging by default.

Line 7 sets debug_config: true even though the comment documents default: false; this can expose merged runtime config (including sensitive env-derived values) in logs.

🔧 Suggested change
-debug_config: true
+debug_config: false
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helm/adapter3/adapter-config.yaml` at line 7, The merged-config debug flag is
enabled (debug_config: true) but should be disabled by default; update the
configuration to set debug_config: false (or remove the key to rely on
documented default) so merged runtime config is not logged, ensuring the change
targets the debug_config entry in the adapter-config.yaml.

coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (2)
helm/adapter1/adapter-config.yaml (1)

6-9: ⚠️ Potential issue | 🟠 Major

Disable merged-config logging by default.

Line 7 turns full merged-config dumping back on. Once deploy-time values inject credentials or tokens, this is an easy way to leak them into logs.

🔧 Suggested change
-debug_config: true
+debug_config: false
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helm/adapter1/adapter-config.yaml` around lines 6 - 9, The config currently
enables full merged-config dumping via the debug_config key and sets log.level
to debug, which risks leaking secrets; change debug_config to false and lower
log.level from debug to info (or warn) so merged-config is not logged by default
and verbose debug logs are avoided.
helm/adapter1/adapter-task-config.yaml (1)

133-136: ⚠️ Potential issue | 🟠 Major

Fix the status data extractor path.

Lines 133 and 136 still use resources.?resources.resource0..., so these fields walk a non-existent intermediate key and the payload data comes back empty.

🔧 Suggested change
-                resources.?resources.resource0.?metadata.?name.orValue("")
+                resources.?resource0.?metadata.?name.orValue("")
@@
-                resources.?resources.resource0.?metadata.?creationTimestamp.orValue("")
+                resources.?resource0.?metadata.?creationTimestamp.orValue("")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helm/adapter1/adapter-task-config.yaml` around lines 133 - 136, The extractor
paths mistakenly include a duplicate intermediate key
("resources.?resources.resource0...") causing empty payloads; update both
occurrences so they reference the correct path
("resources.?resource0.?metadata.?name.orValue(\"\")" and the creationTimestamp
expression to use
"resources.?resource0.?metadata.?creationTimestamp.orValue(\"\")") by removing
the extra "resources" segment in the path.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 77ebfba6-c75b-4960-80af-01b427fcc61b

📥 Commits

Reviewing files that changed from the base of the PR and between f796f8d and d7e26fe.

📒 Files selected for processing (7)
  • helm/adapter1/adapter-config.yaml
  • helm/adapter1/adapter-task-config.yaml
  • helm/adapter2/adapter-config.yaml
  • helm/adapter2/adapter-task-config.yaml
  • helm/adapter3/adapter-config.yaml
  • helm/adapter3/adapter-task-config.yaml
  • helm/sentinel-nodepools/values.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
  • helm/adapter3/adapter-config.yaml
  • helm/adapter2/adapter-config.yaml

Comment on lines +39 to +41

```yaml
- name: "placementClusterName"
  expression: "\"cluster1\"" # TBC coming from placement adapter
  description: "Unique identifier for the target maestro"
```

⚠️ Potential issue | 🟠 Major

Don't hardcode the Maestro target cluster.

Lines 39-41 pin placementClusterName to "cluster1", and Line 60 then uses it as target_cluster. That routes every ManifestWork to the same cluster regardless of the actual placement result.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helm/adapter2/adapter-task-config.yaml` around lines 39 - 41, The config
currently hardcodes placementClusterName to "\"cluster1\"" which forces
target_cluster for ManifestWork to the same cluster; change placementClusterName
to reference the dynamic placement result (e.g., remove the literal string and
use the proper templating/expression to read the placement adapter output) so
target_cluster uses that variable at runtime; update the entry for
placementClusterName (and any usages of target_cluster/ManifestWork generation)
to interpolate or reference the placement adapter's value instead of the fixed
"cluster1" string.
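One way the literal could be replaced, as a hedged sketch: the event field `event.placement.cluster_name` below is hypothetical, since the actual field depends on the placement adapter's output contract, which the review notes is still TBC.

```yaml
- name: "placementClusterName"
  # Hypothetical: read the target cluster from the placement adapter's
  # event payload instead of pinning it to a literal cluster name
  expression: "event.placement.cluster_name"
  description: "Unique identifier for the target maestro"
```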

Comment on lines +122 to +135

```yaml
- type: "Health"
  status:
    expression: |
      has(resources.resource0.data.nodepoolId)
        ? "True"
        : "False"
  reason:
    expression: |
      has(resources.resource0.data.nodepoolId)
        ? "ConfigMap data available"
        : "ConfigMap data not yet available"
  message:
    expression: |
      toJson(resources.resource0)
```

⚠️ Potential issue | 🟠 Major

Restore runtime-based health reporting.

Lines 124-135 only check whether the discovered ConfigMap contains data.nodepoolId, so a failed run can still emit Health=True. The message also becomes raw resource JSON instead of the execution error.

🔧 Suggested change
           - type: "Health"
             status:
               expression: |
-                has(resources.resource0.data.nodepoolId)
-                  ? "True"
-                  : "False"
+                adapter.?executionStatus.orValue("") == "success"
+                  ? "True"
+                  : (adapter.?executionStatus.orValue("") == "failed" ? "False" : "Unknown")
             reason:
               expression: |
-                has(resources.resource0.data.nodepoolId)
-                  ? "ConfigMap data available"
-                  : "ConfigMap data not yet available"
+                adapter.?errorReason.orValue("") != ""
+                  ? adapter.?errorReason.orValue("")
+                  : "Healthy"
             message:
               expression: |
-                toJson(resources.resource0)
+                adapter.?errorMessage.orValue("") != ""
+                  ? adapter.?errorMessage.orValue("")
+                  : "All adapter operations completed successfully"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helm/adapter3/adapter-task-config.yaml` around lines 122 - 135, The current
Health, reason and message expressions only check for
resources.resource0.data.nodepoolId and dump the whole resource JSON; update
them to derive health from the runtime status and surface execution errors:
change the Health expression to require both
has(resources.resource0.data.nodepoolId) and resources.resource0.status.state ==
"Succeeded" (or not equals "Failed"), update reason to pick a success/failure
text based on resources.resource0.status.state, and replace message to emit
resources.resource0.status.error or resources.resource0.status.message (falling
back to the nodepoolId) instead of toJson(resources.resource0) so runtime errors
are shown.
