
Add Terraform infrastructure for MN Vectorization [coderabbit-ai-review]#119

Closed
SashkoMarchuk wants to merge 5 commits into main from feature/mn-vectorization-infra

Conversation


@SashkoMarchuk SashkoMarchuk commented Feb 26, 2026

Summary

Terraform infrastructure for the MN Vectorization project (RAG system for meeting transcripts).

Resources created (all in us-east-1):

  • S3 bucket — storage for transcripts, chunks, embeddings (versioned, encrypted, public access blocked)
  • DynamoDB x2 — tasks (async query state, with TTL) + users (ACL)
  • IAM role + instance profile — EC2 worker permissions (Bedrock, S3, DynamoDB, Secrets Manager, CloudWatch)
  • Secrets Manager x4 — API keys: Anthropic, Cohere, Qdrant (URL + key)
  • CloudWatch log groups x4 — ingestion, retrieval, mcp-server, temporal-worker
  • CloudWatch alarms x4 — indexing failures, query latency p99, DynamoDB throttling (one alarm per table)
  • Security Group — MCP server (port 3000 VPC ingress, HTTPS + DNS egress)
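
For illustration, the security group rules described above might be expressed roughly like this in HCL (all resource and data-source names here are assumptions for the sketch, not taken from the actual files):

```hcl
# Sketch of the MCP server security group; names are illustrative.
resource "aws_security_group" "mcp" {
  name        = "mn-vectorization-mcp"
  description = "MCP server"
  vpc_id      = data.aws_vpc.main.id
}

# Port 3000 ingress restricted to the VPC CIDR
resource "aws_security_group_rule" "mcp_ingress" {
  type              = "ingress"
  from_port         = 3000
  to_port           = 3000
  protocol          = "tcp"
  cidr_blocks       = [data.aws_vpc.main.cidr_block]
  security_group_id = aws_security_group.mcp.id
}

# HTTPS egress for AWS APIs and external services
resource "aws_security_group_rule" "mcp_egress_https" {
  type              = "egress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.mcp.id
}
```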

File structure

mn-vectorization/infra/
├── backend.tf              # S3 backend (config passed at init)
├── provider.tf             # AWS provider, default tags
├── variables.tf            # All input variables with validation
├── locals.tf               # Name prefixes, DynamoDB/Secrets/LogGroup maps
├── data.tf                 # Existing VPC + EC2 data sources
├── s3.tf                   # Artifacts bucket
├── dynamodb.tf             # Tasks + Users tables
├── iam.tf                  # Worker role + 6 policies
├── secrets.tf              # API keys in Secrets Manager
├── cloudwatch.tf           # Log groups + alarms
├── security_group.tf       # MCP server SG
├── outputs.tf              # ARNs, names, IDs
├── environments/
│   ├── dev.tfvars
│   ├── staging.tfvars
│   └── prod.tfvars
└── .gitignore
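
Since backend.tf leaves the backend configuration to be passed at init time, a typical local workflow would look something like this (bucket, key, and region values below are placeholders, not the project's actual backend settings):

```shell
# Initialize with backend settings supplied at init time (values illustrative)
terraform init \
  -backend-config="bucket=my-tf-state-bucket" \
  -backend-config="key=mn-vectorization/dev/terraform.tfstate" \
  -backend-config="region=us-east-1"

# Plan against the environment-specific variables
terraform plan -var-file=environments/dev.tfvars
```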

Notes

  • References existing VPC and EC2 instance via data sources (does not create or modify them)
  • ec2_instance_id in dev.tfvars is a placeholder — needs real value
  • Secret values are placeholders (CHANGE_ME) — real keys set via aws secretsmanager put-secret-value
  • Security group and instance profile are created but not auto-attached to EC2
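
The last two notes imply a couple of follow-up AWS CLI calls after apply; a sketch (secret names, instance ID, and profile name below are illustrative, not the real values):

```shell
# Replace the CHANGE_ME placeholder with a real key (secret name illustrative)
aws secretsmanager put-secret-value \
  --secret-id "mn-vectorization/dev/anthropic-api-key" \
  --secret-string "$ANTHROPIC_API_KEY"

# Attach the created instance profile to the existing EC2 worker (IDs illustrative)
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=mn-vectorization-dev-worker
```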

[coderabbit-ai-review]

@claude review

Summary by CodeRabbit

  • Infrastructure

    • Added full Terraform provisioning with environment-specific configs, variables, locals, and exposed outputs.
  • Monitoring

    • Centralized logging and alarms for pipeline failures, query latency, and DB throttling.
  • Storage & Encryption

    • Secure object storage with lifecycle rules, versioning, and optional KMS encryption.
  • Database

    • Managed NoSQL tables with per-table TTL and point-in-time recovery in prod.
  • Secrets

    • Integrated managed secret storage for API keys with versioning behavior.
  • Access & Networking

    • Added worker role/instance profile, fine-grained IAM policies, and security group rules.
  • CI/CD

    • Added Terraform workflow and configured remote state backend.
  • Chores

    • Updated ignore rules and tightened Terraform version constraint.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@github-actions

CodeRabbit

🤖 CodeRabbit AI Review Available

To request a code review from CodeRabbit AI, add [coderabbit-ai-review] to your PR title.

CodeRabbit will analyze your code and provide feedback on:

  • Logic and correctness
  • Security issues
  • Performance optimizations
  • Code quality and best practices
  • Error handling
  • Maintainability

Note: Reviews are only performed when [coderabbit-ai-review] is present in the PR title.


coderabbitai bot commented Feb 26, 2026

Warning

Rate limit exceeded

@SashkoMarchuk has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 6 minutes and 0 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 79a74e7 and 8b8d8ba.

📒 Files selected for processing (1)
  • .github/workflows/terraform-mn-vectorization.yml

Warning

CodeRabbit GitHub Action detected

The repository is using both CodeRabbit Pro and CodeRabbit Open Source (via GitHub Actions), which is not recommended as it may lead to duplicate comments and extra noise. Please remove the CodeRabbit GitHub Action.

📝 Walkthrough

Walkthrough

Adds a new Terraform-based AWS infrastructure for mn-vectorization: provider/backend, S3, DynamoDB, IAM, CloudWatch, Secrets Manager, security group, data sources, locals/outputs, environment tfvars, CI workflow, and Terraform .gitignore updates.

Changes

Cohort / File(s) — Summary

Provider & Backend / CI / Gitignore (mn-vectorization/infra/provider.tf, mn-vectorization/infra/backend.tf, .github/workflows/terraform-mn-vectorization.yml, mn-vectorization/infra/.gitignore)
Sets the Terraform required_version and AWS provider with default tags, configures the S3 remote state backend, adds a Terraform CI workflow (fmt, init, plan on PR, apply on main), and adds .gitignore rules for Terraform artifacts.

Variables, Locals & Environments (mn-vectorization/infra/variables.tf, mn-vectorization/infra/locals.tf, mn-vectorization/infra/environments/*.tfvars)
Adds comprehensive variables (validations, sensitive flags), locals for naming, dynamodb/secrets/log_groups maps, and dev/staging/prod tfvars.

S3 & Lifecycle (mn-vectorization/infra/s3.tf)
Adds aws_s3_bucket.artifacts with versioning, server-side encryption (conditional KMS), a public access block, and lifecycle rules including conditional embeddings expiry.

DynamoDB (mn-vectorization/infra/dynamodb.tf)
Converts aws_dynamodb_table.main to for_each over locals, with PAY_PER_REQUEST billing, per-table hash_key and attribute, a dynamic TTL block, PITR enabled for prod, and standardized naming/tags.

Secrets Manager (mn-vectorization/infra/secrets.tf)
Creates aws_secretsmanager_secret and aws_secretsmanager_secret_version per local.secrets, with environment-based recovery_window and a lifecycle ignore on secret_string.

IAM & Instance Profile (mn-vectorization/infra/iam.tf)
Adds aws_iam_role.worker, an instance profile, and multiple inline aws_iam_role_policy resources (S3, DynamoDB, Bedrock, CloudWatch Logs, Secrets Manager), plus a conditional KMS policy controlled by var.is_kms_enabled.

Security Group (mn-vectorization/infra/security_group.tf)
Adds aws_security_group.mcp and associated ingress/egress rules for the MCP server port, HTTPS egress, and DNS egress.

CloudWatch / Observability (mn-vectorization/infra/cloudwatch.tf)
Adds CloudWatch log groups and metric alarms (indexing failures, query p99, per-table DynamoDB throttling) with conditional SNS alarm actions and tagging.

Data Sources & Outputs (mn-vectorization/infra/data.tf, mn-vectorization/infra/outputs.tf)
Adds data lookups for the existing VPC, EC2 instance, current identity, and region, plus outputs for the S3 bucket, DynamoDB ARNs/names, IAM role/profile, secret ARNs, log group names, security group ID, VPC CIDR, and EC2 private IP.

Sequence Diagram(s)

sequenceDiagram
    actor Dev as "Developer / GitHub PR"
    participant GH as "GitHub Actions"
    participant TF as "Terraform (CI)"
    participant S3 as "Terraform S3 Backend"
    participant AWS as "AWS APIs"

    Dev->>GH: open PR / push main
    GH->>TF: checkout & setup terraform
    TF->>S3: init backend (S3)
    TF->>AWS: validate & plan (reads data sources)
    alt on main push
        TF->>AWS: apply -> create S3, DynamoDB, IAM, Secrets, SG, CloudWatch
        AWS-->>TF: return ARNs/IDs
    end
    TF-->>GH: plan/apply output

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I hopped through vars and cloudbound trees,

I seeded buckets and tagged the breeze,
I tucked secrets in a moonlit chest,
Rang tiny alarms to keep systems blessed,
A rabbit's infra—ready to nest.

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check — ✅ Passed — Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed — The pull request title clearly and specifically describes the main change: adding Terraform infrastructure for the MN Vectorization project. It is concise, directly related to the changeset, and provides meaningful context.
  • Docstring Coverage — ✅ Passed — No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (4)
mn-vectorization/infra/locals.tf (1)

19-31: Use var.project_name for secret path prefixes to keep naming consistent.

Lines 19, 23, 27, and 31 hardcode mn-vectorization instead of reusing the project variable.

Proposed fix
-      name  = "mn-vectorization/${var.environment}/anthropic-api-key"
+      name  = "${var.project_name}/${var.environment}/anthropic-api-key"
...
-      name  = "mn-vectorization/${var.environment}/cohere-api-key"
+      name  = "${var.project_name}/${var.environment}/cohere-api-key"
...
-      name  = "mn-vectorization/${var.environment}/qdrant-api-key"
+      name  = "${var.project_name}/${var.environment}/qdrant-api-key"
...
-      name  = "mn-vectorization/${var.environment}/qdrant-url"
+      name  = "${var.project_name}/${var.environment}/qdrant-url"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mn-vectorization/infra/locals.tf` around lines 19 - 31, The secret names in
locals.tf hardcode "mn-vectorization" instead of using the project variable;
update each secret name (the entries for anthropic_api_key, cohere_api_key,
qdrant_api_key, and qdrant_url) to use var.project_name as the prefix (e.g.
replace "mn-vectorization/${var.environment}/..." with
"${var.project_name}/${var.environment}/...") so the secret paths consistently
derive from var.project_name and var.environment.
mn-vectorization/infra/provider.tf (1)

2-2: Constrain Terraform core major version to avoid surprise breakage.

Line 2 (>= 1.5) permits Terraform 2.x and higher releases. Add an upper bound for safer reproducibility and consistency with the AWS provider constraint (~> 5.0).

Proposed fix
-  required_version = ">= 1.5"
+  required_version = ">= 1.5, < 2.0"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mn-vectorization/infra/provider.tf` at line 2, The Terraform required_version
constraint currently allows any Terraform 2.x release ("required_version = \">=
1.5\""); restrict it by adding an upper bound to prevent unexpected
major-version breaks (e.g., change the required_version to ">= 1.5, < 2.0") so
it aligns with the AWS provider stability approach and ensures reproducible
runs; update the required_version setting in the provider.tf (look for the
required_version assignment) accordingly.
mn-vectorization/infra/s3.tf (1)

39-55: Add rule to abort incomplete multipart uploads.

The lifecycle configuration handles embeddings expiration but doesn't clean up incomplete multipart uploads. These can accumulate silently and incur storage costs. Consider adding an abort_incomplete_multipart_upload rule.

♻️ Proposed fix to add multipart cleanup
 resource "aws_s3_bucket_lifecycle_configuration" "artifacts" {
-  count  = var.embeddings_expiry_days > 0 ? 1 : 0
   bucket = aws_s3_bucket.artifacts.id

   rule {
+    id     = "abort-incomplete-multipart"
+    status = "Enabled"
+
+    filter {
+      prefix = ""
+    }
+
+    abort_incomplete_multipart_upload {
+      days_after_initiation = 7
+    }
+  }
+
+  dynamic "rule" {
+    for_each = var.embeddings_expiry_days > 0 ? [1] : []
+    content {
       id     = "expire-embeddings"
       status = "Enabled"

       filter {
         prefix = "embeddings/"
       }

       expiration {
         days = var.embeddings_expiry_days
       }
+    }
   }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mn-vectorization/infra/s3.tf` around lines 39 - 55, The lifecycle config
(resource aws_s3_bucket_lifecycle_configuration.artifacts, rule id
"expire-embeddings") needs an abort_incomplete_multipart_upload block to clean
up partial uploads; add abort_incomplete_multipart_upload {
days_after_initiation = var.multipart_abort_days } inside the existing rule, and
introduce a new variable var.multipart_abort_days (with a sensible default,
e.g., 7) in your variables.tf (and reference it in any docs/terraform.tfvars) so
incomplete multipart uploads are automatically aborted after the configured
number of days.
mn-vectorization/infra/dynamodb.tf (1)

6-26: Consider enabling Point-in-Time Recovery for production.

The DynamoDB tables lack Point-in-Time Recovery (PITR), which provides continuous backups. For the tasks and users tables storing async query state and ACL data, enabling PITR in production would protect against accidental data loss or corruption.

♻️ Proposed fix to add PITR
 resource "aws_dynamodb_table" "main" {
   for_each     = local.dynamodb_tables
   name         = "${local.name_prefix}-${each.key}"
   billing_mode = "PAY_PER_REQUEST"
   hash_key     = each.value.hash_key

   attribute {
     name = each.value.hash_key
     type = "S"
   }

   dynamic "ttl" {
     for_each = each.value.ttl_attr != null ? [each.value.ttl_attr] : []
     content {
       attribute_name = ttl.value
       enabled        = true
     }
   }

+  point_in_time_recovery {
+    enabled = var.environment == "prod"
+  }
+
   tags = { Name = "${local.name_prefix}-${each.key}" }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mn-vectorization/infra/dynamodb.tf` around lines 6 - 26, The DynamoDB tables
in aws_dynamodb_table.main lack Point-in-Time Recovery; add a
point_in_time_recovery block and enable it for production (or at least for the
sensitive tables like "tasks" and "users"). Update the resource
(aws_dynamodb_table.main) to include either a static block:
point_in_time_recovery { enabled = true } or a conditional/dynamic block driven
by each.value.pitr or a local flag (e.g., local.env == "prod" or
each.value.pitr) so that the setting is applied only in production or for tables
that opt-in (check each.key for "tasks" and "users" or add each.value.pitr to
local.dynamodb_tables). Ensure the attribute uses enabled = true when the
condition is met.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@mn-vectorization/infra/iam.tf`:
- Around line 157-174: The aws_iam_role_policy resource
(aws_iam_role_policy.kms_access) can be created with an empty Resource ARN if
var.is_kms_enabled is true but var.kms_key_arn is empty; add a precondition
block inside aws_iam_role_policy.kms_access that asserts length(var.kms_key_arn)
> 0 and supplies a clear error_message like "kms_key_arn must be provided when
is_kms_enabled is true." Alternatively, clearly document the required
relationship in the kms_key_arn variable description in variables.tf so users
know to supply an ARN whenever is_kms_enabled is true.
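
The precondition suggested above could be sketched as follows (the resource body is elided and the count guard is an assumption; only the validation is shown):

```hcl
resource "aws_iam_role_policy" "kms_access" {
  count = var.is_kms_enabled ? 1 : 0
  # ... role reference and policy JSON as in iam.tf ...

  lifecycle {
    precondition {
      condition     = length(var.kms_key_arn) > 0
      error_message = "kms_key_arn must be provided when is_kms_enabled is true."
    }
  }
}
```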

In `@mn-vectorization/infra/locals.tf`:
- Around line 18-33: The locals file currently materializes secrets
(anthropic_api_key, cohere_api_key, qdrant_api_key, qdrant_url) into Terraform
state by assigning raw values to locals that are fed into
aws_secretsmanager_secret_version.secret_string; remove these secret local
declarations and instead consume the sensitive variables (var.anthropic_api_key,
var.cohere_api_key, var.qdrant_api_key, var.qdrant_url) directly (mark those
variable blocks as sensitive) and populate them from environment via TF_VAR_* at
apply time, or alternatively manage initial secret bootstrap out-of-band; keep
the existing lifecycle { ignore_changes = [secret_string] } on
aws_secretsmanager_secret_version to avoid future state overwrites.

---

Nitpick comments:
In `@mn-vectorization/infra/dynamodb.tf`:
- Around line 6-26: The DynamoDB tables in aws_dynamodb_table.main lack
Point-in-Time Recovery; add a point_in_time_recovery block and enable it for
production (or at least for the sensitive tables like "tasks" and "users").
Update the resource (aws_dynamodb_table.main) to include either a static block:
point_in_time_recovery { enabled = true } or a conditional/dynamic block driven
by each.value.pitr or a local flag (e.g., local.env == "prod" or
each.value.pitr) so that the setting is applied only in production or for tables
that opt-in (check each.key for "tasks" and "users" or add each.value.pitr to
local.dynamodb_tables). Ensure the attribute uses enabled = true when the
condition is met.

In `@mn-vectorization/infra/locals.tf`:
- Around line 19-31: The secret names in locals.tf hardcode "mn-vectorization"
instead of using the project variable; update each secret name (the entries for
anthropic_api_key, cohere_api_key, qdrant_api_key, and qdrant_url) to use
var.project_name as the prefix (e.g. replace
"mn-vectorization/${var.environment}/..." with
"${var.project_name}/${var.environment}/...") so the secret paths consistently
derive from var.project_name and var.environment.

In `@mn-vectorization/infra/provider.tf`:
- Line 2: The Terraform required_version constraint currently allows any
Terraform 2.x release ("required_version = \">= 1.5\""); restrict it by adding
an upper bound to prevent unexpected major-version breaks (e.g., change the
required_version to ">= 1.5, < 2.0") so it aligns with the AWS provider
stability approach and ensures reproducible runs; update the required_version
setting in the provider.tf (look for the required_version assignment)
accordingly.

In `@mn-vectorization/infra/s3.tf`:
- Around line 39-55: The lifecycle config (resource
aws_s3_bucket_lifecycle_configuration.artifacts, rule id "expire-embeddings")
needs an abort_incomplete_multipart_upload block to clean up partial uploads;
add abort_incomplete_multipart_upload { days_after_initiation =
var.multipart_abort_days } inside the existing rule, and introduce a new
variable var.multipart_abort_days (with a sensible default, e.g., 7) in your
variables.tf (and reference it in any docs/terraform.tfvars) so incomplete
multipart uploads are automatically aborted after the configured number of days.

ℹ️ Review info

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 90f3fd2 and 2b18960.

📒 Files selected for processing (16)
  • mn-vectorization/infra/.gitignore
  • mn-vectorization/infra/backend.tf
  • mn-vectorization/infra/cloudwatch.tf
  • mn-vectorization/infra/data.tf
  • mn-vectorization/infra/dynamodb.tf
  • mn-vectorization/infra/environments/dev.tfvars
  • mn-vectorization/infra/environments/prod.tfvars
  • mn-vectorization/infra/environments/staging.tfvars
  • mn-vectorization/infra/iam.tf
  • mn-vectorization/infra/locals.tf
  • mn-vectorization/infra/outputs.tf
  • mn-vectorization/infra/provider.tf
  • mn-vectorization/infra/s3.tf
  • mn-vectorization/infra/secrets.tf
  • mn-vectorization/infra/security_group.tf
  • mn-vectorization/infra/variables.tf

Comment on lines +18 to +33
anthropic_api_key = {
  name  = "mn-vectorization/${var.environment}/anthropic-api-key"
  value = var.anthropic_api_key
}
cohere_api_key = {
  name  = "mn-vectorization/${var.environment}/cohere-api-key"
  value = var.cohere_api_key
}
qdrant_api_key = {
  name  = "mn-vectorization/${var.environment}/qdrant-api-key"
  value = var.qdrant_api_key
}
qdrant_url = {
  name  = "mn-vectorization/${var.environment}/qdrant-url"
  value = var.qdrant_url
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cd mn-vectorization/infra && rg -n -C5 'aws_secretsmanager_secret_version|secret_string|local\.secrets|each\.value\.value' secrets.tf

Repository: speedandfunction/automatization

Length of output: 786


Secret values flow to state, but mitigation is in place.

Lines 20, 24, 28, and 32 pass raw secret values through Terraform locals to aws_secretsmanager_secret_version.secret_string, which means secrets are written to Terraform state on initial apply. However, the lifecycle { ignore_changes = [secret_string] } block prevents Terraform from re-writing these values on subsequent applies, limiting state exposure.

This approach works for bootstrap scenarios where values are set once and then managed out-of-band (as indicated by the comment in secrets.tf). For additional hardening, consider using Terraform variables exclusively via environment variables (TF_VAR_*) and avoiding local value declarations for secrets altogether.
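
The TF_VAR_* hardening mentioned above would look something like this (variable names match the reviewed diff; the values are placeholders):

```shell
# Supply sensitive variables via the environment instead of tfvars/locals,
# so they never land in a file checked into the repository
export TF_VAR_anthropic_api_key="..."
export TF_VAR_cohere_api_key="..."
export TF_VAR_qdrant_api_key="..."
export TF_VAR_qdrant_url="https://example.qdrant.cloud"

terraform apply -var-file=environments/dev.tfvars
```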

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mn-vectorization/infra/locals.tf` around lines 18 - 33, The locals file
currently materializes secrets (anthropic_api_key, cohere_api_key,
qdrant_api_key, qdrant_url) into Terraform state by assigning raw values to
locals that are fed into aws_secretsmanager_secret_version.secret_string; remove
these secret local declarations and instead consume the sensitive variables
(var.anthropic_api_key, var.cohere_api_key, var.qdrant_api_key, var.qdrant_url)
directly (mark those variable blocks as sensitive) and populate them from
environment via TF_VAR_* at apply time, or alternatively manage initial secret
bootstrap out-of-band; keep the existing lifecycle { ignore_changes =
[secret_string] } on aws_secretsmanager_secret_version to avoid future state
overwrites.


github-actions bot commented Feb 27, 2026

🔍 Vulnerabilities of temporal-test:latest

📦 Image Reference: temporal-test:latest
  • digest: sha256:17e54ff5e9a181d1bdbf7334ce9637f9c3934d54a65427ae36a5743f46487f15
  • vulnerabilities: critical: 5, high: 30, medium: 0, low: 0
  • platform: linux/amd64
  • size: 218 MB
  • packages: 358

📦 Base Image: alpine:3 (also known as 3.21, 3.21.3, latest)
  • digest: sha256:1c4eef651f65e2f7daee7ee785882ac164b02b78fb74503052a26dc061c90474
  • vulnerabilities: critical: 1, high: 4, medium: 9, low: 3
critical: 2 high: 6 medium: 0 low: 0 stdlib 1.23.2 (golang)

pkg:golang/stdlib@1.23.2

critical : CVE--2025--68121

Affected range<1.24.13
Fixed version1.24.13
EPSS Score0.016%
EPSS Percentile3rd percentile
Description

During session resumption in crypto/tls, if the underlying Config has its ClientCAs or RootCAs fields mutated between the initial handshake and the resumed handshake, the resumed handshake may succeed when it should have failed. This may happen when a user calls Config.Clone and mutates the returned Config, or uses Config.GetConfigForClient. This can cause a client to resume a session with a server that it would not have resumed with during the initial handshake, or cause a server to resume a session with a client that it would not have resumed with during the initial handshake.

critical : CVE--2025--22871

Affected range<1.23.8
Fixed version1.23.8
EPSS Score0.169%
EPSS Percentile38th percentile
Description

The net/http package improperly accepts a bare LF as a line terminator in chunked data chunk-size lines. This can permit request smuggling if a net/http server is used in conjunction with a server that incorrectly accepts a bare LF as part of a chunk-ext.

high : CVE--2025--61729

Affected range<1.24.11
Fixed version1.24.11
EPSS Score0.017%
EPSS Percentile4th percentile
Description

Within HostnameError.Error(), when constructing an error string, there is no limit to the number of hosts that will be printed out. Furthermore, the error string is constructed by repeated string concatenation, leading to quadratic runtime. Therefore, a certificate provided by a malicious actor can result in excessive resource consumption.

high : CVE--2025--61726

Affected range<1.24.12
Fixed version1.24.12
EPSS Score0.026%
EPSS Percentile7th percentile
Description

The net/url package does not set a limit on the number of query parameters in a query.

While the maximum size of query parameters in URLs is generally limited by the maximum request header size, the net/http.Request.ParseForm method can parse large URL-encoded forms. Parsing a large form containing many unique query parameters can cause excessive memory consumption.

high : CVE--2025--61725

Affected range<1.24.8
Fixed version1.24.8
EPSS Score0.031%
EPSS Percentile9th percentile
Description

The ParseAddress function constructs domain-literal address components through repeated string concatenation. When parsing large domain-literal components, this can cause excessive CPU consumption.

high : CVE--2025--61723

Affected range<1.24.8
Fixed version1.24.8
EPSS Score0.038%
EPSS Percentile11th percentile
Description

The processing time for parsing some invalid inputs scales non-linearly with respect to the size of the input.

This affects programs which parse untrusted PEM inputs.

high : CVE--2025--58188

Affected range<1.24.8
Fixed version1.24.8
EPSS Score0.018%
EPSS Percentile4th percentile
Description

Validating certificate chains which contain DSA public keys can cause programs to panic, due to an interface cast that assumes they implement the Equal method.

This affects programs which validate arbitrary certificate chains.

high : CVE--2025--58187

Affected range<1.24.9
Fixed version1.24.9
EPSS Score0.018%
EPSS Percentile4th percentile
Description

Due to the design of the name constraint checking algorithm, the processing time of some inputs scales non-linearly with respect to the size of the certificate.

This affects programs which validate arbitrary certificate chains.

critical: 2 high: 6 medium: 0 low: 0 stdlib 1.23.6 (golang)

pkg:golang/stdlib@1.23.6

critical : CVE--2025--68121

Affected range<1.24.13
Fixed version1.24.13
EPSS Score0.016%
EPSS Percentile3rd percentile
Description

During session resumption in crypto/tls, if the underlying Config has its ClientCAs or RootCAs fields mutated between the initial handshake and the resumed handshake, the resumed handshake may succeed when it should have failed. This may happen when a user calls Config.Clone and mutates the returned Config, or uses Config.GetConfigForClient. This can cause a client to resume a session with a server that it would not have resumed with during the initial handshake, or cause a server to resume a session with a client that it would not have resumed with during the initial handshake.

critical : CVE--2025--22871

Affected range<1.23.8
Fixed version1.23.8
EPSS Score0.169%
EPSS Percentile38th percentile
Description

The net/http package improperly accepts a bare LF as a line terminator in chunked data chunk-size lines. This can permit request smuggling if a net/http server is used in conjunction with a server that incorrectly accepts a bare LF as part of a chunk-ext.

high : CVE--2025--61729

Affected range<1.24.11
Fixed version1.24.11
EPSS Score0.017%
EPSS Percentile4th percentile
Description

Within HostnameError.Error(), when constructing an error string, there is no limit to the number of hosts that will be printed out. Furthermore, the error string is constructed by repeated string concatenation, leading to quadratic runtime. Therefore, a certificate provided by a malicious actor can result in excessive resource consumption.

high : CVE--2025--61726

Affected range<1.24.12
Fixed version1.24.12
EPSS Score0.026%
EPSS Percentile7th percentile
Description

The net/url package does not set a limit on the number of query parameters in a query.

While the maximum size of query parameters in URLs is generally limited by the maximum request header size, the net/http.Request.ParseForm method can parse large URL-encoded forms. Parsing a large form containing many unique query parameters can cause excessive memory consumption.

high : CVE--2025--61725

Affected range<1.24.8
Fixed version1.24.8
EPSS Score0.031%
EPSS Percentile9th percentile
Description

The ParseAddress function constructs domain-literal address components through repeated string concatenation. When parsing large domain-literal components, this can cause excessive CPU consumption.

high : CVE--2025--61723

Affected range<1.24.8
Fixed version1.24.8
EPSS Score0.038%
EPSS Percentile11th percentile
Description

The processing time for parsing some invalid inputs scales non-linearly with respect to the size of the input.

This affects programs which parse untrusted PEM inputs.

high : CVE--2025--58188

Affected range<1.24.8
Fixed version1.24.8
EPSS Score0.018%
EPSS Percentile4th percentile
Description

Validating certificate chains which contain DSA public keys can cause programs to panic, due to an interface cast that assumes they implement the Equal method.

This affects programs which validate arbitrary certificate chains.

high : CVE--2025--58187

Affected range<1.24.9
Fixed version1.24.9
EPSS Score0.018%
EPSS Percentile4th percentile
Description

Due to the design of the name constraint checking algorithm, the processing time of some inputs scales non-linearly with respect to the size of the certificate.

This affects programs which validate arbitrary certificate chains.

critical: 1 high: 4 medium: 0 low: 0 openssl 3.3.3-r0 (apk)

pkg:apk/alpine/openssl@3.3.3-r0?os_name=alpine&os_version=3.21

critical : CVE--2025--15467

Affected range<3.3.6-r0
Fixed version3.3.6-r0
EPSS Score0.703%
EPSS Percentile72nd percentile
Description

high : CVE--2025--9230

Affected range<3.3.5-r0
Fixed version3.3.5-r0
EPSS Score0.023%
EPSS Percentile6th percentile
Description

high : CVE--2025--69421

Affected range<3.3.6-r0
Fixed version3.3.6-r0
EPSS Score0.059%
EPSS Percentile18th percentile
Description

high : CVE--2025--69420

Affected range<3.3.6-r0
Fixed version3.3.6-r0
EPSS Score0.070%
EPSS Percentile21st percentile
Description

high : CVE--2025--69419

Affected range<3.3.6-r0
Fixed version3.3.6-r0
EPSS Score0.056%
EPSS Percentile17th percentile
Description
critical: 0 high: 2 medium: 0 low: 0 curl 8.12.1-r0 (apk)

pkg:apk/alpine/curl@8.12.1-r0?os_name=alpine&os_version=3.21

high : CVE--2025--9086

Affected range<8.14.1-r2
Fixed version8.14.1-r2
EPSS Score0.035%
EPSS Percentile10th percentile
Description

high : CVE--2025--5399

Affected range<8.14.1-r0
Fixed version8.14.1-r0
EPSS Score0.146%
EPSS Percentile35th percentile
Description
critical: 0 high: 2 medium: 0 low: 0 golang.org/x/crypto 0.32.0 (golang)

pkg:golang/golang.org/x/crypto@0.32.0

high: CVE-2025-47913

Affected range: <0.43.0
Fixed version: 0.43.0
EPSS Score: 0.033%
EPSS Percentile: 9th
Description

SSH clients receiving SSH_AGENT_SUCCESS when expecting a typed response will panic and cause early termination of the client process.

high: CVE-2025-22869

Affected range: <0.35.0
Fixed version: 0.35.0
EPSS Score: 0.215%
EPSS Percentile: 44th
Description

SSH servers which implement file transfer protocols are vulnerable to a denial of service attack from clients which complete the key exchange slowly, or not at all, causing pending content to be read into memory, but never transmitted.

critical: 0 high: 1 medium: 0 low: 0 go.opentelemetry.io/otel/sdk 1.34.0 (golang)

pkg:golang/go.opentelemetry.io/otel/sdk@1.34.0

high: CVE-2026-24051

Affected range: >=1.21.0, <1.40.0
Fixed version: 1.40.0
EPSS Score: 0.006%
EPSS Percentile: 0th
Description

OpenTelemetry Go SDK Vulnerable to Arbitrary Code Execution via PATH Hijacking in go.opentelemetry.io/otel/sdk

critical: 0 high: 1 medium: 0 low: 0 github.com/golang-jwt/jwt 3.2.2+incompatible (golang)

pkg:golang/github.com/golang-jwt/jwt@3.2.2%2Bincompatible

high 8.7: CVE-2025-30204 Asymmetric Resource Consumption (Amplification)

Affected range: >=3.2.0, <=3.2.2
Fixed version: Not Fixed
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N
EPSS Score: 0.102%
EPSS Percentile: 28th
Description

Summary

Function parse.ParseUnverified currently splits (via a call to strings.Split) its argument (which is untrusted data) on periods.

As a result, in the face of a malicious request whose Authorization header consists of Bearer followed by many period characters, a call to that function incurs allocations to the tune of O(n) bytes (where n stands for the length of the function's argument), with a constant factor of about 16. Relevant weakness: CWE-405: Asymmetric Resource Consumption (Amplification)

Details

See parse.ParseUnverified

Impact

Excessive memory allocation

critical: 0 high: 1 medium: 0 low: 0 golang.org/x/oauth2 0.7.0 (golang)

pkg:golang/golang.org/x/oauth2@0.7.0

high 7.5: CVE-2025-22868 Improper Validation of Syntactic Correctness of Input

Affected range: <0.27.0
Fixed version: 0.27.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.112%
EPSS Percentile: 30th
Description

An attacker can pass a malicious malformed token which causes unexpected memory to be consumed during parsing.

critical: 0 high: 1 medium: 0 low: 0 go.opentelemetry.io/otel/sdk 1.35.0 (golang)

pkg:golang/go.opentelemetry.io/otel/sdk@1.35.0

high: CVE-2026-24051

Affected range: >=1.21.0, <1.40.0
Fixed version: 1.40.0
EPSS Score: 0.006%
EPSS Percentile: 0th
Description

OpenTelemetry Go SDK Vulnerable to Arbitrary Code Execution via PATH Hijacking in go.opentelemetry.io/otel/sdk

critical: 0 high: 1 medium: 0 low: 0 golang.org/x/crypto 0.36.0 (golang)

pkg:golang/golang.org/x/crypto@0.36.0

high: CVE-2025-47913

Affected range: <0.43.0
Fixed version: 0.43.0
EPSS Score: 0.033%
EPSS Percentile: 9th
Description

SSH clients receiving SSH_AGENT_SUCCESS when expecting a typed response will panic and cause early termination of the client process.

critical: 0 high: 1 medium: 0 low: 0 golang.org/x/crypto 0.35.0 (golang)

pkg:golang/golang.org/x/crypto@0.35.0

high: CVE-2025-47913

Affected range: <0.43.0
Fixed version: 0.43.0
EPSS Score: 0.033%
EPSS Percentile: 9th
Description

SSH clients receiving SSH_AGENT_SUCCESS when expecting a typed response will panic and cause early termination of the client process.

critical: 0 high: 1 medium: 0 low: 0 c-ares 1.34.3-r0 (apk)

pkg:apk/alpine/c-ares@1.34.3-r0?os_name=alpine&os_version=3.21

high: CVE-2025-31498

Affected range: <1.34.5-r0
Fixed version: 1.34.5-r0
EPSS Score: 0.618%
EPSS Percentile: 70th
Description

critical: 0 high: 1 medium: 0 low: 0 golang.org/x/oauth2 0.26.0 (golang)

pkg:golang/golang.org/x/oauth2@0.26.0

high 7.5: CVE-2025-22868 Improper Validation of Syntactic Correctness of Input

Affected range: <0.27.0
Fixed version: 0.27.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.112%
EPSS Percentile: 30th
Description

An attacker can pass a malicious malformed token which causes unexpected memory to be consumed during parsing.

critical: 0 high: 1 medium: 0 low: 0 github.com/golang-jwt/jwt/v4 4.5.1 (golang)

pkg:golang/github.com/golang-jwt/jwt@4.5.1#v4

high 8.7: CVE-2025-30204 Asymmetric Resource Consumption (Amplification)

Affected range: <4.5.2
Fixed version: 4.5.2
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N
EPSS Score: 0.102%
EPSS Percentile: 28th
Description

Summary

Function parse.ParseUnverified currently splits (via a call to strings.Split) its argument (which is untrusted data) on periods.

As a result, in the face of a malicious request whose Authorization header consists of Bearer followed by many period characters, a call to that function incurs allocations to the tune of O(n) bytes (where n stands for the length of the function's argument), with a constant factor of about 16. Relevant weakness: CWE-405: Asymmetric Resource Consumption (Amplification)

Details

See parse.ParseUnverified

Impact

Excessive memory allocation

critical: 0 high: 1 medium: 0 low: 0 go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc 0.36.4 (golang)

pkg:golang/go.opentelemetry.io/contrib/instrumentation@0.36.4#google.golang.org/grpc/otelgrpc

high: CVE-2023-47108 OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities

Affected range: <0.46.0
Fixed version: 0.46.0
EPSS Score: 4.299%
EPSS Percentile: 89th
Description

OpenTelemetry-Go Contrib is a collection of third-party packages for OpenTelemetry-Go. Prior to version 0.46.0, the grpc Unary Server Interceptor out of the box adds labels net.peer.sock.addr and net.peer.sock.port that have unbound cardinality. It leads to the server's potential memory exhaustion when many malicious requests are sent. An attacker can easily flood the peer address and port for requests. Version 0.46.0 contains a fix for this issue. As a workaround to stop being affected, a view removing the attributes can be used. The other possibility is to disable grpc metrics instrumentation by passing otelgrpc.WithMeterProvider option with noop.NewMeterProvider.

@SashkoMarchuk SashkoMarchuk changed the title Add Terraform infrastructure for MN Vectorization project Add Terraform infrastructure for MN Vectorization [coderabbit-ai-review] Feb 27, 2026
@github-actions
github-actions bot commented Feb 27, 2026

@github-actions github-actions bot left a comment

Commits: files that changed from the base of the PR between 90f3fd2 and 2b18960.
Files selected (16)
  • mn-vectorization/infra/.gitignore (1)
  • mn-vectorization/infra/backend.tf (1)
  • mn-vectorization/infra/cloudwatch.tf (1)
  • mn-vectorization/infra/data.tf (1)
  • mn-vectorization/infra/dynamodb.tf (1)
  • mn-vectorization/infra/environments/dev.tfvars (1)
  • mn-vectorization/infra/environments/prod.tfvars (1)
  • mn-vectorization/infra/environments/staging.tfvars (1)
  • mn-vectorization/infra/iam.tf (1)
  • mn-vectorization/infra/locals.tf (1)
  • mn-vectorization/infra/outputs.tf (1)
  • mn-vectorization/infra/provider.tf (1)
  • mn-vectorization/infra/s3.tf (1)
  • mn-vectorization/infra/secrets.tf (1)
  • mn-vectorization/infra/security_group.tf (1)
  • mn-vectorization/infra/variables.tf (1)
Files not summarized due to errors (16)
  • mn-vectorization/infra/.gitignore (nothing obtained from openai)
  • mn-vectorization/infra/cloudwatch.tf (nothing obtained from openai)
  • mn-vectorization/infra/backend.tf (nothing obtained from openai)
  • mn-vectorization/infra/dynamodb.tf (nothing obtained from openai)
  • mn-vectorization/infra/environments/dev.tfvars (nothing obtained from openai)
  • mn-vectorization/infra/data.tf (nothing obtained from openai)
  • mn-vectorization/infra/environments/prod.tfvars (nothing obtained from openai)
  • mn-vectorization/infra/environments/staging.tfvars (nothing obtained from openai)
  • mn-vectorization/infra/locals.tf (nothing obtained from openai)
  • mn-vectorization/infra/iam.tf (nothing obtained from openai)
  • mn-vectorization/infra/outputs.tf (nothing obtained from openai)
  • mn-vectorization/infra/provider.tf (nothing obtained from openai)
  • mn-vectorization/infra/s3.tf (nothing obtained from openai)
  • mn-vectorization/infra/secrets.tf (nothing obtained from openai)
  • mn-vectorization/infra/security_group.tf (nothing obtained from openai)
  • mn-vectorization/infra/variables.tf (nothing obtained from openai)
Files not reviewed due to errors (16)
  • mn-vectorization/infra/backend.tf (no response)
  • mn-vectorization/infra/data.tf (no response)
  • mn-vectorization/infra/.gitignore (no response)
  • mn-vectorization/infra/dynamodb.tf (no response)
  • mn-vectorization/infra/cloudwatch.tf (no response)
  • mn-vectorization/infra/environments/dev.tfvars (no response)
  • mn-vectorization/infra/environments/prod.tfvars (no response)
  • mn-vectorization/infra/provider.tf (no response)
  • mn-vectorization/infra/environments/staging.tfvars (no response)
  • mn-vectorization/infra/outputs.tf (no response)
  • mn-vectorization/infra/iam.tf (no response)
  • mn-vectorization/infra/locals.tf (no response)
  • mn-vectorization/infra/s3.tf (no response)
  • mn-vectorization/infra/secrets.tf (no response)
  • mn-vectorization/infra/security_group.tf (no response)
  • mn-vectorization/infra/variables.tf (no response)
Review comments generated (0)
  • Review: 0
  • LGTM: 0

Tips

Chat with CodeRabbit Bot (@coderabbitai)

  • Reply on review comments left by this bot to ask follow-up questions. A review comment is a comment on a diff or a file.
  • Invite the bot into a review comment chain by tagging @coderabbitai in a reply.

Code suggestions

  • The bot may make code suggestions, but please review them carefully before committing since the line number ranges may be misaligned.
  • You can edit the comment made by the bot and manually tweak the suggestion if it is slightly off.

Pausing incremental reviews

  • Add @coderabbitai: ignore anywhere in the PR description to pause further reviews from the bot.

@SashaMarchuk

Code review

No issues found. Checked for bugs and CLAUDE.md compliance.

Reviewed: S3 naming, IAM policies, DynamoDB schema, Secrets Manager, CloudWatch alarms, security group rules, variable validation, cross-file references, and environment tfvars consistency.

🤖 Generated with Claude Code

- If this code review was useful, please react with 👍. Otherwise, react with 👎.

…dd GH Actions pipeline

Address all 6 CodeRabbit issues: KMS ARN validation, secrets bootstrap docs,
parameterized secret paths, TF version upper bound, multipart upload cleanup,
and DynamoDB PITR. Switch resource naming to underscore convention per Nomad
standards (S3 keeps hyphens). Add Terraform CI/CD workflow for fmt/validate/plan/apply.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/terraform-mn-vectorization.yml:
- Around line 58-62: In the "Terraform Plan" step, remove the unconditional
continue-on-error: true and run terraform plan with the -detailed-exitcode flag
(terraform plan -var-file=environments/dev.tfvars -no-color -input=false
-detailed-exitcode), then explicitly inspect the plan exit code: if exit code ==
1 fail the job (error/exit non-zero) to surface real errors, if exit code == 0
treat as no changes and succeed, and if exit code == 2 treat as detected
changes/drift (succeed but set a clear step output or annotation indicating
changes were found); implement this explicit exit-code handling in the step that
invokes the terraform plan command.
- Around line 13-16: Update the workflow to use GitHub OIDC by adding the
id-token: write permission under the permissions block and replacing static AWS
key usage with role assumption: remove references to
secrets.MN_VECTORIZATION_AWS_ACCESS_KEY_ID and
secrets.MN_VECTORIZATION_AWS_SECRET_ACCESS_KEY and update the AWS auth step (the
step that currently configures AWS credentials) to use
actions/configure-aws-credentials with role-to-assume: ${{
secrets.MN_VECTORIZATION_AWS_ROLE_ARN }} (and preserve aws-region) so the job
uses OIDC role assumption instead of long-lived secrets.

In `@mn-vectorization/infra/environments/staging.tfvars`:
- Around line 10-12: The staging tfvars enables alarms with is_alarm_enabled =
true while alarm_sns_topic_arn is empty; change the configuration so alarms are
only enabled when a notification target exists by either supplying a valid SNS
topic ARN to alarm_sns_topic_arn or setting is_alarm_enabled = false until an
SNS topic is provisioned (update the values for alarm_sns_topic_arn and
is_alarm_enabled accordingly, leaving log_retention_days unchanged).

In `@mn-vectorization/infra/security_group.tf`:
- Around line 29-37: The egress rule aws_security_group_rule.mcp_https_egress
currently allows 0.0.0.0/0; change it to use a configurable allowlist by
replacing cidr_blocks = ["0.0.0.0/0"] with cidr_blocks =
var.mcp_https_egress_cidrs (or the equivalent variable name) and add a new
variable definition var.mcp_https_egress_cidrs in variables.tf typed as
list(string) with a secure default (e.g., VPC CIDR and any required external
service CIDRs) and document it; consider using aws_vpc_endpoint resources for
AWS services where applicable and ensure any module/stack consumers pass the
narrow CIDR set.

ℹ️ Review info

Configuration used: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2b18960 and 3e899c8.

📒 Files selected for processing (13)
  • .github/workflows/terraform-mn-vectorization.yml
  • mn-vectorization/infra/cloudwatch.tf
  • mn-vectorization/infra/dynamodb.tf
  • mn-vectorization/infra/environments/dev.tfvars
  • mn-vectorization/infra/environments/prod.tfvars
  • mn-vectorization/infra/environments/staging.tfvars
  • mn-vectorization/infra/iam.tf
  • mn-vectorization/infra/locals.tf
  • mn-vectorization/infra/provider.tf
  • mn-vectorization/infra/s3.tf
  • mn-vectorization/infra/secrets.tf
  • mn-vectorization/infra/security_group.tf
  • mn-vectorization/infra/variables.tf
🚧 Files skipped from review as they are similar to previous changes (5)
  • mn-vectorization/infra/environments/dev.tfvars
  • mn-vectorization/infra/provider.tf
  • mn-vectorization/infra/variables.tf
  • mn-vectorization/infra/environments/prod.tfvars
  • mn-vectorization/infra/locals.tf

Comment on lines +13 to +16
permissions:
  contents: read
  pull-requests: write


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's locate and read the workflow file
find . -name "terraform-mn-vectorization.yml" -type f

Repository: speedandfunction/automatization

Length of output: 125


🏁 Script executed:

# Let's also search for this file with a broader pattern
git ls-files | grep -i "terraform.*vectorization"

Repository: speedandfunction/automatization

Length of output: 123


🏁 Script executed:

cat -n .github/workflows/terraform-mn-vectorization.yml

Repository: speedandfunction/automatization

Length of output: 2303


🌐 Web query:

aws-actions/configure-aws-credentials v4 OIDC role-to-assume documentation

💡 Result:

To use aws-actions/configure-aws-credentials@v4 with GitHub OIDC and role-to-assume:

1) GitHub Actions workflow (OIDC)

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials (OIDC)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ROLE_NAME>
          aws-region: us-east-1

      - run: aws sts get-caller-identity

[1][2]

2) IAM role trust policy (key OIDC bits)

Your role’s trust policy must allow sts:AssumeRoleWithWebIdentity from GitHub’s OIDC provider, typically scoping by aud and sub, e.g.:

  • token.actions.githubusercontent.com:aud = sts.amazonaws.com
  • token.actions.githubusercontent.com:sub = repo:<ORG>/<REPO>:ref:refs/heads/<BRANCH>
    [1][2]

3) Primary docs

[1] https://github.com/aws-actions/configure-aws-credentials (README: “Quick Start (OIDC, recommended)”, options incl. role-to-assume/audience)
[2] https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services (end-to-end GitHub OIDC + role-to-assume example)

Switch to GitHub OIDC authentication to eliminate static AWS credentials.

The workflow currently uses long-lived AWS access keys stored in secrets (lines 41–42), which increases credential exposure risk. Replace with OIDC role assumption as recommended by AWS and GitHub.

Required changes:

  • Add id-token: write permission (line 13)
  • Replace static key references with role-to-assume: ${{ secrets.MN_VECTORIZATION_AWS_ROLE_ARN }}
🔧 Suggested OIDC-based auth
 permissions:
   contents: read
   pull-requests: write
+  id-token: write
@@
       - name: Configure AWS Credentials
         uses: aws-actions/configure-aws-credentials@v4
         with:
-          aws-access-key-id: ${{ secrets.MN_VECTORIZATION_AWS_ACCESS_KEY_ID }}
-          aws-secret-access-key: ${{ secrets.MN_VECTORIZATION_AWS_SECRET_ACCESS_KEY }}
+          role-to-assume: ${{ secrets.MN_VECTORIZATION_AWS_ROLE_ARN }}
           aws-region: ${{ env.AWS_REGION }}

Also applies to: lines 38–43

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/terraform-mn-vectorization.yml around lines 13 - 16,
Update the workflow to use GitHub OIDC by adding the id-token: write permission
under the permissions block and replacing static AWS key usage with role
assumption: remove references to secrets.MN_VECTORIZATION_AWS_ACCESS_KEY_ID and
secrets.MN_VECTORIZATION_AWS_SECRET_ACCESS_KEY and update the AWS auth step (the
step that currently configures AWS credentials) to use
actions/configure-aws-credentials with role-to-assume: ${{
secrets.MN_VECTORIZATION_AWS_ROLE_ARN }} (and preserve aws-region) so the job
uses OIDC role assumption instead of long-lived secrets.

Comment on lines +58 to +62
      - name: Terraform Plan
        if: github.event_name == 'pull_request'
        run: terraform plan -var-file=environments/dev.tfvars -no-color -input=false
        continue-on-error: true


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n .github/workflows/terraform-mn-vectorization.yml | head -80

Repository: speedandfunction/automatization

Length of output: 2303


🌐 Web query:

Terraform plan exit codes -detailed-exitcode behavior meaning

💡 Result:

terraform plan -detailed-exitcode makes terraform plan return three possible exit codes so automation can tell the difference between “no changes” and “changes needed”:

  • 0: Plan succeeded and the diff is empty (no changes to apply).
  • 1: Plan failed (error running the plan).
  • 2: Plan succeeded and the diff is non-empty (there are changes to apply). [1]

Without -detailed-exitcode, terraform plan generally returns 0 for success (even if changes are shown) and 1 for errors, so you can’t reliably detect “changes present” via the exit code alone. [1]


Do not swallow Terraform plan failures in PRs.

continue-on-error: true silently masks plan errors (exit code 1), such as invalid configurations or provider failures. The -detailed-exitcode flag distinguishes between actual failures, no changes (exit 0), and detected drift/changes (exit 2). Handle these cases explicitly:

Suggested fix
       - name: Terraform Plan
         if: github.event_name == 'pull_request'
-        run: terraform plan -var-file=environments/dev.tfvars -no-color -input=false
-        continue-on-error: true
+        run: |
+          set +e
+          terraform plan -var-file=environments/dev.tfvars -no-color -input=false -detailed-exitcode
+          ec=$?
+          if [ "$ec" -eq 1 ]; then
+            exit 1
+          fi
+          exit 0
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
      - name: Terraform Plan
        if: github.event_name == 'pull_request'
        run: terraform plan -var-file=environments/dev.tfvars -no-color -input=false
        continue-on-error: true
      - name: Terraform Plan
        if: github.event_name == 'pull_request'
        run: |
          set +e
          terraform plan -var-file=environments/dev.tfvars -no-color -input=false -detailed-exitcode
          ec=$?
          if [ "$ec" -eq 1 ]; then
            exit 1
          fi
          exit 0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/terraform-mn-vectorization.yml around lines 58 - 62, In
the "Terraform Plan" step, remove the unconditional continue-on-error: true and
run terraform plan with the -detailed-exitcode flag (terraform plan
-var-file=environments/dev.tfvars -no-color -input=false -detailed-exitcode),
then explicitly inspect the plan exit code: if exit code == 1 fail the job
(error/exit non-zero) to surface real errors, if exit code == 0 treat as no
changes and succeed, and if exit code == 2 treat as detected changes/drift
(succeed but set a clear step output or annotation indicating changes were
found); implement this explicit exit-code handling in the step that invokes the
terraform plan command.

Comment on lines +10 to +12
log_retention_days  = 30
alarm_sns_topic_arn = ""
is_alarm_enabled    = true

⚠️ Potential issue | 🟡 Minor

Enable alarms only when a notification target exists.

This staging config enables alarms but leaves alarm_sns_topic_arn empty, so alarms will not notify anyone. Consider setting is_alarm_enabled = false until an SNS topic is wired.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mn-vectorization/infra/environments/staging.tfvars` around lines 10 - 12, The
staging tfvars enables alarms with is_alarm_enabled = true while
alarm_sns_topic_arn is empty; change the configuration so alarms are only
enabled when a notification target exists by either supplying a valid SNS topic
ARN to alarm_sns_topic_arn or setting is_alarm_enabled = false until an SNS
topic is provisioned (update the values for alarm_sns_topic_arn and
is_alarm_enabled accordingly, leaving log_retention_days unchanged).

Comment on lines +29 to +37
resource "aws_security_group_rule" "mcp_https_egress" {
  type              = "egress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.mcp.id
  description       = "HTTPS egress for AWS services and external APIs"
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n mn-vectorization/infra/security_group.tf

Repository: speedandfunction/automatization

Length of output: 2458


🏁 Script executed:

fd -t f "variables.tf" mn-vectorization/infra/

Repository: speedandfunction/automatization

Length of output: 110


🏁 Script executed:

fd -t f "\.tf$" mn-vectorization/infra/ | head -20

Repository: speedandfunction/automatization

Length of output: 483


🏁 Script executed:

cat -n mn-vectorization/infra/variables.tf

Repository: speedandfunction/automatization

Length of output: 4894


🏁 Script executed:

rg -i "vpc.*endpoint|vpce-" mn-vectorization/infra/

Repository: speedandfunction/automatization

Length of output: 57


🏁 Script executed:

rg -C3 "0\.0\.0\.0/0" mn-vectorization/infra/

Repository: speedandfunction/automatization

Length of output: 599


Restrict HTTPS egress scope (currently unrestricted).

Allowing outbound 443 to 0.0.0.0/0 is broader than necessary for this worker role and weakens network controls, especially since DNS and ingress rules are properly scoped to the VPC. This permits unrestricted outbound access to any external service on HTTPS, increasing the blast radius if credentials or the instance are compromised. Prefer a controlled CIDR allowlist (e.g., via variable) or VPC endpoints where feasible for AWS services.

🔧 Suggested approach
 resource "aws_security_group_rule" "mcp_https_egress" {
   type              = "egress"
   from_port         = 443
   to_port           = 443
   protocol          = "tcp"
-  cidr_blocks       = ["0.0.0.0/0"]
+  cidr_blocks       = var.mcp_https_egress_cidrs
   security_group_id = aws_security_group.mcp.id
   description       = "HTTPS egress for AWS services and external APIs"
 }

Add var.mcp_https_egress_cidrs to variables.tf as a list of strings with explicit CIDR ranges (e.g., VPC CIDR + specific external endpoints).
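
A minimal sketch of such a variable definition (the variable name comes from the review comment; the default CIDR is a placeholder assumption, not code in this PR):

```hcl
variable "mcp_https_egress_cidrs" {
  description = "CIDR blocks allowed for HTTPS egress from the MCP security group (VPC CIDR plus any required external API ranges)."
  type        = list(string)

  # Placeholder default; replace with the actual VPC CIDR and the
  # specific external endpoints the worker must reach.
  default = ["10.0.0.0/16"]
}
```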

🧰 Tools
🪛 Trivy (0.69.1)

[error] 34-34: A security group rule should not allow unrestricted egress to any IP address.

Security group rule allows unrestricted egress to any IP address.

Rule: AWS-0104

Resource: aws_security_group_rule.mcp_https_egress

Learn more

(IaC/AWS)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mn-vectorization/infra/security_group.tf` around lines 29 - 37, The egress
rule aws_security_group_rule.mcp_https_egress currently allows 0.0.0.0/0; change
it to use a configurable allowlist by replacing cidr_blocks = ["0.0.0.0/0"] with
cidr_blocks = var.mcp_https_egress_cidrs (or the equivalent variable name) and
add a new variable definition var.mcp_https_egress_cidrs in variables.tf typed
as list(string) with a secure default (e.g., VPC CIDR and any required external
service CIDRs) and document it; consider using aws_vpc_endpoint resources for
AWS services where applicable and ensure any module/stack consumers pass the
narrow CIDR set.

@SashaMarchuk

Hi @a-nomad,

This PR adds the Terraform IAM foundation module and a GitHub Actions CI pipeline for Terraform (fmt/validate/plan). Key updates since the initial submission:

  • CodeRabbit suggestions addressed: naming conventions aligned, redundant outputs removed, variable descriptions improved, lifecycle rules added
  • GH Actions pipeline added: runs terraform fmt -check, validate, and plan on PRs touching terraform/

Ask: Could you review by Monday, Mar 2? Specifically — is the TF code quality and structure acceptable as a foundation to extend with VPC, Qdrant, and Bedrock modules, or would you prefer a different approach?

Note on CI checks: Some failing checks (Docker Security Scan, SonarCloud) are pre-existing repo-wide failures unrelated to this PR. The new Terraform CI check requires AWS credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION) to be configured in the repo secrets before it can run terraform plan successfully.

Thanks!

continue-on-error: true silently masked real Terraform plan failures
(exit code 1). Now uses -detailed-exitcode to distinguish between
actual failures (exit 1 → fail pipeline) and detected changes (exit 2
→ pass, expected on PRs).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
.github/workflows/terraform-mn-vectorization.yml (1)

13-16: ⚠️ Potential issue | 🟠 Major

Migrate AWS auth to OIDC instead of static access keys.

Line 41 and Line 42 still rely on long-lived credentials. This is a security risk and was already identified earlier; it should be resolved before rollout.

Suggested fix
 permissions:
   contents: read
   pull-requests: write
+  id-token: write
@@
       - name: Configure AWS Credentials
         uses: aws-actions/configure-aws-credentials@v4
         with:
-          aws-access-key-id: ${{ secrets.MN_VECTORIZATION_AWS_ACCESS_KEY_ID }}
-          aws-secret-access-key: ${{ secrets.MN_VECTORIZATION_AWS_SECRET_ACCESS_KEY }}
+          role-to-assume: ${{ secrets.MN_VECTORIZATION_AWS_ROLE_ARN }}
           aws-region: ${{ env.AWS_REGION }}

Also applies to: 38-43

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/terraform-mn-vectorization.yml around lines 13 - 16,
Current workflow uses long-lived AWS credentials; replace them with OIDC role
assumption by switching the GitHub Action to
aws-actions/configure-aws-credentials@v4 and removing the
aws-access-key-id/aws-secret-access-key usage. Update the workflow step that
configures AWS (look for configure-aws-credentials) to use role-to-assume and
aws-region inputs and ensure the job-level permissions include id-token: write
(keep contents: read). Also confirm the target AWS role has a trust policy for
GitHub OIDC; remove any secrets referencing long-lived keys from the workflow
and from repo secrets once rolled out.
🧹 Nitpick comments (1)
.github/workflows/terraform-mn-vectorization.yml (1)

58-69: Add explicit signal for “changes detected” (exit code 2).

Current logic succeeds for both exit code 0 and 2 but emits no explicit indicator for code 2. A notice improves PR observability.

Suggested fix
       - name: Terraform Plan
         if: github.event_name == 'pull_request'
         run: |
           set +e
           terraform plan -var-file=environments/dev.tfvars -no-color -input=false -detailed-exitcode
           ec=$?
           if [ "$ec" -eq 1 ]; then
             echo "::error::Terraform plan failed"
             exit 1
+          elif [ "$ec" -eq 2 ]; then
+            echo "::notice::Terraform plan detected infrastructure changes"
           fi
           exit 0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/terraform-mn-vectorization.yml around lines 58 - 69, The
Terraform Plan step currently treats exit codes 0 and 2 the same and emits no
explicit signal for "changes detected"; update the logic around the terraform
plan invocation (the terraform plan -var-file=... -detailed-exitcode call and
the ec variable) in the "Terraform Plan" step so that after capturing ec you
check for ec == 2 and emit a clear GitHub Actions notice (e.g., echo
"::notice::Terraform plan detected changes") before exiting successfully, while
keeping the existing check that treats ec == 1 as a failure and exits 1.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/terraform-mn-vectorization.yml:
- Around line 48-53: The workflow hardcodes the backend key and tfvars to "dev"
in the Terraform Init and later tfvars steps (the Terraform Init step and
backend-config="key=mn-vectorization/dev/terraform.tfstate"), which will
mis-target runs on main; change these to use a variable/environment value
instead (e.g., ENVIRONMENT or derive from GitHub context like GITHUB_REF or a
workflow input) and update the backend-config key and any tfvars references to
use that variable (replace the literal "dev" with the chosen variable, and
ensure the ENV var is set earlier in the workflow or as an input to the job so
Terraform Init and the tfvars step both reference the same environment).
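A minimal sketch of that change, assuming a single `TF_ENV` variable (the name introduced in a later commit on this PR) shared by both the init and plan steps:

```yaml
env:
  # One place to change the target environment; could also be derived
  # from github.ref or a workflow_dispatch input
  TF_ENV: dev

steps:
  - name: Terraform Init
    run: |
      terraform init -input=false \
        -backend-config="key=mn-vectorization/${TF_ENV}/terraform.tfstate"

  - name: Terraform Plan
    run: terraform plan -var-file=environments/${TF_ENV}.tfvars -no-color -input=false
```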

---

Duplicate comments:
In @.github/workflows/terraform-mn-vectorization.yml:
- Around line 13-16: Current workflow uses long-lived AWS credentials; replace
them with OIDC role assumption by switching the GitHub Action to
aws-actions/configure-aws-credentials@v4 and removing the
aws-access-key-id/aws-secret-access-key usage. Update the workflow step that
configures AWS (look for configure-aws-credentials) to use role-to-assume and
aws-region inputs and ensure the job-level permissions include id-token: write
(keep contents: read). Also confirm the target AWS role has a trust policy for
GitHub OIDC; remove any secrets referencing long-lived keys from the workflow
and from repo secrets once rolled out.
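The GitHub OIDC trust policy mentioned above could be sketched as follows (the account ID, org, and repo values are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:example-org/example-repo:*"
        }
      }
    }
  ]
}
```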

---

Nitpick comments:
In @.github/workflows/terraform-mn-vectorization.yml:
- Around line 58-69: The Terraform Plan step currently treats exit codes 0 and 2
the same and emits no explicit signal for "changes detected"; update the logic
around the terraform plan invocation (the terraform plan -var-file=...
-detailed-exitcode call and the ec variable) in the "Terraform Plan" step so
that after capturing ec you check for ec == 2 and emit a clear GitHub Actions
notice (e.g., echo "::notice::Terraform plan detected changes") before exiting
successfully, while keeping the existing check that treats ec == 1 as a failure
and exits 1.

ℹ️ Review info

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3e899c8 and 6d4686c.

📒 Files selected for processing (1)
  • .github/workflows/terraform-mn-vectorization.yml

@github-actions github-actions bot left a comment

Reviewing files that changed from the base of the PR and between 2b18960 and 6d4686c.
Files selected (13)
  • .github/workflows/terraform-mn-vectorization.yml (1)
  • mn-vectorization/infra/cloudwatch.tf (1)
  • mn-vectorization/infra/dynamodb.tf (1)
  • mn-vectorization/infra/environments/dev.tfvars (1)
  • mn-vectorization/infra/environments/prod.tfvars (1)
  • mn-vectorization/infra/environments/staging.tfvars (1)
  • mn-vectorization/infra/iam.tf (1)
  • mn-vectorization/infra/locals.tf (1)
  • mn-vectorization/infra/provider.tf (1)
  • mn-vectorization/infra/s3.tf (1)
  • mn-vectorization/infra/secrets.tf (1)
  • mn-vectorization/infra/security_group.tf (1)
  • mn-vectorization/infra/variables.tf (1)
Files not summarized due to errors (13)
  • mn-vectorization/infra/dynamodb.tf (nothing obtained from openai)
  • mn-vectorization/infra/cloudwatch.tf (nothing obtained from openai)
  • mn-vectorization/infra/environments/dev.tfvars (nothing obtained from openai)
  • mn-vectorization/infra/environments/staging.tfvars (nothing obtained from openai)
  • mn-vectorization/infra/environments/prod.tfvars (nothing obtained from openai)
  • .github/workflows/terraform-mn-vectorization.yml (nothing obtained from openai)
  • mn-vectorization/infra/secrets.tf (nothing obtained from openai)
  • mn-vectorization/infra/provider.tf (nothing obtained from openai)
  • mn-vectorization/infra/locals.tf (nothing obtained from openai)
  • mn-vectorization/infra/iam.tf (nothing obtained from openai)
  • mn-vectorization/infra/s3.tf (nothing obtained from openai)
  • mn-vectorization/infra/security_group.tf (nothing obtained from openai)
  • mn-vectorization/infra/variables.tf (nothing obtained from openai)
Files not reviewed due to errors (13)
  • mn-vectorization/infra/environments/staging.tfvars (no response)
  • mn-vectorization/infra/environments/prod.tfvars (no response)
  • mn-vectorization/infra/dynamodb.tf (no response)
  • mn-vectorization/infra/cloudwatch.tf (no response)
  • mn-vectorization/infra/environments/dev.tfvars (no response)
  • .github/workflows/terraform-mn-vectorization.yml (no response)
  • mn-vectorization/infra/s3.tf (no response)
  • mn-vectorization/infra/locals.tf (no response)
  • mn-vectorization/infra/provider.tf (no response)
  • mn-vectorization/infra/secrets.tf (no response)
  • mn-vectorization/infra/iam.tf (no response)
  • mn-vectorization/infra/security_group.tf (no response)
  • mn-vectorization/infra/variables.tf (no response)
Review comments generated (0)
  • Review: 0
  • LGTM: 0

Tips

Chat with CodeRabbit Bot (@coderabbitai)

  • Reply on review comments left by this bot to ask follow-up questions. A review comment is a comment on a diff or a file.
  • Invite the bot into a review comment chain by tagging @coderabbitai in a reply.

Code suggestions

  • The bot may make code suggestions, but please review them carefully before committing since the line number ranges may be misaligned.
  • You can edit the comment made by the bot and manually tweak the suggestion if it is slightly off.

Pausing incremental reviews

  • Add @coderabbitai: ignore anywhere in the PR description to pause further reviews from the bot.

Document intentional choices that CodeRabbit flagged:
- Static AWS keys: OIDC deferred until Nomad provisions IAM OIDC provider
- Hardcoded dev env: only environment deployed for POC
- HTTPS egress 0.0.0.0/0: SaaS APIs have rotating IPs, CIDR allowlist impractical

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 1

♻️ Duplicate comments (1)
.github/workflows/terraform-mn-vectorization.yml (1)

13-16: ⚠️ Potential issue | 🟠 Major

Use OIDC role assumption instead of static AWS keys.

Lines 43 and 44 rely on long-lived secrets. Even for a POC, this is a significant credential-risk pattern in CI.

Suggested change
 permissions:
   contents: read
-  pull-requests: write
+  id-token: write
@@
       - name: Configure AWS Credentials
         uses: aws-actions/configure-aws-credentials@v4
         with:
-          aws-access-key-id: ${{ secrets.MN_VECTORIZATION_AWS_ACCESS_KEY_ID }}
-          aws-secret-access-key: ${{ secrets.MN_VECTORIZATION_AWS_SECRET_ACCESS_KEY }}
+          role-to-assume: ${{ secrets.MN_VECTORIZATION_AWS_ROLE_ARN }}
           aws-region: ${{ env.AWS_REGION }}
What is the current recommended way to authenticate aws-actions/configure-aws-credentials@v4 in GitHub Actions: static access keys or GitHub OIDC role-to-assume?

Also applies to: 40-45

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/terraform-mn-vectorization.yml around lines 13 - 16, The
workflow is using long-lived static AWS keys with
aws-actions/configure-aws-credentials@v4; replace that with GitHub OIDC role
assumption: update the workflow permissions to include id-token: write and
change the configure-aws-credentials step to use the action's role-to-assume
(and optional role-session-name/region) inputs instead of
aws-access-key-id/aws-secret-access-key secrets, and ensure the repo or OIDC
provider is configured with the target AWS IAM role ARN (refer to
aws-actions/configure-aws-credentials@v4 and the step name that invokes it) so
CI uses OIDC-based short-lived credentials rather than static keys.
🧹 Nitpick comments (1)
.github/workflows/terraform-mn-vectorization.yml (1)

21-24: Add apply guardrails (environment approval + concurrency).

Given auto-approve on main, add deployment protection and a concurrency group to reduce accidental or overlapping applies.

Suggested change
 jobs:
   terraform:
     name: Terraform
     runs-on: ubuntu-latest
+    concurrency:
+      group: terraform-mn-vectorization-${{ github.ref }}
+      cancel-in-progress: false
+    environment: mn-vectorization-dev

Also applies to: 74-76

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/terraform-mn-vectorization.yml around lines 21 - 24, The
workflow's terraform job (job id "terraform", name "Terraform") needs deployment
guardrails: add an environment with required reviewers to the job (use the
environment keyword under the terraform job so merges to main require approval)
and add a concurrency block to prevent overlapping runs (e.g., a concurrency
group keyed by the repo/ref like "terraform-${{ github.ref }}" with
cancel-in-progress: true) and apply the same changes to the other terraform job
block referenced around lines 74-76; update both job definitions to include
environment and concurrency entries.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/terraform-mn-vectorization.yml:
- Around line 13-15: The workflow permissions are overly broad: the permissions
block currently sets pull-requests: write which is unnecessary; update the
permissions section to the least privilege by removing or changing
pull-requests: write to a read/none-level scope (e.g., remove the pull-requests
entry or set pull-requests: none) and keep only the required permission(s) such
as contents: read so the workflow token no longer has write access to pull
requests.

---

Duplicate comments:
In @.github/workflows/terraform-mn-vectorization.yml:
- Around line 13-16: The workflow is using long-lived static AWS keys with
aws-actions/configure-aws-credentials@v4; replace that with GitHub OIDC role
assumption: update the workflow permissions to include id-token: write and
change the configure-aws-credentials step to use the action's role-to-assume
(and optional role-session-name/region) inputs instead of
aws-access-key-id/aws-secret-access-key secrets, and ensure the repo or OIDC
provider is configured with the target AWS IAM role ARN (refer to
aws-actions/configure-aws-credentials@v4 and the step name that invokes it) so
CI uses OIDC-based short-lived credentials rather than static keys.

---

Nitpick comments:
In @.github/workflows/terraform-mn-vectorization.yml:
- Around line 21-24: The workflow's terraform job (job id "terraform", name
"Terraform") needs deployment guardrails: add an environment with required
reviewers to the job (use the environment keyword under the terraform job so
merges to main require approval) and add a concurrency block to prevent
overlapping runs (e.g., a concurrency group keyed by the repo/ref like
"terraform-${{ github.ref }}" with cancel-in-progress: true) and apply the same
changes to the other terraform job block referenced around lines 74-76; update
both job definitions to include environment and concurrency entries.

ℹ️ Review info

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6d4686c and 79a74e7.

📒 Files selected for processing (2)
  • .github/workflows/terraform-mn-vectorization.yml
  • mn-vectorization/infra/security_group.tf
🚧 Files skipped from review as they are similar to previous changes (1)
  • mn-vectorization/infra/security_group.tf

- Extract hardcoded 'dev' to TF_ENV env variable (DRY, one place to change)
- Remove pull-requests: write — workflow doesn't write PR comments
- Keeps env as 'dev' (only environment deployed for POC)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@sonarqubecloud

Quality Gate failed

Failed conditions
4 Security Hotspots

See analysis details on SonarQube Cloud

@github-actions github-actions bot left a comment

Reviewing files that changed from the base of the PR and between 6d4686c and 8b8d8ba.
Files selected (2)
  • .github/workflows/terraform-mn-vectorization.yml (1)
  • mn-vectorization/infra/security_group.tf (1)
Files not summarized due to errors (2)
  • .github/workflows/terraform-mn-vectorization.yml (nothing obtained from openai)
  • mn-vectorization/infra/security_group.tf (nothing obtained from openai)
Files not reviewed due to errors (2)
  • .github/workflows/terraform-mn-vectorization.yml (no response)
  • mn-vectorization/infra/security_group.tf (no response)
Review comments generated (0)
  • Review: 0
  • LGTM: 0


@SashaMarchuk

Terraform code moved to dedicated repo: speedandfunction/mn-vectorization-iac (PR #1). Closing this PR — all fixes (CodeRabbit, pipeline, permissions) applied in the new repo.

@SashaMarchuk

Closing this PR — Terraform code has been moved to a dedicated repo: speedandfunction/mn-vectorization-iac (PR #1).

All fixes from this PR (CodeRabbit review, GH Actions pipeline, permissions, TF_ENV extraction) have been applied in the new repo.

This branch (feature/mn-vectorization-infra) will be deleted.

@SashkoMarchuk SashkoMarchuk deleted the feature/mn-vectorization-infra branch February 27, 2026 19:54