Add Terraform infrastructure for MN Vectorization [coderabbit-ai-review]#119
SashkoMarchuk wants to merge 5 commits into main from
Conversation
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
🤖 CodeRabbit AI Review Available
To request a code review from CodeRabbit AI, add …. CodeRabbit will analyze your code and provide feedback.
Note: Reviews are only performed when …
Warning: Rate limit exceeded
⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered using the … command. We recommend that you space out your commits to avoid hitting the rate limit.
🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.
📒 Files selected for processing (1)
Warning: CodeRabbit GitHub Action detected
The repository is using both CodeRabbit Pro and CodeRabbit Open Source (via GitHub Actions), which is not recommended as it may lead to duplicate comments and extra noise. Please remove the CodeRabbit GitHub Action.
📝 Walkthrough
Adds a new Terraform-based AWS infrastructure for mn-vectorization: provider/backend, S3, DynamoDB, IAM, CloudWatch, Secrets Manager, security group, data sources, locals/outputs, environment tfvars, CI workflow, and Terraform .gitignore updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Dev as "Developer / GitHub PR"
    participant GH as "GitHub Actions"
    participant TF as "Terraform (CI)"
    participant S3 as "Terraform S3 Backend"
    participant AWS as "AWS APIs"
    Dev->>GH: open PR / push main
    GH->>TF: checkout & setup terraform
    TF->>S3: init backend (S3)
    TF->>AWS: validate & plan (reads data sources)
    alt on main push
        TF->>AWS: apply -> create S3, DynamoDB, IAM, Secrets, SG, CloudWatch
        AWS-->>TF: return ARNs/IDs
    end
    TF-->>GH: plan/apply output
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
🧹 Nitpick comments (4)
mn-vectorization/infra/locals.tf (1)
19-31: Use `var.project_name` for secret path prefixes to keep naming consistent. Lines 19, 23, 27, and 31 hardcode `mn-vectorization` instead of reusing the project variable.
Proposed fix:
```diff
- name = "mn-vectorization/${var.environment}/anthropic-api-key"
+ name = "${var.project_name}/${var.environment}/anthropic-api-key"
  ...
- name = "mn-vectorization/${var.environment}/cohere-api-key"
+ name = "${var.project_name}/${var.environment}/cohere-api-key"
  ...
- name = "mn-vectorization/${var.environment}/qdrant-api-key"
+ name = "${var.project_name}/${var.environment}/qdrant-api-key"
  ...
- name = "mn-vectorization/${var.environment}/qdrant-url"
+ name = "${var.project_name}/${var.environment}/qdrant-url"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mn-vectorization/infra/locals.tf` around lines 19-31: the secret names in locals.tf hardcode "mn-vectorization" instead of using the project variable; update each secret name (the entries for anthropic_api_key, cohere_api_key, qdrant_api_key, and qdrant_url) to use var.project_name as the prefix (e.g. replace "mn-vectorization/${var.environment}/..." with "${var.project_name}/${var.environment}/...") so the secret paths consistently derive from var.project_name and var.environment.
mn-vectorization/infra/provider.tf (1)
2-2: Constrain the Terraform core major version to avoid surprise breakage. Line 2 (`>= 1.5`) permits Terraform 2.x and higher releases. Add an upper bound for safer reproducibility and consistency with the AWS provider constraint (`~> 5.0`).
Proposed fix:
```diff
- required_version = ">= 1.5"
+ required_version = ">= 1.5, < 2.0"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mn-vectorization/infra/provider.tf` at line 2: the Terraform required_version constraint currently allows any Terraform 2.x release (`required_version = ">= 1.5"`); restrict it by adding an upper bound to prevent unexpected major-version breaks (e.g., change the required_version to ">= 1.5, < 2.0") so it aligns with the AWS provider stability approach and ensures reproducible runs; update the required_version setting in provider.tf accordingly.
mn-vectorization/infra/s3.tf (1)
39-55: Add a rule to abort incomplete multipart uploads. The lifecycle configuration handles embeddings expiration but doesn't clean up incomplete multipart uploads, which can accumulate silently and incur storage costs. Consider adding an `abort_incomplete_multipart_upload` rule.
♻️ Proposed fix to add multipart cleanup:
```diff
 resource "aws_s3_bucket_lifecycle_configuration" "artifacts" {
-  count  = var.embeddings_expiry_days > 0 ? 1 : 0
   bucket = aws_s3_bucket.artifacts.id

   rule {
+    id     = "abort-incomplete-multipart"
+    status = "Enabled"
+
+    filter {
+      prefix = ""
+    }
+
+    abort_incomplete_multipart_upload {
+      days_after_initiation = 7
+    }
+  }
+
+  dynamic "rule" {
+    for_each = var.embeddings_expiry_days > 0 ? [1] : []
+    content {
       id     = "expire-embeddings"
       status = "Enabled"

       filter {
         prefix = "embeddings/"
       }

       expiration {
         days = var.embeddings_expiry_days
       }
+    }
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mn-vectorization/infra/s3.tf` around lines 39-55: the lifecycle config (resource aws_s3_bucket_lifecycle_configuration.artifacts, rule id "expire-embeddings") needs an abort_incomplete_multipart_upload block to clean up partial uploads; add abort_incomplete_multipart_upload { days_after_initiation = var.multipart_abort_days } inside the existing rule, and introduce a new variable var.multipart_abort_days (with a sensible default, e.g., 7) in your variables.tf (and reference it in any docs/terraform.tfvars) so incomplete multipart uploads are automatically aborted after the configured number of days.
mn-vectorization/infra/dynamodb.tf (1)
6-26: Consider enabling Point-in-Time Recovery for production. The DynamoDB tables lack Point-in-Time Recovery (PITR), which provides continuous backups. For the `tasks` and `users` tables storing async query state and ACL data, enabling PITR in production would protect against accidental data loss or corruption.
♻️ Proposed fix to add PITR:
```diff
 resource "aws_dynamodb_table" "main" {
   for_each = local.dynamodb_tables

   name         = "${local.name_prefix}-${each.key}"
   billing_mode = "PAY_PER_REQUEST"
   hash_key     = each.value.hash_key

   attribute {
     name = each.value.hash_key
     type = "S"
   }

   dynamic "ttl" {
     for_each = each.value.ttl_attr != null ? [each.value.ttl_attr] : []
     content {
       attribute_name = ttl.value
       enabled        = true
     }
   }

+  point_in_time_recovery {
+    enabled = var.environment == "prod"
+  }
+
   tags = {
     Name = "${local.name_prefix}-${each.key}"
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mn-vectorization/infra/dynamodb.tf` around lines 6 - 26, The DynamoDB tables in aws_dynamodb_table.main lack Point-in-Time Recovery; add a point_in_time_recovery block and enable it for production (or at least for the sensitive tables like "tasks" and "users"). Update the resource (aws_dynamodb_table.main) to include either a static block: point_in_time_recovery { enabled = true } or a conditional/dynamic block driven by each.value.pitr or a local flag (e.g., local.env == "prod" or each.value.pitr) so that the setting is applied only in production or for tables that opt-in (check each.key for "tasks" and "users" or add each.value.pitr to local.dynamodb_tables). Ensure the attribute uses enabled = true when the condition is met.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@mn-vectorization/infra/iam.tf`:
- Around line 157-174: The aws_iam_role_policy resource
(aws_iam_role_policy.kms_access) can be created with an empty Resource ARN if
var.is_kms_enabled is true but var.kms_key_arn is empty; add a precondition
block inside aws_iam_role_policy.kms_access that asserts length(var.kms_key_arn)
> 0 and supplies a clear error_message like "kms_key_arn must be provided when
is_kms_enabled is true." Alternatively, clearly document the required
relationship in the kms_key_arn variable description in variables.tf so users
know to supply an ARN whenever is_kms_enabled is true.
In `@mn-vectorization/infra/locals.tf`:
- Around line 18-33: The locals file currently materializes secrets
(anthropic_api_key, cohere_api_key, qdrant_api_key, qdrant_url) into Terraform
state by assigning raw values to locals that are fed into
aws_secretsmanager_secret_version.secret_string; remove these secret local
declarations and instead consume the sensitive variables (var.anthropic_api_key,
var.cohere_api_key, var.qdrant_api_key, var.qdrant_url) directly (mark those
variable blocks as sensitive) and populate them from environment via TF_VAR_* at
apply time, or alternatively manage initial secret bootstrap out-of-band; keep
the existing lifecycle { ignore_changes = [secret_string] } on
aws_secretsmanager_secret_version to avoid future state overwrites.
---
Nitpick comments:
In `@mn-vectorization/infra/dynamodb.tf`:
- Around line 6-26: The DynamoDB tables in aws_dynamodb_table.main lack
Point-in-Time Recovery; add a point_in_time_recovery block and enable it for
production (or at least for the sensitive tables like "tasks" and "users").
Update the resource (aws_dynamodb_table.main) to include either a static block:
point_in_time_recovery { enabled = true } or a conditional/dynamic block driven
by each.value.pitr or a local flag (e.g., local.env == "prod" or
each.value.pitr) so that the setting is applied only in production or for tables
that opt-in (check each.key for "tasks" and "users" or add each.value.pitr to
local.dynamodb_tables). Ensure the attribute uses enabled = true when the
condition is met.
In `@mn-vectorization/infra/locals.tf`:
- Around line 19-31: The secret names in locals.tf hardcode "mn-vectorization"
instead of using the project variable; update each secret name (the entries for
anthropic_api_key, cohere_api_key, qdrant_api_key, and qdrant_url) to use
var.project_name as the prefix (e.g. replace
"mn-vectorization/${var.environment}/..." with
"${var.project_name}/${var.environment}/...") so the secret paths consistently
derive from var.project_name and var.environment.
In `@mn-vectorization/infra/provider.tf`:
- Line 2: The Terraform required_version constraint currently allows any
Terraform 2.x release ("required_version = \">= 1.5\""); restrict it by adding
an upper bound to prevent unexpected major-version breaks (e.g., change the
required_version to ">= 1.5, < 2.0") so it aligns with the AWS provider
stability approach and ensures reproducible runs; update the required_version
setting in the provider.tf (look for the required_version assignment)
accordingly.
In `@mn-vectorization/infra/s3.tf`:
- Around line 39-55: The lifecycle config (resource
aws_s3_bucket_lifecycle_configuration.artifacts, rule id "expire-embeddings")
needs an abort_incomplete_multipart_upload block to clean up partial uploads;
add abort_incomplete_multipart_upload { days_after_initiation =
var.multipart_abort_days } inside the existing rule, and introduce a new
variable var.multipart_abort_days (with a sensible default, e.g., 7) in your
variables.tf (and reference it in any docs/terraform.tfvars) so incomplete
multipart uploads are automatically aborted after the configured number of days.
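The KMS precondition fix described in the iam.tf prompt above can be sketched in HCL. This is a hypothetical sketch: only the count/precondition wiring comes from the review, while the role reference, policy actions, and name prefix are assumptions standing in for the real iam.tf contents.

```hcl
# Hypothetical sketch; the policy body and role reference are placeholders,
# only the precondition guard reflects the review finding.
resource "aws_iam_role_policy" "kms_access" {
  count = var.is_kms_enabled ? 1 : 0

  name = "${local.name_prefix}-kms-access" # assumed naming convention
  role = aws_iam_role.task.id              # assumed role resource

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["kms:Decrypt", "kms:GenerateDataKey"] # assumed actions
      Resource = var.kms_key_arn
    }]
  })

  lifecycle {
    # Fail at plan time instead of creating a policy with an empty Resource ARN.
    precondition {
      condition     = !var.is_kms_enabled || length(var.kms_key_arn) > 0
      error_message = "kms_key_arn must be provided when is_kms_enabled is true."
    }
  }
}
```

Resource `precondition` blocks require Terraform 1.2 or later, which the module's `>= 1.5` constraint already satisfies.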
ℹ️ Review info
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (16)
- mn-vectorization/infra/.gitignore
- mn-vectorization/infra/backend.tf
- mn-vectorization/infra/cloudwatch.tf
- mn-vectorization/infra/data.tf
- mn-vectorization/infra/dynamodb.tf
- mn-vectorization/infra/environments/dev.tfvars
- mn-vectorization/infra/environments/prod.tfvars
- mn-vectorization/infra/environments/staging.tfvars
- mn-vectorization/infra/iam.tf
- mn-vectorization/infra/locals.tf
- mn-vectorization/infra/outputs.tf
- mn-vectorization/infra/provider.tf
- mn-vectorization/infra/s3.tf
- mn-vectorization/infra/secrets.tf
- mn-vectorization/infra/security_group.tf
- mn-vectorization/infra/variables.tf
```hcl
anthropic_api_key = {
  name  = "mn-vectorization/${var.environment}/anthropic-api-key"
  value = var.anthropic_api_key
}
cohere_api_key = {
  name  = "mn-vectorization/${var.environment}/cohere-api-key"
  value = var.cohere_api_key
}
qdrant_api_key = {
  name  = "mn-vectorization/${var.environment}/qdrant-api-key"
  value = var.qdrant_api_key
}
qdrant_url = {
  name  = "mn-vectorization/${var.environment}/qdrant-url"
  value = var.qdrant_url
}
```
🧩 Analysis chain
🏁 Script executed:
```shell
cd mn-vectorization/infra && rg -n -C5 'aws_secretsmanager_secret_version|secret_string|local\.secrets|each\.value\.value' secrets.tf
```

Repository: speedandfunction/automatization
Length of output: 786
Secret values flow to state, but mitigation is in place.
Lines 20, 24, 28, and 32 pass raw secret values through Terraform locals to aws_secretsmanager_secret_version.secret_string, which means secrets are written to Terraform state on initial apply. However, the lifecycle { ignore_changes = [secret_string] } block prevents Terraform from re-writing these values on subsequent applies, limiting state exposure.
This approach works for bootstrap scenarios where values are set once and then managed out-of-band (as indicated by the comment in secrets.tf). For additional hardening, consider using Terraform variables exclusively via environment variables (TF_VAR_*) and avoiding local value declarations for secrets altogether.
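The variables-only hardening suggested above can be sketched for one of the secrets. This is a minimal, hypothetical sketch: the variable description and the secret resource reference are assumptions, and only the `sensitive` flag, direct `var.*` usage, and `ignore_changes` guard come from the review.

```hcl
# Hypothetical sketch: feed the secret straight from a sensitive variable,
# with no intermediate local value materializing it elsewhere.
variable "anthropic_api_key" {
  description = "Anthropic API key, supplied at apply time via TF_VAR_anthropic_api_key"
  type        = string
  sensitive   = true
}

resource "aws_secretsmanager_secret_version" "anthropic_api_key" {
  secret_id     = aws_secretsmanager_secret.anthropic_api_key.id # assumed secret resource
  secret_string = var.anthropic_api_key

  lifecycle {
    # Keep the existing guard so out-of-band rotations are not overwritten.
    ignore_changes = [secret_string]
  }
}
```

At apply time the value would come from the environment (for example `export TF_VAR_anthropic_api_key=...` before `terraform apply`); as the review notes, the value still lands in state on the initial write, so out-of-band bootstrap remains the stronger option.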
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mn-vectorization/infra/locals.tf` around lines 18 - 33, The locals file
currently materializes secrets (anthropic_api_key, cohere_api_key,
qdrant_api_key, qdrant_url) into Terraform state by assigning raw values to
locals that are fed into aws_secretsmanager_secret_version.secret_string; remove
these secret local declarations and instead consume the sensitive variables
(var.anthropic_api_key, var.cohere_api_key, var.qdrant_api_key, var.qdrant_url)
directly (mark those variable blocks as sensitive) and populate them from
environment via TF_VAR_* at apply time, or alternatively manage initial secret
bootstrap out-of-band; keep the existing lifecycle { ignore_changes =
[secret_string] } on aws_secretsmanager_secret_version to avoid future state
overwrites.
🔍 Vulnerabilities of …

| digest | sha256:17e54ff5e9a181d1bdbf7334ce9637f9c3934d54a65427ae36a5743f46487f15 |
| --- | --- |
| platform | linux/amd64 |
| size | 218 MB |
| packages | 358 |

📦 Base Image alpine:3

| digest | sha256:1c4eef651f65e2f7daee7ee785882ac164b02b78fb74503052a26dc061c90474 |
| --- | --- |
Files selected (16)
- mn-vectorization/infra/.gitignore (1)
- mn-vectorization/infra/backend.tf (1)
- mn-vectorization/infra/cloudwatch.tf (1)
- mn-vectorization/infra/data.tf (1)
- mn-vectorization/infra/dynamodb.tf (1)
- mn-vectorization/infra/environments/dev.tfvars (1)
- mn-vectorization/infra/environments/prod.tfvars (1)
- mn-vectorization/infra/environments/staging.tfvars (1)
- mn-vectorization/infra/iam.tf (1)
- mn-vectorization/infra/locals.tf (1)
- mn-vectorization/infra/outputs.tf (1)
- mn-vectorization/infra/provider.tf (1)
- mn-vectorization/infra/s3.tf (1)
- mn-vectorization/infra/secrets.tf (1)
- mn-vectorization/infra/security_group.tf (1)
- mn-vectorization/infra/variables.tf (1)
Files not summarized due to errors (16)
- mn-vectorization/infra/.gitignore (nothing obtained from openai)
- mn-vectorization/infra/cloudwatch.tf (nothing obtained from openai)
- mn-vectorization/infra/backend.tf (nothing obtained from openai)
- mn-vectorization/infra/dynamodb.tf (nothing obtained from openai)
- mn-vectorization/infra/environments/dev.tfvars (nothing obtained from openai)
- mn-vectorization/infra/data.tf (nothing obtained from openai)
- mn-vectorization/infra/environments/prod.tfvars (nothing obtained from openai)
- mn-vectorization/infra/environments/staging.tfvars (nothing obtained from openai)
- mn-vectorization/infra/locals.tf (nothing obtained from openai)
- mn-vectorization/infra/iam.tf (nothing obtained from openai)
- mn-vectorization/infra/outputs.tf (nothing obtained from openai)
- mn-vectorization/infra/provider.tf (nothing obtained from openai)
- mn-vectorization/infra/s3.tf (nothing obtained from openai)
- mn-vectorization/infra/secrets.tf (nothing obtained from openai)
- mn-vectorization/infra/security_group.tf (nothing obtained from openai)
- mn-vectorization/infra/variables.tf (nothing obtained from openai)
Files not reviewed due to errors (16)
- mn-vectorization/infra/backend.tf (no response)
- mn-vectorization/infra/data.tf (no response)
- mn-vectorization/infra/.gitignore (no response)
- mn-vectorization/infra/dynamodb.tf (no response)
- mn-vectorization/infra/cloudwatch.tf (no response)
- mn-vectorization/infra/environments/dev.tfvars (no response)
- mn-vectorization/infra/environments/prod.tfvars (no response)
- mn-vectorization/infra/provider.tf (no response)
- mn-vectorization/infra/environments/staging.tfvars (no response)
- mn-vectorization/infra/outputs.tf (no response)
- mn-vectorization/infra/iam.tf (no response)
- mn-vectorization/infra/locals.tf (no response)
- mn-vectorization/infra/s3.tf (no response)
- mn-vectorization/infra/secrets.tf (no response)
- mn-vectorization/infra/security_group.tf (no response)
- mn-vectorization/infra/variables.tf (no response)
Review comments generated (0)
- Review: 0
- LGTM: 0
Tips
Chat with CodeRabbit Bot (@coderabbitai)
- Reply on review comments left by this bot to ask follow-up questions. A review comment is a comment on a diff or a file.
- Invite the bot into a review comment chain by tagging `@coderabbitai` in a reply.
Code suggestions
- The bot may make code suggestions, but please review them carefully before committing since the line number ranges may be misaligned.
- You can edit the comment made by the bot and manually tweak the suggestion if it is slightly off.
Pausing incremental reviews
- Add `@coderabbitai: ignore` anywhere in the PR description to pause further reviews from the bot.
Code review
No issues found. Checked for bugs and CLAUDE.md compliance. Reviewed: S3 naming, IAM policies, DynamoDB schema, Secrets Manager, CloudWatch alarms, security group rules, variable validation, cross-file references, and environment tfvars consistency.
🤖 Generated with Claude Code. If this code review was useful, please react with 👍. Otherwise, react with 👎.
…dd GH Actions pipeline Address all 6 CodeRabbit issues: KMS ARN validation, secrets bootstrap docs, parameterized secret paths, TF version upper bound, multipart upload cleanup, and DynamoDB PITR. Switch resource naming to underscore convention per Nomad standards (S3 keeps hyphens). Add Terraform CI/CD workflow for fmt/validate/plan/apply. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/terraform-mn-vectorization.yml:
- Around line 58-62: In the "Terraform Plan" step, remove the unconditional
continue-on-error: true and run terraform plan with the -detailed-exitcode flag
(terraform plan -var-file=environments/dev.tfvars -no-color -input=false
-detailed-exitcode), then explicitly inspect the plan exit code: if exit code ==
1 fail the job (error/exit non-zero) to surface real errors, if exit code == 0
treat as no changes and succeed, and if exit code == 2 treat as detected
changes/drift (succeed but set a clear step output or annotation indicating
changes were found); implement this explicit exit-code handling in the step that
invokes the terraform plan command.
- Around line 13-16: Update the workflow to use GitHub OIDC by adding the
id-token: write permission under the permissions block and replacing static AWS
key usage with role assumption: remove references to
secrets.MN_VECTORIZATION_AWS_ACCESS_KEY_ID and
secrets.MN_VECTORIZATION_AWS_SECRET_ACCESS_KEY and update the AWS auth step (the
step that currently configures AWS credentials) to use
actions/configure-aws-credentials with role-to-assume: ${{
secrets.MN_VECTORIZATION_AWS_ROLE_ARN }} (and preserve aws-region) so the job
uses OIDC role assumption instead of long-lived secrets.
In `@mn-vectorization/infra/environments/staging.tfvars`:
- Around line 10-12: The staging tfvars enables alarms with is_alarm_enabled =
true while alarm_sns_topic_arn is empty; change the configuration so alarms are
only enabled when a notification target exists by either supplying a valid SNS
topic ARN to alarm_sns_topic_arn or setting is_alarm_enabled = false until an
SNS topic is provisioned (update the values for alarm_sns_topic_arn and
is_alarm_enabled accordingly, leaving log_retention_days unchanged).
In `@mn-vectorization/infra/security_group.tf`:
- Around line 29-37: The egress rule aws_security_group_rule.mcp_https_egress
currently allows 0.0.0.0/0; change it to use a configurable allowlist by
replacing cidr_blocks = ["0.0.0.0/0"] with cidr_blocks =
var.mcp_https_egress_cidrs (or the equivalent variable name) and add a new
variable definition var.mcp_https_egress_cidrs in variables.tf typed as
list(string) with a secure default (e.g., VPC CIDR and any required external
service CIDRs) and document it; consider using aws_vpc_endpoint resources for
AWS services where applicable and ensure any module/stack consumers pass the
narrow CIDR set.
ℹ️ Review info
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- .github/workflows/terraform-mn-vectorization.yml
- mn-vectorization/infra/cloudwatch.tf
- mn-vectorization/infra/dynamodb.tf
- mn-vectorization/infra/environments/dev.tfvars
- mn-vectorization/infra/environments/prod.tfvars
- mn-vectorization/infra/environments/staging.tfvars
- mn-vectorization/infra/iam.tf
- mn-vectorization/infra/locals.tf
- mn-vectorization/infra/provider.tf
- mn-vectorization/infra/s3.tf
- mn-vectorization/infra/secrets.tf
- mn-vectorization/infra/security_group.tf
- mn-vectorization/infra/variables.tf
🚧 Files skipped from review as they are similar to previous changes (5)
- mn-vectorization/infra/environments/dev.tfvars
- mn-vectorization/infra/provider.tf
- mn-vectorization/infra/variables.tf
- mn-vectorization/infra/environments/prod.tfvars
- mn-vectorization/infra/locals.tf
```yaml
permissions:
  contents: read
  pull-requests: write
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, let's locate and read the workflow file
find . -name "terraform-mn-vectorization.yml" -type f
```

Repository: speedandfunction/automatization
Length of output: 125
🏁 Script executed:
```shell
# Let's also search for this file with a broader pattern
git ls-files | grep -i "terraform.*vectorization"
```

Repository: speedandfunction/automatization
Length of output: 123
🏁 Script executed:
```shell
cat -n .github/workflows/terraform-mn-vectorization.yml
```

Repository: speedandfunction/automatization
Length of output: 2303
🌐 Web query:
aws-actions/configure-aws-credentials v4 OIDC role-to-assume documentation
💡 Result:
To use aws-actions/configure-aws-credentials@v4 with GitHub OIDC and role-to-assume:
1) GitHub Actions workflow (OIDC)
```yaml
permissions:
  id-token: write
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials (OIDC)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ROLE_NAME>
          aws-region: us-east-1
      - run: aws sts get-caller-identity
```

[1][2]
2) IAM role trust policy (key OIDC bits)
Your role’s trust policy must allow sts:AssumeRoleWithWebIdentity from GitHub’s OIDC provider, typically scoping by aud and sub, e.g.:
- `token.actions.githubusercontent.com:aud` = `sts.amazonaws.com`
- `token.actions.githubusercontent.com:sub` = `repo:<ORG>/<REPO>:ref:refs/heads/<BRANCH>`
[1][2]
3) Primary docs
[1] https://github.com/aws-actions/configure-aws-credentials (README: “Quick Start (OIDC, recommended)”, options incl. role-to-assume/audience)
[2] https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services (end-to-end GitHub OIDC + role-to-assume example)
Switch to GitHub OIDC authentication to eliminate static AWS credentials.
The workflow currently uses long-lived AWS access keys stored in secrets (lines 41–42), which increases credential exposure risk. Replace with OIDC role assumption as recommended by AWS and GitHub.
Required changes:
- Add `id-token: write` permission (line 13)
- Replace static key references with `role-to-assume: ${{ secrets.MN_VECTORIZATION_AWS_ROLE_ARN }}`
🔧 Suggested OIDC-based auth
```diff
  permissions:
    contents: read
    pull-requests: write
+   id-token: write
@@
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
-         aws-access-key-id: ${{ secrets.MN_VECTORIZATION_AWS_ACCESS_KEY_ID }}
-         aws-secret-access-key: ${{ secrets.MN_VECTORIZATION_AWS_SECRET_ACCESS_KEY }}
+         role-to-assume: ${{ secrets.MN_VECTORIZATION_AWS_ROLE_ARN }}
          aws-region: ${{ env.AWS_REGION }}
```

Also applies to: lines 38-43
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/terraform-mn-vectorization.yml around lines 13 - 16,
Update the workflow to use GitHub OIDC by adding the id-token: write permission
under the permissions block and replacing static AWS key usage with role
assumption: remove references to secrets.MN_VECTORIZATION_AWS_ACCESS_KEY_ID and
secrets.MN_VECTORIZATION_AWS_SECRET_ACCESS_KEY and update the AWS auth step (the
step that currently configures AWS credentials) to use
actions/configure-aws-credentials with role-to-assume: ${{
secrets.MN_VECTORIZATION_AWS_ROLE_ARN }} (and preserve aws-region) so the job
uses OIDC role assumption instead of long-lived secrets.
```yaml
- name: Terraform Plan
  if: github.event_name == 'pull_request'
  run: terraform plan -var-file=environments/dev.tfvars -no-color -input=false
  continue-on-error: true
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n .github/workflows/terraform-mn-vectorization.yml | head -80
```

Repository: speedandfunction/automatization
Length of output: 2303
🌐 Web query:
Terraform plan exit codes -detailed-exitcode behavior meaning
💡 Result:
terraform plan -detailed-exitcode makes terraform plan return three possible exit codes so automation can tell the difference between “no changes” and “changes needed”:
- 0: Plan succeeded and the diff is empty (no changes to apply).
- 1: Plan failed (error running the plan).
- 2: Plan succeeded and the diff is non-empty (there are changes to apply). [1]
Without -detailed-exitcode, terraform plan generally returns 0 for success (even if changes are shown) and 1 for errors, so you can’t reliably detect “changes present” via the exit code alone. [1]
Do not swallow Terraform plan failures in PRs.
continue-on-error: true silently masks plan errors (exit code 1), such as invalid configurations or provider failures. The -detailed-exitcode flag distinguishes between actual failures, no changes (exit 0), and detected drift/changes (exit 2). Handle these cases explicitly:
Suggested fix
```diff
  - name: Terraform Plan
    if: github.event_name == 'pull_request'
-   run: terraform plan -var-file=environments/dev.tfvars -no-color -input=false
-   continue-on-error: true
+   run: |
+     set +e
+     terraform plan -var-file=environments/dev.tfvars -no-color -input=false -detailed-exitcode
+     ec=$?
+     if [ "$ec" -eq 1 ]; then
+       exit 1
+     fi
+     exit 0
```
+ exit 0📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
- name: Terraform Plan
  if: github.event_name == 'pull_request'
  run: |
    set +e
    terraform plan -var-file=environments/dev.tfvars -no-color -input=false -detailed-exitcode
    ec=$?
    if [ "$ec" -eq 1 ]; then
      exit 1
    fi
    exit 0
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/terraform-mn-vectorization.yml around lines 58 - 62, In
the "Terraform Plan" step, remove the unconditional continue-on-error: true and
run terraform plan with the -detailed-exitcode flag (terraform plan
-var-file=environments/dev.tfvars -no-color -input=false -detailed-exitcode),
then explicitly inspect the plan exit code: if exit code == 1 fail the job
(error/exit non-zero) to surface real errors, if exit code == 0 treat as no
changes and succeed, and if exit code == 2 treat as detected changes/drift
(succeed but set a clear step output or annotation indicating changes were
found); implement this explicit exit-code handling in the step that invokes the
terraform plan command.
```hcl
log_retention_days  = 30
alarm_sns_topic_arn = ""
is_alarm_enabled    = true
```
Enable alarms only when a notification target exists.
This staging config enables alarms but leaves alarm_sns_topic_arn empty, so alarms will not notify anyone. Consider setting is_alarm_enabled = false until an SNS topic is wired.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mn-vectorization/infra/environments/staging.tfvars` around lines 10 - 12, The
staging tfvars enables alarms with is_alarm_enabled = true while
alarm_sns_topic_arn is empty; change the configuration so alarms are only
enabled when a notification target exists by either supplying a valid SNS topic
ARN to alarm_sns_topic_arn or setting is_alarm_enabled = false until an SNS
topic is provisioned (update the values for alarm_sns_topic_arn and
is_alarm_enabled accordingly, leaving log_retention_days unchanged).
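Beyond hand-editing the tfvars, this invariant could be enforced in Terraform itself. A sketch, assuming Terraform >= 1.9 (needed for cross-variable references in `validation` blocks); the variable names match the tfvars above, but the validation block itself does not exist in the PR:

```hcl
variable "is_alarm_enabled" {
  type    = bool
  default = false

  # Fail `terraform plan` early if alarms are enabled without a target.
  validation {
    condition     = !var.is_alarm_enabled || var.alarm_sns_topic_arn != ""
    error_message = "is_alarm_enabled = true requires a non-empty alarm_sns_topic_arn."
  }
}
```

With this in place, a staging plan with an empty SNS topic ARN fails at validation time instead of silently creating alarms that notify no one.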
```hcl
resource "aws_security_group_rule" "mcp_https_egress" {
  type              = "egress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.mcp.id
  description       = "HTTPS egress for AWS services and external APIs"
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat -n mn-vectorization/infra/security_group.tf
```

Repository: speedandfunction/automatization
Length of output: 2458

🏁 Script executed:

```shell
fd -t f "variables.tf" mn-vectorization/infra/
```

Repository: speedandfunction/automatization
Length of output: 110

🏁 Script executed:

```shell
fd -t f "\.tf$" mn-vectorization/infra/ | head -20
```

Repository: speedandfunction/automatization
Length of output: 483

🏁 Script executed:

```shell
cat -n mn-vectorization/infra/variables.tf
```

Repository: speedandfunction/automatization
Length of output: 4894

🏁 Script executed:

```shell
rg -i "vpc.*endpoint|vpce-" mn-vectorization/infra/
```

Repository: speedandfunction/automatization
Length of output: 57

🏁 Script executed:

```shell
rg -C3 "0\.0\.0\.0/0" mn-vectorization/infra/
```

Repository: speedandfunction/automatization
Length of output: 599
Restrict HTTPS egress scope (currently unrestricted).
Allowing outbound 443 to 0.0.0.0/0 is broader than necessary for this worker role and weakens network controls, especially since DNS and ingress rules are properly scoped to the VPC. This permits unrestricted outbound access to any external service on HTTPS, increasing the blast radius if credentials or the instance are compromised. Prefer a controlled CIDR allowlist (e.g., via variable) or VPC endpoints where feasible for AWS services.
🔧 Suggested approach

```diff
 resource "aws_security_group_rule" "mcp_https_egress" {
   type              = "egress"
   from_port         = 443
   to_port           = 443
   protocol          = "tcp"
-  cidr_blocks       = ["0.0.0.0/0"]
+  cidr_blocks       = var.mcp_https_egress_cidrs
   security_group_id = aws_security_group.mcp.id
   description       = "HTTPS egress for AWS services and external APIs"
 }
```

Add `var.mcp_https_egress_cidrs` to variables.tf as a list of strings with explicit CIDR ranges (e.g., VPC CIDR + specific external endpoints).
🧰 Tools
🪛 Trivy (0.69.1)
[error] 34-34: A security group rule should not allow unrestricted egress to any IP address.
Security group rule allows unrestricted egress to any IP address.
Rule: AWS-0104
Resource: aws_security_group_rule.mcp_https_egress
(IaC/AWS)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mn-vectorization/infra/security_group.tf` around lines 29 - 37, The egress
rule aws_security_group_rule.mcp_https_egress currently allows 0.0.0.0/0; change
it to use a configurable allowlist by replacing cidr_blocks = ["0.0.0.0/0"] with
cidr_blocks = var.mcp_https_egress_cidrs (or the equivalent variable name) and
add a new variable definition var.mcp_https_egress_cidrs in variables.tf typed
as list(string) with a secure default (e.g., VPC CIDR and any required external
service CIDRs) and document it; consider using aws_vpc_endpoint resources for
AWS services where applicable and ensure any module/stack consumers pass the
narrow CIDR set.
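One possible shape for that allowlist variable, as a sketch (the default value and the validation block are assumptions, not code that exists in the PR):

```hcl
variable "mcp_https_egress_cidrs" {
  description = "CIDR blocks allowed for HTTPS egress from the MCP security group."
  type        = list(string)
  default     = [] # empty by default: each environment must opt in to explicit ranges

  # Reject entries that are not valid IPv4 CIDR notation.
  validation {
    condition     = alltrue([for c in var.mcp_https_egress_cidrs : can(cidrnetmask(c))])
    error_message = "Each entry must be a valid IPv4 CIDR block (e.g. 10.0.0.0/16)."
  }
}
```

An empty default forces each environment's tfvars to state its egress ranges explicitly rather than inheriting an open rule.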
Hi @a-nomad, This PR adds the Terraform IAM foundation module and a GitHub Actions CI pipeline for Terraform (fmt/validate/plan). Key updates since the initial submission:
Ask: Could you review by Monday, Mar 2? Specifically — is the TF code quality and structure acceptable as a foundation to extend with VPC, Qdrant, and Bedrock modules, or would you prefer a different approach? Note on CI checks: Some failing checks (Docker Security Scan, SonarCloud) are pre-existing repo-wide failures unrelated to this PR. The new Terraform CI check requires AWS credentials ( Thanks!
continue-on-error: true silently masked real Terraform plan failures (exit code 1). Now uses -detailed-exitcode to distinguish between actual failures (exit 1 → fail pipeline) and detected changes (exit 2 → pass, expected on PRs). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
♻️ Duplicate comments (1)
.github/workflows/terraform-mn-vectorization.yml (1)
13-16: ⚠️ Potential issue | 🟠 Major
Migrate AWS auth to OIDC instead of static access keys.
Line 41 and Line 42 still rely on long-lived credentials. This is a security risk and was already identified earlier; it should be resolved before rollout.
Suggested fix
```diff
 permissions:
   contents: read
   pull-requests: write
+  id-token: write
@@
       - name: Configure AWS Credentials
         uses: aws-actions/configure-aws-credentials@v4
         with:
-          aws-access-key-id: ${{ secrets.MN_VECTORIZATION_AWS_ACCESS_KEY_ID }}
-          aws-secret-access-key: ${{ secrets.MN_VECTORIZATION_AWS_SECRET_ACCESS_KEY }}
+          role-to-assume: ${{ secrets.MN_VECTORIZATION_AWS_ROLE_ARN }}
           aws-region: ${{ env.AWS_REGION }}
```

Also applies to: 38-43
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/terraform-mn-vectorization.yml around lines 13 - 16, Current workflow uses long-lived AWS credentials; replace them with OIDC role assumption by switching the GitHub Action to aws-actions/configure-aws-credentials@v4 and removing the aws-access-key-id/aws-secret-access-key usage. Update the workflow step that configures AWS (look for configure-aws-credentials) to use role-to-assume and aws-region inputs and ensure the job-level permissions include id-token: write (keep contents: read). Also confirm the target AWS role has a trust policy for GitHub OIDC; remove any secrets referencing long-lived keys from the workflow and from repo secrets once rolled out.
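The trust policy mentioned at the end of the prompt can be expressed in Terraform roughly as follows. This is a sketch only: the data source layout follows standard AWS provider usage, but the OIDC provider resource name, repo pattern, and audience value are assumptions to adapt:

```hcl
# Sketch: trust policy letting GitHub Actions assume the CI role via OIDC,
# scoped to a single repository. Names and the repo path are illustrative.
data "aws_iam_policy_document" "github_oidc_trust" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.github.arn]
    }

    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:aud"
      values   = ["sts.amazonaws.com"]
    }

    condition {
      test     = "StringLike"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:speedandfunction/automatization:*"]
    }
  }
}
```

Tightening the `sub` pattern (e.g. to a specific branch or environment) further limits which workflow runs can assume the role.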
🧹 Nitpick comments (1)
.github/workflows/terraform-mn-vectorization.yml (1)
58-69: Add explicit signal for "changes detected" (exit code 2).
Current logic succeeds for both exit code 0 and 2 but emits no explicit indicator for code 2. A notice improves PR observability.
Suggested fix
```diff
       - name: Terraform Plan
         if: github.event_name == 'pull_request'
         run: |
           set +e
           terraform plan -var-file=environments/dev.tfvars -no-color -input=false -detailed-exitcode
           ec=$?
           if [ "$ec" -eq 1 ]; then
             echo "::error::Terraform plan failed"
             exit 1
+          elif [ "$ec" -eq 2 ]; then
+            echo "::notice::Terraform plan detected infrastructure changes"
           fi
           exit 0
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/terraform-mn-vectorization.yml around lines 58 - 69, The Terraform Plan step currently treats exit codes 0 and 2 the same and emits no explicit signal for "changes detected"; update the logic around the terraform plan invocation (the terraform plan -var-file=... -detailed-exitcode call and the ec variable) in the "Terraform Plan" step so that after capturing ec you check for ec == 2 and emit a clear GitHub Actions notice (e.g., echo "::notice::Terraform plan detected changes") before exiting successfully, while keeping the existing check that treats ec == 1 as a failure and exits 1.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/terraform-mn-vectorization.yml:
- Around line 48-53: The workflow hardcodes the backend key and tfvars to "dev"
in the Terraform Init and later tfvars steps (the Terraform Init step and
backend-config="key=mn-vectorization/dev/terraform.tfstate"), which will
mis-target runs on main; change these to use a variable/environment value
instead (e.g., ENVIRONMENT or derive from GitHub context like GITHUB_REF or a
workflow input) and update the backend-config key and any tfvars references to
use that variable (replace the literal "dev" with the chosen variable, and
ensure the ENV var is set earlier in the workflow or as an input to the job so
Terraform Init and the tfvars step both reference the same environment).
---
Duplicate comments:
In @.github/workflows/terraform-mn-vectorization.yml:
- Around line 13-16: Current workflow uses long-lived AWS credentials; replace
them with OIDC role assumption by switching the GitHub Action to
aws-actions/configure-aws-credentials@v4 and removing the
aws-access-key-id/aws-secret-access-key usage. Update the workflow step that
configures AWS (look for configure-aws-credentials) to use role-to-assume and
aws-region inputs and ensure the job-level permissions include id-token: write
(keep contents: read). Also confirm the target AWS role has a trust policy for
GitHub OIDC; remove any secrets referencing long-lived keys from the workflow
and from repo secrets once rolled out.
---
Nitpick comments:
In @.github/workflows/terraform-mn-vectorization.yml:
- Around line 58-69: The Terraform Plan step currently treats exit codes 0 and 2
the same and emits no explicit signal for "changes detected"; update the logic
around the terraform plan invocation (the terraform plan -var-file=...
-detailed-exitcode call and the ec variable) in the "Terraform Plan" step so
that after capturing ec you check for ec == 2 and emit a clear GitHub Actions
notice (e.g., echo "::notice::Terraform plan detected changes") before exiting
successfully, while keeping the existing check that treats ec == 1 as a failure
and exits 1.
Files selected (13)
- .github/workflows/terraform-mn-vectorization.yml (1)
- mn-vectorization/infra/cloudwatch.tf (1)
- mn-vectorization/infra/dynamodb.tf (1)
- mn-vectorization/infra/environments/dev.tfvars (1)
- mn-vectorization/infra/environments/prod.tfvars (1)
- mn-vectorization/infra/environments/staging.tfvars (1)
- mn-vectorization/infra/iam.tf (1)
- mn-vectorization/infra/locals.tf (1)
- mn-vectorization/infra/provider.tf (1)
- mn-vectorization/infra/s3.tf (1)
- mn-vectorization/infra/secrets.tf (1)
- mn-vectorization/infra/security_group.tf (1)
- mn-vectorization/infra/variables.tf (1)
Files not summarized due to errors (13)
- mn-vectorization/infra/dynamodb.tf (nothing obtained from openai)
- mn-vectorization/infra/cloudwatch.tf (nothing obtained from openai)
- mn-vectorization/infra/environments/dev.tfvars (nothing obtained from openai)
- mn-vectorization/infra/environments/staging.tfvars (nothing obtained from openai)
- mn-vectorization/infra/environments/prod.tfvars (nothing obtained from openai)
- .github/workflows/terraform-mn-vectorization.yml (nothing obtained from openai)
- mn-vectorization/infra/secrets.tf (nothing obtained from openai)
- mn-vectorization/infra/provider.tf (nothing obtained from openai)
- mn-vectorization/infra/locals.tf (nothing obtained from openai)
- mn-vectorization/infra/iam.tf (nothing obtained from openai)
- mn-vectorization/infra/s3.tf (nothing obtained from openai)
- mn-vectorization/infra/security_group.tf (nothing obtained from openai)
- mn-vectorization/infra/variables.tf (nothing obtained from openai)
Files not reviewed due to errors (13)
- mn-vectorization/infra/environments/staging.tfvars (no response)
- mn-vectorization/infra/environments/prod.tfvars (no response)
- mn-vectorization/infra/dynamodb.tf (no response)
- mn-vectorization/infra/cloudwatch.tf (no response)
- mn-vectorization/infra/environments/dev.tfvars (no response)
- .github/workflows/terraform-mn-vectorization.yml (no response)
- mn-vectorization/infra/s3.tf (no response)
- mn-vectorization/infra/locals.tf (no response)
- mn-vectorization/infra/provider.tf (no response)
- mn-vectorization/infra/secrets.tf (no response)
- mn-vectorization/infra/iam.tf (no response)
- mn-vectorization/infra/security_group.tf (no response)
- mn-vectorization/infra/variables.tf (no response)
Review comments generated (0)
- Review: 0
- LGTM: 0
Tips
Chat with CodeRabbit Bot (@coderabbitai)
- Reply on review comments left by this bot to ask follow-up questions. A review comment is a comment on a diff or a file.
- Invite the bot into a review comment chain by tagging @coderabbitai in a reply.
Code suggestions
- The bot may make code suggestions, but please review them carefully before committing since the line number ranges may be misaligned.
- You can edit the comment made by the bot and manually tweak the suggestion if it is slightly off.
Pausing incremental reviews
- Add `@coderabbitai: ignore` anywhere in the PR description to pause further reviews from the bot.
Document intentional choices that CodeRabbit flagged: - Static AWS keys: OIDC deferred until Nomad provisions IAM OIDC provider - Hardcoded dev env: only environment deployed for POC - HTTPS egress 0.0.0.0/0: SaaS APIs have rotating IPs, CIDR allowlist impractical Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
There was a problem hiding this comment.
Actionable comments posted: 1
♻️ Duplicate comments (1)
.github/workflows/terraform-mn-vectorization.yml (1)
13-16: ⚠️ Potential issue | 🟠 Major
Use OIDC role assumption instead of static AWS keys.
Line 43 and Line 44 rely on long-lived secrets. Even for POC, this is a significant credential-risk pattern in CI.
Suggested change
```diff
 permissions:
   contents: read
-  pull-requests: write
+  id-token: write
@@
       - name: Configure AWS Credentials
         uses: aws-actions/configure-aws-credentials@v4
         with:
-          aws-access-key-id: ${{ secrets.MN_VECTORIZATION_AWS_ACCESS_KEY_ID }}
-          aws-secret-access-key: ${{ secrets.MN_VECTORIZATION_AWS_SECRET_ACCESS_KEY }}
+          role-to-assume: ${{ secrets.MN_VECTORIZATION_AWS_ROLE_ARN }}
           aws-region: ${{ env.AWS_REGION }}
```

Also applies to: 40-45
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/terraform-mn-vectorization.yml around lines 13 - 16, The workflow is using long-lived static AWS keys with aws-actions/configure-aws-credentials@v4; replace that with GitHub OIDC role assumption: update the workflow permissions to include id-token: write and change the configure-aws-credentials step to use the action's role-to-assume (and optional role-session-name/region) inputs instead of aws-access-key-id/aws-secret-access-key secrets, and ensure the repo or OIDC provider is configured with the target AWS IAM role ARN (refer to aws-actions/configure-aws-credentials@v4 and the step name that invokes it) so CI uses OIDC-based short-lived credentials rather than static keys.
🧹 Nitpick comments (1)
.github/workflows/terraform-mn-vectorization.yml (1)
21-24: Add apply guardrails (environment approval + concurrency).
Given auto-approve on `main`, add deployment protection and a concurrency group to reduce accidental or overlapping applies.

Suggested change

```diff
 jobs:
   terraform:
     name: Terraform
     runs-on: ubuntu-latest
+    concurrency:
+      group: terraform-mn-vectorization-${{ github.ref }}
+      cancel-in-progress: false
+    environment: mn-vectorization-dev
```

Also applies to: 74-76
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/terraform-mn-vectorization.yml around lines 21 - 24, The workflow's terraform job (job id "terraform", name "Terraform") needs deployment guardrails: add an environment with required reviewers to the job (use the environment keyword under the terraform job so merges to main require approval) and add a concurrency block to prevent overlapping runs (e.g., a concurrency group keyed by the repo/ref like "terraform-${{ github.ref }}" with cancel-in-progress: true) and apply the same changes to the other terraform job block referenced around lines 74-76; update both job definitions to include environment and concurrency entries.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/terraform-mn-vectorization.yml:
- Around line 13-15: The workflow permissions are overly broad: the permissions
block currently sets pull-requests: write which is unnecessary; update the
permissions section to the least privilege by removing or changing
pull-requests: write to a read/none-level scope (e.g., remove the pull-requests
entry or set pull-requests: none) and keep only the required permission(s) such
as contents: read so the workflow token no longer has write access to pull
requests.
---
Duplicate comments:
In @.github/workflows/terraform-mn-vectorization.yml:
- Around line 13-16: The workflow is using long-lived static AWS keys with
aws-actions/configure-aws-credentials@v4; replace that with GitHub OIDC role
assumption: update the workflow permissions to include id-token: write and
change the configure-aws-credentials step to use the action's role-to-assume
(and optional role-session-name/region) inputs instead of
aws-access-key-id/aws-secret-access-key secrets, and ensure the repo or OIDC
provider is configured with the target AWS IAM role ARN (refer to
aws-actions/configure-aws-credentials@v4 and the step name that invokes it) so
CI uses OIDC-based short-lived credentials rather than static keys.
---
Nitpick comments:
In @.github/workflows/terraform-mn-vectorization.yml:
- Around line 21-24: The workflow's terraform job (job id "terraform", name
"Terraform") needs deployment guardrails: add an environment with required
reviewers to the job (use the environment keyword under the terraform job so
merges to main require approval) and add a concurrency block to prevent
overlapping runs (e.g., a concurrency group keyed by the repo/ref like
"terraform-${{ github.ref }}" with cancel-in-progress: true) and apply the same
changes to the other terraform job block referenced around lines 74-76; update
both job definitions to include environment and concurrency entries.
ℹ️ Review info
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- .github/workflows/terraform-mn-vectorization.yml
- mn-vectorization/infra/security_group.tf
🚧 Files skipped from review as they are similar to previous changes (1)
- mn-vectorization/infra/security_group.tf
- Extract hardcoded 'dev' to TF_ENV env variable (DRY, one place to change) - Remove pull-requests: write — workflow doesn't write PR comments - Keeps env as 'dev' (only environment deployed for POC) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Files selected (2)
- .github/workflows/terraform-mn-vectorization.yml (1)
- mn-vectorization/infra/security_group.tf (1)
Files not summarized due to errors (2)
- .github/workflows/terraform-mn-vectorization.yml (nothing obtained from openai)
- mn-vectorization/infra/security_group.tf (nothing obtained from openai)
Files not reviewed due to errors (2)
- .github/workflows/terraform-mn-vectorization.yml (no response)
- mn-vectorization/infra/security_group.tf (no response)
Review comments generated (0)
- Review: 0
- LGTM: 0
Terraform code moved to dedicated repo: speedandfunction/mn-vectorization-iac (PR #1). Closing this PR — all fixes (CodeRabbit, pipeline, permissions) applied in the new repo.
Closing this PR — Terraform code has been moved to a dedicated repo. All fixes from this PR (CodeRabbit review, GH Actions pipeline, permissions, TF_ENV extraction) have been applied in the new repo. This branch (


Summary
Terraform infrastructure for the MN Vectorization project (RAG system for meeting transcripts).
Resources created (all in `us-east-1`): `tasks` (async query state, with TTL) + `users` (ACL)

File structure
Notes
- `ec2_instance_id` in dev.tfvars is a placeholder — needs real value
- Secrets are created with placeholders (`CHANGE_ME`) — real keys set via `aws secretsmanager put-secret-value`

[coderabbit-ai-review]
@claude review
Summary by CodeRabbit
Infrastructure
Monitoring
Storage & Encryption
Database
Secrets
Access & Networking
CI/CD
Chores