
chore: migrate from Deno to Node.js#51

Open

veksen wants to merge 2 commits into main from chore-migrate-deno-to-node

Conversation

veksen (Member) commented Feb 13, 2026

Summary

  • Replace Deno runtime APIs with Node.js equivalents (process.env, process.exit, fs, child_process)
  • Replace bare HTTP server with Fastify (with @fastify/cors, @fastify/websocket, @fastify/rate-limit)
  • Migrate tests from Deno.test + @std/assert to Vitest (with assertion messages preserved)
  • Replace postgres.js with node-postgres (pg) + pg-cursor
  • Add serializeParams for JSONB compatibility and toPgTextArray for PG array params
  • Add transaction savepoint wrapping and exec serialization to match postgres.js behavior
  • Add pool error handler for DROP DATABASE WITH (FORCE) resilience
  • Delete deno.json, deno.lock, rate-limit.ts
  • Add package.json, tsconfig.json, vitest.config.ts
  • Update CI workflows and Dockerfiles for Node.js 24
  • Add esbuild bundling with runtime asset copying
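The runtime API swap in the first bullet can be sketched as follows; the `getEnv` helper is illustrative, not code from this PR:

```typescript
// Hypothetical helper mirroring Deno.env.get with a default value.
// Deno:    Deno.env.get("PORT") ?? "8080"
// Node.js: process.env.PORT ?? "8080"
function getEnv(name: string, fallback: string): string {
  return process.env[name] ?? fallback;
}

// Other one-to-one replacements made in this migration:
//   Deno.exit(code)          → process.exit(code)
//   Deno.readTextFile(path)  → fs.promises.readFile(path, "utf8")
//   new Deno.Command(...)    → child_process.spawn / execFile

console.log(getEnv("PORT", "8080"));
```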

@veksen veksen force-pushed the chore-migrate-deno-to-node branch 3 times, most recently from cb64ebb to 6d20729 on February 13, 2026 at 11:27
@github-actions github-actions bot left a comment

Found New Recommended Indexes

The following indexes are likely to make the most impact on your queries. They're ordered by how many times the queries were seen in your tests.

Index Definition                                  Usage Count
assets(event_id, uploader_id, inserted_at desc)   1
guest_ip_addresses(ip_address)                    1

Statistics Mode

When generating recommendations, we made the following changes to your database statistics:

fixed rows per table

  • Rows per table: 10000
  • Pages per table: 1

Instead of assuming a fixed number of rows per table (which can cause unnecessary recommendations), you can export statistics from your production database and import it using the STATISTICS_PATH environment variable. You can read more about how to sync stats here.

- name: Analyze
  uses: query-doctor/analyzer@v0
  env:
    STATISTICS_PATH: ./statistics.json

Optimization Overview

Query                 Base Cost   Optimized Cost   Improvement
3527900520088515000   15922.03    1638.53          9.72x
7119592456364862000   126         9.04             13.94x

Query 3527900520088515000

New indexes improve cost by 9.72x:
  1. assets(event_id, uploader_id, inserted_at desc)
View Query (too long to display inline)
SELECT
  "guests"."id",
  "guests"."session_id",
  "guests"."username",
  "guests"."avatar_path",
  "guests"."color",
  "guests"."side",
  "guests"."audio_recording_path",
  "guests"."audio_recording_public",
  "guests"."memo",
  "guests"."memo_public",
  "guests"."setup_at",
  "guests"."last_upload",
  "guests"."inserted_at",
  "guests"."updated_at",
  "userAssets"."id",
  "userAssets"."kind",
  "userAssets"."event_id",
  "userAssets"."uploader_id",
  "userAssets"."uploader_ip",
  "userAssets"."path",
  "userAssets"."file_size",
  "userAssets"."width",
  "userAssets"."height",
  "userAssets"."visible_at",
  "userAssets"."deleted_at",
  "userAssets"."inserted_at",
  "userAssets"."updated_at"
FROM
  (
    SELECT
      "id",
      "session_id",
      "username",
      "avatar_path",
      "color",
      "side",
      "audio_recording_path",
      "audio_recording_public",
      "memo",
      "memo_public",
      "setup_at",
      "last_upload",
      "inserted_at",
      "updated_at"
    FROM
      "guests"
    ORDER BY
      "guests"."last_upload" DESC,
      "guests"."id" DESC
    LIMIT
      100
  ) "guests"
  CROSS JOIN LATERAL (
    SELECT
      "id",
      "kind",
      "event_id",
      "uploader_id",
      "uploader_ip",
      "path",
      "file_size",
      "width",
      "height",
      "visible_at",
      "deleted_at",
      "inserted_at",
      "updated_at"
    FROM
      "assets"
    WHERE
      (
        "assets"."event_id" = (
          SELECT
            "id"
          FROM
            "events"
          WHERE
            "events"."event_key" = '01JKCVP4M2CH34SVTQGHSW4Y5G'
        )
        AND "assets"."uploader_id" = "guests"."id"
      )
    ORDER BY
      "assets"."inserted_at" DESC
    LIMIT
      100
  ) "userAssets";
View Explain Plan (before optimization)
{
  "Node Type": "Nested Loop",
  "Parallel Aware": false,
  "Async Capable": false,
  "Join Type": "Inner",
  "Startup Cost": 159.35,
  "Total Cost": 15922.03,
  "Plan Rows": 100,
  "Plan Width": 370,
  "Inner Unique": false,
  "Plans": [
    {
      "Node Type": "Limit",
      "Parent Relationship": "Outer",
      "Parallel Aware": false,
      "Async Capable": false,
      "Startup Cost": 0.16,
      "Total Cost": 1.78,
      "Plan Rows": 100,
      "Plan Width": 238,
      "Plans": [
        {
          "Node Type": "Index Scan",
          "Parent Relationship": "Outer",
          "Parallel Aware": false,
          "Async Capable": false,
          "Scan Direction": "Forward",
          "Index Name": "guests_last_upload_desc_id_desc_index",
          "Relation Name": "guests",
          "Alias": "guests",
          "Startup Cost": 0.16,
          "Total Cost": 162.16,
          "Plan Rows": 10000,
          "Plan Width": 238
        }
      ]
    },
    {
      "Node Type": "Limit",
      "Parent Relationship": "Inner",
      "Parallel Aware": false,
      "Async Capable": false,
      "Startup Cost": 159.19,
      "Total Cost": 159.19,
      "Plan Rows": 1,
      "Plan Width": 132,
      "Plans": [
        {
          "Node Type": "Index Scan",
          "Parent Relationship": "InitPlan",
          "Subplan Name": "InitPlan 1 (returns $0)",
          "Parallel Aware": false,
          "Async Capable": false,
          "Scan Direction": "Forward",
          "Index Name": "events_event_key_index",
          "Relation Name": "events",
          "Alias": "events",
          "Startup Cost": 0.16,
          "Total Cost": 8.18,
          "Plan Rows": 1,
          "Plan Width": 8,
          "Index Cond": "(event_key = '01JKCVP4M2CH34SVTQGHSW4Y5G'::text)"
        },
        {
          "Node Type": "Sort",
          "Parent Relationship": "Outer",
          "Parallel Aware": false,
          "Async Capable": false,
          "Startup Cost": 151.01,
          "Total Cost": 151.01,
          "Plan Rows": 1,
          "Plan Width": 132,
          "Sort Key": [
            "assets.inserted_at DESC"
          ],
          "Plans": [
            {
              "Node Type": "Seq Scan",
              "Parent Relationship": "Outer",
              "Parallel Aware": false,
              "Async Capable": false,
              "Relation Name": "assets",
              "Alias": "assets",
              "Startup Cost": 0,
              "Total Cost": 151,
              "Plan Rows": 1,
              "Plan Width": 132,
              "Filter": "((event_id = $0) AND (uploader_id = guests.id))"
            }
          ]
        }
      ]
    }
  ]
}
View Explain Plan (after optimization)
{
  "Node Type": "Nested Loop",
  "Parallel Aware": false,
  "Async Capable": false,
  "Join Type": "Inner",
  "Startup Cost": 8.5,
  "Total Cost": 1638.53,
  "Plan Rows": 100,
  "Plan Width": 370,
  "Inner Unique": false,
  "Plans": [
    {
      "Node Type": "Limit",
      "Parent Relationship": "Outer",
      "Parallel Aware": false,
      "Async Capable": false,
      "Startup Cost": 0.16,
      "Total Cost": 1.78,
      "Plan Rows": 100,
      "Plan Width": 238,
      "Plans": [
        {
          "Node Type": "Index Scan",
          "Parent Relationship": "Outer",
          "Parallel Aware": false,
          "Async Capable": false,
          "Scan Direction": "Forward",
          "Index Name": "guests_last_upload_desc_id_desc_index",
          "Relation Name": "guests",
          "Alias": "guests",
          "Startup Cost": 0.16,
          "Total Cost": 162.16,
          "Plan Rows": 10000,
          "Plan Width": 238
        }
      ]
    },
    {
      "Node Type": "Limit",
      "Parent Relationship": "Inner",
      "Parallel Aware": false,
      "Async Capable": false,
      "Startup Cost": 8.34,
      "Total Cost": 16.36,
      "Plan Rows": 1,
      "Plan Width": 132,
      "Plans": [
        {
          "Node Type": "Index Scan",
          "Parent Relationship": "InitPlan",
          "Subplan Name": "InitPlan 1 (returns $0)",
          "Parallel Aware": false,
          "Async Capable": false,
          "Scan Direction": "Forward",
          "Index Name": "events_event_key_index",
          "Relation Name": "events",
          "Alias": "events",
          "Startup Cost": 0.16,
          "Total Cost": 8.18,
          "Plan Rows": 1,
          "Plan Width": 8,
          "Index Cond": "(event_key = '01JKCVP4M2CH34SVTQGHSW4Y5G'::text)"
        },
        {
          "Node Type": "Index Scan",
          "Parent Relationship": "Outer",
          "Parallel Aware": false,
          "Async Capable": false,
          "Scan Direction": "Forward",
          "Index Name": "assets(event_id, uploader_id, inserted_at desc)",
          "Relation Name": "assets",
          "Alias": "assets",
          "Startup Cost": 0.16,
          "Total Cost": 8.18,
          "Plan Rows": 1,
          "Plan Width": 132,
          "Index Cond": "((event_id = $0) AND (uploader_id = guests.id))"
        }
      ]
    }
  ]
}

Query 7119592456364862000

New indexes improve cost by 13.94x:
  1. guest_ip_addresses(ip_address)
SELECT
  *
FROM
  guest_ip_addresses
WHERE
  ip_address = '127.0.0.1';
View Explain Plan (before optimization)
{
  "Node Type": "Seq Scan",
  "Parallel Aware": false,
  "Async Capable": false,
  "Relation Name": "guest_ip_addresses",
  "Alias": "guest_ip_addresses",
  "Startup Cost": 0,
  "Total Cost": 126,
  "Plan Rows": 50,
  "Plan Width": 56,
  "Filter": "(ip_address = '127.0.0.1'::text)"
}
View Explain Plan (after optimization)
{
  "Node Type": "Index Scan",
  "Parallel Aware": false,
  "Async Capable": false,
  "Scan Direction": "Forward",
  "Index Name": "guest_ip_addresses(ip_address)",
  "Relation Name": "guest_ip_addresses",
  "Alias": "guest_ip_addresses",
  "Startup Cost": 0.16,
  "Total Cost": 9.04,
  "Plan Rows": 50,
  "Plan Width": 56,
  "Index Cond": "(ip_address = '127.0.0.1'::text)"
}

What are the numbers next to the query?
The numbers are a fingerprint uniquely identifying the query. Let us know in the Discord if you'd like to be able to assign unique names to your queries.

What is cost?
Cost is an arbitrary value representing the amount of work Postgres decided it needs to do to execute a query, based on what it knows about the database and the query itself. We use cost to look for improvements when checking whether an index helps optimize a query in CI, as the full production dataset is simply not available to work with.
Execution metadata

  • Log size: 212123 bytes
  • Time elapsed: 2352ms
  • Queries seen: 32
  • Queries matched: 11
  • Queries optimized: 2
  • Queries errored: 2

@veksen veksen force-pushed the chore-migrate-deno-to-node branch 6 times, most recently from 9352369 to 27433b8 on February 13, 2026 at 19:27
veksen and others added 2 commits February 13, 2026 23:34
Replace Deno runtime with Node.js 24:
- postgres.js → pg (node-postgres) with serializeParams, savepoints, queue
- Deno.serve → Fastify HTTP server
- Deno test → Vitest
- Deno file imports → Node.js fs/path
- Add pool error handler for DROP DATABASE WITH (FORCE) resilience
- Add esbuild bundling with runtime asset copying

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Replace Deno setup with Node.js 24 in GitHub Actions
- Update Dockerfile to use esbuild bundle + node:24-alpine
- Update Dockerfile.dev for Node.js development
- Add .dockerignore for node_modules/dist

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@veksen veksen force-pushed the chore-migrate-deno-to-node branch from 27433b8 to 6787b82 on February 13, 2026 at 19:35
Comment on lines -17 to +23
-      - name: Install Deno
-        uses: denoland/setup-deno@v2
+      - name: Install Node.js
+        uses: actions/setup-node@v4
         with:
-          cache-hash: ${{ hashFiles('**/deno.lock') }}
-          deno-version: v2.x
+          node-version: 24
+          cache: npm
+      - name: Install dependencies
+        run: npm ci
Collaborator

Claude is very out of date here. The latest setup-node is v6.

Comment on lines +279 to +280
expect((emailQueryAfterToggle!.optimization as any).indexesUsed).not.toContain("users_email_idx");
expect((emailQueryAfterToggle!.optimization as any).cost).toBeGreaterThan(costWithIndex);
Collaborator

Ouch, this is not good. Deno's assert is defined with a type guard that narrows the result. Is there any way to prevent sprinkling `any` everywhere with Node?
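One way to keep that narrowing without `any` in Vitest: Node's built-in `node:assert` module exports functions typed with `asserts value`, so `ok` narrows just like Deno's `assert` did. A sketch, assuming a query result shape where `optimization` is optional:

```typescript
import { ok } from "node:assert";

type QueryResult = { optimization?: { cost: number } };

// ok() is typed `asserts value`, so each call narrows the type below it,
// avoiding both non-null `!` and `as any` casts in test bodies.
function optimizedCost(result: QueryResult | undefined): number {
  ok(result, "query should be present");
  ok(result.optimization, "query should have an optimization");
  return result.optimization.cost;
}

console.log(optimizedCost({ optimization: { cost: 1638.53 } }));
```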

Comment on lines +22 to +32
+async function onSync(body: unknown) {
   const startTime = Date.now();
-  const url = new URL(req.url);
-
-  if (!req.body) {
-    return new Response("Missing body", { status: 400 });
-  }
-  let body: SyncRequest;
-  const bodyString = await req.text();
+  let parsed: SyncRequest;
   try {
-    body = SyncRequest.parse(JSON.parse(bodyString));
+    parsed = SyncRequest.parse(body);
   } catch (e: unknown) {
     if (e instanceof ZodError) {
-      return Response.json(
-        {
+      return {
+        status: 400,
+        body: {
Collaborator

is it possible to use the zod type provider for fastify? I feel like that could make the code a little bit cleaner and get rid of repetitive 400 error handling
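For reference, the suggested wiring might look like this, assuming the community package `fastify-type-provider-zod` were added as a dependency (route and schema names here are illustrative, not taken from the PR):

```typescript
import Fastify from "fastify";
import { z } from "zod";
import {
  serializerCompiler,
  validatorCompiler,
  type ZodTypeProvider,
} from "fastify-type-provider-zod";

const app = Fastify().withTypeProvider<ZodTypeProvider>();
app.setValidatorCompiler(validatorCompiler);
app.setSerializerCompiler(serializerCompiler);

// Illustrative schema; the real SyncRequest lives elsewhere in this repo.
const SyncRequest = z.object({ schema: z.string() });

app.post("/sync", { schema: { body: SyncRequest } }, async (req) => {
  // req.body is typed as { schema: string }; invalid bodies are rejected
  // with a 400 by fastify before the handler runs, so the manual
  // ZodError catch and repeated 400 responses go away.
  return { ok: true };
});
```

This is a sketch of server wiring rather than a drop-in patch; the zod version in package.json would need to match what the type provider package supports.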

  */
  public async getRecentQueries(): Promise<RecentQuery[]> {
    try {
      const pssSchema = await this.getPssSchema();
Collaborator

Can we use PgIdentifier for this instead?

Comment on lines +87 to 95
/**
 * Format a string array as a PostgreSQL array literal.
 * Use this for params that target ::text[] casts instead of ::jsonb,
 * so they bypass JSON serialization in serializeParams.
 */
export function toPgTextArray(arr: string[]): string {
  if (arr.length === 0) return "{}";
  return "{" + arr.map((s) => '"' + s.replace(/\\/g, "\\\\").replace(/"/g, '\\"') + '"').join(",") + "}";
}
Collaborator

I'm very concerned with everything going on in this file. Feels like something is going wrong here. does node-pg not handle passing arrays in parameters?
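For what it's worth, node-postgres serializes JS arrays to Postgres array literals on its own; `toPgTextArray` seems to exist only because `serializeParams` in this same file JSON-stringifies arrays before they reach the driver. Restating the helper to make its escaping concrete:

```typescript
// Same logic as the diff above, restated to show its output shape.
function toPgTextArray(arr: string[]): string {
  if (arr.length === 0) return "{}";
  return "{" + arr.map((s) => '"' + s.replace(/\\/g, "\\\\").replace(/"/g, '\\"') + '"').join(",") + "}";
}

console.log(toPgTextArray([]));           // {}
console.log(toPgTextArray(["a", "b"]));   // {"a","b"}
console.log(toPgTextArray(['say "hi"'])); // {"say \"hi\""}
```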

Comment on lines +71 to +84
/**
 * Pre-serialize array/object params as JSON strings.
 * pg serializes arrays as PostgreSQL array literals ({...}),
 * but the Postgres interface expects postgres.js semantics
 * where complex types are sent as JSON strings ([...]).
 */
function serializeParams(params?: unknown[]): unknown[] | undefined {
  if (!params) return params;
  return params.map((p) => {
    if (p === null || p === undefined) return p;
    if (Array.isArray(p)) return JSON.stringify(p);
    if (typeof p === "object" && !(p instanceof Buffer)) return JSON.stringify(p);
    return p;
  });
Collaborator

This also looks like logic the driver should be handling for us
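As far as I can tell, pg already JSON-stringifies plain objects by default; the branch that genuinely diverges from the driver's behavior is the array case, where pg would emit a Postgres array literal instead of JSON. Restating the helper with a concrete input/output:

```typescript
// Same logic as the diff above: arrays and plain objects become JSON
// strings; primitives, null/undefined, and Buffers pass through untouched.
function serializeParams(params?: unknown[]): unknown[] | undefined {
  if (!params) return params;
  return params.map((p) => {
    if (p === null || p === undefined) return p;
    if (Array.isArray(p)) return JSON.stringify(p);
    if (typeof p === "object" && !(p instanceof Buffer)) return JSON.stringify(p);
    return p;
  });
}

console.log(serializeParams([[1, 2], { a: 1 }, "x", null]));
// → [ '[1,2]', '{"a":1}', 'x', null ]
```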

Comment on lines +125 to +141
exec: (query, params) => {
  const doExec = async () => {
    const sp = "sp_" + savepointCounter++;
    await client.query("SAVEPOINT " + sp);
    try {
      const result = await client.query(query, serializeParams(params) as any[]);
      await client.query("RELEASE SAVEPOINT " + sp);
      return result.rows;
    } catch (error) {
      await client.query("ROLLBACK TO SAVEPOINT " + sp);
      throw error;
    }
  };
  const result = queue.then(doExec, doExec);
  queue = result.then(() => {}, () => {});
  return result;
},
Collaborator

Another instance of probably correct but very questionable changes here. What do we need to change with our logic to not have to do this?

Comment on lines +158 to +169
const client = await pool.connect();
try {
  const cursor = client.query(new Cursor(query, serializeParams(params) as any[]));
  const batchSize = options?.size ?? DEFAULT_ITEMS_PER_PAGE;
  let rows = await cursor.read(batchSize);
  while (rows.length > 0) {
    yield* rows as T[];
    rows = await cursor.read(batchSize);
  }
  await cursor.close();
} finally {
  client.release();
Collaborator

I'm concerned about explicitly calling .connect() on a pool where connection lifecycles are meant to be managed by the driver. Something is happening in transaction().

Comment on lines +98 to +103
// Handle idle client errors to prevent process crashes.
// Expected during DROP DATABASE ... WITH (FORCE) which terminates
// all connections to the target database.
pool.on("error", (err) => {
  log.warn(`Pool idle client error: ${err.message}`, "postgres");
});
Collaborator

Interesting, it makes sense that force DB drops would cause an error even though we probably don't want to handle it like one. Maybe there's a path where we can selectively silence errors related to force disconnects caused by resetting the database. Perhaps there's a better way of doing this gracefully.
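A possible refinement, sketched under the assumption that errors surfaced here carry a SQLSTATE `code` property the way pg's DatabaseError does: `DROP DATABASE ... WITH (FORCE)` terminates backends via pg_terminate_backend, which raises 57P01 (admin_shutdown), so that case could be logged at debug level while anything else remains a warning.

```typescript
// Hypothetical helper: true only for server-initiated shutdowns.
// 57P01 admin_shutdown is what pg_terminate_backend raises (and therefore
// what DROP DATABASE ... WITH (FORCE) produces); 57P02 is crash_shutdown.
function isForcedDisconnect(err: { code?: string }): boolean {
  return err.code === "57P01" || err.code === "57P02";
}

// Sketch of usage in the pool error handler:
// pool.on("error", (err) => {
//   if (isForcedDisconnect(err)) {
//     log.debug(`Expected disconnect: ${err.message}`, "postgres");
//   } else {
//     log.warn(`Pool idle client error: ${err.message}`, "postgres");
//   }
// });

console.log(isForcedDisconnect({ code: "57P01" }));
```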

   this.queries.clear();
   this.target = undefined;
-  this._finish = Promise.withResolvers();
+  this._finish = Promise.withResolvers<void>();
Member Author

why?


type HandlerResult = {
  status: number;
  body: unknown;
Member Author

why?

}

-private async toggleIndex(request: Request): Promise<Response> {
+async toggleIndex(body: unknown): Promise<HandlerResult> {
Member Author

can the body be typed?

}

-private async getStatus(): Promise<Response> {
+async getStatus(): Promise<unknown> {
Member Author

why unknown?

type: "ok";
// the type of this is not super important
// currently the frontend only cares whether
// or not this array is empty
Member Author

can keep this comment

assertEquals(
  indexesAfter.length,
  1,
  "Indexes were not copied over correctly from the source db",
Member Author

missing assert message

assertEquals(
  initialDiffs.length,
  0,
  "Should have no diffs initially after sync",
Member Author

missing assert message

assertEquals(
  diffsResult.status,
  "fulfilled",
  "Schema poll should succeed",
Member Author

missing assert message

assertEquals(
  diffs.length,
  2,
  "Should detect 2 schema changes (added column and index)",
Member Author

missing assert message

const syncer = new PostgresSyncer(sourceConnectionManager);

-async function onSync(req: Request) {
+async function onSync(body: unknown) {
Member Author

unknown
