NestJS API server that orchestrates local LLM inference via Ollama and LangChain. Provides endpoints for code translation, review, and bug detection.
Note: This is part of an experimental hobby project. Code quality and performance optimizations are secondary to learning and experimentation.
```bash
npm install

# Development mode
npm run start:dev

# Production mode
npm run start
```

The API runs on http://localhost:3000.
- `POST /converter/translate` - Translate code between languages
- `POST /converter/review` - Get AI feedback on code
- `POST /converter/fix` - Get AI bug fixes with diff
All requests expect JSON bodies with `code` and `language` parameters.
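A client call might be sketched as follows, assuming the documented `{ code, language }` body shape; the helper name and the idea of returning the response as plain text are assumptions, not part of the actual API contract:

```typescript
// Build a request for the translate endpoint. The URL and JSON body follow
// the README; everything else (helper name, response handling) is illustrative.
function buildTranslateRequest(
  code: string,
  language: string,
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: "http://localhost:3000/converter/translate",
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ code, language }),
    },
  };
}

// Usage (with a running server):
//   const { url, init } = buildTranslateRequest("print('hi')", "typescript");
//   const res = await fetch(url, init);
```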
- Controller: `converter.controller.ts` - HTTP request handling
- Service: `converter.service.ts` - AI prompt orchestration via LangChain
- Module: `converter.module.ts` - Dependency injection setup
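The controller/service split above can be sketched in plain TypeScript (NestJS decorators such as `@Controller` and `@Post` are omitted so the example stays dependency-free, and the prompt text and method names are assumptions, not the project's actual implementation):

```typescript
// Service layer: in the real converter.service.ts this would invoke the LLM
// through LangChain/Ollama; here a prompt string stands in for that call.
class ConverterService {
  buildTranslatePrompt(code: string, language: string): string {
    return `Translate the following code to ${language}:\n\n${code}`;
  }
}

// Controller layer: maps an HTTP body to a service call,
// corresponding to POST /converter/translate.
class ConverterController {
  constructor(private readonly service: ConverterService) {}

  translate(body: { code: string; language: string }): string {
    return this.service.buildTranslatePrompt(body.code, body.language);
  }
}
```

In the actual module, NestJS's dependency injection (wired up in `converter.module.ts`) supplies the service to the controller's constructor.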
- `@langchain/ollama` - LLM integration with Ollama
- `@nestjs/core` - Framework
- `@nestjs/common` - Common utilities
- AI responses take 10-90+ seconds depending on code size and hardware
- Requests are blocking (no queue system)
- Suitable for local development, not production use
- Single model instance (Qwen2.5-Coder)
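Because responses can take 10-90+ seconds and requests block, callers may want a generous client-side timeout rather than waiting indefinitely; one way to sketch that (the helper name and timeout value are hypothetical, not part of the API):

```typescript
// Reject a promise if it does not settle within `ms` milliseconds.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error(`Timed out after ${ms} ms`)), ms);
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}

// Usage: allow up to two minutes for a slow inference run.
//   const res = await withTimeout(fetch(url, init), 120_000);
```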