An autonomous AI agent framework built in April 2024. Early experiment in building a self-directing agent loop that plans, decomposes tasks, remembers context, and calls tools — all running against local open-source LLMs.
The agent takes a goal from the user and autonomously works toward it through a continuous loop:
- Plan — generates a strategic plan for the goal
- Decompose — breaks the plan into a concrete task list
- Execute — iterates through tasks, calling tools and updating state each step
- Remember — retrieves relevant past context via vector-based memory (cosine similarity RAG)
- Complete — signals when the goal is achieved
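The loop above can be sketched in a few lines. This is an illustrative outline, not the repo's actual API; class and method names (`AgentLoopSketch`, `Run`, the `Log` entries) are made up for the example, and the LLM/tool calls are stubbed out.

```csharp
using System.Collections.Generic;

// Minimal sketch of the plan -> decompose -> execute -> remember -> complete loop.
public class AgentLoopSketch
{
    private readonly Queue<string> _tasks = new();
    public List<string> Log { get; } = new();

    public void Run(string goal)
    {
        Log.Add($"plan:{goal}");                 // Plan: ask the LLM for a strategy
        foreach (var t in new[] { "task 1", "task 2" })
            _tasks.Enqueue(t);                   // Decompose: plan -> concrete task list

        while (_tasks.Count > 0)                 // Execute: one task per iteration
        {
            var task = _tasks.Dequeue();
            Log.Add($"remember:{task}");         // Remember: vector-memory lookup for context
            Log.Add($"execute:{task}");          // Execute: call tools, update state
        }

        Log.Add("complete");                     // Complete: goal-achieved signal
    }
}
```

In the real agent, each step is an LLM round-trip (plan generation, task creation, tool selection) rather than the hard-coded placeholders used here.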
Tool calls are parsed from XML tags (`<tool_call>`) in the LLM output; no function-calling API is needed, so it works with any plain completion endpoint.
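A sketch of that parsing step: pull the `<tool_call>` block out of raw LLM text with a regex, then read its payload as XML. The inner tag layout (`<name>`, `<arg>`) is an assumption for illustration; the repo's actual schema may differ.

```csharp
using System.Text.RegularExpressions;
using System.Xml.Linq;

public static class ToolCallParser
{
    // Returns (tool name, argument) if a <tool_call> block is found, else null.
    public static (string Name, string Arg)? Parse(string llmOutput)
    {
        // Grab the first <tool_call>...</tool_call> span from the free-form output.
        var match = Regex.Match(llmOutput, @"<tool_call>.*?</tool_call>",
                                RegexOptions.Singleline);
        if (!match.Success) return null;

        // The matched span is well-formed XML, so parse it directly.
        var xml = XElement.Parse(match.Value);
        return (xml.Element("name")?.Value ?? "", xml.Element("arg")?.Value ?? "");
    }
}
```

Because the tool call is embedded as plain text, this approach works with any model that can be prompted to emit the tags, regardless of whether the serving stack supports structured function calling.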
| Component | What it does |
|---|---|
| Agent | Holds agent state (goal, plan, task list, dialog history, vector DB) and runs the main loop |
| Executor | Calls the LLM, parses tool calls from the response, dispatches to tool handlers |
| PromptGenerator | Builds ChatML-format prompts with system instructions, tool definitions, task state, memory, and dialog history |
| RetriveMemory | Vector RAG — embeds the current query, finds relevant past entries via cosine similarity |
| BaseFunctions | Tool implementations: SetPlan, TaskListCreationOrUpdate, IsGoalAchieved |
| LlamaInfer | HTTP client for a local llama.cpp-compatible inference server (text generation + embeddings) |
| VectorService | Persistence layer for the vector database (JSON on disk) |
| AgentService | Persistence layer for agent state (JSON on disk) |
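The retrieval step that `RetriveMemory` performs can be sketched as follows: embed the query, score every stored entry by cosine similarity, and return the best match. The code below is a simplified stand-in (names and the in-memory database shape are illustrative); the real component also calls the embedding endpoint and reads vectors from the JSON-backed `VectorService`.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class CosineRetrieval
{
    // Cosine similarity: dot(a, b) / (|a| * |b|).
    public static double Cosine(double[] a, double[] b)
    {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.Sqrt(na) * Math.Sqrt(nb));
    }

    // Return the stored text whose embedding is most similar to the query vector.
    public static string Retrieve(double[] query,
                                  IReadOnlyList<(string Text, double[] Vec)> db) =>
        db.OrderByDescending(e => Cosine(query, e.Vec)).First().Text;
}
```

A linear scan like this is fine at proof-of-concept scale; a larger memory would call for an approximate nearest-neighbor index instead.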
- C# / .NET 8
- Local LLM inference via a llama.cpp-compatible HTTP server on `localhost:5037`
- Built for open-source models (Qwen, Mistral, etc.) using the ChatML prompt format
- No external API dependencies — everything runs locally
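A minimal sketch of how a client like `LlamaInfer` might talk to that server. llama.cpp's bundled server exposes a `/completion` endpoint taking a JSON body with `prompt` and `n_predict`; treat the exact route and response shape here as assumptions for any other "compatible" server, and the class itself as illustrative rather than the repo's actual implementation.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public class LlamaClientSketch
{
    private readonly HttpClient _http =
        new() { BaseAddress = new Uri("http://localhost:5037") };

    public Uri BaseAddress => _http.BaseAddress!;

    // POST the prompt to the local server and return the generated text.
    public async Task<string> CompleteAsync(string prompt)
    {
        var payload = JsonSerializer.Serialize(new { prompt, n_predict = 256 });
        var resp = await _http.PostAsync("/completion",
            new StringContent(payload, Encoding.UTF8, "application/json"));
        resp.EnsureSuccessStatusCode();

        // llama.cpp's server returns the generation in a "content" field.
        using var doc = JsonDocument.Parse(await resp.Content.ReadAsStringAsync());
        return doc.RootElement.GetProperty("content").GetString() ?? "";
    }
}
```

Because the transport is plain HTTP plus JSON, swapping in a different local inference server only means adjusting the route and field names.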
This was an early experiment from April 2024: two commits, a proof of concept. The ideas are straightforward (agent loop, tool use, vector memory), but it was a useful exercise that shows the trajectory from "what if I just let an LLM loop on itself?" to more structured agent systems.
MIT — see LICENSE.