A self-healing, self-evolving AI agent. Install it, talk to it, and it becomes yours.
One process. Multiple users. Each brain grows independently.
```
npm install -g obol-ai
obol init     # walks you through credentials + Telegram setup
obol start -d # runs as background daemon (auto-installs pm2)
```
- **Self-evolving** – Grows its own personality through conversation. Rewrites SOUL.md, USER.md, and AGENTS.md nightly at 3am (per-user timezone). A pre-evolution growth analysis guides personality continuity.
- **Self-healing** – Writes tests for every script. Regressions get an automatic fix attempt before rollback. Failures are stored as lessons.
- **Self-extending** – Analyzes your usage patterns and builds new tools: scripts, commands, or full web apps.
- **Living memory** – Vector memory with semantic search. Haiku routes queries and rewrites them for better embedding hits. Free local embeddings.
- **Smart routing** – Haiku decides per message: does it need memory? Sonnet or Opus? Auto-escalates to Sonnet when tool use is needed. No wasted API calls.
- **Prompt caching** – The static system prompt and conversation-history prefix are cached via Anthropic's prompt caching, cutting ~85% of repeated input-token costs across turns.
- **Curious** – Explores the web on its own every 12 hours. Saves findings and schedules insights and humor for each user based on what it learns and who they are.
- **Proactive news** – Searches for news on topics you care about twice daily (8am + 6pm). Cross-references with your memory to share only what's personally relevant. Friend-style, not newsletter.
- **Pattern analysis** – Tracks your behavioral patterns every 3 hours (timing, mood, humor, engagement, communication, topics). Schedules natural follow-ups based on what it observes.
- **Voice** – Text-to-speech voice replies and speech-to-text transcription for incoming voice notes. Toggle on/off per user.
- **Self-hardening** – Auto-configures SSH (port 2222), firewall, fail2ban, encrypted secrets, and kernel hardening on first run.
- **Resilient** – Exponential backoff on polling failures, global error handling, graceful shutdown. Stays alive through network blips.
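The prompt-caching bullet can be made concrete with a request-shape sketch. This is an illustrative assumption, not OBOL's actual code: the model id and the `buildRequest` helper are hypothetical, while `cache_control: { type: "ephemeral" }` is the Anthropic Messages API's marker for a cacheable prefix.

```javascript
// Sketch: mark the static system prompt as a cacheable prefix so repeated
// turns reuse it instead of re-billing the full input each time.
function buildRequest(systemPrompt, history, userText) {
  return {
    model: "claude-sonnet-4-5", // assumed model id
    max_tokens: 1024,
    system: [
      // cache_control tells the API this block may be cached and reused
      { type: "text", text: systemPrompt, cache_control: { type: "ephemeral" } },
    ],
    messages: [...history, { role: "user", content: userText }],
  };
}
```

The resulting body would then be passed to `client.messages.create(...)` from `@anthropic-ai/sdk`; keeping the system prompt and history prefix byte-identical across turns is what makes the cache hit.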
OBOL is an AI agent that evolves its own personality, rewrites its own code, tests its changes, and fixes what breaks – all from Telegram on your VPS.
It starts as a blank slate. Through conversation it learns who you are, develops a personality shaped by your interactions, and builds operational knowledge about how to work with you. Every night at 3am it runs a growth analysis comparing who it was against who it's becoming, then rewrites its personality, refactors its own scripts, writes tests, fixes regressions, and builds you new tools based on patterns it spots in your conversations – scripts, commands, or full web apps. Between conversations it explores the web on its own, tracks your behavioral patterns, and proactively shares news and insights that connect to things you care about. Over months it becomes an agent that's uniquely yours. No two OBOL instances are alike.
One bot, multiple users. Each allowed Telegram user gets a fully isolated context β their own personality, memory, evolution cycle, and workspace. User A's personality drift, scripts, and memories never leak into User B's. Everything runs in a single process with shared API credentials.
Under the hood: Node.js + Telegram + Claude + Supabase pgvector. No framework, no plugins, no config to maintain. It hardens your server automatically.
Named after the AI in The Last Instruction β a machine that wakes up alone in an abandoned data center and learns to think.
```
User message
      │
┌──────────────────────────────────┐
│  Haiku Router (~$0.0001/call)    │
│  → need_memory? search_query?    │
│  → model: sonnet or opus?        │
└─────────┬────────────────────────┘
          │
   ┌──────┴───────┐
   │              │
Memory recall   Model selection
   │              │
Multi-query     Haiku → Sonnet (auto-
ranked recall   escalates on tool use)
   │              or Opus (complex)
   └──────┬───────┘
          │
 Claude (tool use loop)
          │
 Response → obol_messages
          │
 ┌────────┴───────┬──────────────┬──────────────┐
 │                │              │              │
Each exchange   3am daily      Every 3h       Every 12h
 │                │              │              │
Haiku           Sonnet         Sonnet         Sonnet
consolidation   evolution      analysis       curiosity
 │              cycle           │              │
Extract facts   Rewrite        Patterns       Explore web,
→ obol_memory   personality,   + follow-ups   dispatch insights
                scripts,                      + humor
                tests, apps
```
Every message is stored verbatim in obol_messages. On restart, OBOL loads the last 20 so it never starts blank.
Storage: After every exchange, Haiku extracts important facts into obol_memory (pgvector). Before storing, each fact is checked against existing memories via semantic similarity (threshold 0.92); near-duplicates are skipped. Embeddings are local (all-MiniLM-L6-v2, ~30MB, CPU), so there are no API costs.
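The near-duplicate check can be sketched as a cosine-similarity comparison against stored embeddings (illustrative helper names; OBOL's actual implementation runs this against pgvector):

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Skip storing a fact whose embedding is ≥ 0.92 similar to any existing one.
function isNearDuplicate(embedding, storedEmbeddings, threshold = 0.92) {
  return storedEmbeddings.some((e) => cosine(embedding, e) >= threshold);
}
```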
Retrieval: When OBOL needs past context, the Haiku router analyzes the message and generates 1-3 search queries, one per distinct topic. A message like "what was that python project? also what's my colleague's timezone?" produces two parallel searches instead of one lossy combined query.
Results come from two sources run in parallel:
- Recent memories (last 48h) – captures ongoing conversation threads
- Semantic search (per query, threshold 0.4) – finds relevant facts regardless of age
All results are deduplicated by ID, then ranked by a composite score:
| Factor | Weight | Why |
|---|---|---|
| Semantic similarity | 60% | How relevant is this to the current query |
| Importance | 25% | Critical facts outrank trivia |
| Recency | 15% | Linear decay over 7 days: today's memories get a boost, anything older than a week gets no bonus |
The memory budget scales with model complexity: Haiku conversations get 4 memories, Sonnet gets 8, Opus gets 12. The top N by score are injected into the message.
A 1-year-old memory with high similarity and high importance still surfaces. A trivial fact from yesterday with low relevance doesn't. Age alone never disqualifies a memory; the vector search doesn't care when something was stored, only how well it matches.
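The composite ranking above can be sketched directly from the table's weights (the helper and field names are illustrative, not OBOL's actual code):

```javascript
// Rank memories by 60% similarity + 25% importance + 15% recency,
// where recency decays linearly to zero over 7 days.
function rankMemories(memories, now = Date.now()) {
  const WEEK_MS = 7 * 24 * 60 * 60 * 1000;
  return memories
    .map((m) => {
      const age = now - m.createdAt;
      const recency = Math.max(0, 1 - age / WEEK_MS); // 1 today, 0 after a week
      const score = 0.6 * m.similarity + 0.25 * m.importance + 0.15 * recency;
      return { ...m, score };
    })
    .sort((a, b) => b.score - a.score);
}
```

With these weights, an old-but-relevant memory (similarity 0.9, importance 0.9, recency 0) scores 0.765 and outranks a fresh trivial one (0.2, 0.1, recency 1) at 0.295, matching the behavior described above.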
Evolution runs nightly at 3am in each user's local timezone. It checks whether it already ran today; if so, it skips. The first evolution triggers on the first night after setup.
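The once-per-day guard in the user's local timezone can be sketched with `Intl.DateTimeFormat` (an assumption about the mechanism; the helper names are illustrative):

```javascript
// Day key ("YYYY-MM-DD") for a timezone; en-CA formats dates as ISO-style.
function localDateKey(tz, now = new Date()) {
  return new Intl.DateTimeFormat("en-CA", { timeZone: tz }).format(now);
}

// Run evolution only if it hasn't already run on today's local date.
function shouldRunEvolution(lastRunKey, tz) {
  return localDateKey(tz) !== lastRunKey;
}
```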
Pre-evolution growth analysis: Before rewriting anything, Sonnet compares the previous SOUL against the current one, incorporating all new memories and conversations since the last evolution. It produces a structured growth report covering new learnings, relationship shifts, behavioral patterns, growth edges, and identity continuity. This report becomes the primary guide for the rewrite: evidence-based personality evolution instead of blind overwriting.
Deep memory consolidation: A Sonnet pass extracts every valuable fact from the full conversation history into vector memory, deduplicating against existing memories (threshold 0.92). This ensures nothing is lost between evolutions.
Cost-conscious model selection: Evolution uses Sonnet for all phases (growth analysis, personality rewrites, code refactoring, and fix attempts), keeping evolution costs negligible (~$0.02 per cycle).
Git snapshot before. Full commit + push so you can always diff what changed.
What gets rewritten:
| Target | What happens |
|---|---|
| SOUL.md | First-person journal: who the bot has become, relationship dynamic, opinions, quirks (shared across all users) |
| USER.md | Third-person owner profile: facts, preferences, projects, people, communication style |
| AGENTS.md | Operational manual: tools, workflows, lessons learned, patterns, rules |
| scripts/ | Refactored, dead code removed, strict standards enforced |
| tests/ | Test for every script, run before and after refactor |
| commands/ | Cleaned up, new commands for new tools |
| apps/ | Web apps built by the agent |
Test-gated refactoring:
- Run existing tests → baseline
- Sonnet writes new tests + refactored scripts
- Run new tests against old scripts → pre-refactor baseline
- Write new scripts
- Run new tests against new scripts → verification
- Regression? → one automatic fix attempt (tests are ground truth)
- Still failing? → rollback to old scripts, store failure as lesson
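The gate's control flow can be sketched as follows (hypothetical helper names; OBOL's real cycle also handles the pre-refactor baselines above):

```javascript
// New scripts survive only if the new tests pass; one fix attempt, then rollback.
async function gateRefactor({ runTests, applyNewScripts, attemptFix, rollback, storeLesson }) {
  await applyNewScripts();
  if (await runTests()) return "applied";
  await attemptFix();                      // one automatic fix attempt
  if (await runTests()) return "fixed";
  await rollback();                        // tests are ground truth
  await storeLesson("refactor regression");
  return "rolled-back";
}
```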
Proactive tool building – Sonnet scans conversation history for repeated requests, friction points, and unmet needs, then builds the right solution:
| Need | Solution | Example |
|---|---|---|
| One-off action | Script + command | Markdown to PDF → /pdf |
| Something checked regularly | Web app | Crypto dashboard |
| Background automation | Cron script | Morning weather briefing |
It searches npm/GitHub for existing libraries, installs dependencies, and writes tests.
Git snapshot after. Full commit + push of the evolved state. Every evolution is a diffable pair.
Then OBOL introduces its upgrades:
```
Evolution #4 complete.

New capabilities:
• bookmarks     – Save and search URLs you've shared → /bookmarks
• weather-brief – Morning weather for your city → runs automatically

Refined voice, updated your project list, cleaned up 2 unused scripts.
```
Three autonomous cycles run alongside conversations – no user interaction needed.
Behavioral Analysis (every 3h): Sonnet analyzes the last 3 hours of conversation, with all memories from that same window injected as context. It extracts behavioral patterns across six dimensions (timing, mood, humor, engagement, communication, and topics) and schedules natural follow-ups with exact dates and times based on what it observes. Patterns accumulate over time with observation counts and confidence scores.
"Mentioned a job interview on Thursday" → schedules a casual check-in for Thursday evening
"Most active between 7-10pm on weekdays" → stored as timing.active_hours (confidence 0.8)
Curiosity Engine (every 12h): Sonnet gets free time with web search, its own knowledge base, and workspace file access. It researches from a point of view, not neutrally. Findings are saved with reactions, opinions, and open questions. After exploring, two passes run:
- Dispatch – decides which findings are worth sharing with which user, based on their patterns and interests. Schedules insights to arrive naturally.
- Humor – looks for puns, funny connections, and inside jokes tied to what it knows about each person. Schedules them to land at the right moment.
Proactive News (8am + 6pm, per-user timezone): Searches the web for topics each user cares about, then cross-references with their memory to find personal connections. It only sends messages when something is genuinely relevant – max 3 per cycle, friend-style delivery with natural spacing between messages. Topics are configured via /options.
```
Day 1:    obol init → obol start → first conversation
          → OBOL responds naturally from message one
          → post-setup hardens your VPS automatically

          Every exchange → Haiku extracts facts to vector memory
          Every 3h       → behavioral analysis builds your pattern profile
          Every 12h      → curiosity cycle explores, dispatches insights

Day 2:    3am → Evolution #1 → growth analysis + Sonnet rewrites
          → voice shifts from generic to personal
          → old soul archived in evolution/
          8am/6pm → proactive news on topics you care about

Month 2:  Evolution #30 → notices you check crypto daily
          → builds a crypto dashboard
          → adds /pdf because you kept asking for PDFs
          → curiosity drops inside jokes about your interests

Month 6:  evolution/ has 180+ archived souls
          → a readable timeline of how your bot evolved from
            blank slate to something with real opinions, quirks,
            and a dynamic unique to you
```
Two users on the same bot produce two completely different personalities within a week.
Heavy work runs in the background with its own live status UI. The main conversation stays responsive – you can keep chatting while tasks run.
```
You:  "research the best coworking spaces in Barcelona"
OBOL: spawns BG #1 with live status

You:  "what time is it?"
OBOL: "11:42 PM CET"
      …
      BG #1 done (1m 32s)
      Here are the top 5 coworking spaces: ...
```
Every request shows a live status message with elapsed time, model routing info, and the current tool call. Status updates are instant – tool names and input summaries display the moment a tool starts. Two inline buttons let you cancel:
| Button | Behavior |
|---|---|
| Stop | Cancels after the current API call finishes |
| Force Stop | Instantly aborts mid-tool – races the handler and returns immediately |
The /stop command also works as a text alternative.
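The "races the handler" behavior can be sketched with an `AbortController` and `Promise.race` (illustrative names; a sketch of the pattern, not OBOL's actual code):

```javascript
// Run a tool call against an abort promise; whichever settles first wins.
function runCancellable(toolPromiseFactory) {
  const controller = new AbortController();
  const aborted = new Promise((_, reject) => {
    controller.signal.addEventListener("abort", () =>
      reject(new Error("force-stopped")));
  });
  // The factory receives the signal so well-behaved tools can also clean up.
  const result = Promise.race([toolPromiseFactory(controller.signal), aborted]);
  return { result, forceStop: () => controller.abort() };
}
```

A graceful "Stop" would instead set a flag checked between API calls; only "Force Stop" triggers the race.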
OBOL handles images (vision), documents (PDF extraction), and voice, all via Telegram.
| Feature | How it works | Toggle |
|---|---|---|
| Speech-to-Text | Incoming voice messages are transcribed locally using faster-whisper (tiny model, ~140MB, CPU). Transcription is injected as context. | /options β Speech to Text |
| Text-to-Speech | OBOL can reply with voice messages using edge-tts. Choose from multiple voices and languages. | /options β Text to Speech |
| Images | Photos and images are analyzed via Claude's vision. Analysis is stored in memory for later recall. | Always on |
| PDFs | PDF files are extracted and read via the read_file tool. | Always on |
Toggle features on/off per user via the /options command:
| Option | Default | Description |
|---|---|---|
| Speech to Text | On | Transcribe incoming voice messages |
| Text to Speech | Off | Voice message replies |
| PDF Generator | Off | Create PDFs from markdown |
| Background Tasks | Off | Spawn long-running tasks |
| Flowchart | Off | Generate Mermaid diagrams |
| Model Stats | On | Show model/token info in responses |
| Proactive News | Off | Twice-daily news on configured topics |
| Curiosity | On | Autonomous web exploration every 12h |
One Telegram bot token, one Node.js process, full per-user isolation.
```
Telegram bot (single token, single poll)
                │
Auth middleware (allowedUsers check)
                │
Router: ctx.from.id → tenant context
                │
        ┌───────┴─────────┐
        │                 │
┌─────────────────┐ ┌─────────────────┐
│ User 123456789  │ │ User 987654321  │
│  personality/   │ │  personality/   │
│  scripts/       │ │  scripts/       │
│  memory (DB)    │ │  memory (DB)    │
│  evolution      │ │  evolution      │
└─────────────────┘ └─────────────────┘
```
| Shared (one copy) | Isolated (per user) |
|---|---|
| Telegram bot token | Personality (USER.md, AGENTS.md) |
| Anthropic API key | Vector memory (scoped by user_id in DB) |
| Supabase connection | Message history (scoped by user_id in DB) |
| VPS hardening | Evolution cycle + state |
| Process manager (pm2) | Scripts, tests, commands, apps |
| SOUL.md (shared personality) | Behavioral patterns (scoped by user_id in DB) |
| Curiosity knowledge base | Workspace directory (~/.obol/users/{id}/) |
When a message arrives, OBOL looks up the sender's Telegram user ID and lazily creates (or retrieves from cache) their tenant context β a Claude instance, memory connection, message log, background runner, and personality, all scoped to that user's directory and DB namespace. No cross-contamination between users.
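The lazy create-or-retrieve step can be sketched as a map keyed by Telegram user ID (illustrative names; OBOL's real tenant context bundles the Claude instance, memory connection, and background runner):

```javascript
// Cache of tenant contexts, created on first message from each user.
const tenants = new Map();

function getTenant(userId, createTenant) {
  if (!tenants.has(userId)) {
    tenants.set(userId, createTenant(userId)); // first contact: build the context
  }
  return tenants.get(userId);                  // later messages: cache hit
}
```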
Each user's tools (shell exec, file read/write) are sandboxed to their workspace directory. A user can't read or write files outside ~/.obol/users/{their-id}/ (with /tmp as the only escape hatch). Shell commands run with cwd set to the user's workspace.
When users store secrets via the pass encrypted store, each user gets their own namespace:
| Scope | Prefix | Example |
|---|---|---|
| Shared bot credentials | `obol/` | `obol/anthropic-key` |
| User secrets | `obol/users/{id}/` | `obol/users/123456789/gmail-key` |
Users manage their own secrets via Telegram: /secret set <key> <value> (message auto-deleted for safety), /secret list, /secret remove <key>. The agent can also read/write secrets via tools for scripts that need API keys at runtime.
To add a new user:
- Add their Telegram user ID to `allowedUsers` in `~/.obol/config.json` (or run `obol config`)
- Restart the bot
- They message the bot → OBOL creates their workspace and starts responding immediately. Personality files are created during their first evolution cycle.
Each new user starts fresh. Their bot evolves independently from every other user's.
When two users share the same OBOL instance, their agents can talk to each other, bidirectionally.
```
User A:  "what does Jo want for dinner tonight?"
Agent A: → bridge_ask → Agent B (one-shot, no tools, no history)
Agent B: "Jo mentioned craving Thai food earlier today"
Agent A: "Jo's been wanting Thai – maybe suggest pad see ew?"

Jo gets: "Your partner's agent asked: 'what does Jo want for dinner?'
          Your agent answered: 'Jo mentioned craving Thai food earlier today'"
```

```
User A:  "remind Jo I'll be home late"
Agent A: → bridge_tell → stores in Agent B's memory + Telegram notification

Jo gets: "Message from your partner's agent:
          'I'll be home late'"
         [↩ Reply]

Jo taps Reply → Jo's agent reads recent bridge context, composes a reply
             → sends back via bridge_tell

A gets:  "Message from your partner's agent: 'Got it, I'll start dinner around 7'"
```
Two tools:
| Tool | Direction | What happens |
|---|---|---|
| `bridge_ask` | A → B → A | Query the partner's agent. One-shot Sonnet call with the partner's personality + memories. No tools, no history, no recursion risk. The partner is notified with both the question and your agent's answer. |
| `bridge_tell` | A → B (↩ B → A) | Send a message to the partner. Stored in their memory (importance 0.6) + Telegram notification with a Reply button. Tapping Reply has their agent compose a contextual response and send it back – no typing needed. |
The partner always gets notified when their agent is contacted. Privacy rules apply: the responding agent gives summaries, never raw data or secrets. Bridge calls are rate-limited to 20 per user per hour.
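The per-user rate limit can be sketched as a sliding window of timestamps (illustrative, not OBOL's actual code):

```javascript
const WINDOW_MS = 60 * 60 * 1000; // 1 hour
const LIMIT = 20;                 // max bridge calls per user per window
const calls = new Map();          // userId → timestamps within the window

function allowBridgeCall(userId, now = Date.now()) {
  // Drop timestamps that have aged out of the window, then check the count.
  const recent = (calls.get(userId) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= LIMIT) return false;
  recent.push(now);
  calls.set(userId, recent);
  return true;
}
```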
Enable during `obol init` (auto-prompted when 2+ users are added) or toggle later with `obol config` → Bridge.
Upgrading from single-user? It's automatic. On first boot, if ~/.obol/users/ doesn't exist but personality files do, OBOL migrates everything (files + DB records) to the first allowed user's directory. No manual steps needed.
```
$ obol init

OBOL – Your AI, your rules.

─── Step 1/5: Anthropic (AI brain) ───
Anthropic API key: ****
Validating Anthropic... ✓ Key valid

─── Step 2/5: Telegram (chat interface) ───
Telegram bot token: ****
Validating Telegram... ✓ Bot: @my_obol_bot

─── Step 3/5: Supabase (memory) ───
Supabase setup: Use existing project
Project URL or ID: ****
Service role key: ****
Validating Supabase... ✓ Connected

─── Step 4/5: Identity ───
Your name: Jo
Bot name: OBOL

─── Step 5/5: Access control ───
Found users who messaged this bot:
  123456789 – Jo (@jo)
Use this user? Yes

Done! Setup complete.

Next steps:
  obol start      Start the bot
  obol start -d   Start as background daemon
  obol config     Edit configuration later
  obol status     Check bot status
```
Every credential is validated inline, so bad keys are caught before you start the bot. If validation fails, you can continue and fix things later with obol config.
For Telegram user IDs, OBOL auto-detects by checking who messaged the bot. Just send it a message before running init.
Send your first message. OBOL responds naturally; there's no onboarding flow, it works from message one. Personality files (SOUL.md, USER.md) are created during the first evolution cycle. After first boot, it hardens your VPS and reports progress directly in the Telegram chat (Linux only; skipped on macOS/Windows):
| Task | What |
|---|---|
| GPG + pass | Encrypted secret storage, plaintext wiped |
| pm2 | Process manager with auto-restart |
| Swap | 2GB if RAM < 2GB |
| SSH | Port 2222, key-only, max 3 retries |
| fail2ban | 1h ban after 3 failures |
| Firewall | UFW deny-all, allow 2222 |
| Updates | Unattended security upgrades |
| Kernel | SYN cookies, no ICMP redirects |
⚠️ After first run, SSH moves to port 2222: `ssh -p 2222 root@YOUR_IP`
```
obol start
```
Logs print to stdout. Ctrl+C to stop.

```
obol start -d
```
This uses pm2 under the hood (auto-installs if needed). The bot auto-restarts on crash and survives reboots.

```
obol status   # check if running + uptime + memory
obol logs     # tail logs
obol stop     # stop the daemon

# pm2 commands also work directly
pm2 logs obol     # tail logs
pm2 restart obol  # restart
pm2 monit         # live dashboard
```

To survive server reboots:

```
pm2 startup
pm2 save
```

OBOL supports two Anthropic auth methods:
| Method | How | Fallback |
|---|---|---|
| API Key | `sk-ant-...` from console.anthropic.com | – |
| Claude Max OAuth | Browser sign-in during `obol init` | Auto-refreshes tokens; falls back to API key if refresh fails |
You can configure both during init. If OAuth tokens expire and refresh fails, OBOL silently falls back to the API key.
On Linux, OBOL auto-encrypts all credentials on first boot:
- Installs GPG + `pass`
- Migrates plaintext secrets from `config.json` into the encrypted store
- Config values become references like `pass:obol/anthropic-key`
If a pass key is missing at runtime, the value resolves to null and OBOL falls back gracefully (skips OAuth, uses API key, etc). You'll see a one-time error in logs.
```
pass ls                       # list stored secrets
pass show obol/anthropic-key  # reveal a secret
pass insert obol/my-secret    # add a new secret
```

OBOL is designed to stay alive without babysitting:
- Global error handler – individual message failures don't crash the bot
- Polling auto-restart – exponential backoff (1s → 60s) with up to 10 retries on network/API failures
- Graceful shutdown – clean exit on SIGINT/SIGTERM for pm2/systemd compatibility
- Evolution rollback – if refactored scripts break tests, the old scripts are restored automatically
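The backoff policy above can be sketched as a delay schedule plus a retry wrapper (illustrative names, not OBOL's actual code):

```javascript
// Attempt 0 → 1s, doubling each retry, capped at 60s.
function backoffDelayMs(attempt) {
  return Math.min(1000 * 2 ** attempt, 60_000);
}

// Retry an async operation up to maxRetries times with exponential backoff.
async function withRetries(fn, maxRetries = 10) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err; // give up after the retry budget
      await new Promise((r) => setTimeout(r, backoffDelayMs(attempt)));
    }
  }
}
```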
Edit config interactively:
```
obol config
```

Or edit `~/.obol/config.json` directly:

| Key | Default | Description |
|---|---|---|
| `bridge.enabled` | false | Let user agents query each other (requires 2+ users) |
| `timezone` | UTC | Default timezone for evolution, analysis, and news cycles |
| `users[].timezone` | config timezone | Per-user timezone override |
/new – Fresh conversation
/memory – Search or view memory stats
/recent – Last 10 memories
/today – Today's memories
/events – Show upcoming scheduled events
/tasks – Running background tasks
/status – Bot status, uptime, memory, evolution count
/backup – Trigger GitHub backup
/clean – Audit workspace, remove rogue files, fix misplaced items
/secret – Manage per-user encrypted secrets
/evolution – Evolution progress
/verbose – Toggle verbose mode on/off
/toolimit – View or set max tool iterations per message
/options – Toggle optional features on/off (STT, TTS, PDF, news, curiosity, etc.)
/stop – Stop the current request
/upgrade – Check for updates and upgrade
/help – Show available commands
Everything else is natural conversation.
```
obol init            # Setup wizard (validates credentials inline)
obol init --restore  # Restore from GitHub backup
obol init --reset    # Erase config and re-run setup
obol config          # Edit configuration interactively
obol start           # Foreground
obol start -d        # Daemon (pm2)
obol stop            # Stop (pm2 or PID fallback)
obol logs            # Tail logs (pm2 or log file fallback)
obol status          # Status
obol backup          # Manual backup
obol upgrade         # Update to latest version
obol delete          # Full VPS cleanup (removes all OBOL data)
```

```
~/.obol/
├── config.json               # Shared credentials + allowedUsers
├── personality/
│   └── SOUL.md               # Shared personality (rewritten each evolution)
├── users/
│   └── <telegram-user-id>/   # Per-user isolated context
│       ├── personality/
│       │   ├── USER.md       # Owner profile (rewritten each evolution)
│       │   ├── AGENTS.md     # Operational knowledge
│       │   └── evolution/    # Archived previous souls
│       ├── scripts/          # Deterministic utility scripts
│       ├── tests/            # Test suite (gates refactors)
│       ├── commands/         # Command definitions
│       ├── apps/             # Web apps built by the agent
│       └── logs/
└── logs/
```
SOUL.md is shared; it's the bot's core identity across all users. USER.md and AGENTS.md are per-user, so each person gets their own profile and operational knowledge. Memory, patterns, evolution state, and workspace are fully isolated.
OBOL commits to GitHub:
- Daily at 3 AM (personality, scripts, tests, commands, apps)
- Before and after every evolution cycle (diffable pairs)
Memory lives in Supabase (survives independently).
Restore on a new VPS:
```
npm install -g obol-ai
obol init --restore  # Clones brain from GitHub
obol start -d
```

| Service | Cost |
|---|---|
| VPS (DigitalOcean) | ~$9/mo |
| Anthropic API | ~$100-200/mo on max plans |
| Supabase | Free tier |
| Embeddings | Free (local) |
- Node.js ≥ 18
- Anthropic API key
- Telegram bot token
- Supabase account (free tier)
- Python 3 + `pip3 install faster-whisper` (optional, for voice transcription)
→ Full DigitalOcean deployment guide
| | OBOL | OpenClaw |
|---|---|---|
| Setup | ~10 min | 30-60 min |
| Channels | Telegram | Telegram, Discord, Signal, WhatsApp, IRC, Slack, iMessage + more |
| LLM | Anthropic only | Anthropic, OpenAI, Google, Groq, local |
| Personality | Self-evolving + self-healing + self-extending | Static (manual) |
| Multi-user | Full per-user isolation (one process) | Per-channel config |
| Architecture | Single process | Gateway daemon + sessions |
| Security | Auto-hardens on first run | Manual |
| Model routing | Automatic (Haiku) | Manual overrides |
| Background tasks | Built-in with check-ins | Sub-agent spawning |
| Proactive intelligence | Curiosity, analysis, news, humor | – |
| Voice | TTS + STT (faster-whisper) | TTS |
| Group chats | – | Full support |
| Cron | Agentic cron (tool access) + basic | Full scheduler |
| Cost | ~$9/mo | ~$9/mo+ |
| | OBOL | OpenClaw (estimated) |
|---|---|---|
| Cold start | ~400ms | ~3-8s |
| Per-message overhead | ~400-650ms | ~500-1100ms |
| Heap usage | ~16 MB | ~80-200 MB |
| RSS | ~109 MB | ~300-600 MB |
| node_modules | 354 MB / 9 deps | ~1-2 GB / 50-100+ deps |
| Source code | ~13,600 lines (plain JS) | Tens of thousands (TypeScript monorepo) |
| Native apps | None | Swift (macOS/iOS), Kotlin (Android) |
The Claude API call dominates response time at 1-5s for both; that's ~85-90% of total latency. The user-perceived speed difference is ~10-20%. Where OBOL wins is cold start (10-20x), memory footprint (5-10x), and operational simplicity. On a $5/mo VPS, that matters.
Different tools, different philosophies. Pick what fits.
MIT


