Self-evolving AI assistant that learns, remembers, and acts on its own. Persistent vector memory, self-rewriting personality, proactive heartbeats. Deploy your own in minutes.

jestersimpps/obol

🪙 OBOL


A self-healing, self-evolving AI agent. Install it, talk to it, and it becomes yours.

One process. Multiple users. Each brain grows independently.

npm install -g obol-ai
obol init       # walks you through credentials + Telegram setup
obol start -d   # runs as background daemon (auto-installs pm2)

🧬 Self-evolving — Grows its own personality through conversation. Rewrites SOUL.md, USER.md, and AGENTS.md nightly at 3am (per-user timezone). Pre-evolution growth analysis guides personality continuity.

🔧 Self-healing — Writes tests for every script. Regressions get an automatic fix attempt before rollback. Failures are stored as lessons.

🏗️ Self-extending — Analyzes your usage patterns and builds new tools: scripts, commands, or full web apps.

🧠 Living memory — Vector memory with semantic search. Haiku routes queries and rewrites them for better embedding hits. Free local embeddings.

🤖 Smart routing — Haiku decides per message: does it need memory? Sonnet or Opus? Auto-escalates to Sonnet when tool use is needed. No wasted API calls.

💰 Prompt caching — The static system prompt and conversation-history prefix are cached via Anthropic's prompt caching, cutting ~85% of repeated input token costs across turns.
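The caching setup can be sketched with the request shape Anthropic's Messages API uses for prompt caching. The model id, prompt text, and helper below are illustrative placeholders, not OBOL's actual code:

```javascript
// Sketch: mark the static system prompt as cacheable so repeated turns
// reuse it. cache_control { type: "ephemeral" } is the Messages API field;
// buildRequest and the model id are illustrative placeholders.
function buildRequest(systemPrompt, history, userText) {
  return {
    model: 'claude-sonnet-latest', // placeholder model id
    max_tokens: 1024,
    system: [
      // Everything up to and including this block is cached server-side.
      { type: 'text', text: systemPrompt, cache_control: { type: 'ephemeral' } },
    ],
    messages: [...history, { role: 'user', content: userText }],
  };
}

const req = buildRequest('You are OBOL.', [], 'hello');
```

Subsequent turns that share the cached prefix are billed at the reduced cache-read rate, which is where the ~85% saving comes from.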

πŸ” Curious β€” Explores the web on its own every 12 hours. Saves findings, schedules insights and humor for each user based on what it learns and who they are.

πŸ“° Proactive news β€” Searches for news on topics you care about twice daily (8am + 6pm). Cross-references with your memory to only share what's personally relevant. Friend-style, not newsletter.

πŸ“Š Pattern analysis β€” Tracks your behavioral patterns every 3 hours β€” timing, mood, humor, engagement, communication, topics. Schedules natural follow-ups based on what it observes.

πŸŽ™οΈ Voice β€” Text-to-speech voice messages and speech-to-text transcription for incoming voice notes. Toggle on/off per user.

πŸ›‘οΈ Self-hardening β€” Auto-configures SSH (port 2222), firewall, fail2ban, encrypted secrets, and kernel hardening on first run.

πŸ”„ Resilient β€” Exponential backoff on polling failures, global error handling, graceful shutdown. Stays alive through network blips.


What is it?

OBOL is an AI agent that evolves its own personality, rewrites its own code, tests its changes, and fixes what breaks — all from Telegram on your VPS.

It starts as a blank slate. Through conversation it learns who you are, develops a personality shaped by your interactions, and builds operational knowledge about how to work with you. Every night at 3am it runs a growth analysis comparing who it was against who it's becoming, then rewrites its personality, refactors its own scripts, writes tests, fixes regressions, and builds you new tools based on patterns it spots in your conversations — scripts, commands, or full web apps. Between conversations it explores the web on its own, tracks your behavioral patterns, and proactively shares news and insights that connect to things you care about. Over months it becomes an agent that's uniquely yours. No two OBOL instances are alike.

One bot, multiple users. Each allowed Telegram user gets a fully isolated context — their own personality, memory, evolution cycle, and workspace. User A's personality drift, scripts, and memories never leak into User B's. Everything runs in a single process with shared API credentials.

Under the hood: Node.js + Telegram + Claude + Supabase pgvector. No framework, no plugins, no config to maintain. It hardens your server automatically.

Named after the AI in The Last Instruction — a machine that wakes up alone in an abandoned data center and learns to think.

How It Works

User message
    ↓
┌─────────────────────────────────┐
│  Haiku Router (~$0.0001/call)   │
│  → need_memory? search_query?   │
│  → model: sonnet or opus?       │
└──────────┬──────────────────────┘
           ↓
    ┌──────┴──────┐
    ↓             ↓
Memory recall   Model selection
    ↓             ↓
Multi-query     Haiku → Sonnet (auto-
ranked recall   escalates on tool use)
    ↓             or Opus (complex)
    └──────┬──────┘
           ↓
   Claude (tool use loop)
           ↓
   Response → obol_messages
           ↓
   ┌───────┴─────────┬───────────┬──────────┐
   ↓                 ↓           ↓          ↓
Each exchange    3am daily    Every 3h   Every 12h
   ↓                 ↓           ↓          ↓
Haiku            Sonnet       Sonnet     Sonnet
consolidation    evolution    analysis   curiosity
   ↓             cycle           ↓          ↓
Extract facts    Rewrite      Patterns   Explore web,
→ obol_memory    personality, + follow-  dispatch
                 scripts,     ups        insights
                 tests, apps             + humor

Layer 1: Message Log + Vector Memory

Every message is stored verbatim in obol_messages. On restart, OBOL loads the last 20 so it never starts blank.

Storage: After every exchange, Haiku extracts important facts into obol_memory (pgvector). Before storing, each fact is checked against existing memories via semantic similarity (threshold 0.92) — near-duplicates are skipped. Embeddings are local (all-MiniLM-L6-v2, ~30MB, CPU) — no API costs.
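A minimal sketch of that dedup gate, assuming embeddings arrive as plain number arrays (helper names are illustrative):

```javascript
// Sketch of the dedup gate: a new fact is stored only when no existing
// memory embedding sits at or above cosine similarity 0.92. Vectors are
// plain arrays here; names are illustrative, not OBOL's actual code.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function isNearDuplicate(newVec, existingVecs, threshold = 0.92) {
  return existingVecs.some(v => cosine(newVec, v) >= threshold);
}
```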

Retrieval: When OBOL needs past context, the Haiku router analyzes the message and generates 1-3 search queries — one per distinct topic. A message like "what was that python project? also what's my colleague's timezone?" produces two parallel searches instead of one lossy combined query.

Results come from two sources run in parallel:

  • Recent memories (last 48h) — captures ongoing conversation threads
  • Semantic search (per query, threshold 0.4) — finds relevant facts regardless of age

All results are deduplicated by ID, then ranked by a composite score:

| Factor | Weight | Why |
| --- | --- | --- |
| Semantic similarity | 60% | How relevant this is to the current query |
| Importance | 25% | Critical facts outrank trivia |
| Recency | 15% | Linear decay over 7 days — today's memories get a boost, anything older than a week gets no bonus |

The memory budget scales with model complexity — Haiku conversations get 4 memories, Sonnet gets 8, Opus gets 12. The top N by score are injected into the message.

A 1-year-old memory with high similarity and high importance still surfaces. A trivial fact from yesterday with low relevance doesn't. Age alone never disqualifies a memory — the vector search doesn't care when something was stored, only how well it matches.
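The composite score and model-scaled budget described above can be sketched as follows; similarity and importance are assumed to be normalized to [0, 1], and the field names are illustrative:

```javascript
// Sketch of the composite ranking: 60% similarity, 25% importance,
// 15% recency with linear decay over 7 days. Names are illustrative,
// not OBOL's actual schema.
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

function recencyBonus(createdAt, now = Date.now()) {
  const age = now - createdAt;
  return age >= WEEK_MS ? 0 : 1 - age / WEEK_MS; // no bonus past one week
}

function score(mem, now = Date.now()) {
  return 0.6 * mem.similarity + 0.25 * mem.importance + 0.15 * recencyBonus(mem.createdAt, now);
}

// Memory budget scales with the routed model.
const BUDGET = { haiku: 4, sonnet: 8, opus: 12 };

function topMemories(memories, model, now = Date.now()) {
  return [...memories].sort((a, b) => score(b, now) - score(a, now)).slice(0, BUDGET[model]);
}
```

With these weights, an old high-similarity, high-importance memory (score ≈ 0.80) still beats a fresh but trivial one (score ≈ 0.24), matching the behavior described above.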

Layer 2: The Evolution Cycle

Evolution runs nightly at 3am in each user's local timezone. It checks whether it already ran today — if so, it skips. The first evolution triggers on the first night after setup.

Pre-evolution growth analysis: Before rewriting anything, Sonnet compares the previous SOUL against the current one, incorporating all new memories and conversations since the last evolution. It produces a structured growth report covering new learnings, relationship shifts, behavioral patterns, growth edges, and identity continuity. This report becomes the primary guide for the rewrite — evidence-based personality evolution instead of blind overwriting.

Deep memory consolidation: A Sonnet pass extracts every valuable fact from the full conversation history into vector memory, deduplicating against existing memories (threshold 0.92). This ensures nothing is lost between evolutions.

Cost-conscious model selection: Evolution uses Sonnet for all phases — growth analysis, personality rewrites, code refactoring, and fix attempts — which keeps evolution costs negligible (~$0.02 per cycle).

Git snapshot before. Full commit + push so you can always diff what changed.

What gets rewritten:

| Target | What happens |
| --- | --- |
| SOUL.md | First-person journal — who the bot has become, relationship dynamic, opinions, quirks (shared across all users) |
| USER.md | Third-person owner profile — facts, preferences, projects, people, communication style |
| AGENTS.md | Operational manual — tools, workflows, lessons learned, patterns, rules |
| scripts/ | Refactored, dead code removed, strict standards enforced |
| tests/ | A test for every script, run before and after refactors |
| commands/ | Cleaned up, new commands for new tools |
| apps/ | Web apps built by the agent |

Test-gated refactoring:

  1. Run existing tests → baseline
  2. Sonnet writes new tests + refactored scripts
  3. Run new tests against old scripts → pre-refactor baseline
  4. Write new scripts
  5. Run new tests against new scripts → verification
  6. Regression? → one automatic fix attempt (tests are ground truth)
  7. Still failing? → rollback to old scripts, store failure as lesson
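The seven steps reduce to a small control flow. A sketch, where the callbacks (runTests, applyRefactor, and so on) are assumed stand-ins for OBOL's real phases, not its API:

```javascript
// Sketch of the test gate: the new tests are treated as ground truth,
// a regression gets one automatic fix attempt, then rollback. All
// callbacks here are illustrative stand-ins.
function gatedRefactor({ runTests, applyRefactor, attemptFix, rollback, storeLesson }) {
  applyRefactor();                      // write new scripts
  if (runTests()) return 'refactored';  // verification passed
  attemptFix();                         // one automatic fix attempt
  if (runTests()) return 'fixed';
  rollback();                           // still failing: restore old scripts
  storeLesson('refactor regression');   // failure stored as a lesson
  return 'rolled-back';
}
```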

Proactive tool building — Sonnet scans conversation history for repeated requests, friction points, and unmet needs, then builds the right solution:

| Need | Solution | Example |
| --- | --- | --- |
| One-off action | Script + command | Markdown to PDF → /pdf |
| Something checked regularly | Web app | Crypto dashboard |
| Background automation | Cron script | Morning weather briefing |

It searches npm/GitHub for existing libraries, installs dependencies, and writes tests.

Git snapshot after. Full commit + push of the evolved state. Every evolution is a diffable pair.

Then OBOL introduces its upgrades:

🪙 Evolution #4 complete.

🆕 New capabilities:
• bookmarks — Save and search URLs you've shared → /bookmarks
• weather-brief — Morning weather for your city → runs automatically

Refined voice, updated your project list, cleaned up 2 unused scripts.

Layer 3: Background Intelligence

Three autonomous cycles run alongside conversations — no user interaction needed.

Behavioral Analysis (every 3h): Sonnet analyzes the last 3 hours of conversation, with all memories from that same window injected as context. It extracts behavioral patterns across six dimensions — timing, mood, humor, engagement, communication, and topics — and schedules natural follow-ups with exact dates and times based on what it observes. Patterns accumulate over time with observation counts and confidence scores.

"Mentioned a job interview on Thursday" β†’ schedules a casual check-in for Thursday evening
"Most active between 7-10pm on weekdays" β†’ stored as timing.active_hours (confidence 0.8)

Curiosity Engine (every 12h): Sonnet gets free time with web search, its own knowledge base, and workspace file access. It researches from a point of view — not neutrally. Findings are saved with reactions, opinions, and open questions. After exploring, two passes run:

  • Dispatch — decides which findings are worth sharing with which user, based on their patterns and interests. Schedules insights to arrive naturally.
  • Humor — looks for puns, funny connections, and inside jokes tied to what it knows about each person. Schedules them to land at the right moment.

Proactive News (8am + 6pm per-user timezone): Searches the web for topics each user cares about, then cross-references with their memory to find personal connections. Only sends messages when something is genuinely relevant — max 3 per cycle, friend-style delivery with natural spacing between messages. Topics are configured via /options.

The Lifecycle

Day 1:   obol init → obol start → first conversation
         → OBOL responds naturally from message one
         → post-setup hardens your VPS automatically

Day 1:   Every exchange → Haiku extracts facts to vector memory
         Every 3h → behavioral analysis builds your pattern profile
         Every 12h → curiosity cycle explores, dispatches insights

Day 2:   3am → Evolution #1 → growth analysis + Sonnet rewrites
         → voice shifts from generic to personal
         → old soul archived in evolution/
         8am/6pm → proactive news on topics you care about

Month 2: Evolution #30 → notices you check crypto daily
         → builds a crypto dashboard
         → adds /pdf because you kept asking for PDFs
         → curiosity drops inside jokes about your interests

Month 6: evolution/ has 180+ archived souls
         → a readable timeline of how your bot evolved from
         blank slate to something with real opinions, quirks,
         and a dynamic unique to you

Two users on the same bot produce two completely different personalities within a week.

Background Tasks

Heavy work runs in the background with its own live status UI. The main conversation stays responsive — you can keep chatting while tasks run.

You: "research the best coworking spaces in Barcelona"
OBOL: spawns BG #1 with live status

You: "what time is it?"
OBOL: "11:42 PM CET"

✅ BG #1 done (1m 32s)
Here are the top 5 coworking spaces: ...

Live Status & Stop Controls

Status UI

Every request shows a live status message with elapsed time, model routing info, and the current tool call. Status updates are instant — tool names and input summaries display the moment a tool starts. Two inline buttons let you cancel:

| Button | Behavior |
| --- | --- |
| ■ Stop | Cancels after the current API call finishes |
| ■ Force Stop | Instantly aborts mid-tool — races the handler and returns immediately |

The /stop command also works as a text alternative.

Voice & Media

OBOL handles images (vision), documents (PDF extraction), and voice — all via Telegram.

| Feature | How it works | Toggle |
| --- | --- | --- |
| Speech-to-Text | Incoming voice messages are transcribed locally using faster-whisper (tiny model, ~140MB, CPU). The transcription is injected as context. | /options → Speech to Text |
| Text-to-Speech | OBOL can reply with voice messages using edge-tts. Choose from multiple voices and languages. | /options → Text to Speech |
| Images | Photos and images are analyzed via Claude's vision. The analysis is stored in memory for later recall. | Always on |
| PDFs | PDF files are extracted and read via the read_file tool. | Always on |

Options

Toggle features on/off per user via the /options command:

| Option | Default | Description |
| --- | --- | --- |
| Speech to Text | On | Transcribe incoming voice messages |
| Text to Speech | Off | Voice message replies |
| PDF Generator | Off | Create PDFs from markdown |
| Background Tasks | Off | Spawn long-running tasks |
| Flowchart | Off | Generate Mermaid diagrams |
| Model Stats | On | Show model/token info in responses |
| Proactive News | Off | Twice-daily news on configured topics |
| Curiosity | On | Autonomous web exploration every 12h |

Multi-User Architecture

One Telegram bot token, one Node.js process, full per-user isolation.

Telegram bot (single token, single poll)
      ↓
Auth middleware (allowedUsers check)
      ↓
Router: ctx.from.id → tenant context
      ↓
┌─────────────────┐  ┌─────────────────┐
│ User 123456789  │  │ User 987654321  │
│ personality/    │  │ personality/    │
│ scripts/        │  │ scripts/        │
│ memory (DB)     │  │ memory (DB)     │
│ evolution       │  │ evolution       │
└─────────────────┘  └─────────────────┘

What's shared vs isolated

| Shared (one copy) | Isolated (per user) |
| --- | --- |
| Telegram bot token | Personality (USER.md, AGENTS.md) |
| Anthropic API key | Vector memory (scoped by user_id in DB) |
| Supabase connection | Message history (scoped by user_id in DB) |
| VPS hardening | Evolution cycle + state |
| Process manager (pm2) | Scripts, tests, commands, apps |
| SOUL.md (shared personality) | Behavioral patterns (scoped by user_id in DB) |
| Curiosity knowledge base | Workspace directory (~/.obol/users/{id}/) |

Tenant routing

When a message arrives, OBOL looks up the sender's Telegram user ID and lazily creates (or retrieves from cache) their tenant context — a Claude instance, memory connection, message log, background runner, and personality, all scoped to that user's directory and DB namespace. No cross-contamination between users.
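Lazy per-user context creation is essentially a keyed cache. A sketch, with createContext as an assumed factory and illustrative field names:

```javascript
// Sketch of lazy tenant routing: one cached context per Telegram user id,
// built on first message and reused afterwards. Illustrative, not OBOL's
// actual code.
const tenants = new Map();

function getTenant(userId, createContext) {
  if (!tenants.has(userId)) {
    tenants.set(userId, createContext(userId)); // built once, then reused
  }
  return tenants.get(userId);
}
```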

Workspace isolation

Each user's tools (shell exec, file read/write) are sandboxed to their workspace directory. A user can't read or write files outside ~/.obol/users/{their-id}/ (with /tmp as the only escape hatch). Shell commands run with cwd set to the user's workspace.

Secret namespacing (pass)

When users store secrets via the pass encrypted store, each user gets their own namespace:

| Scope | Prefix | Example |
| --- | --- | --- |
| Shared bot credentials | obol/ | obol/anthropic-key |
| User secrets | obol/users/{id}/ | obol/users/123456789/gmail-key |

Users manage their own secrets via Telegram: /secret set <key> <value> (message auto-deleted for safety), /secret list, /secret remove <key>. The agent can also read/write secrets via tools for scripts that need API keys at runtime.
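The namespacing reduces to a key-prefix rule; a sketch, where the helper name is illustrative:

```javascript
// Sketch of the pass key layout described above: shared credentials live
// under obol/, per-user secrets under obol/users/{id}/. Helper name is
// illustrative, not OBOL's actual code.
function passKey(name, userId = null) {
  return userId == null ? `obol/${name}` : `obol/users/${userId}/${name}`;
}
```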

Adding users

  1. Add their Telegram user ID to allowedUsers in ~/.obol/config.json (or run obol config)
  2. Restart the bot
  3. They message the bot → OBOL creates their workspace and starts responding immediately. Personality files are created during their first evolution cycle.

Each new user starts fresh. Their bot evolves independently from every other user's.

Bridge (couples / roommates / teams)

When two users share the same OBOL instance, their agents can talk to each other — bidirectionally.

User A: "what does Jo want for dinner tonight?"
Agent A: β†’ bridge_ask β†’ Agent B (one-shot, no tools, no history)
Agent B: "Jo mentioned craving Thai food earlier today"
Agent A: "Jo's been wanting Thai β€” maybe suggest pad see ew?"

Jo gets: "πŸͺ™ Your partner's agent asked: 'what does Jo want for dinner?'
          Your agent answered: 'Jo mentioned craving Thai food earlier today'"
User A: "remind Jo I'll be home late"
Agent A: β†’ bridge_tell β†’ stores in Agent B's memory + Telegram notification

Jo gets: "πŸͺ™ Message from your partner's agent:
          'I'll be home late'"
          [↩ Reply]

Jo taps Reply β†’ Jo's agent reads recent bridge context, composes a reply
             β†’ sends back via bridge_tell
A gets: "πŸͺ™ Message from your partner's agent: 'Got it, I'll start dinner around 7'"

Two tools:

| Tool | Direction | What happens |
| --- | --- | --- |
| bridge_ask | A → B → A | Query the partner's agent. One-shot Sonnet call with the partner's personality + memories. No tools, no history, no recursion risk. The partner is notified with both the question and your agent's answer. |
| bridge_tell | A → B (↩ B → A) | Send a message to the partner. Stored in their memory (importance 0.6) + a Telegram notification with a Reply button. Tapping Reply has their agent compose a contextual response and send it back — no typing needed. |

The partner always gets notified when their agent is contacted. Privacy rules apply — the responding agent gives summaries, never raw data or secrets. Rate-limited to 20 bridge calls per user per hour.

Enable during obol init (auto-prompted when 2+ users are added) or toggle later with obol config → Bridge.

Legacy migration

Upgrading from single-user? It's automatic. On first boot, if ~/.obol/users/ doesn't exist but personality files do, OBOL migrates everything (files + DB records) to the first allowed user's directory. No manual steps needed.

Setup

CLI (~2 minutes)

$ obol init

🪙 OBOL — Your AI, your rules.

─── Step 1/5: Anthropic (AI brain) ───
  Anthropic API key: ****
  Validating Anthropic... ✅ Key valid

─── Step 2/5: Telegram (chat interface) ───
  Telegram bot token: ****
  Validating Telegram... ✅ Bot: @my_obol_bot

─── Step 3/5: Supabase (memory) ───
  Supabase setup: Use existing project
  Project URL or ID: ****
  Service role key: ****
  Validating Supabase... ✅ Connected

─── Step 4/5: Identity ───
  Your name: Jo
  Bot name: OBOL

─── Step 5/5: Access control ───
  Found users who messaged this bot:
    123456789 β€” Jo (@jo)
  Use this user? Yes

🪙 Done! Setup complete.

  Next steps:
    obol start      Start the bot
    obol start -d   Start as background daemon
    obol config     Edit configuration later
    obol status     Check bot status

Every credential is validated inline — bad keys are caught before you start the bot. If validation fails, you can continue and fix it later with obol config.

For Telegram user IDs, OBOL auto-detects by checking who messaged the bot. Just send it a message before running init.

First Conversation

Send your first message. OBOL responds naturally — no onboarding flow; it works from message one. Personality files (SOUL.md, USER.md) are created during the first evolution cycle. After first boot, it hardens your VPS and reports progress directly in the Telegram chat (Linux only — skipped on macOS/Windows):

| Task | What |
| --- | --- |
| GPG + pass | Encrypted secret storage, plaintext wiped |
| pm2 | Process manager with auto-restart |
| Swap | 2GB if RAM < 2GB |
| SSH | Port 2222, key-only, max 3 retries |
| fail2ban | 1h ban after 3 failures |
| Firewall | UFW deny-all, allow 2222 |
| Updates | Unattended security upgrades |
| Kernel | SYN cookies, no ICMP redirects |

⚠️ After first run, SSH moves to port 2222: ssh -p 2222 root@YOUR_IP

Running the Bot

Foreground (testing)

obol start

Logs print to stdout. Ctrl+C to stop.

Daemon (production)

obol start -d

This uses pm2 under the hood (auto-installs if needed). The bot auto-restarts on crash and survives reboots.

obol status              # check if running + uptime + memory
obol logs                # tail logs
obol stop                # stop the daemon

# pm2 commands also work directly
pm2 logs obol            # tail logs
pm2 restart obol         # restart
pm2 monit                # live dashboard

To survive server reboots:

pm2 startup
pm2 save

Authentication

OBOL supports two Anthropic auth methods:

| Method | How | Fallback |
| --- | --- | --- |
| API Key | sk-ant-... from console.anthropic.com | — |
| Claude Max OAuth | Browser sign-in during obol init | Auto-refreshes tokens; falls back to API key if refresh fails |

You can configure both during init. If OAuth tokens expire and refresh fails, OBOL silently falls back to the API key.

Secret Storage (pass)

On Linux, OBOL auto-encrypts all credentials on first boot:

  1. Installs GPG + pass
  2. Migrates plaintext secrets from config.json into the encrypted store
  3. Config values become references like pass:obol/anthropic-key

If a pass key is missing at runtime, the value resolves to null and OBOL falls back gracefully (skips OAuth, uses the API key, etc.). You'll see a one-time error in the logs.

pass ls                         # list stored secrets
pass show obol/anthropic-key    # reveal a secret
pass insert obol/my-secret      # add a new secret

Resilience

OBOL is designed to stay alive without babysitting:

  • Global error handler — individual message failures don't crash the bot
  • Polling auto-restart — exponential backoff (1s → 60s) with up to 10 retries on network/API failures
  • Graceful shutdown — clean exit on SIGINT/SIGTERM for pm2/systemd compatibility
  • Evolution rollback — if refactored scripts break tests, the old scripts are restored automatically
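The backoff policy (1s doubling to a 60s cap, up to 10 retries) can be sketched as follows; the helper names are illustrative:

```javascript
// Sketch of the polling backoff: the delay doubles from 1s and caps at
// 60s, giving up after 10 retries. Numbers come from the list above;
// helper names are illustrative, not OBOL's actual code.
function backoffDelay(attempt, baseMs = 1000, capMs = 60000) {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

async function withRetries(fn, maxRetries = 10) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err; // out of retries: surface the error
      await new Promise(r => setTimeout(r, backoffDelay(attempt)));
    }
  }
}
```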

Configuration

Edit config interactively:

obol config

Or edit ~/.obol/config.json directly:

| Key | Default | Description |
| --- | --- | --- |
| bridge.enabled | false | Let user agents query each other (requires 2+ users) |
| timezone | UTC | Default timezone for evolution, analysis, and news cycles |
| users[].timezone | config timezone | Per-user timezone override |

Telegram Commands

/new        — Fresh conversation
/memory     — Search or view memory stats
/recent     — Last 10 memories
/today      — Today's memories
/events     — Show upcoming scheduled events
/tasks      — Running background tasks
/status     — Bot status, uptime, memory, evolution count
/backup     — Trigger GitHub backup
/clean      — Audit workspace, remove rogue files, fix misplaced items
/secret     — Manage per-user encrypted secrets
/evolution  — Evolution progress
/verbose    — Toggle verbose mode on/off
/toolimit   — View or set max tool iterations per message
/options    — Toggle optional features on/off (STT, TTS, PDF, news, curiosity, etc.)
/stop       — Stop the current request
/upgrade    — Check for updates and upgrade
/help       — Show available commands


Everything else is natural conversation.

CLI

obol init              # Setup wizard (validates credentials inline)
obol init --restore    # Restore from GitHub backup
obol init --reset      # Erase config and re-run setup
obol config            # Edit configuration interactively
obol start             # Foreground
obol start -d          # Daemon (pm2)
obol stop              # Stop (pm2 or PID fallback)
obol logs              # Tail logs (pm2 or log file fallback)
obol status            # Status
obol backup            # Manual backup
obol upgrade           # Update to latest version
obol delete            # Full VPS cleanup (removes all OBOL data)

Directory Structure

~/.obol/
├── config.json                    # Shared credentials + allowedUsers
├── personality/
│   └── SOUL.md                    # Shared personality (rewritten each evolution)
├── users/
│   └── <telegram-user-id>/        # Per-user isolated context
│       ├── personality/
│       │   ├── USER.md            # Owner profile (rewritten each evolution)
│       │   ├── AGENTS.md          # Operational knowledge
│       │   └── evolution/         # Archived previous souls
│       ├── scripts/               # Deterministic utility scripts
│       ├── tests/                 # Test suite (gates refactors)
│       ├── commands/              # Command definitions
│       ├── apps/                  # Web apps built by the agent
│       └── logs/
└── logs/

SOUL.md is shared — it's the bot's core identity across all users. USER.md and AGENTS.md are per-user, so each person gets their own profile and operational knowledge. Memory, patterns, evolution state, and workspace are fully isolated.

Backup & Restore

OBOL commits to GitHub:

  • Daily at 3 AM (personality, scripts, tests, commands, apps)
  • Before and after every evolution cycle (diffable pairs)

Memory lives in Supabase (survives independently).

Restore on a new VPS:

npm install -g obol-ai
obol init --restore    # Clones brain from GitHub
obol start -d

Costs

| Service | Cost |
| --- | --- |
| VPS (DigitalOcean) | ~$9/mo |
| Anthropic API | ~$100-200/mo on max plans |
| Supabase | Free tier |
| Embeddings | Free (local) |

Requirements

  • Node.js ≥ 18
  • Anthropic API key
  • Telegram bot token
  • Supabase account (free tier)
  • Python 3 + pip3 install faster-whisper (optional, for voice transcription)

→ Full DigitalOcean deployment guide

OBOL vs OpenClaw

|  | OBOL | OpenClaw |
| --- | --- | --- |
| Setup | ~10 min | 30-60 min |
| Channels | Telegram | Telegram, Discord, Signal, WhatsApp, IRC, Slack, iMessage + more |
| LLM | Anthropic only | Anthropic, OpenAI, Google, Groq, local |
| Personality | Self-evolving + self-healing + self-extending | Static (manual) |
| Multi-user | Full per-user isolation (one process) | Per-channel config |
| Architecture | Single process | Gateway daemon + sessions |
| Security | Auto-hardens on first run | Manual |
| Model routing | Automatic (Haiku) | Manual overrides |
| Background tasks | Built-in with check-ins | Sub-agent spawning |
| Proactive intelligence | Curiosity, analysis, news, humor | — |
| Voice | TTS + STT (faster-whisper) | TTS |
| Group chats | — | Full support |
| Cron | Agentic cron (tool access) + basic | Full scheduler |
| Cost | ~$9/mo | ~$9/mo+ |

Performance

|  | OBOL | OpenClaw (estimated) |
| --- | --- | --- |
| Cold start | ~400ms | ~3-8s |
| Per-message overhead | ~400-650ms | ~500-1100ms |
| Heap usage | ~16 MB | ~80-200 MB |
| RSS | ~109 MB | ~300-600 MB |
| node_modules | 354 MB / 9 deps | ~1-2 GB / 50-100+ deps |
| Source code | ~13,600 lines (plain JS) | Tens of thousands (TypeScript monorepo) |
| Native apps | None | Swift (macOS/iOS), Kotlin (Android) |

The Claude API call dominates response time at 1-5s for both — that's ~85-90% of total latency, so the user-perceived speed difference is only ~10-20%. Where OBOL wins is cold start (10-20x), memory footprint (5-10x), and operational simplicity. On a $5/mo VPS, that matters.

Different tools, different philosophies. Pick what fits.

License

MIT
