
AI Agent Wired Into a 3,800-Node Knowledge Graph
OpenClaw / Node.js / Obsidian CLI / OpenViking / Gemini API / ElevenLabs / Telegram API / SQLite
An AI agent running in my own environment, wired into everything I use. My tools, my LLMs, my harness, my rules. It runs 24/7 on a home server, navigating a 3,800+ node Obsidian knowledge graph via CLI. Three communication channels (Telegram with 9 topic-based sessions, Slack, Discord) feed into a three-layer memory architecture: semantic context engine, daily workspace files, and hybrid BM25 + vector SQLite store. The agent answers questions, traverses the graph through wikilinks and backlinks, relates new information to existing projects, and writes structured notes following a vault schema.
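As a sketch of the hybrid store's retrieval step: BM25 keyword scores and embedding cosine similarity can be fused with a weighted sum. Everything below (the `Doc` type, helper names, the fusion weight) is illustrative, not the actual implementation.

```typescript
// Hypothetical sketch of hybrid retrieval: BM25 over tokens plus
// cosine similarity over embeddings, fused with a weighted sum.
type Doc = { id: string; tokens: string[]; embedding: number[] };

function bm25Scores(query: string[], docs: Doc[], k1 = 1.5, b = 0.75): Map<string, number> {
  const N = docs.length;
  const avgLen = docs.reduce((s, d) => s + d.tokens.length, 0) / N;
  const df = new Map<string, number>(); // document frequency per term
  for (const d of docs) for (const t of new Set(d.tokens)) df.set(t, (df.get(t) ?? 0) + 1);
  const scores = new Map<string, number>();
  for (const d of docs) {
    let score = 0;
    for (const q of query) {
      const tf = d.tokens.filter((t) => t === q).length;
      if (tf === 0) continue;
      const n = df.get(q) ?? 0;
      const idf = Math.log(1 + (N - n + 0.5) / (n + 0.5));
      score += (idf * tf * (k1 + 1)) / (tf + k1 * (1 - b + (b * d.tokens.length) / avgLen));
    }
    scores.set(d.id, score);
  }
  return scores;
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// alpha blends normalized BM25 (keyword) against cosine (semantic).
function hybridSearch(queryTokens: string[], queryEmb: number[], docs: Doc[], alpha = 0.5): string[] {
  const bm = bm25Scores(queryTokens, docs);
  const maxBm = Math.max(1e-9, ...Array.from(bm.values()));
  return docs
    .map((d) => ({ id: d.id, s: (alpha * bm.get(d.id)!) / maxBm + (1 - alpha) * cosine(queryEmb, d.embedding) }))
    .sort((x, y) => y.s - x.s)
    .map((r) => r.id);
}
```

In practice the embeddings would come from an embedding model and the BM25 side from SQLite's FTS index; the fusion weight is a tuning knob, not a fixed constant.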
Chat interfaces come with the provider's harness, the provider's tools, the provider's limits. I wanted an agent that reads my notes, runs crons for my morning brief, and uses my tools. Free to pick any model, any provider, with room for local models down the road.
An AI that knows my full context and operates on my terms.
One Telegram chat with topic threads instead of five subscriptions to five apps. I talk to my agent, it works on my notes, everything stays in memory. I can recall things from months ago because it's all in the vault. One query on my phone instead of a web search. I replaced Google with a graph search over my own knowledge.
8-layer tool hierarchy. The agent always checks memory before reaching for the internet.
# Agent Tool Priority

T0: OpenViking — long-term semantic memory (autoRecall)
T1: Obsidian CLI — knowledge graph navigation (backlinks, wikilinks, tags)
T2: memory_search — hybrid BM25 + vector over workspace memory
T3: defuddle — web content extraction (clean markdown)
T4: bird — X/Twitter reading (posts, threads, search)
T5: gws-safe — Google Workspace (Gmail, Calendar, Drive, Sheets)
T6: web_search — internet search (neural + keyword)
T7: exec — shell command execution

Rule: always start at T0, escalate only when needed.

Graph-first navigation pattern. The agent follows edges instead of searching blindly.
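The tier list's escalation rule can be sketched as a dispatcher that walks tools in priority order and stops at the first hit. The `Tool` shape and `run` interface are hypothetical stand-ins, not the agent's real API.

```typescript
// Hypothetical dispatcher for the tier list: try tools in priority
// order, escalate only when the current tier returns nothing.
type Tool = { tier: number; name: string; run: (q: string) => string | null };

function dispatch(query: string, tools: Tool[]): { tier: number; answer: string } | null {
  // Always start at T0 and walk upward; stop at the first hit.
  for (const tool of [...tools].sort((a, b) => a.tier - b.tier)) {
    const answer = tool.run(query);
    if (answer !== null) return { tier: tool.tier, answer };
  }
  return null; // even the last tier came up empty
}
```

The point of the sort-then-walk shape is that a memory hit at T0 means the web tools at T6 are never even invoked.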
# Graph Navigation — Golden Path

1. ENTITY → read file="Atlas" (what is it? what do I know?)
2. BACKLINKS → backlinks file="Atlas" (who references it?)
3. FILTER → pick by date/tag (March 2026? → 2 results)
4. READ → read path="...result..." (answer in content)

Goal: answer in ≤4 commands.
If you need >6 — you're grep-thinking, not graph-thinking.

Recall hierarchy, graph navigation patterns, and vault write protocol ensuring knowledge graph integrity.
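The four-step golden path can be sketched over a toy in-memory vault. The `Note` model and helpers below are hypothetical, not the Obsidian CLI's actual data model.

```typescript
// Toy vault model for the golden path; not the Obsidian CLI's API.
type Note = { path: string; content: string; tags: string[]; links: string[] };

// Notes whose [[wikilinks]] point at the target, i.e. its backlinks.
function backlinksOf(target: string, vault: Note[]): Note[] {
  return vault.filter((n) => n.links.includes(target));
}

function goldenPath(entity: string, tag: string, vault: Note[]): string | null {
  const root = vault.find((n) => n.path === entity);     // 1. ENTITY
  if (!root) return null;
  const refs = backlinksOf(entity, vault);               // 2. BACKLINKS
  const hits = refs.filter((n) => n.tags.includes(tag)); // 3. FILTER
  return hits[0]?.content ?? null;                       // 4. READ
}
```

Each step narrows the candidate set by following edges, which is why the answer usually lands within four commands.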
# Recall Hierarchy

Exhaust local context before reaching outward:

0. autoRecall ← automatic semantic context, zero cost
1. MEMORY.md ← curated long-term facts, always in context
2. ov_find ← manual semantic search (OpenViking)
3. Obsidian CLI ← graph navigation: backlinks, wikilinks, tags
4. memory_search ← hybrid vector + BM25 over workspace files
...
6. web_search ← internet is LAST RESORT

# Pre-Write Checklist

Before ANY vault write:

1. date +"%Y-%m-%d %H%M" — current timestamp
2. read file="VAULT-SCHEMA" — source of truth
3. Use template if folder has one — never create manually
4. Check for subfolder note — if missing, propose one

Every proper noun = [[wikilink]]. No exceptions.

Agent personality design. Recall priority, memory compaction, and anti-sycophancy directives.
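The "every proper noun = [[wikilink]]" rule can be approximated mechanically once the vault's entity names are known. The helper below is a hypothetical pre-write guard, and it assumes entity names are plain words with no regex metacharacters.

```typescript
// Hypothetical pre-write guard for the wikilink rule. Assumes entity
// names contain no regex metacharacters.
function enforceWikilinks(body: string, entities: string[]): string {
  let out = body;
  for (const name of entities) {
    // Wrap bare mentions; skip ones already inside [[...]].
    const bare = new RegExp(`(?<!\\[\\[)\\b${name}\\b(?!\\]\\])`, "g");
    out = out.replace(bare, `[[${name}]]`);
  }
  return out;
}
```

Running a guard like this before every write is what keeps the backlink graph dense enough for the golden-path navigation to work.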
# Recall Priority

Rule: always exhaust local context before reaching outward.

1. MEMORY.md → curated long-term facts (source of truth)
2. Obsidian CLI → navigate graph: backlinks, wikilinks, tags
3. memory_search → hybrid vector + BM25 over workspace files
4. Web → last resort, verify against local context

# Compaction — what survives context reset

On threshold (4000 tokens):
→ flush 3-5 bullets to memory/YYYY-MM-DD.md
→ key decisions, new facts, action items ONLY
→ NOT a transcript. NOT a conversation summary.
→ If nothing worth saving — NO_REPLY.

# Anti-sycophancy

Banned: "Swietne pytanie!" ("Great question!"), "Absolutnie!" ("Absolutely!"), "Chetnie pomoge!" ("Happy to help!")
Rule: push back when wrong, challenge lazy thinking, hold accountable for goals.
Opinions > compliance.
Brain-Vault knowledge graph. Clusters represent projects, people, and areas of interest.

Agent Architecture. Message flow from channels through tool dispatch to knowledge graph.