No article you read should ever be forgotten.
The difference between forgetting and compounding
What changes when LLM Wiki enters your workflow.
Forgetting: You clip an article, drop it in a folder, forget the filename. Three months later, search returns nothing useful.
Compounding: Drop it in sources/. The plugin reads it, extracts key ideas, updates 10–15 related wiki pages, and files the summary. Done in seconds.
Forgetting: You manually link two notes. Six months later, one is updated while the other still points to the old version. Links silently rot.
Compounding: All cross-references are maintained automatically. When new data contradicts an old claim, every affected page updates.
Forgetting: A year of notes is a pile of files. Each new project starts from zero. You re-read the same sources, re-derive the same insights.
Compounding: A year of sources builds a dense, cross-referenced wiki. New questions tap into everything you have ever read. Knowledge compounds.
What changes when you use it
Not a feature list. A set of behaviors that change how you interact with your own knowledge.
Auto-Organize
Drop articles, papers, or notes into sources/. The plugin reads them in full context, extracts entities and concepts, and builds structured wiki pages — no manual tagging, no folders to manage.
sources/
↓ (auto)
wiki/entities/...
wiki/concepts/...
Bidirectional Links
Every generated page is woven with [[bidirectional links]] into your Obsidian graph. Open Graph View and watch your knowledge map grow organically with each new source.
auto-linked
graph grows
Conversational Query
Ask questions about your knowledge in natural language. The plugin reads your wiki pages (not the internet), synthesizes cited answers with [[wiki-links]], and can save responses back as new wiki pages.
→ [[X]] + [[Y]] cited
→ saved as wiki page
Auto Maintenance
File watcher for automatic ingestion. Periodic lint detects contradictions, stale info, and orphaned pages. Startup health check keeps your wiki consistent — all default OFF to avoid surprise costs.
lint: weekly
health check: startup
(default: OFF)
Up and running in minutes
No extra apps. Runs inside your existing Obsidian vault.
Download
Grab main.js, manifest.json, and styles.css from GitHub Releases. These three files are all you need.
Install
Open Obsidian → Settings → Community plugins. Click the folder icon to open your plugins directory. Create a new folder called llm-wiki, drop the three files in, then click the reload icon back in Obsidian. Enable the plugin when it appears.
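Assuming a default vault layout (the vault name here is illustrative), the result should look like this:

```
YourVault/
└── .obsidian/
    └── plugins/
        └── llm-wiki/
            ├── main.js
            ├── manifest.json
            └── styles.css
```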
Configure
Go to Settings → Karpathy LLM Wiki. Pick a provider (DeepSeek, Claude, Gemini, GPT, Ollama, etc.), enter your API key, click Fetch Models, then Test Connection and Save.
Use
Press Cmd+P (or Ctrl+P on Windows) to open the command palette. Type Ingest to find Ingest Sources or Ingest from Folder. Type Query to ask questions about your knowledge. The plugin handles the rest.
Every plugin, amplified
LLM Wiki for Obsidian is not an island. It makes every tool you already use more powerful.
Graph View
Visualize links between notes
Wiki gives your graph real meaning — hub pages, knowledge clusters, orphan detection. Graph becomes navigation, not decoration.
Web Clipper
Clip articles as Markdown
Drop into sources/, AI auto-extracts, links, updates index. One clip → 10+ wiki pages.
Dataview
Query frontmatter to generate tables
AI auto-adds structured metadata (tags, dates, categories). Dataview turns empty tables into living dashboards.
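As a sketch: if the generated frontmatter includes fields such as category and created (field names here are assumptions, not the plugin's documented schema), a three-line Dataview query turns the wiki/concepts/ folder into a sortable dashboard:

```dataview
TABLE category, created
FROM "wiki/concepts"
SORT created DESC
```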
Git
Version control for files
Watch your knowledge evolve step by step. Every commit tells how your understanding grew.
Marp
Generate slide decks
A presentation is just a query: "Summarize my research for 10-min talk." Wiki generates slides from knowledge.
Canvas
Infinite visual canvas
AI generates concept maps, timelines, decision trees from your knowledge graph — not blank canvas.
Built in the open
MIT licensed, community driven, vendor neutral. Use it, extend it, make it yours.
Obsidian Native
No extra apps to install. Runs entirely inside your Obsidian vault.
Vendor Neutral
Your LLM, your choice. Cloud or local, switch anytime.
Long-context models (256K+ tokens) are recommended — the full wiki fits in a single inference call, giving the LLM complete understanding.
Works with any major provider — no lock-in.