I’ve spent the last few days building what I call the AI-Wiki: a personal knowledge system that lives in Obsidian, runs a suite of automated tasks powered by Claude, and connects everything I work on — calendar, tasks, code, writing — into one continuously updated second brain. It’s similar in spirit to PAI (Personal AI Infrastructure), an open source project that gives Claude Code sessions persistent memory so they carry context from one session to the next. I want to see how far I can push AI to build out the things I’m trying to track into a wiki. This post is a writeup of how it works, why I built it the way I did, and what it’s like to actually use day to day.
The short version: Obsidian holds the knowledge. Claude does the thinking. Scheduled tasks run the pipeline. And Dispatch on my iPhone is the remote control for all of it, plus a way to interact with the system directly through Cowork.
Background
The idea of a personal wiki isn’t new. I have used a zillion different notes tools in the past, but they all became dumping grounds: I’d capture things and then never find them again. Each tool was really just a digital version of a paper notebook. I never had time to go through the notes, turn them into tasks, and file the information away where I could find it later.
The insight that changed things was realizing that the friction wasn’t in capturing information; it was in processing it. Notes would pile up in my inbox, tasks would drift across apps, and the knowledge that was theoretically “in my system” wasn’t actually accessible when I needed it.
As you can probably guess from reading my blog, I am on a big AI kick, trying to push the boundaries. I really see it as a revolutionary technology that can change the way we do things. I don’t see it replacing me at this point, but it certainly should be able to make me more efficient. I was thinking a lot about how LLMs could fit into a personal workflow, not just as a chat interface you visit occasionally, but as an always-on layer that runs in the background, connects your tools, and does the synthesis work that’s too tedious to do manually. The combination I chose after some research was Obsidian as the knowledge store, Claude as the reasoning engine, and scheduled automation as the connective tissue.
This is something that my colleague John, VP of Platform Engineering, brought up to me. I had read a blog post on it but hadn’t paid much attention, because I wasn’t thinking about augmenting my work; I was still stuck in the coding mindset, using Claude Code in the terminal. I am getting ready to go to Google Cloud Next for work, so I wanted a place to capture notes and a way to turn them into blog posts while I build a summary for Crate and Barrel Holdings.
Websites of note:
Andrej Karpathy’s gist — The original idea file. Karpathy proposes using a plain markdown wiki queried directly by a long-context LLM as a lightweight alternative to RAG, treating the AI as a “compiler” that turns raw source documents into a structured, interlinked knowledge base you actually own.
Data Science Dojo’s blog post — A step-by-step tutorial for building your own LLM wiki using Claude without any coding. Covers folder setup, compilation prompts, and explains why an AI-maintained wiki compounds knowledge across queries in a way that stateless RAG retrieval doesn’t.
MindStudio Blog — A deeper implementation guide focused on Claude Code and Obsidian as the knowledge viewer. Covers the “compiler analogy” — why you pre-process raw sources into structured markdown rather than querying them cold each time — and compares the approach to traditional RAG pipelines.
The Obsidian Setup
The vault lives in iCloud under iCloud~md~obsidian/Documents/AI-Wiki, which means it syncs automatically to my iPhone, iPad, and every Mac I use. That sync is load-bearing — it’s what makes the whole system usable on mobile.
The folder structure is straightforward but intentional:
- Inbox/ — the drop zone. Anything unprocessed lives here.
- Blog Posts/ — drafts and published posts, including this one.
- Coding Projects/ — wiki pages for active repos, PRDs, and daily project summaries synced from git.
- Templates/ — note templates, configured as the vault-wide template folder.
- _attachments/ — all images and attachments in one place, so links don’t break when notes move.
- Processed/ — notes that have been ingested and acted on, archived for reference.
- Topics/, Notes/, People/, Projects/, Sources/ — the main knowledge graph: topic hubs, evergreen notes, people pages, and source references.
- Daily Notes/ — dated notes generated by the automated briefings.
The _attachments/ folder was a deliberate call. Obsidian can scatter attachments anywhere, and that gets messy fast. Centralizing them means I can reference images in blog posts and know they’ll be findable when the publish script runs.
iCloud sync has been reliable enough that I don’t think about it. The vault feels like it’s just there, everywhere, which was the goal.
Automated Tasks
This is where the system actually earns its keep. I have a handful of scheduled tasks running through Claude — some on a timer, some triggered manually. Each one is a self-contained pipeline that reads from the world, reasons about it, and writes results back into the wiki.
Daily Morning Briefing (7am PT)
Every morning at 7am, a task pulls my Apple Calendar events for the day, converting from UTC to Pacific time, and fetches my Things 3 task list: today’s tasks, whatever’s sitting in my inbox, and a 7-day lookahead of upcoming items. Claude compiles all of that into a structured morning briefing note and saves it to the Daily Notes/ folder. It also searches the internet for articles on a few topics I follow, like the Cowboys, the Blazers, and AI, and gives me a top 10 from some curated sources I’ve asked it to watch. I am sure I will add more topics and sources over time.
The reason I built this is simple: I was starting every day by opening three or four apps, mentally assembling a picture of what was going on, and then trying to hold it in my head while I got to work. The briefing does that assembly for me. I open one note and I have the full picture.
Having a briefing isn’t exactly new, but having it personalized is a game changer. The briefing also includes an ingestion report: the ingest portion of the task takes everything I piled up in the inbox and tells me how it organized it, and I will sometimes respond with something like “let’s track this as a project.” I also have a People/ folder that tracks interactions with people and important dates. This lets me file things away and link them to projects as well.
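The one genuinely fiddly part of the calendar step is the timezone conversion. Here is a minimal sketch of that step, assuming GNU `date` (macOS’s BSD `date` would need `-j -f` instead of `-d`); the function name is mine, not part of the actual task:

```shell
# Convert a UTC calendar timestamp into Pacific time for the briefing.
# GNU date syntax; this is a sketch, not the real task definition.
utc_to_pacific() {
  TZ="America/Los_Angeles" date -d "$1" "+%H:%M"
}

utc_to_pacific "2025-04-01T15:00:00Z"   # a 3pm UTC meeting prints 08:00 (PDT)
```

DST is handled for free: `America/Los_Angeles` flips between UTC-8 and UTC-7 automatically, which is exactly the kind of detail you don’t want to hand-roll.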
Wiki Inbox Ingest (noon PT)
At noon, the ingest task scans the Inbox/ folder for any files that have landed there since the last run. For each file, Claude extracts action items and creates corresponding tasks in Things 3, identifies which topic hub pages in the wiki the note is relevant to and updates them, and writes an ingestion summary noting what was processed and what was done with it.
Crucially, it never deletes originals. Files stay in Inbox/ and a copy goes to Processed/. The original is the source of truth; the processing is additive. This is deliberate for two reasons. First, it lets me delete the original myself, in case I had it open when the task ran and hadn’t saved my changes. Second (and this applies to both workflows), Claude Cowork has to ask before deleting files, and there isn’t an option to let it do so as part of a workflow. It’s a protection that I am sure we will eventually be able to work around.
The workflow this enables: I drop a link, a clipping, or a rough note into Inbox/ from anywhere — phone, desktop, browser extension — and by noon it’s been read, acted on, and woven into the relevant parts of the wiki. I don’t have to think about where it goes.
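The file-handling side of this is easy to sketch. This is my approximation of the copy-never-move behavior, not the actual task definition; the real work (extracting tasks, updating topic hubs) happens in Claude:

```shell
# Sketch of the ingest file handling: copy (never move) new Inbox
# notes into Processed/, leaving the originals as the source of truth.
ingest_inbox() {
  vault="$1"
  mkdir -p "$vault/Processed"
  for f in "$vault/Inbox"/*.md; do
    [ -e "$f" ] || continue                      # empty Inbox: glob stays literal
    name=$(basename "$f")
    [ -e "$vault/Processed/$name" ] && continue  # already processed on a prior run
    cp "$f" "$vault/Processed/$name"
    echo "processed: $name"
  done
}
```

Checking Processed/ for an existing copy is also what makes the task idempotent: re-running it at noon doesn’t re-ingest yesterday’s notes.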
Sync Coding Projects (8pm PT)
Every evening at 8pm, this task scans all git repos in ~/git for commits made in the last 24 hours. For each repo with recent activity, it pulls the markdown docs from the repo’s /docs/ directory and syncs them into the wiki under Coding Projects/. It also writes a Daily Summary note listing what changed across all projects.
This one solves a specific problem: I’d be working in a repo, updating docs, and the wiki would have no idea. Or I’d open the wiki and the project page would be stale. The sync task closes that loop. By the time I’m done for the day, the wiki has current documentation context for everything I touched — which matters both for my own reference and because Claude can read those docs when I’m asking about a project.
I am really good about using Claude Code to generate documentation, but it is not always up to date in the repo. And if I want to start doing work, or create a new Product Requirements Document, I don’t have a good way of ingesting those documents on the go. This task lets me do that.
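The detection step can be sketched with plain git. This is an assumption about how the scan works, using `git log --since` to find repos with recent activity:

```shell
# Find repos under a root directory with commits in the last 24 hours,
# which is the trigger for syncing their /docs/ into the wiki.
recent_repos() {
  root="$1"
  for repo in "$root"/*/; do
    [ -d "$repo/.git" ] || continue   # skip non-repo directories
    if [ -n "$(git -C "$repo" log --since='24 hours ago' --oneline 2>/dev/null)" ]; then
      basename "$repo"
    fi
  done
}
```

Anything this prints gets its `/docs/` copied into Coding Projects/ and a line in the Daily Summary note.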
Publish Blog from Obsidian (manual)
This one I trigger manually when I’m ready to publish a post. It syncs the relevant file from Blog Posts/ to my site repo (jamesjung-site) and handles a few conversion steps. Most importantly, it converts Obsidian’s image syntax (![[image.png]]) to standard HTML <img> tags and copies the referenced images from _attachments/ into the repo. Then it commits and pushes to main, which triggers a site deploy.
The full workflow is: write the post in Obsidian on my phone, iPad or desktop, drop in images by name, trigger Publish, and the post is live. No manual file management, no copy-pasting, no fiddling with the repo directly. Writing and publishing feel like one continuous act.
For context, I wrote and published this very post from my iPad today using this workflow.
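The image conversion is just a regex rewrite. Here is a sketch of that one step; the /images/ path is an assumption about my site’s layout, and the real task also copies the referenced files out of _attachments/:

```shell
# Rewrite Obsidian embeds like ![[diagram.png]] into standard HTML
# <img> tags. The /images/ prefix is an assumed site convention.
obsidian_images_to_html() {
  sed -E 's|!\[\[([^]]+)\]\]|<img src="/images/\1">|g'
}

echo 'See ![[diagram.png]] for the flow.' | obsidian_images_to_html
```

The stream-filter shape matters: the task can pipe the whole post through it in one pass, rewriting every embed without touching anything else in the markdown.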
What I learned getting this working
Getting the publish task to actually run end-to-end took a few rounds of debugging. Here’s what bit me.
iCloud and the VM don’t always see the same files. Cowork’s VM reads the vault from the filesystem, but iCloud uses on-demand storage — files can exist as cloud placeholders that haven’t been downloaded locally yet. The task would find the file, try to read it, and hang on content that wasn’t actually on disk. The fix was adding a brctl download step at the start of the task, which forces macOS to fully pull the file before anything else runs. Simple once you know why it’s happening.
Git auth needed to be rewired. My repo remote was set to SSH (git@github.com), but the machine authenticates via the GitHub CLI (gh) over HTTPS. The task kept failing on push. I switched the remote URL to HTTPS and ran gh auth setup-git to register the credential helper, and it worked from there. If you’re setting up something similar, just start with HTTPS and save yourself the detour.
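The fix amounts to two commands. Sketched as a helper (the owner/repo slug is whatever your remote actually is, and `gh auth setup-git` is skipped if the CLI isn’t installed):

```shell
# Switch a repo's origin from SSH to HTTPS so the GitHub CLI's
# credential helper can handle authentication on push.
use_https_remote() {
  repo="$1"; gh_repo="$2"   # e.g. youruser/jamesjung-site (assumed slug)
  git -C "$repo" remote set-url origin "https://github.com/$gh_repo.git"
  # Register gh as git's credential helper, if gh is available.
  command -v gh >/dev/null 2>&1 && gh auth setup-git || true
}
```

After this, `git push` goes over HTTPS and gh supplies the token, so nothing in the task needs an SSH agent.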
The VM can’t push to GitHub directly. The Cowork sandbox blocks outbound network connections, so git push from inside a task just fails silently. My workaround was a post-commit git hook in the repo that runs git push origin main automatically after every local commit. The hook fires natively on the Mac where the network and credentials are available, so the task just commits and the hook handles the rest. Push results go to ~/.cowork-push.log if I ever need to check what happened.
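The hook itself is tiny. Here is a sketch of an installer plus a minimal hook body; the log path matches what I use, everything else is a simplified version of the idea:

```shell
# Install a post-commit hook that pushes main after every local
# commit, since the sandboxed task can commit but not reach GitHub.
install_push_hook() {
  repo="$1"
  hook="$repo/.git/hooks/post-commit"
  cat > "$hook" <<'EOF'
#!/bin/sh
# Only push when the commit landed on main.
[ "$(git rev-parse --abbrev-ref HEAD)" = "main" ] || exit 0
git push origin main >> "$HOME/.cowork-push.log" 2>&1
EOF
  chmod +x "$hook"
}
```

Because the hook runs natively on the Mac, it has the network and credentials the sandbox lacks; the task’s only job is to produce the commit.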
Worktree branches need to land on main first. Claude Code tasks run in isolated git worktree branches, which is good for safety but means any commit made during a task isn’t on main yet. Since the post-commit hook only pushes main, I have to merge the worktree branch back to main before the hook fires and the site deploy triggers. Took me one missed publish to figure that one out.
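Sketched, assuming the task’s branch name is known. One detail worth calling out: I use `--no-ff` here because a fast-forward merge wouldn’t create a new commit, and the post-commit hook only fires on commits; forcing a merge commit guarantees the push happens. That reasoning is my inference, not something the tooling documents:

```shell
# Fold a task's worktree branch back into main. --no-ff forces a
# merge commit so the post-commit push hook actually fires.
merge_task_branch() {
  repo="$1"; branch="$2"
  git -C "$repo" checkout -q main
  git -C "$repo" merge -q --no-ff -m "Merge $branch" "$branch"
}
```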
Sync PRD to Repo (manual)
When I’m working on a product and I’ve written the PRD in the wiki’s Coding Projects/ section, this task finds the PRD markdown file and copies it into the correct git repo under docs/prd/.
The reason this matters: Claude Code reads the docs directory when reasoning about a codebase. If the PRD lives only in Obsidian, Claude Code doesn’t know what we’re building. Sync PRD bridges that gap — product requirements written in plain language in the wiki flow directly into the codebase where they can actually be acted on. Writing a spec and having an AI act on it is supposed to be seamless; this task is the plumbing that actually makes it so.
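The file motion itself is trivial; the value is entirely in where the file lands. A sketch with an assumed naming convention (the real task finds the right PRD and repo itself):

```shell
# Copy a PRD written in the wiki into the repo's docs/prd/ directory,
# where Claude Code will pick it up when reasoning about the codebase.
sync_prd() {
  vault="$1"; prd="$2"; repo="$3"   # prd is a filename under Coding Projects/
  mkdir -p "$repo/docs/prd"
  cp "$vault/Coding Projects/$prd" "$repo/docs/prd/"
}
```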
Dispatch: Mobile Control
All of these tasks run on my desktop Mac, which is where the Obsidian vault is authoritative and where Claude has access to the local filesystem and git repos. But I’m not always at my desk.
The Claude desktop app has a feature called Dispatch that lets you trigger any configured task remotely from your iPhone or iPad. It’s essentially a remote control for the desktop agent. I can open Dispatch on my phone, hit “Publish Blog,” and the desktop runs the task — fetches the file, converts it, commits, pushes — while I go make coffee.
This was the key insight that made the whole system feel real rather than theoretical. A wiki that requires you to be at your desktop is useful. A wiki you can operate from anywhere, on a device that’s always in your pocket, is a different kind of tool. Dispatch closed that gap. I honestly thought I would need OpenClaw to make this happen, and I am glad that I don’t.
The tasks I trigger most from mobile are Publish Blog and Sync PRD — the ones where I’ve done the thinking work on my phone and want to ship it without switching contexts. I can run the others as needed, and I am pretty sure I am just scratching the surface of what I can do with Dispatch. Those adventures will continue.
What’s Next
I am going to Google Cloud Next next week, and I plan to travel around the conference without my laptop. That should be a good pressure test. I also have VNC set up on my desktop, along with Screens 5, and I can VPN into the house from anywhere and take control of my desktop if I need to. So I have a bail-out.
I also have a VDI for work that I can access, so I shouldn’t actually need to get my laptop out. But I will travel with it. My plan is to use a sling bag and the iPad/iPhone to take notes at the conference as a mobile setup.
If this works, it changes my workflow significantly: I can do what I need on the road and create as I go. I am hoping to use the notes I generate to produce some slide decks and blog posts, so stay tuned for those.