Extract Value From the "Process" Layer of Your Notes
Glean's "context graph" framework for corporate information handling has really got me thinking about how to level up my personal knowledge management game.
Arvind Jain, the CEO of Glean (one of those big enterprise AI companies you've probably never heard of unless you're deep in that world), recently published a Twitter article about "context graphs," and buried in the middle of the enterprise pitch is an observation that I think is directionally true for personal knowledge management.
Jain's main point was that most of our tools model what exists. Notes, highlights, documents, tags, folders, links between ideas. But they don't model how things actually happen. They don't capture the sequence of actions, the patterns of your process, the temporal chain of "I read this, then I annotated that, then three weeks later I connected it to something I'd forgotten I knew."
Jain calls this the difference between a knowledge graph and a context graph. And after sitting with the idea for a few days, I think the distinction is useful enough to be worth stealing.
Most PKM Tools are Knowledge Graphs
If you use Obsidian, or Notion, or Capacities, or Roam, or really any note-taking app with linking, what you've built is some version of a knowledge graph. Most of them literally display this in a graph view, but even the ones that don't still have nodes (your notes) and edges (your links between them). You might also have metadata: tags, properties, dates. If you're fancy, you have an index or a map of content (or perhaps several of each, like me).
Such a knowledge graph tells you what you know. It answers questions like "what are all my notes about trophic cascades?" or "which books connect to my article about marriage in the ancient world?" It models the state of your notes at any given point.
This is useful. I've written thousands of words about why making and organizing notes is valuable, and I meant every word. I've built my entire research workflow around the idea that connections between ideas are more valuable than the ideas in isolation, particularly in the sense of comparing moments across time: locust plagues in the ancient world vs. 1900s Palestine vs. modern America, or the diplomatic role of women in various cultures.
But a knowledge graph is inherently static. My vault does have information about how I came to know things: I have git backups, and I link to sources, which is part of why the interlinking is so valuable. What I tend to do with that information is write articles, which end up in another folder. But the graph doesn't capture the process that connects reading to writing, or tell me where that process breaks down.
What a Context Graph Adds
In Jain's framework, a context graph takes the knowledge graph and layers on process. Instead of "this ticket exists," you get "when a top-priority incident happens, someone opens a ticket, an engineer investigates, they escalate if necessary, they document the fix, and here's how long each step usually takes." Actions become first-class entities. Who did what, in which apps, in what order, and with what effect.
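To make "actions as first-class entities" concrete, here is a minimal sketch; the schema (and the sample log) is my own invention for illustration, not Glean's actual data model:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical action record: a knowledge graph stores only nodes and
# edges; a context graph also stores events like these.
@dataclass
class Action:
    actor: str      # who did it
    verb: str       # what they did: "highlight", "annotate", "draft"...
    target: str     # which note, ticket, or document
    app: str        # where it happened
    at: datetime    # when

# A process is then just an ordered sequence of actions, and a question
# like "how long does each step usually take?" becomes a simple query.
log = [
    Action("me", "highlight", "locusts.md", "Reader", datetime(2025, 1, 3)),
    Action("me", "annotate", "locusts.md", "Obsidian", datetime(2025, 1, 20)),
    Action("me", "draft", "locust-article.md", "Obsidian", datetime(2025, 2, 9)),
]
gaps = [(b.at - a.at).days for a, b in zip(log, log[1:])]
print(gaps)  # [17, 20] -- days between consecutive steps
```

Nothing exotic: the only difference from an ordinary notes database is that the rows are verbs with timestamps instead of documents.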
For an enterprise, this makes sense because you want AI to automate workflows. GitHub's former top guy has built an entire new development platform intended to capture AI context better, which is frankly way over my head. But these concepts (as I understand them) should port well to individual knowledge work, where the problem is slightly different: you don't want to automate your thinking, you want to understand it well enough to support it.
In my ideal world, a personal context graph would capture stuff like: "When I read a long nonfiction book, I highlight for weeks, then do nothing for a while, then process all the highlights in one frantic burst, then slowly spin out articles over the next few months." Or: "I tend to abandon notes in my inbox when they sit longer than ten days, but if I touch them within three, they almost always get processed." Or even just: "My best writing happens when I've recently re-read my own annotations, not when I go back to the original source."
None of my current tools capture this. My Obsidian vault can tell me what I've written and when, but it can't tell me how: the actual chain of reading, annotating, connecting, drafting, revising. It can't tell me which processes produce my best work, or where I consistently drop the ball.
That kind of self-knowledge matters well beyond note-taking. The only way I will exercise is if someone whose opinion I care about is watching, and I don't have to think about any of the pieces. It took me decades to learn that. It's not my favorite thing about myself, but knowing it has been immensely valuable for my 2025 goal of developing an exercise habit.
Value Lives in the Process Layer
Jain makes this point for enterprises: "systems of record capture decisions, but the real work happens in meetings, chats, emails, and docs." The same is true for individuals. Your notes capture decisions (what you decided was worth keeping) but the real learning happens in the spaces between. The re-reads, the connections you make at 11pm while brushing your teeth, how you find the good stuff, why you ignored the bad stuff, the slow accumulation of related highlights that eventually tips over into an article idea. Most of that doesn't get recorded anywhere.
I've been circling this problem for a while without having a good name for it. When I wrote about how Claude + MCP solved my organizational problems, part of what I was describing was Claude's ability to infer process from state. When I told Claude to "go through the files in this folder, figure out the patterns, and write a script to put information like A into location B," what I was really asking it to do was reconstruct my process from the artifacts I'd left behind. It did a reasonable job. But it was working backwards from the end product, not from actual traces of what I'd done.
Working backwards from artifacts means Claude has to guess at my intent. Working from actual traces (a log of "she highlighted this passage, then wrote this annotation, then three days later searched for this term, then created this note") would let it understand what I was actually trying to do.
What Personal Context Graphs Look Like Now
The closest thing most individuals have to a context graph is scattered across multiple unconnected tools.
Reading history in Reader (or Goodreads, or whatever) captures some of the temporal dimension. You can see when you read something, how long you spent on it, what you highlighted, whether you reviewed it. There's metadata like "date saved" and "date last updated" that offers hints about your process. Version history in Notion or Obsidian captures when you modified files. Sometimes, the pre-filled search terms that count as "search history" capture what you were looking for and when. Your browser history keeps track (sort of) of what you read and in what order.
But none of these are connected to each other. The reading traces live in one database. The writing traces live in git. The search traces live in your browser or your filesystem. The thinking traces, if they exist at all, live in diaries and annotations, scattered across different sources.
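As a sketch of what connecting those silos could look like, the snippet below merges two kinds of traces into one sorted timeline. The `git log` invocation is real; the highlights export format, and the sample rows, are hypothetical stand-ins:

```python
import subprocess
from datetime import datetime, timezone

def git_events(repo="."):
    """Yield (timestamp, source, detail) for each commit in a repo."""
    out = subprocess.run(
        ["git", "log", "--pretty=%at\t%s"],  # unix time, tab, subject
        cwd=repo, capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        ts, subject = line.split("\t", 1)
        yield datetime.fromtimestamp(int(ts), tz=timezone.utc), "git", subject

def highlight_events(rows):
    """rows: (iso_timestamp, passage) pairs from a highlights export."""
    for iso, passage in rows:
        yield datetime.fromisoformat(iso), "reader", passage

# In a real vault you would chain git_events(".") into the same sort;
# here, two made-up highlights stand in for the reading trace.
timeline = sorted(highlight_events([
    ("2025-02-09T21:00:00+00:00", "diplomatic role of women"),
    ("2025-01-03T09:00:00+00:00", "locust plagues"),
]))
print([detail for _, _, detail in timeline])
# ['locust plagues', 'diplomatic role of women']
```

The hard part isn't the merge, it's that every tool exports (or doesn't export) its traces in a different shape.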
Before I read the context-graph post, it hadn't even occurred to me to think about that sort of information. I know that when I read a nonfiction book quickly I'm more likely to actually finish it and write a review, but I don't have hard numbers. I suspect many of you don't either. But between local-first apps with plugin ecosystems, AI that can read your files, and tools like MCP that let different programs actually talk to each other, the pieces exist. As far as I know, nobody's assembled them yet.
From "What" to "How" in Practice
Jain's core argument is that we need to shift from modeling what exists to modeling how change happens. For personal knowledge management, I think this translates into examining the efficacy of our methods and habits more than the tidiness of the first-class information (or even the metadata). When did you last review how many days typically pass between highlighting and processing? Between processing and writing? Between writing and publishing? I certainly hadn't, until I started poking at this.
And it's not just timing. Links between notes are static: they tell you that two ideas are related, but not the order in which you encountered them, or the path you took from one to the other, or the detours along the way. Obsidian's graph view shows connections, but doesn't elevate the age of a document, or how many times it's been read or edited since creation. What would a "process view" show? I genuinely don't know, but it's fun to think about.
Glean builds process models by analyzing many users doing similar work. You only have one user (yourself), but you probably have years of data. I have git commits, highlight timestamps, file modification dates. I could, right now, probably reconstruct a rough timeline of every article I've ever written and identify patterns in how long each phase took. There were years I only wrote one article, and months where I never missed a deadline. The reasons why (social obligations, pregnancy issues) live in my head, but I never really wrote them down.
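That rough reconstruction really is only a small script away. This sketch estimates each article's writing span from git history alone, on the assumption (mine, not a rule) that the first commit touching a file marks the start of drafting and the last marks the final revision; the vault layout implied by the `.md` filter is hypothetical:

```python
import subprocess

def spans_from_log(log_text):
    """Parse `git log --name-only --pretty=%at` output into
    {path: (first_commit_ts, last_commit_ts)}."""
    spans, ts = {}, None
    for line in log_text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.isdigit():
            ts = int(line)  # a commit's unix timestamp
        elif line.endswith(".md"):
            first, last = spans.get(line, (ts, ts))
            spans[line] = (min(first, ts), max(last, ts))
    return spans

def article_days(repo="."):
    """Days between first and last commit for every .md file."""
    out = subprocess.run(
        ["git", "log", "--name-only", "--pretty=%at"],
        cwd=repo, capture_output=True, text=True, check=True,
    ).stdout
    return {path: (last - first) / 86400
            for path, (first, last) in spans_from_log(out).items()}

# On a real vault, article_days(".") would give something like
# {"articles/locusts.md": 37.2, ...} -- one phase length per article.
```

It misses everything that happened outside git, of course, which is rather the point of the whole post.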
But what helps me succeed in my goals is probably the most important information my notes could possibly tell me!
The part of Jain's framework I find most interesting for individuals is what they call closing the loop. When their AI agents run, the traces from those runs get fed back into the context graph. Successful patterns get reinforced; failure patterns get flagged. In my own Claude Code setup, I have built a strong habit of updating memory, skills, and rules every time something goes awry.
The same principle should apply to personal workflows: when you try a new process and it works, capture that fact somewhere explicit. When something fails, capture that too. Right now, most of us rely on meatspace memory and a vague sense of what works for us, even when we manage a strong daily log or diary habit.
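Capturing "that worked / that failed" somewhere explicit can be as low-tech as appending one line to a log file. The file name and fields below are my own invention, a sketch of the habit rather than any established tool:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# One line per experiment, appended to a plain-text log: the personal
# version of feeding run traces back into the graph.
LOG = Path("process-log.jsonl")

def record_outcome(process, worked, note=""):
    """Append a single process-outcome record and return it."""
    entry = {
        "when": datetime.now(timezone.utc).isoformat(),
        "process": process,  # e.g. "batch-process highlights on Sunday"
        "worked": worked,    # True if the experiment helped
        "note": note,        # why, in one sentence
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# record_outcome("touch inbox notes within 3 days", True,
#                "everything got processed instead of abandoned")
```

Because it's JSONL, anything that can read your files (including an AI assistant) can later mine it for patterns.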
What Would It Take to Build One?
What I'd want (and I think other PKM enthusiasts might want too) is something that watches the flow across all of these tools and surfaces patterns... in a way that doesn't feel like being constantly monitored by a creepy corporate surveillance net. "Here's what your last month of knowledge work actually looked like. Here's where your process broke down. Here's what your most productive weeks had in common." sounds amazing!
Glean can (probably, I dunno, I've never used it) build this for enterprises because they control the connectors and the data layer. For individuals, the most promising path is probably some combination of local-first tooling (like Obsidian's plugin ecosystem) and AI that can reason about the traces you already generate. Claude Code already sits in a position where it can read my notes, see my git history, track my archive of accomplishments, and query my Reader highlights. What it can't (yet?) do is build a persistent model of my process from all that data. What I've been trying to do instead is build a habit of saying "go look at what I changed, compare it to before, and gain some insight from how I did that," because it's easy, even if it's manual. I don't trust an automated flywheel, however much everyone raves about Clawdbot.
I prefer manual command invocations, coupled with a regular report about what's happening. I keep hoping someone will figure out the tooling, or that I'll have some extra time on a weekend to try hacking something together, but for now it's still a gap.
The Gap Between Knowledge and Action
Separately, there's a deeper implication of the context graph framework that I want to emphasize. Jain distinguishes between knowledge (what exists) and process (how things happen). But there's a third layer that I think matters even more: intent. Why did you do what you did? What were you trying to accomplish?
In an enterprise context, intent is usually obvious (resolve the ticket, close the deal, ship the feature; or, more clearly stated: profit). In personal knowledge management, intent is more varied and harder to pin down. Sometimes I'm reading because I want to write an article. Sometimes I'm just curious. And sometimes (honestly, more often than I'd like to admit) I'm killing time and happened to find something interesting. These different intents produce different processes, and a good context graph would need to account for that.
This is why I'm skeptical of fully automated approaches. The tools that try to "organize your notes for you" tend to fail because they can't distinguish between intent categories. They don't know whether a highlight means "this is important to my research" or "this was a funny quote I wanted to send to my friend" or "I'm giving feedback on a friend's book and do not ever want this book resurfaced again." Sure, I can flag that manually, but it's a pain. Context graphs won't solve this problem entirely; you still need to leave breadcrumbs about why you did something, not just what you did. Tags, annotations, metadata, and (of course) folders will continue to matter.
But I keep coming back to the shift from "what" to "how." I'm not sure what to do with that yet, and I kind of hate the idea of writing everything down so an AI can tell me how to optimize my life even more. So far, AI is not reducing my work, it's intensifying it, and I'm still trying to figure out the best way to avoid burning out chasing the dream of being able to do so much more than I ever could before.
So if you have any great ideas about how to leverage automated processes for that, please let me know in the comments!

I'm actively avoiding using "AI" tools, but still found this post interesting.
Apart from metadata, search history, etc., much of my "how did I get here?" data will be recorded in an unstructured way in my (electronic) work Journal, where I record my current thinking, what I plan to do about it, and the immediate outcomes. Other people's might be found in their "Daily Notes".
I never read through my past Journal entries, as I use them for "thinking out loud" to myself to overcome current issues, but perhaps I should be reviewing them occasionally. (That said, I can't imagine anything more off-putting than reading a historical litany of seldom-met good intentions!)
Over the last year or so I have done some extensive work with Claude for my TTRPG gamemaster and player notes. Most recently I had it create a skill that reads my last player session note and from that creates new atomic notes, either from dead links I created or from inferred meaning. It has a copy of my live vault, compares existing notes, and determines if there is a need for updating or creating something new. I gave it no real instructions about what to do, but it puts in a header with the session number and writes a small recap from that session in the respective note (NPC, location, item, lore, faction), and it puts in links to other notes that mention the note and updates metadata. It even created new metadata for me that fit the vault's structure. As a player that saves me time: I don't have to go through my session notes creating new notes from all my dead links, filling out templates, etc. So I have actually found something where AI gives me less work to do :) but I think it's akin to innovation. You need an idea and you need to iterate through it to get anywhere. Much like other automation, it takes a lot of work to set it up just right, but when you get out on the other side of that, that's where it begins helping. The "setup" is iterating through ideas and instructions to see what works and how, until you distill the correct process for that specific idea.