pdxmph


Notes on asystem


With a few nips and tucks and some stuff I held back from daily workflows, I got to use asystem a little this evening. Good first pass.

Recap:

Each tool has two aspects: a TUI app built to suit my preference for keyboard-centric interaction, and a CLI that can emit JSON for all the objects it manages and take batch commands. The former is so a human can use these tools like TUI productivity apps; the latter is so agents can efficiently search all the data, make programmatic changes, additions, and removals, and insert relationships between the objects each tool manages.
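As a sketch of how an agent might use the CLI side — the tool name `atask`, its flags, and the JSON shape here are all my illustration, not the actual interface:

```python
import json

def open_tasks(raw_json: str) -> list[dict]:
    """Filter a JSON task dump down to open items.

    `raw_json` stands in for what a hypothetical `atask list --json`
    invocation might emit: a JSON array of task objects.
    """
    tasks = json.loads(raw_json)
    return [t for t in tasks if t.get("status") == "open"]

# Agent-side, this would pair with something like (hypothetical CLI):
#   raw = subprocess.run(["atask", "list", "--json"],
#                        capture_output=True, text=True).stdout
#   todo = open_tasks(raw)
```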

These are all glued together at the agent level by SKILL.md files for each tool that explain how the tools work and how they are meant to relate to each other.
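A SKILL.md for one of the tools might look roughly like this — the structure is my guess at the shape, not the actual files:

```markdown
# atask — agent skill

## What it does
Manages tasks as Markdown files with YAML frontmatter.

## Commands
- `atask list --json` — emit all tasks as JSON
- `atask add "title"` — create a task

## Relationships
Tasks may reference people (apeople) and notes (anote) via
relationship arrays in their frontmatter; follow those links
when a task becomes the topic of a conversation.
```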

They’re all used together by a LocalGPT instance.

I work with LocalGPT through Slack because it gives me structured channels. LocalGPT doesn’t naturally do Slack, so I’ve got a bot running locally that hits the LocalGPT HTTP API. Each Slack channel spawns a new instance of a LocalGPT agent running as a bot in that channel. The bot can see each channel’s canvas or pinned messages and hand them to LocalGPT as its topmost directive if the pinned message starts with instructions. For instance, here’s the #daily channel:

[instructions]

**Daily Channel Protocol**

This channel operates in **three modes**. Between triggers, it defaults to **log mode**.

---

**Log Mode (default)**
Drop anything in here throughout the day — ideas, reactions, tasks, stray thoughts. No formatting required. I'll acknowledge briefly and hold. No analysis, no riffing, no engagement. Just capture.


**"start of day"** → Planning mode. Pulse check, calendar scan, task triage, daily intention, notebook prompt.

**"end of day"** → First triages the log (promotes entries to anote for ideas and atask for actions, flags process signals, updates records in apeople when they're mentioned), then runs synthesis: what happened, open threads, tomorrow's anchor, energy debrief.

**"process the log"** → Standalone log triage. Same promotion steps as end-of-day, without the full close-out. Use mid-day to clear the buffer.

---

**Key behaviors:**
- In log mode, responses are one line max. Catch and hold.
- Meaning comes later — during triage and end-of-day, not mid-stream.
- Full protocol details: daily-protocols.md in memory.

So during the day, I can just go to the #daily channel and type a quick note naming people and actions. The logging protocol dumps that into a text file so the context survives a LocalGPT daemon restart or context cleanse, then goes on passively listening. When triggered, it goes through the file and processes it.
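The capture half of that is deliberately dumb: append a timestamped line, read the lines back at triage time. A minimal sketch, assuming the log lives in a file like this one (the path and line format are my invention):

```python
from datetime import datetime
from pathlib import Path

LOG_FILE = Path("daily-log.md")  # assumed location, not the agent's real path

def capture(entry: str, log: Path = LOG_FILE) -> None:
    """Append a timestamped line; meaning is assigned later, at triage."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    with log.open("a", encoding="utf-8") as f:
        f.write(f"- [{stamp}] {entry.strip()}\n")

def drain(log: Path = LOG_FILE) -> list[str]:
    """Read entries back for 'process the log' or end-of-day triage."""
    if not log.exists():
        return []
    return [line for line in log.read_text(encoding="utf-8").splitlines()
            if line.startswith("- ")]
```

Because capture is append-only, nothing is lost if the daemon restarts mid-day; triage just drains whatever accumulated.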

Today, for instance, it took a recap of a meeting with an internal stakeholder, updated his contact record, and associated him with new tasks reminding me to look into some Cloudflare features. It also captured observations about a call with our privacy counsel, and added an item to chat with my boss about next steps there.

I make all this easier by using Wisprflow: I just go to the channel, press ctrl + opt, dictate a quick note between meetings, and it’s captured. No typing or visiting individual tools in the system. In the event of agent failure, well, there’s a Slack record I can go visit.

All the notes in the system use Markdown with YAML frontmatter, including relationship arrays in the YAML. As associations are made between people, tasks, and ideas, the agent knows to look for them when one of those things becomes a topic, stitching together context from the relationships in the frontmatter and from scanning the interaction logs.
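The frontmatter might look like the string below; the field names and link format are illustrative, not the actual schema, and the parse is a naive stdlib-only sketch (real YAML wants a real parser):

```python
FRONTMATTER = """\
---
title: Cloudflare feature review
type: task
related:
  - apeople/some-stakeholder
  - anote/cloudflare-observations
---
Body of the note goes here.
"""

def related_links(doc: str) -> list[str]:
    """Pull the `related:` array out of YAML frontmatter.

    Line-based and deliberately naive; production code should
    use a YAML library instead.
    """
    lines = doc.splitlines()
    if not lines or lines[0] != "---":
        return []
    links, in_related = [], False
    for line in lines[1:]:
        if line == "---":
            break
        if line.startswith("related:"):
            in_related = True
        elif in_related and line.lstrip().startswith("- "):
            links.append(line.lstrip()[2:].strip())
        elif in_related and not line.startswith(" "):
            in_related = False
    return links
```

Each link names another object in the system, so when a task comes up the agent can chase `apeople/…` and `anote/…` entries for context.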

… and all this context is there at the beginning of the day during my morning timeblocking ritual:

The agent retrieves my calendar, checks the invites against people and projects it knows about, and gives me a small cheat sheet of meeting agendas. Then it looks at my live projects and tasks, their priority and estimated effort, and helps me block work into my discretionary time.
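That last step is a small packing problem. A greedy sketch — the `priority` and `effort` fields are assumptions about the task schema, and a real planner would weigh more than these two numbers:

```python
def block_time(tasks: list[dict], free_minutes: int) -> list[dict]:
    """Greedily fill discretionary time: highest priority first,
    smallest effort as the tiebreaker."""
    ranked = sorted(tasks, key=lambda t: (-t["priority"], t["effort"]))
    plan, remaining = [], free_minutes
    for task in ranked:
        if task["effort"] <= remaining:
            plan.append(task)
            remaining -= task["effort"]
    return plan
```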

Ambitions

I’ve been keeping an eye on a vibecoder at work who is starting to brush up against the limits of Google’s AI Studio. Its integration picture is poor, but it does crank out nice-looking little web apps with Gemini baked in. I did a quick sprint on Friday to implement the guts of asystem as a little PoC app I named Hecubus. When I hit my own walls with AI Studio, I learned that if you’re desperate you can store JSON on Google Drive to feed a chat agent embedded in a Studio app. I’m assuming I could store Markdown that way too, and build a web front end for all this, leveraging Gemini for the LLM.