# Plumbing the pieces of asystem
I’ve spent the last week evolving asystem at a pretty quick pace. The three apps in the suite (atask, apeople, and anote) all got a common library to cover their shared functionality, and a collection of new API stuff that makes it easier to cross-link objects between the three.
The goal has been to keep the TUIs I like working with, but make the tools better for use with an agent of some kind. Each comes with a SKILL.md that helps LocalGPT understand how to interact with it, and LocalGPT has some protocols in place to use the skills for daily routine stuff for me.
I also put a React front-end on the whole thing, so there’s a web interface available over Tailscale that gives me a GUI to access my stuff from wherever. It’s all still Markdown and YAML underneath, but simple to manage from a phone.
I’ve built a few workflows with all these pieces to help me plan my week and day. asystem goes out and grabs my calendar, looks at my task/project list, and scans my meetings for who the participants are. I time block my hard meeting times, then dialog back and forth about what to prioritize when and schedule that into the day.
In the React front-end, there’s a daily page that reflects any tasks I’ve signed up for, and it also tosses in links to the contacts asystem found in my calendar for the day. Since tasks, projects, ideas, and people are all cross-linkable, I’ve pretty much got my agenda there in the app, whether it’s a list of tasks or some ideas I’ve been kicking around that relate to that person.
Getting a web front-end built has taken a lot of the pressure off changing the back end, because I don’t really need a database with this arrangement. If I want something tappy/clicky from my iPad or phone, I’ve got aweb. If I want a TUI, those apps are still there from ghostty or Blink. If I just want to edit a file, I can. The underlying execution from any entry point is just the individual CLI apps, which ingest and emit JSON when an agent is using them but present me with a TUI when I want one.
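That dual-mode behavior can be sketched pretty simply: check whether stdout is a terminal and pick the output format accordingly. This is a minimal illustration, not asystem's actual code, and the task data here is made up.

```python
import json
import sys


def list_tasks():
    # Hypothetical task data; the real apps read it out of Markdown/YAML files.
    return [
        {"id": "t-001", "title": "Draft weekly plan", "status": "open"},
        {"id": "t-002", "title": "Review PR feedback", "status": "done"},
    ]


def main():
    tasks = list_tasks()
    if sys.stdout.isatty():
        # Interactive terminal: render a human-friendly view
        # (a stand-in for a real TUI).
        for t in tasks:
            print(f"[{t['status']:>4}] {t['id']}  {t['title']}")
    else:
        # Piped or captured by an agent: emit machine-readable JSON.
        json.dump(tasks, sys.stdout, indent=2)


if __name__ == "__main__":
    main()
```

Run it by hand and you get the list view; pipe it into `jq` or an agent and you get JSON, with no flags to remember.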
A lot of the fussing with this right now is about figuring out how much structure to provide agents to do useful things for me without ending up just writing the procedural code. By putting tasks, contacts, and ideas in hybrid Markdown/YAML files, there’s plenty of structured data to remove some kinds of ambiguity, but enough room for prose and personal notes that it’s still useful and familiar to me. The SKILL.md scaffolding makes sure the agents use the tools the way they’re meant to be used.
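The hybrid format is just YAML frontmatter on top of free prose. Here’s a rough sketch of splitting such a file into structured metadata and body text; the file layout and field names are assumptions, and a real implementation would hand the header to a proper YAML parser rather than this simplistic key/value split.

```python
# A made-up example of a hybrid task file: structured fields up top,
# free-form notes underneath. Field names are hypothetical.
SAMPLE = """\
---
title: Ship the aweb daily page
status: open
project: asystem
---
Some free-form notes about the task live down here,
cross-linked however I like.
"""


def split_frontmatter(text):
    """Return (metadata dict, prose body) from a frontmatter-style file."""
    if not text.startswith("---\n"):
        return {}, text
    _, _, rest = text.partition("---\n")
    header, _, body = rest.partition("---\n")
    meta = {}
    # Naive key: value parsing; enough for flat frontmatter like the sample.
    for line in header.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta, body
```

An agent works against the `meta` dict; I get to keep the prose underneath it.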
I’ve added other scaffolding in the form of LocalGPT’s MEMORY.md file, which contains an org chart, and relationship: tags in apeople that help guide the agent’s recommendations during planning. I took about 30 minutes to go through and classify everyone at work with a few flavors of relationship, so the agent doesn’t recommend rescheduling the wrong people, and it can tell from a given contact whether I have any open business with them that might encourage me to keep the current time before pushing back.
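The rescheduling logic that falls out of those tags can be sketched as a simple filter. The relationship values and contact fields below are invented for illustration; they aren’t apeople’s real schema.

```python
# Hypothetical contact records as they might look after parsing apeople's
# frontmatter: a relationship: tag plus any open items with that person.
CONTACTS = [
    {"name": "Alice", "relationship": "report", "open_items": ["1:1 agenda"]},
    {"name": "Bob", "relationship": "peer", "open_items": []},
    {"name": "Carol", "relationship": "manager", "open_items": ["quarterly review"]},
]


def reschedule_candidates(contacts):
    """Contacts it's safe to suggest rescheduling: not reports or managers,
    and with no open business that argues for keeping the current slot."""
    protected = {"report", "manager"}
    return [
        c["name"]
        for c in contacts
        if c["relationship"] not in protected and not c["open_items"]
    ]
```

With the sample data, only Bob comes back as a candidate: Alice is a report, and Carol both outranks me and has open business pending.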
I fed the whole thing a few personality tests and used those to work through a map of my work habits and patterns, which also sits in LocalGPT’s memory. It looks out for tasks that aren’t getting done and prods a little, and because it has the apeople map with my read on everybody around me, it offers a little “how do you want to show up for this” guidance during the daily overview, so I get reminders like “this is one of your reports, so keep it concrete and not too conceptual.” If I feed it work I need to review from someone, it uses the apeople log and anote idea mapping to draw my attention to areas I should be on the lookout for, or where I could give better feedback. And during the day, I just drop log entries into it; at the end of the day it goes through them and processes each as either a potential next step (an aspiration) or something I might or might not know (a belief), making links between ideas, tasks, projects, and people that further inform how it surfaces stuff and what it recommends.
So slowly there is some stuff emerging from all the linked context across people, tasks, and ideas. Nothing profound. I am not in communion with another mind, I’m just finding a state where the LLM can do helpful things, and is operating in such an introspectable, finite web of information that it doesn’t have room to invent shit but is instead reduced to catching associations and surfacing them in the form of little reminders, or reducing the number of ideas I have to say “no” to as I plan my day.
Oh, helpfully, Claude Code also has access to all the asystem SKILL files, and CLAUDE.md has some light guidance on them. So when I drop an idea into anote about how to evolve the system, I can reference it in a Code session and that helps drive feature development. This part isn’t as rich as I’d like it to be, because the CLAUDE.md instructions and skills aren’t really written around “have a holistic understanding of your operator and his context,” but the plumbing is there if I care to get around to figuring out the prompt.
So … a fun turn of the crank on vibe coding this time. Last time around, last year, I built some TUI tools that were fun little throwback exercises. This time around, the more interesting part is all the connections I can make with those tools.