Design compaction strategy for Omni/Agent coding agent

t-760 · WorkTask · Created 1 week ago · Updated 1 week ago

Description


Context: we're switching persistent coding agents off pi and onto our own Omni/Agent binary (epic t-759). Pi's compaction strategy is reactive — it triggers on the inference provider's 'out of context' / 'request too big' API error and compacts the history in place. The result: Ben never has to think about context windows; it just works.
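For reference while designing, the reactive pattern is roughly: run the request, and only when the provider rejects it for size, compact and retry. A minimal sketch — `ChatMessage`, `callModel`, `compact`, and the error-matching regex are all assumptions here, not pi-agent-core's actual API:

```typescript
// Hypothetical reactive-compaction loop. The error classifier matches on
// message text because providers signal context overflow inconsistently;
// the real trigger spec (see design work below) should classify properly.

type ChatMessage = { role: "system" | "user" | "assistant" | "tool"; content: string };

// Classify a provider error as context overflow (assumption: text matching).
function isContextOverflow(err: unknown): boolean {
  const msg = err instanceof Error ? err.message : String(err);
  return /context length|out of context|request too (big|large)|maximum context/i.test(msg);
}

async function runWithCompaction(
  history: ChatMessage[],
  callModel: (h: ChatMessage[]) => Promise<string>,
  compact: (h: ChatMessage[]) => Promise<ChatMessage[]>,
): Promise<{ reply: string; history: ChatMessage[] }> {
  try {
    return { reply: await callModel(history), history };
  } catch (err) {
    if (!isContextOverflow(err)) throw err; // only compact on size errors
    const compacted = await compact(history); // summarize history in place
    return { reply: await callModel(compacted), history: compacted };
  }
}
```

The key property is that compaction is invisible to the caller: the loop returns the (possibly compacted) history, which is what would need to thread through Op state.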

Our current approach in Omni/Agent (and the wider ContextRequest → PromptIR → Prompt pipeline) is dynamic/adaptive context assembly, which is structurally different. For a simple coding agent loop, a flat linear history with good reactive compaction may actually be better than adaptive assembly.

This task is design first, implementation second.

Design work needed:

  • Decide whether the coding agent uses linear+compaction or adaptive context.
  • If linear+compaction: spec the compaction trigger (API error classification), the compaction prompt/strategy, what gets preserved (system prompt, recent turns, tool results), what gets summarized, and how state threads through Op.
  • How this interacts with Op free monad state threading (t-759.1 landed persistent state via asMessages).
  • How it interacts with the ContextRequest → PromptIR → Prompt pipeline — is compaction a PromptIR concern or an Agent loop concern?
  • Reference: pi-agent-core's compaction implementation (read the source to understand the trigger + prompt).
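To anchor the "what gets preserved / what gets summarized" discussion, a hedged sketch of one candidate policy: keep the system prompt and the last N messages (including tool results) verbatim, and replace everything in between with a single summary message. All names are assumptions for the design doc, and the summarizer is injected so the sketch stays model-agnostic:

```typescript
// Hypothetical compaction policy for the linear-history option.
// Preserved: system messages, last `keepRecent` messages verbatim.
// Summarized: everything in between, via one summarize() call
// (in practice an LLM call with a dedicated compaction prompt).

type Msg = { role: "system" | "user" | "assistant" | "tool"; content: string };

async function compactHistory(
  history: Msg[],
  summarize: (dropped: Msg[]) => Promise<string>,
  keepRecent = 6, // tuning knob: how many recent messages survive verbatim
): Promise<Msg[]> {
  const system = history.filter((m) => m.role === "system");
  const rest = history.filter((m) => m.role !== "system");
  if (rest.length <= keepRecent) return history; // nothing worth compacting
  const dropped = rest.slice(0, rest.length - keepRecent);
  const recent = rest.slice(rest.length - keepRecent);
  const summary: Msg = {
    role: "assistant",
    content: `[Compacted ${dropped.length} earlier messages]\n${await summarize(dropped)}`,
  };
  return [...system, summary, ...recent];
}
```

Whether pi-agent-core actually does it this way is exactly what the source read should confirm; the real design may preserve tool-call pairing or compact proactively below the trigger threshold.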

Acceptance criteria:

  • Design doc (markdown) in Omni/Agent/ or Biz/ with the decision, rationale, and a concrete plan.
  • Explicit callout of how it differs from / integrates with existing dynamic context work.
  • Follow-up implementation task filed once the design is approved.

Dependencies: t-759 epic (persistent agent in Omni/Agent).
