The Prompt IR infrastructure is complete (t-477). This task is to wire it into Ava's Telegram handler and add observability to validate that it works in practice.
| Module | Purpose |
|--------|---------|
| Omni/Agent/Prompt/IR.hs | Core types: Section, ToolDef, ContextRequest, CompositionMode, Priority |
| Omni/Agent/Prompt/Hydrate.hs | ContextRequest → PromptIR via context sources |
| Omni/Agent/Prompt/Compile.hs | PromptIR → CompiledPrompt with budget enforcement |
| Omni/Agent/Prompt/MemorySources.hs | Memory-backed context sources (temporal, semantic, knowledge) |
| Omni/Agent/Op.hs | Infer takes ContextRequest, InferRaw for legacy |
| Omni/Agent/Interpreter/Sequential.hs | Handles hydration + compilation |
```
infer(model, ContextRequest)
  → hydrate (using MemorySources)
  → PromptIR (labeled sections + metadata)
  → compile (budget enforcement, priority ordering)
  → LLM call
```
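The flow above can be sketched with simplified stand-in types. This is illustrative only: the real Section, PromptIR, and CompiledPrompt in Omni/Agent/Prompt carry more metadata (priorities, composition modes), and the hydrate/compile signatures here are minimal assumptions, not the actual API.

```haskell
-- Simplified stand-ins for the IR pipeline types (hypothetical; the real
-- modules under Omni/Agent/Prompt/* are richer).
data Section = Section
  { secId     :: String  -- label, e.g. "system" or "memories.semantic"
  , secTokens :: Int     -- estimated token cost
  , secBody   :: String
  } deriving (Show)

newtype PromptIR = PromptIR { irSections :: [Section] }

-- hydrate: resolve context sources into labeled sections.
-- Here each "source" is just an IO action producing one section.
hydrate :: [IO Section] -> IO PromptIR
hydrate sources = PromptIR <$> sequence sources

-- compile: enforce a token budget, walking sections in priority order and
-- dropping any section that no longer fits.
compile :: Int -> PromptIR -> [Section]
compile budget (PromptIR secs) = go budget secs
  where
    go _ [] = []
    go left (s : rest)
      | secTokens s <= left = s : go (left - secTokens s) rest
      | otherwise           = go left rest  -- dropped: over budget
```

With a 100-token budget and sections costing 50, 80, and 40, the middle section is dropped and the cheaper later one still fits, which is the "sections dropped under pressure" behavior the task wants to observe.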
Ava currently builds context manually in Telegram.hs:

- Memory.getAdaptiveContext for conversation history
- Memory.recallMemories for user memories

This works but lacks what the IR pipeline provides: labeled sections, budget enforcement, priority ordering, and observability into what was retrieved.
1. Import the MemorySources module.
2. Build a HydrationConfig using:
   - memoryTemporalSource userId chatId
   - memorySemanticSource
   - memoryKnowledgeSource userId
3. Pass it to SeqConfig via seqHydrationConfig.
Key function to modify: processEngagedMessage in Telegram.hs
Add logging/tracing to see what context is retrieved:

1. Log hydration results.
2. Add to PromptMeta:
   - pmSectionsIncluded :: [Text] - section IDs that made it into the compiled prompt
   - pmSectionsDropped :: [Text] - section IDs dropped for budget
   - pmRetrievalLatencyMs :: Int - time spent in hydration
3. Emit trace events: Trace.EventCustom for hydration stats.

Rollout plan:

1. Feature flag: add a config option to enable/disable the new context system.
2. A/B comparison: log both old and new context (without using the new one) to compare.
3. Full switch: once validated, remove the old code path.
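A toy version of the budget pass shows how the proposed pmSectionsIncluded / pmSectionsDropped fields could be populated. The budgetReport name and the (sectionId, tokenCount) pair representation are illustrative assumptions, not the real Compile.hs API:

```haskell
-- Hypothetical mini budget pass: given a token budget and sections in
-- priority order as (sectionId, tokenCount) pairs, report which IDs were
-- included and which were dropped -- the data the PromptMeta fields carry.
budgetReport :: Int -> [(String, Int)] -> ([String], [String])
budgetReport budget = go budget [] []
  where
    go _ inc drp [] = (reverse inc, reverse drp)
    go left inc drp ((sid, n) : rest)
      | n <= left  = go (left - n) (sid : inc) drp rest  -- fits: include
      | otherwise  = go left inc (sid : drp) rest        -- too big: drop
```

Logging both lists per message makes budget pressure directly visible in traces, which is what success criterion 4 asks for.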
```haskell
-- In Omni/Agent/Telegram.hs, around line 1380
-- Currently:
let seqConfig = (Seq.defaultSeqConfig provider seqToolsAllowed)
      { Seq.seqGetIteration = ...
      , Seq.seqOnEvent = ...
      }

-- Change to:
let hydrationCfg = MS.buildHydrationConfig
      systemPrompt
      tools
      [MS.mkTimeSection now tz, MS.mkProjectSection proj dir]
      (Memory.unUserId uid)
      chatId
let seqConfig = (Seq.defaultSeqConfig provider seqToolsAllowed)
      { Seq.seqHydrationConfig = Just hydrationCfg
      , Seq.seqGetIteration = ...
      , Seq.seqOnEvent = ...
      }
```
```haskell
-- In Sequential.hs, after hydration (around line 145)
Free (Op.Infer model contextReq k) ->
  case seqHydrationConfig config of
    Just hydrationCfg -> do
      t0 <- Time.getCurrentTime
      promptIR <- Hydrate.hydrate hydrationCfg contextReq
      -- NEW: Log hydration stats
      let stats = summarizeHydration promptIR
      seqOnEvent config (Trace.EventCustom "hydration" stats t0)
      compiled <- Compile.compile promptIR
      ...
```
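summarizeHydration is referenced above but not defined in the snippet. One plausible shape, using hypothetical stand-in types (the real PromptIR lives in Omni/Agent/Prompt/IR.hs and would likely use Text rather than String):

```haskell
-- Hypothetical stand-ins; the real types carry more metadata.
data Section = Section { secId :: String, secTokens :: Int }
newtype PromptIR = PromptIR { irSections :: [Section] }

-- Fold the hydrated sections into a compact stats string, a reasonable
-- payload shape for a Trace.EventCustom "hydration" event.
summarizeHydration :: PromptIR -> String
summarizeHydration (PromptIR secs) =
  "sections=" ++ show (length secs)
    ++ " tokens=" ++ show (sum (map secTokens secs))
    ++ " ids=" ++ show (map secId secs)
```

Keeping the summary to counts, total token estimate, and section IDs keeps trace volume small while still showing what context was retrieved per message.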
1. Ava uses the new context system for all conversations.
2. Logs show what context was retrieved and used.
3. No regression in response quality (subjective, observed over ~1 week).
4. Budget enforcement works (can observe sections being dropped under pressure).
Implementation complete. Commit 6043c96.
Changes:

1. Bridge.hs: added runAgentWithHydration function.
2. Sequential.hs: InferRaw now injects hydrated context when HydrationConfig is set.
3. Hydrate.hs: added hydrateWithObservation convenience function.
4. Telegram.hs: builds HydrationConfig and calls runAgentWithHydration.
Observability:
Ready for testing in ava worktree.