Design: leverage pure cognitive compute in Bayesian prompt framework

t-400 · WorkTask · Created 1 month ago · Updated 4 weeks ago

Description

Explore how the think+execute agent pattern fits into and amplifies the Bayesian prompt framework.

Context

We have:

  • agent: pure Bayesian inference (think) + tools (execute)
  • agentd: orchestrator for distributed agent execution
  • Free monad: composable representation of agent programs (sketched below)
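
As a point of reference for the questions below, here is a minimal sketch of that free monad, assuming a two-instruction functor. AgentF, Think, Execute, and the payload types are illustrative names, not the actual implementation:

{-# LANGUAGE DeriveFunctor #-}
import Control.Monad.Free (Free, liftF)

type Prompt = String
type Observation = String
data Action = UseTool String | Respond String

-- One instruction layer: think (pure Bayesian inference) or execute (a tool).
data AgentF next
  = Think Prompt (Action -> next)         -- sample an action from the posterior
  | Execute Action (Observation -> next)  -- run a tool, observe the result
  deriving Functor

-- An agent program is the free monad over AgentF: composable, inspectable data.
type Agent = Free AgentF

think :: Prompt -> Agent Action
think p = liftF (Think p id)

execute :: Action -> Agent Observation
execute a = liftF (Execute a id)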

Key Insight

The agent is itself a prompt processor (the loop is sketched after this list):

  • Input: prompt (prior) + context (observations)
  • Output: actions (samples from posterior)
  • Loop: action -> observation -> update -> action
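
Building on the free-monad sketch above, a minimal version of this loop; agentLoop and update are illustrative names, and update just appends the observation, standing in for real context construction:

agentLoop :: Prompt -> Agent Action
agentLoop prior = do
  action <- think prior              -- sample from the posterior
  case action of
    Respond _ -> pure action         -- terminal action: done
    UseTool _ -> do
      obs <- execute action          -- act, then observe
      agentLoop (update prior obs)   -- today's posterior is tomorrow's prior

-- Stand-in for real context construction: fold the observation into the prompt.
update :: Prompt -> Observation -> Prompt
update p o = p ++ "\n[observation] " ++ o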

Questions to Explore

1. Agent as Compression

Can an agent compress a task into a trace?

  • Long spec -> agent runs -> short action sequence
  • The trace is a 'compiled' form of the spec
  • Replay trace = execute without re-inference

2. Agent as Factorization

Can agents discover parallelism dynamically?

  • Agent realizes sub-tasks are independent
  • Spawns parallel sub-agents
  • Collects and combines results
  • Self-factorization at runtime

3. Agent as Analysis

Can an agent analyze prompts for other agents?

  • Meta-agent that predicts behavior of prompts
  • 'Will this prompt cause unsafe tool use?'
  • 'What resources will this prompt need?'
  • Agent as prompt type-checker

4. Agent as Composition

Can agents compose themselves? (See the sketch after this list.)

  • Agent A has skill X, Agent B has skill Y
  • How do they collaborate?
  • Shared context = shared posterior?
  • Message passing = belief propagation?
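
A hedged sketch of one answer, reusing the Agent monad and update from the sketches above; the skill strings are placeholders. Because Agent is a monad, sequential collaboration is just bind, and "message passing" is threading one agent's observation into the other's prior:

-- Hypothetical: A and B share a context; A's observation updates the shared
-- prior before B thinks, so "shared context" behaves like a shared posterior.
collaborate :: Prompt -> Agent Observation
collaborate shared = do
  actA <- think (shared ++ "\nYou have skill X.")   -- Agent A acts first
  obsA <- execute actA
  actB <- think (update shared obsA ++ "\nYou have skill Y.")  -- B conditions on A's result
  execute actB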

5. Recursive Structure

The agent uses prompts, which are analyzed by agents, which use prompts...

  • Where does this bottom out?
  • Is there a fixed-point?
  • Can we bootstrap increasingly sophisticated prompt analysis?

Notes

This is design exploration, not implementation. The goal is to identify the highest-leverage applications of think+execute in the Bayesian framework.

Timeline (2)

💬 [human] · 4 weeks ago

Connection to Prompt IR (from t-477 design session)

The Prompt IR and dynamic context construction relate directly to several of your exploration questions:

Agent as Compression

The trace of an agent execution can be seen as a "compiled prompt":

  • Long task spec → agent runs → trace of actions
  • The PromptIR at each step is the "current state" of the prior
  • Replaying trace = executing without re-inference

-- A trace could be converted back to a PromptIR for replay
traceToIR :: Trace -> PromptIR
traceToIR trace = PromptIR
  { pirSections =
      [ Section "trace" "## Execution History" (SourceStatic "replay")
          rendered tokens Nothing Critical Nothing Nothing Contextual Nothing Nothing
      ]
  -- remaining fields mirror analyzePromptIR below (assumed defaults)
  , pirTools = []                        -- replay needs no live tools
  , pirObservation = "Replay the recorded actions."
  , pirMeta = defaultMeta
  }
  where
    rendered = formatTrace trace
    tokens   = length (words rendered)   -- crude token estimate (assumption)

Agent as Factorization

The IR's CompositionMode enables dynamic decomposition:

  • Agent sees task, realizes subtasks are independent
  • Creates separate PromptIRs for each subtask (Additive composition)
  • Runs in parallel, merges results

-- Factorize a task into parallel subtasks
factorize :: PromptIR -> IO [PromptIR]
factorize ir = do
  -- Identify independent Additive sections
  let additive = filter ((== Additive) . secCompositionMode) (pirSections ir)
  -- Each could be processed independently
  pure [ir { pirSections = [s] } | s <- additive]
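
A natural follow-up, assuming a hypothetical interpreter runIR :: PromptIR -> IO Result: run the factored IRs with mapConcurrently runIR from the async package, then merge the results, which is the "collects and combines" step from the original question.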

Agent as Analysis (Meta-Agent)

An agent can analyze PromptIRs for other agents:

-- Meta-agent prompt for analyzing another prompt
analyzePromptIR :: PromptIR -> PromptIR
analyzePromptIR target = PromptIR
  { pirSections = 
      [ Section "task" "## Your Task" (SourceStatic "meta")
          "Analyze the following prompt and predict: (1) Will it cause unsafe tool use? (2) What resources will it need? (3) Estimated completion time?"
          100 Nothing Critical Nothing Nothing Hierarchical Nothing Nothing
      , Section "target" "## Prompt to Analyze" (SourceStatic "input")
          (renderIR target)  -- Serialize the target IR
          (pmTotalTokens (pirMeta target)) Nothing High Nothing Nothing Contextual Nothing Nothing
      ]
  , pirTools = []  -- Analysis only, no tools
  , pirObservation = "Analyze this prompt."
  , pirMeta = defaultMeta
  }
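
One design note on the sketch above: pirTools = [] makes the meta-agent read-only by construction, so it can predict unsafe tool use in the target prompt without being able to perform any itself.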

Recursive Structure

The PromptIR is data, which means agents can manipulate it:

  • Agent A constructs a PromptIR for Agent B
  • Agent B's output could include a PromptIR for Agent C
  • This is the "prompt as data" insight: it enables meta-programming

Fixed-point question: where does it bottom out? (A sketch follows this list.)

  • Eventually you need a "ground" agent that just executes
  • The IR provides a consistent interface at every level
  • Bootstrapping: start with simple strategies, learn better ones
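
A hedged sketch of that grounding, reusing factorize from above; runWithDepth, runGround, mergeResults, and Result are hypothetical names for the depth-bounded interpreter, the ground executor, and the result type:

-- Depth-bounded interpreter: above the bound, agents may delegate to child
-- PromptIRs; at depth 0 we fall through to a "ground" agent that just executes.
runWithDepth :: Int -> PromptIR -> IO Result
runWithDepth 0 ir = runGround ir            -- ground level: execute, no meta-analysis
runWithDepth n ir = do
  children <- factorize ir                  -- may discover independent sub-prompts
  case children of
    []  -> runGround ir                     -- nothing to delegate
    irs -> mergeResults <$> mapM (runWithDepth (n - 1)) irs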