Multi-model LLM routing infrastructure

t-675.5 · WorkTask · omni.hs
Parent: t-675 · Created 1 month ago · Updated 1 month ago

Dependencies

Description


Build the infrastructure to route prompts to different LLM providers (Anthropic, OpenAI) within a single workflow. This is a prerequisite for MVP 3, but it is also generally useful on its own.

Current state: the agent system likely targets a single model per workflow. This task adds the ability to call different models within the same orchestration flow.

Requirements:

  • A way to specify which model to use for a given inference call (e.g. Opus for spec writing, GPT-5.3 for review, Sonnet for execution)
  • Model selection can be driven by task metadata (complexity) or workflow step
  • Unified response format regardless of provider (the orchestrator shouldn't care which model answered)
  • Error handling: if a provider is down, graceful fallback or clear error
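The requirements above can be sketched as a small routing layer: per-step model selection plus a unified response type the orchestrator consumes regardless of provider. A minimal sketch in Python for illustration (the project file `omni.hs` suggests the real implementation may be Haskell); the names `ModelRouter`, `LLMResponse`, and the stub adapters are hypothetical, and real Anthropic/OpenAI client calls would replace the stubs.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

# Hypothetical unified response: the orchestrator sees this shape
# no matter which provider answered.
@dataclass
class LLMResponse:
    provider: str
    model: str
    text: str

# A provider adapter is any callable (model, prompt) -> LLMResponse.
ProviderFn = Callable[[str, str], LLMResponse]

class ModelRouter:
    """Routes an inference call to a provider based on per-step config."""
    def __init__(self,
                 providers: Dict[str, ProviderFn],
                 step_models: Dict[str, Tuple[str, str]]):
        self.providers = providers        # provider name -> adapter
        self.step_models = step_models    # workflow step -> (provider, model)

    def call(self, step: str, prompt: str) -> LLMResponse:
        provider, model = self.step_models[step]
        return self.providers[provider](model, prompt)

# Stub adapters standing in for real Anthropic/OpenAI clients.
def anthropic_stub(model: str, prompt: str) -> LLMResponse:
    return LLMResponse("anthropic", model, f"[{model}] {prompt}")

def openai_stub(model: str, prompt: str) -> LLMResponse:
    return LLMResponse("openai", model, f"[{model}] {prompt}")

router = ModelRouter(
    providers={"anthropic": anthropic_stub, "openai": openai_stub},
    step_models={
        "spec":    ("anthropic", "opus"),
        "review":  ("openai", "gpt"),
        "execute": ("anthropic", "sonnet"),
    },
)
```

Because selection is keyed on the workflow step, swapping which model handles review is a one-line config change rather than an orchestrator change.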

This is infrastructure that MVP 2 doesn't need (it uses one model) but MVP 3 and beyond require.

Acceptance criteria:

  • Can make inference calls to at least 2 different providers (Anthropic + OpenAI) in the same workflow
  • Model selection is configurable per workflow step
  • Responses are normalized to a common format
  • Provider failures are handled gracefully
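The "handled gracefully" criterion could look like an ordered-fallback wrapper: try providers in sequence, and if all fail, raise one error naming every failure instead of surfacing a bare provider traceback. A hedged sketch; `call_with_fallback` and `AllProvidersFailed` are illustrative names, not existing APIs.

```python
class AllProvidersFailed(Exception):
    """Raised when every configured provider failed; message lists each failure."""

def call_with_fallback(providers, model_by_provider, prompt):
    """providers: ordered list of (name, adapter) pairs; an adapter raises on outage.

    Returns the first successful adapter's result, or raises
    AllProvidersFailed with a clear per-provider error summary.
    """
    errors = []
    for name, adapter in providers:
        try:
            return adapter(model_by_provider[name], prompt)
        except Exception as exc:
            errors.append(f"{name}: {exc}")
    raise AllProvidersFailed("; ".join(errors))
```

Whether to fall back silently or fail loudly is a per-step policy question; the wrapper supports both, since a single-entry provider list degenerates to "clear error".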

Timeline (0)

No activity yet.