The model catalog tracks every LLM available to the platform. Each model record stores provider identity, per-token pricing, capability surface, and lifecycle status. The catalog powers agent model selection, cost attribution, and complexity-based routing.
Models are a platform concern. Agents declare model preferences, but the platform resolves the actual model at runtime based on catalog state, routing rules, and availability.
## Catalog Fields

| Field | Description | Example |
|---|---|---|
| `modelId` | Unique ID in `provider/model` format | `anthropic/claude-sonnet-4-6` |
| `provider` | Model provider | `anthropic`, `openai`, `google` |
| `displayName` | Human-readable name | Claude Sonnet 4.6 |
| `costInput` | Input token cost per 1M tokens | $3.00 |
| `costOutput` | Output token cost per 1M tokens | $15.00 |
| `contextWindow` | Maximum context window in tokens | 200000 |
| `reasoning` | Supports extended thinking | `true` / `false` |
| `status` | Lifecycle status | `active`, `deprecated`, `preview` |
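A catalog record can be pictured as a small typed structure. The sketch below uses the field names from the table above; the dataclass itself is illustrative, not the platform's actual schema:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelRecord:
    """One entry in the model catalog (illustrative sketch)."""
    model_id: str        # unique ID in provider/model format
    provider: str        # e.g. "anthropic", "openai", "google"
    display_name: str    # human-readable name
    cost_input: float    # USD per 1M input tokens
    cost_output: float   # USD per 1M output tokens
    context_window: int  # maximum context window in tokens
    reasoning: bool      # supports extended thinking
    status: str          # "active", "deprecated", or "preview"


sonnet = ModelRecord(
    model_id="anthropic/claude-sonnet-4-6",
    provider="anthropic",
    display_name="Claude Sonnet 4.6",
    cost_input=3.00,
    cost_output=15.00,
    context_window=200_000,
    reasoning=True,
    status="active",
)
```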
## Providers

| Provider | Role | Models |
|---|---|---|
| Anthropic | Primary | Claude Opus 4, Sonnet 4.6, Haiku 3.5 |
| OpenAI | Image generation | GPT-4o, o3, GPT-Image-1 |
| Google | Available | Gemini Pro, Flash, Ultra |
## Pricing and Cost Attribution
The catalog's per-token pricing feeds directly into the token spend attribution pipeline. When an agent processes a message, the model's catalog rates are used to compute the message cost, which is attributed up the hierarchy: worker → task → workflow → project. The `costCachedInput` field stores the discounted rate applied to cached (prompt-cached) input tokens.
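The attribution math is a per-million-token multiplication. A minimal sketch, using the example rates from the table above (the function name and signature are illustrative, not the platform's API):

```python
def message_cost(input_tokens: int, output_tokens: int,
                 cost_input: float, cost_output: float,
                 cached_tokens: int = 0,
                 cost_cached_input: float = 0.0) -> float:
    """Compute the USD cost of one message from per-1M-token catalog rates.

    cached_tokens are billed at the discounted costCachedInput rate;
    the remaining input tokens are billed at the full costInput rate.
    """
    uncached = input_tokens - cached_tokens
    return (uncached * cost_input
            + cached_tokens * cost_cached_input
            + output_tokens * cost_output) / 1_000_000


# 10,000 uncached input tokens and 2,000 output tokens
# at $3.00 / $15.00 per 1M tokens:
cost = message_cost(10_000, 2_000, cost_input=3.00, cost_output=15.00)
# 10,000 * 3 / 1e6 = $0.03 input; 2,000 * 15 / 1e6 = $0.03 output → $0.06
```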
## Usage Tracking
The model catalog shows which agents use which models and at what volume. This surfaces model utilization patterns, helping operators identify cost optimization opportunities and plan model migrations when providers release new versions or deprecate existing ones.
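Utilization can be surfaced by aggregating per-message usage events by agent and model. A sketch with an assumed event shape (the `agent`/`model_id`/`tokens` keys are illustrative, not the platform's schema):

```python
from collections import defaultdict


def utilization(events):
    """Aggregate token volume per (agent, model) pair from usage events.

    Each event is assumed to be a dict with "agent", "model_id",
    and "tokens" keys.
    """
    totals = defaultdict(int)
    for e in events:
        totals[(e["agent"], e["model_id"])] += e["tokens"]
    return dict(totals)


events = [
    {"agent": "planner", "model_id": "anthropic/claude-sonnet-4-6", "tokens": 1200},
    {"agent": "planner", "model_id": "anthropic/claude-sonnet-4-6", "tokens": 800},
    {"agent": "coder", "model_id": "openai/gpt-4o", "tokens": 500},
]
volumes = utilization(events)
```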
Models can require governance approval via the `approvalRequired` flag, which gates access to expensive frontier models or models with compliance implications.
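Runtime model resolution can enforce this flag before handing a model to an agent. A minimal sketch, assuming a dict-shaped record and a boolean approval signal (both illustrative; the source specifies only the `approvalRequired` flag itself):

```python
def resolve_model(record: dict, approved_by_governance: bool) -> str:
    """Return the modelId if access is permitted, else raise PermissionError.

    record is assumed to carry "modelId", "status", and
    "approvalRequired" keys (illustrative shape).
    """
    if record["status"] != "active":
        raise PermissionError(f"{record['modelId']} is {record['status']}")
    if record["approvalRequired"] and not approved_by_governance:
        raise PermissionError(f"{record['modelId']} requires governance approval")
    return record["modelId"]


frontier = {
    "modelId": "anthropic/claude-opus-4",
    "status": "active",
    "approvalRequired": True,
}
# Without approval, resolution raises PermissionError;
# with approval, it returns the modelId.
```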
In Burgundy: View model economics in Analytics. →