The model catalog tracks every LLM available to the platform. Each model record stores provider identity, per-token pricing, capability surface, and lifecycle status. The catalog powers agent model selection, cost attribution, and complexity-based routing.
Models are a platform concern. Agents declare model preferences, but the platform resolves the actual model at runtime based on catalog state, routing rules, and availability.
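As a sketch of that resolution step, the platform might walk an agent's preference list and return the first catalog entry that is active. The function name, catalog shape, and skip-non-active rule below are illustrative assumptions, not the platform's actual API:

```python
def resolve_model(preferences, catalog):
    """Return the first preferred model that is active in the catalog.

    `preferences` is an ordered list of model IDs; the catalog dict
    shape here is an illustrative assumption.
    """
    for model_id in preferences:
        record = catalog.get(model_id)
        if record and record["status"] == "active":
            return model_id
    raise LookupError("no preferred model is available")

catalog = {
    "anthropic/claude-sonnet-4-6": {"status": "active"},
    "openai/o3": {"status": "preview"},
}
# o3 is skipped because its status is not "active" in this toy catalog.
chosen = resolve_model(["openai/o3", "anthropic/claude-sonnet-4-6"], catalog)
```

In a real deployment, routing rules and availability checks would layer on top of this basic status filter.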

Catalog Fields

| Field | Description | Example |
|---|---|---|
| modelId | Unique ID in provider/model format | anthropic/claude-sonnet-4-6 |
| provider | Model provider | anthropic, openai, google |
| displayName | Human-readable name | Claude Sonnet 4.6 |
| costInput | Input token cost per 1M tokens | $3.00 |
| costOutput | Output token cost per 1M tokens | $15.00 |
| contextWindow | Maximum context window in tokens | 200000 |
| reasoning | Supports extended thinking | true / false |
| status | Lifecycle status | active, deprecated, preview |
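A catalog record with these fields could be modeled as follows. The dataclass itself is a sketch; only the field names and example values come from the table above:

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    # Field names mirror the catalog table; the class is illustrative.
    model_id: str        # provider/model format
    provider: str
    display_name: str
    cost_input: float    # USD per 1M input tokens
    cost_output: float   # USD per 1M output tokens
    context_window: int
    reasoning: bool      # supports extended thinking
    status: str          # "active" | "deprecated" | "preview"

sonnet = ModelRecord(
    model_id="anthropic/claude-sonnet-4-6",
    provider="anthropic",
    display_name="Claude Sonnet 4.6",
    cost_input=3.00,
    cost_output=15.00,
    context_window=200_000,
    reasoning=True,
    status="active",
)
```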

Providers

| Provider | Role | Models |
|---|---|---|
| Anthropic | Primary | Claude Opus 4, Sonnet 4.6, Haiku 3.5 |
| OpenAI | Image generation | GPT-4o, o3, GPT-Image-1 |
| Google | Available | Gemini Pro, Flash, Ultra |

Pricing and Cost Attribution

The catalog’s per-token pricing feeds directly into the token spend attribution pipeline. When an agent processes a message, the model’s catalog pricing is used to compute the cost, which is attributed up the chain worker → task → workflow → project. The costCachedInput field tracks the discounted rate for cached (prompt-cached) input tokens.
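The per-message arithmetic is straightforward: split input tokens into cached and uncached, price each bucket at its per-1M rate, and add the output cost. The function signature and the cached-rate value below are illustrative assumptions:

```python
def message_cost(input_tokens, output_tokens, cost_input, cost_output,
                 cached_tokens=0, cost_cached_input=0.0):
    """USD cost of one message from per-1M-token catalog pricing (sketch)."""
    uncached = input_tokens - cached_tokens
    return (
        uncached * cost_input / 1_000_000
        + cached_tokens * cost_cached_input / 1_000_000   # costCachedInput rate
        + output_tokens * cost_output / 1_000_000
    )

# 10k input tokens (2k served from cache at a hypothetical $0.30/1M),
# 1k output tokens, at the $3.00 / $15.00 example rates:
cost = message_cost(10_000, 1_000, 3.00, 15.00,
                    cached_tokens=2_000, cost_cached_input=0.30)
# 8,000 * $3/1M + 2,000 * $0.30/1M + 1,000 * $15/1M = $0.0396
```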

Usage Tracking

The model catalog shows which agents use which models and at what volume. These utilization patterns help operators identify cost-optimization opportunities and plan model migrations when providers release new versions or deprecate existing ones.

Models can also require governance approval via the approvalRequired flag, which gates access to expensive frontier models or models with compliance implications.

In Burgundy: View model economics in Analytics.
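A minimal sketch of that governance gate, assuming a record dict with the approvalRequired flag and a set of approvals granted to the requester (both hypothetical shapes):

```python
def gate_model(record, granted_approvals):
    """Reject resolution of approval-gated models the requester lacks access to."""
    if record.get("approvalRequired") and record["modelId"] not in granted_approvals:
        raise PermissionError(f"governance approval required for {record['modelId']}")
    return record["modelId"]

frontier = {"modelId": "anthropic/claude-opus-4", "approvalRequired": True}

# Resolution succeeds only when the requester holds a matching approval.
ok = gate_model(frontier, granted_approvals={"anthropic/claude-opus-4"})
```

Without the matching approval, `gate_model` raises and the platform would fall back to routing rules or surface the denial to the agent.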