POST /ai/v2/chat/completions
Example request:

curl --request POST \
  --url https://platform.ai.gloo.com/ai/v2/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "messages": [
    {
      "role": "user",
      "content": "What does the Bible say about forgiveness?"
    }
  ],
  "auto_routing": true,
  "tradition": "evangelical",
  "temperature": 0.7,
  "max_tokens": 1024
}
'
Example response:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1704067200,
  "model": "gloo-anthropic-claude-3-sonnet",
  "routing_mechanism": "auto_routing",
  "routing_tier": "standard",
  "routing_confidence": 0.92,
  "tradition": "evangelical",
  "provider": "anthropic",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "The Bible speaks extensively about forgiveness...",
        "role": "assistant"
      }
    }
  ],
  "usage": {
    "completion_tokens": 150,
    "prompt_tokens": 25,
    "total_tokens": 175
  }
}

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json

Request body for the completions v2 endpoint. Exactly one routing mechanism (auto_routing, model, or model_family) must be specified.

messages
LlmMessage · object[]
required

Chat message history with role and content fields.

auto_routing
boolean

Enables intelligent model selection. When true, the system analyzes queries to select the optimal model tier balancing speed versus capability. Mutually exclusive with model and model_family.

model
string

Specific Gloo model identifier (e.g., gloo-openai-gpt-5-pro). Provides full control and reproducibility. Mutually exclusive with auto_routing and model_family.

model_family
enum<string>

Provider family for model selection. The system optimizes model choice within that family. Mutually exclusive with auto_routing and model.

Available options:
openai,
anthropic,
google,
open source
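
The three routing fields above (auto_routing, model, model_family) are mutually exclusive, and exactly one must be present. A minimal sketch of that rule, with a small client-side validator added for illustration (the validator itself is not part of the API):

```python
import json

# Exactly one of these keys may appear in a request body,
# per the "exactly one routing mechanism" rule for this endpoint.
ROUTING_KEYS = {"auto_routing", "model", "model_family"}

def validate_routing(body: dict) -> None:
    """Raise ValueError unless exactly one routing mechanism is set."""
    present = ROUTING_KEYS & body.keys()
    if len(present) != 1:
        raise ValueError(
            f"expected exactly one of {sorted(ROUTING_KEYS)}, got {sorted(present)}"
        )

messages = [{"role": "user", "content": "What does the Bible say about forgiveness?"}]

# Three valid bodies, one per mechanism (model identifiers taken from this page):
by_auto = {"messages": messages, "auto_routing": True}
by_model = {"messages": messages, "model": "gloo-openai-gpt-5-pro"}
by_family = {"messages": messages, "model_family": "anthropic"}

for body in (by_auto, by_model, by_family):
    validate_routing(body)  # all pass

# Combining mechanisms is rejected:
try:
    validate_routing({"messages": messages, "auto_routing": True,
                      "model": "gloo-openai-gpt-5-pro"})
except ValueError as e:
    print("rejected:", e)
```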
tradition
enum<string>

Theological perspective to apply. Omit for a general Christian perspective.

Available options:
evangelical,
catholic,
mainline,
not_faith_specific
stream
boolean
default:false

Enables streaming responses. When true, responses are sent as server-sent events as tokens are generated.
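
When stream is true, tokens arrive as server-sent events. A sketch of consuming such a stream, assuming OpenAI-style chunks (`data:` lines carrying a delta, terminated by a `[DONE]` sentinel) — the exact chunk schema for this endpoint is an assumption here, not confirmed by this page:

```python
import json

# Hypothetical SSE fragment; the delta shape and "[DONE]" sentinel
# are assumptions modeled on OpenAI-style streaming responses.
raw_stream = """\
data: {"choices": [{"delta": {"content": "The Bible "}}]}

data: {"choices": [{"delta": {"content": "speaks extensively..."}}]}

data: [DONE]
"""

def collect_content(stream_text: str) -> str:
    """Concatenate content deltas from 'data:' lines until [DONE]."""
    parts = []
    for line in stream_text.splitlines():
        if not line.startswith("data: "):
            continue  # skip blank separator lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        parts.append(chunk["choices"][0]["delta"].get("content", ""))
    return "".join(parts)

print(collect_content(raw_stream))  # → The Bible speaks extensively...
```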

temperature
number
default:0.7

Sampling temperature controlling randomness. Higher values (e.g., 1.5) make output more random, lower values (e.g., 0.2) make it more deterministic.

Required range: 0 <= x <= 2
max_tokens
integer

Maximum number of tokens to generate in the response.

Required range: x >= 1
tools
Tool · object[]

Function calling definitions. Each tool includes type, name, description, and parameters schema.

tool_choice
default:none

Controls which tool (if any) the model should use.

Available options:
none,
auto,
required
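
A sketch of a request body combining tools and tool_choice, using the fields named above (type, name, description, parameters). The tool itself (`lookup_verse`) is hypothetical, and the exact nesting is an assumption — consult the Tool object schema for the authoritative shape:

```python
import json

# Hypothetical function-calling tool; field names follow the
# description above, but the precise nesting is an assumption.
lookup_verse = {
    "type": "function",
    "name": "lookup_verse",
    "description": "Fetch the text of a Bible verse by reference.",
    "parameters": {
        "type": "object",
        "properties": {
            "reference": {"type": "string", "description": "e.g. 'John 3:16'"}
        },
        "required": ["reference"],
    },
}

body = {
    "messages": [{"role": "user", "content": "Quote John 3:16."}],
    "auto_routing": True,
    "tools": [lookup_verse],
    "tool_choice": "auto",  # one of: none (default), auto, required
}

print(json.dumps(body, indent=2))
```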

Response

Successful completion response

Response from the completions v2 endpoint.

id
string

Unique completion identifier.

object
string
default:chat.completion

Object type, always 'chat.completion'.

created
integer

Unix timestamp of when the completion was created.

model
string

The model that was selected and used for the completion.

routing_mechanism
string

The routing method used: auto_routing, model_selection, or provider_selection.

routing_tier
string

The performance tier assigned by the routing system.

routing_confidence
number

Confidence score for the routing decision (0.0 to 1.0).

tradition
string

The theological perspective that was applied to the response.

provider
string

The model provider name (e.g., openai, anthropic, google).

choices
CompletionsV2Choice · object[]

List of completion choices.

usage
CompletionsV2Usage · object

Token consumption metrics.
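
Putting the response fields together: a sketch that parses the sample response from the top of this page and pulls out the assistant content, the routing metadata, and the usage totals.

```python
import json

# The sample response shown at the top of this page.
response_json = """
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1704067200,
  "model": "gloo-anthropic-claude-3-sonnet",
  "routing_mechanism": "auto_routing",
  "routing_tier": "standard",
  "routing_confidence": 0.92,
  "tradition": "evangelical",
  "provider": "anthropic",
  "choices": [
    {"finish_reason": "stop", "index": 0,
     "message": {"content": "The Bible speaks extensively about forgiveness...",
                 "role": "assistant"}}
  ],
  "usage": {"completion_tokens": 150, "prompt_tokens": 25, "total_tokens": 175}
}
"""

resp = json.loads(response_json)
answer = resp["choices"][0]["message"]["content"]

# Routing metadata is unique to this endpoint; usage mirrors OpenAI-style fields.
print(f"model={resp['model']} tier={resp['routing_tier']} "
      f"confidence={resp['routing_confidence']}")
print(f"tokens used: {resp['usage']['total_tokens']}")
print(answer)
```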