POST /models/v2/openai/v1/responses

Responses
curl --request POST \
  --url https://api.bytez.com/models/v2/openai/v1/responses \
  --header 'Authorization: <authorization>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "<string>",
  "input": "<string>",
  "max_output_tokens": 256,
  "temperature": 0.7,
  "stream": false,
  "reasoning": {
    "effort": "medium",
    "summary": "none"
  },
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "<string>",
        "description": "<string>",
        "parameters": {}
      }
    }
  ],
  "tool_choice": "auto",
  "metadata": {},
  "user": "<string>",
  "include": [
    "<string>"
  ],
  "top_logprobs": 123
}
'
{
  "id": "<string>",
  "object": "<string>",
  "created": 123,
  "model": "<string>",
  "output": [
    {
      "type": "message",
      "role": "<string>",
      "content": [
        {
          "type": "output_text",
          "text": "<string>"
        }
      ]
    }
  ],
  "output_text": "<string>",
  "usage": {}
}

Headers

Authorization
string
required

Token for authentication

Body

application/json
model
string
required

The ID of the model to run (e.g., anthropic/claude-opus-4-5, openai/gpt-5.1)

input
required

The input to the model. Can be a string (simple prompt) or an array of chat-style messages. For richer multimodal inputs, use the array form with multi-part content.
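Both accepted shapes can be sketched as request payloads. A minimal sketch, assuming the array form follows the OpenAI-style role/content message shape (the doc above does not spell it out):

```python
# Two accepted shapes for the "input" field.

# 1. Simple prompt: a plain string.
simple_payload = {
    "model": "openai/gpt-5.1",
    "input": "Summarize the plot of Hamlet in one sentence.",
}

# 2. Chat-style message array (assumed OpenAI-compatible role/content
#    shape); the array form is also the path to multi-part, multimodal
#    content.
chat_payload = {
    "model": "openai/gpt-5.1",
    "input": [
        {"role": "user", "content": "Summarize the plot of Hamlet in one sentence."}
    ],
}
```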

max_output_tokens
integer
default:256

Maximum number of tokens to generate (counts reasoning + visible output for reasoning models)

temperature
number
default:0.7

Sampling temperature

stream
boolean
default:false

Whether to stream SSE events
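When `stream` is true, the response arrives as server-sent events. A minimal consumer sketch, assuming OpenAI-style SSE framing (`data: <json>` lines ending with a `data: [DONE]` sentinel) and a hypothetical delta event shape; the actual event types are provider-dependent:

```python
import json

def iter_sse_events(lines):
    """Yield parsed JSON payloads from SSE 'data:' lines.

    Assumes OpenAI-style framing: each event is a 'data: {...}' line and
    the stream ends with 'data: [DONE]'. Real event shapes may differ.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alives, comments, event-name lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        yield json.loads(data)

# Hypothetical stream, for illustration only:
sample = [
    'data: {"type": "response.output_text.delta", "delta": "Hel"}',
    'data: {"type": "response.output_text.delta", "delta": "lo"}',
    "data: [DONE]",
]
text = "".join(e["delta"] for e in iter_sse_events(sample))
```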

reasoning
object

Optional "thinking" controls for supported reasoning models.

tools
object[]

Optional tool definitions for function/tool calling
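A tool definition follows the `type`/`function` schema shown in the request example above. A sketch with an illustrative function name and JSON Schema parameters (both are assumptions, not part of the API):

```python
# Illustrative tool definition matching the request schema above.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical function name
        "description": "Look up the current weather for a city.",
        "parameters": {  # JSON Schema describing the arguments
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

payload = {
    "model": "openai/gpt-5.1",
    "input": "What is the weather in Paris?",
    "tools": [get_weather_tool],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}
```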

tool_choice
string
default:auto

Tool selection behavior

Available options:
auto,
none,
required

metadata
object

Arbitrary key/value metadata to attach to the request

user
string

End-user identifier (if supported)

include
string[]

Optional list of extra fields to include in the response (e.g. logprobs). Example values may include message.output_text.logprobs (provider/model dependent).

top_logprobs
integer

Number of most likely tokens to return at each position (if logprobs are included)
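`include` and `top_logprobs` work together: a sketch of a request asking for logprobs, using the example `include` value from above (support is provider/model dependent):

```python
# Request per-token logprobs by pairing "include" with "top_logprobs".
# "message.output_text.logprobs" is the example value given above;
# whether it is honored depends on the provider and model.
payload = {
    "model": "openai/gpt-5.1",
    "input": "Pick a color.",
    "include": ["message.output_text.logprobs"],
    "top_logprobs": 5,  # up to 5 most likely tokens at each position
}
```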

Response

Successful response

id
string
required

Unique ID for this response

object
string
required

Type of returned object (usually response)

created
integer
required

Unix timestamp of response creation

model
string
required

Model used to generate the response

output
object[]

Output items (messages, reasoning summaries, tool calls, etc.)

output_text
string

Convenience field containing concatenated output text (when applicable)

usage
object

Token usage details (shape may vary by provider)
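Putting the response schema together, the final text can be read either from the convenience `output_text` field or reassembled from `output_text`-typed content parts inside `message` output items. A sketch under that reading of the schema:

```python
def collect_output_text(response: dict) -> str:
    """Concatenate the response text.

    Prefers the convenience "output_text" field when present; otherwise
    rebuilds it from "output_text" content parts in "message" output
    items, per the response schema above.
    """
    if response.get("output_text"):
        return response["output_text"]
    parts = []
    for item in response.get("output", []):
        if item.get("type") != "message":
            continue  # skip reasoning summaries, tool calls, etc.
        for part in item.get("content", []):
            if part.get("type") == "output_text":
                parts.append(part.get("text", ""))
    return "".join(parts)

# Illustrative response shaped like the example above:
sample = {
    "id": "resp_123",
    "object": "response",
    "created": 1700000000,
    "model": "openai/gpt-5.1",
    "output": [
        {
            "type": "message",
            "role": "assistant",
            "content": [{"type": "output_text", "text": "Hello!"}],
        }
    ],
}
```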