POST /models/v2/openai/v1/completions
Completions
curl --request POST \
  --url https://api.bytez.com/models/v2/openai/v1/completions \
  --header 'Authorization: <authorization>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "<string>",
  "prompt": "<string>",
  "max_tokens": 256,
  "temperature": 0.7,
  "stream": false,
  "top_p": 123,
  "presence_penalty": 123,
  "frequency_penalty": 123,
  "logprobs": 123
}
'
{
  "id": "<string>",
  "object": "<string>",
  "created": 123,
  "choices": [
    {
      "index": 123,
      "text": "<string>",
      "finish_reason": "<string>"
    }
  ]
}
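The response above can be unpacked in a few lines. A minimal Python sketch (the field values here are made up for illustration; only the field names come from the example):

```python
import json

# Illustrative response body, shaped like the example above.
# Values are invented for demonstration only.
raw = '''{
  "id": "cmpl-example",
  "object": "text_completion",
  "created": 1700000000,
  "choices": [
    {"index": 0, "text": "Hello, world!", "finish_reason": "stop"}
  ]
}'''

response = json.loads(raw)

# The generated text lives in choices[n].text; take the first choice.
completion = response["choices"][0]["text"]
print(completion)  # Hello, world!
```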

Headers

Authorization
string
required

API token used to authenticate the request

Body

application/json
model
string
required

The ID of the completion model to run (e.g., text-davinci-003)

prompt
string
required

The input text prompt

max_tokens
integer
default:256

Maximum number of tokens to generate

temperature
number
default:0.7

Sampling temperature; higher values make output more random, lower values more deterministic

stream
boolean
default:false

Whether to stream responses

top_p
number

Nucleus (top-p) sampling parameter; the model samples only from the smallest set of tokens whose cumulative probability exceeds top_p

presence_penalty
number

Penalize new tokens based on whether they appear in the text so far

frequency_penalty
number

Penalize new tokens based on their existing frequency in the text so far

logprobs
integer

Number of most likely tokens to return log probabilities for (legacy-style parameter)
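The body parameters above can be assembled into a request using only the Python standard library. This is a minimal sketch: the URL and headers come from the curl example, the defaults mirror the documented ones, and the token and model name are placeholders:

```python
import json
import urllib.request

API_URL = "https://api.bytez.com/models/v2/openai/v1/completions"

def build_request(token, model, prompt, **options):
    """Build an urllib Request for the completions endpoint.
    Defaults mirror the documented ones (max_tokens=256,
    temperature=0.7, stream=False)."""
    body = {
        "model": model,
        "prompt": prompt,
        "max_tokens": options.get("max_tokens", 256),
        "temperature": options.get("temperature", 0.7),
        "stream": options.get("stream", False),
    }
    # Optional sampling/penalty parameters are only sent when supplied.
    for key in ("top_p", "presence_penalty", "frequency_penalty", "logprobs"):
        if key in options:
            body[key] = options[key]
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": token,  # token passed as-is, per the Headers section
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("YOUR_TOKEN", "text-davinci-003", "Say hello.", logprobs=5)
# To actually send the request: urllib.request.urlopen(req)
```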

Response

Successful text completion

id
string

Unique identifier for the completion

object
string

The object type

created
integer

Unix timestamp (in seconds) of when the completion was created

choices
object[]

Generated completions; each entry contains index, text, and finish_reason
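When stream is set to true, the completion arrives as a sequence of chunks rather than the single object described above. A minimal parsing sketch, assuming OpenAI-style server-sent events (`data: {...}` lines terminated by `data: [DONE]`, which this endpoint is modeled on — verify the exact chunk shape against a live response):

```python
import json

def parse_sse_stream(lines):
    """Yield the text of each streamed chunk from OpenAI-style
    server-sent events: 'data: <json>' lines, terminated by
    'data: [DONE]'. (Assumed format; verify against the API.)"""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip comments, blank keep-alive lines, etc.
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        yield chunk["choices"][0]["text"]

# Simulated stream (chunk contents invented for demonstration).
sample = [
    'data: {"choices": [{"index": 0, "text": "Hel"}]}',
    'data: {"choices": [{"index": 0, "text": "lo"}]}',
    "data: [DONE]",
]
print("".join(parse_sse_stream(sample)))  # Hello
```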