Generate Text

Generate text by sending structured chat messages to a model.

Quickstart

Generate a Response

Send a text prompt to generate a response from a chat model.

import Bytez from "bytez.js";

(async () => {
  const client = new Bytez("YOUR_BYTEZ_KEY_HERE");
  const model = client.model("microsoft/Phi-3-mini-4k-instruct");

  const messages = [
    {
      role: "system",
      content: "You are a friendly chatbot",
    },
    {
      role: "user",
      content: "What is the capital of England?",
    },
  ];

  const params = { max_length: 100 };

  // Run model and get output
  const { error, output } = await model.run(messages, params);

  if (error) {
    console.error("Error running the model:", error);
    return;
  }

  console.log(output);
})();

Streaming

Enable real-time text generation by streaming responses.

import bytez

client = bytez.Bytez("YOUR_BYTEZ_KEY_HERE")
model = client.model("microsoft/Phi-3-mini-4k-instruct")

text_input = [
    {"role": "system", "content": "You are a friendly chatbot."},
    {"role": "user", "content": "How are you?"}
]

params = {"max_new_tokens": 50}

# Enable streaming
stream = model.run(text_input, params, stream=True)

try:
    for chunk in stream:
        print(chunk.decode("utf-8"))  # Process each chunk as it arrives
except Exception as error:
    print(f"Error during streaming: {error}")
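Each chunk arrives as raw bytes, so a common pattern is to decode the chunks as they arrive and join them into the full response afterward. A minimal sketch (the helper name `collect_stream` and the simulated byte chunks below are illustrative, standing in for real `model.run(..., stream=True)` output):

```python
def collect_stream(stream):
    """Decode each byte chunk from a streaming response and join them."""
    pieces = []
    for chunk in stream:
        text = chunk.decode("utf-8")
        pieces.append(text)  # could also print(text) for live output
    return "".join(pieces)

# Simulated chunks standing in for a real streaming response
fake_stream = [b"Hello", b", I'm ", b"doing well!"]
print(collect_stream(fake_stream))  # prints: Hello, I'm doing well!
```

Note that decoding chunk-by-chunk assumes each chunk ends on a UTF-8 character boundary; for arbitrary byte streams an incremental decoder is safer.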

Proprietary Models

Our v2 endpoint supports proprietary models from Anthropic, Google, Cohere, OpenAI, and Mistral. Supply your own provider API key in the Provider-Key header alongside your Bytez key.

Code

curl --location 'https://api.bytez.com/models/v2/openai/gpt-4o-mini' \
--header 'Authorization: Key YOUR_BYTEZ_KEY_HERE' \
--header 'Provider-Key: PROVIDER_KEY' \
--header 'Content-Type: application/json' \
--data '{
    "messages": [{"role": "user", "content": "Hello my name is Bob and I like to eat"}],
    "stream": false,
    "params": { "max_tokens": 100 }
}'
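The same request can be made from Python with the standard library. A hedged sketch mirroring the curl call above: the endpoint URL, headers, and body fields are taken from the example, while the helper name `build_v2_request` is our own.

```python
import json
import urllib.request

def build_v2_request(provider, model, messages, params, stream=False,
                     bytez_key="YOUR_BYTEZ_KEY_HERE",
                     provider_key="PROVIDER_KEY"):
    """Build the v2 endpoint request shown in the curl example."""
    url = f"https://api.bytez.com/models/v2/{provider}/{model}"
    body = json.dumps({
        "messages": messages,
        "stream": stream,
        "params": params,
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Key {bytez_key}",
            "Provider-Key": provider_key,
            "Content-Type": "application/json",
        },
    )

req = build_v2_request(
    "openai", "gpt-4o-mini",
    [{"role": "user", "content": "Hello my name is Bob and I like to eat"}],
    {"max_tokens": 100},
)
# With valid keys, send it with:
# response = urllib.request.urlopen(req)
# print(response.read().decode("utf-8"))
```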
