
Endpoint Overview

High-performance, OpenAI-compatible API endpoints.

The gateway exposes an OpenAI-compatible API surface, so most existing clients only need a base URL change and a gateway key to start working.

Base URL

Point your SDK or HTTP client at your deployed gateway instance:

https://openrouter-clone-api-gateway.onrender.com/v1

All primary inference routes are mounted under /v1, matching the shape expected by OpenAI-compatible tooling.
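To make the base-URL relationship concrete, the sketch below joins a route onto the /v1 base shown above. The `endpoint` helper is purely illustrative and not part of the gateway.

```python
# Illustrative helper (not part of the gateway): join a route onto the
# /v1 base URL, tolerating an optional leading slash on the route.
BASE_URL = "https://openrouter-clone-api-gateway.onrender.com/v1"

def endpoint(path: str) -> str:
    """Return the full URL for a route such as 'chat/completions'."""
    return f"{BASE_URL}/{path.lstrip('/')}"

# endpoint("chat/completions")
# → "https://openrouter-clone-api-gateway.onrender.com/v1/chat/completions"
```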

Primary Endpoint

The main entry point for model execution is:

POST /v1/chat/completions

Use this endpoint for standard chat completions, streaming responses, provider-routed requests, and most app-level inference traffic.

Header           Value
Authorization    Bearer gateway-sk-xxxx
Content-Type     application/json

The request and response formats remain aligned with the OpenAI schema, which makes migration from existing OpenAI SDK integrations straightforward.
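As a concrete sketch of that schema alignment, the helper below assembles a minimal chat-completions body. The function name and its defaults are assumptions for illustration, not part of the gateway's API.

```python
# Illustrative sketch: build a minimal OpenAI-schema chat request body.
def build_chat_body(model: str, user_message: str, stream: bool = False) -> dict:
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    if stream:
        # Streaming uses the same endpoint; the flag follows the OpenAI schema.
        body["stream"] = True
    return body
```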

Example Request

curl https://openrouter-clone-api-gateway.onrender.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $GATEWAY_API_KEY" \
  -d '{
    "model": "google/gemini-3.1-pro",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
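A stdlib-only Python sketch equivalent to the curl call above. It assumes GATEWAY_API_KEY is set in the environment, and splits request construction from sending so the two required headers are easy to see; the function names are illustrative.

```python
import json
import os
import urllib.request

URL = "https://openrouter-clone-api-gateway.onrender.com/v1/chat/completions"

def build_request(payload: dict, api_key: str) -> urllib.request.Request:
    """Assemble the POST request with the two required headers."""
    return urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

def chat(payload: dict) -> dict:
    """Send one chat-completions request and decode the JSON response."""
    req = build_request(payload, os.environ["GATEWAY_API_KEY"])
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```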
