Getting Started

Route your first AI agent call through Kurral and see tokens, cost, and latency in the dashboard — in under 5 minutes.

Prerequisites

  • A Kurral account at app.kurral.com
  • Python 3.10+ with an existing AI agent (OpenAI, Anthropic, or Gemini)
  • Your LLM provider API key

Step 1: Get Your Kurral API Key

  1. Sign in to the Kurral dashboard
  2. Go to API Keys in the sidebar
  3. Click Create Key — copy the full key (kr_live_...). You'll only see it once.

Step 2: Register Your Agent

  1. Go to Agents in the sidebar
  2. Click Register Agent
  3. Enter an agent key (e.g., my-support-bot) — this is immutable and used to identify your agent across all calls
  4. Click Register

Step 3: Route Your LLM Calls Through Kurral

Change your SDK client's base URL to point at the Kurral proxy. Your provider API key stays the same — Kurral forwards it upstream.

OpenAI

from openai import OpenAI

client = OpenAI(
    base_url="https://kurral-api.onrender.com/api/proxy/openai/v1",
    api_key="sk-your-openai-key",  # your real OpenAI key
    default_headers={
        "X-Kurral-API-Key": "kr_live_your-kurral-key",
        "x-kurral-agent": "my-support-bot",
    },
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello, world!"}],
)
print(response.choices[0].message.content)

Anthropic

import anthropic

client = anthropic.Anthropic(
    base_url="https://kurral-api.onrender.com/api/proxy/anthropic",
    api_key="sk-ant-your-anthropic-key",  # your real Anthropic key
    default_headers={
        "X-Kurral-API-Key": "kr_live_your-kurral-key",
        "x-kurral-agent": "my-support-bot",
    },
)

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello, world!"}],
)
print(message.content[0].text)

Gemini

POST https://kurral-api.onrender.com/api/proxy/google/v1beta/models/gemini-2.0-flash:generateContent?key=YOUR_GEMINI_KEY

Headers:
  X-Kurral-API-Key: kr_live_your-kurral-key
  x-kurral-agent: my-support-bot
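The same call can be made from Python with only the standard library. A sketch: the URL and headers are exactly those above, the JSON body follows the standard Gemini generateContent shape, `build_gemini_request` is an illustrative helper name, and the final send is left commented so the sketch runs without real keys:

```python
import json
import urllib.request

KURRAL_PROXY = "https://kurral-api.onrender.com/api/proxy/google/v1beta"

def build_gemini_request(gemini_key, kurral_key, agent, prompt,
                         model="gemini-2.0-flash"):
    # The Gemini key travels in the query string; the Kurral headers
    # identify your account and agent, as in the raw request above.
    url = f"{KURRAL_PROXY}/models/{model}:generateContent?key={gemini_key}"
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "X-Kurral-API-Key": kurral_key,
            "x-kurral-agent": agent,
        },
    )

req = build_gemini_request("YOUR_GEMINI_KEY", "kr_live_your-kurral-key",
                           "my-support-bot", "Hello, world!")
# Uncomment with real keys to send the request:
# with urllib.request.urlopen(req) as resp:
#     data = json.load(resp)
#     print(data["candidates"][0]["content"]["parts"][0]["text"])
```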

Step 4: Check the Dashboard

After sending your first request:

  1. Open the Kurral dashboard
  2. Go to Agents — your agent should show as active
  3. Click into it to see the session: model, tokens, cost, latency, and the full request/response

That's it. Every LLM call through the proxy is automatically captured.


What Gets Captured

Every call through the proxy records:

  • Tokens: input tokens, output tokens, and total
  • Cost: calculated from model-specific pricing
  • Latency: total request time, plus time-to-first-token for streaming responses
  • Model: which model was used
  • Agent: which agent made the call
  • Session: grouped conversation context (optional)
  • Tool interactions: tool call arguments and results (part of the LLM conversation)
  • Content: full request/response bodies (configurable retention)
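Cost is computed server-side by Kurral from its own model-specific pricing table, but the arithmetic is easy to reason about. A sketch with made-up per-million-token prices; the real rates differ and are maintained by Kurral:

```python
# Illustrative only: these prices are invented for the sketch, not
# Kurral's actual pricing table.
PRICE_PER_1M = {
    "gpt-4o": {"input": 2.50, "output": 10.00},  # hypothetical USD / 1M tokens
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    # cost = input_tokens * input_rate + output_tokens * output_rate,
    # with rates expressed per million tokens.
    p = PRICE_PER_1M[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

print(round(estimate_cost("gpt-4o", 1_000, 500), 6))  # → 0.0075
```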

Next Steps

  • Proxy Integration — Session grouping, data retention controls, streaming, and all optional headers
  • SDK Tracing — Add discrete tool event timing, prompt template capture, and replay for LangChain/LangGraph agents
  • Security Scans — Run adversarial security tests against your agent's tools
  • Agent Replay — Replay agent sessions to catch regressions when you change models or prompts
  • Examples — Full working agents (ShopBot, HelpDesk) with Kurral integration