
Switch to wylon

wylon is compatible with both the OpenAI and Anthropic client SDKs. Whether you’re migrating from GPT or Claude, the change takes three small configuration updates — no new libraries, no request schema to relearn.

Choose your SDK

The sections below cover each SDK in turn. Jump to the one you’re currently using for the relevant migration steps.

What changes — OpenAI SDK

Only three values in your existing OpenAI code need to be updated. Everything else — request parameters, response schema, streaming, tool use — stays identical.

| Setting | OpenAI | wylon |
| --- | --- | --- |
| base_url | https://api.openai.com/v1 | https://api.wylon.cn/v1 |
| api_key | OPENAI_API_KEY | WYLON_API_KEY |
| model | gpt-4o, gpt-4o-mini, … | moonshotai/kimi-k2.5, and more |

Side-by-side comparison

The diff below shows a minimal OpenAI script and its wylon equivalent. Only the lines marked with ← comments change.

Before (OpenAI):

from openai import OpenAI
import os

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user",   "content": "Explain KV cache in one paragraph."},
    ],
    temperature=0.6,
    max_tokens=512,
)

print(response.choices[0].message.content)

After (wylon):

from openai import OpenAI
import os

client = OpenAI(
    api_key=os.environ["WYLON_API_KEY"],         # ← changed
    base_url="https://api.wylon.cn/v1",         # ← added
)

response = client.chat.completions.create(
    model="moonshotai/kimi-k2.5",               # ← changed
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user",   "content": "Explain KV cache in one paragraph."},
    ],
    temperature=0.6,
    max_tokens=512,
)

print(response.choices[0].message.content)

Full example — OpenAI SDK

A complete, runnable script in Python, JavaScript, and cURL. Set WYLON_API_KEY and run whichever variant you prefer.

Python:

from openai import OpenAI
import os

client = OpenAI(
    api_key=os.environ["WYLON_API_KEY"],
    base_url="https://api.wylon.cn/v1",
)

response = client.chat.completions.create(
    model="moonshotai/kimi-k2.5",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user",   "content": "What are the benefits of open-source LLMs?"},
    ],
    temperature=0.7,
    max_tokens=512,
)

print(response.choices[0].message.content)
print(f"\nTokens used: {response.usage.total_tokens}")

JavaScript:

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.WYLON_API_KEY,
  baseURL: "https://api.wylon.cn/v1",
});

const response = await client.chat.completions.create({
  model: "moonshotai/kimi-k2.5",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user",   content: "What are the benefits of open-source LLMs?" },
  ],
  temperature: 0.7,
  max_tokens: 512,
});

console.log(response.choices[0].message.content);
console.log(`\nTokens used: ${response.usage.total_tokens}`);

cURL:

curl https://api.wylon.cn/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $WYLON_API_KEY" \
  -d '{
    "model": "moonshotai/kimi-k2.5",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user",   "content": "What are the benefits of open-source LLMs?"}
    ],
    "temperature": 0.7,
    "max_tokens": 512
  }'
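Streaming also carries over unchanged: pass stream=True exactly as you would against OpenAI and consume the incremental deltas. A minimal sketch; the iter_deltas helper is our own name, not part of the SDK, and the live call runs only when WYLON_API_KEY is set:

```python
import os

def iter_deltas(chunks):
    """Yield the non-empty text fragments from a chat.completions stream."""
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # the first chunk typically carries only the role, no text
            yield delta

if os.environ.get("WYLON_API_KEY"):
    from openai import OpenAI

    client = OpenAI(
        api_key=os.environ["WYLON_API_KEY"],
        base_url="https://api.wylon.cn/v1",
    )
    stream = client.chat.completions.create(
        model="moonshotai/kimi-k2.5",
        messages=[{"role": "user", "content": "Explain KV cache in one paragraph."}],
        stream=True,
    )
    for text in iter_deltas(stream):
        print(text, end="", flush=True)
    print()
```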

What changes — Anthropic SDK

Three values need updating. The Anthropic SDK’s messages.create interface, streaming, and tool use all continue to work without further changes.

| Setting | Anthropic | wylon |
| --- | --- | --- |
| base_url | (default) | https://api.wylon.cn |
| api_key | ANTHROPIC_API_KEY | WYLON_API_KEY |
| model | claude-opus-4-5, claude-sonnet-4-6, … | moonshotai/kimi-k2.5, and more |
info
Note on base_url. The Anthropic SDK appends /v1/messages automatically. Pass https://api.wylon.cn without a trailing path so the SDK constructs the correct endpoint.

Side-by-side comparison

A minimal Anthropic script and its wylon equivalent. Only the lines marked with ← comments change.

Before (Anthropic):

import anthropic
import os

client = anthropic.Anthropic(
    api_key=os.environ["ANTHROPIC_API_KEY"],
)

message = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=512,
    system="You are a helpful assistant.",
    messages=[
        {"role": "user", "content": "Explain KV cache in one paragraph."},
    ],
)

print(message.content[0].text)

After (wylon):

import anthropic
import os

client = anthropic.Anthropic(
    api_key=os.environ["WYLON_API_KEY"],          # ← changed
    base_url="https://api.wylon.cn",             # ← added
)

message = client.messages.create(
    model="moonshotai/kimi-k2.5",               # ← changed
    max_tokens=512,
    system="You are a helpful assistant.",
    messages=[
        {"role": "user", "content": "Explain KV cache in one paragraph."},
    ],
)

print(message.content[0].text)

Full example — Anthropic SDK

A complete, runnable script in Python, JavaScript, and cURL. Set WYLON_API_KEY and run whichever variant you prefer.

Python:

import anthropic
import os

client = anthropic.Anthropic(
    api_key=os.environ["WYLON_API_KEY"],
    base_url="https://api.wylon.cn",
)

message = client.messages.create(
    model="moonshotai/kimi-k2.5",
    max_tokens=512,
    system="You are a helpful assistant.",
    messages=[
        {"role": "user", "content": "What are the benefits of open-source LLMs?"},
    ],
)

print(message.content[0].text)
print(f"\nInput tokens:  {message.usage.input_tokens}")
print(f"Output tokens: {message.usage.output_tokens}")

JavaScript:

import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({
  apiKey: process.env.WYLON_API_KEY,
  baseURL: "https://api.wylon.cn",
});

const message = await client.messages.create({
  model: "moonshotai/kimi-k2.5",
  max_tokens: 512,
  system: "You are a helpful assistant.",
  messages: [
    { role: "user", content: "What are the benefits of open-source LLMs?" },
  ],
});

console.log(message.content[0].text);
console.log(`\nInput tokens:  ${message.usage.input_tokens}`);
console.log(`Output tokens: ${message.usage.output_tokens}`);

cURL:

curl https://api.wylon.cn/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: $WYLON_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "moonshotai/kimi-k2.5",
    "max_tokens": 512,
    "system": "You are a helpful assistant.",
    "messages": [
      {"role": "user", "content": "What are the benefits of open-source LLMs?"}
    ]
  }'
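Streaming with the Anthropic SDK likewise needs no extra work: messages.stream and its text_stream iterator behave as they do against Anthropic’s own API. A sketch; stream_request is a helper of our own (it just assembles keyword arguments), and the live call runs only when WYLON_API_KEY is set:

```python
import os

def stream_request(prompt, model="moonshotai/kimi-k2.5", max_tokens=512):
    """Build the keyword arguments for a streamed messages call."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

if os.environ.get("WYLON_API_KEY"):
    import anthropic

    client = anthropic.Anthropic(
        api_key=os.environ["WYLON_API_KEY"],
        base_url="https://api.wylon.cn",
    )
    # messages.stream is a context manager; text_stream yields text deltas
    with client.messages.stream(**stream_request("Explain KV cache in one paragraph.")) as stream:
        for text in stream.text_stream:
            print(text, end="", flush=True)
    print()
```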

Set your environment variable

Generate a wylon API key from Account Settings → API keys in the dashboard, then export it to your shell. The same key works with both the OpenAI and Anthropic SDKs.

# Add to ~/.bashrc or ~/.profile
export WYLON_API_KEY="wl-••••••••••••••••••••••••••••••••"
# Add to ~/.zshrc
export WYLON_API_KEY="wl-••••••••••••••••••••••••••••••••"
# PowerShell — persistent for the current user
[Environment]::SetEnvironmentVariable("WYLON_API_KEY", "wl-••••••••••••••••••••••••••••••••", "User")
key
Keep your key secret. Never commit API keys to source control or ship them in client-side bundles. Use a secret manager or a server-side proxy for production workloads.
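If you want your scripts to fail fast with a readable error rather than a bare KeyError when the variable is missing, a small helper (our own, not part of either SDK) can gate every example above:

```python
import os

def require_env(name: str = "WYLON_API_KEY") -> str:
    """Return the named environment variable, raising a clear error if unset."""
    value = os.environ.get(name, "").strip()
    if not value:
        raise RuntimeError(
            f"{name} is not set. Generate a key in Account Settings → API keys, "
            f"then run: export {name}=<your key>"
        )
    return value
```

Call require_env() once at the top of a script and pass the result as api_key; the traceback then tells readers exactly what to do instead of pointing at a dictionary lookup.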

What stays the same

wylon mirrors both API surfaces end-to-end. The following capabilities require zero changes to your existing code, regardless of which SDK you use.

| Feature | OpenAI SDK | Anthropic SDK |
| --- | --- | --- |
| Request parameters (temperature, max_tokens, top_p, …) | Supported | Supported |
| Streaming via server-sent events | Supported | Supported |
| Function calling & tool use | Supported | Supported |
| Structured output / JSON mode | Supported | Supported |
| Multi-turn conversation history | Supported | Supported |
| LangChain, LlamaIndex, LiteLLM integrations | Supported | Supported |
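As an illustration of the function-calling row above, the sketch below declares a single hypothetical get_weather tool and decodes whatever tool call the model returns. The tool, its schema, and the parse_tool_call helper are our own names for this example; the live call runs only when WYLON_API_KEY is set:

```python
import json
import os

def parse_tool_call(tool_call):
    """Return (function name, decoded arguments) from a tool call object."""
    return tool_call.function.name, json.loads(tool_call.function.arguments)

# Hypothetical tool schema in the OpenAI tools format
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

if os.environ.get("WYLON_API_KEY"):
    from openai import OpenAI

    client = OpenAI(
        api_key=os.environ["WYLON_API_KEY"],
        base_url="https://api.wylon.cn/v1",
    )
    response = client.chat.completions.create(
        model="moonshotai/kimi-k2.5",
        messages=[{"role": "user", "content": "What's the weather in Shanghai?"}],
        tools=[WEATHER_TOOL],
    )
    for call in response.choices[0].message.tool_calls or []:
        print(parse_tool_call(call))
```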
info
Model IDs are different. Neither OpenAI (gpt-4o) nor Anthropic (claude-opus-4-5) model names exist on wylon. Use a wylon model ID such as moonshotai/kimi-k2.5. Browse all available models on the Models page.
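Model IDs can also be discovered programmatically. The OpenAI-compatible surface includes a models list endpoint; assuming wylon mirrors it (an assumption we draw from the compatibility claim above, not from its documentation), you can select a model at runtime. pick_model is a helper of our own:

```python
import os

def pick_model(available, preferred="moonshotai/kimi-k2.5"):
    """Return the preferred model ID if listed, otherwise the first available."""
    ids = list(available)
    if preferred in ids:
        return preferred
    if not ids:
        raise RuntimeError("No models available for this account.")
    return ids[0]

if os.environ.get("WYLON_API_KEY"):
    from openai import OpenAI

    client = OpenAI(
        api_key=os.environ["WYLON_API_KEY"],
        base_url="https://api.wylon.cn/v1",
    )
    model_ids = [m.id for m in client.models.list()]
    print(pick_model(model_ids))
```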

Next steps

You’re migrated. Here’s what to explore next.
