
Function calling & tools

Function calling lets a model invoke your code — to look up data, mutate state, or call external APIs — and fold the result back into its reasoning. You declare the available tools; the model decides when to call them and with what arguments.

Note: The model never runs your code. It produces a structured intent in the form of a JSON argument object; your application executes the function and feeds the result back as a tool message.
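Concretely, the assistant message carrying such an intent looks roughly like this in the OpenAI-compatible chat format (the `id` and argument values here are illustrative):

```json
{
  "role": "assistant",
  "content": null,
  "tool_calls": [
    {
      "id": "call_abc123",
      "type": "function",
      "function": {
        "name": "get_weather",
        "arguments": "{\"city\": \"Shanghai\"}"
      }
    }
  ]
}
```

Note that `arguments` is a JSON-encoded string, not a nested object, so it must be parsed before dispatch.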

The tool-use flow

  1. Declare the tools

    Each tool is a JSON Schema description of its name, purpose, and parameters.

  2. Send the request

    Include the tools array in your chat completion call. Optionally set tool_choice to force or forbid a call.

  3. Dispatch & respond

    If finish_reason is tool_calls, execute each call and append the results as role: "tool" messages.

  4. Let the model finalize

    Call the API again with the enriched history. The model produces a natural-language answer informed by the tool outputs.
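The dispatch step above reduces to a small loop. A self-contained sketch, with a stubbed `FakeCall` object standing in for one entry of the SDK's `message.tool_calls` (it is a placeholder, not an SDK type):

```python
import json

class FakeCall:
    """Stand-in for one tool call requested by the model."""
    def __init__(self, id, name, arguments):
        self.id, self.name, self.arguments = id, name, arguments

def get_weather(city):
    return {"city": city, "temp_c": 22}

# Map declared tool names to their implementations.
TOOLS = {"get_weather": get_weather}

def dispatch(tool_calls):
    """Execute each requested call and build one role:'tool' message per call."""
    out = []
    for call in tool_calls:
        args = json.loads(call.arguments)          # arguments arrive as a JSON string
        result = TOOLS[call.name](**args)
        out.append({"role": "tool", "tool_call_id": call.id,
                    "content": json.dumps(result)})
    return out

msgs = dispatch([FakeCall("call_1", "get_weather", '{"city": "Shanghai"}')])
print(msgs[0]["content"])
```

The resulting messages are appended to the history before the final API call.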

Declaring tools

A tool definition has three parts: a name, a description, and a JSON Schema parameters block. Good descriptions — for both the tool and each parameter — matter more than anything else.

[
  {
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current temperature for a city.",
      "parameters": {
        "type": "object",
        "properties": {
          "city": {"type": "string", "description": "IANA city name, e.g. 'Shanghai'."},
          "unit": {"type": "string", "enum": ["c", "f"]}
        },
        "required": ["city"]
      }
    }
  }
]
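Because the model produces the arguments, treat them as untrusted input before executing anything. A minimal sketch of checking them against the schema above (this only covers `required` and `enum`; a production service would use a full JSON Schema validator):

```python
import json

SCHEMA = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "unit": {"type": "string", "enum": ["c", "f"]},
    },
    "required": ["city"],
}

def check_args(raw, schema):
    """Parse the model-supplied argument string and reject obvious violations."""
    args = json.loads(raw)
    for key in schema["required"]:
        if key not in args:
            raise ValueError(f"missing required argument: {key}")
    for key, spec in schema["properties"].items():
        if key in args and "enum" in spec and args[key] not in spec["enum"]:
            raise ValueError(f"{key} must be one of {spec['enum']}")
    return args

print(check_args('{"city": "Shanghai", "unit": "c"}', SCHEMA))
```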

Controlling tool selection

| `tool_choice` | Behaviour |
| --- | --- |
| `"auto"` (default) | Model decides whether to call a tool or answer directly. |
| `"none"` | Tools are ignored for this call. |
| `"required"` | Model must call at least one tool. |
| `{"type": "function", "function": {"name": "get_weather"}}` | Force a specific tool. |

End-to-end example

A complete round trip: declare the tool, let the model request a call, execute it, feed back the result, and return the final answer.

import json, os
from openai import OpenAI

client = OpenAI(base_url="https://api.wylon.cn/v1", api_key=os.environ["WYLON_API_KEY"])

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current temperature for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city):
    return {"city": city, "temp_c": 22}

messages = [{"role": "user", "content": "What's the weather in Shanghai?"}]

# 1. Model decides to call the tool
first = client.chat.completions.create(
    model="moonshotai/kimi-k2.5", messages=messages, tools=tools,
)
messages.append(first.choices[0].message)

# 2. Execute each tool call and append results
for call in first.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    result = get_weather(**args)
    messages.append({
        "role": "tool",
        "tool_call_id": call.id,
        "content": json.dumps(result),
    })

# 3. Let the model produce a final answer
final = client.chat.completions.create(
    model="moonshotai/kimi-k2.5", messages=messages, tools=tools,
)
print(final.choices[0].message.content)

The same flow in JavaScript:
import OpenAI from "openai";

const client = new OpenAI({ baseURL: "https://api.wylon.cn/v1", apiKey: process.env.WYLON_API_KEY });

const tools = [{
  type: "function",
  function: {
    name: "get_weather",
    description: "Get current temperature for a city.",
    parameters: { type: "object", properties: { city: { type: "string" } }, required: ["city"] },
  },
}];

const getWeather = (city) => ({ city, temp_c: 22 });

const messages = [{ role: "user", content: "What's the weather in Shanghai?" }];

const first = await client.chat.completions.create({ model: "moonshotai/kimi-k2.5", messages, tools });
messages.push(first.choices[0].message);

for (const call of first.choices[0].message.tool_calls ?? []) {
  const args = JSON.parse(call.function.arguments);
  messages.push({ role: "tool", tool_call_id: call.id, content: JSON.stringify(getWeather(args.city)) });
}

const final = await client.chat.completions.create({ model: "moonshotai/kimi-k2.5", messages, tools });
console.log(final.choices[0].message.content);

Parallel tool calls

Modern models can emit several tool calls in a single turn. Iterate over message.tool_calls and append one role: "tool" message per call, each keyed by its tool_call_id. The model merges all results in the next step.
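A dispatch-by-name sketch for the parallel case. The registry keys are the declared tool names; `lookup_flights` and `lookup_hotels` are hypothetical tools, and `Call` is a stand-in for one entry of `message.tool_calls`:

```python
import json

def lookup_flights(city):           # hypothetical tool implementation
    return {"flights": 3, "city": city}

def lookup_hotels(city):            # hypothetical tool implementation
    return {"hotels": 7, "city": city}

REGISTRY = {"lookup_flights": lookup_flights, "lookup_hotels": lookup_hotels}

class Call:
    """Stand-in for one entry of message.tool_calls."""
    def __init__(self, id, name, arguments):
        self.id = id
        self.function = type("F", (), {"name": name, "arguments": arguments})()

def run_parallel(tool_calls):
    """One role:'tool' message per call, keyed by its tool_call_id."""
    return [{"role": "tool", "tool_call_id": c.id,
             "content": json.dumps(REGISTRY[c.function.name](
                 **json.loads(c.function.arguments)))}
            for c in tool_calls]

msgs = run_parallel([
    Call("call_a", "lookup_flights", '{"city": "Shanghai"}'),
    Call("call_b", "lookup_hotels", '{"city": "Shanghai"}'),
])
print([m["tool_call_id"] for m in msgs])
```

Appending every result before the follow-up request lets the model answer with all of them at once.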
