# Function calling & tools
Function calling lets a model invoke your code — to look up data, mutate state, or call external APIs — and fold the result back into its reasoning. You declare the available tools; the model decides when to call them and with what arguments.
## The tool-use flow

1. **Declare the tools.** Each tool is a JSON Schema description of its name, purpose, and parameters.
2. **Send the request.** Include the `tools` array in your chat completion call. Optionally set `tool_choice` to force or forbid a call.
3. **Dispatch & respond.** If `finish_reason` is `tool_calls`, execute each call and append the results as `role: "tool"` messages.
4. **Let the model finalize.** Call the API again with the enriched history. The model produces a natural-language answer informed by the tool outputs.
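The four steps generalize to a loop: keep executing tool calls until the model answers in plain text. A minimal sketch assuming the OpenAI-style client used later on this page (`run_tool_loop`, `dispatch`, and `max_rounds` are hypothetical names, not part of the API):

```python
import json

def run_tool_loop(client, model, messages, tools, dispatch, max_rounds=5):
    """Call the model repeatedly, executing tool calls until it answers in text."""
    for _ in range(max_rounds):
        resp = client.chat.completions.create(model=model, messages=messages, tools=tools)
        choice = resp.choices[0]
        messages.append(choice.message)
        if choice.finish_reason != "tool_calls":
            return choice.message.content  # natural-language answer
        for call in choice.message.tool_calls:
            # dispatch(name, args) runs your own code for the named tool
            result = dispatch(call.function.name, json.loads(call.function.arguments))
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": json.dumps(result),
            })
    raise RuntimeError("model kept requesting tools")
```

The `max_rounds` cap is a safety net against a model that requests tools indefinitely.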
## Declaring tools
A tool definition has three parts: a name, a description, and a JSON Schema parameters block. Good descriptions — for both the tool and each parameter — matter more than anything else.
```json
[
  {
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current temperature for a city.",
      "parameters": {
        "type": "object",
        "properties": {
          "city": {"type": "string", "description": "City name, e.g. 'Shanghai'."},
          "unit": {"type": "string", "enum": ["c", "f"], "description": "Temperature unit: 'c' for Celsius, 'f' for Fahrenheit."}
        },
        "required": ["city"]
      }
    }
  }
]
```
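Hand-writing these schemas gets tedious as tools accumulate. One common pattern is deriving the definition from a function's signature and docstring; a rough sketch (the `to_tool` helper and its type mapping are assumptions, not part of this API):

```python
import inspect

# Map Python annotations to JSON Schema type names (partial, for illustration)
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def to_tool(fn):
    """Derive a tool definition from a function's signature and docstring."""
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        props[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value means the argument is required
    return {"type": "function", "function": {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": props, "required": required},
    }}
```

This keeps the schema in sync with the code, though enum constraints and per-parameter descriptions still need to be added by hand.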
## Controlling tool selection

| `tool_choice` | Behaviour |
|---|---|
| `"auto"` (default) | Model decides whether to call a tool or answer directly. |
| `"none"` | Tools are ignored for this call. |
| `"required"` | Model must call at least one tool. |
| `{"type": "function", "function": {"name": "get_weather"}}` | Force a specific tool. |
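All four variants are plain values passed as the `tool_choice` request parameter. A tiny hypothetical helper makes the distinction explicit (the `make_tool_choice` name is an illustration, not part of the SDK):

```python
def make_tool_choice(mode_or_name):
    """Build a tool_choice value: pass "auto"/"none"/"required", or a tool name to force it."""
    if mode_or_name in ("auto", "none", "required"):
        return mode_or_name  # string modes are passed through as-is
    # anything else is treated as a tool name and wrapped in the forcing form
    return {"type": "function", "function": {"name": mode_or_name}}
```

The returned value goes straight into the request, e.g. `client.chat.completions.create(..., tool_choice=make_tool_choice("get_weather"))`.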
## End-to-end example
A complete round trip: declare the tool, let the model request a call, execute it, feed back the result, and return the final answer.
**Python**

```python
import json, os
from openai import OpenAI

client = OpenAI(base_url="https://api.wylon.cn/v1", api_key=os.environ["WYLON_API_KEY"])

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current temperature for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city):
    return {"city": city, "temp_c": 22}

messages = [{"role": "user", "content": "What's the weather in Shanghai?"}]

# 1. Model decides to call the tool
first = client.chat.completions.create(
    model="moonshotai/kimi-k2.5", messages=messages, tools=tools,
)
messages.append(first.choices[0].message)

# 2. Execute each tool call and append results (tool_calls may be None)
for call in first.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    result = get_weather(**args)
    messages.append({
        "role": "tool",
        "tool_call_id": call.id,
        "content": json.dumps(result),
    })

# 3. Let the model produce a final answer
final = client.chat.completions.create(
    model="moonshotai/kimi-k2.5", messages=messages, tools=tools,
)
print(final.choices[0].message.content)
```
**JavaScript**

```javascript
import OpenAI from "openai";

const client = new OpenAI({ baseURL: "https://api.wylon.cn/v1", apiKey: process.env.WYLON_API_KEY });

const tools = [{
  type: "function",
  function: {
    name: "get_weather",
    description: "Get current temperature for a city.",
    parameters: { type: "object", properties: { city: { type: "string" } }, required: ["city"] },
  },
}];

const getWeather = (city) => ({ city, temp_c: 22 });

const messages = [{ role: "user", content: "What's the weather in Shanghai?" }];

// 1. Model decides to call the tool
const first = await client.chat.completions.create({ model: "moonshotai/kimi-k2.5", messages, tools });
messages.push(first.choices[0].message);

// 2. Execute each tool call and append results
for (const call of first.choices[0].message.tool_calls ?? []) {
  const args = JSON.parse(call.function.arguments);
  messages.push({ role: "tool", tool_call_id: call.id, content: JSON.stringify(getWeather(args.city)) });
}

// 3. Let the model produce a final answer
const final = await client.chat.completions.create({ model: "moonshotai/kimi-k2.5", messages, tools });
console.log(final.choices[0].message.content);
```
## Parallel tool calls

Modern models can emit several tool calls in a single turn. Iterate over `message.tool_calls` and append one `role: "tool"` message per call, each keyed by its `tool_call_id`. The model merges all results in the next step.
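Because the calls in one turn are independent, you can also execute them concurrently. A sketch using a thread pool (the `dispatch_parallel` helper and `registry` mapping are assumptions); each result stays paired with its `tool_call_id` even when execution overlaps:

```python
import json
from concurrent.futures import ThreadPoolExecutor

def dispatch_parallel(tool_calls, registry):
    """Execute every tool call concurrently; return one role:"tool" message per call."""
    def run(call):
        fn = registry[call.function.name]  # look up your implementation by tool name
        result = fn(**json.loads(call.function.arguments))
        return {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)}
    with ThreadPoolExecutor() as pool:
        # pool.map preserves input order, so results align with the model's call order
        return list(pool.map(run, tool_calls))
```

This matters most when tools do slow I/O (HTTP calls, database queries); the model does not care about completion order, only that every `tool_call_id` gets exactly one reply.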
## Best practices

- **Name tools like verbs.** `get_weather`, `create_ticket`, not `weather` or `ticket`.
- **Describe every parameter.** Include units, formats, and accepted enum values.
- **Keep the surface small.** 10–20 tools per request is a safe ceiling; beyond that, retrieve a shortlist first.
- **Validate inputs.** Schema validation catches hallucinated arguments before you execute code.
- **Return structured JSON.** It is easier for the model to interpret than prose.
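For the validation point, you can check arguments against the same JSON Schema you already declared. A minimal hand-rolled sketch (`check_args` is a hypothetical helper; a real validator such as the `jsonschema` package is more thorough):

```python
def check_args(schema, args):
    """Return an error string for hallucinated or mistyped arguments, or None if valid."""
    json_types = {"string": str, "number": (int, float), "integer": int,
                  "boolean": bool, "object": dict, "array": list}
    props = schema.get("properties", {})
    for name in schema.get("required", []):
        if name not in args:
            return f"missing required argument: {name}"
    for name, value in args.items():
        if name not in props:
            return f"unexpected argument: {name}"  # model invented a parameter
        spec = props[name]
        if not isinstance(value, json_types[spec["type"]]):
            return f"{name}: expected {spec['type']}"
        if "enum" in spec and value not in spec["enum"]:
            return f"{name}: must be one of {spec['enum']}"
    return None
```

On failure, feed the error string back as the `role: "tool"` message content; models usually correct their arguments on the next turn.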