Function Calling
NaviGator Toolkit allows the LLM to use functions that you have developed. This lets the LLM interface with data and services it does not normally have access to, such as weather services. It can also act on the LLM's behalf to get things done, like sending out email or updating a file.
Quickstart
The following example shows how to write a Python script that exposes two functions to the LLM: `get_weather` and `get_dad_joke`.

`get_weather` is a function that returns information based on a static set of locations. It shows how the LLM can pass parameters to a function call.

`get_dad_joke` is a function that uses the API available at https://icanhazdadjoke.com/ to return a dad joke. It is an example of a function call that does not need any parameters passed in.
```python
from openai import OpenAI
import json
import re
import requests
# Set your NaviGator API key and base URL here
api_key = "sk-XXXXXXXX"  # Replace with your NaviGator API key
base_url = "https://api.ai.it.ufl.edu/v1/"  # NaviGator's OpenAI-compatible endpoint
# Initialize the OpenAI API client
client = OpenAI(api_key=api_key, base_url=base_url)
# Define the tool (function) interfaces
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current temperature for a given location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City and country e.g. Paris, France"
                    }
                },
                "required": ["location"],
                "additionalProperties": False
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_dad_joke",
            "description": "Get a random dad joke.",
            "parameters": {
                "type": "object",
                "properties": {},
                "required": [],
                "additionalProperties": False
            }
        }
    }
]
# Example function that fetches a random dad joke
def get_dad_joke() -> str:
    """
    Fetch a random dad joke from the icanhazdadjoke API.

    Returns:
        str: A dad joke or error message.
    """
    url = "https://icanhazdadjoke.com/"
    headers = {"Accept": "application/json", "User-Agent": "open-researcher/1.0"}
    try:
        response = requests.get(url, headers=headers, timeout=10)
        data = response.json()
        return data.get("joke", "No joke found.")
    except Exception as e:
        # Reason: Network or API error
        return f"Error fetching dad joke: {e}"
# Example function that simulates fetching weather data based on location
def get_weather(location: str) -> str:
    """
    Get weather information for a given location.

    Args:
        location (str): City and country, e.g. "Paris, France".

    Returns:
        str: Weather description or "Location not found".
    """
    weather_data = {
        "Paris, France": "Cloudy, 16°C",
        "New York, USA": "Sunny, 22°C",
        "Tokyo, Japan": "Rainy, 18°C"
    }
    return weather_data.get(location, "Location not found")
def call_openai_tool(prompt: str, model: str, tool: str) -> str | None:
    """
    Call the OpenAI chat completion API with function calling.

    Args:
        prompt (str): The user prompt.
        model (str): The LLM to use.
        tool (str): The name of the tool to call.

    Returns:
        str | None: The result from the model or function call, or None on error.
    """
    try:
        # Prepare messages for chat completion
        messages = [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt}
        ]
        # Models whose names do not start with "mistral" must be told
        # explicitly which tool to call (see Caveats below)
        if re.search("^(?!mistral).*$", model, re.IGNORECASE):
            tool_choice = {
                "type": "function",
                "function": {
                    "name": tool
                }
            }
        else:
            tool_choice = "auto"
        # Call the OpenAI chat completion API with function calling
        response = client.chat.completions.create(
            model=model,
            messages=messages,
            tools=tools,
            tool_choice=tool_choice,
            max_tokens=100
        )
        message = response.choices[0].message
        # Check if a tool (function) call was made
        if message.tool_calls:
            for tool_call in message.tool_calls:
                if tool_call.function.name == "get_weather":
                    params = json.loads(tool_call.function.arguments)
                    return get_weather(params.get("location"))
                elif tool_call.function.name == "get_dad_joke":
                    return get_dad_joke()
            return "Function not recognized."
        # Return model text if no function call
        return message.content or ""
    except Exception as e:
        print(f"Error calling OpenAI tool: {e}")
        return None
model = "llama-3.3-70b-instruct"
# model = "mistral-small-3.1"
# model = "gemini-2.0-flash"

# Example: Test the function by asking about the weather
prompt = "What is the weather like in New York today?"
tool = "get_weather"
result = call_openai_tool(prompt, model, tool)
print("Function Call Result:", result)

# Example: Test the function by asking for a dad joke
prompt = "Tell me a dad joke."
tool = "get_dad_joke"
result = call_openai_tool(prompt, model, tool)
print("Function Call Result:", result)
```
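The quickstart returns the function's raw result directly. In a full tool-calling exchange, you would normally send the function result back to the model in a follow-up request so it can phrase a natural-language answer. Here is a minimal sketch of that pattern, reusing `client`, `tools`, `model`, and `get_weather` from above (variable names such as `first` and `followup` are illustrative, not part of the NaviGator API):

```python
messages = [{"role": "user", "content": "What is the weather like in Paris?"}]
first = client.chat.completions.create(
    model=model, messages=messages, tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_weather"}}
)
tool_call = first.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

# Feed the assistant's tool call and the function result back to the model
messages.append(first.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": get_weather(args.get("location"))
})
followup = client.chat.completions.create(model=model, messages=messages, tools=tools)
print(followup.choices[0].message.content)
```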
Caveats
At the time of writing (2025-04-22), some LLMs need to be told explicitly which tool to use during tool calling. `mistral-small-3.1` is an example of an LLM that does not need this and can use `auto`. That is why, in the code above, `tool_choice` is set differently based on the LLM being used:
```python
if re.search("^(?!mistral).*$", model, re.IGNORECASE):
    tool_choice = {
        "type": "function",
        "function": {
            "name": tool
        }
    }
else:
    tool_choice = "auto"
```
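The regular expression simply matches any model name that does not start with `mistral`; if you prefer to avoid the lookahead, an equivalent check is:

```python
# Same test without the regular expression
if not model.lower().startswith("mistral"):
    tool_choice = {"type": "function", "function": {"name": tool}}
else:
    tool_choice = "auto"
```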
If you use another LLM and get the following error:
```
Error calling OpenAI tool: Error code: 400 - {'error': {'message': 'litellm.BadRequestError: OpenAIException - "auto" tool choice requires --enable-auto-tool-choice and --tool-call-parser to be set. Received Model Group=llama-3.3-70b-instruct\nAvailable Model Group Fallbacks=None', 'type': None, 'param': None, 'code': '400'}}
Function Call Result: None
```
then that model does not support setting `tool_choice` to `auto`, and you must explicitly set the `function.name` when making the tool call.
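If you would rather not hard-code which models accept `auto`, another approach is to try `auto` first and retry with an explicit function name when the service rejects it. A sketch of that fallback, assuming the same `client` and `tools` as above (`create_with_fallback` is an illustrative name, not part of the NaviGator API):

```python
import openai

def create_with_fallback(model: str, messages: list, tool: str):
    """Try tool_choice="auto" first; name the tool explicitly on a 400 error."""
    try:
        return client.chat.completions.create(
            model=model, messages=messages, tools=tools, tool_choice="auto"
        )
    except openai.BadRequestError:
        # The model rejected "auto" (as llama-3.3-70b-instruct does above),
        # so name the required tool explicitly
        return client.chat.completions.create(
            model=model, messages=messages, tools=tools,
            tool_choice={"type": "function", "function": {"name": tool}}
        )
```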