Responses

NaviGator Toolkit has limited support for using the OpenAI Responses API.

Currently, only models made by OpenAI are supported. See the toolkit's model list for the supported models.

The following features are available when using the Responses API:

  • Text inputs and outputs
  • Structured output
  • Image Analysis

Text Input

The following example shows how to write a Python script that uses the Responses API to send text input to an LLM:

  from openai import OpenAI

  # Set your OpenAI API key and base URL here
  api_key = "sk-XXXXXXXX"  # Replace with your OpenAI API key
  base_url = "https://api.ai.it.ufl.edu/v1/"  # Base URL for OpenAI API

  # Initialize the OpenAI API client
  client = OpenAI(api_key=api_key, base_url=base_url)

  response = client.responses.create(
      model="gpt-5-nano",
      input="Tell me a short story about a pirate lost at sea",
  )

  response_id = response.id

  # Retrieve the stored response, print its text, then delete it
  retrieved_response = client.responses.retrieve(response_id)
  print(f"Response text is: {retrieved_response.output_text}")
  delete_response = client.responses.delete(response_id)

Structured Output

The following example shows how to get structured output from the LLM. You need to define a class that describes the structured output; in the example below, the class is CalendarEvent.

  from openai import OpenAI
  from pydantic import BaseModel

  # Model class that will be used for structured output
  class CalendarEvent(BaseModel):
      name: str
      date: str
      participants: list[str]

  # Set your OpenAI API key and base URL here
  api_key = "sk-XXXXXXXX"  # Replace with your OpenAI API key
  base_url = "https://api.ai.it.ufl.edu/v1/"  # Base URL for OpenAI API

  # Initialize the OpenAI API client
  client = OpenAI(api_key=api_key, base_url=base_url)

  # Text the event details will be extracted from
  prompt = "Alice and Bob are going to a science fair on Friday."

  response = client.responses.parse(
      model="gpt-5-mini",
      input=[
          {
              "role": "system",
              "content": "Extract event information",
          },
          {
              "role": "user",
              "content": prompt,
          },
      ],
      text_format=CalendarEvent,
  )

  response_id = response.id

  retrieved_response = client.responses.retrieve(response_id)
  print(f"Response text is: {retrieved_response.output_text}")
  delete_response = client.responses.delete(response_id)
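
Because responses.parse validates the model's output against the class you supply, the result maps cleanly onto a CalendarEvent instance (with the parse helper, the SDK also exposes the validated object as response.output_parsed). A minimal local sketch of that validation step, where the JSON string is an illustrative stand-in for the model's output rather than a real API response:

```python
from pydantic import BaseModel


class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]


# Illustrative stand-in for the JSON the model would return
raw = '{"name": "Science Fair", "date": "Friday", "participants": ["Alice", "Bob"]}'

# Pydantic enforces the schema: a missing field or wrong type raises a ValidationError
event = CalendarEvent.model_validate_json(raw)
print(event.name, event.participants)
```

If the model's output did not match the schema, model_validate_json would raise instead of returning a partially filled object, which is what makes structured output safe to consume programmatically.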

Image Analysis

In the following example, you provide an image and the LLM analyzes it based on your prompt.

This call requires the following information to be filled out:

  • PATH_TO_IMAGE - the path to the image file you wish to upload
  • IMAGE_TYPE - the image format; valid options are: jpeg, png, gif (non-animated), webp
  import base64

  from openai import OpenAI

  # Set your OpenAI API key and base URL here
  api_key = "sk-XXXXXXXX"  # Replace with your OpenAI API key
  base_url = "https://api.ai.it.ufl.edu/v1/"  # Base URL for OpenAI API

  prompt = "What is in this image?"

  # Read the image file and base64-encode its contents
  image = "PATH_TO_IMAGE"
  with open(image, "rb") as image_file:
      image_contents = base64.b64encode(image_file.read()).decode("utf-8")

  # Initialize the OpenAI API client
  client = OpenAI(api_key=api_key, base_url=base_url)

  response = client.responses.create(
      model="gpt-5-mini",
      input=[
          {
              "role": "user",
              "content": [
                  {
                      "type": "input_text",
                      "text": prompt,
                  },
                  {
                      "type": "input_image",
                      "image_url": f"data:image/IMAGE_TYPE;base64,{image_contents}",
                  },
              ],
          }
      ],
  )

  response_id = response.id

  retrieved_response = client.responses.retrieve(response_id)
  print(f"Response text is: {retrieved_response.output_text}")
  delete_response = client.responses.delete(response_id)
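
The data URL assembled in the example above can be factored into a small helper that also checks the image type against the valid options. This is a sketch; build_image_data_url is an illustrative name, not part of the toolkit:

```python
import base64


def build_image_data_url(image_bytes: bytes, image_type: str) -> str:
    """Return a base64 data URL for one of the image formats the API accepts."""
    valid_types = {"jpeg", "png", "gif", "webp"}
    if image_type not in valid_types:
        raise ValueError(f"unsupported image type: {image_type}")
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:image/{image_type};base64,{encoded}"


# b"abc" base64-encodes to "YWJj"
print(build_image_data_url(b"abc", "png"))  # data:image/png;base64,YWJj
```

Validating the type before sending the request surfaces a typo like "jpg" locally instead of as an API error after the upload.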