
Responses

NaviGator Toolkit has limited support for using the OpenAI Responses API.

Most cloud and local models support the Responses API.

The following features are available when using the Responses API:

  • Text inputs and outputs
  • Structured output
  • Image Analysis

Limitations

Model support

Most of the models deployed in NaviGator Toolkit can be used via the Responses API, but not all of them support the entire API. For example, the local models do not support image analysis via the Responses API, although they do support it via the Chat Completions API. Local models also cannot retrieve a response by its ID; instead, the response is returned inline with the request.

See the Text Input section below for more details.

Files API

The Responses API is often used together with the Files API, which is currently only partially supported in NaviGator Toolkit. Only OpenAI models can use the Files API; requests from all other models will fail at this time.

See these directions for more information about using the Files API.
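As a rough sketch, an upload with the OpenAI Python SDK might look like the helper below. The helper name and the "user_data" purpose are illustrative assumptions, and only OpenAI models will accept the uploaded file:

```python
def upload_for_responses(client, path: str) -> str:
    """Upload a local file via the Files API and return its file ID.

    `client` is an OpenAI client pointed at NaviGator (see the examples
    below). The "user_data" purpose is an assumption; adjust it for
    your use case. Only OpenAI models accept these files.
    """
    with open(path, "rb") as f:
        uploaded = client.files.create(file=f, purpose="user_data")
    return uploaded.id
```

The returned ID can then be referenced from a Responses request, for example as an `{"type": "input_file", "file_id": ...}` content item in the input list.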

Examples

Text Input

The following example shows how to write a Python script that uses the Responses API to send text input to an LLM:

  from openai import OpenAI

  # Set your API key and base URL here
  api_key = "sk-XXXXXXXX"  # Replace with your API key
  base_url = "https://api.ai.it.ufl.edu/v1/"  # Base URL for the NaviGator API

  # Initialize the OpenAI API client
  client = OpenAI(api_key=api_key, base_url=base_url)

  response = client.responses.create(
      model="gpt-5-nano",
      input="Tell me a short story about a pirate lost at sea",
  )

  response_id = response.id

  # Retrieve the stored response by its ID, print it, then delete it
  retrieved_response = client.responses.retrieve(response_id)
  print(f"Response text is: {retrieved_response.output_text}")
  delete_response = client.responses.delete(response_id)

The above works on all cloud-based models. The local models currently do not support retrieving the response via the generated response ID, so use the following code instead:

  from openai import OpenAI

  # Set your API key and base URL here
  api_key = "sk-XXXXXXXX"  # Replace with your API key
  base_url = "https://api.ai.it.ufl.edu/v1/"  # Base URL for the NaviGator API

  # Initialize the OpenAI API client
  client = OpenAI(api_key=api_key, base_url=base_url)

  response = client.responses.create(
      model="gpt-oss-20b",
      input="Tell me a short story about a pirate lost at sea",
  )

  # Local models return the response inline, so read it directly
  print(f"Response text is: {response.output_text}")
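To write code that runs unchanged against both cloud and local models, one pattern (a sketch, not an official NaviGator API) is to attempt the retrieve-by-ID path and fall back to the inline response:

```python
def get_output_text(client, response) -> str:
    """Return the output text for a Responses API response.

    Tries to retrieve (and then delete) the response by its ID, which
    works on cloud models; on local models, which only return the
    response inline, it falls back to the response object itself.
    """
    try:
        retrieved = client.responses.retrieve(response.id)
        text = retrieved.output_text
        client.responses.delete(response.id)
        return text
    except Exception:
        # Local models: the response is only available inline.
        return response.output_text
```

The broad `except Exception` is deliberately loose here, since the exact error raised by a local model that lacks retrieval support may vary.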

Structured Output

The following shows how to get structured output from the LLM. Define a class that describes the desired structure; in the example below, that class is CalendarEvent.

  from openai import OpenAI
  from pydantic import BaseModel

  # Model class that will be used for structured output
  class CalendarEvent(BaseModel):
      name: str
      date: str
      participants: list[str]

  # Set your API key and base URL here
  api_key = "sk-XXXXXXXX"  # Replace with your API key
  base_url = "https://api.ai.it.ufl.edu/v1/"  # Base URL for the NaviGator API

  # Initialize the OpenAI API client
  client = OpenAI(api_key=api_key, base_url=base_url)

  response = client.responses.parse(
      model="gpt-5-mini",
      input=[
          {
              "role": "system",
              "content": "Extract event information",
          },
          {
              "role": "user",
              "content": "Alice and Bob are going to a science fair on Friday",
          },
      ],
      text_format=CalendarEvent,
  )

  response_id = response.id

  # Retrieve the stored response by its ID, print it, then delete it
  retrieved_response = client.responses.retrieve(response_id)
  print(f"Response text is: {retrieved_response.output_text}")
  delete_response = client.responses.delete(response_id)

Just as in the text input example above, the local models do not support retrieving the response via the response ID, so use response.output_text directly and skip deleting the response.
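With the SDK's parse helper, the typed result is also available as response.output_parsed, an instance of the class you defined. As a purely local illustration of that mapping (no API call is made; the JSON below is hand-written, not real model output):

```python
from pydantic import BaseModel

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

# Hand-written JSON standing in for what the model would return
raw = '{"name": "science fair", "date": "Friday", "participants": ["Alice", "Bob"]}'

# Pydantic validates the JSON against the schema and builds the object
event = CalendarEvent.model_validate_json(raw)
print(event.participants)
```

If a required field is missing or the wrong type, validation raises an error instead of silently producing a partial object.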

Image Analysis

See the Image to Text page for the various ways that image analysis tasks can be achieved.
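For cloud models, one way to send an image through the Responses API is as a base64 data URL inside the input list. A minimal sketch of building such a payload (the helper name is illustrative, and "image/png" is an assumption that should match your actual file type; the content-item shapes follow the OpenAI Responses API):

```python
import base64

def build_image_input(prompt: str, image_path: str) -> list:
    """Build a Responses API `input` list pairing a text prompt with an image.

    The image is embedded as a base64 data URL. "image/png" is assumed
    here; use the media type that matches the actual file.
    """
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return [
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": prompt},
                {"type": "input_image", "image_url": f"data:image/png;base64,{encoded}"},
            ],
        }
    ]
```

The result can be passed as `input=build_image_input("Describe this image", "photo.png")` to `client.responses.create(...)` with a cloud model.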