o4-mini-medium

Approved Data Classifications

Description

OpenAI's o4-mini is a compact, cost-efficient reasoning model optimized for speed and performance in tasks like coding, mathematics, and visual analysis. Building on its predecessor, o3-mini, o4-mini introduces native multimodal capabilities, enabling it to process both text and images. It integrates fully with ChatGPT tools, including Python execution, web browsing, and image manipulation, and supports a configurable reasoning effort (low, medium, high), making it well suited for high-throughput applications that demand quick, reliable reasoning. Notably, o4-mini achieved top marks on the AIME 2025 benchmark when paired with a Python interpreter, demonstrating its effectiveness in tool-augmented problem-solving.

Capabilities

Model: o4-mini-medium
Training Data: May 2024
Input: Text, Image
Output: Text
Context Length: 200,000 tokens
Cost (per 1 million tokens): $1.10 input / $4.40 output
Note: All prices listed are per 1 million (1M) tokens.
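As an illustration, a hypothetical request that consumes 10,000 input tokens and produces 2,000 output tokens would cost roughly (0.01 × $1.10) + (0.002 × $4.40) ≈ $0.011 + $0.0088 ≈ $0.02.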

Availability

Cloud Provider

Usage

curl -X POST https://api.ai.it.ufl.edu/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -d '{
    "model": "o4-mini-medium",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Write a haiku about an Alligator."
      }
    ]
  }'
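
Because o4-mini accepts image input, a request can also combine text and an image in a single user message. The sketch below assumes the endpoint follows the standard OpenAI chat completions format for multimodal content (a content array mixing "text" and "image_url" parts); the image URL is a placeholder.

curl -X POST https://api.ai.it.ufl.edu/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -d '{
    "model": "o4-mini-medium",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://example.com/alligator.jpg" } }
        ]
      }
    ]
  }'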

When to use

Choose o4-mini-medium when you need fast, low-cost reasoning over text and images, for example in coding, mathematics, and visual analysis. Its full integration with ChatGPT tools, including Python execution, web browsing, image analysis, and file interpretation, suits applications that demand quick, reliable reasoning at scale, and its pricing makes it a practical choice for developers, educators, and professionals seeking a balance between capability and cost.