Gemini 1.5 Pro
Approved Data Classifications
Description
Gemini 1.5 Pro is a multimodal AI model developed by Google DeepMind that handles tasks involving text, images, audio, and video. Launched in early 2024, the model offers a context window of up to 1 million tokens, letting it process hours of video or very long documents in a single prompt. It uses a mixture-of-experts (MoE) architecture, activating specialized pathways within its neural network depending on the input, which improves both efficiency and output quality. The model performs well on complex reasoning tasks, can generate structured outputs such as JSON from unstructured data, and offers strong translation, code generation, and multimodal question-answering capabilities. Because it can learn from long prompts without additional fine-tuning, Gemini 1.5 Pro is a versatile tool for developers and businesses across a wide range of applications.
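For a rough sense of what a 1-million-token window means in practice, the sketch below converts it into approximate English word and page counts. The tokens-per-word and words-per-page ratios are common rules of thumb, not official figures.

```python
# Back-of-the-envelope scale of a 1,000,000-token context window.
# Assumptions (rules of thumb, not official figures):
#   ~0.75 English words per token, ~500 words per page.
CONTEXT_TOKENS = 1_000_000
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 500

words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)  # approximate word count
pages = words // WORDS_PER_PAGE                # approximate page count
print(words, pages)  # 750000 1500
```

Actual capacity depends on the tokenizer and the content (code and non-English text tokenize differently), so treat these numbers as order-of-magnitude estimates only.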
Capabilities
Model | Training Data | Input | Output | Context Length | Cost (per 1M tokens) |
---|---|---|---|---|---|
gemini-1.5-pro | February 2024 | Image, Text | Text | 1,000,000 | $2.50 input / $10.00 output |
1M represents 1 million tokens; all prices listed are per 1 million tokens.
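The per-token rates above make it easy to estimate what a request will cost. The helper below is a minimal sketch using the listed prices ($2.50 per 1M input tokens, $10.00 per 1M output tokens); the function name and token counts are illustrative, not part of the API.

```python
# Rough request-cost estimate for gemini-1.5-pro at the listed rates.
INPUT_RATE = 2.50 / 1_000_000    # dollars per input token
OUTPUT_RATE = 10.00 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in dollars."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 100,000-token prompt producing a 2,000-token reply
print(round(estimate_cost(100_000, 2_000), 4))  # 0.27
```

Actual billing may differ (e.g. cached or long-context pricing tiers), so check the provider's current rate card before relying on these numbers.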
Availability
Cloud Provider
Usage
- curl
- python
- javascript
curl -X POST https://api.ai.it.ufl.edu/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <API_TOKEN>" \
-d '{
"model": "gemini-1.5-pro",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Write a haiku about an Alligator."
}
]
}'
from openai import OpenAI
client = OpenAI(
api_key="your_api_key",
base_url="https://api.ai.it.ufl.edu/v1"
)
response = client.chat.completions.create(
    model="gemini-1.5-pro",  # model to send to the proxy
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about an Alligator."}
    ]
)
print(response.choices[0].message.content)
import OpenAI from 'openai';
const openai = new OpenAI({
apiKey: 'your_api_key',
baseURL: 'https://api.ai.it.ufl.edu/v1'
});
const completion = await openai.chat.completions.create({
model: "gemini-1.5-pro",
messages: [
{ role: "system", content: "You are a helpful assistant." },
{
role: "user",
content: "Write a haiku about an Alligator.",
},
],
});
console.log(completion.choices[0].message);