Mistral Small
Approved Data Classifications
Description
Mistral Small is Mistral AI's compact yet powerful language model, designed for efficiency and optimized for low-latency applications. Launched in May 2024, this model stands out as the smallest proprietary offering from Mistral, outperforming larger models like Mixtral 8x7B while maintaining rapid response times. With a context window of up to 32,000 tokens, Mistral Small excels in a variety of language-based tasks, including code generation and multilingual processing, supporting languages such as French, German, Spanish, and Italian. Its architecture is specifically tailored for high-volume workloads, making it an ideal choice for developers seeking to integrate AI capabilities into applications that demand quick and reliable performance. Additionally, Mistral Small incorporates built-in safety features to ensure responsible usage, further enhancing its appeal for deployment in real-world scenarios.
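Since the 32,000-token context window is a hard limit on prompt plus completion, a rough pre-flight check can be sketched as follows. The ~4 characters-per-token heuristic and the helper name are illustrative assumptions, not the model's actual tokenizer.

```python
# Rough sketch: check whether a prompt likely fits in Mistral Small's
# 32,000-token context window. The ~4 characters-per-token heuristic is
# an approximation, not the model's real tokenization.
CONTEXT_LIMIT = 32_000

def fits_in_context(prompt: str, max_output_tokens: int = 1_024) -> bool:
    """Estimate whether prompt + reserved output fits in the context window."""
    estimated_prompt_tokens = len(prompt) / 4  # heuristic, not exact
    return estimated_prompt_tokens + max_output_tokens <= CONTEXT_LIMIT

print(fits_in_context("Summarize this paragraph."))  # → True
```

For precise counts, use the model's own tokenizer rather than a character heuristic.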
Capabilities
| Model | Training Data | Input | Output | Context Length | Cost (per 1 million tokens) |
|---|---|---|---|---|---|
| mistral-small | February 2024 | Text | Text | 32,000 | $1.00 input / $3.00 output |

All prices are per 1 million (1M) tokens.
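As a quick illustration of the pricing above, a minimal sketch (the helper name is ours, not part of any API) that estimates the cost of a single request from its token counts:

```python
# Hypothetical helper illustrating the pricing table above; rates are per
# 1 million tokens ($1.00 input, $3.00 output for mistral-small).
INPUT_RATE = 1.00   # USD per 1M input tokens
OUTPUT_RATE = 3.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed per-1M rates."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# Example: a 10,000-token prompt with a 2,000-token completion
print(f"${estimate_cost(10_000, 2_000):.4f}")  # → $0.0160
```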
Availability
Cloud Provider
Usage
- curl
- python
- javascript
```shell
curl -X POST https://api.ai.it.ufl.edu/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -d '{
    "model": "mistral-small",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Write a haiku about an Alligator."
      }
    ]
  }'
```
```python
from openai import OpenAI

client = OpenAI(
    api_key="your_api_key",
    base_url="https://api.ai.it.ufl.edu/v1"
)

response = client.chat.completions.create(
    model="mistral-small",  # model to send to the proxy
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about an Alligator."}
    ]
)

print(response.choices[0].message.content)
```
```javascript
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'your_api_key',
  baseURL: 'https://api.ai.it.ufl.edu/v1'
});

const completion = await openai.chat.completions.create({
  model: "mistral-small",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Write a haiku about an Alligator." },
  ],
});

console.log(completion.choices[0].message.content);
```