
Mixtral 8x7B

Approved Data Classifications

Description

Mixtral 8x7B is a sparse mixture-of-experts (SMoE) language model developed by Mistral AI and released in late 2023. Its architecture combines eight expert networks of roughly seven billion parameters each, and a router activates only two experts per token during inference, so the model delivers high-quality output at a fraction of the compute cost of a dense model of comparable capacity. On public benchmarks it outperforms the larger Llama 2 70B and matches or exceeds GPT-3.5. With a context length of 32,000 tokens, Mixtral 8x7B handles tasks such as code generation, multilingual processing, and complex data analysis, making it a versatile choice for developers and researchers. Its openly licensed weights (Apache 2.0) and efficient design further broaden its appeal within the AI community.
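
The routing idea behind a sparse mixture-of-experts layer can be illustrated in a few lines. The sketch below is a simplified, hypothetical top-2 router in Python/NumPy, not Mistral's implementation; the expert and gate weights are random placeholders used only to show how just two of the eight experts run per token.

import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

# Placeholder "experts": each is a simple linear map d_model -> d_model.
expert_weights = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
gate_weights = rng.standard_normal((d_model, n_experts)) * 0.1  # router projection

def smoe_layer(x: np.ndarray) -> np.ndarray:
    """Apply a top-2 sparse mixture-of-experts layer to one token vector."""
    logits = x @ gate_weights                    # score every expert for this token
    top = np.argsort(logits)[-top_k:]            # keep only the 2 highest-scoring experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over the chosen experts
    # Only the selected experts are evaluated, which is where the compute savings come from.
    return sum(w * (x @ expert_weights[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
print(smoe_layer(token).shape)  # (16,)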

Capabilities

Model                 | Training Data | Input | Output | Context Length (tokens) | Cost (per 1M tokens)
mixtral-8x7b-instruct | December 2023 | Text  | Text   | 32,000                  | $4.50 input / $7.00 output
Note: All prices are per 1 million (1M) tokens.
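
As a quick sanity check on the pricing, the snippet below estimates the cost of a single request at the listed rates; the token counts are made-up values used only for illustration.

# Estimate the cost of one request at the listed rates (USD per 1M tokens).
INPUT_RATE = 4.50   # $ per 1M input tokens
OUTPUT_RATE = 7.00  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one chat completion."""
    return (input_tokens / 1_000_000) * INPUT_RATE + (output_tokens / 1_000_000) * OUTPUT_RATE

# Hypothetical request: 1,200 prompt tokens and 300 completion tokens.
print(f"${request_cost(1_200, 300):.6f}")  # ≈ $0.007500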

Availability

Cloud Provider

Usage

curl -X POST https://api.ai.it.ufl.edu/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -d '{
    "model": "mixtral-8x7b-instruct",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Write a haiku about an Alligator."
      }
    ]
  }'
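
The request above follows the OpenAI chat-completions format, so an OpenAI-compatible client should also work against this endpoint. The sketch below is an assumption based on that request shape rather than documented gateway behavior; it uses the openai Python client with the base URL shown above, and <API_TOKEN> is a placeholder for your real key.

from openai import OpenAI

# Assumes the gateway speaks the OpenAI chat-completions protocol,
# as the curl request above suggests.
client = OpenAI(
    base_url="https://api.ai.it.ufl.edu/v1",
    api_key="<API_TOKEN>",
)

response = client.chat.completions.create(
    model="mixtral-8x7b-instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about an Alligator."},
    ],
)

print(response.choices[0].message.content)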