Mistral AI · Released April 17, 2024

Mixtral 8x22B

Open Source · #16 Arena Rank · 176B (39B active) parameters

Mixtral 8x22B holds a solid spot in the Arena rankings at #16. Context window: 64K tokens.

Context

64K

Input

$0.90

Key Specifications

🏆

Arena Rank

#16

📐

Context Window

64K

📥

Input Price

per 1M tokens

$0.90

📤

Output Price

per 1M tokens

$2.70

🧠

Parameters

176B (39B active)

🔓

Open Source

Yes

Best For

Efficient reasoning · multilingual · coding

About Mixtral 8x22B

Mixtral 8x22B, developed by Mistral AI, is a large Mixture-of-Experts model with 176 billion total parameters (39 billion active per token) and a 64K token context window. The model scales the MoE architecture to deliver stronger reasoning, coding, and multilingual performance while maintaining the efficiency advantages of sparse expert routing. It supports function calling and structured outputs for production agentic workflows. Free and open-source, Mixtral 8x22B can be deployed on enterprise GPU infrastructure by organizations that require powerful, self-hosted AI. Through API providers, it is priced at $0.90 per million input tokens and $2.70 per million output tokens. Thanks to its efficient sparse architecture, the model delivers performance competitive with proprietary models at significantly lower operational cost. Mixtral 8x22B ranks #16 on the Chatbot Arena leaderboard, confirming strong capability for an open-weight MoE model.
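Since the model supports function calling through API providers, a request typically follows the OpenAI-compatible chat format that most providers expose. The sketch below only builds the request payload; the model id and the `get_weather` tool are illustrative assumptions, and the exact id varies by provider.

```python
import json

# Hedged sketch: a function-calling request body in the OpenAI-compatible
# chat format many providers expose for Mixtral 8x22B. The model id below
# is an assumption -- check your provider's catalog for the exact string.
payload = {
    "model": "mistralai/Mixtral-8x22B-Instruct-v0.1",  # provider-specific (assumption)
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

# Serialize as you would before POSTing it to the provider's chat endpoint.
body = json.dumps(payload)
print(body[:40])
```

The same payload works for structured outputs: constrain the model by describing the desired schema in the tool's `parameters` object.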

Pricing per 1M tokens

Input Tokens

$0.90

Output Tokens

$2.70

Frequently Asked Questions

What is Mixtral 8x22B?
Mixtral 8x22B is a sparse Mixture-of-Experts model from Mistral AI with 176 billion total parameters (39 billion active per token), a 64K token context window, and openly available weights. It targets reasoning, coding, and multilingual workloads while keeping inference costs low through sparse expert routing, and it ranks #16 on the Chatbot Arena leaderboard.
How much does Mixtral 8x22B cost?
Input pricing for Mixtral 8x22B is $0.90 per million tokens; output runs $2.70. Token-based pricing means you can scale up or down without a fixed commitment.
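The per-token pricing above translates directly into a cost estimate. A minimal sketch, using the rates listed on this page ($0.90/M input, $2.70/M output):

```python
def mixtral_8x22b_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate API cost in USD at $0.90 per 1M input tokens
    and $2.70 per 1M output tokens."""
    return input_tokens / 1_000_000 * 0.90 + output_tokens / 1_000_000 * 2.70

# e.g. a summarization call: 12,000 tokens in, 1,000 tokens out
print(round(mixtral_8x22b_cost(12_000, 1_000), 4))  # → 0.0135
```

So a typical long-document summarization call costs fractions of a cent, which is where the "scale up or down" flexibility of token-based pricing shows.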
What is Mixtral 8x22B's context window?
The context window for Mixtral 8x22B is 64K tokens. That's the maximum amount of text you can feed into a single prompt, including system instructions, conversation history, and the actual query.
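A quick way to sanity-check whether a prompt fits in the 64K window is the common rough heuristic of ~4 characters per token. This is an approximation only; an accurate count requires the model's actual tokenizer.

```python
CONTEXT_WINDOW = 64_000  # Mixtral 8x22B's context window, in tokens

def fits_in_context(prompt: str, reserved_for_output: int = 2_000) -> bool:
    """Rough fit check using the ~4-chars-per-token heuristic.
    Reserves headroom for the model's response; for exact counts,
    use the model's tokenizer instead."""
    estimated_tokens = len(prompt) // 4
    return estimated_tokens + reserved_for_output <= CONTEXT_WINDOW

# ~60,000 characters ≈ 15K estimated tokens: comfortably within 64K
print(fits_in_context("hello " * 10_000))  # → True
```

Remember that system instructions and prior conversation turns count against the same budget, so reserve room for them as well.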
Is Mixtral 8x22B open source?
Mixtral 8x22B is fully open source. You can grab the weights, run it on your own hardware, and fine-tune it for specific tasks. That flexibility is a big deal for teams with strict data requirements.
What is Mixtral 8x22B best for?
The sweet spot for Mixtral 8x22B is efficient reasoning, multilingual tasks, and coding. If your workload fits one of these categories, it's worth benchmarking against alternatives.