Mixtral 8x22B
Mixtral 8x22B holds a solid spot in the Arena rankings at #16. Context window: 64K tokens.
Key Specifications
Arena Rank: #16
Context Window: 64K tokens
Input Price: $0.90 per 1M tokens
Output Price: $2.70 per 1M tokens
Parameters: 141B total (39B active per token)
Open Source: Yes (Apache 2.0)
Best For: Self-hosted enterprise deployments
About Mixtral 8x22B
Mixtral 8x22B, developed by Mistral AI, is a large Mixture-of-Experts (MoE) model with 141 billion total parameters (39 billion active per token) and a 64K-token context window. It scales the MoE architecture to deliver stronger reasoning, coding, and multilingual performance while keeping the efficiency advantages of sparse expert routing: each token is processed by only 2 of the 8 experts in each layer, so most of the model's weights sit idle on any given forward pass. The model supports function calling and structured outputs for production agentic workflows. Released under the Apache 2.0 license, it can be self-hosted on enterprise GPU infrastructure by organizations that need full control over their deployment, or accessed through API providers at $0.90 per million input tokens. Its sparse architecture lets it compete with proprietary models at significantly lower operational cost, and its #16 ranking on the Chatbot Arena leaderboard confirms strong capability for an open-weight MoE model.
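To make the "39B active out of 141B" claim concrete, here is a minimal sketch of top-2 sparse expert routing, the mechanism behind it. The expert count (8) and routing width (top-2) follow the published Mixtral design; the tiny layer dimensions and the ReLU stand-in for the expert feed-forward blocks are illustrative assumptions, not the real architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS, TOP_K = 8, 2   # Mixtral routes each token to 2 of 8 experts
D_MODEL, D_FF = 16, 64    # toy sizes; the real model is vastly larger

# Each expert is an independent feed-forward block (here: two toy matrices).
experts = [rng.standard_normal((D_MODEL, D_FF)) * 0.02 for _ in range(N_EXPERTS)]
out_proj = [rng.standard_normal((D_FF, D_MODEL)) * 0.02 for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.02  # gating network

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token through its top-2 experts and mix their outputs."""
    logits = x @ router_w                          # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -TOP_K:]  # indices of the 2 best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):                    # per-token dispatch
        sel = top[t]
        gate = np.exp(logits[t, sel])
        gate /= gate.sum()                         # softmax over the selected experts only
        for g, e in zip(gate, sel):
            h = np.maximum(x[t] @ experts[e], 0.0)  # expert FFN (ReLU toy stand-in)
            out[t] += g * (h @ out_proj[e])
    return out

tokens = rng.standard_normal((4, D_MODEL))
print(moe_layer(tokens).shape)  # (4, 16): same shape out, but only 2/8 experts ran per token
```

Because only the selected experts' weights participate in each token's computation, the per-token FLOP cost tracks the active parameter count (39B), not the total (141B).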
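For the function-calling support mentioned above, here is a hedged sketch of a tool-augmented request through an OpenAI-compatible chat-completions endpoint, which is how many hosts expose Mixtral. The base URL, API-key variable, and model identifier are placeholders: providers name the model differently, so check your host's documentation for the exact values.

```python
import os
import requests

API_BASE = "https://api.example-provider.com/v1"  # placeholder endpoint, not a real host

resp = requests.post(
    f"{API_BASE}/chat/completions",
    headers={"Authorization": f"Bearer {os.environ.get('PROVIDER_API_KEY', '')}"},
    json={
        "model": "mixtral-8x22b",  # provider-specific model id (assumed)
        "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool defined by the caller
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    },
    timeout=30,
)
# If the model elects to call the tool, the structured call appears here
# as JSON arguments rather than free text.
print(resp.json()["choices"][0]["message"].get("tool_calls"))
```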
Pricing per 1M tokens
Input Tokens: $0.90
Output Tokens: $2.70
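At these rates, per-request cost is simple arithmetic: tokens divided by one million, times the rate. A quick helper, with example token counts that are purely illustrative:

```python
INPUT_RATE, OUTPUT_RATE = 0.90, 2.70  # USD per 1M tokens, from the table above

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request at the listed rates."""
    return input_tokens / 1e6 * INPUT_RATE + output_tokens / 1e6 * OUTPUT_RATE

# e.g. a 50K-token prompt that produces a 2K-token answer:
print(f"${request_cost(50_000, 2_000):.4f}")  # $0.0504
```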
Compare Mixtral 8x22B
See how Mixtral 8x22B stacks up against other leading AI models