Mistral AI · Released September 18, 2024

Mistral Small

Open Source · #19 Arena Rank · 22B parameters

Mistral Small holds a solid spot in the Arena rankings at #19. Context window: 32K tokens.

Context

32K

Input

$0.20

Key Specifications

🏆

Arena Rank

#19

📐

Context Window

32K

📥

Input Price

per 1M tokens

$0.20

📤

Output Price

per 1M tokens

$0.60

🧠

Parameters

22B

🔓

Open Source

Yes

Best For

Fast inference · cost-effective tasks · chat

About Mistral Small

Mistral Small, developed by Mistral AI, is a compact 22 billion parameter model with a 32K token context window optimized for fast inference and low deployment costs. The model handles coding, summarization, classification, and conversational tasks while maintaining the quality standards established by the Mistral model family. Its small footprint makes it suitable for edge deployment, cost-sensitive production applications, and use cases requiring low-latency responses. Priced at $0.20 per million input tokens and $0.60 per million output tokens, it offers affordable access to Mistral's technology. As an open-source model, it can also be self-hosted without API costs. Mistral Small ranks #19 on the Chatbot Arena leaderboard, demonstrating competitive performance for its compact size and establishing it as a strong option for budget-conscious deployments.

Pricing per 1M tokens

Input Tokens

$0.20

Output Tokens

$0.60

Frequently Asked Questions

What is Mistral Small?
Mistral Small is a compact 22 billion parameter open-source model from Mistral AI with a 32K token context window, optimized for fast inference and low deployment costs. It handles coding, summarization, classification, and conversational tasks, and its small footprint suits edge deployment, latency-sensitive workloads, and cost-conscious production use.
How much does Mistral Small cost?
Input pricing for Mistral Small is $0.20 per million tokens; output runs $0.60. Token-based pricing means you can scale up or down without a fixed commitment.
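Since billing is purely per-token at the listed rates, a request's cost is simple arithmetic. A minimal sketch (the function name and example token counts are illustrative, not part of any official SDK):

```python
# Mistral Small's listed rates, converted to USD per single token.
INPUT_RATE = 0.20 / 1_000_000   # $0.20 per 1M input tokens
OUTPUT_RATE = 0.60 / 1_000_000  # $0.60 per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed per-token rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 10K-token prompt producing a 2K-token reply.
print(f"${estimate_cost(10_000, 2_000):.4f}")  # → $0.0032
```

Note that output tokens cost 3x input tokens here, so long completions dominate the bill even for short prompts.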
What is Mistral Small's context window?
The context window for Mistral Small is 32K tokens. That's the maximum amount of text you can feed into a single prompt, including system instructions, conversation history, and the actual query.
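A quick way to sanity-check whether a prompt will fit is to estimate its token count and reserve room for the reply. This sketch uses the common rough heuristic of ~4 characters per token for English text; it is an approximation, not Mistral's actual tokenizer:

```python
# Rough fit check for Mistral Small's 32K-token context window.
CONTEXT_WINDOW = 32_000

def rough_token_count(text: str) -> int:
    # ~4 characters per token is a coarse English-text heuristic,
    # not an exact tokenizer count.
    return max(1, len(text) // 4)

def fits_in_window(prompt: str, max_reply_tokens: int = 1_000) -> bool:
    # The prompt (system + history + query) and the reply share one window,
    # so reserve max_reply_tokens for the model's output.
    return rough_token_count(prompt) + max_reply_tokens <= CONTEXT_WINDOW

print(fits_in_window("Summarize this paragraph." * 10))  # → True
```

For production use you would swap in the model's real tokenizer, since the heuristic can be off substantially for code or non-English text.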
Is Mistral Small open source?
Mistral Small is fully open source. You can grab the weights, run it on your own hardware, and fine-tune it for specific tasks. That flexibility is a big deal for teams with strict data requirements.
What is Mistral Small best for?
The sweet spot for Mistral Small is fast inference, cost-effective tasks, and chat. If your workload fits one of these categories, it's worth benchmarking against alternatives.