Gemini 2.0 Flash Lite
Gemini 2.0 Flash Lite holds a solid spot in the Arena rankings at #22. Context window: 1M tokens.
Context: 1M
Input: $0.075
Key Specifications
Arena Rank: #22
Context Window: 1M tokens
Input Price: $0.075 per 1M tokens
Output Price: $0.30 per 1M tokens
Parameters: Undisclosed
Open Source: No
Best For: High-volume, cost-sensitive tasks (classification, content filtering, routing, basic summarization)
About Gemini 2.0 Flash Lite
Gemini 2.0 Flash Lite, developed by Google DeepMind, is the most affordable model in Google's lineup with a 1 million token context window. The model targets extremely high-volume applications where cost minimization is the primary constraint, handling classification, content filtering, routing, and basic summarization tasks competently. At $0.075 per million input tokens and $0.30 per million output tokens, it ranks among the cheapest API-accessible models from any major AI provider. Despite its budget positioning, Flash Lite inherits the massive context window from the Gemini architecture, enabling long-document processing at minimal cost. Gemini 2.0 Flash Lite ranks #22 on the Chatbot Arena leaderboard, demonstrating adequate quality for production workloads that prioritize throughput and cost-efficiency over maximum capability.
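The listed rates make per-request costs easy to estimate. A minimal sketch, using the input/output prices from this page; the token counts in the example are hypothetical:

```python
# Estimating Gemini 2.0 Flash Lite request cost from the listed
# per-1M-token rates. Rates are taken from this page; the token
# counts below are illustrative examples only.
INPUT_PRICE_PER_M = 0.075   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.30   # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: long-document summarization (500K tokens in, 2K tokens out)
cost = request_cost(500_000, 2_000)
print(f"${cost:.4f}")  # 500K in -> $0.0375, 2K out -> $0.0006, total $0.0381
```

Even a request that fills half the context window costs under four cents at these rates, which is the economics behind the model's high-volume positioning.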
Pricing per 1M tokens
Input Tokens: $0.075
Output Tokens: $0.30
Compare Gemini 2.0 Flash Lite
See how Gemini 2.0 Flash Lite stacks up against other leading AI models