Google DeepMind · Released February 25, 2025
Gemini 2.0 Flash Lite
#22 Arena Rank · Undisclosed parameters
Context: 1M tokens · Input: $0.075 per 1M tokens
Key Specifications
🏆 Arena Rank: #22
📐 Context Window: 1M tokens
📥 Input Price: $0.075 per 1M tokens
📤 Output Price: $0.30 per 1M tokens
🧠 Parameters: Undisclosed
🔒 Open Source: No
Best For: High-volume, low-cost tasks
About Gemini 2.0 Flash Lite
Gemini 2.0 Flash Lite is Google's most affordable model, designed for extremely high-volume applications where cost is the primary concern. At just $0.075 per million input tokens, it's one of the cheapest AI models available from a major provider. Despite its low price, it supports a 1 million token context window and handles basic tasks competently. Ideal for classification, routing, content filtering, and other high-throughput tasks.
Built by Google DeepMind
Pricing per 1M tokens
Input Tokens: $0.075
Output Tokens: $0.30
Frequently Asked Questions
What is Gemini 2.0 Flash Lite?
Gemini 2.0 Flash Lite is Google's most affordable model, designed for extremely high-volume applications where cost is the primary concern. At just $0.075 per million input tokens, it's one of the cheapest AI models available from a major provider. Despite its low price, it supports a 1 million token context window and handles basic tasks competently. Ideal for classification, routing, content filtering, and other high-throughput tasks.
How much does Gemini 2.0 Flash Lite cost?
Gemini 2.0 Flash Lite costs $0.075 per 1 million input tokens and $0.30 per 1 million output tokens. Pricing is based on token usage, making it cost-effective for both small and large-scale applications.
What is Gemini 2.0 Flash Lite's context window?
Gemini 2.0 Flash Lite has a context window of 1M tokens. This determines how much text the model can process in a single request — larger context windows allow the model to handle longer documents, maintain more conversation history, and reason over bigger codebases.
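A quick way to gauge whether a document will fit in the context window is to estimate its token count. The sketch below uses the common rule of thumb of roughly 4 characters per token for English text; this is an approximation, not the actual Gemini tokenizer, and real counts vary by language and content:

```python
# Rough fit check against the 1M-token context window,
# using a ~4 characters-per-token heuristic (an approximation;
# exact counts require the model's real tokenizer).
CONTEXT_WINDOW_TOKENS = 1_000_000
CHARS_PER_TOKEN = 4  # heuristic for English text

def fits_in_context(text: str) -> bool:
    """Return True if the text's estimated token count fits the window."""
    estimated_tokens = len(text) // CHARS_PER_TOKEN
    return estimated_tokens <= CONTEXT_WINDOW_TOKENS

# A ~600k-character document is roughly 150k tokens: comfortably inside.
print(fits_in_context("hello " * 100_000))  # → True
```

For billing-accurate counts, the provider's own token-counting endpoint should be used rather than a character heuristic.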
Is Gemini 2.0 Flash Lite open source?
No, Gemini 2.0 Flash Lite is a proprietary model available through Google DeepMind's API. Proprietary models are typically accessible via API endpoints and offer managed infrastructure, support, and regular updates from the provider.
What is Gemini 2.0 Flash Lite best for?
Gemini 2.0 Flash Lite is best suited for high-volume, low-cost tasks such as classification, routing, and content filtering, where its low per-token pricing makes large-scale throughput affordable within Google DeepMind's model lineup.