Google: Gemini 2.5 Flash Lite

Description

Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, "thinking" (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the [Reasoning API parameter](https://openrouter.ai/docs/use-cases/reasoning-tokens) to selectively trade off cost for intelligence.
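
Below is a minimal sketch of enabling thinking for this model through OpenRouter's chat completions endpoint. The model slug and the exact shape of the "reasoning" object are assumptions here; confirm both against the Reasoning API parameter documentation linked above.

```python
# Sketch: opt in to reasoning for a single request via OpenRouter.
# Assumptions: model slug "google/gemini-2.5-flash-lite" and a
# "reasoning" request field as described in the Reasoning API docs.
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "google/gemini-2.5-flash-lite",  # assumed slug for this listing
        "messages": [
            {"role": "user", "content": "Plan a 3-step approach to debug a flaky test."}
        ],
        # Thinking is disabled by default for this model; this opt-in block
        # asks the provider to spend reasoning tokens on the request.
        "reasoning": {"effort": "medium"},
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```

Omitting the "reasoning" block keeps the default low-latency behavior; including it trades speed and cost for more deliberate answers on a per-request basis.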

Architecture

Modality: text+image->text
Input Modalities: file, image, text, audio
Output Modalities: text
Tokenizer: Gemini

Context and Limits

Context Length: 1,048,576 tokens
Max Response Tokens: 65,535 tokens
Moderation: Disabled

Pricing (RUB)

Request:
Image:
Web Search:
Internal Reasoning:
Prompt (1K tokens):
Completion (1K tokens):

Default Parameters

Temperature: 0
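
The listed defaults can be overridden per request with standard chat-completions parameters. The sketch below assumes the same model slug as above; the response length must stay within the 65,535-token cap listed under Context and Limits.

```python
# Sketch: override the listed default temperature and cap the response length.
# Assumption: model slug "google/gemini-2.5-flash-lite".
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "google/gemini-2.5-flash-lite",  # assumed slug for this listing
        "messages": [
            {"role": "user", "content": "Summarize this changelog in one line."}
        ],
        "temperature": 0.7,  # overrides the listed default of 0
        "max_tokens": 2048,  # well under the 65,535-token response limit
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```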