The 2.0 Family
Google's Gemini 2.0 announcement in October 2025 formalised the full model family: Gemini 2.0 Ultra (the most capable), Gemini 2.0 Pro (balanced capability and cost), and Gemini 2.0 Flash (optimised for speed and efficiency), each targeting different use cases across the developer and enterprise spectrum. The 2.0 architecture introduced native multimodal output: the ability to generate images and audio directly, not just accept them as input.
Agentic Architecture
All 2.0 models were designed for agentic workflows. Google announced Project Mariner (browser automation), Project Astra (real-time multimodal assistant), and Jules (developer coding agent) as flagship applications of the 2.0 architecture. Developers could access agentic capabilities through the Gemini API's Function Calling and Code Execution features.
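The Function Calling flow follows the usual pattern: the developer declares tools in a JSON-schema-like format, the model replies with a structured call instead of text, and the application executes the call locally and returns the result on the next turn. A minimal sketch of that loop, simulated locally; the `get_weather` tool, its schema, and the canned response are illustrative, not taken from Google's documentation:

```python
import json

# Illustrative tool declaration in the JSON-schema style used for
# function calling; the tool name and fields here are hypothetical.
GET_WEATHER_DECLARATION = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Local implementation standing in for a real weather service.
def get_weather(city: str) -> dict:
    return {"city": city, "temp_c": 31, "condition": "humid"}  # canned data

TOOLS = {"get_weather": get_weather}

def handle_function_call(call: dict) -> dict:
    """Dispatch a model-issued function call to local code and package
    the result as the payload sent back to the model on the next turn."""
    fn = TOOLS[call["name"]]
    result = fn(**call["args"])
    return {"functionResponse": {"name": call["name"], "response": result}}

# Simulated model turn: the model replies with a structured call, not text.
model_call = {"name": "get_weather", "args": {"city": "Mumbai"}}
reply = handle_function_call(model_call)
print(json.dumps(reply))
```

The real loop is the same shape, with the Gemini API generating `model_call` and consuming `reply`; the SDK can also automate the dispatch step entirely.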
Performance Benchmarks
Gemini 2.0 Ultra scored 87.5% on MMLU, 90.0% on HumanEval, and 86.2% on MATH, placing it at or near the top of published benchmark leaderboards. On the video understanding tasks found only in Google's own benchmark suite, 2.0 Ultra outperformed every competitor model by a wide margin, reflecting Google's advantage in training data and compute infrastructure.
Pricing and Availability
Gemini 2.0 Flash remained the free-tier default. Gemini 2.0 Pro was priced at $7.00 per million input tokens and 2.0 Ultra at $25.00 per million input tokens, positioning Google's premium tier below GPT-4 Turbo's pricing from earlier in 2025.
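At these rates, input-side costs are straightforward to estimate. A small sketch using only the per-million-token input prices quoted above; output-token rates are not listed in this section and are deliberately omitted:

```python
# Input-token prices quoted above, in USD per million tokens.
# (Flash is the free tier; output-token rates are not given here.)
INPUT_PRICE_PER_MTOK = {
    "gemini-2.0-flash": 0.00,
    "gemini-2.0-pro": 7.00,
    "gemini-2.0-ultra": 25.00,
}

def input_cost_usd(model: str, input_tokens: int) -> float:
    """Estimated input-side cost of a single request."""
    return INPUT_PRICE_PER_MTOK[model] * input_tokens / 1_000_000

# Example: a 40,000-token prompt on Pro vs Ultra.
print(input_cost_usd("gemini-2.0-pro", 40_000))    # 0.28
print(input_cost_usd("gemini-2.0-ultra", 40_000))  # 1.0
```

A back-of-envelope calculator like this is often the first thing teams build when deciding between tiers, since the 3.5x gap between Pro and Ultra compounds quickly at production volumes.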
What This Means for Indian Businesses
Gemini 2.0's expanded capabilities are increasingly accessible to Indian developers and businesses through the free AI Studio tier and competitive Vertex AI pricing. The 2.0 family's improvements in instruction following and agentic task completion make it a stronger candidate for building reliable AI products. Indian startups looking to embed AI into their products have a clear, cost-effective path through Gemini 2.0 Flash.