Gemini 3 at a Glance:
- Enhanced reasoning control. Developers can adjust model thinking depth with new parameters.
- Multimodal processing flexibility. Improved media handling and structured outputs for diverse input types.
- Developer impact. Updates give developers granular control over model reasoning cost and workflow reliability.
Google DeepMind announced updates to the Gemini API to support new capabilities in Gemini 3. The release added controls for reasoning depth and multimodal token usage, along with enforcement of encrypted "Thought Signatures."
According to company officials, the changes help developers tune cost, latency and reliability for agentic workflows that span text, images and external tools. Grounding with Google Search pricing shifted to a usage-based model.
Table of Contents
- Feature Breakdown: What’s New in the Gemini 3 API
- Who the Gemini 3 API Updates Are Built For
- The Push Toward Reasoning-First AI
- Google DeepMind Background
Feature Breakdown: What’s New in the Gemini 3 API
The update introduced controls for reasoning depth, multimodal tokens and conversation continuity.
| New Capability | Description |
|---|---|
| thinking_level parameter | Adjusts internal reasoning depth for cost or quality optimization |
| media_resolution parameter | Controls token allocation for images, video and documents |
| Thought Signatures | Encrypted markers that retain chain of reasoning across conversations |
| Function calling validation | Requires thought signatures to be passed back with function responses; missing signatures return a 400 error |
| Search pricing change | Grounding with Google Search moves to usage-based pricing at $14 per 1,000 queries |
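For illustration, here is a minimal sketch of how the two new parameters might be set through the google-genai Python SDK; the model ID, field names and enum values are assumptions based on the announcement rather than a verified reference.

```python
# Hypothetical sketch of the Gemini 3 API controls via the google-genai Python SDK.
# Model ID, field names and enum values are assumptions drawn from the announcement.
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed Gemini 3 model identifier
    contents="Summarize this quarter's support tickets in three bullet points.",
    config=types.GenerateContentConfig(
        # thinking_level trades internal reasoning depth against cost and latency.
        thinking_config=types.ThinkingConfig(thinking_level="low"),
        # media_resolution caps how many tokens images, video and documents consume.
        media_resolution=types.MediaResolution.MEDIA_RESOLUTION_LOW,
    ),
)
print(response.text)
```

In multi-turn function calling, the announcement indicates that the encrypted thought signature attached to a function call must be returned with the corresponding function response; requests that drop it are rejected with a 400 error.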
Who the Gemini 3 API Updates Are Built For
- Enterprise developers building agentic workflows
- Product teams integrating multimodal inputs
- Cost-focused AI platform owners
Related Article: Gemini 3 Raises the Bar for AI: What It Means for CX and Marketing Leaders
The Push Toward Reasoning-First AI
Reasoning Models & Depth Control
Enterprises now prioritize reasoning-focused models paired with agentic capabilities to unlock complex use cases previously out of reach.
These models increasingly emphasize hybrid techniques and explainable AI to meet regulatory and trust requirements. Selecting the right reasoning model has become essential for mitigating risk and delivering business value.
Multimodal Inputs & Workflow Integration
Agentic platforms offer document and content preparation, transforming unstructured content — PDFs, contracts, images and video — into structured and automation-ready data.
Open multi-agent frameworks enable dynamic workflows across documents, APIs and enterprise systems. Workflow integration spanning these assets is now a minimum requirement for deploying agentic AI at scale.
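As a hedged illustration of that document-to-data step, the sketch below passes a PDF to the model and requests a structured response with the google-genai Python SDK; the schema, file name and model ID are hypothetical.

```python
# Illustrative only: converting an unstructured PDF into structured, automation-ready
# data. The schema, file name and model ID below are hypothetical examples.
from pydantic import BaseModel
from google import genai
from google.genai import types


class ContractSummary(BaseModel):
    parties: list[str]
    effective_date: str
    renewal_terms: str


client = genai.Client()

with open("contract.pdf", "rb") as f:
    pdf_part = types.Part.from_bytes(data=f.read(), mime_type="application/pdf")

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed model identifier
    contents=[pdf_part, "Extract the key contract terms."],
    config=types.GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=ContractSummary,  # enforce a structured JSON output
    ),
)
print(response.parsed)  # parsed ContractSummary, ready for downstream workflows
```

The structured result can then feed the document, API and enterprise-system workflows described above.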
Agentic AI Workflows & Governance
AI agents are autonomous systems that adapt in real time and solve multi-step problems. These agents evolve from task-specific assistants into systems capable of end-to-end work with limited human oversight.
Governance mechanisms are essential for agentic AI to prevent runaway automation; they combine machine learning operations, cognitive science and cybersecurity practices to secure APIs and manage escalation paths.
Pricing Models & Cost Practices
AI costs depend on licensing, API access and infrastructure. A model that appears economical at low volume can become expensive at scale. Decomposing complex tasks and offloading simpler steps to smaller models can materially reduce expenses.
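A minimal sketch of that decomposition practice, assuming hypothetical model IDs and the new thinking_level field, might route requests like this:

```python
# Hedged sketch of cost-aware routing: simple steps go to a smaller model, while
# hard, multi-step requests get deeper reasoning on a larger one. Model IDs and
# the thinking_level field are assumptions, not verified values.
from google import genai
from google.genai import types

client = genai.Client()


def answer(prompt: str, complex_task: bool) -> str:
    if complex_task:
        # Deep reasoning on the larger model for genuinely hard requests.
        model = "gemini-3-pro-preview"
        config = types.GenerateContentConfig(
            thinking_config=types.ThinkingConfig(thinking_level="high")
        )
    else:
        # Routine steps run on a smaller, cheaper model with default settings.
        model = "gemini-2.5-flash"
        config = None
    response = client.models.generate_content(model=model, contents=prompt, config=config)
    return response.text
```

Whether the split pays off at scale depends on the actual query mix and the per-model pricing in effect.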
Related Article: Workato Launches EU Compliant Agentic AI Services
Google DeepMind Background
DeepMind, a subsidiary under the Alphabet umbrella, was founded in 2010 and acquired by Google in 2014. In 2023, the AI research lab merged with the Google Brain team to become Google DeepMind.
The firm is responsible for the development of Gemini along with other generative AI tools, like the text-to-image model Imagen, the text-to-video model Veo and the text-to-music model Lyria.