Google Announces Upgrade to Flagship Gemini AI Platform, Enhancing Multimodal Capabilities

Google has launched Gemini 2.0, designed to empower enterprise users and developers with advanced multimodal capabilities and enhanced performance. The update aims to solidify Google's position as a leader in cutting-edge AI technology.

Initially announced in December as part of an experimental rollout on Vertex AI, Gemini 2.0 is now officially available across Google's cloud AI services and other platforms. This release marks a significant step forward in making sophisticated AI tools more accessible and versatile for diverse applications.

"Today, we're making the updated Gemini 2.0 Flash generally available via the Gemini API in Google AI Studio and Vertex AI," Google said in a recent blog post. "Developers can now build production applications with 2.0 Flash."

Vertex AI is Google Cloud's unified machine learning platform, designed to streamline the entire machine learning lifecycle. It helps developers and data scientists build, deploy, and scale AI models more efficiently, from experimentation to production.
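
For developers ready to try it, the basic call pattern is short. The following is a minimal sketch assuming the google-genai Python SDK (installed with pip install google-genai) and an API key generated in Google AI Studio; the model identifier and the commented-out Vertex AI setup are illustrative, so check Google's documentation for current names and options.

    # Minimal sketch: calling Gemini 2.0 Flash via the Gemini API.
    # Assumes the google-genai Python SDK and an API key from Google AI Studio;
    # the model identifier is illustrative.
    from google import genai

    # Google AI Studio: authenticate with an API key.
    client = genai.Client(api_key="YOUR_API_KEY")

    # Vertex AI: the same SDK can instead target a Google Cloud project, e.g.:
    # client = genai.Client(vertexai=True, project="your-project", location="us-central1")

    response = client.models.generate_content(
        model="gemini-2.0-flash",
        contents="Summarize the key changes in Gemini 2.0 Flash for developers.",
    )
    print(response.text)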

Google's Vertex AI site lists new and enhanced features for Gemini 2.0 Flash, with an emphasis on multimodal capabilities (a brief usage sketch follows the list):

  • Multimodal Live API: This new API enables low-latency bidirectional voice and video interactions with Gemini.
  • Quality: Better performance than Gemini 1.5 Pro across most quality benchmarks.
  • Improved agentic capabilities: 2.0 Flash delivers improvements to multimodal understanding, coding, complex instruction following, and function calling. These improvements work together to support better agentic experiences.
  • New modalities: 2.0 Flash introduces built-in image generation and controllable text-to-speech capabilities, enabling image editing, localized artwork creation, and expressive storytelling.
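
To illustrate the multimodal emphasis, a single request can mix image data and text. The snippet below is a sketch under the same google-genai SDK assumption; the file name and model identifier are placeholders.

    # Sketch: multimodal input with Gemini 2.0 Flash (image plus text in one request).
    # Assumes the google-genai SDK; the file path and model name are placeholders.
    from google import genai
    from google.genai import types

    client = genai.Client(api_key="YOUR_API_KEY")

    with open("diagram.jpg", "rb") as f:
        image_bytes = f.read()

    response = client.models.generate_content(
        model="gemini-2.0-flash",
        contents=[
            types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
            "Describe what this architecture diagram shows.",
        ],
    )
    print(response.text)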

Some Gemini 2.0 Flash features are not yet available on Vertex AI, or are offered there only in preview.

Along with the Vertex AI cloud service, the new tech is also available via API to users of Google AI Studio, a browser-based development environment specifically designed for building and experimenting with generative AI models.

The new model also pops up when you access the online Gemini app. Users who hop online to try it out are advised that it defaults to a concise style, which Google said makes the model easier to use and reduces cost, though it can also be prompted to use a more verbose style that produces better results in chat-oriented use cases. Asked in that verbose style about the most important thing to note about the update, the app said: "For IT pros and developers, the single most important thing to note about the Gemini 2.0 update is its enhanced multimodal capabilities, enabling seamless integration and understanding of information across text, images, audio, and video."

Availability of the new LLM on Vertex AI, Google AI Studio, and the online app was just part of the news in Google's post, which also announced:

Gemini 2.0 Flash-Lite: This is a new model in public preview, focused on cost efficiency (a usage sketch follows the list). Google said it offers:

  • Better quality than 1.5 Flash: Improved output quality at the same speed and cost.
  • Multimodal input: Can understand and process information from images and text.
  • 1 million token context window: Can handle large amounts of information in a single interaction.
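
A cost-conscious workflow might pair the lighter model with a token count before sending a large document, to see how much of the 1 million token window a prompt consumes. Another sketch under the google-genai SDK assumption; the preview model identifier is a placeholder for whatever ID Google currently lists for 2.0 Flash-Lite.

    # Sketch: targeting the cost-efficient Flash-Lite preview and checking how much
    # of the 1M-token context window a large document consumes.
    # The model ID is a placeholder; consult Google's model list for the current name.
    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")
    MODEL_ID = "gemini-2.0-flash-lite-preview"  # placeholder identifier

    with open("large_report.txt", encoding="utf-8") as f:
        document = f.read()

    token_count = client.models.count_tokens(model=MODEL_ID, contents=document)
    print(f"Prompt uses {token_count.total_tokens} of roughly 1,000,000 available tokens")

    response = client.models.generate_content(
        model=MODEL_ID,
        contents=f"Summarize the following report:\n\n{document}",
    )
    print(response.text)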

Gemini 2.0 Pro Experimental: This is an experimental version of the Pro model, geared toward complex tasks and coding (a brief sketch follows the list). Google said it features:

  • Strongest coding performance: Better than any previous Gemini model.
  • Improved world knowledge and reasoning: Can handle complex prompts and understand nuances in language.
  • 2 million token long context window: Can analyze and understand vast amounts of information.
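
For the experimental Pro model the call shape is the same; only the model identifier changes. A brief sketch, again with a placeholder ID:

    # Sketch: asking the experimental Pro model for code. The model ID below is a
    # placeholder for whatever experimental identifier Google currently exposes.
    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")

    response = client.models.generate_content(
        model="gemini-2.0-pro-exp",  # placeholder identifier
        contents="Write a Python function that merges two sorted lists in O(n) time.",
    )
    print(response.text)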

For more information, visit the Google blog.

About the Author

David Ramel is an editor and writer at Converge 360.
