Google Announces Upgrade to Flagship Gemini AI Platform, Enhancing Multimodal Capabilities

Google has launched Gemini 2.0, designed to empower enterprise users and developers with advanced multimodal capabilities and enhanced performance. The update aims to solidify Google's position as a leader in cutting-edge AI technology.

Initially announced in December as part of an experimental rollout on Vertex AI, Gemini 2.0 is now officially available across Google's cloud AI services and other platforms. This release marks a significant step forward in making sophisticated AI tools more accessible and versatile for diverse applications.

"Today, we're making the updated Gemini 2.0 Flash generally available via the Gemini API in Google AI Studio and Vertex AI," Google said in a recent blog post. "Developers can now build production applications with 2.0 Flash."

Vertex AI is Google Cloud's unified machine learning platform, designed to streamline the entire machine learning lifecycle. It helps developers and data scientists build, deploy, and scale AI models more efficiently, from experimentation to production.

Google's Vertex AI site lists new and enhanced features for Gemini 2.0 Flash, with an emphasis on multimodal capabilities:

  • Multimodal Live API: This new API enables low-latency bidirectional voice and video interactions with Gemini (a simplified sketch appears after this list).
  • Quality: Better performance than Gemini 1.5 Pro across most quality benchmarks.
  • Improved agentic capabilities: 2.0 Flash delivers improvements to multimodal understanding, coding, complex instruction following, and function calling. These improvements work together to support better agentic experiences.
  • New modalities: 2.0 Flash introduces built-in image generation and controllable text-to-speech capabilities, enabling image editing, localized artwork creation, and expressive storytelling.

Some Gemini 2.0 Flash features aren't yet available on Vertex AI or are still in preview there.
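To give a sense of how the Multimodal Live API is used, the sketch below opens a text-only streaming session through the google-genai SDK's asynchronous interface. It is a simplified illustration based on early documentation; the experimental model ID, method names, and config keys are assumptions that may change:

    import asyncio
    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")

    # Text-only responses; the Live API also supports audio and video.
    config = {"response_modalities": ["TEXT"]}

    async def main():
        # Open a bidirectional streaming session (assumed experimental model ID).
        async with client.aio.live.connect(
            model="gemini-2.0-flash-exp", config=config
        ) as session:
            await session.send(input="Hello, Gemini 2.0.", end_of_turn=True)
            # Print reply chunks as they stream back.
            async for message in session.receive():
                if message.text:
                    print(message.text, end="")

    asyncio.run(main())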

Along with the Vertex AI cloud service, the new tech is also available via API to users of Google AI Studio, a browser-based development environment specifically designed for building and experimenting with generative AI models.

The new model also pops up when you access the online Gemini app. Users who hop online to try it out are advised that it defaults to a concise style, which Google said makes it easier to use and reduces cost, though it can also be prompted to adopt a more verbose style that produces better results in chat-oriented use cases. In testing out that verbose style, the app, when asked about the most important thing to note about the update, said: "For IT pros and developers, the single most important thing to note about the Gemini 2.0 update is its enhanced multimodal capabilities, enabling seamless integration and understanding of information across text, images, audio, and video."
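That concise-by-default behavior can also be steered programmatically. As a hypothetical illustration (the instruction wording below is an assumption, not Google's guidance), a developer could request a more verbose register through a system instruction:

    from google import genai
    from google.genai import types

    client = genai.Client(api_key="YOUR_API_KEY")

    response = client.models.generate_content(
        model="gemini-2.0-flash",
        contents="What should IT pros note about Gemini 2.0?",
        # Override the default concise style for chat-oriented use cases.
        config=types.GenerateContentConfig(
            system_instruction="Answer in a detailed, conversational style."
        ),
    )
    print(response.text)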

Availability of the new LLM on Vertex AI, Google AI Studio and the online app was just part of the news in Google's post, which also announced:

Gemini 2.0 Flash-Lite: This is a new model in public preview, focusing on cost efficiency. Google said it offers:

  • Better quality than 1.5 Flash: Improves output quality while maintaining the same speed and cost.
  • Multimodal input: Can understand and process information from images and text.
  • 1 million token context window: Can handle large amounts of information in a single interaction.
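As a rough sketch of what that context window means in practice, the SDK can report a prompt's token count before a request is sent. The preview model ID below is an assumption based on Google's naming pattern and may differ:

    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")

    # A large document loaded from disk (placeholder file name).
    big_document = open("large_report.txt").read()

    # Check the prompt size against Flash-Lite's 1 million token window.
    count = client.models.count_tokens(
        model="gemini-2.0-flash-lite-preview-02-05",  # assumed preview ID
        contents=big_document,
    )
    print(f"Prompt uses {count.total_tokens} of ~1,000,000 tokens")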

Gemini 2.0 Pro Experimental: This is an experimental version of the Pro model, geared towards complex tasks and coding. Google said it features:

  • Strongest coding performance: Better than any previous Gemini model.
  • Improved world knowledge and reasoning: Can handle complex prompts and understand nuances in language.
  • 2 million token context window: Can analyze and understand vast amounts of information.
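Trying the experimental Pro model is again mostly a matter of the model identifier. In the sketch below, the ID follows Google's experimental naming pattern and is an assumption; streaming the response is a natural fit for longer coding answers:

    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")

    # Stream a coding answer from the experimental Pro model (assumed ID).
    for chunk in client.models.generate_content_stream(
        model="gemini-2.0-pro-exp-02-05",
        contents="Write a Python function that merges two sorted lists.",
    ):
        if chunk.text:
            print(chunk.text, end="")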

For more information, visit the Google blog.

About the Author

David Ramel is an editor and writer at Converge 360.
