
What is Google's Gemini?

A powerful new multi-modal AI model

December 7, 2023

Google Gemini is a highly advanced AI model developed by Google, showcasing next-generation capabilities in multimodal understanding and reasoning. Unlike previous models, which trained separate components for different modalities and then stitched them together, Gemini is natively multimodal: it was pretrained from the start on text, images, and audio together, then fine-tuned with additional multimodal data. This allows Gemini to understand and reason about diverse inputs seamlessly, giving it an edge over existing multimodal models.

Gemini 1.0 was specifically trained to recognize and understand text, images, and audio simultaneously. This training approach strengthens its ability to explain its reasoning in complex subjects such as math and physics. Gemini also excels at understanding, explaining, and generating high-quality code in popular programming languages like Python, Java, C++, and Go, positioning it as one of the leading foundation models for coding.

Google trained Gemini 1.0 on its AI-optimized infrastructure using Tensor Processing Units (TPUs) v4 and v5e, designed in-house. This has made Gemini not only Google's most scalable and reliable model to train but also the most efficient to serve.

Gemini is now being integrated across various Google products and platforms. For instance, Bard now uses a fine-tuned version of Gemini Pro for more advanced reasoning and understanding. Gemini is also being incorporated into products like the Pixel 8 Pro, Google Search, Ads, Chrome, and Duet AI.

For developers and enterprise customers, Google has made Gemini Pro accessible via the Gemini API in Google AI Studio and Google Cloud Vertex AI. Google AI Studio is a free, web-based developer tool for prototyping and launching apps, while Vertex AI offers customization of Gemini with full data control and additional Google Cloud features.
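As a rough sketch, calling Gemini Pro from Python via the Google AI Studio SDK (`pip install google-generativeai`) might look like the following; the environment-variable name, helper function, and prompt are illustrative assumptions, not an official quickstart.

```python
import os

def ask_gemini(prompt: str, model_name: str = "gemini-pro") -> str:
    """Send a single text prompt to Gemini Pro and return the reply text.

    Assumes an API key from Google AI Studio is available in the
    GOOGLE_API_KEY environment variable (an illustrative convention).
    """
    # Deferred import so the sketch reads standalone; requires
    # `pip install google-generativeai`.
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel(model_name)
    response = model.generate_content(prompt)
    return response.text

if __name__ == "__main__" and "GOOGLE_API_KEY" in os.environ:
    # Only makes a network call when a key is actually configured.
    print(ask_gemini("Summarize what makes Gemini natively multimodal."))
```

Vertex AI exposes the same model through its own Google Cloud SDK, with project-level data controls; the pattern above targets the simpler AI Studio path.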

Gemini Ultra, a more advanced version of the model, is currently undergoing trust and safety checks. It is being refined through fine-tuning and reinforcement learning with human feedback (RLHF) before its broader availability.