
Google is a Leader in The Forrester Wave™: AI Foundation Models for Language, Q2 2024

Google was recently named a Leader in The Forrester Wave™: AI Foundation Models for Language, Q2 2024. Gemini, Google’s multimodal model, has exceeded expectations and become one of the most powerful AI models available today. With Gemini, Google is giving developers the tools to create more intelligent, personalized, and innovative apps. This is an important step in Google’s journey to build a future where AI works for everyone.

“Gemini is uniquely differentiated in the market especially in multimodality and context length while also ensuring interconnectivity with the broader ecosystem of complementary cloud services.” 

– The Forrester Wave™: AI Foundation Models for Language, Q2 2024

 

Generative AI (GenAI) is clearly changing the way people interact with technology. Developers are harnessing powerful foundation models to build innovative new applications, experiences, and agents for end users. Model tuning is more accessible than ever, requiring just 1% of the data it once did.

For Google, Gemini is a multimodal model, the result of a large-scale collaborative effort by teams including Google DeepMind and Google Research. Built from the ground up to seamlessly combine and understand text, code, images, audio, and video, Gemini models are helping developers create advanced AI agents across virtually every industry.

Gemini is available to customers through Vertex AI, Google Cloud’s fully managed, unified platform for developing, deploying, and monitoring machine learning models at scale. Equipped to support both generative and predictive AI models, Vertex AI enables customers to customize and deploy Gemini and other AI models with enterprise-ready tuning, grounding, monitoring, and inference capabilities, alongside leading AI infrastructure and easy-to-use tooling for building AI agents.

To help you quickly grasp the core information from The Forrester Wave™: AI Foundation Models for Language, Q2 2024, Gimasys has summarized the following highlights, which we believe will offer you fresh and useful perspectives.

State-of-the-art performance

The following Gemini models are available to enterprise customers via Vertex AI (a minimal sample call follows this list):

  • Gemini 1.5 Pro: Announced earlier this year and now generally available (GA), it provides an industry-leading, breakthrough context window of 1 million tokens, enabling accurate processing across large documents, codebases, or entire videos with a single prompt. For use cases that require an even larger context window, such as analyzing very large codebases or extensive document libraries, customers will soon be able to try Gemini 1.5 Pro with a context window of up to 2 million tokens.
  • Gemini 1.5 Flash: Also now GA, it offers the same groundbreaking 1 million token context window, but is lighter-weight than 1.5 Pro and designed to serve efficiently, with speed and scale, for tasks like chat applications.
  • Gemini 1.0 Pro: Designed to handle natural language tasks, multiturn text and code chat, and code generation. A new version is generally available with decreased latency and improved quality, along with supervised tuning capabilities.
  • Gemini 1.0 Pro Vision: Supports multimodal prompts. You can include text, images, and video in your prompt requests and get text or code responses.
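
To make the list above concrete, here is a minimal sketch of calling one of these models through the Vertex AI Python SDK. The project ID, region, model version string, and prompt are illustrative assumptions, not values from this announcement, and exact model identifiers may change over time.

```python
# Minimal sketch: calling Gemini 1.5 Flash through the Vertex AI Python SDK.
# Project ID, region, model version, and prompt are placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")

model = GenerativeModel("gemini-1.5-flash-001")
response = model.generate_content(
    "Summarize the key points of this meeting transcript in three bullets: ..."
)
print(response.text)
```

The same pattern applies to Gemini 1.5 Pro; only the model identifier changes.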

Vertex AI

Vertex AI makes it possible to customize and deploy Gemini, empowering developers to build new and differentiated applications that can process information across text, code, images, and video. With Vertex AI, developers can:

  • Discover and use Gemini, or select from a curated list of more than 130 models from Google, open source, and third parties that meet Google’s strict enterprise safety and quality standards. Developers can access models as easy-to-use APIs to quickly build them into applications.
  • Customize model behavior with specific domain or company expertise, using tuning tools to augment training knowledge and even adjust model weights when required. Vertex AI provides a variety of tuning techniques including prompt design, adapter-based tuning such as Low Rank Adaptation (LoRA), and distillation. We also provide the ability to improve a model by capturing user feedback through our support for reinforcement learning from human feedback (RLHF).
  • Augment models with tools to help adapt Gemini Pro to specific contexts or use cases. Vertex AI Extensions and connectors let developers link Gemini Pro to external APIs for transactions and other actions, retrieve data from outside sources, or call functions in codebases. Vertex AI also gives organizations the ability to ground foundation model outputs in their own data sources, helping to improve the accuracy and relevance of a model’s answers. Enterprises can use grounding against their structured and unstructured data, as well as grounding with Google Search technology (a minimal grounding sketch follows this list).
  • Manage and scale models in production with purpose-built tools to help ensure that once applications are built, they can be easily deployed and maintained. Customers can evaluate models with Automatic Side by Side (Auto SxS), an on-demand, automated tool to compare models. Auto SxS is faster and more cost-efficient than manual model evaluation, as well as customizable across various task specifications to handle new generative AI use cases.
  • Build AI agents in a low code / no code environment. With Vertex AI Agent Builder, developers across all machine learning skill levels can use Gemini models to create engaging, production-grade AI agents in hours and days instead of weeks and months.
  • Deliver innovation responsibly by using Vertex AI’s safety filters, content moderation APIs, and other responsible AI tooling to help developers ensure their models don’t output inappropriate content.
  • Help protect data with Google Cloud’s built-in data governance and privacy controls. Customers remain in control of their data, and Google never uses customer data to train our models. Vertex AI provides a variety of mechanisms to keep customers in sole control of their data including Customer Managed Encryption Keys and VPC Service Controls.
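
As a companion to the grounding capability mentioned above, here is a minimal sketch of grounding a Gemini response in Google Search results with the Vertex AI Python SDK. The model version and prompt are illustrative assumptions, and the exact module path for the grounding helper can differ between SDK versions (it has also lived under a preview namespace).

```python
# Minimal sketch: grounding a Gemini response in Google Search via Vertex AI.
# Assumes vertexai.init(...) has already been called as in the earlier example.
from vertexai.generative_models import GenerativeModel, Tool, grounding

# Build a tool that lets the model retrieve fresh results from Google Search.
search_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())

model = GenerativeModel("gemini-1.5-pro-001")
response = model.generate_content(
    "What did The Forrester Wave evaluate in Q2 2024?",
    tools=[search_tool],
)
print(response.text)  # Answer grounded in up-to-date search results
```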

Recent innovations 

Ongoing innovation in Vertex AI is designed to bring the best models from Google and the industry alongside an end-to-end model building platform and capabilities to develop and deploy agents faster — all built on a foundation of scale and enterprise readiness. Recent product innovations include:

  • Batch API is a highly efficient way to send large numbers of non-latency-sensitive text prompt requests, supporting use cases such as classification and sentiment analysis, data extraction, and description generation. It speeds up developer workflows and reduces costs by enabling multiple prompts to be sent to models in a single request.
  • Context caching, in public preview this month, lets customers actively manage and reuse cached context data. Because processing costs increase with context length, it can be expensive to move long-context applications to production. Vertex AI context caching helps customers significantly reduce costs by reusing cached data.
  • Controlled generation, coming to public preview later this month, lets customers define Gemini model outputs according to specific formats or schemas. Most models cannot guarantee the format and syntax of their outputs, even with specified instructions. Vertex AI controlled generation lets customers choose the desired output format via pre-built options like YAML and XML, or by defining custom formats. JSON, as a pre-built option, is live (see the JSON output sketch after this list).
  • LlamaIndex on Vertex AI simplifies the retrieval augmented generation (RAG) process, from data ingestion and transformation to embedding, indexing, retrieval, and generation. Now Vertex AI customers can leverage Google’s models and AI-optimized infrastructure alongside LlamaIndex’s simple, flexible, open-source data framework, to connect custom data sources to generative models.
  • Genkit, announced by Firebase, is an open-source TypeScript/JavaScript framework designed to simplify the development, deployment, and monitoring of production-ready AI agents. Through the Vertex AI plugin, Firebase developers can now take advantage of Google models like Gemini and Imagen 2, as well as text embeddings.
  • Grounding with Google Search, now generally available, allows you to connect models with world knowledge, a wide range of possible topics, or up-to-date information from the internet. By grounding Gemini models with Google Search, we offer customers the combined power of Google’s latest foundation models along with access to fresh, high-quality information, significantly improving the completeness and accuracy of responses.
  • Gemma 2 is the next generation in our family of open models built for a broad range of AI developer use cases from the same technologies used to create Gemini. Gemma 2 models will soon be available in Vertex AI Model Garden.
  • Imagen 3, coming soon to Vertex AI, is our highest-quality text-to-image generation model yet, able to generate an incredible level of detail and produce photorealistic, lifelike images.
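
To illustrate controlled generation from the list above, here is a minimal sketch that asks Gemini for JSON output by setting the response MIME type in the Vertex AI Python SDK. The prompt and the example field names are illustrative assumptions, not part of this announcement.

```python
# Minimal sketch: controlled generation with a JSON response format.
# Assumes vertexai.init(...) has already been called as in the earlier examples.
from vertexai.generative_models import GenerativeModel, GenerationConfig

model = GenerativeModel("gemini-1.5-flash-001")
response = model.generate_content(
    "Extract the product name and sentiment from this review: "
    "'The Pixel Buds sound great and fit comfortably.'",
    generation_config=GenerationConfig(response_mime_type="application/json"),
)
print(response.text)  # e.g. {"product": "Pixel Buds", "sentiment": "positive"}
```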

How customers are innovating with Gemini models

Vertex AI has seen strong adoption, with API requests increasing nearly 6X from the first half to the second half of last year. We are impressed by what customers are doing with Gemini models, particularly because they are multimodal and handle complex reasoning so well.

  • Samsung: Samsung recently announced that their Galaxy S24 series is the first smartphone equipped with Gemini models. Starting with Samsung-native applications, customers can take advantage of summarization features across Notes and Voice Recorder. Samsung is confident their end users are protected with built-in security, safety, and privacy in Vertex AI.
  • Jasper: Jasper, an AI marketing platform that enables enterprise marketing teams to create on-brand content and campaigns at scale, is using Gemini models to quickly generate marketing campaign content for their customers. Teams can now move faster while maintaining a high quality bar for content, ensuring it adheres to brand voice and marketing guidelines.
  • Quora: Quora, the popular question-and-answer platform, is using Gemini to help power creator monetization on its AI chat platform, Poe, where users can explore a wide variety of AI-powered bots. Gemini is enabling Poe creators to build custom bots across a variety of use cases, including writing assistance, code generation, personalized learning, and more.

Gimasys: A trusted partner accompanying you on your digital transformation journey with AI

With extensive experience and a team of seasoned experts, Gimasys and Google have supported businesses large and small in successfully implementing AI solutions, including:

  • Building AI models: Customizing AI models to suit the specific needs of each business.
  • Solution implementation: Integrating AI solutions into existing business systems.
  • Strategic consulting: Developing the right AI strategy to achieve business goals.

Gimasys and Google have combined decades of AI research and development experience into building models, hyperscale infrastructure, and Vertex AI capabilities that benefit Google customers. Gimasys is committed to continued research and innovation in the field of AI. If your business wants to learn more about AI and how it can improve your operations, please contact Gimasys for a detailed consultation.
