Head over to our on-demand library to view sessions from VB Transform 2023. Register Here
AI is a core focus for Google, and at its Google Next event today the company announced a series of updates across its portfolio that benefit from the power of generative AI.
Front and center are enhancements and new capabilities across Google’s Vertex AI platform, including both developer tooling and foundation models.
Google’s PaLM 2 large language model (LLM), first announced at the Google I/O conference in May, is getting an incremental boost with more language support and a longer token length. The Codey code-generation model and the Imagen image-generation model are also getting updates to improve performance and quality.
Vertex AI is being expanded with new extensions to make it easier for developers to connect to data sources. Google is making both the Vertex AI Search and Vertex AI Conversation services generally available, providing search and chatbot capabilities to Google’s enterprise users.
PaLM opens up with larger token length
In a press briefing ahead of the Google Next conference, June Yang, VP, cloud AI and industry solutions at Google, detailed some of the Vertex AI-related updates.
“AI is undergoing a major shift with the rise of foundation models. Now you can leverage these foundation models for a variety of use cases without ML [machine learning] expertise,” she said. “This is really a game-changer for AI, especially for the enterprises.”
Google builds its own foundation models and also supports a number of third-party models that can run on Google Cloud. Google’s flagship model is PaLM 2, which is available in several configurations. One is the text model, which is being enhanced with a larger context window, something Yang said has been a “key request” from customers. Expanded from 4,000 to 32,000 input tokens, PaLM 2’s context window will let users process longer-form documents than before.
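To see what the larger window means in practice, here is a minimal, purely illustrative sketch of prompt budgeting. It is not Google's API: token counts are approximated at roughly four characters per token (a common rule of thumb; actual counts depend on the model's tokenizer), and the constants come from the figures in this article.

```python
# Illustrative sketch: fitting a long document into a model's context
# window. Tokens are approximated as ~4 characters each; real limits
# depend on the model's actual tokenizer.

CONTEXT_TOKENS_OLD = 4_000    # PaLM 2 text model's previous window
CONTEXT_TOKENS_NEW = 32_000   # the expanded window announced here
CHARS_PER_TOKEN = 4           # rough rule of thumb, not an exact measure

def approx_tokens(text: str) -> int:
    """Rough token estimate for budgeting prompts."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def chunk_for_context(text: str, max_tokens: int) -> list[str]:
    """Split text into pieces that each fit the given token budget."""
    max_chars = max_tokens * CHARS_PER_TOKEN
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

doc = "lorem ipsum " * 20_000  # ~240k characters, ~60k "tokens"
old_chunks = chunk_for_context(doc, CONTEXT_TOKENS_OLD)
new_chunks = chunk_for_context(doc, CONTEXT_TOKENS_NEW)
print(len(old_chunks), len(new_chunks))  # the larger window needs far fewer calls
```

The point of the sketch: a document that previously had to be split into many pieces (with the bookkeeping and context loss that implies) now fits in a handful of calls, or just one.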
PaLM 2 is also being expanded with more language support, now with the general availability of 38 languages including Arabic, Chinese, Japanese, German and Spanish.
Code development and image generation get a boost
The Codey text-to-code LLM is another foundation model that has received an update, one which, according to Google, provides up to a 25% quality improvement for code generation.
“Leveraging our Codey foundation model, partners like GitLab are helping developers to stay in the flow by predicting and completing lines of code, generating test cases, explaining code and many more use cases,” Yang said.
The Imagen text-to-image model is being upgraded as well. The big new feature, which Yang called one of the coolest she has seen, is something Google calls “style tuning.”
“Our customers can now create images aligned to their specific brand guidelines or other creative needs with as few as 10 reference images,” she said.
For example, Yang said that with style tuning an Imagen user can apply corporate guidelines to either a newly generated image or an existing one, and the resulting Imagen image will have the appropriate style built into it.
Llama 2 joins Google’s foundation model lineup
While PaLM 2 is Google’s flagship foundation model, the company is also providing third-party LLM access on Google Cloud. The ability to support multiple foundation models is increasingly becoming table stakes for cloud providers. Amazon, for example, supports multiple third-party models with its Bedrock service.
Among the new third-party models that Google now supports is Meta’s Llama 2, which was just released in July. Yang said that Google will enable users to use reinforcement learning with human feedback (RLHF) so organizations can further train Llama 2 on their own enterprise data to get more relevant and precise results.
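For context on what RLHF involves: a reward model is first trained on human preference data (pairs of responses where a human marked one as better), and the LLM is then optimized against that reward. The snippet below is a toy illustration of only the preference-loss step, using the standard pairwise logistic (Bradley-Terry) objective; the scores are hand-picked stand-ins, not outputs of any real model, and nothing here reflects Google's or Meta's actual implementation.

```python
import math

# Toy sketch of the reward-model step in RLHF: the reward model is
# trained so the human-preferred ("chosen") response scores higher than
# the "rejected" one. The standard pairwise objective is
#   loss = -log(sigmoid(r_chosen - r_rejected))

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise loss: small when the chosen response outscores the rejected one."""
    return -math.log(sigmoid(r_chosen - r_rejected))

# A reward model that already agrees with the human label incurs low loss...
good = preference_loss(r_chosen=2.0, r_rejected=-1.0)
# ...while one that prefers the rejected answer incurs high loss.
bad = preference_loss(r_chosen=-1.0, r_rejected=2.0)
print(f"{good:.3f} < {bad:.3f}")
```

Minimizing this loss over many labeled pairs is what lets the reward model encode "relevant and precise for our enterprise data," which the policy is then tuned to maximize.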
Extending Vertex AI
Foundation models are interesting on their own, but they become far more useful when enterprises can connect them to their own data and have them take actions. That’s where the new Vertex AI Extensions tools fit in.
Yang said that developers can use the Extensions to build powerful generative AI applications like digital assistants, customized search engines and automated workflows.
“Vertex AI Extensions are a set of fully managed developer tools, which connect models via API to real-world data and enable models to perform real-world actions,” Yang said.
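The quote describes a now-common pattern: the model emits a structured action request, and a dispatcher routes it to a real API or data source. The sketch below illustrates that pattern generically; the registry decorator, action format, and `order_status` tool are all hypothetical stand-ins, not Google's actual Extensions API.

```python
import json

# Hypothetical sketch of the extension/tool-calling pattern: a model
# emits a structured action, and a dispatcher maps it to a real call.
# The registry, JSON action format, and example tool are illustrative,
# not Google's actual Vertex AI Extensions API.

TOOL_REGISTRY = {}

def tool(name):
    """Register a callable as a tool the model may invoke."""
    def wrap(fn):
        TOOL_REGISTRY[name] = fn
        return fn
    return wrap

@tool("order_status")
def order_status(order_id: str) -> dict:
    # Stand-in for a real enterprise data source (database, REST API, ...).
    return {"order_id": order_id, "status": "shipped"}

def dispatch(model_output: str) -> dict:
    """Parse the model's structured action and execute the matching tool."""
    action = json.loads(model_output)
    fn = TOOL_REGISTRY[action["tool"]]
    return fn(**action["args"])

# Imagine the model responded with this action instead of plain text:
result = dispatch('{"tool": "order_status", "args": {"order_id": "A-123"}}')
print(result)  # {'order_id': 'A-123', 'status': 'shipped'}
```

A managed extensions service handles the parts this sketch omits: authenticating to the target API, describing available tools to the model, and validating the model's arguments before execution.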