Gemini, Google and AI
Google on Tuesday revealed new Android development tools, a new mobile AI architecture, and an expanded developer community. The announcements accompanied the unveiling of an AI Mode for Google Search at the Google I/O keynote in Mountain View, California.
Google’s Gemini Diffusion demo didn’t get much airtime at I/O, but its blazing speed—and potential for coding—has AI insiders speculating about a shift in the model wars.
Google says the release version of Gemini 2.5 Flash is better at reasoning, coding, and multimodality while using 20–30 percent fewer tokens than the preview version. The release is now live in Vertex AI, AI Studio, and the Gemini app, and it will become the default model in early June.
Unsurprisingly, the bulk of Google's announcements at I/O this week focused on AI. Past Google I/O events also leaned heavily on AI, but this year's announcements stood out because the features spanned nearly every Google offering and touched nearly every everyday task.
During its I/O presentation, the company revealed AI assistants of all kinds, smart glasses and headsets, and state-of-the-art AI filmmaking tools.
Google’s AI models are learning to reason, wield agency, and build virtual models of the real world. The company’s AI lead, Demis Hassabis, says all this—and more—will be needed for true AGI.
Google’s AI models have a secret ingredient that’s giving the company a leg up on competitors like OpenAI and Anthropic. That ingredient is your data, and the company has only just scratched the surface of how it can use your information to “personalize” Gemini’s responses.
Volvo’s cars will be among the first vehicles with Gemini integration: Gemini will replace Google Assistant in those cars later this year, and Google will use the automaker as its lead development partner for new Android Automotive features and updates.