If you were on LinkedIn last week you may have noticed chatter about Google’s new “AI Mode,” which launched for U.S. users. What got less attention is that at I/O, Google rolled out dozens more AI tools and features.
AI Mode
Think of AI Mode as Search with a personality. Instead of just links, it uses Gemini 2.0 to have conversational, sourced discussions about complex topics. You can ask multi-part questions like “What’s the difference between smart rings, smartwatches, and sleep tracking mats?” and get a single, thorough breakdown. It also handles follow-up questions, making research feel like chatting with a knowledgeable friend rather than toggling between search results.
Key points
– Available Now: ✅
– Where: US only (expanding soon)
– Free: ✅
SynthID
SynthID embeds invisible watermarks into AI-generated text, images, video, and audio that humans can’t perceive but specialized tools can detect. The SynthID Detector portal lets you upload content to check for watermarks and highlights likely AI-generated regions.
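To make “invisible watermarks in text” concrete, here is a toy sketch of the general idea behind statistical text watermarking: a secret key biases the generator toward a keyed “green” subset of the vocabulary, and a detector holding the same key measures how green the text is. This is a simplified illustration of the technique class, not SynthID’s actual algorithm; the vocabulary, key, and sampling scheme below are all made up.

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # stand-in vocabulary

def is_green(prev_token: str, token: str, key: str = "secret") -> bool:
    # A keyed hash of (previous token, candidate) splits the vocabulary
    # into a "green" half and a "red" half at every position.
    h = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return h[0] % 2 == 0

def generate(n: int, watermark: bool, seed: int = 0) -> list[str]:
    rng = random.Random(seed)
    out = ["<s>"]
    for _ in range(n):
        cand = rng.choice(VOCAB)
        if watermark:
            # Resample a few times, keeping the first green candidate:
            # this nudges the text toward the green set with no visible change.
            for _ in range(8):
                if is_green(out[-1], cand):
                    break
                cand = rng.choice(VOCAB)
        out.append(cand)
    return out[1:]

def green_fraction(tokens: list[str]) -> float:
    # A detector with the key counts green tokens; ~50% means unwatermarked,
    # a significantly higher fraction means watermarked.
    hits = sum(is_green(p, t) for p, t in zip(["<s>"] + tokens, tokens))
    return hits / len(tokens)
```

A real detector turns that fraction into a z-score against the 50% baseline, which is roughly what a tool like the SynthID Detector portal reports as “likely AI-generated.”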
Flow
Flow is Google’s answer to tools like OpenAI’s Sora. It merges Veo (video), Imagen (image), and Gemini (text) into a single toolkit that can generate videos from prompts. Describe scenes in plain English, import assets, or generate everything from scratch, then use camera controls and scene builders to craft cinematic clips.
Project Astra
Astra is an ambitious AI assistant that can use your phone camera to understand your surroundings, remember past conversations, and control Android apps to complete tasks. For example, it could find a bike manual, open a YouTube repair tutorial, and contact shops about parts while you work on the repair.
Jules
Jules is an autonomous coding assistant that does more than suggest completions: it can clone a GitHub repo into a secure cloud environment, create plans, fix bugs, write tests, and prepare pull requests for your review. It’s language-agnostic but works best with JavaScript/TypeScript, Python, Go, Java, and Rust.
Key points
– Available Now: ✅ (public beta)
– Where: Global (where Gemini is available)
– Free: ✅ (with limits)
– Languages: JS, Python, Go, Java, Rust
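The “clone, plan, fix, test, open a PR” workflow boils down to an agent loop: run the tests, propose a patch, and repeat until the suite passes. The sketch below is entirely hypothetical — Jules’s real pipeline is not public — and uses stand-in functions where a real agent would call a model and a sandboxed repo.

```python
def run_tests(code: str) -> bool:
    # Stand-in for executing a repo's test suite in a sandbox:
    # here the "suite" just checks that add() behaves correctly.
    scope: dict = {}
    exec(code, scope)
    return scope["add"](2, 3) == 5

def propose_fix(code: str) -> str:
    # Stand-in for the model's patch step; a real agent would
    # generate a diff from the failing test output.
    return code.replace("a - b", "a + b")

def agent_loop(repo_code: str, max_iters: int = 3) -> str:
    for _ in range(max_iters):
        if run_tests(repo_code):
            return repo_code  # tests pass: ready to prepare a PR for review
        repo_code = propose_fix(repo_code)
    raise RuntimeError("could not fix the bug within the iteration budget")

buggy = "def add(a, b):\n    return a - b\n"
fixed = agent_loop(buggy)
```

The point of the loop structure is that the human only enters at the end, reviewing a proposed pull request rather than individual completions.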
Google Beam
Beam evolves Project Starline into a video-call system that makes participants appear three-dimensional without headsets. Using six cameras, custom light-field displays, and AI, Beam renders each participant in 3D with millimeter-level head tracking at 60 fps. Given the hardware complexity, expect enterprise conference-room deployments first, not home setups.
Key points
– Available Now: ❌ (late 2025)
– Where: Enterprise customers first
– Cost: TBD (likely expensive)
And that’s not everything
Those six highlights barely scratch the surface of the roughly 100 announcements Google made at I/O. Other notable items include:
– Real-time speech translation in Google Meet that preserves voice tone and expressions
– Android XR smart glasses (Samsung partnership) with Gemini integration
– NotebookLM Video Overviews: documents turned into narrated video summaries
– New Gemini variants: 2.5 Flash, Gemma 3n, MedGemma (medicine), SignGemma (sign language)
– Fully agentic Google Colab that fixes code errors automatically
– Developer tools: Stitch (UI generation), Journeys (app testing), Version Upgrade Agent (dependency updates)
– Lyria 2 for real-time music composition
– Firebase Studio improvements with Figma import
– Project Mariner: a browser agent that navigates websites and completes complex tasks
The overlap problem
From the outside, Google now looks like many semi-independent teams shipping projects under the same brand. The result is considerable overlap: multiple coding assistants (Jules, Gemini Code Assist, Colab, AI Studio, Firebase), several ways to get AI in search or browsing (AI Mode, Gemini in Chrome, Search Live/Project Astra), and multiple agent frameworks and developer tools.
This creates product confusion. How many distinct ways do we need AI to write or fix code? How should users choose among overlapping search and browsing experiences? The abundance of tools makes it hard to see a single coherent product strategy — which may be intentional or simply the result of rapid experimentation.
The bigger picture
All this comes while Google faces a major U.S. antitrust lawsuit alleging search dominance and anti-competitive behavior. It’s an odd juxtaposition: accused of being too dominant in search, Google responds with a torrent of diverse products that can make the company seem unfocused. The individual technologies are impressive — real-time voice-preserving translation, autonomous coding, near-perfect 3D video calls — but released en masse, important breakthroughs risk getting lost in the noise.
Maybe the scatter-shot approach is strategic: let teams experiment independently and let winners emerge quickly. Or maybe consolidation and clearer product boundaries are forthcoming. Only time will tell.
Have you been following Google’s releases? Which tool are you most excited to try, and which of these were new to you? I’d love to hear your take.

