AI moves from Google at I/O

I’ve been following Generative AI advancements lately on my blog, explaining the underlying technology, where it is going next, and OpenAI and Microsoft’s partnership and moves.

We just saw Google finally retaliate against Microsoft & OpenAI at their I/O event yesterday, making several new moves around Generative AI. Let’s dive into the most interesting announcements:

  • PaLM 2
  • new GenAI models in Vertex
  • new VMs w/ H100s
  • Bard upgrades (now in public beta)
  • Duet AI for GCP
  • Duet AI for Google Workspace + Sidekick
  • Search Gen Exp (SGE)

PaLM 2

They announced PaLM 2, their answer to OpenAI’s GPT-4. They improved its advanced reasoning, math, and coding skills, and it is now multilingual across over 100 languages. It is in private preview, but already powers 25 of their products.

PaLM 2 is available via API or as a foundation model on the Vertex AI platform, and comes in 4 sizes (the smallest of which can run on a phone). They are already working on the next iteration (Gemini), which will be more multi-modal, have native integrations, and improve on memory & planning.

It is being fine-tuned for specific industries, such as medical and security. In security, Sec-PaLM powers their new Security AI Workbench, their answer to Microsoft’s recently announced Security Copilot. It provides a SOC tool that security analysts can use as a knowledge base and an analysis layer over security data.

These new SOC tools have huge implications for security platforms like CrowdStrike and SentinelOne! And they seem highly disruptive to SIEMs like Splunk.

Building AI

Customers can build their own AI models in Vertex AI, either DIY or by creating custom engines through fine-tuning foundation models like PaLM 2. Beyond PaLM 2, Vertex is adding other foundation models, including Imagen (text-to-image), Chirp (speech-to-text), and Codey, a code-assistant AI derived from PaLM 2. It also added model discovery and fine-tuning playground tools.

For those creating their own DIY models, GCP added new VM instances based on the H100, Nvidia’s latest GPU, which has an embedded Transformer Engine to accelerate the transformer architecture (the model design behind this wave of GenAI and LLMs).

Chat AI & Code Assistant

Bard is Google’s new AI-driven chatbot, their answer to OpenAI’s ChatGPT. It is now in open preview and available in most countries. Beyond English, it now speaks Japanese and Korean, with 40 more languages coming. It is now backed by PaLM 2, so it has better reasoning, math, and coding abilities. It can also provide search suggestions and export answers to Docs/Gmail, and it will soon add integrations (extensions) and become multi-modal (images in input and output).

Code assistance is now a big use case, making it their answer to GitHub Copilot. It can generate code and unit tests, make suggestions, and upskill devs. Code can be exported to Google’s Colab notebooks, and soon to Replit.

But most exciting is Bard Extensions, their answer to ChatGPT Plugins. It will soon integrate with a variety of Google products (Shopping, Lens, Flights, Maps, Workspace apps) and then add 3rd party integrations with other platforms. (Many of the same ones that ChatGPT announced.)

Bard, unlike ChatGPT Plus, is free to use. They are likely using that free tier as a way to further improve and enrich the PaLM 2 models from here.

Infusing AI in products

From there, they are infusing Generative AI across their software products, including Workspace, GCP, and Search. They are calling it Duet AI, their answer to Microsoft’s many new Copilot lines.

Duet AI for GCP sits over Google Cloud itself, building a knowledge base over GCP services that DevOps, security, data engineering, and data science teams can query. It also embeds code assistance into dev platforms like Cloud Functions.

Duet AI for Google Workspace sits over their productivity suite, bringing an AI assistant to Gmail, Docs, Sheets, Slides, Meet, and Calendar. This is their answer to Microsoft 365 Copilot. It can generate content (docs, slides, spreadsheets), extract summaries, and create custom imagery. A coming feature called Sidekick is an embedded AI that provides guided assistance on where to take content next (generate the next paragraph, brainstorm, enrich with images, etc).

As for Search, they announced Search Generative Experience (SGE) in private preview. This embeds a Bard-like chat AI to answer more open-ended questions, or ones that would normally take several search queries (such as comparisons). This is their answer to Bing’s new chat interface.

It provides a Bard-like chat AI that can directly enrich the content with Search, Ads, and Perspectives (user-generated content from social media, forums, and Q&A sites). It provides suggested searches for more depth and, like Bard’s coding answers, is adding citations.

You can sign up for private previews of Search and Duet AI for Google Workspace now at Google Labs.

To sum up, Google introduced several new AI capabilities that compete with OpenAI and Microsoft, and is pushing out products around Generative AI in search, chat, code assistants, its productivity suite, and cloud (GCP). Bard is advancing quickly, has already moved into code, and will soon add integrations.

It’s an exciting time; I expect we will see these types of chat interfaces appearing in nearly every platform. monday.com announced it is adding AI features soon (see the recent thread), and Snowflake, Datadog, CrowdStrike, and others would be well served by having a plain-language chat capability, allowing their customers’ data to be queried as knowledge bases.



Muji, this sentence caught my attention. I would appreciate it if you would elaborate on those implications. I was under the impression that S had already incorporated AI into their base product (hence, I’ve finally taken a position). Do you think those “implications” are positive or negative for these two end-point security competitors? Will it impact them similarly, or will one benefit (or suffer) more?


These next-gen security platforms, such as XDR platforms (CrowdStrike and SentinelOne), have leveraged AI from the start as analytical engines over security data – as have SIEMs (security log management platforms like Splunk, Sumo Logic, and Datadog). Both are at risk of having that analytical-engine part of their platform be co-opted.

What Google and Microsoft are now doing is using LLMs (honed to security use cases) over that pool of security data, turning it into a knowledge base that can be queried in plain language, and further enriching it with threat intel and vulnerability-checking features. This provides a SOC tool that improves security analyst productivity and upskills them – greatly helping address the overall skill shortage in security, and helping existing staff do more with less.
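To make that “knowledge base queried in plain language” pattern concrete, here is a minimal, illustrative Python sketch of the retrieval step such a tool might perform. Everything here is hypothetical – it does not reflect the actual Sec-PaLM or Security Copilot APIs – and the word-overlap scoring is a toy stand-in for the embedding similarity a real system would use before handing the retrieved logs to an LLM.

```python
# Toy sketch: pick the log entries most relevant to an analyst's
# plain-language question, so they can be passed to an LLM as context.
# All names and data here are illustrative, not a real vendor API.

def score(question: str, entry: str) -> int:
    """Crude relevance score: count overlapping words (a stand-in for
    the embedding similarity a production system would use)."""
    return len(set(question.lower().split()) & set(entry.lower().split()))

def retrieve(question: str, log_entries: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k log entries most relevant to the question."""
    ranked = sorted(log_entries, key=lambda e: score(question, e), reverse=True)
    return ranked[:top_k]

logs = [
    "2023-05-10 failed login for admin from 203.0.113.7",
    "2023-05-10 outbound transfer of 2GB to unknown host",
    "2023-05-11 routine backup completed successfully",
]

context = retrieve("were there any failed login attempts?", logs, top_k=1)
# In a real SOC tool, `context` would ground the LLM's answer, roughly:
#   answer = llm(f"Answer using only these logs:\n{context}\n\nQ: {question}")
print(context)
```

The value proposition in the announcements is exactly this last step: the analyst asks in plain language, and the LLM answers grounded in the retrieved security data rather than from its general training.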



How are they getting access to the pool of security data? I thought CrowdStrike and SentinelOne were keeping their information to themselves as part of their competitive advantage.



These new security AI platforms first and foremost sit over their own security tools (Google’s SIEM & SOAR, vs Microsoft’s many product lines in XDR, SIEM, IAM, and more), but both have indicated they are opening up to 3rd-party partners.

Both CRWD and S have shown willingness to push data into SIEMs and other partners, so I wouldn’t say they were keeping data to themselves – though they certainly would PREFER to, as that brings more value to the whole “XDR” proposition.