Newsletter Saturday, November 9

I recently saw Sameer Samat on Netflix’s Drive to Survive TV show, bringing the hammer down on Zak Brown, the head of McLaren’s Formula 1 racing team. Google is a big McLaren sponsor, and Samat wanted to see improvement.

And last month, Samat moved up via a reorg at the top of Google. He’s now President of Android Ecosystems, which means he runs Android, the world’s most popular smartphone platform. This job also includes Android TV, Android Auto, and new augmented and mixed-reality technology.

At Google I/O, I got a chance to interview Samat. I started by asking him about how AI is changing the smartphone market, the competition with Apple, and the distribution of Google’s technology.

“AI is having a moment. It’s a huge opportunity for the Android ecosystem,” he said. “We are going to be very fast-moving to not miss this opportunity. It’s a once-in-a-generation moment to reinvent what phones can do. We are going to seize that moment.”

With Google’s new Gemini AI models, “we can do things that have never been possible on smartphones,” he added.

Apple, the 800-pound gorilla of the smartphone market, was a name Samat never uttered during the interview. At one point he said “the other OS,” referring to Apple’s iOS mobile platform, which leads in the US but still lags far behind Android globally.

More than an app

On that “other OS,” Google’s Gemini is just an app. On Android, it’s way more, according to Samat.

He showed me an example by pressing and holding the power button on his Pixel 8 phone, summoning Gemini as an overlay on top of the YouTube app he was using.

A video was playing, and he asked Gemini questions about the clip. Gemini analyzed the footage and answered, drawing on the relevant part of the video. He then pulled out a Samsung Galaxy S24 and did the same thing, this time touching and dragging up from the bottom-right corner of the screen to summon Gemini.

The “System UI” level

This is possible because Google has baked Gemini AI models and assistant technology into the “System UI” level on Android devices. That’s below the app level, where the technically important work happens.

“You can’t do stuff like this if you’re just an app on a device,” Samat said. “We can do this on Android, so Gemini can come into the situation with context, above or to the side of what’s happening.”

Because it isn’t hemmed in by a single app, Gemini is free to roam across more of the device and understand the context of what you’re doing at any moment.

Samat stressed that this only happens if users summon the AI with intentional actions such as that button press on the Pixel 8 or the swipe on the S24.
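For a sense of what “below the app level” means in practice, here is a minimal sketch using Android’s public VoiceInteractionSession API, the hook the platform exposes for system-level assistants. This is not Google’s actual Gemini implementation, just an illustration of how an assistant registered with the OS can be summoned over the foreground app and handed context about what is on screen:

```kotlin
import android.app.assist.AssistContent
import android.app.assist.AssistStructure
import android.content.Context
import android.os.Bundle
import android.service.voice.VoiceInteractionSession
import android.service.voice.VoiceInteractionSessionService

// Declared in the manifest as part of a voice-interaction service; the OS
// starts it when the user performs the assistant gesture.
class AssistantSessionService : VoiceInteractionSessionService() {
    override fun onNewSession(args: Bundle?): VoiceInteractionSession =
        AssistantSession(this)
}

class AssistantSession(context: Context) : VoiceInteractionSession(context) {

    // Called when the user summons the assistant (a long press of the power
    // button on the Pixel, a corner swipe on the S24). The system passes a
    // snapshot of the foreground app's view hierarchy plus any content the
    // app chose to share; context an ordinary app could never see.
    override fun onHandleAssist(
        data: Bundle?,
        structure: AssistStructure?,
        content: AssistContent?
    ) {
        val foregroundApp = structure?.activityComponent?.packageName
        val sharedLink = content?.webUri
        // A real assistant would feed this context to its model and render
        // the answer in an overlay window above the running app.
    }
}
```

The contrast with being “just an app” is the data path: a regular app only sees its own windows, while the assistant role lets the AI float above whatever the user is doing and receive context from it.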

On-device AI

He cited another example: on-device AI with the smaller Gemini Nano model. This runs on the Pixel 8 and the S24, with more Android devices coming soon.

This allows Gemini to do useful things without sending user data to cloud data centers.

One use case for this approach: if you’re using an encrypted messaging service on your phone, that data can’t be sent off to a data center for AI models to process. So a cloud-based AI assistant or agent can’t help you write replies and do other cool stuff when texting.

With Nano, Google has on-device AI that can process these encrypted messages and provide help on crafting replies and taking other actions. None of that data will leave the device, Samat said. 
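To make the privacy point concrete, here is a hedged sketch of the pattern Samat describes: message text is handled by a small model running locally, and nothing goes over the network. The OnDeviceModel interface below is a hypothetical stand-in for an on-device model such as Gemini Nano, not a real Google API:

```kotlin
// Hypothetical sketch, not a real Google API: `OnDeviceModel` stands in for a
// small model such as Gemini Nano running entirely on the phone's hardware.
interface OnDeviceModel {
    // No network access; inference runs on the device's CPU/GPU/NPU.
    suspend fun generate(prompt: String): String
}

class SmartReplyHelper(private val model: OnDeviceModel) {

    // Drafts a reply from message text that stays inside the messaging app's
    // process. Because the model is local, the conversation is never
    // uploaded to a cloud data center.
    suspend fun suggestReply(recentMessages: List<String>): String {
        val prompt = buildString {
            appendLine("Suggest a short, friendly reply to this conversation:")
            recentMessages.takeLast(5).forEach { appendLine(it) }
        }
        return model.generate(prompt)
    }
}
```

The design point is the data path rather than the model: as long as inference happens on the device, an encrypted chat never has to hand its contents to a server.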

Gemini on iPhones?

Then I asked Samat a big business-strategy question: Does Google want its best Gemini models distributed more prominently on Apple devices?

Google already pays Apple billions of dollars a year to make Google Search the default search engine in Safari. Will it do a similar deal to distribute Gemini on iPhones, for instance?

Samat declined to comment. More generally, he said Google’s broad goal is to serve all users around the world.

However, he stressed that this doesn’t mean the company can’t build unique experiences on Android devices, including many new AI experiences.

Circle to Search

He cited Circle to Search as an example. This lets you search for anything you see on your phone screen by simply circling, scribbling on, or highlighting it. If you’re watching a video and spot a hat or a pair of sunglasses you want to buy, for example, all you need to do is launch Circle to Search and circle the product.

This works through a combination of Google Search, Gemini AI technology, and Android — something that’s not possible on any other platform, Samat said.

These experiences require end-to-end optimization, which is what Google is doing with its Pixel devices and with Samsung, and soon with other Android device makers.

“AI is a fundamental differentiator for Android. Samsung is a critical part of this, as is Pixel,” he said. 

“Is this all about our Pixel devices? No!” he added. Samsung and other Android device makers are crucial in this next wave of AI-powered devices, he explained.

Read the full article here
