[Image: Three Pixel 9 phones against an AI-generated moonscape background, with another moon visible in the sky.]
I asked Gemini to "reimagine" the background of this Pixel 9 group shot (originally on beige paper) as a "science fiction moonscape," then used "Auto frame" to expand the initially tight shot. Maybe that explains why this moon surface has another moon visible?

Credit: Kevin Purdy / Gemini AI

Google made its AI assistant, Gemini, central to its pitch to reviewers and the public—it’s what makes Pixel phones different from any other Android phone, the company says. In fact, you have to go 24 minutes into Google’s keynote presentation, and cringe through a couple of live AI demo failures, before Pixel hardware details are even mentioned.

I’ve been using a Pixel 9 Pro as my daily phone for about a week. There is almost nothing new about the Pixel 9 that is not linked to Gemini in some way, minus the physical design of the thing. So this review will look at how Gemini performs on the Pixel 9, which is Google’s premier platform for Gemini at the moment. While some of the Pixel 9’s AI-powered features may make it to other Android-powered phones in future Android releases, that’s not a certainty. AI—as a free trial, as a custom Google-designed chip, and as an OS integration—is something Google is using to set Pixels apart.

I wrote a separate review of the three main Pixel 9 devices. But considering the Pixel 9 as a hardware-only product is strange. The short version is that the phones themselves are capable evolutions of the Pixel series and probably the best versions Google has made yet, and they’re sold at prices that reflect that. If you love Pixel phones, are eager to upgrade, and plan to ignore Gemini specifically and AI features generally, that might be all you need to know.

But if you buy a Pixel 9 Pro, Pro XL, or Pro Fold (coming later), starting at $1,000 for the Pro, you get access to a free year of Gemini Advanced ($240 per year after that), and you’ll see Gemini suggested in every Google-made corner of the device. So let’s talk about Gemini as a phone task assistant, image editor, and screenshot librarian. I used Gemini as much as felt reasonable during my week with a Pixel 9 Pro.

I’m very new to general-purpose AI chatbots and prompt-based image generation and had never used an “advanced” model like Gemini Live before. Those with more experience or pre-existing enthusiasm will likely get more out of Google’s Gemini tools than I did. I’ll also leave discussions of Google’s approach to on-device AI and its energy impacts for other articles.


Gemini, generally: Like a very fast blogger working for you

Testing the Pixel 9 Pro, I’ve had access to the most advanced versions of Gemini, both the “Advanced” model itself (a free one-year trial given to every Pixel 9 buyer) and its advanced speech dialogue, “Gemini Live.” Has it been helpful?

It has been like hiring a blogger who is available to me at all times, working much faster and with far fewer complaints than their human counterparts, at the push of a button. This blogger is a capable if unstylish writer, one who can look things up quickly and cobble together some facts and advice. But the blogger is also easily distracted and not somebody you'd inherently trust with key decisions without further research, perhaps into the very sources they're citing.

I should know—I used to be that kind of fast-writing, six-posts-a-day blogger when I worked at Lifehacker. In the late 2000s, I was in my mid-to-late 20s, and I certainly didn’t have all the knowledge and experience needed to write confidently about every possible subject under the broad topics of “technology,” “productivity,” and “little things that might improve your life if you think about them for a bit.”

But I could certainly search, read, and triangulate the advice of a few sites and blogs and come up with reasonable summaries and suggestions. Depending on how you looked at it, I was an agile general assignment writer, a talented bullshitter, or some combination thereof.