Google showed me its AI future for Google Home, and it blew me away

The Google Home logo on a Pixel phone.
Joe Maring / Digital Trends

Google’s making a few announcements today ahead of its big Pixel event next Tuesday. In addition to revealing the new Nest Learning Thermostat and the Google TV Streamer, Google is also providing a sneak peek at some big Google Home and Google Assistant changes. And they’re all really impressive.

We’ll start with the Google Assistant. Google has revealed a new voice for the Assistant, and it sounds significantly more natural than the current one. It’s difficult to describe in writing, but the gist is that the Assistant’s voice now sounds more like a human and less like a robot. The Assistant takes natural pauses while speaking and has inflections in its voice.

Additionally, the Google Assistant is getting better and more natural with follow-up questions. In a demo video I saw, someone asks the Google Assistant if Pluto is still a planet. The Assistant explains that it is not and that the International Astronomical Union (the IAU) decided to reclassify Pluto as a dwarf planet. The person then simply asks, “Could they change their minds again?” The Assistant knows that “they” are the IAU and that the person is asking if the organization could change its mind about Pluto being a dwarf planet.

As cool as this all is, the really exciting stuff has to do with Google Home. Google showcased its plans for bringing Gemini into the Google Home experience, and even as someone who hasn't been particularly impressed with existing Gemini features, I found what's coming to Google Home pretty jaw-dropping.

Using Gemini to create a Google Home automation.
Gemini creating a Google Home automation. Google

My favorite Gemini feature is how you can use it to create automations. Automations are an important part of any smart home, but they’re also not particularly easy to set up. Having the lights automatically turn on when you get home is great, but setting that up yourself can be easier said than done.

With Gemini, you’ll be able to create automations by simply saying or writing what you want your automation to do. In an example, Google shows someone using Gemini in the Google Home app and saying, “Help the kids remember to put their bikes in the garage when they come home from school.” Using that, Gemini creates an automation that will turn on the garage lights and broadcast a message with a reminder to put away bikes whenever someone arrives home between 3:30 p.m. and 5 p.m. You can then tap a button to see the full automation process and customize it if you want to. Otherwise, you tap another button to save it, and that’s all there is to it.
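Google hasn't shown the exact format of the automation Gemini generates here, but based on the demo described above, it boils down to a trigger, a time window, and a couple of actions. Here's a rough sketch of that structure in Python; it's purely illustrative, not Google's actual automation schema, and every name and field in it is made up:

```python
# Hypothetical sketch of what the "bikes in the garage" automation from the
# demo might contain once you tap through to the full details. Not Google's
# real schema -- just an illustration of the trigger/condition/action structure.
from dataclasses import dataclass, field
from datetime import time


@dataclass
class Automation:
    name: str
    trigger: str                       # the event that starts the automation
    active_between: tuple[time, time]  # the time-window condition
    actions: list[str] = field(default_factory=list)


bike_reminder = Automation(
    name="Bike reminder",
    trigger="someone arrives home",
    active_between=(time(15, 30), time(17, 0)),  # 3:30 p.m. to 5 p.m.
    actions=[
        "turn on the garage lights",
        "broadcast: 'Remember to put your bikes in the garage!'",
    ],
)

if __name__ == "__main__":
    print(bike_reminder)
```

The point of the demo is that you never have to assemble any of this yourself; you describe the outcome in plain language, and Gemini fills in the trigger, window, and actions for you to review.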

Gemini improving camera activity search results in Google Home.
Google

Gemini is also going to make searching your camera activity a lot easier. Using the same bike example, you could go to the Activity page in Google Home and search, “Did the kids leave their bikes in the driveway?” You then get a clear answer at the top, followed by video clips Gemini pulled its answer from. It sounds simple explained this way, but the technical process happening behind the scenes to make this look so seamless is nothing short of amazing.
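Google hasn't detailed that behind-the-scenes process, but conceptually it amounts to something like this: each camera event gets a rich AI-generated description when it's recorded, and your question is then matched against those descriptions to surface the relevant clips. Here's a deliberately simplified Python sketch of that idea; toy word-overlap matching stands in for the semantic matching a real system would use, and every name in it is hypothetical:

```python
# Toy sketch of a natural-language search over camera activity. Each event
# already carries an AI-generated description; a query is answered by matching
# it against those descriptions and returning the relevant clips. This is an
# illustration of the general technique, not how Google Home actually does it.
from dataclasses import dataclass


@dataclass
class CameraEvent:
    clip_id: str
    timestamp: str
    description: str  # detailed caption generated when the event was recorded


def search_activity(query: str, events: list[CameraEvent]) -> list[CameraEvent]:
    """Return events whose descriptions share words with the query (toy matching)."""
    query_words = {w.strip("?,.").lower() for w in query.split()}
    matches = []
    for event in events:
        description_words = {w.strip("?,.").lower() for w in event.description.split()}
        if query_words & description_words:
            matches.append(event)
    return matches


events = [
    CameraEvent("clip_001", "2024-08-06 15:42",
                "Two kids ride up the driveway and leave their bikes near the garage door."),
    CameraEvent("clip_002", "2024-08-06 16:10",
                "A delivery driver drops a package on the front porch."),
]

print(search_activity("Did the kids leave their bikes in the driveway?", events))
```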

This is all possible because Gemini will greatly improve the quality and detail of what your smart-home cameras see. For example, as it stands today, a Nest camera watching your backyard can alert you when it sees a bird on your bird feeder, but it only knows to classify the bird as an animal. With Gemini, however, it could provide a much more in-depth explanation of the scene, such as:

“A blue jay at a seed-filled feeder. Its blue and white feathers vibrant against a dull, wintry backdrop. There are no people or vehicles, just tranquil natural scenery and the colorful bird.”

An example of Gemini providing a detailed explanation of a scene a smart camera sees.
Google
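Nest cameras won't expose this pipeline directly, but you can get a feel for this kind of scene description today with the public Gemini developer API. Here's a minimal sketch that asks Gemini to caption a single camera frame using Google's google-generativeai Python SDK; it's not the Nest integration itself, and the file name and prompt are placeholders of my own:

```python
# Minimal sketch: asking Gemini (via the public google-generativeai SDK) to
# describe a single still frame in detail. This illustrates the idea of rich
# scene captions; it is not the Nest camera pipeline itself.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # assumes you have a Gemini API key

model = genai.GenerativeModel("gemini-1.5-flash")
frame = Image.open("backyard_frame.jpg")  # placeholder: a still from the camera

response = model.generate_content(
    [
        "Describe this security camera frame in two or three sentences. "
        "Mention any animals, people, or vehicles, and note if the scene is calm.",
        frame,
    ]
)
print(response.text)
```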

While it remains to be seen how all of this works in the real world compared to pre-rendered demos in a press briefing, everything Google is showing here looks incredible. It often feels like Google announces Gemini features without a clear explanation of how they’re supposed to make your life easier, but that’s not the case here. Using Gemini to create automations is ingenious and something I can’t wait to try. The upgraded Google Assistant sounds fantastic. The new AI tools for Nest cameras are like something straight out of the future.

Now, the important question: When can you use all of these features for yourself? Google says it’ll begin rolling everything out to Nest Aware subscribers in a Public Preview phase later this year. The exact timing is unclear, but I certainly hope it’s sooner rather than later. Google is onto something magical here, and I can’t wait to get my hands on all of it.
