Duplex on the Web, NextGen Google Assistant and Beyond
Google I/O, the company's annual developer conference where it introduces new products and technologies, was held in Mountain View, California last month. As in previous years, the tech giant had a lot to say about its work on AI, conversational AI and Google Duplex.
Google restated its mission of "organizing the world's information and making it universally accessible and useful." CEO Sundar Pichai described the company's evolution as "moving from a company that helps you find answers to a company that helps you get things done." The theme of the event was "a more helpful Google for everyone": helping people increase their knowledge, success, health and happiness, with an emphasis on the scale and reach of its services.
Google Duplex is coming to the web
At last year's Google I/O, Sundar Pichai introduced Google Duplex, demonstrating how human-like it sounded while booking a hair salon appointment and making a restaurant reservation over the phone. The idea is to make Google Assistant more helpful by touching more aspects of people's lives.
In November 2018, the company announced that Google Duplex was rolling out to a limited number of users in a few U.S. cities. Google also made a few changes to address privacy concerns, such as having Duplex identify itself as calling from Google and tell the person answering that the call is being recorded.
At I/O 2019, Google said that Duplex is now available in 44 U.S. states and described it as "an approach by which they train AI on simple but familiar tasks to accomplish them and save people's time." Pichai added that Duplex is now moving beyond voice and extending to tasks on the web, and Google launched "Duplex on the web."
This year's sample cases were rental car bookings and movie ticketing. Let's start with booking a car: when you ask Google to book a rental car for your next trip, the Assistant opens the website and starts filling in your information on your behalf, including the dates of the trip. You confirm the details with just a tap, and the Assistant continues navigating the site and selects the car you prefer. You can check everything one last time and tap to finalize the reservation. This example shows how the Assistant completes a task online on the user's behalf in a personalized way: it understands the dates of your trip, your car preferences and so on. The interesting point is that no action is required on the part of the business to implement it.
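To make the idea of web-based task completion concrete, here is a minimal, purely illustrative sketch of automated form filling using the Selenium browser-automation library. The URL, form field names and user profile values are hypothetical placeholders; this is not how Duplex on the web is actually built, just the general pattern of pre-filling a booking form and waiting for an explicit user confirmation.

```python
# Illustrative only: a toy form-filling flow in the spirit of "Duplex on the web".
# The site URL, element names and profile data below are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

user_profile = {
    "full_name": "Jane Doe",          # data the assistant already knows about the user
    "pickup_date": "2019-07-12",      # inferred from the trip dates
    "return_date": "2019-07-16",
    "car_class": "compact",           # a learned preference
}

driver = webdriver.Chrome()
driver.get("https://example-rental-cars.test/book")  # placeholder booking page

# Pre-fill the reservation form on the user's behalf.
driver.find_element(By.NAME, "full_name").send_keys(user_profile["full_name"])
driver.find_element(By.NAME, "pickup_date").send_keys(user_profile["pickup_date"])
driver.find_element(By.NAME, "return_date").send_keys(user_profile["return_date"])
driver.find_element(By.NAME, "car_class").send_keys(user_profile["car_class"])

# Nothing is booked automatically: the user reviews the details and confirms explicitly.
input("Review the pre-filled reservation, then press Enter to submit...")
driver.find_element(By.ID, "confirm-booking").click()
driver.quit()
```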
The next generation Google Assistant
As a reminder, Google Assistant is available on over 1 billion devices, in over 30 languages, across 80 countries. This year Google stressed that the focus is on making the Assistant faster, "so fast that tapping to use your phone would seem slow." This is a feature that can transform the future of the Assistant. The next generation Assistant runs on-device, can process and understand requests in real time, and delivers answers up to 10 times faster. Using voice commands alone, you can open and navigate apps, set a timer, look at photos, get the weather forecast, check a Twitter account, call a Lyft, turn on the flashlight and take a selfie. It can also handle several requests in a row without your having to say "Hey Google" each time. It is an instant, easy and effortless way to use your phone, and a serious step towards invisible interfaces between humans and technology.
For example, you can ask the Assistant to show you a photo taken in a particular place or on a particular date, or even share it with a friend. Or suppose you need a piece of information, say the time of your flight: you ask the Assistant, and it replies by reaching into the relevant app and retrieving the answer. Multitasking across different apps is therefore built in.
The next generation Assistant can also handle more complex speech scenarios, like composing and sending an email. At I/O 2019, Google showed how it can compose an email entirely by voice: dictating the body, adding a subject and saying "send it!" The Assistant differentiates between the user dictating part of the message and asking it to complete an action.
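As a toy illustration of that distinction (and nothing like Google's actual models), a rule-based separation of action commands from dictation might look like the sketch below; the phrase list is invented for the example.

```python
# Toy sketch: separating spoken action commands from dictated email text.
# The phrase list is invented for illustration; real assistants use learned
# language understanding models, not keyword matching.
ACTION_PHRASES = {"send it", "discard it", "add a subject"}

def classify_utterance(utterance: str) -> str:
    """Return 'action' if the utterance is a known command, otherwise treat it as dictation."""
    normalized = utterance.lower().strip().rstrip("!.?")
    return "action" if normalized in ACTION_PHRASES else "dictation"

print(classify_utterance("See you at the airport on Friday"))  # -> dictation
print(classify_utterance("Send it!"))                          # -> action
```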
These AI models let people use their phones by voice, switching among apps and completing actions like sending an email or finding a photo. It is definitely a paradigm shift. Google announced that the next generation Assistant is coming to the new Pixel phones later this year.
Personalized help by the Assistant
The latest phase of digitization is dominated by the concept of personalization: people expect it from almost every service, and the Assistant is no exception, especially where preferences differ widely, such as choosing what to listen to, what to do on the weekend, or even what to eat. If you ask Google what to cook for dinner, it really has to know your tastes. A new feature called "picks for you" shows recipe suggestions based on the ones you have cooked before.
When you ask Google how the traffic is on the way to your mom's home, it knows exactly which destination you mean; every time you say "mom," Google knows who you are talking about. Similarly, you can ask to see photos of your son, have it remind you of your anniversary, and so on.
Google Assistant in the car
Last year, Google brought the Assistant to Android Auto. At Google I/O 2019, they talked about how they have improved the mobile driving experience. If you have a dinner reservation, the driving mode dashboard shows a convenient shortcut to navigate to the restaurant; it surfaces the most relevant information for that person at that specific moment. Similarly, if you started a podcast at home in the morning but could not finish it, the dashboard displays a shortcut to resume the episode right where you left off.
Everything is voice-enabled. If a call comes in, the Assistant tells you who is calling and asks whether you want to answer, so you never have to take your eyes off the road.
Nest Hub: A personalized experience for the entire household
In line with its "technology for everyone" approach, Google presented the Nest Hub line as products that provide a personalized experience for the entire household. Google Home Hub has been renamed Nest Hub, and a new model, Nest Hub Max, adds a camera and a larger 10-inch display. Hub Max has a dashboard where you can see your Nest cams, switch on lights, control your music and adjust your thermostat. The Assistant greets you in the morning when it sees that you are awake, and when you come home Hub Max welcomes you with your reminders and offers personalized recommendations for music and TV shows. Another interesting feature is gesture recognition: when the volume is up, instead of shouting at the Assistant to turn it down, you just raise your hand, and Hub Max uses on-device machine learning to recognize the gesture and lower the volume.
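As a rough sketch of what such an on-device gesture check could look like (assuming the open-source MediaPipe Hands library, not Nest Hub Max's actual pipeline), a raised-hand detector that lowers the volume might be written along these lines; lower_volume() here is a hypothetical placeholder.

```python
# Rough, illustrative sketch of a raised-hand "volume down" gesture using MediaPipe Hands.
# This is NOT Nest Hub Max's pipeline; lower_volume() is a hypothetical placeholder action.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def lower_volume() -> None:
    print("Gesture detected: lowering volume")  # stand-in for the real device control

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures frames in BGR order.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            landmarks = results.multi_hand_landmarks[0].landmark
            wrist, middle_tip = landmarks[0], landmarks[12]
            # Crude "raised open hand" test: fingertip clearly above the wrist in the frame
            # (image y-coordinates grow downward, so "above" means a smaller y value).
            if middle_tip.y < wrist.y - 0.2:
                lower_volume()
cap.release()
```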
Google asserts that this is not a smart home but a helpful home!
Conclusion
Given all these new approaches, products and technologies, it is clear that the world we live in is changing rapidly. It is transforming not only our daily lives but also the way we do business. Many companies will adapt their strategies to these technological developments, interacting with their customers, selling products and services and managing their employees in new ways. Cbot will be one of the technology leaders helping companies in this new world.