Google is launching a new experimental "AI Mode" feature in Search that aims to take on popular services such as OpenAI's ChatGPT Search. The tech giant announced Wednesday that the new mode is designed to let users ask complex, multi-part questions and follow-ups to dig deeper into a topic directly in Google Search.
AI Mode is rolling out to Google One AI Premium subscribers starting this week and is accessible via Search Labs, Google's experimental arm.
The feature uses a custom version of Gemini 2.0 and is particularly useful for questions that require further exploration and comparisons, thanks to its advanced reasoning, thinking, and multimodal capabilities.
For example, you might ask: "What's the difference in sleep tracking features between a smart ring, a smartwatch, and a tracking mat?"
AI Mode can then give you a detailed comparison of what each product offers, along with links to the articles it draws the information from. You could then ask a follow-up question, such as "What happens to your heart rate during deep sleep?" to continue your search.

Google says that in the past, it would have taken multiple searches to compare detailed options or explore a new concept with traditional search.
AI Mode can access web content, but it also draws on real-time sources such as the Knowledge Graph, real-world information, and shopping data for billions of products.
"What we're seeing in testing is that people are asking questions that are twice the length of traditional search queries, and they're also following up and asking follow-up questions over a quarter of the time," Robby Stein, vice president of product at Google Search, told TechCrunch in an interview. "And so they're really getting at these harder questions, the ones that need more back-and-forth, and we think that creates an expanded opportunity to do more with Google Search, and that's what's really exciting to us."
Stein noted that when Google rolled out AI Overviews, a feature that displays a snapshot of information at the top of the results page, it heard from users who wanted a way to get this type of AI-powered response for even more of their searches, which is why the company is introducing AI Mode.
AI Mode works by using a "query fan-out" technique, which issues multiple related searches simultaneously across several data sources and then brings those results together in an easy-to-understand response.
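Google hasn't published AI Mode's internals, but the fan-out pattern it describes is a familiar one. Below is a minimal Python sketch of the general idea, assuming a sub-query generator, a per-source search call, and the source names, all of which are hypothetical placeholders rather than real Google APIs:

```python
import concurrent.futures

def generate_subqueries(query: str) -> list[str]:
    """Hypothetical: a model would decompose the question into related searches."""
    return [
        f"{query} smart ring",
        f"{query} smartwatch",
        f"{query} tracking mat",
    ]

def search_source(source: str, subquery: str) -> dict:
    """Placeholder for querying one backend (web index, Knowledge Graph, shopping data)."""
    return {"source": source, "query": subquery,
            "snippet": f"result for {subquery!r} from {source}"}

def fan_out(query: str, sources: list[str]) -> list[dict]:
    """Issue every (source, sub-query) pair concurrently, then gather the results."""
    subqueries = generate_subqueries(query)
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [
            pool.submit(search_source, src, sq)
            for src in sources
            for sq in subqueries
        ]
        return [f.result() for f in concurrent.futures.as_completed(futures)]

results = fan_out("sleep tracking features",
                  sources=["web", "knowledge_graph", "shopping"])
# A final synthesis step (the model writing the answer) would consume `results`.
print(len(results), "results gathered for synthesis")
```

The point of fanning out is that the user's single question becomes many narrow, parallel lookups, which a model can then merge into one comparison-style answer with citations.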

"The model has learned to really prioritize factuality and to back up what it says with information that can be verified, and that's really important, and it pays particular attention to really sensitive areas," Stein said. "So that could be health, for example, and where it isn't confident, it may actually respond with a list of web links and URLs, because that's the most helpful thing in the moment. It will do its best to be really helpful given the context of the information available and its confidence in the answer. That doesn't mean it will never make mistakes. It's very likely that it will make mistakes, as with every new type of advanced, cutting-edge technology that gets released."
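Stein's description amounts to a confidence-gated response format: generate an answer when the model is sure, fall back to plain links when it isn't. Here is a minimal sketch of that gate, assuming a scalar confidence score and a fixed threshold, neither of which Google has disclosed:

```python
CONFIDENCE_THRESHOLD = 0.7  # hypothetical cutoff, not a published Google value

def format_response(answer: str, confidence: float, sources: list[str]) -> dict:
    """Return a generated answer when confident; fall back to plain links otherwise."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"type": "ai_answer", "text": answer, "citations": sources}
    # Low confidence (e.g., a sensitive health query): surface links instead.
    return {"type": "link_list", "links": sources}
```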
Since this is an early experiment, Google notes that it will continue to refine the user experience and expand the functionality. For example, the company plans to make the experience more visual and to surface information from a range of different sources, such as user-generated content. Google is also teaching the model to determine when to add a hyperlink to a response (e.g., booking tickets) or when to prioritize an image or video (e.g., how-to queries).
Google One AI Premium subscribers can access AI Mode by opting in via Search Labs, then entering a question in the search bar and tapping the "AI Mode" tab. Or, they can navigate directly to google.com/aimode to access the feature. On mobile, they can open the Google app and tap the "AI Mode" icon below the search bar on the home screen.
As part of today's announcement, Google also shared that it's rolling out Gemini 2.0 for AI Overviews in the United States. The company claims AI Overviews will now be able to help with harder questions, starting with coding, advanced math, and multimodal queries. In addition, Google announced that users no longer need to sign in to access AI Overviews, and that the feature is now also rolling out to teen users.