Google is constantly updating its Gemini AI assistant. A few months ago, it introduced third-party music streaming support.
Recently, Google introduced a new feature to Gemini that allows users to double-check the accuracy of AI-generated content.
This new “double-check response” feature is available on the Gemini web app as well as the mobile apps.
This feature enables users to cross-verify content generated by the chatbot against Google Search results. Users can thus compare Gemini’s information with what is found on the web and judge whether it is reliable and accurate.
When Gemini responds to a user’s query, a Google logo appears at the bottom of the content.
Users can initiate a Google search by clicking on this logo to verify the information.
The search results are then color-coded to help users understand the level of similarity between Gemini’s response and the content found on the web.
Text highlighted in green indicates that Google Search found content very similar to Gemini’s response, and a link to that content is provided.
Text highlighted in orange indicates that Google Search found no relevant content matching that part of Gemini’s response.
Unhighlighted text indicates that there was not enough information on the web to evaluate the accuracy of that part of the response.
This new feature seems useful because large language models like Gemini are known to sometimes generate inaccurate or fabricated information.
Google acknowledges that this technology is still in the early stages of development and wants to give users clear signals to review AI-generated content and double-check it with Google Search when in doubt.
The introduction of the double-check feature in Gemini AI is part of Google’s broader efforts to integrate its AI technology into various products and services, including its education-focused Google Workspace for Education.
Google says that the double-check feature will be enabled automatically for students in the education version of Gemini.
This should help students verify the accuracy of the information they receive, which is especially valuable in educational contexts, where AI tools raise concerns about plagiarism, over-reliance on AI assistants, and the potential for AI to generate inappropriate content.
Google Gemini’s double-check feature is a step towards improving the transparency and reliability of AI-generated content. It empowers users to critically evaluate the information they receive and make informed decisions.
Also read: Google Rolls Out New AI Features Powered By Its Gemini Models