Ever feel like words aren’t quite enough for what you want to ask Google, but Google Image Search isn’t right for the job either? You’ll be excited to hear about Multisearch, a new way to use text and images together to find exactly what you’re looking for when searching the web.
Multisearch is a new feature in Google Lens that pairs a visual query with a contextual text phrase so Google can better understand what you’re asking. In the announcement, Google says it designed the feature to let you “go beyond the search box and ask questions about what you see.”
How Multisearch Works
As part of Google Lens, Multisearch is still first and foremost a visual search tool. You start by opening the Google app on an Android or iOS device and taking a picture with your device’s camera or uploading one. Then you can provide more information about what you’re looking for by swiping up and tapping the “+ Add to your search” button.
Google offers a few examples of how people can use Multisearch to get better search results:
- Screenshot a stylish orange dress and add the query “green” to find it in another color
- Snap a photo of your dining set and add the query “coffee table” to find a matching table
- Take a picture of your rosemary plant and add the query “care instructions”
In its current form, Multisearch is best suited to shopping-related searches, which makes it something e-commerce brands should keep an eye on in the near future.
While the feature uses Google’s AI systems, the announcement clarifies it does not use the search engine’s most recent AI model, MUM – yet:
“We’re also exploring ways in which this feature might be enhanced by MUM – our latest AI model in Search – to improve results for all the questions you could imagine asking.”
Multisearch is available now to US users who have downloaded the most recent update of the Google app. For more information, check out Google’s blog post announcing the feature.