Google Maps Now Uses AI to Determine Where People Are Having Fun


With the recent Maps update, Google users can now build itineraries simply by typing “things to do” into the search field. According to a Google press release on Thursday, users will now see photo-first results for activities such as art exhibitions, as well as lists meant to inspire activities.

Google Maps uses artificial intelligence to analyze the billions of photographs uploaded by users and home in on particular activities. Thanks to AI, users will be able to “discover new spots that match exactly what you’re looking for,” according to Google. Searching “things to do” in the app returns categorized results that can give people at a loose end some ideas.

The feature, first introduced in France, Germany, Japan, and the UK, rolls out to users in the United States this week.

Google also added further capabilities to the navigation app, including a more comprehensive immersive view of cities, the ability to locate EV charging stations, and the ability to create detailed representations of 3D scenes using artificial intelligence. The Lens feature uses artificial intelligence as well: with “Search with Live View,” you can tap an icon and hold up your phone to scan your surroundings for details about nearby landmarks.

The new changes come as Google keeps experimenting with AI in an effort to overtake rival tech giants. After ChatGPT launched in November of last year, Google scrambled to catch up, launching its chatbot Bard and other AI initiatives across the company.

Back in August, the company launched a beta version of its Search Generative Experience (SGE), which uses generative AI to show shoppers what a product looks like when they search for it. Although SGE has not yet been made available to the general public, Google CEO Sundar Pichai said during a Q3 earnings call earlier this week that leveraging AI will open up “new opportunities for content to be discovered.”

Google’s New Updates Help Identify AI Deepfakes

Google is introducing several new ways to determine the true source of that bizarre picture of Pope Francis wearing Balenciaga. There’s a pattern to the company’s most recent updates to Android, the Play Store, and Search: all of them are intended to give users more ways to learn about a deepfake’s shady past and to report it.

Take Google Play, which informed developers on Wednesday that it is updating its app policies to cover generative AI apps. By early next year, any app that includes an AI chatbot or image generator must offer an in-app option for reporting objectionable AI content; the reporting method must be in-app, not handled through some outside channel. Google said these reports should “inform content filtering and moderation” for each AI-based app.

The policy change doesn’t alter the company’s existing rules prohibiting restricted content, such as child abuse material, and it essentially restates a requirement already included in the user-generated content policy.

Additionally, Google plans to give users the ability to view an image’s history, which may come in handy when dealing with AI deepfakes. Google first revealed its “About this image” feature at Google I/O in May, but users could finally begin using it on Wednesday. The tool is comparable to Google’s text-based “About this result” function, first released back in 2021, except that rather than providing details about a webpage, it tracks where an image has previously appeared on the internet.

Once you’ve clicked on an image, select the three dots in the top bar and choose “About this image” to see the sites where it has been used before. The results include a history of the most recent pages to use it, giving a ballpark estimate of when it was first posted online. Citing the well-known crooked house in Poland as an example, Google stated, “Now, with ‘About this image,’ you can see that crooked house is actually real, and not a digital illusion.”

The tool also notes whether the image was lifted verbatim from someone else’s page without permission or has been cited by any fact-checking websites. In addition, it surfaces the image’s metadata, which might contain clues about whether AI was used to create the image. Google and Adobe have vowed to add a metadata mark to everything created with their own AI art generators, though that approach overlooks how simple it is to change those metadata tags.
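To illustrate why metadata-based labels are fragile, here is a minimal sketch using the Pillow library (the article names no specific tooling, and the file names are hypothetical). It reads whatever EXIF tags a JPEG carries and then shows that a plain re-save drops them entirely:

```python
# Minimal sketch (not Google's or Adobe's actual tooling): inspect a JPEG's
# EXIF metadata and show how easily it disappears on re-save.
# Assumes Pillow is installed: pip install Pillow
from PIL import Image, ExifTags

# Read whatever EXIF tags the file carries (could include provenance marks).
img = Image.open("example.jpg")  # hypothetical input file
exif = img.getexif()
for tag_id, value in exif.items():
    name = ExifTags.TAGS.get(tag_id, tag_id)
    print(f"{name}: {value}")

# Pillow does not copy EXIF data to the output unless it is passed explicitly,
# so a simple re-save silently discards it -- one reason metadata alone is a
# weak provenance signal.
img.save("example_stripped.jpg")
print("Tags after re-save:", dict(Image.open("example_stripped.jpg").getexif()))
```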

For now, “About this image” can only be reached through standard searches, but Google says it will expand the ways to access the tool in the future.

The other significant upgrade concerns the ongoing Search Generative Experience (SGE) beta, which essentially integrates an AI chatbot into Google’s search engine. The built-in chatbot should now be able to provide more details about “some sources,” drawing on information from other “high-quality sites.” That information will appear in “About this result” when neither Wikipedia nor the Google Knowledge Graph has any data on the source.
