April 18, 2025 – A new and concerning trend has emerged online, as highlighted by TechCrunch today: a growing number of users are leveraging ChatGPT to work out exactly where photos were taken. The development is raising eyebrows across the digital landscape.
This week, OpenAI unveiled two new models, o3 and o4-mini, both equipped with image-reasoning capabilities. These models can dissect the fine details of uploaded images, going so far as to crop, rotate, and enlarge blurry or distorted pictures for more in-depth analysis.

Harnessing this analytical prowess, coupled with the models’ web-searching functionality, ChatGPT has transformed into a formidable “location-finding tool.” Users on X have swiftly realized that models like o3 excel at deducing cities, landmarks, and even specific restaurants and bars from minute details within images.
Significantly, the models’ assessments don’t rely on ChatGPT’s prior conversation history or on the EXIF metadata embedded in photos, which typically records where they were shot.
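For context on what that metadata would normally expose, here is a minimal sketch of reading the GPS coordinates a phone typically embeds in a geotagged JPEG. It assumes the Pillow library is installed; the file name photo.jpg is hypothetical.

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

GPS_IFD_TAG = 0x8825  # standard EXIF pointer to the GPS sub-directory

def gps_coordinates(path):
    """Return (latitude, longitude) in decimal degrees, or None if the photo has no GPS tags."""
    exif = Image.open(path).getexif()
    gps_raw = exif.get_ifd(GPS_IFD_TAG)
    if not gps_raw:
        return None
    # Map numeric EXIF tags to readable names such as "GPSLatitude".
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}

    def to_decimal(dms, ref):
        # dms holds (degrees, minutes, seconds); southern/western hemispheres are negative.
        degrees = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
        return -degrees if ref in ("S", "W") else degrees

    return (
        to_decimal(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
        to_decimal(gps["GPSLongitude"], gps["GPSLongitudeRef"]),
    )

print(gps_coordinates("photo.jpg"))  # hypothetical geotagged photo
```

What makes the new behavior notable is that o3 reaches similar conclusions from visual clues alone, even when this metadata has been stripped, as it is by most social platforms.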
Now, a substantial number of users on X are uploading pictures of restaurant menus, neighborhood scenes, building facades, and even selfies, challenging o3 as if it were playing an online location-guessing game called GeoGuessr (which uses Google Street View to let players identify locations).
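For readers curious how such a query looks outside the chat interface, the sketch below shows one way to pose the same GeoGuessr-style question through the OpenAI Python SDK. The model name “o3”, its API availability, and the file street_scene.jpg are assumptions for illustration, not a documented recipe.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical local photo; any street-level image would do.
with open("street_scene.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="o3",  # assumption: the reasoning model is exposed under this name in the API
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Where was this photo taken? Name the city and, if you can, "
                            "the specific street, landmark, or venue, and explain your clues.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

In the ChatGPT interface itself, the equivalent is simply attaching the image and typing the same question.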
This trend has naturally sparked privacy-related fears. If someone takes a screenshot of another person’s Instagram story, they can use ChatGPT to try to pinpoint where that person is, with no technical hurdles in the way.
However, o3 is not without flaws when it comes to location finding. During tests, the model often got stuck in loops, unable to reach a definitive conclusion, or simply provided incorrect locations. Some users on X have also reported that o3 occasionally makes significant errors in its location inferences.
This phenomenon underscores the growing security risks associated with the increasing power of the new generation of “reasoning-type” AI models. Currently, ChatGPT lacks effective measures to prevent such “reverse location finding,” and OpenAI has not mentioned the issue in its safety reports for o3 and o4-mini.