Adding random offsets to a precise location doesn't reduce accuracy given repeated observations
I noticed changes f2345638 and fcd237b6, which fetch the precise location even when city-level accuracy is requested, and then attempt to reduce the accuracy of the resulting location by adding a random offset to it. (This was in response to #64 (closed): using the IP address alone can give wildly incorrect results.)
The accuracy-reduction is done here: https://gitlab.freedesktop.org/geoclue/geoclue/blob/979692897149c9542bfcfeec13b423267ab3d283/src/gclue-location-source.c#L408-417
```c
/* Randomization is needed to stop apps from calculationg the
 * actual location. */
distance = (gdouble) g_random_int_range (1, 3);
if (g_random_boolean ())
        latitude += distance * LATITUDE_IN_KM;
else
        latitude -= distance * LATITUDE_IN_KM;
accuracy += 3000;
```
Assume that each time we ask MLS for our location, it gives us the same coordinate back. Then, considering only latitude:
- The first time the app requests our location, it gets back latitude a. It knows the true latitude is in the range [a - 3, a + 3].
- Next time (can apps explicitly trigger an update? otherwise, maybe an update fires when a new wifi AP becomes visible) it gets latitude b. Now it has more information: it knows the true latitude is in the range [max(a - 3, b - 3), min(a + 3, b + 3)].
```
Observation: |------a------|
Observation:     |------b------|
Inference:       |----------|
Observation:   |------c------|
Inference:       |-------|
```
The approach we took in Empathy was to truncate the coordinates, rather than randomizing them. This means the error is consistent between readings, so you can't draw this kind of inference. (I don't have the full record of conversations at the time but the commit message suggests the same rationale I remember and am parroting here.) When the device moves across the "border" between two ranges of coordinates, the client can work this out, but that's equally true with the randomization approach.