Gestures unaccelerated data is not like the others
See !586 (closed) and my longer analysis in !586 (comment 830809) - the gestures `dx`/`dy` unaccelerated code doesn't really match the data provided by pointer motion events or pointer axis events.
For pointer motion events:
- `dx` is accelerated data in "pixel" coordinates, so a caller can generally assume that a delta of 1 is a one-pixel movement on a traditional (low-dpi) screen.
- `dx_unaccelerated` is "raw" device data and it's up to the caller to figure out what that means (e.g. check `MOUSE_DPI` or whatever else is available)
- touchpad events are only special in that they're always scaled to the x axis resolution
For pointer axis events:
- `value` is in the same coordinate system as pointer motion `dx`. There is no actual pointer acceleration applied but that's an implementation detail; otherwise it follows that a delta of 1 is a one-pixel movement.
- there is no API to get "raw" device data
For pointer gesture events:
- `dx` is the accelerated value, the same as for pointer motion events
- `dx_unaccelerated` is normalized to 1000dpi but otherwise not accelerated
- there is no API to get "raw" data
One of the things about touchpad acceleration is `TP_MAGIC_SLOWDOWN`, a factor of 0.2968 [1] as of libinput 1.17.0. This factor is applied to motion and axis events to slow down the movement. The factor is the result of empirical measurements; a 1:1 motion would otherwise be uncontrollably fast.
This factor isn't applied to pointer gesture events, so we have a double mismatch for the gesture unaccelerated data: it's not raw data and it's not slowed down to ~30% of the delta. This makes gesture deltas too fast to be useful, and it's tricky for callers to match the gesture movement with the pointer movement.
I looked at changing the gesture `dx_unaccelerated` to match the pointer motion's unaccelerated data, but that change alone is sufficient to break clients. The change is from `delta * (1000/25.4/resolution)` to just `delta`. For example, our test devices range from 10 units/mm to 200 units/mm, so the factor dropping off here is between 3.9 and 0.2 - a massive change for those devices that would make any caller relying on this data unusable.
Bringing the `TP_MAGIC_SLOWDOWN` into the fold is roughly similar: it would make the data more useful than it is now, but it's still a massive change in output data with the potential to break things.
So it looks like we'll probably need a new API to standardise this across the board and make the data useful.
[1] For the default linear profile the baseline of the curve is at `0.9 * TP_MAGIC_SLOWDOWN`, so it's closer to 0.267.