The failures here are caused by our tests requesting modes that cannot be satisfied by the link. Link training failures on this output have restricted us to a single lane at a link rate of 270000.

Here's an excerpt from one of the logs; we can see it initially failing with two lanes at the lowest link rate, then dropping down to one lane and still failing at a rate of 540000. It finally settles on one lane at a rate of 270000 as the best it can achieve:
```
<7> [893.126576] [drm:intel_dp_compute_config [i915]] DP link computation with max lane count 2 max rate 162000 max bpp 24 pixel clock 135000KHz
<7> [893.126609] [drm:intel_dp_compute_config [i915]] Force DSC en = 0
<7> [893.126640] [drm:intel_dp_compute_config [i915]] DP lane count 2 clock 162000 bpp 18
<7> [893.126672] [drm:intel_dp_compute_config [i915]] DP link rate required 303750 available 324000
...
<7> [895.074804] [drm:intel_dp_start_link_train [i915]] [CONNECTOR:224:DP-4] Link Training failed at link rate = 162000, lane count = 2
...
<7> [901.225019] [drm:intel_dp_start_link_train [i915]] [CONNECTOR:224:DP-4] Link Training failed at link rate = 540000, lane count = 1
...
<7> [901.376322] [drm:intel_dp_compute_config [i915]] DP link computation with max lane count 1 max rate 270000 max bpp 24 pixel clock 148500KHz
```
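The retry order in that excerpt is consistent with a fallback strategy of stepping down the link rate at the current lane count first, then, once the lowest rate has failed, halving the lane count and starting over from the highest rate. Here's a minimal sketch of that progression; this is my own simplified model of the behavior visible in the log, not the actual i915 code:

```c
#include <stdio.h>

static const int rates[] = { 162000, 270000, 540000 }; /* RBR, HBR, HBR2 */
#define NUM_RATES (int)(sizeof(rates) / sizeof(rates[0]))

/* Pick the next (rate, lanes) pair to try after a training failure;
 * returns -1 when no fallback remains. */
static int fallback(int *rate, int *lanes)
{
    int i;

    for (i = 0; i < NUM_RATES && rates[i] != *rate; i++)
        ;

    if (i > 0) {
        *rate = rates[i - 1];         /* lower the rate first */
    } else if (*lanes > 1) {
        *lanes /= 2;                  /* lowest rate failed: drop lanes */
        *rate = rates[NUM_RATES - 1]; /* and retry from the top rate */
    } else {
        return -1;                    /* nothing left to try */
    }
    return 0;
}

int main(void)
{
    int rate = 162000, lanes = 2; /* the first failure in the excerpt */

    while (fallback(&rate, &lanes) == 0)
        printf("next attempt: link rate = %d, lane count = %d\n",
               rate, lanes);
    return 0;
}
```

Run against the excerpt's starting point (2 lanes at 162000), this prints 540000/1 and then 270000/1, matching the attempts in the log.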
The 148500KHz pixel clock of this mode multiplied by the minimum RGB bpp of 18 means we require a data rate of 2673000/8 = 334125 to drive it. Since 334125 > 270000, the modeset fails:
```
<7> [901.376487] [drm:intel_dp_compute_config [i915]] Force DSC en = 0
<7> [901.376593] [drm:intel_atomic_check [i915]] Encoder config failure: -22
```
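To make the arithmetic concrete, here's a small standalone sketch of the bandwidth check using the formula the log lines imply: required = pixel_clock * bpp / 8, available = per-lane link rate * lane count (this also reproduces the "required 303750 available 324000" pair from the earlier excerpt). The helper names are mine, not the driver's:

```c
#include <stdio.h>

/* Required link data rate: pixel clock in kHz times bits per pixel,
 * divided by 8 bits per byte (same units the i915 log uses). */
static int dp_required_rate(int pixel_clock_khz, int bpp)
{
    return pixel_clock_khz * bpp / 8;
}

/* Available link data rate: per-lane link rate times lane count. */
static int dp_available_rate(int link_rate, int lane_count)
{
    return link_rate * lane_count;
}

int main(void)
{
    /* Values from the failing modeset: 148500KHz pixel clock, minimum
     * RGB bpp of 18, one lane at a link rate of 270000. */
    int required = dp_required_rate(148500, 18);  /* 334125 */
    int available = dp_available_rate(270000, 1); /* 270000 */

    printf("required %d available %d -> %s\n", required, available,
           required > available ? "reject (-EINVAL, the -22 above)" : "ok");
    return 0;
}
```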
Ultimately the behavior here is working as expected; our IGT tests just aren't sophisticated enough to recognize link training failures and retry with a smaller mode the way real userspace should.
If any part of the source, sink, or link cannot satisfy the desired mode, it should be rejected. In this case the link seems to be the limiting factor (due to earlier link failures when we attempted to use more lanes or higher data rates). The immediate failures of these tests might be "fixed" by replacing the cable on the fi-icl-u4 machine, since that appears to be the only machine suffering from these issues.
The end-user impact of this should be nil; real userspace should already react to link training failures and downgrade to smaller modes (Manasi added this capability to the software stack a couple years ago). The kernel itself is acting as expected by rejecting modes that it knows the hardware/connections cannot support.
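For reference, the userspace side of that works through the connector's "link-status" property: the kernel flips it to "Bad" after a training failure, and a compositor that sees that is expected to re-fetch the mode list (which may have been pruned) and modeset again with something the link can carry. A minimal probe of that property with libdrm might look like the sketch below; the helper function and the device path are my own, while the "link-status" property and DRM_MODE_LINK_STATUS_BAD flag are the upstream DRM interface:

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Returns 1 if the connector's "link-status" property reads "Bad",
 * 0 if it is good or absent, -1 on error. */
static int link_status_bad(int fd, uint32_t connector_id)
{
    drmModeObjectProperties *props;
    int bad = 0;
    uint32_t i;

    props = drmModeObjectGetProperties(fd, connector_id,
                                       DRM_MODE_OBJECT_CONNECTOR);
    if (!props)
        return -1;

    for (i = 0; i < props->count_props; i++) {
        drmModePropertyRes *prop = drmModeGetProperty(fd, props->props[i]);
        if (!prop)
            continue;
        if (strcmp(prop->name, "link-status") == 0)
            bad = (props->prop_values[i] == DRM_MODE_LINK_STATUS_BAD);
        drmModeFreeProperty(prop);
    }
    drmModeFreeObjectProperties(props);
    return bad;
}

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
    drmModeRes *res;
    int i;

    if (fd < 0 || !(res = drmModeGetResources(fd)))
        return 1;

    for (i = 0; i < res->count_connectors; i++)
        if (link_status_bad(fd, res->connectors[i]) == 1)
            printf("connector %u: link-status Bad, re-probe and modeset\n",
                   res->connectors[i]);

    drmModeFreeResources(res);
    return 0;
}
```

A real compositor would run this check from its hotplug-uevent handler rather than polling, then retry the modeset with a lower mode.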