Weston DRM backend fails on 5120x1440 resolution using Intel Kaby Lake graphics
It seems that Weston is not able to find a working configuration when using the DRM backend on a Dell U4919DW (5120x1440@30Hz) connected via HDMI. On the same setup, GNOME Mutter seems to work.
I created two log files with drm_debug redirected to the weston_log output, one requesting 3840x1080@60.0 explicitly (4k.txt) and one requesting 5120x1440@30.0 (5k.txt).
The irony here is the bug number. Mind posting the kernel ring buffer with drm.debug set to 0xff to see more? I'm wondering if Mutter is smarter here and falls back to the next available resolution. I'd expect a resolution drop (and higher frame rate) to be noticeable, so I'm just checking, but I guess it doesn't hurt to ask. Comparing the logs I don't see anything weird (besides the different resolution), so I'd expect some pipe-related issue on the driver (i915) side.
Unfortunately GNOME Settings does not show the screen refresh rate. However, querying over D-Bus with busctl --user call org.gnome.Mutter.DisplayConfig /org/gnome/Mutter/DisplayConfig org.gnome.Mutter.DisplayConfig GetCurrentState shows it is using the same mode (gnome-mode-from-dbus.txt). That is also what the screen itself tells me.
I was planning to look into this myself a bit deeper. It really seems the atomic mode set fails in this case for some reason. I noticed that GNOME uses a different plane (1B vs. 1A), and the src/dst resolutions of that plane seem to differ (although what Mutter is doing here seems odd to me).
That also clears up some confusion I had: I was pretty sure that when I bought the monitor some months back, I had to run it at 4k when using my laptop. Some time later I was surprised that I was actually able to use 5k, and I was wondering if I had just been too stupid to select the resolution correctly. But I see now that this really did change with Linux 5.4. It seems I was sane after all.
However, the question still stands: why does it not work with Weston? I am trying to get more information on what the exact plane configuration is with Mutter.
Stefan Agner changed title from Weston DRM backend fails on 5120x1440 resolution to Weston DRM backend fails on 5120x1440 resolution using Intel Kaby Lake graphics
So it seems that Weston is using a different framebuffer modifier. From what I understand, this is chosen by Mesa? With that in mind I was thinking that the pixman backend may use a different framebuffer modifier, and indeed, when using --use-pixman, 5k works: weston-log-start-5k-use-pixman.txt.
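For reference, if I understand the flow correctly, modifier selection roughly goes like this: the DRM backend gathers the modifiers a plane advertises and hands them to GBM, and Mesa picks the actual layout when allocating the surface. The wrapper below is only a hypothetical sketch of that split; the two gbm_surface_create* entry points are the real API.

```c
/* Sketch only, not Weston's actual code: the compositor passes the
 * plane's advertised modifiers to GBM and lets the driver (Mesa)
 * pick the final layout; the compositor never chooses it itself. */
#include <gbm.h>
#include <drm_fourcc.h>

static struct gbm_surface *
create_scanout_surface(struct gbm_device *gbm, uint32_t width, uint32_t height,
                       const uint64_t *modifiers, unsigned int num_modifiers)
{
	if (num_modifiers > 0) {
		/* Modifier-aware path: the driver may pick e.g. a Y-tiled layout. */
		return gbm_surface_create_with_modifiers(gbm, width, height,
							 DRM_FORMAT_XRGB8888,
							 modifiers, num_modifiers);
	}

	/* Legacy path: implicit (driver-default, typically linear/X-tiled) layout. */
	return gbm_surface_create(gbm, width, height, DRM_FORMAT_XRGB8888,
				  GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);
}
```

That split would also explain why --use-pixman works here: the pixman path renders into linear dumb buffers and never takes the modifier-aware allocation.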
Ah, right. Some modifiers use more bandwidth than others. Mutter doesn't support modifiers, so it is able to enable a higher resolution.
In wlroots, we had to implement a fallback to disable modifiers if modesetting fails.
This also interacts with multi-output setups (where you need to disable modifiers on already-enabled outputs to be able to light up an additional output). This part isn't yet implemented in wlroots.
Hm, this seems somewhat hard to do in practice, as the modifiers are advertised to GBM in drm_output_enable; however, the atomic state commit fails in drm_repaint_flush, when we try to apply the initial state.
I guess we would either have to build a state and use drm_pending_state_test at output enable time, or recreate the GBM surface when applying the state fails during repaint. Neither option sounds very nice.
Is there maybe another method to influence modifier selection?
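For illustration, here is a minimal sketch of the "test before enabling" idea using plain libdrm rather than Weston's internal drm_pending_state helpers: a TEST_ONLY atomic commit against a candidate framebuffer. The property IDs and the framebuffer are assumed to have been looked up/created already, and a complete request would also set the plane's SRC_*/CRTC_* coordinates plus the CRTC's MODE_ID/ACTIVE and the connector's CRTC_ID.

```c
/* Minimal TEST_ONLY atomic check: ask the kernel whether this
 * plane/CRTC/framebuffer combination would be accepted, without
 * actually programming the hardware. */
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <stdbool.h>

static bool
test_plane_fb(int drm_fd, uint32_t plane_id, uint32_t crtc_id, uint32_t fb_id,
              uint32_t prop_fb_id, uint32_t prop_crtc_id)
{
	drmModeAtomicReq *req = drmModeAtomicAlloc();
	int ret;

	if (!req)
		return false;

	drmModeAtomicAddProperty(req, plane_id, prop_fb_id, fb_id);
	drmModeAtomicAddProperty(req, plane_id, prop_crtc_id, crtc_id);

	/* The kernel validates the configuration, including whether the
	 * framebuffer's modifier fits the display pipe's limits. */
	ret = drmModeAtomicCommit(drm_fd, req,
				  DRM_MODE_ATOMIC_TEST_ONLY |
				  DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
	drmModeAtomicFree(req);

	return ret == 0;
}
```

If such a test fails for a buffer with a Y-tiled modifier, the compositor could recreate the GBM surface without modifiers and retry; doing this at drm_output_enable time would require a throwaway buffer, since nothing has been rendered yet.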
have to build a state and use drm_pending_state_test at output enable
This doesn't sound that bad; there could be other situations where you might want to test out the pending state before doing the commit. I think the cloning/mirroring work had something similar (where you could end up with an invalid commit -- this was on desktop when I tried it). We could fall back in the same manner to some simpler state.
I think one of the issues was that we don't really know what action to take if we don't get a distinct errno back, but with just a few use cases around, maybe we could simply try them out one after another?
state-propose.c has some code, called from drm_assign_planes, which tries out layer variants. But that runs well after drm_output_enable. From what I can tell, we do not do any state checks at drm_output_enable time so far, and I am not sure whether we would have the means to build a meaningful state that early.
@emersion do you mean to do that at drm_output_enable time or later, at render time?
A rather cheap option to sidestep the whole issue is to introduce an environment variable, e.g. WESTON_DISABLE_FB_MODIFIERS, similar to the WESTON_DISABLE_UNIVERSAL_PLANES we already have. That isn't a proper solution, but it would at least give a runtime knob to get it working. Not sure how common such modifier restrictions are in practice.
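Purely as a sketch of what that knob could look like (WESTON_DISABLE_FB_MODIFIERS is only a proposed name here, nothing like this exists in Weston yet), following the usual environment-variable pattern:

```c
#include <stdlib.h>
#include <stdbool.h>

/* Hypothetical runtime knob: if the variable is set, the DRM backend
 * would simply not advertise any modifiers to GBM, so buffers end up
 * with the driver's implicit (default) layout. */
static bool
fb_modifiers_disabled(void)
{
	const char *env = getenv("WESTON_DISABLE_FB_MODIFIERS");

	return env != NULL && env[0] != '\0';
}
```

When this returns true, the backend would pass an empty modifier list to GBM and take the plain gbm_surface_create() path, leaving the buffer layout to the driver.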
do you mean to do that at drm_output_enable time or later, at render time?
Ideally it should be done at enable time, so that we don't try to enable an output that really can't be enabled. This may require more work, of course…
A rather cheap option to sidestep the whole issue is to introduce an environment variable, e.g. WESTON_DISABLE_FB_MODIFIERS, similar to the WESTON_DISABLE_UNIVERSAL_PLANES we already have.
We have exactly this in wlroots as well to help with broken multi-output issues.
A rather cheap option to sidestep the whole issue is to introduce an environment variable, e.g. WESTON_DISABLE_FB_MODIFIERS, similar to the WESTON_DISABLE_UNIVERSAL_PLANES we already have. That isn't a proper solution, but it would at least give a runtime knob to get it working. Not sure how common such modifier restrictions are in practice.
I don't know of any driver other than Intel where using modifiers breaks single-plane scanout.
Modifiers increase bandwidth usage in general, as far as I've understood, so blowing past the limit could theoretically happen on any driver. Maybe it only happens on Intel in practice...
In general it's not necessarily an increase or decrease.
The specific issue is that Intel's scanout engine still fetches single scanlines at a time, and the tiling-to-linear conversion happens after the fetch, rather than being closer to the memory interface. It relies on a FIFO which is (a) globally shared between all CRTCs, and (b) way smaller than hoped. For Y-tiled layouts, which are the most efficient for the GPU, you need to fill the FIFO with something like the entire frame due to the column-major tile ordering, and you end up blowing through the limit pretty quickly.
Other designs, particularly AFBC, have the tiling handled far closer to the memory interface (e.g. at AXI), don't have global FIFOs, don't have tiny FIFOs, etc. So far I haven't seen any of that other hardware suffer from the same problem as Intel.
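A back-of-envelope illustration of that FIFO pressure, assuming XRGB8888 (4 bytes per pixel) and the usual 4 KiB Intel Y-tile geometry of 128 bytes x 32 rows; the numbers are illustrative and not taken from i915's actual watermark code:

```c
/* Illustration only (not i915's real watermark math): reading one
 * scanline out of a Y-tiled buffer drags in a whole row of 4 KiB
 * tiles, i.e. 32 scanlines' worth of data, so the FIFO has to hold
 * far more than the single scanline a linear buffer needs. */
#include <stdio.h>

int main(void)
{
	const unsigned int width = 5120;     /* Dell U4919DW horizontal resolution */
	const unsigned int bpp = 4;          /* XRGB8888 */
	const unsigned int tile_rows = 32;   /* Intel Y-tile: 128 bytes x 32 rows */

	unsigned int linear_scanline = width * bpp;            /* one scanline */
	unsigned int y_tile_row = width * bpp * tile_rows;     /* one row of tiles */

	printf("linear:  %u KiB per scanline\n", linear_scanline / 1024);   /* 20 KiB */
	printf("Y-tiled: %u KiB per tile row\n", y_tile_row / 1024);        /* 640 KiB */
	return 0;
}
```

Against a FIFO that is both shared between all CRTCs and, as described above, much smaller than hoped, buffering 640 KiB per tile row for the 5120-pixel-wide mode is a very different proposition from the 20 KiB a linear scanline needs, which matches the observation that dropping modifiers lets the mode light up.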