The issue is a bit complicated and requires some context, so:
I've been using Ubuntu 14.04.0 (with updates) for many years and can't switch to any newer Linux distribution.
Whenever I attempt to upgrade to anything newer, I get problems with my eyes and head: the symptoms start in a muscle on the right side of my head, then move to my eyes (irritation, burning, discomfort, and so on), and then I get weakness through my whole body.
My hardware configuration stays the same across these upgrade attempts (Dell U2711 (CCFL) display + AMD HD5450 + DVI-I connection), and I get similar symptoms regardless of the driver used (open source or proprietary). The symptoms don't depend on the graphics card (at least, I've tried AMD/ATI HD5450 and HD6670, an Nvidia Quadro P400, and integrated Intel graphics), the monitor (Dell U2711 and Dell U2007, both CCFL), or the connection type (DVI, DP, HDMI).
The severity of the symptoms depends only on the distribution: they're worst on Ubuntu 19.04 and milder on Mint 19.1.
I think some relevant changes were added to the graphics stack around 2014, or maybe even before 2014, with the default behaviour switched over to them around 2014.
There are several reports of the same problem with Intel graphics from 2014:
Both my CCFL monitors (Dell U3011 and Dell U2711) are 8-bit panels, but they can support 10 bit via FRC. Disabling dithering doesn't help me, so I think FRC is somehow being triggered on modern OSes regardless of the 24-bit color depth specified in the system. Is there any way to switch off FRC?
I'm not really a display expert, but I guess you'd probably have to modify the driver. I'd start with set_temporal_dither() in drivers/gpu/drm/amd/display/dc/dce/dce_opp.c or drivers/gpu/drm/amd/display/dc/dcn10/dcn10_opp.c for amdgpu, or in one of the following for radeon:
drivers/gpu/drm/radeon/atombios_encoders.c
drivers/gpu/drm/radeon/cik.c
drivers/gpu/drm/radeon/evergreen.c
drivers/gpu/drm/radeon/r600.c
drivers/gpu/drm/radeon/radeon_connectors.c
drivers/gpu/drm/radeon/rs600.c
depending on which asic you have.
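For the HD 5450 (an evergreen/dce4 part), the radeon code that programs these bits is dce4_program_fmt() in evergreen.c. Here's a minimal sketch of the kind of change you'd experiment with there, forcing plain truncation instead of any dithering; it assumes the surrounding function variables (bpc, radeon_crtc) as in the kernel source, and is illustrative only, not a tested patch:

```c
/* Fragment inside dce4_program_fmt() in drivers/gpu/drm/radeon/evergreen.c:
 * program the FMT block to truncate rather than dither. */
u32 tmp = 0;

switch (bpc) {
case 6:
	tmp |= FMT_TRUNCATE_EN;                       /* truncate to 6 bpc */
	break;
case 8:
	tmp |= FMT_TRUNCATE_EN | FMT_TRUNCATE_DEPTH;  /* truncate to 8 bpc */
	break;
default:
	break;                                        /* 10 bpc: leave FMT in bypass */
}

WREG32(FMT_BIT_DEPTH_CONTROL + radeon_crtc->crtc_offset, tmp);
```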
@agd5f I've patched the dce4_program_fmt function in evergreen.c in the Linux kernel with additional debug information (I hope that's the proper place to do this, and that Linux doesn't use another, unpatched module for graphics).
So the current dithering flags on my system are FMT_TRUNCATE_EN | FMT_TRUNCATE_DEPTH (17). Resetting them to 0 doesn't help either.
Here is the debug output from kern.log:
[drm] in dce4_program_fmt evergreen.c bpc = 8, dither=0
According to the git history, dce4_program_fmt was introduced by you. Could you clarify a bit what the FMT_TRUNCATE_EN and FMT_TRUNCATE_DEPTH flags mean, and what the difference is between the FMT_TRUNCATE_EN | FMT_TRUNCATE_DEPTH and 0 behaviours?
In the commit message you also mentioned that:
The FMT blocks control how data is sent from the backend of the display pipe to the monitor.
Are there any other flags that affect this behaviour? And maybe other flags/changes that could affect how data is represented on the display, including, for example, changes in display frequencies (because, as I mentioned above, I don't have any problems on old Ubuntu 14.04)?
FMT_BIT_DEPTH_CONTROL controls the output bit depth. FMT_TRUNCATE_EN enables bit reduction by truncation (e.g., if the display is 6 bpc and the source data is 8 bpc, it will be truncated to 6 bpc). FMT_TRUNCATE_DEPTH determines what depth you are truncating to (0 = 6 bpc, 1 = 8 bpc). I'm not sure what the default is if the register is programmed to 0.
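Concretely, with the bit positions from evergreend.h, the value 17 you printed decodes like this (a worked example, assuming those header definitions):

```c
/* Bit positions as defined in evergreend.h (radeon driver): */
#define FMT_TRUNCATE_EN     (1 << 0)  /* 0x01: enable bit reduction by truncation   */
#define FMT_TRUNCATE_DEPTH  (1 << 4)  /* 0x10: 0 = truncate to 6 bpc, 1 = to 8 bpc  */

/* 17 decimal == 0x11 == FMT_TRUNCATE_EN | FMT_TRUNCATE_DEPTH,
 * i.e. "truncate the pipe output down to 8 bpc". */
```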
As I remember, on my working machine there were some kernel crashes/problems with console errors during boot (the installation itself was fine). The current testing machine (where I did the kernel patching above) requires a fresh kernel for new hardware support, so I'm not sure I can do a proper bisect on it (it runs MX Linux 19 with a 5.2 kernel).
ucDigSel selects the digital encoder to program. IIRC, there is a selectable front end encoder which can drive any back end, and a back end which is tied to a physical display connector. Other than DP MST (where you have multiple streams feeding a single connector), we usually use a 1:1 mapping from front end to back end. I wouldn't recommend changing it.
Sure, you just have to be consistent when you program the display pipeline. All of the encoders are the same, there are just multiple instances. That said, on some chips the front end and back end are actually not routable, so you have to use a particular front end for a specific back end. See radeon_atom_pick_dig_encoder() for more information.
All of the encoders are the same, there are just multiple instances
So, do ENCODER_OBJECT_ID_INTERNAL_UNIPHY / ENCODER_OBJECT_ID_INTERNAL_UNIPHY1 / ENCODER_OBJECT_ID_INTERNAL_UNIPHY2 all perform identical encoding of the signal sent to the display? If yes, then this isn't what I'm looking for.
I think that if the problem isn't linked to dithering, then I could be affected by the way data is encoded/transferred from the video card to the display. Is it possible to somehow change the signal encoding/transfer scheme?
The digital encoders and phys are universal. They can output DP or TMDS (or LVDS on some asics). If they are wired to an HDMI port, you program them for TMDS. If they are wired to an eDP display, program them to output DP. Etc.
The FMT block is independent of the encoders and controls the bit depth output to the encoders. You can set up the FMT blocks for either dithering (spatial or temporal) or truncation/bypass.
@agd5f @hwentland I wonder, could there be any backlight-related changes? As far as I know, most early LED-backlit panels use PWM for brightness levels: could the kernel/drm/driver/whatever have had related changes for this (that could unexpectedly also affect CCFL-backlit displays)? Or maybe other changes that were required for LED-backlit display support (and at some point started applying to other display types)?
If you're driving the card through DVI our HW won't be able to do anything with backlight. We can only control backlight on LVDS or eDP (integrated) panels.
I'm not very familiar with our older cards, like the HD 5450, but as far as I can tell FMT is the only block that would do temporal dithering. FMT_TRUNCATE_EN is what you'll want, with a FMT_TRUNCATE_DEPTH of 1 to make sure our FMT is truncating to 8-bit, rather than dithering. You can also try a FMT_TRUNCATE_DEPTH of 0 to truncate down to 6-bit (although color gradients will show significant banding in this case).
Yes, the display is connected via DVI (dual link) to the HD 5450. If it matters, I also have a similar issue with the slightly newer Radeon HD 6670 on both DVI and DP connections.
If you're driving the card through DVI our HW won't be able to do anything with backlight
Could the kernel (or another module) do it, or is that totally impossible over a DVI connection?
Am I right that the pipeline and encoding scheme used to transfer the signal/image from the video card to the display should be exactly the same as it was 10 years ago (except maybe for the sequence of transferred data/commands and the intervals between them)?
For external panels (DVI, DP, HDMI, VGA, etc.), the backlight is controlled via the monitor itself (e.g., via the OSD or buttons on the monitor). A few select old monitors supported what's called DDC/CI, a configuration interface that let applications talk to the monitor over i2c via the GPU board. However, most of the DDC/CI commands were vendor specific, and there is no common infrastructure for this on any OS, including Linux. You would have to go out of your way to set something like that up, using tools you'd probably have to hack up yourself. So in general, I'd say it's safe to assume nothing is messing with the backlight on Linux.
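(For reference, one off-the-shelf tool that speaks DDC/CI over i2c is ddcutil; if you want to check whether anything is touching the monitor that way — assuming your distribution packages it and the i2c-dev module is loaded — you could try:

ddcutil detect
ddcutil getvcp 10

where VCP feature 0x10 is the standard brightness control.)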
We've barely touched the radeon driver in the last 5 years or so, so I doubt much has changed. Prior to that we did make periodic changes, but mainly to fix bugs or add new features or asic support.
Thank you! It looks like both my monitors support DDC/CI, and maybe something changed around its support in recent years. So this support isn't drm-related at all?
I've also tried comparing two slow-motion videos (at 240 fps) taken on the good and bad OS. But this comparison didn't give me any new information about differences in pixel rendering on the display, because the pictures in both videos look practically identical.
But on all the Linuxes I've tried after good old Ubuntu 14.04 (Arch-based, Debian-based including Ubuntu 16-19, Fedora, Suse, Mandriva, Mageia, etc.), there is one noticeable change in color representation: colors are much brighter and cooler than on Ubuntu 14.04 (with the same video card, display, and display settings). It looks like there were some core changes (in drm? the kernel?) to color processing (or gamma correction). @agd5f @hwentland Do you know anything about this color switch? Could you point to the code or module that could perform such correction/processing?
The user (e.g., desktop environment) handles the display gamma. The driver just provides the knobs. You can use xrandr or xgamma (if using X) to adjust the gamma. For wayland compositors, they handle it directly via KMS.
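For example, to reset both to neutral under X (replace DVI-0 with the output name reported by xrandr -q):

xgamma -gamma 1.0
xrandr --output DVI-0 --gamma 1.0:1.0:1.0 --brightness 1.0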
@agd5f @hwentland Is it possible to somehow capture (log?) the data that is sent from the video card to the display in each frame? Could you point to the best place to perform such logging?
I'm also considering buying a recent AMD card to test with the amdgpu driver; maybe it will help me find the root of my problem (at least the amdgpu driver is actively developed).
The GPU sends a signal to the monitor. The state of that signal is determined by the register state in the display controller. You could dump the register state of the display hardware to see how it's configured, but unless you know what particular state is the cause of the issue, it will be hard to identify. The registers we've pointed out are the ones related to dithering. If you could get a working and a non-working case on the same hardware, you could dump the registers and diff them to see what's different.
Grab the radeonreg tool from here:
https://cgit.freedesktop.org/~airlied/radeontool/
Then run (as root):
radeonreg regs dce4
or
radeonreg regs dce5
etc.
Depending on what asic you have. dce3 is r6xx/r7xx asics. dce4 is evergreen asics. dce5 is NI, and dce6 is SI.
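For the working vs. non-working comparison mentioned above, that would look something like:

radeonreg regs dce4 > regs-good.txt   # run on the good OS
radeonreg regs dce4 > regs-bad.txt    # run on the problem OS
diff regs-good.txt regs-bad.txt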
Is it correct that all the register numbers in the left column correspond exactly to the ones defined in evergreend.h/r600d.h and neighbouring files (i.e., there is no extra offset applied when the radeon driver writes register values, e.g. WREG32(FMT_BIT_DEPTH_CONTROL + radeon_crtc->crtc_offset, tmp);)?
@agd5f I've tried to update registers via sudo radeonreg regset register valueFromGoodOS (e.g. sudo radeonreg regset 00000004 04000201), but without success: the values were not changed. What could I do here?
Is there also any reference where I can find the registers' meanings?
Yes, the first column is the register offset in bytes and the second column is the register value. Ignore the diff below 0x6000; that is just legacy vga stuff and audio-related things. All of the display registers live at higher offsets. You can also ignore the address registers (e.g., CUR_SURFACE_ADDRESS, CUR_SURFACE_ADDRESS_HIGH), since the memory addresses of surfaces may differ between runs of the driver.
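So, for example, to find a particular CRTC's FMT register in the dump, you add the per-CRTC block offset to the register define. A hypothetical worked example — verify both constants against evergreend.h / evergreen_reg.h for your asic:

```c
#define FMT_BIT_DEPTH_CONTROL            0x6fc8  /* per evergreend.h */
#define EVERGREEN_CRTC1_REGISTER_OFFSET  0xc00   /* CRTC0's offset is 0 */

/* CRTC1's FMT_BIT_DEPTH_CONTROL would then appear in the dump at
 * 0x6fc8 + 0xc00 = 0x7bc8. */
```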