I have a really crappy Philips "HD ready" TV. This TV has no option to change the color format, and it always assumes full-range RGB. The problem is that I upgraded my graphics card, and with the new RX 580 I'm using the amdgpu driver. amdgpu automatically chooses YCrCb or limited RGB (though limited range is seemingly not reported as supported, so probably not that) instead of full RGB, and all my colors look really bad. Black levels are grey and everything looks washed out and blurry.
From the Xorg log:
[ 6.523] (II) AMDGPU(0): Supported color encodings: RGB 4:4:4 YCrCb 4:4:4
If I use an adapter and plug it in via DVI instead of HDMI, it looks fine. That has its compromises, though (I now have to connect my 2nd monitor through a VGA-to-HDMI adapter, which has its own issues). Kernel 4.18 also breaks this workaround: it doesn't recognize any resolutions, and seemingly no EDID at all. HDMI behaves the same as before.
I also tried overriding the EDID to mark YCrCb as not supported, but that failed.
The solution to all of this hassle would be to have the "output_csc" option available on amdgpu, as is apparently possible with radeon.
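For reference, radeon apparently exposes this as a RandR output property, so it can be changed at runtime with something along the lines of the following (the output name and value are placeholders, not verified here; xrandr --prop on a radeon setup lists the values it actually accepts):

xrandr --output HDMI-0 --set output_csc <value>

That per-output knob is exactly what seems to be missing on amdgpu.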
With amdgpu.dc=0 it seems to work just fine. I've booted up with my normal setup, switched the cables so I am connected by HDMI directly, and I get good colors.
Same problem. Disabling DisplayCore gets me crisp and beautiful colors, but at the cost of audio over HDMI. Faded colors with dc=1 have been a long-standing issue for me.
I just upgraded to new Navi 10 hardware (5700 XT). Using amdgpu.dc=0 no longer seems to be an option, as no monitor output is detected at all and X doesn't start. So it looks like we are now forced to enable DisplayCore and live with the wrong color range. Do you know of any workaround other than replacing the old HDMI monitors?
GPU out: DisplayPort -> Monitor in: HDMI (using an adapter cable)
GPU out: DVI -> Monitor in: DVI
GPU out: HDMI -> TV in: HDMI
When I use amdgpu.dc=1 the blacks get crushed (noticeably: greys become black) and overall there are subtle colour differences; of the three, only the TV looks correct.
With amdgpu.dc=0 everything looks correct.
I also have a laptop with an integrated Vega GPU (Ryzen 2500U) running amdgpu.dc=1, and the colours are fine on its built-in panel.
I'm attaching the output of Xorg and xrandr --verbose (the TV was not plugged in when I dumped this):
Today I noticed the same problem actually happens on Windows! It seems that when connected via the DisplayPort output (using an HDMI-to-DP cable) the HW/driver prefers YCrCb444 instead of RGB.
But unlike on Linux, switching to RGB is only a few steps away in the AMD Radeon control panel.
I don't see a way to force RGB with the amdgpu driver. Is this possible? Could such a feature be added (like the radeon and Intel drivers have)?
I have worked up a patch (attached, made against Fedora 32's 5.7.17) which implements a choice of either RGB or YCbCr444 as a module parameter for amdgpu.
The parameter is hdmi_pixenc and can be specified on the kernel command line at boot as, e.g.
amdgpu.hdmi_pixenc=HDMI-A-1:RGB
to force the use of RGB encoding on output HDMI-A-1. Note that the HDMI-A-1: prefix can be omitted to apply the setting to all HDMI outputs. To use YCbCr444, replace RGB with YCBCR444. If you specify any other encoding (or anything else incorrect), it'll behave as before (as if the parameter were absent).
You can also change it at runtime by writing to /sys/module/amdgpu/parameters/hdmi_pixenc, but this will not change the format immediately -- you'll need to replug or power-cycle the monitor (in my case tested with a Dell P2720).
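For example, as root (same value syntax as the boot parameter):

echo HDMI-A-1:RGB > /sys/module/amdgpu/parameters/hdmi_pixenc

and then unplug/replug the HDMI cable so the new encoding gets picked up.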
Never made a kernel patch at this level before, use at your own risk, etc.
Those in charge -- I realise the patch is probably a bit rough, but what do I have to do to get this upstreamed?
OK, I should confess -- I really have no idea where to begin here, as I have approximately zero knowledge of the KMS API. Is there an example of DRM property exposure elsewhere in amdgpu that I could study? Also (as hinted at by the need to power-cycle the screen), I don't know my way around amdgpu well enough (yet) to understand how to properly apply the encoding change at runtime.
The property is a 'user HDMI pixel encoding' which right now may be auto (pre-patch behaviour), RGB or YCbCr444.
Note that in this patch the module parameter can be set at boot time only (which is what I'm using now to force RGB). I don't yet know how to modify the user_hdmi_pixenc property from userspace, so at the moment this part of the functionality is untested.
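In principle a connector property can be poked from userspace through libdrm's legacy interface; a minimal sketch, assuming a property name like the one in my patch and a connector ID taken from modetest -c (both placeholders, error handling mostly omitted):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Find a connector property by name and set it to 'value'. */
static int set_connector_property(int fd, uint32_t connector_id,
                                  const char *name, uint64_t value)
{
    drmModeObjectProperties *props =
        drmModeObjectGetProperties(fd, connector_id, DRM_MODE_OBJECT_CONNECTOR);
    int ret = -1;

    if (!props)
        return -1;

    for (uint32_t i = 0; i < props->count_props; i++) {
        drmModePropertyRes *prop = drmModeGetProperty(fd, props->props[i]);

        if (prop && !strcmp(prop->name, name))
            ret = drmModeObjectSetProperty(fd, connector_id,
                                           DRM_MODE_OBJECT_CONNECTOR,
                                           prop->prop_id, value);
        drmModeFreeProperty(prop);
    }
    drmModeFreeObjectProperties(props);
    return ret;
}

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);   /* device node assumed */

    /* Connector ID, property name and enum value are all placeholders. */
    if (set_connector_property(fd, 42, "user hdmi pixel encoding", 1))
        fprintf(stderr, "failed to set property\n");
    return 0;
}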
Will do; however I don't have any non-HDMI amdgpu hardware so I won't be able to test this myself.
What I'm thinking of doing is reworking the option values to "auto", "rgb", "ycbcr420", "ycbcr422" and "ycbcr444", in line with enum dc_pixel_encoding. fill_stream_properties_from_drm_display_mode() has some non-trivial logic for deciding the encoding, and the final choice will be written back into the property (which may differ from what the user requested).
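Roughly the mapping I have in mind (a sketch only -- the helper name and the fall-back behaviour are mine; the PIXEL_ENCODING_* values are DC's enum dc_pixel_encoding):

/* Map the user-facing option string onto DC's pixel encoding enum.
 * PIXEL_ENCODING_UNDEFINED is treated as "auto", i.e. keep the
 * driver's existing choice. */
static enum dc_pixel_encoding user_option_to_pixel_encoding(const char *opt)
{
    if (!opt || !strcmp(opt, "auto"))
        return PIXEL_ENCODING_UNDEFINED;
    if (!strcmp(opt, "rgb"))
        return PIXEL_ENCODING_RGB;
    if (!strcmp(opt, "ycbcr420"))
        return PIXEL_ENCODING_YCBCR420;
    if (!strcmp(opt, "ycbcr422"))
        return PIXEL_ENCODING_YCBCR422;
    if (!strcmp(opt, "ycbcr444"))
        return PIXEL_ENCODING_YCBCR444;

    return PIXEL_ENCODING_UNDEFINED; /* unknown value: behave as before */
}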
If you look at cdv_intel_dp_set_property() in cdv_intel_dp.c (which does the same thing, but for Intel hardware), it calls drm_crtc_helper_set_mode() once it's done.
I believe you are missing a call to amdgpu_connector_property_change_mode(&amdgpu_encoder->base), like the underscan_hborder and underscan_type properties do.
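i.e. something of this shape in the legacy connector set_property path -- just a sketch; the property pointer and the stored field are hypothetical stand-ins for whatever the patch actually uses, only amdgpu_connector_property_change_mode() is the real call being suggested:

/* Sketch of the pattern inside amdgpu_connector_set_property(). */
if (property == adev->mode_info.user_hdmi_pixenc_property) { /* hypothetical */
    if (amdgpu_encoder->user_hdmi_pixenc == val)   /* hypothetical field */
        return 0;                                  /* nothing changed */

    amdgpu_encoder->user_hdmi_pixenc = val;

    /* Re-apply the current mode so the new encoding actually takes effect,
     * the same way the underscan properties do. */
    amdgpu_connector_property_change_mode(&amdgpu_encoder->base);
}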
That's in amdgpu_connector_set_property() in amdgpu_connectors.c -- what's the difference between that function and amdgpu_dm_atomic_connector_set_property() in amdgpu_dm.c? Are they both called when that property is modified?
I'm no expert in this area. It looks like the atomic one is an alternate path for setting parameters via atomic modesetting.
i.e.
The primary benefit of atomic mode-setting is that a mode-set operation can be fully tested, to ensure the driver and hardware can handle it, before it is committed.
AFAIK X11 currently defaults to disabling atomic modesetting.
If I understand this right, without atomic modesetting each setting change is flushed to the monitor individually, e.g. changing from 1600x900@75Hz to 1920x1080@60Hz first switches the resolution from 1600x900 to 1920x1080, then the refresh rate from 75Hz to 60Hz.
This is a problem if the monitor does not support 1920x1080@75Hz (since it would temporarily be driven with that combination); hence atomic modesetting was born, which postpones the final flush until commit.
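For the curious, this is roughly what the test-before-commit part looks like through libdrm's atomic API -- a sketch, assuming an already-opened DRM fd with DRM_CLIENT_CAP_ATOMIC enabled and known object/property IDs:

#include <xf86drm.h>
#include <xf86drmMode.h>

/* Stage a property change, ask the driver to validate it, then commit. */
static int try_atomic_change(int fd, uint32_t object_id,
                             uint32_t prop_id, uint64_t value)
{
    drmModeAtomicReq *req = drmModeAtomicAlloc();
    int ret;

    if (!req)
        return -1;

    drmModeAtomicAddProperty(req, object_id, prop_id, value);

    /* Test the whole resulting state without applying anything. */
    ret = drmModeAtomicCommit(fd, req,
                              DRM_MODE_ATOMIC_TEST_ONLY |
                              DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
    if (ret == 0)
        /* The driver says it can handle it, so commit for real. */
        ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);

    drmModeAtomicFree(req);
    return ret;
}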
Thanks for the links, I'll read up. For now, I can't even assess whether what I've bodged together here is architecturally correct. Thinking out loud...
The DRM property (so, its existence/definition) is attached (using drm_object_attach_property) to aconnector->base.base (of types amdgpu_dm_connector, drm_connector and drm_mode_object respectively) -- like e.g. underscan_hborder_property.
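In code that step looks roughly like this (a sketch; the property name, enum values and helper are mine, the drm_* calls are the standard ones):

/* Userspace-visible values for the property (sketch). */
static const struct drm_prop_enum_list user_hdmi_pixenc_list[] = {
    { 0, "auto" },
    { 1, "rgb" },
    { 2, "ycbcr444" },
};

/* Called once during connector init, like the underscan properties. */
static void attach_user_hdmi_pixenc_property(struct drm_device *dev,
                                             struct amdgpu_dm_connector *aconnector)
{
    struct drm_property *prop =
        drm_property_create_enum(dev, 0, "user hdmi pixel encoding",
                                 user_hdmi_pixenc_list,
                                 ARRAY_SIZE(user_hdmi_pixenc_list));

    if (prop)
        drm_object_attach_property(&aconnector->base.base, prop, 0 /* auto */);
}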
The value of the DRM property is stored via amdgpu_dm_connector_atomic_set_property(), which is given a drm_connector_state. The value is placed in the user_hdmi_pixenc field of the dm_connector_state associated with that drm_connector_state (again, like other properties).
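i.e. roughly (sketch; user_hdmi_pixenc and the property pointer are from my patch, to_dm_connector_state() is the existing container_of helper):

/* Inside amdgpu_dm_connector_atomic_set_property() -- the new case. */
struct dm_connector_state *dm_state = to_dm_connector_state(state);

if (property == adev->mode_info.user_hdmi_pixenc_property) { /* hypothetical */
    dm_state->user_hdmi_pixenc = val;  /* read back later during the modeset */
    return 0;
}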
The value is only read via pixel_encoding_from_drm_property() (off the drm_connector_state) in fill_stream_properties_from_drm_display_mode() -- which we know is called at least when the display is initialised or the monitor is power-cycled. (Although it looks like this is called under dm_update_crtc_state() -- maybe that's the clue?)
So I'll take a pause at this point lest I end up coding my way further down the wrong rabbit hole...
I wonder what this amdgpu_dm_atomic_commit_tail() does?