MCLK stuck at 1000MHz on 6800 XT on some refresh rates
Hi! I'm somewhat new to discrete GPUs, so maybe I'm missing something, but it looks like my card's memory clock is stuck at 1 GHz.
I did try some random kernel module parameters (amdgpu.ppfeaturemask=0xfffd7fff amdgpu.runpm=1), but they seem to have no effect. Setting /sys/class/drm/card0/device/power_dpm_force_performance_level to low also has no effect on the memory clocks.
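For reference, a sketch of the sysfs check described above (the card0 path is an assumption for a single-GPU system; writing requires root):

```shell
# Force the lowest DPM performance level, then read back which
# memory-clock state is actually active (marked with *)
echo low | sudo tee /sys/class/drm/card0/device/power_dpm_force_performance_level
cat /sys/class/drm/card0/device/pp_dpm_mclk
```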
Hardware description:
CPU: Ryzen 5950X
GPU: Radeon 6800 XT
Display: Philips 326M6VJRMB_00
Type of Display Connection: DisplayPort
System information:
Distro name and Version: Arch Linux
Kernel version: 5.10.0-rc6-1-mainline
How to reproduce the issue:
$ cat /sys/class/drm/card0/device/pp_dpm_mclk
0: 96Mhz
1: 456Mhz
2: 673Mhz
3: 1000Mhz *

# cat /sys/kernel/debug/dri/0/amdgpu_pm_info
GFX Clocks and Power:
    1000 MHz (MCLK)
    500 MHz (SCLK)
    0 MHz (PSTATE_SCLK)
    0 MHz (PSTATE_MCLK)
    875 mV (VDDGFX)
    59.0 W (average GPU)
GPU Temperature: 55 C
GPU Load: 0 %
MEM Load: 1 %
SMC Feature Mask: 0x00003763a36b4dff
VCN: Disabled
Clock Gating Flags Mask: 0x38018300
    Graphics Medium Grain Clock Gating: Off
    Graphics Medium Grain memory Light Sleep: Off
    Graphics Coarse Grain Clock Gating: Off
    Graphics Coarse Grain memory Light Sleep: Off
    Graphics Coarse Grain Tree Shader Clock Gating: Off
    Graphics Coarse Grain Tree Shader Light Sleep: Off
    Graphics Command Processor Light Sleep: Off
    Graphics Run List Controller Light Sleep: Off
    Graphics 3D Coarse Grain Clock Gating: Off
    Graphics 3D Coarse Grain memory Light Sleep: Off
    Memory Controller Light Sleep: On
    Memory Controller Medium Grain Clock Gating: On
    System Direct Memory Access Light Sleep: Off
    System Direct Memory Access Medium Grain Clock Gating: Off
    Bus Interface Medium Grain Clock Gating: Off
    Bus Interface Light Sleep: Off
    Unified Video Decoder Medium Grain Clock Gating: Off
    Video Compression Engine Medium Grain Clock Gating: Off
    Host Data Path Light Sleep: On
    Host Data Path Medium Grain Clock Gating: On
    Digital Right Management Medium Grain Clock Gating: Off
    Digital Right Management Light Sleep: Off
    Rom Medium Grain Clock Gating: Off
    Data Fabric Medium Grain Clock Gating: Off
    Address Translation Hub Medium Grain Clock Gating: On
    Address Translation Hub Light Sleep: On
Try a custom resolution with longer vertical blanking timings (but proceed at your own risk); Navi has some odd limitations with dynamic VRAM clocking in that regard (far more restricted than Polaris or Nvidia).
I'm having the same issue, but with a non-XT Radeon 6800. My /sys/class/drm/card0/device/pp_dpm_mclk output is the same. I also noticed that /sys/class/drm/card0/device/pp_od_clk_voltage is empty if I try to get the available clocks and voltages.
This is on 5.9.14, which I think also has support for the 6000 series in amdgpu.
As an aside, @jmissao, do you know if this is the right issue tracker for our problem? It seems a little quiet. It wouldn't be a nice thing to do, but I'm almost tempted to ping one of the devs over at the Phoronix forums.
Yeah, I just tried with 5.10 and the latest linux-firmware-git, to no avail.
It seems weird that the people who ran all those Linux benchmarks, like Phoronix and Level1Techs, don't seem to have run into this. The GPU can reach memory clocks of 2000 MHz; running at half that seems like it should be fairly significant.
1000 MHz is the maximum stock frequency for the memory; read up on DDR memory. The problem @aufkrawall was addressing is that the memory doesn't actually clock down on these chips when certain settings are used on your outputs.
So if you reach 1000 MHz you are getting full performance, but since it doesn't clock down, the card draws more power than necessary at idle.
Seems to be the same behavior here. With my Navi 10, when I unplugged my other monitors, the memory clocked correctly. There is still plenty to do with these cards regarding clocking.
@tsih Yes, I was misinformed and didn't realize that GDDR6 memory would show its regular clock, which is half of the effective clock that vendors advertise, I guess. It's also confusing because on Windows the AMD tools show the effective clock, and older Polaris card configurations also went above 1 GHz (not being GDDR6, their clock is not doubled).
Anyway, I read through the amdgpu documentation and it is stated there. Thanks.
Nothing changed as far as I know, but today it's downclocking well enough:
GFX Clocks and Power:
    96 MHz (MCLK)
    0 MHz (SCLK)
    0 MHz (PSTATE_SCLK)
    0 MHz (PSTATE_MCLK)
    6 mV (VDDGFX)
    8.0 W (average GPU)
GPU Temperature: 44 C
GPU Load: 0 %
MEM Load: 9 %
SMC Feature Mask: 0x00003763a37f7dff
VCN: Disabled
Clock Gating Flags Mask: 0x38118305
    Graphics Medium Grain Clock Gating: On
    Graphics Medium Grain memory Light Sleep: Off
    Graphics Coarse Grain Clock Gating: On
    Graphics Coarse Grain memory Light Sleep: Off
    Graphics Coarse Grain Tree Shader Clock Gating: Off
    Graphics Coarse Grain Tree Shader Light Sleep: Off
    Graphics Command Processor Light Sleep: Off
    Graphics Run List Controller Light Sleep: Off
    Graphics 3D Coarse Grain Clock Gating: On
    Graphics 3D Coarse Grain memory Light Sleep: Off
    Memory Controller Light Sleep: On
    Memory Controller Medium Grain Clock Gating: On
    System Direct Memory Access Light Sleep: Off
    System Direct Memory Access Medium Grain Clock Gating: Off
    Bus Interface Medium Grain Clock Gating: Off
    Bus Interface Light Sleep: Off
    Unified Video Decoder Medium Grain Clock Gating: Off
    Video Compression Engine Medium Grain Clock Gating: Off
    Host Data Path Light Sleep: On
    Host Data Path Medium Grain Clock Gating: On
    Digital Right Management Medium Grain Clock Gating: Off
    Digital Right Management Light Sleep: Off
    Rom Medium Grain Clock Gating: Off
    Data Fabric Medium Grain Clock Gating: Off
    Address Translation Hub Medium Grain Clock Gating: On
    Address Translation Hub Light Sleep: On
I noticed this after playing with the monitor options, but it downclocks no matter which combination of FreeSync and HDR I choose. Maybe it's some kind of initialization issue?
$ cat /sys/kernel/debug/dri/0/amdgpu_pm_info
GFX Clocks and Power:
    1000 MHz (MCLK)
    500 MHz (SCLK)
    0 MHz (PSTATE_SCLK)
    0 MHz (PSTATE_MCLK)
    800 mV (VDDGFX)
    31.0 W (average GPU)
GPU Temperature: 50 C
GPU Load: 0 %
MEM Load: 1 %
SMC Feature Mask: 0x00003763a37f7dff
VCN: Disabled
Clock Gating Flags Mask: 0x38118305
    Graphics Medium Grain Clock Gating: On
    Graphics Medium Grain memory Light Sleep: Off
    Graphics Coarse Grain Clock Gating: On
    Graphics Coarse Grain memory Light Sleep: Off
    Graphics Coarse Grain Tree Shader Clock Gating: Off
    Graphics Coarse Grain Tree Shader Light Sleep: Off
    Graphics Command Processor Light Sleep: Off
    Graphics Run List Controller Light Sleep: Off
    Graphics 3D Coarse Grain Clock Gating: On
    Graphics 3D Coarse Grain memory Light Sleep: Off
    Memory Controller Light Sleep: On
    Memory Controller Medium Grain Clock Gating: On
    System Direct Memory Access Light Sleep: Off
    System Direct Memory Access Medium Grain Clock Gating: Off
    Bus Interface Medium Grain Clock Gating: Off
    Bus Interface Light Sleep: Off
    Unified Video Decoder Medium Grain Clock Gating: Off
    Video Compression Engine Medium Grain Clock Gating: Off
    Host Data Path Light Sleep: On
    Host Data Path Medium Grain Clock Gating: On
    Digital Right Management Medium Grain Clock Gating: Off
    Digital Right Management Light Sleep: Off
    Rom Medium Grain Clock Gating: Off
    Data Fabric Medium Grain Clock Gating: Off
    Address Translation Hub Medium Grain Clock Gating: On
    Address Translation Hub Light Sleep: On
Right! I forgot to mention that I use an RX 6800 (non-XT), which could explain the lower power draw?
If I were to try mesa-git, are there any other git packages I should use as well to match?
Yeah, I've kept an eye on that issue, and I thought it was the one I was experiencing, but I guess it only concerns RX 5000 series cards?
No, my display is a BenQ 4K@60Hz.
Please tell me if there is any more information I could provide to help figure something out. I'm all out of ideas at the moment.
GFX Clocks and Power:
    1000 MHz (MCLK)
    0 MHz (SCLK)
    1825 MHz (PSTATE_SCLK)
    1000 MHz (PSTATE_MCLK)
    6 mV (VDDGFX)
    35.0 W (average GPU)
GPU Temperature: 57 C
GPU Load: 0 %
MEM Load: 0 %
SMC Feature Mask: 0x00003763a37f7dff
VCN: Disabled
Clock Gating Flags Mask: 0x38118305
    Graphics Fine Grain Clock Gating: Off
    Graphics Medium Grain Clock Gating: On
    Graphics Medium Grain memory Light Sleep: Off
    Graphics Coarse Grain Clock Gating: On
    Graphics Coarse Grain memory Light Sleep: Off
    Graphics Coarse Grain Tree Shader Clock Gating: Off
    Graphics Coarse Grain Tree Shader Light Sleep: Off
    Graphics Command Processor Light Sleep: Off
    Graphics Run List Controller Light Sleep: Off
    Graphics 3D Coarse Grain Clock Gating: On
    Graphics 3D Coarse Grain memory Light Sleep: Off
    Memory Controller Light Sleep: On
    Memory Controller Medium Grain Clock Gating: On
    System Direct Memory Access Light Sleep: Off
    System Direct Memory Access Medium Grain Clock Gating: Off
    Bus Interface Medium Grain Clock Gating: Off
    Bus Interface Light Sleep: Off
    Unified Video Decoder Medium Grain Clock Gating: Off
    Video Compression Engine Medium Grain Clock Gating: Off
    Host Data Path Light Sleep: On
    Host Data Path Medium Grain Clock Gating: On
    Digital Right Management Medium Grain Clock Gating: Off
    Digital Right Management Light Sleep: Off
    Rom Medium Grain Clock Gating: Off
    Data Fabric Medium Grain Clock Gating: Off
    VCN Medium Grain Clock Gating: Off
    Host Data Path Deep Sleep: Off
    Host Data Path Shutdown: Off
    Interrupt Handler Clock Gating: On
    JPEG Medium Grain Clock Gating: Off
    Address Translation Hub Medium Grain Clock Gating: On
    Address Translation Hub Light Sleep: On
Just found a weird fix on my system. GNOME Settings shows:
I switched from the second 60.00 Hz to the first one and now it's much better:
GFX Clocks and Power:
    96 MHz (MCLK)
    0 MHz (SCLK)
    1825 MHz (PSTATE_SCLK)
    1000 MHz (PSTATE_MCLK)
    6 mV (VDDGFX)
    8.0 W (average GPU)
GPU Temperature: 58 C
GPU Load: 0 %
MEM Load: 10 %
SMC Feature Mask: 0x00003763a37f7dff
VCN: Disabled
Clock Gating Flags Mask: 0x38118305
    Graphics Fine Grain Clock Gating: Off
    Graphics Medium Grain Clock Gating: On
    Graphics Medium Grain memory Light Sleep: Off
    Graphics Coarse Grain Clock Gating: On
    Graphics Coarse Grain memory Light Sleep: Off
    Graphics Coarse Grain Tree Shader Clock Gating: Off
    Graphics Coarse Grain Tree Shader Light Sleep: Off
    Graphics Command Processor Light Sleep: Off
    Graphics Run List Controller Light Sleep: Off
    Graphics 3D Coarse Grain Clock Gating: On
    Graphics 3D Coarse Grain memory Light Sleep: Off
    Memory Controller Light Sleep: On
    Memory Controller Medium Grain Clock Gating: On
    System Direct Memory Access Light Sleep: Off
    System Direct Memory Access Medium Grain Clock Gating: Off
    Bus Interface Medium Grain Clock Gating: Off
    Bus Interface Light Sleep: Off
    Unified Video Decoder Medium Grain Clock Gating: Off
    Video Compression Engine Medium Grain Clock Gating: Off
    Host Data Path Light Sleep: On
    Host Data Path Medium Grain Clock Gating: On
    Digital Right Management Medium Grain Clock Gating: Off
    Digital Right Management Light Sleep: Off
    Rom Medium Grain Clock Gating: Off
    Data Fabric Medium Grain Clock Gating: Off
    VCN Medium Grain Clock Gating: Off
    Host Data Path Deep Sleep: Off
    Host Data Path Shutdown: Off
    Interrupt Handler Clock Gating: On
    JPEG Medium Grain Clock Gating: Off
    Address Translation Hub Medium Grain Clock Gating: On
    Address Translation Hub Light Sleep: On
Laurențiu Nicola changed title from "MCLK stuck at 1000MHz on 6800 XT" to "MCLK stuck at 1000MHz on 6800 XT on some refresh rates"
Clock Gating Flags Mask: 0x38118305
Graphics Fine Grain Clock Gating: Off
Graphics Medium Grain Clock Gating: On
Graphics Medium Grain memory Light Sleep: Off
Graphics Coarse Grain Clock Gating: On
Graphics Coarse Grain memory Light Sleep: Off
Graphics Coarse Grain Tree Shader Clock Gating: Off
Graphics Coarse Grain Tree Shader Light Sleep: Off
Graphics Command Processor Light Sleep: Off
Graphics Run List Controller Light Sleep: Off
Graphics 3D Coarse Grain Clock Gating: On
Graphics 3D Coarse Grain memory Light Sleep: Off
Memory Controller Light Sleep: On
Memory Controller Medium Grain Clock Gating: On
System Direct Memory Access Light Sleep: Off
System Direct Memory Access Medium Grain Clock Gating: Off
Bus Interface Medium Grain Clock Gating: Off
Bus Interface Light Sleep: Off
Unified Video Decoder Medium Grain Clock Gating: Off
Video Compression Engine Medium Grain Clock Gating: Off
Host Data Path Light Sleep: On
Host Data Path Medium Grain Clock Gating: On
Digital Right Management Medium Grain Clock Gating: Off
Digital Right Management Light Sleep: Off
Rom Medium Grain Clock Gating: Off
Data Fabric Medium Grain Clock Gating: Off
VCN Medium Grain Clock Gating: Off
Host Data Path Deep Sleep: Off
Host Data Path Shutdown: Off
Interrupt Handler Clock Gating: On
JPEG Medium Grain Clock Gating: Off
Address Translation Hub Medium Grain Clock Gating: On
Address Translation Hub Light Sleep: On
Same problem here since I got this card in December. On recent 5.12 kernels it only happens at my monitor's native resolution of 2560x1440, at any refresh rate; it does not happen at 1920x1080 at any refresh rate, for example. Gentoo Linux and KDE Plasma 5.20.5 user here, kernel 5.12.4 and linux-firmware 20210511.
$ xrandr -q
Screen 0: minimum 320 x 200, current 2560 x 1440, maximum 16384 x 16384
DisplayPort-0 disconnected (normal left inverted right x axis y axis)
DisplayPort-1 connected primary 2560x1440+0+0 (normal left inverted right x axis y axis) 597mm x 336mm
2560x1440 59.95 + 143.86* 119.88 99.95
1920x1200 59.95
1920x1080 143.85 60.00 50.00 59.94
1600x1200 59.95
1680x1050 59.95
1280x1024 75.02 60.02
1440x900 59.89
1280x960 60.00
1280x800 59.81
1152x864 75.00
1280x720 60.00 50.00 59.94
1024x768 75.03 70.07 60.00
832x624 74.55
800x600 72.19 75.00 60.32 56.25
720x576 50.00
720x480 60.00 59.94
640x480 75.00 72.81 66.67 60.00 59.94
720x400 70.08
DisplayPort-2 disconnected (normal left inverted right x axis y axis)
HDMI-A-0 disconnected (normal left inverted right x axis y axis)
I have the same bug and solved it by increasing the vertical back porch. It looks like some weird signaling issue, but it doesn't happen on my Windows dual boot.
Generate a custom resolution with cvt -r <width> <height> <rate> (e.g. cvt -r 2560 1440 60). The Modeline has this format:
                       +---- horizontal ----+  +----- vertical -----+
                       v    v    v    v        v    v    v    v
Modeline "2560x1440R"  241.50  2560 2608 2640 2720  1440 1443 1448 1481  +hsync -vsync
                       ^       ^    ^    ^    ^     (vertical group has the same layout)
                       |       |    |    |    +-- back porch end (total)
                       |       |    |    +-- sync pulse end
                       |       |    +-- front porch end (sync start)
                       |       +-- active pixels
                       +-- pixel clock (MHz)
Increase the vertical back porch in small steps (~10 lines); this raises the vertical total, the last value in the vertical group. Then recalculate the pixel clock as horizontal total * vertical total * refresh rate.
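Worked through with the numbers used further down in this thread (horizontal total 2720, vertical total raised to 1521, 60 Hz), the arithmetic is a one-liner:

```shell
# Pixel clock = horizontal total * vertical total * refresh rate
htotal=2720
vtotal=1521   # vertical total after raising the back porch
rate=60
pclk_hz=$((htotal * vtotal * rate))
# Format as MHz with three decimals, as used in a modeline
pclk_mhz=$(printf '%d.%03d' $((pclk_hz / 1000000)) $((pclk_hz % 1000000 / 1000)))
echo "$pclk_mhz MHz"   # 248.227 MHz
```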
I raised the vertical total to 1521 and got 248.227 MHz as the pixel clock. Test the new modeline (Modeline "MCLK-Fix" 248.227 2560 2608 2640 2720 1440 1443 1448 1521 +hsync -vsync) with xrandr:
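A sketch of applying such a mode with xrandr; the output name "DisplayPort-1" and the mode name are assumptions, so substitute your own from xrandr -q:

```shell
# Register the custom mode, attach it to the output, and switch to it
# ("DisplayPort-1" is an assumption -- check your outputs with `xrandr -q`)
xrandr --newmode "MCLK-Fix" 248.227 2560 2608 2640 2720 1440 1443 1448 1521 +hsync -vsync
xrandr --addmode DisplayPort-1 "MCLK-Fix"
xrandr --output DisplayPort-1 --mode "MCLK-Fix"
```

Note this is not persistent across reboots; if the screen blanks, switching back to the original mode (or rebooting) recovers it.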
Many thanks @zquarefish for the nice hint and guide. I also have a 3440x1440 144Hz display and tried @tac.minux's solution, but mine only goes down to 465 MHz, not to 96 MHz :(. Any clue what else I could try? (I tried up to 1608 vertical back porch instead of 1568.)
@pingubot I think you have to get your own values for your monitor, because ours are not the same (I've got an Iiyama GB3466WQSU-B1).
Get your values with "cvt 3440 1440 144" and adapt your modeline with a newly calculated pixel clock based on the vertical back porch you want; it should work.
@tac.minux I am well aware that copy and paste usually does not work, so I used my own values, but they are identical to yours; I actually have the same monitor :) : an Iiyama GB3466WQSU-B1.
Do you have any feature masks set? I am using amdgpu.ppfeaturemask=0xffffffff.
By the way, do you also face the following issue: #1422 (closed)?
I also have a constant 1000 MHz memory clock on my 6900 XT (reference), but I always thought that was still the known issue where the memory clock can't be lowered in multi-monitor mode due to flickering when changing clock speeds. Has that issue been resolved?
Anyway, I tried this and went up in steps of 10, from Modeline "2560x1440R" 497.25 2560 2608 2640 2720 1440 1443 1448 1525 +hsync -vsync to xrandr --newmode "MCLK-Fix7" 520.61 2560 2608 2640 2720 1440 1443 1448 1595 +hsync -vsync (using 120Hz) for both displays, but am still stuck at 1000 MHz.
Do I NEED amdgpu.ppfeaturemask=0xffffffff? I have none set at the moment.
@tac.minux I am on kernel 5.12 and will give 5.12.5 a try later; let's see if that helps. By the way, when lowering the resolution to 2560x1080, the card clocks down to 96 MHz without any issues.
@zquarefish Yes, it will switch to higher frequencies, but the lowest I had seen with kernel 5.12 was 4xx MHz. I tried kernel 5.12.5 now and it partly goes down to 96 MHz, but doesn't stay there. With two displays attached it sits at 4xx-6xx MHz.
Can you please explain how to calculate the pixel clock from the back porch values? I don't quite understand that part of your comment. I would like to test whether I can get normal power consumption. Thank you.
I have a reference RX 6700 XT with a Dell S2721DGFA monitor, and at the native resolution of 2560x1440 at 60/120 Hz the power draw is around 28 W. Only at 1080p 60Hz does it idle properly at around 7-8 W, like in Windows.
That did not work for me. My default back porch value is 1568; I tried +10 steps up to 1688 and calculated the pixel clock, but it did not work. I just got a blank screen when applying it (fortunately it is not permanent, and I could get my display working again after a reboot). I also tried -10 steps down to 1548, with the same result :(
This is the default modeline for me; I don't know if I did something wrong.
Thanks. I tried increasing the horizontal back porch, but it still did not work (blank screen). The fun fact is that it does work when increasing the vertical porch at 1440p 60Hz; I will try whether it works at least at 120Hz, since this fix is a no-go for 144Hz in my case at the moment.
We have confirmed the same issue on a notebook GPU (Raven) in #1455 (closed), where IMO the higher clock rate hurts even more. Fingers crossed we'll hear back from AMD again.
I have this problem with a 5700 XT too. At 2560x1080@144Hz my MCLK goes to maximum, and power consumption and heat increase.
I opened a bug 9 months ago, without any answer yet.
The mclk switch has to happen during a display blanking period. If the time needed for the mclk switch is longer than the blanking period, we can't change the clock, or you would get display flicker. Higher-clocked modes tend to have shorter blanking periods, hence we can't always change the mclk during that window.
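As a rough back-of-the-envelope illustration of that constraint (not the driver's actual computation), the vblank window of a mode can be estimated from its timings as (vtotal - vactive) * htotal / pixel_clock. Using the 2560x1440@60 modelines quoted earlier in this thread:

```shell
# Estimate the vertical blanking duration of a mode in microseconds
# args: vtotal vactive htotal pixel_clock_hz
vblank_us() {
  awk -v vt="$1" -v va="$2" -v ht="$3" -v pc="$4" \
      'BEGIN { printf "%.1f us\n", (vt - va) * ht / pc * 1e6 }'
}
vblank_us 1481 1440 2720 241500000   # stock vertical total
vblank_us 1521 1440 2720 248227200   # raised back porch -> longer window
```

A longer blanking window gives the memory controller more time to retrain, which would be consistent with why raising the back porch helps here.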
Also, why would the clock speed stay so high even in the power-saving DPM mode, with nothing happening on the screen (like in the framebuffer console)? This is at idle, not in a game.
Windows should be pulling the same mode from the monitor's EDID, so the vblank interval is presumably adequate. Either the calculations are off on Linux, the driver is being much more conservative with its time estimate, or something is holding a lock and preventing the clock from changing.
I tried that custom modeline approach, and the memory no longer seems stuck at 1000 MHz on my reference RX 6700 XT. It now idles around 96 MHz and 7-10 W. But it seems that in games it only goes up to 1000 MHz; I need to verify this in more games. These metrics are from the Radeon Profile app, so hopefully they are accurate.
On Windows the memory properly boosts up to 1990 MHz (usually fluctuating), but on Linux it looks bugged.
@X6205 1000 MHz is the maximum stock clock of the memory. Linux reports the actual 1000 MHz clock, while Windows reports the effective clock, because it's double data rate (DDR) memory. So if it boosts to 1000 MHz in games, it's working as intended :)
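In other words, a trivial illustration of the actual-versus-effective figures being compared in this thread:

```shell
# Linux sysfs reports the real memory clock; DDR transfers data twice
# per cycle, so Windows tools and marketing show double that figure
actual_mhz=1000
effective_mhz=$((actual_mhz * 2))
echo "${actual_mhz} MHz actual = ${effective_mhz} MHz effective"
```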
Setting the GPU power profile, either through the CLI or a GUI program like CoreCtrl, reduced the power consumption to within the expected range (6-10 W):
echo "2" > /sys/class/drm/card0/device/pp_power_profile_mode
@mamusr: Many thanks. That also solved the issue here, but only in combination with a mode that had already dropped the power usage to 24 W (from 34 W). I am also on 3440x1440, FWIW.
Hey @pingubot, are you attempting to resolve the issue at 3440x1440 at 60, 100, or higher Hz, via HDMI or DisplayPort?
Are you adjusting the horizontal or vertical back porch incrementally, as suggested by @zquarefish, using the mode figures provided by xrandr --verbose?
I have a similar problem with a 6700 XT (Mesa 21.2.0): at 2560x1440 the power usage is over 30 W at 60Hz, but at 100Hz it's below 10 W, and anything above 100Hz does not work at all (nothing useful in the logs so far).
@lukasbecker2, so no abnormal idle power consumption at 2560x1440 100Hz over DisplayPort, without creating custom resolution modes to mitigate the memory clock issue?
Did you try the vertical or horizontal back porch fix suggested by @zquarefish at 2560x1440 at 60Hz, or above 100Hz?
Currently only the 100Hz profile has no abnormal idle power consumption (I only tried DisplayPort). The 100Hz mode for 2560x1440 was not available by default, however; I had to add it manually, though I did not modify the output of the cvt 2560 1440 100 command. I tried the workaround described above for refresh rates above 100Hz (at 2560x1440), but anything above 100Hz results in black screens no matter what modification I made (this issue was not present with my 5700 XT). The default 60Hz profile for 2560x1440 is the only mode detected by default, but that mode has abnormal idle power consumption.
UPDATE: the black screens above 110Hz were related to an incorrect EDID.
Sadly it does not work for my monitor (3440x1440, Iiyama G-Master GB3466WQSU-B1), but I had low hopes, as I also have the issue on Windows.
Edit: with those patches, my custom 3440x1440 120Hz resolution created via cvt now works fine at 7 W without any back porch adjustment, which is great! It's fully usable that way now. Many thanks.
Sadly I have no idea why the 100Hz and 144Hz modes, which are defaults for the display, do not run at 7 W (40 W instead).
That's great to hear. Ordinary users like me will probably have to wait for the next kernel, or for the free Radeon driver from AMD; I have no idea how I could install it on my Ubuntu.
I'm using three monitors from different vendors, each with a different native resolution and slightly different refresh rates: 2560x1440@143.998993Hz, 1920x1080@144.001007Hz and 1440x900@59.887Hz respectively. I had tried just about every usable combination of resolution and refresh rate; mclk always maxed out at 1000 MHz and my GPU drew 40 W no matter what. I use Wayland, and AFAIK there's no way yet to change the back porch or other settings the way xrandr does, so I couldn't try much else.
Using this patchset I was able to run my main monitor at 2560x1440@119.876Hz and the secondaries at 1920x1080@50.000Hz and 1440x900@59.887Hz, and my GPU's mclk now stays at 96 MHz instead of 1000 MHz.
Power draw went from 40 W to 15 W, which is far more acceptable given the configuration.
5.14.6 looks like it has 3 out of 4 patches, but [3/4] drm/amd/display: Remove duplicate dml init wasn't included. Without that, it seems it isn't fixed.
Edit: I guess [4/4] drm/amd/display: Move AllowDRAMSelfRefreshOrDRAMClockChangeInVblank to bounding box wasn't included either. That commit looks pretty necessary for the fix.
I modified my monitor's EDID file in wxedid (increased the V-blank lines only), loaded the modified file via a kernel parameter (drm_kms_helper.edid_firmware=edid/edid_modified.bin), and now the MCLK goes down from 1000 MHz to 96 MHz (3440x1440@100Hz on a 6900 XT). Tested on GNOME 40/Wayland, Plasma/Wayland, and Plasma/X.org. Now waiting for 5.15.
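For anyone trying the same route, a sketch of installing the modified EDID (the file name and firmware path are assumptions to adjust for your distro; the EDID itself still has to be edited first, e.g. in wxedid):

```shell
# Place the modified EDID where the kernel's EDID loader can find it
sudo mkdir -p /lib/firmware/edid
sudo cp edid_modified.bin /lib/firmware/edid/
# Then boot with the kernel parameter:
#   drm_kms_helper.edid_firmware=edid/edid_modified.bin
# (it can also be scoped to one connector, e.g. DP-1:edid/edid_modified.bin)
```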
I'm a bit confused about how to interpret the stable tree correctly. The commit was merged into stable on 01.09 (12 days ago), but it's not included in today's stable release (5.14.3). Is there a step in between for kernel releases?
That's Linus's branch; you need to follow the 5.14.x branch. As far as I know, this patch set hasn't been queued for stable at all yet. You'll likely have to wait for 5.15 or build your own kernel.
FWIW, the posted patchset works fine for me when applied to 5.13.13: I'm getting ~6 W power draw and 96 MHz MCLK on a 6700 XT with two monitors at 2560x1440@144Hz + 1920x1080@60Hz.
However, the same patchset applied to 5.14.2 only gives low power draw with a single monitor connected; with multiple monitors in the configuration described above, it's back to ~27 W and 1000 MHz MCLK again :(
Chiming in to confirm that I have essentially the same setup (6700 XT and the same monitor config) and the same issue as described, also on 5.15.0-rc1. I can, however, get the memory to downclock when I set my secondary monitor to 1440x1080@60Hz. Switching to reduced-blanking modes, any larger resolution, or changing the display mode on my primary monitor does not reduce the memory clock.
Also, not sure if I should make a separate comment for this, but in my testing for this issue I experience flickering as described above, both in the framebuffer console and in X. The flickering is much worse in the console, happening every ~10 seconds or so, whereas in X it can be once every 10 to 15 minutes. This was on kernel 5.13.x when I used the modified display timings trick mentioned above; I have yet to notice any flickering in X on 5.15.0-rc1 with the displays set to 1440x1080@60 and 2560x1440@144 (the console does still flicker).
Edit: the flickering only occurs on the 1440p monitor. The console is set to 1080p60 and 1440p144, and the memory clock is at 96 MHz on 5.15.0-rc1. Something I forgot to mention: if I disable the lowest memory clock state, making the new minimum 456 MHz, the flickering goes away completely AFAICT. The GPU then idles at 18-22 W instead of 28-32 W, but that is still a fair bit above the desired 6-10 W at 96 MHz.
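In case it helps others reproduce this, masking out the lowest state can be done through sysfs; a sketch assuming card0 and root, with the level numbers taken from the pp_dpm_mclk listing earlier in this thread:

```shell
# Switch DPM to manual control, then allow only mclk levels 1-3,
# effectively disabling the 96 MHz state (level 0)
echo manual | sudo tee /sys/class/drm/card0/device/power_dpm_force_performance_level
echo "1 2 3" | sudo tee /sys/class/drm/card0/device/pp_dpm_mclk
```

Writing "auto" back to power_dpm_force_performance_level restores the default behavior.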
The idle power draw regression with multiple monitors between 5.13 and 5.14 is the same as reported in #1709 (closed), and reverting 136e55e7a9 helps to bring it down again. Thanks @birdspider for bisecting!
I am glad that was discovered; I too noticed that my secondary display now maxes out the memory clock. It turns out I have to set it to 1920x1080 at <=59Hz for the memory clock to drop, which really feels like an off-by-one error. The 1440p monitor now seems able to use any timing it wants without affecting the memory clock, with the new patches.
Some other oddities: any timing is acceptable on the secondary (1080p) monitor when it is by itself, and with that one running at native resolution, the 1440p display can run at 1600x900 but not 1280x720... perhaps a connection type issue? The 1440p monitor is on DP, while the 1080p one uses a DP-to-DVI adapter.
Also, I have noticed that while the flickering still occurs, it seems like it happens less often now, so there must be some improvement!