Although #1002 (closed) is closed, I'm experiencing the same issue. To make the issue easier to see, I limit fps to 99, so the monitor should display a steady 99 fps. But the reported frequency jumps all the time, up to a maximum of 165. You can see it in the video below.
Glad to see someone else describing this, I swore I was losing my mind. I experience the same thing and can verify it with my monitor's built-in refresh rate viewer as well. Monster Hunter World and Elden Ring are two games that also exhibit this issue for me.
There is one thing I noticed that I think will be useful in finding the root of the bug. The display's VRR fps is a lot more stable when I set the profile to 3D_FULL_SCREEN using the commands below. There are still fps jumps, but they are not as strong. So basically, when the GPU clock frequency is almost flat, the display fps is also flat. When I set the profile to the default BOOTUP_DEFAULT, the GPU clock frequency starts jumping from min to max very quickly, and somehow that affects the display's fps.
GameMode is not an alternative to choosing the 3D_FULL_SCREEN profile.
GameMode, when correctly configured, just sets power_dpm_force_performance_level to high, which is better than the stock auto setting, but manually forcing the profile as described above yields even better performance on my 6800 XT.
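For reference, a sketch of the GameMode configuration that applies that GPU setting. The section and key names come from GameMode's example gamemode.ini; the device index is an assumption and may differ on your system:

```ini
; ~/.config/gamemode.ini or /etc/gamemode.ini (sketch, not a full config)
[gpu]
; You must explicitly accept responsibility before GameMode touches the GPU
apply_gpu_optimisations=accept-responsibility
; Which card to apply settings to (assumed card 0 here)
gpu_device=0
; Sets power_dpm_force_performance_level to "high" while a game is running
amd_performance_level=high
```

With this in place, GameMode toggles the performance level on game start and restores it on exit, so the tweak doesn't persist system-wide.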
I don't know if this is actually related, but I had to tweak and overclock my RX 6700 XT via commands to eliminate that weird stuttering in games. I also suffered flickering at low GPU usage (mostly in a TTY @ 144/165Hz), but it went away when I changed the power_dpm_force_performance_level and pp_od_clk_voltage settings. I also noticed that the clock speed dropped aggressively while gaming when it should have been higher. I haven't tested FPS limiters yet. I set the minimum clock 100MHz below the maximum clock, which gives a much better gaming experience.
I understand that my GPU is a bit different, but in my experience these issues have appeared mostly on 6000 series cards, as far as I'm aware.
This is a dirty script, but you can use it as a reference if you find something useful in it. I also feel that the power saving features are way too aggressive and the card doesn't seem to understand when to keep and push frequencies higher.
```shell
#!/bin/sh
# Don't aimlessly run this as a script.
# Make sure you understand what every
# command is for.

# Enable manual setting for dpm
echo "manual" > /sys/class/drm/card0/device/power_dpm_force_performance_level
# Set your power profile
echo "1" > /sys/class/drm/card0/device/pp_power_profile_mode
# Minimum GPU clock to use
echo "s 0 2500" > /sys/class/drm/card0/device/pp_od_clk_voltage
# Maximum GPU clock to use
echo "s 1 2600" > /sys/class/drm/card0/device/pp_od_clk_voltage
# GPU undervolt offset to use
echo "vo -100" > /sys/class/drm/card0/device/pp_od_clk_voltage
# Performance level for memory
echo "3" > /sys/class/drm/card0/device/pp_dpm_mclk
# MCLK/memory overclock (unstable?)
# echo "m 1 2000" > /sys/class/drm/card0/device/pp_od_clk_voltage
# Make settings active
echo "c" > /sys/class/drm/card0/device/pp_od_clk_voltage
```
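To check whether such settings actually took effect, you can read back the clock levels: the driver marks the currently active sclk level with a `*`. A minimal sketch of parsing that format, using made-up sample text in place of the real sysfs file:

```shell
# Sketch: find the active sclk level, which the driver marks with '*'.
# On a real system you would read
#   /sys/class/drm/card0/device/pp_dpm_sclk
# instead of the hypothetical sample below.
sample='0: 500Mhz
1: 2200Mhz *
2: 2600Mhz'
active=$(printf '%s\n' "$sample" | awk '/\*/ { print $2 }')
echo "active sclk: $active"
```

Watching this value while a game runs makes the min/max jumping described above directly visible.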
The fact that the 3D_FULL_SCREEN profile mitigates the issue somewhat might be related to #1500 (closed)
tl;dr: The frequency switching algorithm is very bad on RDNA2, and forcing that profile is the only way on my system (Sapphire RX 6800 XT) to get a stable frametime graph. On the default profile even a very light game can stutter, because the GPU is constantly switching to the lowest frequency.
Are there users of the affected cards who don't have issues with VRR? It seems strange to me that this issue doesn't seem generally known or complained about, and it makes me wonder how many users simply assume VRR is meant to be like this, or can't tell the difference when refresh rates are so high.
It might be helpful to have someone who thinks theirs works fine show their monitor's built-in refresh rate counter next to MangoHud in one of the affected games. If it's not universal, then we could at least try to find commonalities between affected systems.
Seeing the same problem on my 5700 XT. If I turn my monitor off and back on after starting sway, the problem goes away for the rest of the session. Logging out and back in causes the problem to reappear. Windows does not exhibit this issue.
Same issue for me on Arch (EndeavourOS) with KDE Plasma and my 5700 XT. I mostly play FFXIV, and that is where I see the same behavior. I lock my FPS to 100, but the display's fps counter frequently jumps to 120Hz, and aside from my gameplay not feeling smooth since VRR isn't properly active, it also introduces brightness flickering.
Tested FFXIV on Windows as well with fps locked to 100 there, and had zero issues. No brightness flickering, the refresh rate stays near 100, and game performance is VRR-fluid as it should be.
Really hope this can be fixed, since gaming on Linux with broken VRR is not fun.
Now, another example: PCSX2 running Persona 3. Since GPU usage is very low, regardless of power profile, the refresh rate is all over the place and the screen flickers a lot with VRR on.
High raises the GPU power more compared to 3D Fullscreen.
With emulators, High is pretty much a must, because the GPU will basically idle without it, even with the 3D Fullscreen profile. I tried running Super Mario 64 on paraLLEl-RDP with an 8x resolution scale and it was unplayable on Auto or 3D Fullscreen. On High, it became smooth.
I'm led to believe the way RDNA2 sets power profiles on Linux is super broken.
@KawaiiDinosaur I was doing a few tests to see if that was the case on my end as well, and now I've found out I can't even control my shader clocks consistently by setting power_dpm_force_performance_level to high anymore.
I've tried a lot of times and it seems unstable. Sometimes it raises the clocks immediately. Other times it only raises them when placing a load on the GPU, and other times it never raises them even with a load.
More bizarrely, sometimes it only raises the clocks if radeontop is opened WHILE I set the value to high.
I've tried both manually setting the value of power_dpm_force_performance_level and using gamemode, but I can't find a pattern. Anyone else experiencing this?
I'd also like to add that even when forcing extremely high clocks (such as setting a very tight range between minimum and maximum in the 3D Fullscreen mode), the jumpy VRR behavior remains. The power saving behavior seems to be a big part of the problem, but even once that's eliminated it still doesn't work properly.
I've been having this issue for a while on my 6700 XT but after doing some testing I'm not sure VRR is at fault. For some reason, the frametime instability doesn't show up in MangoHud when playing a game, but it does show up when replaying a trace of the game. I can replay the trace with VRR turned off, or in windowed mode, and the frametimes are still fluctuating rapidly.
Replaying this trace causes it fairly often when using 3D_FULL_SCREEN on a 6700 XT:
Full command: MANGOHUD=1 MANGOHUD_CONFIG="fps_limit=60,gpu_core_clock" gfxrecon replay -m rebind --remove-unsupported WoW_compressed.gfxr
Good example, 3D_FULL_SCREEN:
Bad example, 3D_FULL_SCREEN:
Good example, max core clock:
Removing the FPS limiter fixes the stuttering but I don't think the FPS limiter is broken because the frametimes are always stable when setting power_dpm_force_performance_level to high.
I think part of what is making this difficult and inconsistent to talk about is that we're assuming the framerate counters in our monitors are all accurate.
Earlier I came across this thread on the Blur Busters forums, and while it isn't about Linux specifically, it raises the question of whether the source of some of these troubles (after working around the power management troubles) is simply the counters on our monitors jumping around, because they were designed to measure the Windows implementation of FreeSync and not the Linux implementations of VRR, or even G-Sync on Windows. I've often noticed that frames feel like they are being matched properly by my monitor even while they're moving around wildly on the monitor's Hz counter, and while I previously assumed this was just me not noticing, after some more testing it really does seem about as smooth as VRR is supposed to be.
Basically I think this might just be the power saving issue from #1500 (closed) that, after being worked around by setting a power profile, does not appear to be fixed because the tool we're using to judge if it's working or not was tuned to measure a completely different implementation of the feature.
This might also explain why VRRTest seems to work fine / nearly perfectly: it produces MUCH more regular frametimes than any real-world game, so there is no opportunity to see the irregularities that make the Linux implementation differ from the Windows one in our monitors' counters.
After testing for the past week, I'm pretty certain this is the case, at least on the 6700 XT. Something is wrong with the way the refresh rate counters on FreeSync displays read the VRR implementation on Linux, and with RDNA power profiles by default, but apparently not with VRR on Linux itself.
This is beyond the scope of this issue, but if possible I'd love to see input from someone with the technical knowledge to explain this so I can feel a little more comfortable mentioning this in wikis and support forums to get the information out there.
Don't use rounded numbers for your monitor's refresh rate in your compositor; use the exact values such as 119.982Hz or 120.044Hz, and don't go above the monitor's maximum refresh rate (e.g. 121Hz): #2657 (closed).
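For context on where those fractional values come from: a mode's exact refresh rate is pixel clock divided by total horizontal and vertical timing, which rarely lands on a round number. A sketch with hypothetical timing values (read your real ones from the EDID, e.g. with edid-decode):

```shell
# Sketch: exact refresh rate = pixel_clock / (htotal * vtotal).
# These timing values are made up for illustration, not from a real EDID.
pixel_clock_khz=1040712
htotal=4000
vtotal=2168
hz=$(awk -v c="$pixel_clock_khz" -v h="$htotal" -v v="$vtotal" \
       'BEGIN { printf "%.3f", (c * 1000) / (h * v) }')
echo "exact refresh: ${hz} Hz"   # slightly off the nominal 120Hz
```

Rounding that to "120Hz" in the compositor means asking for a mode the monitor doesn't exactly have, which is what the linked issue warns about.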
Update: I'm not seeing this issue with kernel 6.7. However, I'm also using the video=DP-1:3840x1600@120 kernel parameter.
[VRR] Workaround for screen flickering:
Use wlr-randr to identify possible refresh rates and switch between the available values until the flickering stops (#2793 (comment 2063259)). I've also had success fixing flickering at the highest refresh rate by simply unplugging the monitor for 12 hours to let the capacitors drain, and lastly by re-seating the DisplayPort cable on both ends.
I wonder if Sway without direct scanout simply isn't triggering some widespread issue in the amdgpu codebase around VRR? Furthermore, this isn't the only issue that needs to be solved to completely fix vblank:
Mouse plane updates are also a part of the problem:
#2186 (closed)
I think this is fundamentally a frame scheduling issue: The frame rate is artificially limited, but the frames become ready for presentation at irregular intervals, so the effective refresh rate jumps around instead of being constant.
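A toy illustration of that point: even when the average frametime matches the limiter's target, irregular presentation intervals make the instantaneous rate (which is roughly what a monitor's Hz counter shows) jump around. The interval values below are invented for illustration:

```shell
# Sketch: instantaneous refresh rate = 1000 / frame interval (ms).
# A limiter targeting ~99 fps (about 10.1 ms) with jittery
# presentation times produces a jumpy effective rate.
intervals="8.4 11.8 9.2 10.9 10.2"
for ms in $intervals; do
  hz=$(awk -v t="$ms" 'BEGIN { printf "%.1f", 1000 / t }')
  echo "interval ${ms} ms -> ${hz} Hz"
done
```

The average is close to the target, but each individual frame lands at a different rate, which matches the "jumps all the time" reports above.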
As such, it's first and foremost an issue in whatever limits the frame rate. In the future, something like wayland/wayland-protocols!248 (merged) might help frame rate limiters achieve a more consistent effective refresh rate.