Regression with Minecraft/OptiFine performance with all VRAM mapped
System information
inxi output:
System:
Host: mrgency Kernel: 5.10.1-102-tkg-upds x86_64 bits: 64 compiler: gcc v: 10.2.0
Desktop: N/A dm: GDM Distro: Arch Linux
CPU:
Info: 8-Core model: AMD Ryzen 7 2700 bits: 64 type: MT MCP arch: Zen+ rev: 2
L2 cache: 4096 KiB
flags: avx avx2 lm nx pae sse sse2 sse3 sse4_1 sse4_2 sse4a ssse3 svm
bogomips: 102480
Speed: 2759 MHz min/max: 1550/3200 MHz boost: enabled Core speeds (MHz): 1: 3039
2: 2926 3: 2674 4: 2937 5: 2526 6: 2977 7: 2536 8: 2498 9: 2647 10: 3113 11: 2399
12: 2601 13: 2729 14: 2278 15: 2906 16: 2989
Graphics:
Device-1: AMD Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] vendor: ASUSTeK
driver: amdgpu v: kernel bus ID: 26:00.0 chip ID: 1002:67df
Display: wayland server: X.Org 1.20.99.1 compositor: wayfire driver: modesetting
alternate: ati,fbdev,vesa resolution: 1: 1920x1080~60Hz 2: 1920x1080~60Hz
s-dpi: 96
OpenGL: renderer: AMD Radeon RX 480 Graphics (POLARIS10 DRM 3.40.0
5.10.1-102-tkg-upds LLVM 11.0.0)
v: 4.6 Mesa 21.0.0-devel (git-6fecdc6dda) direct render: Yes
Describe the issue
When booting my machine with 64-bit MMIO support enabled, and thus the entire 8 GB of VRAM mapped, Minecraft with OptiFine and some heavy shaders runs rather poorly, about as badly as the AMD Windows drivers do without the MMIO switch turned on.
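For reference, one way to confirm that the full 8 GB of VRAM is actually mapped is to inspect the GPU's PCI BARs. This is a sketch assuming the bus ID 26:00.0 from the inxi output above; the exact region layout varies by board and firmware:

```shell
# Show the memory regions (BARs) of the RX 480 at bus 26:00.0.
# With 64-bit MMIO ("Above 4G Decoding" / resizable BAR) active,
# the prefetchable BAR should cover the whole 8 GB of VRAM, e.g.:
#   Region 0: Memory at ... (64-bit, prefetchable) [size=8G]
# Without it, the same BAR is typically limited to 256M.
lspci -vv -s 26:00.0 | grep -i 'region\|memory at'
```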
Regression
It worked perfectly before commit 913c06f5.
6fecdc6d is the last known good commit; even reverting the above commit against current HEAD does not restore the performance.
Log files as attachment
Any extra information would be greatly appreciated
I could not have noticed this particular issue before I started using this 64-bit MMIO mode with OSes other than Windows. I can't use it in Windows, because AMD has not worked fixes for it into their drivers, and because my onboard Ethernet NIC ceases to function.