Vulkan Mesa drivers (AMD, Intel, llvmpipe) behave incorrectly when a buffer_reference is used to write a vec4 and then immediately read back a single vector component
System information
System:
Host: Chunky Kernel: 6.1.0-16-amd64 arch: x86_64 bits: 64 compiler: gcc
v: 12.2.0 Desktop: Cinnamon v: 5.6.8 tk: GTK v: 3.24.38 dm: 1: GDM3
2: LightDM Distro: Debian GNU/Linux 12 (bookworm)
CPU:
Info: 16-core (8-mt/8-st) model: 12th Gen Intel Core i9-12900K bits: 64
type: MST AMCP arch: Alder Lake rev: 2 cache: L1: 1.4 MiB L2: 14 MiB
L3: 30 MiB
Speed (MHz): avg: 837 high: 1315 min/max: 800/5100:5200:3900 cores:
1: 1013 2: 800 3: 915 4: 800 5: 800 6: 800 7: 1315 8: 800 9: 784 10: 800
11: 838 12: 800 13: 815 14: 800 15: 805 16: 808 17: 804 18: 801 19: 800
20: 800 21: 800 22: 800 23: 800 24: 800 bogomips: 152985
Flags: avx avx2 ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3
Graphics:
Device-1: Intel AlderLake-S GT1 vendor: Micro-Star MSI driver: i915
v: kernel ports: active: none empty: DP-1, DP-2, HDMI-A-1, HDMI-A-2,
HDMI-A-3, HDMI-A-4 bus-ID: 00:02.0 chip-ID: 8086:4680
Device-2: NVIDIA GA102 [GeForce RTX 3090] vendor: Micro-Star MSI
driver: nvidia v: 535.146.02 arch: Ampere pcie: speed: 16 GT/s lanes: 16
bus-ID: 01:00.0 chip-ID: 10de:2204
Display: x11 server: X.Org v: 1.21.1.7 with: Xwayland v: 22.1.9 driver: X:
loaded: nvidia gpu: i915,nvidia display-ID: :0 screens: 1
Screen-1: 0 s-res: 6000x3840 s-dpi: 144
Monitor-1: DP-0 pos: primary,bottom-l res: 3840x2160 dpi: 163
diag: 685mm (26.97")
Monitor-2: HDMI-0 pos: top-right res: 2160x3840 dpi: 184
diag: 609mm (23.97")
API: OpenGL v: 4.6.0 NVIDIA 535.146.02 renderer: NVIDIA GeForce RTX
3090/PCIe/SSE2 direct-render: Yes
Mesa version: Mesa 22.3.6 (LLVM 15.0.6)
Describe the issue
The following GLSL code behaves incorrectly on Mesa-based Vulkan drivers on Linux:
```glsl
layout(buffer_reference, std140, buffer_reference_align = 16) buffer VectorRef { vec4 Vector; };

void TestRef(VectorRef in_Ref)
{
    in_Ref.Vector = vec4(11, 22, 33, 44);
    debugPrintfEXT("x=%g, y=%g, z=%g, w=%g.\n", in_Ref.Vector.x, in_Ref.Vector.y, in_Ref.Vector.z, in_Ref.Vector.w);
}
```
This correctly prints `x=11, y=22, z=33, w=44` on the proprietary NVIDIA driver, but prints `x=11, y=11, z=11, w=11` on the Mesa drivers: the AMD, Intel, and llvmpipe drivers I've tested all return `.x` for any single-component read.

It does behave correctly on all drivers when reading the full vec4 or any swizzle (`.yz` prints `22, 33`), or when the assignment is done before the function call instead of inside the function.
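For concreteness, here are sketches of the variants that do behave correctly (these are my illustrations, not code taken from the repro; function and variable names are hypothetical):

```glsl
// Variant 1: copying out the full vec4 first works on all drivers.
void TestFullRead(VectorRef in_Ref)
{
    in_Ref.Vector = vec4(11, 22, 33, 44);
    vec4 v = in_Ref.Vector;                  // full-vector read: correct everywhere
    debugPrintfEXT("x=%g, y=%g, z=%g, w=%g.\n", v.x, v.y, v.z, v.w);
}

// Variant 2: multi-component swizzle reads also work (.yz yields 22, 33).
void TestSwizzle(VectorRef in_Ref)
{
    in_Ref.Vector = vec4(11, 22, 33, 44);
    vec2 yz = in_Ref.Vector.yz;              // swizzle read: correct
    debugPrintfEXT("y=%g, z=%g.\n", yz.x, yz.y);
}

// Variant 3: single-component reads are correct when the write happened
// at the call site rather than inside the function:
//     ref.Vector = vec4(11, 22, 33, 44);
//     TestReadOnly(ref);
void TestReadOnly(VectorRef in_Ref)
{
    debugPrintfEXT("x=%g.\n", in_Ref.Vector.x);
}
```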
The incorrect behaviour is fully consistent, and it's not specific to debugPrintfEXT either: returning the value or writing it to another buffer behaves the same. The Vulkan validation layers don't warn about anything. Here is a minimal repro program that demonstrates the behaviour: https://gitlab.com/julienbarnoin/vectorbufferreferencebug
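To illustrate that the bug is independent of debugPrintfEXT, a minimal sketch of the write-to-another-buffer variant mentioned above (the SSBO declaration and names are my assumptions, not taken from the repro):

```glsl
// Hypothetical output buffer for observing the result without debugPrintfEXT.
layout(std430, set = 0, binding = 0) buffer Output { vec4 Result; };

void TestRefToBuffer(VectorRef in_Ref)
{
    in_Ref.Vector = vec4(11, 22, 33, 44);
    // On the affected Mesa drivers each single-component read returns .x,
    // so Result ends up as (11, 11, 11, 11) instead of (11, 22, 33, 44).
    Result = vec4(in_Ref.Vector.x, in_Ref.Vector.y, in_Ref.Vector.z, in_Ref.Vector.w);
}
```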