v3dv: implement VK_KHR_buffer_device_address
This extension allows buffer references in shader code that can be implicitly dereferenced to access data. The spec makes explicit that these GPU pointers are 64-bit values, which means that such references can, for example, be explicitly cast to/from a uvec2 type. There are CTS tests designed to check exactly this, with code like:
accum |= int(T1(uvec2(T1(x.c[0]))).e[0][0] - 6);
where T1 is a buffer reference. That GLSL generates SPIR-V with 64-bit OpBitcast instructions for these casts, causing NIR validation to fail if we use the nir_address_format_32bit_global address format for these references.
For a 32-bit GPU that doesn't support Int64 we shouldn't use nir_address_format_64bit_global either, since that would inject 64-bit integer instructions for conversions and pack/unpack operations that we can't support natively.
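To make the dilemma a bit more concrete, here is a minimal sketch of where this choice is made when translating the SPIR-V. The spirv_to_nir_options field (phys_ssbo_addr_format) and include path are the ones other Mesa Vulkan drivers use for PhysicalStorageBuffer pointers; this is not code from this series:

```c
#include "compiler/spirv/nir_spirv.h"

/* Sketch only: choosing an address format for PhysicalStorageBuffer
 * (buffer reference) pointers when calling spirv_to_nir(). */
static const struct spirv_to_nir_options spirv_options = {
   /* nir_address_format_32bit_global: the uvec2 <-> reference casts in
    * the CTS shader above become 64-bit OpBitcasts that don't match a
    * 32-bit address, so validation fails.
    *
    * nir_address_format_64bit_global: validation is fine, but lowering
    * then emits 64-bit integer conversions and pack/unpack that a
    * 32-bit GPU without Int64 can't support natively. */
   .phys_ssbo_addr_format = nir_address_format_64bit_global,
};
```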
To work around these issues, this series (specifically the first patch) adds a new type of global address where the address is handled as a uvec2 (adding a 2x32 suffix), along with a new set of global intrinsics that consume this address. This addresses the fact that buffer references are explicit 64-bit addresses from the API point of view, allowing them to be converted back and forth to/from uvec2, while still letting backends that are limited to 32-bit GPU addresses ignore the .Y component of the address (which must be zero anyway) when implementing the intrinsics.
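As a rough sketch of what this buys a 32-bit backend (the intrinsic name load_global_2x32 is assumed from the 2x32 suffix mentioned above; this is not the actual v3dv code), consuming one of the new intrinsics boils down to taking the .X component of the uvec2 address:

```c
#include <assert.h>
#include "compiler/nir/nir_builder.h"

/* Sketch only: fetch the 32-bit GPU address from the uvec2 carried by
 * one of the new 2x32 global load intrinsics (hypothetical
 * load_global_2x32), ignoring .Y, which must be zero on a 32-bit GPU. */
static nir_ssa_def *
get_global_2x32_addr(nir_builder *b, nir_intrinsic_instr *intr)
{
   nir_ssa_def *addr = intr->src[0].ssa; /* the uvec2 address */
   assert(addr->num_components == 2 && addr->bit_size == 32);

   /* Only .X matters on a GPU with 32-bit addresses. */
   return nir_channel(b, addr, 0);
}
```

Whether a backend does this as a NIR lowering or directly when emitting code for the intrinsic is up to the backend; the point is that no 64-bit integer instructions are required.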
@jekstrand: do you have any objections to this?