
asahi,agx: Implement buffer textures with gnarly NIR

Alyssa Rosenzweig requested to merge alyssa/mesa:agx/buffer-texture into main
Implement buffer textures in full generality.  There are a few issues here:

* OpenGL requires that buffer textures support a minimum size of 65536 texels,
  whereas 1D textures on AGX are at most 8192 texels.

* OpenGL 4.0 (and OpenGL ES) require buffer textures to support the "RGB32"
  texture formats. These are 3 packed channels of 32 bits each. In general,
  non-power-of-two texel sizes are problematic. AGX does not support any such
  formats and we rely on the GL frontend to lower to a padded format (RGBX) if
  necessary. Such a lowering cannot work for buffer textures, however, so we
  need to find a way to implement RGB32 buffer textures.

We solve these issues in the following way:

* Use 2D texture descriptors for buffer textures, with a large fixed
  power-of-two size along one axis. Large texel indices can then be accessed at
  a small vec2 texel coordinate, and since the fixed dimension is a
  power-of-two, that coordinate is recovered by simply shifting and masking (see
  the sketch after this list). This effectively avoids the size restriction. We
  do need to clamp texel indices to the buffer size to avoid faulting on OOB
  reads, since we may read past the end of the buffer (if the app binds a
  non-page-aligned offset into the buffer).

* Use a general purpose memory load for RGB32 buffer textures. Lower the texture
  load instruction to a memory load from the buffer and some address arithmetic.
  There's no format conversion needed for RGB32, other than maybe filling in a
  format-appropriate alpha, so this is straightforward. Again, we need to clamp
  the texel index for robustness with OOB reads.
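
To make the index-to-coordinate mapping concrete, here is a minimal C sketch of
the arithmetic. All names (`buffer_index_to_2d`, `texel_coord`,
`TEX_WIDTH_LOG2`) and the width of 1024 are assumptions for illustration, not
the driver's actual constants, and the real pass builds the equivalent in NIR
rather than plain C:

```c
#include <stdint.h>

/* Illustrative constants: the fixed width is a power of two, but 1024 is an
 * assumption here, not necessarily the driver's value. */
#define TEX_WIDTH_LOG2 10
#define TEX_WIDTH      (1u << TEX_WIDTH_LOG2)

struct texel_coord { uint32_t x, y; };

/* Convert a linear texel index into the 2D coordinate the hardware sees,
 * clamping to the buffer size so OOB indices stay inside the texture.
 * Assumes a non-empty buffer. */
static struct texel_coord
buffer_index_to_2d(uint32_t index, uint32_t size_el)
{
   uint32_t clamped = index < size_el ? index : size_el - 1;

   return (struct texel_coord) {
      .x = clamped & (TEX_WIDTH - 1),
      .y = clamped >> TEX_WIDTH_LOG2,
   };
}
```

For example, with a fixed width of 1024, texel index 70000 maps to
(x, y) = (368, 68), since 68 * 1024 + 368 = 70000.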

Each of these solutions brings its own problem.

* Using 2D textures instead of 1D requires physically rounding up the buffer
  size when packing the descriptor, so we can no longer implement textureSize()
  by reading off the texture descriptor like normal.

* We don't know at compile-time whether a given texture load will read from an
  RGB32 buffer texture or not, so we need to emit code for both. In Vulkan, we
  can't key the shader to this property, either, since it's descriptor set state
  and not pipeline state.

And each of these problems in turn brings its own solution:

* The texture descriptor is linear, so the "compression buffer address" field is
  ignored by the hardware. We stash the real buffer size there so that
  textureSize becomes a load from the texture descriptor like usual, without
  requiring a sideband (which would complicate bindless textures). See the
  sketch after this list.

* If we determine a texture descriptor contains RGB32 data, then it will never
  be interpreted by the hardware and hence does not need to be a valid texture
  descriptor. So, we extend the hardware's format enum with a software-defined
  RGB32 format value. Then, when lowering texture buffer loads, we either emit a
  raw RGB32 memory load or a regular texture load, depending on the value of the
  format field in the texture descriptor (also sketched below).
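
As a rough illustration of how these two pieces fit together, here is a hedged
C sketch. The descriptor layout and every name in it (`fake_tex_descriptor`,
`FAKE_FORMAT_RGB32_EMULATED`, `pack_buffer_texture`, `lower_txs`,
`lower_buffer_load`, `hw_texture_load_2d`) are hypothetical stand-ins: the real
layout lives in the AGX packing headers and the real lowering is emitted as
NIR control flow. It reuses `buffer_index_to_2d` and `texel_coord` from the
earlier sketch:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical descriptor layout. For a linear texture the hardware ignores
 * the compression buffer address, so it can carry the element count instead. */
struct fake_tex_descriptor {
   uint64_t base_address;
   uint64_t compression_buffer_address;  /* reused as the buffer size */
   uint32_t format;                      /* may hold the software RGB32 value */
};

/* Software-defined format value outside the hardware enum (illustrative). */
enum { FAKE_FORMAT_RGB32_EMULATED = 0x80 };

struct vec4u { uint32_t x, y, z, w; };

/* Placeholder for the hardware texture fetch at a 2D coordinate. */
struct vec4u hw_texture_load_2d(const struct fake_tex_descriptor *d,
                                struct texel_coord coord);

static void
pack_buffer_texture(struct fake_tex_descriptor *d, uint64_t base,
                    uint32_t size_el, bool rgb32, uint32_t hw_format)
{
   d->base_address = base;
   d->compression_buffer_address = size_el;  /* stashed for textureSize() */
   d->format = rgb32 ? FAKE_FORMAT_RGB32_EMULATED : hw_format;
}

/* textureSize() lowers to an ordinary load from the descriptor. */
static uint32_t
lower_txs(const struct fake_tex_descriptor *d)
{
   return (uint32_t)d->compression_buffer_address;
}

/* Per-load dispatch; in the real pass this is control flow built in NIR on
 * the format field loaded from the descriptor. */
static struct vec4u
lower_buffer_load(const struct fake_tex_descriptor *d, uint32_t index)
{
   uint32_t size_el = lower_txs(d);
   uint32_t clamped = index < size_el ? index : size_el - 1;

   if (d->format == FAKE_FORMAT_RGB32_EMULATED) {
      /* Raw memory load: 3 channels of 32 bits = 12 bytes per texel, with a
       * format-appropriate alpha (1 for an integer format) filled in. */
      const uint32_t *texel =
         (const uint32_t *)(uintptr_t)(d->base_address + clamped * 12u);
      return (struct vec4u){ texel[0], texel[1], texel[2], 1 };
   } else {
      /* A real descriptor: issue the regular 2D texture load using the
       * shift/mask coordinate from the earlier sketch. */
      return hw_texture_load_2d(d, buffer_index_to_2d(clamped, size_el));
   }
}
```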

All of this is accomplished with a big NIR pass generating a pile of
strange-looking code. But it should be good enough in practice for this silly
feature.
