zink: descriptor caching
This is the live version of the preliminary zmike/mesa!5 (closed) MR, which can now be put up without conflicts.
What is descriptor caching?
Descriptor caching is the reuse of descriptor sets. Each set has a unique identifier calculated from all of its contained descriptors, which allows it to be reused. This massively reduces overhead by skipping the writes to the underlying descriptor table BO in the Vulkan driver and, eventually, skipping any updating/iterating at all.
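Conceptually, the lookup path looks something like the following standalone sketch (toy hash table and illustrative names, not zink's actual code): hash the descriptors bound for the draw to get an identifier, then reuse a previously written set when the same identifier (and full key, to disambiguate collisions) shows up again.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CACHE_SIZE      256
#define MAX_DESCRIPTORS 32

/* everything that identifies the descriptors bound for one set */
struct desc_key {
   uint32_t num_descriptors;
   uint64_t descriptors[MAX_DESCRIPTORS]; /* e.g. resource handles + ranges */
};

struct cached_set {
   bool valid;
   uint32_t hash;
   struct desc_key key; /* full key kept to disambiguate hash collisions */
   void *vk_set;        /* stand-in for an already-written VkDescriptorSet */
};

static struct cached_set cache[CACHE_SIZE];

static size_t
desc_key_size(const struct desc_key *key)
{
   return offsetof(struct desc_key, descriptors) +
          key->num_descriptors * sizeof(key->descriptors[0]);
}

/* FNV-1a over the used part of the key: the set's "unique identifier" */
static uint32_t
desc_key_hash(const struct desc_key *key)
{
   const uint8_t *data = (const uint8_t *)key;
   uint32_t hash = 2166136261u;
   for (size_t i = 0; i < desc_key_size(key); i++)
      hash = (hash ^ data[i]) * 16777619u;
   return hash;
}

/* Return a previously written set for this key, or NULL if the caller must
 * allocate + write a new one and then record it with cache_store(). */
static void *
cache_lookup(const struct desc_key *key, uint32_t *hash_out)
{
   *hash_out = desc_key_hash(key);
   struct cached_set *slot = &cache[*hash_out % CACHE_SIZE];
   if (slot->valid && slot->hash == *hash_out &&
       !memcmp(&slot->key, key, desc_key_size(key)))
      return slot->vk_set;
   return NULL;
}

static void
cache_store(const struct desc_key *key, uint32_t hash, void *vk_set)
{
   struct cached_set *slot = &cache[hash % CACHE_SIZE];
   slot->valid = true;
   slot->hash = hash;
   slot->key = *key;
   slot->vk_set = vk_set;
}
```

On a hit, the already-written set is simply rebound; on a miss, a new set is allocated from the pool, written, and stored so later draws with the same descriptors can skip the work.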
What is included here?
- improved descriptor set allocation to reduce overhead from allocating a new set on each draw
- a context-based mechanism for calculating descriptor set identifiers (including disambiguation for collisions)
- splitting the existing "single set" model into separate sets by descriptor type to reduce calculation overhead (sketched below, after this list)
- splitting by type also simplifies resource and barrier tracking since only a single type of barrier is needed
- aggregating and consolidating barriers for each set to reduce flushes
- reuse of barriers for cached sets to eliminate successive calculations
- invalidation of sets when required resources are destroyed
- reusable descriptor pool objects that can be shared across pipelines
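To illustrate the per-type split, here is a minimal sketch (illustrative names only, not the real zink structures) of grouping descriptors by type, with one pool and one cached set per type, so that only the types whose descriptors changed need to be re-hashed on a draw:

```c
#include <stdint.h>

enum desc_type {
   DESC_TYPE_UBO,
   DESC_TYPE_SAMPLER_VIEW,
   DESC_TYPE_SSBO,
   DESC_TYPE_IMAGE,
   DESC_TYPE_COUNT,
};

struct desc_pool {
   void *vk_pool;       /* stand-in for a VkDescriptorPool */
   uint32_t layout_key; /* keyed by set layout, so any pipeline using the
                         * same layout can share this pool */
};

struct draw_descriptor_state {
   /* one pool and one currently bound set per type, instead of a single
    * monolithic set per pipeline */
   struct desc_pool *pool[DESC_TYPE_COUNT];
   void *current_set[DESC_TYPE_COUNT];
   uint32_t changed_mask; /* one bit per type that needs updating */
};

/* Only the types whose descriptors changed since the last draw get
 * re-hashed and re-looked-up; unchanged types keep their cached set and
 * its previously computed barriers. */
static void
update_descriptors(struct draw_descriptor_state *st)
{
   for (uint32_t type = 0; type < DESC_TYPE_COUNT; type++) {
      if (!(st->changed_mask & (1u << type)))
         continue;
      /* hash this type's descriptors and look up / allocate a set from
       * st->pool[type], as in the cache-lookup sketch above */
      st->changed_mask &= ~(1u << type);
   }
}
```

Keying pools by set layout rather than by pipeline is what lets them be shared across pipelines, and because each set contains only one descriptor type, a cached set only ever needs one kind of barrier.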
Why is this so big?
I attempted to split this into a number of smaller patches to make it more easily reviewable. This was somewhat successful, but the result is still a lot of patches. Rather than break things up into successive MRs and have things go in and out of context for reviewers, it seems better to try to reach a "good" stopping point in this first MR.
Generally speaking, this should handle everything that zink could previously handle. It fixes a couple of bugs along the way, but mostly I just need to get this out, since it includes a huge amount of core refactoring that blocks other patches.
Any testing should additionally be done on lavapipe/RADV, not just ANV, due to #4364 (closed), which is still pending.