RFC: mesa/st: i965: Enable fake support for GL_NVX_gpu_memory_info

Ian Romanick requested to merge idr/mesa:review/GL_NVX_gpu_memory_info into main

Some of the datasets for SPECviewperf13 require GL_NVX_gpu_memory_info. They don't check for the extension string before running. Instead, they look for "NVIDIA" in the vendor string (see a8b4e690).

Here's the problem: GL_NVX_gpu_memory_info only makes sense on GPUs with dedicated memory. As a result, some drivers, especially for UMA devices, may choose not to implement it. The issues section of the NVIDIA extension spec even says that it's not applicable to UMA devices:

  1. Should Tegra advertise and support this extension?

    RESOLVED: No. Tegra's unified memory architecture doesn't sensibly map to the queries of this extension.

    A future extension is needed to address this.

This lands many drivers squarely between a rock and a hard place. It doesn't make sense to properly enable the extension, but it would be beneficial for these datasets to run.

This MR partially implements "fake" support for this extension using the same memory reporting infrastructure used for GLX_MESA_query_renderer. In Gallium, this is through PIPE_CAP_VIDEO_MEMORY. In i965, this is through __DRI2_RENDERER_VIDEO_MEMORY.
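
As an illustration only (not this MR's actual diff), the NVX queries could be serviced from that cap roughly as below. The helper name is hypothetical; the `GL_GPU_MEMORY_INFO_*_NVX` enums and `PIPE_CAP_VIDEO_MEMORY` are real.

```c
/* Minimal sketch, assuming PIPE_CAP_VIDEO_MEMORY reports megabytes
 * while the NVX queries return kilobytes.  The function name is
 * illustrative, not an actual Mesa function. */
#include "pipe/p_defines.h"  /* PIPE_CAP_VIDEO_MEMORY */
#include "pipe/p_screen.h"
#include <GL/gl.h>
#include <GL/glext.h>        /* GL_GPU_MEMORY_INFO_*_NVX enums */

static void
fake_gpu_memory_info(struct pipe_screen *screen, GLenum pname, GLint *params)
{
   const GLint video_mem_kb =
      screen->get_param(screen, PIPE_CAP_VIDEO_MEMORY) * 1024;

   switch (pname) {
   case GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX:
   case GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX:
   case GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX:
      /* Report the same number for all three "size" queries. */
      *params = video_mem_kb;
      break;
   case GL_GPU_MEMORY_INFO_EVICTION_COUNT_NVX:
   case GL_GPU_MEMORY_INFO_EVICTED_MEMORY_NVX:
      /* Eviction statistics aren't available through this
       * infrastructure, so report zero. */
      *params = 0;
      break;
   }
}
```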

What is missing is some way to advertise the extension. For testing so far, I have just used MESA_EXTENSION_OVERRIDE="+GL_NVX_gpu_memory_info". This works, but it's not really ideal.

Options considered:

  1. Do nothing. MESA_EXTENSION_OVERRIDE requires that anyone wanting to run SPECviewperf13 on UMA GPUs knows what they're doing. All of those people could probably fit in a single elevator together. 😄

  2. Add a driconf option to specifically enable this extension (a rough sketch follows this list). There are several driconf options that disable an extension, but this would be the first that enables one. Next to doing nothing, this would be the least amount of work.

  3. Add a generic driconf option that behaves like the MESA_EXTENSION_OVERRIDE environment variable. This could allow us to remove the per-extension disable options mentioned above. This is more work now than just adding a per-extension enable option, but it may save work the next time something like this comes along.

  4. Actually implement the extension for Intel UMA GPUs. I don't know if this would be tractable. Some of the information, like number of evictions and total memory evicted, may not be available from the kernel. We may want these features for future GPUs anyway.
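
Purely for illustration, option 2 could look roughly like this with the existing driconf helpers. The option name and the setup function are made up, and I'm assuming the extension bit would land in gl_extensions as `NVX_gpu_memory_info`:

```c
/* Hypothetical sketch of option 2; DRI_CONF_OPT_B and driQueryOptionb
 * are existing driconf helpers, everything else here is made up. */
#include "util/driconf.h"    /* DRI_CONF_OPT_B */
#include "util/xmlconfig.h"  /* driOptionCache, driQueryOptionb */

/* In the driver's driconf option table: */
DRI_CONF_OPT_B(force_gl_nvx_gpu_memory_info, false,
               "Advertise GL_NVX_gpu_memory_info even without dedicated VRAM")

/* During extension setup: */
static void
maybe_force_memory_info(struct gl_extensions *ext,
                        const driOptionCache *cache)
{
   if (driQueryOptionb(cache, "force_gl_nvx_gpu_memory_info"))
      ext->NVX_gpu_memory_info = GL_TRUE;
}
```

Option 3 would instead parse an extension-override string out of driconf and feed it through the same path as MESA_EXTENSION_OVERRIDE, which is why it could subsume the existing per-extension disable options.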

I am definitely looking for feedback here.

There's another issue with SPECviewperf13 on i965 and Iris. The datasets that use this extension "require" 4GB of video memory, but, even on Gen11 GPUs, the Intel drivers only advertise 3GB. Address space issues on older GPUs forced the Intel drivers to advertise only 3GB, but I thought later GPUs didn't have that limitation and that the driver would instead advertise some fraction of total system memory. My system has 32GB, so I would have expected the driver to advertise much more than 3GB. Either way, this is an orthogonal issue.

@kwg @jljusten
