intel/blorp: Disable depth testing for slow depth clears
We'll start doing slow depth clears more often on HIZ_CCS buffers in a
future commit. Reduce the performance impact by making them use less
bandwidth.

From the Depth Test section of the BSpec:

   This function is enabled by the Depth Test Enable state variable. If
   enabled, the pixel's ("source") depth value is first computed. After
   computation the pixel's depth value is clamped to the range defined
   by Minimum Depth and Maximum Depth in the selected CC_VIEWPORT
   state. Then the current ("destination") depth buffer value for this
   pixel is read.

and from the Depth Buffer Updates section of the BSpec:

   If depth testing is disabled or the depth test passed, the incoming
   pixel's depth value is written to the Depth Buffer.

Taken together, it's clear that depth testing isn't necessary to
perform a depth buffer clear (see the sketch after this message): with
the test disabled, the source depth is written unconditionally and the
destination value is never read.

Mark Janes and I analyzed this patch with frameretrace and a depthrange
piglit test, with HiZ disabled to ensure we'd get slow depth clears. We
observed the bandwidth consumed by depth buffer accesses during depth
clears to be cut by about 50% on BDW and SKL.

On a more graphically intensive workload, the Shadowmapping Sascha
benchmark, I took the average of 3 runs on a BDW at a display
resolution of about 1920x1200 (minus some desktop environment
decorations). I measured a 22.61% FPS improvement with HiZ disabled.

v2: The BSpec doesn't mandate this behavior; update the comment
accordingly. (Ken)

Fixes: bc4bb5a7 ("intel/blorp: Emit more complete DEPTH_STENCIL state")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit d5fb9ccc)
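To make the saved read concrete, here is a minimal, self-contained C
model of the depth-write rule quoted above. This is an illustrative
sketch, not Mesa/BLORP code: the struct, function, and field names
(depth_test_enable, depth_write_enable) are hypothetical, loosely
echoing the genxml-style DepthTestEnable/DepthBufferWriteEnable flags,
and a LEQUAL comparison stands in for an arbitrary compare function.

   /* Toy model of the BSpec depth-write rule.  Shows why a depth clear
    * needs no depth test: with the test disabled, the source depth is
    * written unconditionally and the destination is never read, saving
    * one depth-buffer read per pixel.
    */
   #include <stdbool.h>
   #include <stdio.h>

   struct ds_state {
      bool depth_test_enable;   /* hypothetical stand-in for DepthTestEnable */
      bool depth_write_enable;  /* ... and DepthBufferWriteEnable */
   };

   /* Processes one pixel; counts depth-buffer reads in *reads. */
   static void
   depth_pipeline(struct ds_state ds, float src_depth, float *dst_depth,
                  unsigned *reads)
   {
      bool passed = true;
      if (ds.depth_test_enable) {
         (*reads)++;                        /* destination value is read */
         passed = src_depth <= *dst_depth;  /* LEQUAL, for illustration */
      }
      /* "If depth testing is disabled or the depth test passed, the
       * incoming pixel's depth value is written to the Depth Buffer." */
      if (ds.depth_write_enable && passed)
         *dst_depth = src_depth;
   }

   int main(void)
   {
      float buf = 0.25f;   /* stale destination depth */
      unsigned reads = 0;

      /* A slow depth clear: write-only, no test, no destination read. */
      struct ds_state clear = { .depth_test_enable = false,
                                .depth_write_enable = true };
      depth_pipeline(clear, 1.0f, &buf, &reads);
      printf("cleared to %.2f with %u depth-buffer read(s)\n", buf, reads);
      return 0;
   }

For the clear, this prints one written value and zero reads, which is
consistent with the roughly 50% depth-buffer bandwidth reduction
measured above: the per-pixel destination read is simply skipped.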