[GEN9+] large perf drop (up to 1/3) in most 3D benchmarks from force-enabling IOMMU
@eero-t
Submitted by Eero Tamminen. Assigned to Intel GFX Bugs mailing list.
Link to original bug (#111731)
Description
Setup:
- HW: SKL i7-6770HQ
- OS: Ubuntu 18.04 desktop
- SW stack: git versions of drm-tip kernel, X and Mesa
Between the following drm-tip kernel 5.3-rc8 commits:
* 2019-09-10_13-35-40 32c81a317364: drm-tip: 2019y-09m-10d-13h-34m-53s UTC integration manifest
* 2019-09-11_15-07-28 b27acd37b7de: drm-tip: 2019y-09m-11d-15h-06m-37s UTC integration manifest
Kernel performance dropped in most 3D benchmarks. The worst cases were:
* 27% SynMark CSDof (fullscreen)
* 20-25% GpuTest Triangle (1/2 screen window), SynMark VSTangent
* 20% Unigine Heaven, GpuTest Triangle (fullscreen), SynMark DeferredAA
With a few-months-old git version of Mesa & X server, the perf drop in SynMark Fill* tests was also ~20%.
There also seems to be a few percent improvement in the SynMark TexMem* tests at the same time, but that is visible only with specific Mesa driver (i965 or Iris) and X server versions. For tests other than the memory-bandwidth ones, the performance change isn't significantly affected by the Mesa/Xorg version, only by the kernel.
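For reference, below is a minimal sketch of how the same benchmark could be run once per Mesa driver when separating the i965 vs Iris difference in the TexMem* numbers from the kernel-side change. MESA_LOADER_DRIVER_OVERRIDE is the usual Mesa loader knob for picking the driver, but the benchmark binary and test name here are placeholders, not taken from this report:

```python
#!/usr/bin/env python3
# Sketch: run one benchmark under both Mesa drivers so driver-dependent
# TexMem* differences can be told apart from the kernel change.
import os
import subprocess

# Placeholder invocation -- substitute the actual SynMark binary/test used.
BENCHMARK = ["synmark2", "OglTexMem128"]

for driver in ("i965", "iris"):
    env = dict(os.environ, MESA_LOADER_DRIVER_OVERRIDE=driver)
    print(f"--- Mesa driver: {driver} ---")
    subprocess.run(BENCHMARK, env=env, check=False)
```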
This drop is specific to SkullCanyon; it's not visible on other platforms (KBL GT3e, SKL/BDW GT2, BXT). While 3D benchmarks are impacted the most, there also seems to be a marginal perf drop in Media (transcode) tests.
Although this impacts only SkullCanyon, I'm setting severity to major because the perf drop is so large.
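Since the suspected cause is the kernel force-enabling the IOMMU on this platform, a quick way to confirm whether the IOMMU is actually active on a given boot is to look at the kernel command line and the standard sysfs nodes. The sketch below does only that; the paths are generic Linux ones and the script itself is not part of the original report:

```python
#!/usr/bin/env python3
# Sketch: report whether the running kernel has an (Intel) IOMMU active,
# based on the kernel command line and standard sysfs nodes.
import os
from pathlib import Path

def iommu_state():
    # Explicit intel_iommu=/iommu= overrides, if any, show up on the cmdline.
    cmdline = Path("/proc/cmdline").read_text().split()
    overrides = [p for p in cmdline if p.startswith(("intel_iommu=", "iommu="))]

    # /sys/class/iommu holds the active IOMMU units (e.g. dmar0, dmar1, ...);
    # /sys/kernel/iommu_groups holds the resulting device groups.
    units = sorted(os.listdir("/sys/class/iommu")) if os.path.isdir("/sys/class/iommu") else []
    groups = len(os.listdir("/sys/kernel/iommu_groups")) if os.path.isdir("/sys/kernel/iommu_groups") else 0
    return overrides, units, groups

if __name__ == "__main__":
    overrides, units, groups = iommu_state()
    print("cmdline overrides:", overrides or "none")
    print("IOMMU units:      ", units or "none")
    print("IOMMU groups:     ", groups)
```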