1. 07 Mar, 2018 3 commits
  2. 01 Mar, 2018 4 commits
    • anv: Enable VK_KHR_16bit_storage for PushConstant · ba642ee3
      José Casanova Crespo authored
      Enables storagePushConstant16 features of VK_KHR_16bit_storage for Gen8+.
      Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
    • anv: Enable VK_KHR_16bit_storage for SSBO and UBO · 994d2104
      José Casanova Crespo authored
      Enables storageBuffer16BitAccess and uniformAndStorageBuffer16BitAccess
      features of VK_KHR_16bit_storage for Gen8+.
      Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
    • isl/i965/fs: SSBO/UBO buffers need size padding if not multiple of 32-bit · 67d7dd59
      José Casanova Crespo authored
      The surfaces that back the GPU buffers have a boundary check that
      treats accesses to partial dwords as out-of-bounds.
      For example, buffers with 1 or 3 16-bit elements have size 2 or 6, and
      the last two bytes would always be read as 0 or have their writes
      ignored.
      The introduction of 16-bit types implies that we need to align the size
      to 4-byte multiples so that partial dwords can be read/written.
      Adding an unconditional +2 to the size of buffers whose size is not a
      multiple of 4 solves this issue for the general UBO and SSBO cases.
      But, when unsized arrays of 16-bit elements are used it is not possible
      to know if the size was padded or not. To solve this issue the
      implementation calculates the needed size of the buffer surfaces,
      as suggested by Jason:
      surface_size = isl_align(buffer_size, 4) +
                     (isl_align(buffer_size, 4) - buffer_size)
      So when we calculate the buffer_size back in the backend, we
      update the resinfo return value with:
      buffer_size = (surface_size & ~3) - (surface_size & 3)
      This buffer requirement is also exposed when robust buffer access
      is enabled, so these buffer sizes are recommended to be multiples of 4.
      v2: (Jason Ekstrand)
          Move padding logic from anv to isl_surface_state.
          Move calculation of the original size from spirv to the driver
          backend.
      v3: (Jason Ekstrand)
          Rename some variables and use a similar expression when calculating
          the padding as when obtaining the original buffer size.
          Avoid use of an unnecessary component call at brw_fs_nir.
      v4: (Jason Ekstrand)
          Complete the comment with the buffer size calculation explanation
          in brw_fs_nir.
      Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
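      The padding and recovery formulas above can be checked with a small
      standalone sketch. This is illustrative only: align4 stands in for
      isl_align, and these functions are not the actual isl/brw code.

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Round up to the next multiple of 4, standing in for isl_align(x, 4). */
      static uint32_t align4(uint32_t x) { return (x + 3) & ~3u; }

      /* surface_size = isl_align(buffer_size, 4) +
       *                (isl_align(buffer_size, 4) - buffer_size) */
      static uint32_t pad_surface_size(uint32_t buffer_size)
      {
         return align4(buffer_size) + (align4(buffer_size) - buffer_size);
      }

      /* buffer_size = (surface_size & ~3) - (surface_size & 3) */
      static uint32_t recover_buffer_size(uint32_t surface_size)
      {
         return (surface_size & ~3u) - (surface_size & 3u);
      }

      int main(void)
      {
         /* Dword-aligned sizes round-trip unchanged. */
         assert(pad_surface_size(8) == 8);
         assert(recover_buffer_size(8) == 8);
         /* 2- and 6-byte buffers (1 or 3 16-bit elements) are padded so the
          * partial dword is in-bounds, and the low two bits of the surface
          * size encode exactly the padding amount, which is why the backend
          * can subtract them back out. */
         assert(pad_surface_size(2) == 6 && recover_buffer_size(6) == 2);
         assert(pad_surface_size(6) == 10 && recover_buffer_size(10) == 6);
         return 0;
      }
      ```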
    • anv: Always set has_context_priority · 6d3edbea
      Jason Ekstrand authored
      We don't zalloc the physical device so we need to unconditionally set
      everything.  Crucible helpfully initializes all allocations to 139, so
      has_context_priority was reading as true regardless of whether or not
      the kernel actually supports context priorities.
      Fixes: 6d8ab533 "anv: implement VK_EXT_global_priority extension"
      Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
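      The failure mode here is generic to any non-zeroing allocator. A
      hypothetical sketch of the bug class and the fix; the struct, field,
      and helper names are illustrative, not anv's actual code:

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stdlib.h>

      /* Illustrative stand-in for the physical device struct. */
      struct phys_dev {
         bool has_context_priority;
      };

      /* Stand-in for the runtime kernel-feature query. */
      static bool kernel_supports_context_priority(void) { return false; }

      static struct phys_dev *create_phys_dev(void)
      {
         /* Plain malloc: the memory is NOT zeroed, so a debug allocator that
          * fills allocations with 139 makes any bool field read as true
          * unless it is written on every path. */
         struct phys_dev *dev = malloc(sizeof(*dev));
         if (!dev)
            return NULL;
         /* The fix: assign unconditionally instead of only setting the flag
          * inside an "if (supported)" branch. */
         dev->has_context_priority = kernel_supports_context_priority();
         return dev;
      }

      int main(void)
      {
         struct phys_dev *dev = create_phys_dev();
         assert(dev && !dev->has_context_priority);
         free(dev);
         return 0;
      }
      ```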
  3. 28 Feb, 2018 2 commits
  4. 27 Feb, 2018 2 commits
  5. 21 Feb, 2018 1 commit
    • anv/image: Separate modifiers from legacy scanout · adca1e4a
      Jason Ekstrand authored
      For a bit there, we had a bug in i965 where it ignored the tiling of the
      modifier and used the one from the BO instead.  At one point, we thought
      this was best fixed by setting a tiling from Vulkan.  However, we've
      decided that i965 was just doing the wrong thing and have fixed it as of
      The old assumptions also affected the solution we used for legacy
      scanout in Vulkan.  Instead of treating it specially, we just treated it
      like a modifier like we do in GL.  This commit goes back to making it
      its own thing so that it's clear in the driver when we're using
      modifiers and when we're using legacy paths.
      v2 (Jason Ekstrand):
       - Rename legacy_scanout to needs_set_tiling
      Reviewed-by: Daniel Stone <daniels@collabora.com>
  6. 16 Feb, 2018 1 commit
  7. 14 Feb, 2018 1 commit
  8. 06 Feb, 2018 1 commit
  9. 23 Jan, 2018 9 commits
  10. 17 Jan, 2018 1 commit
  11. 11 Jan, 2018 1 commit
  12. 28 Dec, 2017 1 commit
  13. 17 Dec, 2017 1 commit
  14. 08 Dec, 2017 3 commits
    • anv: Enable UBO pushing · 4c7af87f
      Jason Ekstrand authored
      Push constants on Intel hardware are significantly more performant than
      pull constants.  Since most Vulkan applications don't actively use push
      constants, or at least don't use them heavily, we're pulling way more
      than we should be.  By enabling pushing chunks of UBOs we can get rid
      of a lot of those pulls.
      On my SKL GT4e, this improves the performance of Dota 2 and Talos by
      around 2.5% and improves Aztec Ruins by around 2%.
      Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
    • anv/device: Increase the UBO alignment requirement to 32 · 8d340771
      Jason Ekstrand authored
      Push constants work in terms of 32-byte chunks so, if we want to be able
      to push UBOs, everything needs to be 32-byte aligned.  Currently, we
      only require 16-byte alignment, which is too small.
      Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
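      A minimal illustration of why the alignment matters, assuming only what
      the message states: push constants are handled in 32-byte chunks, so a
      UBO range maps cleanly onto whole chunks only if its offset starts on a
      chunk boundary. The names below are hypothetical, not driver code.

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stdint.h>

      #define PUSH_CHUNK_BYTES 32u

      /* An offset covers whole push-constant chunks only if it is
       * chunk-aligned; a 16-byte minimum UBO alignment would permit
       * offsets such as 16 or 48 that start mid-chunk. */
      static bool offset_is_chunk_aligned(uint32_t ubo_offset)
      {
         return ubo_offset % PUSH_CHUNK_BYTES == 0;
      }

      int main(void)
      {
         assert(offset_is_chunk_aligned(0));
         assert(offset_is_chunk_aligned(64));
         /* Legal under a 16-byte minimum alignment, but mid-chunk: */
         assert(!offset_is_chunk_aligned(16));
         assert(!offset_is_chunk_aligned(48));
         return 0;
      }
      ```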
    • anv: Disable VK_KHR_16bit_storage · 597c1944
      Jason Ekstrand authored
      The testing for this extension is currently very poor.  The CTS tests
      only test accessing UBOs and SSBOs at dynamic offsets so none of our
      constant-offset paths get triggered at all.  Also, there's an assertion
      in our handling of nir_intrinsic_load_uniform that offset % 4 == 0 which
      is never triggered, indicating that nothing ever gets loaded from an
      offset which is not a dword.  Both push constants and the constant
      offset pull paths are complex enough that we really don't want to ship
      without tests.  We'll turn the extension back on once we have decent
      tests.
  15. 06 Dec, 2017 3 commits
  16. 04 Dec, 2017 4 commits
  17. 22 Nov, 2017 2 commits