1. 21 Oct, 2021 1 commit
  2. 20 Oct, 2021 1 commit
  3. 19 Oct, 2021 1 commit
• Fix RandR leasing for more than 1 simultaneously active lease. · f467f85c
  Mario Kleiner authored
      Due to a switched order of parameters in the xorg_list_add()
      call inside ProcRRCreateLease(), adding a new lease for RandR
      output leasing does not actually add the new RRLeasePtr lease
record to the list of existing leases for an X-Screen, but instead
      replaces the existing list with a new list that has the new lease
      as the only element, and probably leaks a bit of memory.
      Therefore the server "forgets" all active leases for a screen,
      except for the last added lease. If multiple leases are created
      in a session, then destruction of all leases but the last one
will fail in many cases, e.g., during server shutdown in
RRCloseScreen(), or during resource destruction.
Most importantly, it fails if a client simply close(fd)'es the
      DRM master descriptor to release a lease, quits, gets killed or
      crashes. In this case the kernel will destroy the lease and shut
      down the display output, then send a lease event via udev to the
ddx, which, e.g., in the modesetting-ddx, triggers a handler that
is supposed to detect the released lease and tell
      the server to terminate the lease on the server side as well,
      via xf86CrtcLeaseTerminated(), but this doesn't happen for all
      the leases the server has forgotten. The end result is a dead
      video output, as the server won't reinitialize the crtc's
      corresponding to the terminated but forgotten lease.
      This bug was observed when using the amdvlk AMD OSS Vulkan
      driver and trying to lease multiple VKDisplay's, and also
      under Mesa radv, as both Mesa Vulkan/WSI/Display and amdvlk
      terminate leases by simply close()ing the lease fd, not by
      sending explicit RandR protocol requests to free leases.
      Leasing worked, but ending a session with multiple active
      leases ended in a lot of unpleasant darkness.
      Fixing the wrong argument order to xorg_list_add() fixes the
      problem. Tested on single-X-Screen and dual-X-Screen setups,
      with one, two or three active leases.
      Please merge this for the upcoming server 21.1 branch.
      Merging into server 1.20 would also make a lot of sense.
      Fixes: e4e34476
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
      Cc: Keith Packard <keithp@keithp.com>
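The swapped-argument pitfall is easy to reproduce with a minimal re-creation of the server's intrusive list (a sketch; the real `xorg_list_add(entry, head)` lives in the server's list header, and `demo_lease_list` is illustrative, not ProcRRCreateLease()):

```c
#include <stddef.h>

/* Minimal re-creation of the server's intrusive list:
 * xorg_list_add(entry, head) links ENTRY in right after HEAD. */
struct xorg_list { struct xorg_list *next, *prev; };

static void xorg_list_init(struct xorg_list *l) { l->next = l->prev = l; }

static void xorg_list_add(struct xorg_list *entry, struct xorg_list *head)
{
    struct xorg_list *next = head->next;
    next->prev = entry;
    entry->next = next;
    entry->prev = head;
    head->next = entry;
}

static int xorg_list_length(const struct xorg_list *head)
{
    int n = 0;
    for (const struct xorg_list *it = head->next; it != head; it = it->next)
        n++;
    return n;
}

/* Add 'count' leases to a per-screen list, either with the fixed
 * argument order or with the swapped (pre-fix) order, and return how
 * many leases the list ends up holding. */
static int demo_lease_list(int swapped_args, int count)
{
    struct xorg_list screen_leases, leases[8];
    int i;

    xorg_list_init(&screen_leases);
    for (i = 0; i < count && i < 8; i++) {
        xorg_list_init(&leases[i]);
        if (swapped_args)
            xorg_list_add(&screen_leases, &leases[i]); /* buggy order */
        else
            xorg_list_add(&leases[i], &screen_leases); /* fixed order */
    }
    return xorg_list_length(&screen_leases);
}
```

With the fixed order every lease stays reachable from the screen's list head; with the swapped order the head is re-linked into each new lease, so only the last added lease survives, exactly the "forgotten leases" symptom described above.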
  4. 14 Oct, 2021 1 commit
  5. 12 Oct, 2021 1 commit
• xwayland: Notify of root size change with XRandR emulation · 246ae00b
  Olivier Fourdan authored and committed
Some clients (typically Java, but maybe others) rely on ConfigureNotify
or RRScreenChangeNotify events to tell that the XRandR request has
taken effect.
When emulated XRandR is used in Xwayland, compute the emulated root size
      and send the expected ConfigureNotify and RRScreenChangeNotify events
      with the emulated size of the root window to the asking X11 client.
      Note that the root window size does not actually change, as XRandR
      emulation is achieved by scaling the client window using viewports in
      Wayland, so this event is sort of misleading.
      Also, because Xwayland is using viewports, emulating XRandR does not
reconfigure the outputs' locations, meaning that the actual size of the
      root window which encompasses all the outputs together may not change
      in a multi-monitor setup. To work around this limitation, when using an
      emulated mode, we report the size of that emulated mode alone as the
      root size for the configure notify event.
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
  6. 08 Oct, 2021 4 commits
• dix/privates.c: Avoid undefined behaviour after realloc() · f9f705bf
  Alexander Richardson authored and Povilas Kanapickas committed
      Adding the offset between the realloc result and the old allocation to
      update pointers into the new allocation is undefined behaviour: the
      old pointers are no longer valid after realloc() according to the C
standard. While this works on almost all architectures and compilers,
it causes problems on architectures that track pointer bounds (e.g.
      CHERI or Arm's Morello): the DevPrivateKey pointers will still have the
      bounds of the previous allocation and therefore any dereference will
      result in a run-time trap.
      I found this due to a crash (dereferencing an invalid capability) while
      trying to run `XVnc` on a CHERI-RISC-V system. With this commit I can
      successfully connect to the XVnc instance running inside a QEMU with a
      VNC viewer on my host.
      This also changes the check whether the allocation was moved to use
uintptr_t instead of a pointer since, according to the C standard,
"The value of a pointer becomes indeterminate when the object it
points to (or just past) reaches the end of its lifetime."
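A well-defined version of the pointer-rebasing pattern looks like this (a sketch, not the privates.c code; `Item`, `grow_and_rebase`, and the zero-fill are illustrative). The key is that offsets are captured while the old block is still valid, and the old base pointer is never touched after the realloc():

```c
#include <stdlib.h>
#include <string.h>

typedef struct { int value; } Item;

/* Grow an array from old_n to new_n elements while keeping an external
 * pointer into it valid. The offset is computed BEFORE the realloc();
 * the stale base pointer is never dereferenced or used in arithmetic
 * afterwards, which is what bounds-tracking architectures like CHERI
 * or Arm Morello require. */
static Item *grow_and_rebase(Item *base, size_t old_n, size_t new_n,
                             Item **cursor /* points into base[0..old_n) */)
{
    size_t off = (size_t)(*cursor - base);   /* capture while valid */
    Item *grown = realloc(base, new_n * sizeof(Item));

    if (!grown)
        return NULL;                         /* caller still owns base */
    memset(grown + old_n, 0, (new_n - old_n) * sizeof(Item));
    *cursor = grown + off;                   /* rebase from new block */
    return grown;
}
```

Contrast with the undefined pre-fix pattern `*cursor += (grown - base)`, which performs arithmetic on a pointer that realloc() has already invalidated.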
• xf86: Accept devices with the 'simpledrm' driver. · b9218fad
  nerdopolis authored and Povilas Kanapickas committed
SimpleDRM 'devices' are fallback devices and do not have a busid,
so they are getting skipped. This allows simpledrm to work
with the modesetting driver.
• modesetting: Consider RandR primary output for selection of sync crtc. · 4b75e657
  Mario Kleiner authored and Povilas Kanapickas committed
      The "sync crtc" is the crtc used to drive the display timing of a
      drawable under DRI2 and DRI3/Present. If a drawable intersects
      multiple video outputs, then normally the crtc is chosen which has
      the largest intersection area with the drawable.
If multiple outputs / crtc's have exactly the same intersection
      area then the crtc chosen was simply the first one with maximum
      intersection. Iow. the choice was random, depending on plugging
      order of displays.
      This adds the ability to choose a preferred output in such a tie
      situation. The RandR output marked as "primary output" is chosen
      on such a tie.
      This new behaviour and its implementation is consistent with other
      video ddx drivers. See amdgpu-ddx, ati-ddx and nouveau-ddx for
      reference. This commit is a straightforward port from amdgpu-ddx.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
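The selection rule can be sketched as follows (illustrative types and names, not the modesetting-ddx code):

```c
/* Pick the "sync crtc": the largest intersection area with the
 * drawable wins; on an exact tie, the crtc driving the RandR primary
 * output is preferred, so the result no longer depends on the
 * plugging order of displays. */
typedef struct {
    int area;        /* intersection area with the drawable, in pixels */
    int is_primary;  /* crtc drives the RandR primary output */
} CrtcCoverage;

static int pick_sync_crtc(const CrtcCoverage *crtcs, int n)
{
    int best = -1;

    for (int i = 0; i < n; i++) {
        if (crtcs[i].area <= 0)
            continue;   /* drawable does not intersect this crtc */
        if (best < 0 ||
            crtcs[i].area > crtcs[best].area ||
            (crtcs[i].area == crtcs[best].area && crtcs[i].is_primary))
            best = i;
    }
    return best;        /* index of the sync crtc, or -1 */
}
```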
• modesetting: Handle mixed VRR and non-VRR display setups better. · 017ce263
  Mario Kleiner authored and Povilas Kanapickas committed
      In a setup with both VRR capable and non-VRR capable displays,
      it was so far inconsistent if the driver would allow use of
      VRR support or not, as "is_connector_vrr_capable" was set to
      whatever the capabilities of the last added drm output were.
      Iow. the plugging order of monitors determined the outcome.
      Fix this: Now if at least one display is VRR capable, the driver
      will treat an X-Screen as capable for VRR, plugging order no
      longer matters.
      Tested with a dual-display setup with one VRR monitor and one
non-VRR monitor. This is also beneficial with the new Option
"AsyncFlipSecondaries".
While we are at it, also add some so far missing description of
the "VariableRefresh" driver option, copied from amdgpu-ddx.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
  7. 07 Oct, 2021 2 commits
• modesetting: Enable GAMMA_LUT for lut's with up to 4096 slots. · 66e5a5bb
  Mario Kleiner authored and Povilas Kanapickas committed
      A lut size of 4096 slots has been verified to work correctly,
      as tested with amdgpu-kms. Intel Tigerlake Gen12 hw has a very
      large GAMMA_LUT size of 262145 slots, but also issues with its
      current GAMMA_LUT implementation, as of Linux 5.14.
      Therefore we keep GAMMA_LUT off for large lut's. This currently
      excludes Intel Icelake, Tigerlake and later.
This can be overridden via the "UseGammaLUT" boolean xorg.conf option
      to force use of GAMMA_LUT on or off.
      See following link for the Tigerlake situation:
      drm/intel#3916 (comment 1085315)
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
• xkb: Drop check for XkbSetMapResizeTypes · 8b7f4d32
  Ray Strode authored and committed
      Commit 446ff2d3 added checks to
      prevalidate the size of incoming SetMap requests.
      That commit checks for the XkbSetMapResizeTypes flag to be set before
      allowing key types data to be processed.
Key types data can be changed or even just sent wholesale unchanged
without the number of key types changing, however. The check for
      XkbSetMapResizeTypes rejects those legitimate requests. In particular,
      XkbChangeMap never sets XkbSetMapResizeTypes and so always fails now
      any time XkbKeyTypesMask is in the changed mask.
      This commit drops the check for XkbSetMapResizeTypes in flags when
      prevalidating the request length.
  8. 06 Oct, 2021 1 commit
  9. 05 Oct, 2021 2 commits
• Use EGL_LINUX_DMA_BUF_EXT to create GBM bo EGLImages · f1572937
  James Jones authored
      Xwayland was passing GBM bos directly to
      eglCreateImageKHR using the EGL_NATIVE_PIXMAP_KHR
target. Given the EGL GBM platform spec claims it
is invalid to create an EGLSurface from a native
      pixmap on the GBM platform, implying there is no
      mapping between GBM objects and EGL's concept of
      native pixmaps, this seems a bit questionable.
      This change modifies the bo import function to
      extract all the required data from the bo and then
      imports it as a dma-buf instead when the dma-buf +
      modifiers path is available.
Signed-off-by: James Jones <jajones@nvidia.com>
Reviewed-by: Simon Ser <contact@emersion.fr>
• xwayland/shm: Avoid integer overflow on large pixmaps · 079c5ccb
  Olivier Fourdan authored and committed
      Xwayland's xwl_shm_create_pixmap() computes the size of the shared
      memory pool to create using a size_t, yet the Wayland protocol uses an
      integer for that size.
      If the pool size becomes larger than INT32_MAX, we end up asking Wayland
to create a shared memory pool of negative size, which in turn will raise
a protocol error that terminates the Wayland connection and, with it,
Xwayland itself.
Avoid that issue early by returning a NULL pixmap in that case, which will
      trigger a BadAlloc error, but leave Xwayland alive.
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Jonas Ådahl <jadahl@gmail.com>
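The guard amounts to checking the 64-bit pool size against INT32_MAX before handing it to `wl_shm_create_pool()`, whose size argument is an int32 (a sketch with an illustrative helper name, not Xwayland's exact code):

```c
#include <stdint.h>

/* wl_shm_create_pool() takes the pool size as an int32. Compute the
 * size of the backing pool in 64 bits and refuse anything that would
 * wrap negative, so the caller can return a NULL pixmap (raising
 * BadAlloc) instead of triggering a fatal Wayland protocol error. */
static int shm_pool_size_ok(uint64_t stride, uint64_t height)
{
    uint64_t size = stride * height;    /* bytes needed for the pixmap */
    return size <= INT32_MAX;
}
```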
  10. 27 Sep, 2021 3 commits
• Revert "modesetting: Only use GAMMA_LUT if its size is 1024" · 545fa90c
  Mario Kleiner authored and Povilas Kanapickas committed
      This reverts commit 617f591f
The problem described in that commit exists, but the two
preceding commits with improvements to the server's RandR
code should avoid the mentioned problems while allowing the
use of GAMMA_LUT's instead of the legacy gamma lut.
Use of legacy gamma lut's is not a good fix, because it will reduce
color output precision on gpu's with more than 1024 GAMMA_LUT
slots, e.g., AMD, ARM MALI and KOMEDA with 4096 slot luts,
and some MediaTek parts with 512 slot luts. On KOMEDA, legacy
lut's are completely unsupported by the kms driver, so gamma
correction gets disabled.
      The situation is especially bad on Intel Icelake and later:
      Use of legacy gamma tables will cause the kms driver to switch
      to hardware legacy lut's with 256 slots, 8 bit wide, without
      interpolation. This way color output precision is restricted to
      8 bpc and any deep color / HDR output (10 bpc, fp16, fixed point 16)
      becomes impossible. The latest Intel gen gpu's would have worse
      color precision than parts which are more than 10 years old.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
• xfree86: Let xf86RandR12CrtcComputeGamma() deal with non-power-of-2 sizes. · 7326e131
  Mario Kleiner authored and Povilas Kanapickas committed
      The assumption in the upsampling code was that the crtc->gamma_size
      size of the crtc's gamma table is a power of two. This is true for
      almost all current driver + gpu combos at least on Linux, with typical
      sizes of 256, 512, 1024 or 4096 slots.
      However, Intel Gen-11 Icelake and later are outliers, as their gamma
      table has 2^18 + 1 slots, very big and not a power of two!
Try to make upsampling behave at least reasonably: Replicate the
      last gamma value to fill up remaining crtc->gamma_red/green/blue
      slots, which would normally stay uninitialized. This is important,
      because while the intel display driver does not actually use all
      2^18+1 values passed as part of a GAMMA_LUT, it does need the
      very last slot, which would not get initialized by the old code.
This should hopefully create reasonable behaviour with Icelake+,
but is untested on the actual Intel hw due to lack of suitable
hardware.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
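The replication idea in a nutshell (a sketch of the approach, not the exact xf86RandR12CrtcComputeGamma() code; the integer-division slot spreading reflects the old power-of-two assumption):

```c
#include <stddef.h>
#include <stdint.h>

typedef uint16_t CARD16;

/* Upsample a gamma ramp of in_size slots into a crtc LUT of out_size
 * slots (in_size <= out_size, both nonzero). Integer division spreads
 * each input slot over out_size/in_size output slots; when out_size is
 * not a multiple of in_size (e.g. Icelake's 2^18 + 1 slots) the
 * trailing slots would stay uninitialized, so the last input value is
 * replicated into them - including the crucial very last slot. */
static void upsample_ramp(const CARD16 *in, size_t in_size,
                          CARD16 *out, size_t out_size)
{
    size_t mult = out_size / in_size;

    for (size_t i = 0; i < in_size; i++)
        for (size_t j = 0; j < mult; j++)
            out[i * mult + j] = in[i];

    /* fill the out_size % in_size leftover slots */
    for (size_t i = in_size * mult; i < out_size; i++)
        out[i] = in[in_size - 1];
}
```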
• xfree86: Avoid crash in xf86RandR12CrtcSetGamma() memcpy path. · 966f5674
  Mario Kleiner authored and Povilas Kanapickas committed
      If randrp->palette_size is zero, the memcpy() path can read past the
      end of the randr_crtc's gammaRed/Green/Blue tables if the hw crtc's
      gamma_size is greater than the randr_crtc's gammaSize.
Avoid this by clamping the to-be-copied size to the smaller of both
sizes.
Note that during regular server startup, the memcpy() path is only
      taken initially twice, but then a suitable palette is created for
      use during a session. Therefore during an actual running X-Session,
xf86RandR12CrtcComputeGamma() will be used, which makes sure that
      data is properly up- or down-sampled for mismatching source and
      target crtc gamma sizes.
      This should avoid reading past randr_crtc gamma memory for gpu's
      with big crtc->gamma_size, e.g., AMD/MALI/KOMEDA 4096 slots, or
      Intel Icelake and later with 262145 slots.
Tested against modesetting-ddx and amdgpu-ddx under screen color
depth 24 (8 bpc) and 30 (10 bpc) to make sure that clamping happens
as intended.
This is an alternative fix for the one attempted in an earlier commit.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
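The clamp itself is a one-liner (a sketch with illustrative names, not the xf86RandR12CrtcSetGamma() code):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef uint16_t CARD16;

/* Copy a RandR gamma table into the hw crtc's table without reading
 * past the end of the (possibly smaller) source: clamp the copy to the
 * smaller of the two sizes. Returns the number of slots copied. */
static size_t copy_gamma_clamped(CARD16 *dst, size_t hw_size,
                                 const CARD16 *src, size_t randr_size)
{
    size_t n = hw_size < randr_size ? hw_size : randr_size;

    memcpy(dst, src, n * sizeof(CARD16));
    return n;
}
```

Without the clamp, a hw crtc with 4096 (or Icelake's 262145) slots would read that many entries from a RandR table holding far fewer.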
  11. 23 Sep, 2021 1 commit
• xwayland/glx: Enable sRGB fbconfigs · 6c1e6429
  Adam Jackson authored and committed
      We turn this on if the GL underneath us can enable GL_FRAMEBUFFER_SRGB.
      We do try to generate both capable and incapable configs, which is to
keep llvmpipe working until the client side gets smarter about its sRGB
handling.
  12. 17 Sep, 2021 1 commit
  13. 15 Sep, 2021 10 commits
  14. 10 Sep, 2021 2 commits
• xfree86: NUL-terminate strings in hwEnableIO · 72c5d153
  Aaron Plattner authored
      The Linux version of xf86EnableIO calls a helper function called hwEnableIO().
      Except on Alpha, this function reads /proc/ioports looking for the 'keyboard'
      and 'timer' ports, extracts the port ranges, and enables access to them. It does
      this by reading 4 bytes from the string for the start port number and 4 bytes
      for the last port number, passing those to atoi(). However, it doesn't add a
      fifth byte for a NUL terminator, so some implementations of atoi() read past the
      end of this string, triggering an AddressSanitizer error:
        ==1383==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7fff71fd5b74 at pc 0x7fe1be0de3e0 bp 0x7fff71fd5ae0 sp 0x7fff71fd5288
        READ of size 5 at 0x7fff71fd5b74 thread T0
            #0 0x7fe1be0de3df in __interceptor_atoi /build/gcc/src/gcc/libsanitizer/asan/asan_interceptors.cpp:520
            #1 0x564971adcc45 in hwEnableIO ../hw/xfree86/os-support/linux/lnx_video.c:138
            #2 0x564971adce87 in xf86EnableIO ../hw/xfree86/os-support/linux/lnx_video.c:174
            #3 0x5649719f6a30 in InitOutput ../hw/xfree86/common/xf86Init.c:439
            #4 0x564971585924 in dix_main ../dix/main.c:190
            #5 0x564971b6246e in main ../dix/stubmain.c:34
            #6 0x7fe1bdab6b24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
            #7 0x564971490e9d in _start (/home/aaron/git/x/xserver/build.asan/hw/xfree86/Xorg+0xb2e9d)
        Address 0x7fff71fd5b74 is located in stack of thread T0 at offset 100 in frame
            #0 0x564971adc96a in hwEnableIO ../hw/xfree86/os-support/linux/lnx_video.c:118
          This frame has 3 object(s):
            [32, 40) 'n' (line 120)
            [64, 72) 'buf' (line 122)
            [96, 100) 'target' (line 122) <== Memory access at offset 100 overflows this variable
        HINT: this may be a false positive if your program uses some custom stack unwind mechanism, swapcontext or vfork
              (longjmp and C++ exceptions *are* supported)
        SUMMARY: AddressSanitizer: stack-buffer-overflow /build/gcc/src/gcc/libsanitizer/asan/asan_interceptors.cpp:520 in __interceptor_atoi
        Shadow bytes around the buggy address:
          0x10006e3f2b10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
          0x10006e3f2b20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
          0x10006e3f2b30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
          0x10006e3f2b40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
          0x10006e3f2b50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        =>0x10006e3f2b60: 00 00 f1 f1 f1 f1 00 f2 f2 f2 00 f2 f2 f2[04]f3
          0x10006e3f2b70: f3 f3 00 00 00 00 00 00 00 00 00 00 00 00 00 00
          0x10006e3f2b80: 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1
          0x10006e3f2b90: f1 f1 f8 f2 00 f2 f2 f2 f8 f3 f3 f3 00 00 00 00
          0x10006e3f2ba0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1
          0x10006e3f2bb0: f1 f1 00 f3 f3 f3 00 00 00 00 00 00 00 00 00 00
        Shadow byte legend (one shadow byte represents 8 application bytes):
          Addressable:           00
          Partially addressable: 01 02 03 04 05 06 07
          Heap left redzone:       fa
          Freed heap region:       fd
          Stack left redzone:      f1
          Stack mid redzone:       f2
          Stack right redzone:     f3
          Stack after return:      f5
          Stack use after scope:   f8
          Global redzone:          f9
          Global init order:       f6
          Poisoned by user:        f7
          Container overflow:      fc
          Array cookie:            ac
          Intra object redzone:    bb
          ASan internal:           fe
          Left alloca redzone:     ca
          Right alloca redzone:    cb
          Shadow gap:              cc
      Fix this by NUL-terminating the string.
      Fixes: #1193 (comment 1053306)
Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
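The fix boils down to giving atoi() a terminated buffer (a sketch of the pattern; `parse_port` and the fixed 4-character width are illustrative, not the exact lnx_video.c code):

```c
#include <stdlib.h>
#include <string.h>

/* Extract a 4-character port number from a /proc/ioports line. The
 * buffer is 5 bytes so the copied characters are always followed by a
 * NUL; the pre-fix code handed atoi() a 4-byte buffer with no
 * terminator, which some atoi() implementations read past, as the
 * AddressSanitizer report above shows. */
static int parse_port(const char *line_pos)
{
    char buf[5];

    memcpy(buf, line_pos, 4);
    buf[4] = '\0';            /* the missing terminator */
    return atoi(buf);
}
```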
• modesetting: Only use GAMMA_LUT if its size is 1024 · 617f591f
  Aaron Plattner authored
      GAMMA_LUT sizes other than 1024 cause a crash during startup if the memcpy()
      calls in xf86RandR12CrtcSetGamma() read past the end of the legacy X11 /
      XVidMode gamma ramp.
      This is a problem on Intel ICL / GEN11 platforms because they report a GAMMA_LUT
      size of 262145. Since it's not clear that the modesetting driver will generate a
      proper gamma ramp at that size even if xf86RandR12CrtcSetGamma() is fixed, just
disable use of GAMMA_LUT for sizes other than 1024 for now. This will cause the
modesetting driver to disable the CTM property and fall back to the legacy gamma
ramp.
Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
      Fixes: #1193
      Tested-by: Mark Herbert
  15. 09 Sep, 2021 2 commits
• glamor: Fix handling of 1-bit pixmaps · e59e24c8
  Povilas Kanapickas authored
      Since 8702c938 the pixmap formats are
      handled in a single place. In the process of conversion the difference
      between pixmap formats that can be uploaded and those that can be
rendered on GL side has been lost. This affects only 1-bit pixmaps:
they aren't supported on GL, but can be converted to a R8 or A8 format
for rendering (see glamor_get_tex_format_type_from_pictformat()).
To work around this we add a separate flag that specifies whether the
format actually supports rendering in GL, convert all checks to use this
flag, and then add 1-bit pixmap formats that don't support rendering in
GL.
Fixes: 8702c938
      Closes: #1210
Acked-by: Olivier Fourdan <ofourdan@redhat.com>
Tested-by: Olivier Fourdan <ofourdan@redhat.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
• modesetting: Add option for non-vsynced flips for "secondary" outputs. · 68f01c0f
  Mario Kleiner authored and Povilas Kanapickas committed
      Whenever an unredirected fullscreen window uses pageflipping for a
      DRI3/Present PresentPixmap() operation and the X-Screen has more than
      one active output, multiple crtc's need to execute pageflips. Only
      after the last flip has completed can the PresentPixmap operation
      as a whole complete.
      If a sync_flip is requested for the present, then the current
      implementation will synchronize each pageflip to the vblank of
      its associated crtc. This provides tear-free image presentation
      across all outputs, but introduces a different artifact, if not
      all outputs run at the same refresh rate with perfect synchrony:
      The slowest output throttles the presentation rate, and present
      completion is delayed to flip completion of the "latest" output
      to complete. This means degraded performance, e.g., a dual-display
      setup with a 144 Hz monitor and a 60 Hz monitor will always be
      throttled to at most 60 fps. It also means non-constant present
rate if refresh cycles drift against each other, creating complex
"beat patterns", tremors, stutters and periodic slowdowns - quite
unpleasant.
Such a scenario will be especially annoying if one uses multiple
      outputs in "mirror mode" aka "clone mode". One output will usually
      be the "production output" with the highest quality and fastest
      display attached, whereas a secondary mirror output just has a
      cheaper display for monitoring attached. Users care about perfect
      and perfectly timed tear-free presentation on the "production output",
but care less about quality on the secondary "mirror output". They
      are willing to trade quality on secondary outputs away in exchange
      for better presentation timing on the "production output".
      One example use case for such production + monitoring displays are
      neuroscience / medical science applications where one high quality
      display device is used to present visual animations to test subjects
      or patients in a fMRI scanner room (production display), whereas
      an operator monitors the same visual animations from a control room
      on a lower quality display. Presentation timing needs to be perfect,
      and animations high-speed and tear-free for the production display,
      whereas quality and timing don't matter for the monitoring display.
This commit gives users the option to choose such a trade-off.
It adds a new boolean option "AsyncFlipSecondaries" to the device section
      of xorg.conf. If this option is specified as true, then DRI3 pageflip
      behaviour changes as follows:
1. The "reference crtc" for a window's PresentPixmap operation does a
         vblank synced flip, or a DRM_MODE_PAGE_FLIP_ASYNC non-synchronized
         flip, as requested by the caller, just as in the past. Typically
         flips will be requested to be vblank synchronized for tear-free
         presentation. The "reference crtc" is the one chosen by the caller
         to drive presentation timing (as specified by PresentPixmap()'s
         "target_msc", "divisor", "remainder" parameters and implemented by
         vblank events) and to deliver Present completion timestamps (msc
         and ust) extracted from its pageflip completion event.
      2. All other crtc's, which also page-flip in a multi-display configuration,
         will try to flip with DRM_MODE_PAGE_FLIP_ASYNC, ie. immediately and
         not synchronized to vblank. This allows the PresentPixmap operation
         to complete with little delay compared to a single-display present,
         especially if the different crtc's run at different video refresh
         rates or their refresh cycles are not perfectly synchronized, but
         drift against each other. The downside is potential tearing artifacts
         on all outputs apart from the one of the "reference crtc".
Successfully tested on an AMD gpu with single-display, dual-display and
      triple-display setups, and with single-X-Screen as well as dual-X-Screen
      "ZaphodHeads" configurations.
      Please consider merging this commit for the upcoming server 1.21 branch.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
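The per-crtc decision described in points 1 and 2 can be sketched like this (illustrative function and parameter names; DRM_MODE_PAGE_FLIP_ASYNC is the real kernel flag, re-defined here so the sketch is self-contained):

```c
#include <stdbool.h>
#include <stdint.h>

#define DRM_MODE_PAGE_FLIP_ASYNC 0x02   /* as in drm_mode.h */

/* Flags for one crtc's pageflip in a multi-output present: the
 * reference crtc honours the caller's sync request; with Option
 * "AsyncFlipSecondaries" every other crtc flips immediately, which may
 * tear on those outputs but no longer throttles present completion to
 * the slowest display. */
static uint32_t flip_flags_for_crtc(bool is_reference_crtc,
                                    bool caller_wants_sync_flip,
                                    bool async_flip_secondaries)
{
    if (!is_reference_crtc && async_flip_secondaries)
        return DRM_MODE_PAGE_FLIP_ASYNC;
    return caller_wants_sync_flip ? 0 : DRM_MODE_PAGE_FLIP_ASYNC;
}
```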
  16. 07 Sep, 2021 3 commits
• touchevents: set the screen pointer after checking the device is enabled · 1fd5dec1
  Ignacio Casal authored
If the device is disabled, the sprite is NULL, so we get a segfault.
• Drop DMX DDX · b3b81c8c
  Povilas Kanapickas authored
      It turns out xdmx currently crashes when any client attempts to use GL
      and it has been in such state for about 14 years. There was a patch to
      fix the problem [1] 4 years ago, but it never got merged. The last
      activity on any bugs referring to xdmx has been more than 4 years ago.
Given this situation, I find it unlikely that anyone is still using xdmx,
and just having the code around is a drain on resources.
      [1]: https://lists.x.org/archives/xorg-devel/2017-June/053919.html
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
• modesetting: Allow Present flips with mismatched stride on atomic drivers. · 8f8ebf87
  Mario Kleiner authored and Povilas Kanapickas committed
      When using DRI3+Present with PRIME render offload, sometimes there is
      a mismatch between the stride of the to-be-presented Pixmap and the
frontbuffer. The current code would reject a pageflip present in this
case if atomic modesetting is not enabled, ie. always, as atomic
modesetting is disabled by default due to brokenness in the current
implementation.
Fullscreen presents without page flipping however trigger the copy
      path as fallback, which causes not only unreliable presentation timing
      and degraded performance, but also massive tearing artifacts due to
      rendering to the framebuffer without any hardware sync to vblank.
      Tearing is extra awful on modesetting-ddx because glamor afaics seems
      to use drawing of a textured triangle strip for the copy implementation,
      not a dedicated blitter engine. The rasterization pattern creates extra
      awful tearing artifacts.
      We can do better: According to a tip from Michel Daenzer (thanks!),
      at least atomic modesetting capable kms drivers should be able to
      reliably change scanout stride during a pageflip, even if atomic
      modesetting is not actually enabled for the modesetting client.
      This commit adds detection logic to find out if the underlying kms
driver is atomic_modeset_capable, and if so, it no longer rejects
page flip presents on mismatched stride between the new Pixmap and
the frontbuffer.
We (ab)use a call to drmSetClientCap(ms->fd, DRM_CLIENT_CAP_ATOMIC, 0);
      for this purpose. The call itself has no practical effect, as it
      requests disabling atomic mode, although atomic mode is disabled by
      default. However, the return value of drmSetClientCap() tells us if the
      underlying kms driver is atomic modesetting capable: An atomic driver
      will return 0 for success. A legacy non-atomic driver will return a
      non-zero error code, either -EINVAL for early atomic Linux versions
      4.0 - 4.19 (or for non-atomic Linux 3.x and earlier), or -EOPNOTSUPP
      for Linux 4.20 and later.
      Testing on a MacBookPro 2017 with Intel Kabylake display server gpu +
AMD Polaris11 as prime renderoffload gpu, X-Server master + Mesa 21.0.3,
shows improvement from unbearable tearing to perfect, despite a stride
      mismatch between display gpu and Pixmap of 11776 Bytes vs. 11520
      Bytes. That this is correct behaviour was also confirmed by comparing the
      behaviour and .check_flip implementation of the patched modesetting-ddx
      against the current intel-ddx SNA Present implementation.
      Please consider merging this patch before the server-1.21 branch point.
      This patch could also be cherry-picked into the server 1.20 branch to
      fix the same limitation.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
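The probe plus the relaxed flip check can be sketched as follows (illustrative helpers; `drmSetClientCap()` is the real libdrm call, modeled here only by its return code so the sketch stays self-contained and testable):

```c
#include <stdbool.h>

/* drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 0) is a no-op request
 * (atomic is already off), but only atomic-capable kms drivers return
 * 0; legacy drivers return -EINVAL (Linux up to 4.19) or -EOPNOTSUPP
 * (Linux 4.20+). The return code alone tells us whether the driver can
 * change scanout stride during a pageflip. */
static bool kms_atomic_capable(int setclientcap_ret)
{
    return setclientcap_ret == 0;
}

/* check_flip-style decision: matching strides always allow a flip;
 * mismatched strides flip only on an atomic-capable driver, otherwise
 * we would have to fall back to the tearing copy path. */
static bool can_flip(int setclientcap_ret,
                     unsigned pixmap_pitch, unsigned front_pitch)
{
    if (pixmap_pitch == front_pitch)
        return true;
    return kms_atomic_capable(setclientcap_ret);
}
```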
  17. 06 Sep, 2021 1 commit
  18. 01 Sep, 2021 2 commits
• Revert "glamor: Enable modifier support for xfree86 too" · 7c63c582
  Mario Kleiner authored and Povilas Kanapickas committed
      This reverts commit 9b899941.
      Turns out that defaulting glamor_egl->dmabuf_capable = TRUE
      breaks kms page-flipping on various Mesa+Linux/DRM-KMS+hardware
      combos, resulting in broken presentation timing, degraded performance
      and awful tearing. E.g., my testing shows that X-Server master +
      Mesa 21.2 + Linux 5.3 on Intel Kabylake has broken pageflipping.
      Similar behaviour was observed in the past on RaspberryPi 4/400
      with VideoCore-6 by myself and others, and iirc by myself on some
      AMD gpu's, although my memories of the latter are a bit dim.
      Cfe. mesa/mesa#3601 and
      possibly xorg/xserver!254
      for related problems.
      The reason for pageflip failure on the modesetting-ddx under
      DRI3/Present seems to be the following sequence:
      1. Atomic modesetting for the modesetting-ddx is broken and therefore
         both disabled by default in the modesetting-ddx itself and also
         force-disabled by the Linux kernel for quite a while now. If the
         kernel detects drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1); from
         the X-Server, it will reject the request, as a countermeasure
         against all the past and current brokenness.
      2. Without DRM_CLIENT_CAP_ATOMIC we don't get the implied universal
         planes support (DRM_CLIENT_CAP_UNIVERSAL_PLANES).
      3. Without DRM_CLIENT_CAP_UNIVERSAL_PLANES, drmModeGetPlaneResources()
         will only return overlay planes, but not primary or cursor planes.
      4. As the modesetting-ddx's drmmode_crtc_create_planes() function can
         only operate on primary planes, but can't get any from
         drmModeGetPlaneResources(), it mostly turns into a no-op, never
         executes populate_format_modifiers(), and therefore the Linux
         kernel's DRM-KMS driver is never queried for the list of
         scanout/pageflip capable DRM format modifiers. In other words, the
         drmmode_crtc->formats[i].modifiers list stays empty, with
         drmmode_crtc->formats[i].num_modifiers remaining zero.
      5. The list from step 4 provides the format+modifiers that the
         X-Server's DRI3 backend intersects and returns in response to an
         xcb_dri3_get_supported_modifiers_window_modifiers() request. Given
         that an empty list was produced in step 4,
         xcb_dri3_get_supported_modifiers_window_modifiers() returns an
         empty modifiers list as well.
      6. Both Mesa's DRI3/Present OpenGL backbuffer allocation logic and iirc
         Mesa/Vulkan/WSI/X11's swapchain image allocation logic use the list
         from xcb_dri3_get_supported_modifiers_window_modifiers() for format+
         modifier selection for scanout/pageflip capable buffers. Cf. Mesa's
         dri3_alloc_render_buffer() function.
         Due to the empty list, the Mesa code falls back to the format+modifiers
         reported by xcb_dri3_get_supported_modifiers_screen_modifiers()
         instead. This list contains all modifiers reported by GLAMOR as
         result of glamor_get_formats() and glamor_get_modifiers(), which
         in turn are query results from Mesa's eglQueryDmaBufFormatsEXT()
         and eglQueryDmaBufModifiersEXT(). In other words, all format+modifiers
         which are supported for rendering are considered for the OpenGL
         backbuffers and Vulkan swapchain buffers.
      7. Depending on the kms driver + gpu combo and Mesa version, such buffers
         are often not direct-scanout / pageflip capable, and so pageflipping
         can't be used for DRI3/Present of fullscreen windows. Whenever the
         system has to fall back to copies instead of pageflips, the results
         are broken presentation timing, degraded performance and quite
         horrible tearing, as the current DRI3/Present implementation does not
         perform any hardware synchronization of copy presents to the start
         of vblank or similar.
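      The net effect of steps 4-7 can be sketched as client-side selection
      logic. This is a hypothetical illustration, not Mesa's actual code;
      the function and variable names, and the example modifier value, are
      made up for the sketch:

      ```c
      #include <stdio.h>
      #include <stdint.h>
      #include <stddef.h>

      /* Failure case: the per-window modifiers query (step 5) came back
       * empty because the server never learned any scanout modifiers. */
      static const uint64_t *window_mods = NULL;
      static const size_t n_window_mods = 0;

      /* Screen modifiers: everything GLAMOR can render to (step 6), which
       * is often not scanout/pageflip capable. Value is illustrative. */
      static const uint64_t screen_mods[] = { 0x0100000000000001ULL };
      static const size_t n_screen_mods =
          sizeof(screen_mods) / sizeof(screen_mods[0]);

      /* Sketch of the fallback: prefer the window list, but when it is
       * empty, fall back to the render-capable screen list, losing the
       * guarantee that the allocated buffer can be pageflipped. */
      static const uint64_t *pick_modifiers(size_t *count)
      {
          if (n_window_mods > 0) {
              *count = n_window_mods;
              return window_mods;
          }
          *count = n_screen_mods;
          return screen_mods;
      }

      int main(void)
      {
          size_t n;
          const uint64_t *mods = pick_modifiers(&n);
          printf("using %zu fallback modifier(s), first = 0x%016llx\n",
                 n, (unsigned long long)mods[0]);
          return 0;
      }
      ```

      The empty window list silently widens the candidate set to
      render-only modifiers, which is why the breakage shows up as copies
      instead of flips rather than as an outright allocation error.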
      By defaulting glamor_egl->dmabuf_capable = FALSE instead, as the server
      1.20 branch does, we avoid this failure:
      1. glamor_get_modifiers() turns into a no-op and returns false, not
         reporting any supported dmabuf modifiers to the server's DRI3 code,
         i.e., the server's cache_formats_and_modifiers() function can't
         retrieve and cache any format+modifiers. Therefore the server's
         DRI3 code now also reports an empty format+modifiers list when Mesa
         does an xcb_dri3_get_supported_modifiers_screen_modifiers() query.
      2. Mesa's buffer allocation code therefore falls back to using the old
         DRI image extension's createImage() function to allocate buffers,
         and our OpenGL backbuffers / Vulkan swapchain images get allocated
         in a direct-scanout / pageflip capable format. Pageflipping works,
         timing and performance are good, and presentation is tear-free.
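      The server-side effect of the revert can be illustrated with a small
      stand-alone sketch. The struct and function here are simplified
      stand-ins, not the real glamor types or code:

      ```c
      #include <stdio.h>
      #include <stdbool.h>
      #include <stdint.h>

      /* Illustrative stand-in for the glamor EGL state; not the real type. */
      struct glamor_egl {
          bool dmabuf_capable;
      };

      /* Sketch of glamor_get_modifiers() behaviour: with dmabuf_capable ==
       * FALSE it bails out before ever querying EGL, so the server's DRI3
       * code ends up caching an empty format+modifier list. */
      static bool get_modifiers(const struct glamor_egl *egl,
                                uint32_t format, int *num_modifiers)
      {
          (void)format;
          if (!egl->dmabuf_capable) {
              *num_modifiers = 0;
              return false;     /* nothing reported to DRI3 */
          }
          /* The real code would call eglQueryDmaBufModifiersEXT() here. */
          *num_modifiers = 1;
          return true;
      }

      int main(void)
      {
          /* server-1.20 default restored by this revert */
          struct glamor_egl egl = { .dmabuf_capable = false };
          int n = -1;
          bool ok = get_modifiers(&egl, 0x34325258 /* XRGB8888 fourcc */, &n);
          printf("modifiers reported: %d (ok=%d)\n", n, ok);
          return 0;
      }
      ```

      With the early bail-out, Mesa never sees a modifier list at all and
      takes its old createImage() allocation path, which is the behaviour
      the server 1.20 branch ships.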
      Please consider merging this before branching the X-Server 1.21 branch.
      Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
    • Don't underlink inputtest on targets which require complete linkage · c5a9287d
      Jon Turney authored
      Don't underlink inputtest on targets which require complete linkage
      (e.g. when building for PE/COFF)
  19. 31 Aug, 2021 1 commit
    • Fix compilation with windows.h from latest w32api · d68b50ec
      Jon Turney authored
      misc.h has complex logic (checking whether MAXSHORT is undefined, etc.)
      controlling whether or not it includes assert.h.
      Including windows.h from w32api 9.0.0 now trips over that, so assert.h
      ends up not being included, which causes various errors, e.g.:
      In file included from ../include/cursor.h:53,
                       from ../include/dix.h:54,
                       from ../os/osdep.h:139,
                       from ../hw/xwin/winauth.c:40:
      ../include/privates.h: In function ‘dixGetPrivateAddr’:
      ../include/privates.h:121:5: error: implicit declaration of function ‘assert’ [-Werror=implicit-function-declaration]
      Fix this by applying IWYU (include what you use) in privates.h.
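      The shape of an IWYU fix here is to include assert.h directly in the
      header that uses assert(), instead of relying on misc.h's conditional
      inclusion. A minimal, self-contained illustration (the helper below is
      a hypothetical stand-in, not the actual privates.h contents):

      ```c
      /* privates.h previously relied on misc.h pulling in assert.h, which
       * misc.h's MAXSHORT check can skip once windows.h has been included.
       * Including what we use removes that dependency. */
      #include <assert.h>
      #include <stdio.h>

      /* Hypothetical stand-in for the real dixGetPrivateAddr() helper,
       * which asserts on its inputs and so needs assert() available. */
      static inline void *get_private_addr(char *base, unsigned offset)
      {
          assert(base != NULL);   /* works regardless of misc.h's logic */
          return base + offset;
      }

      int main(void)
      {
          char record[64] = { 0 };
          void *p = get_private_addr(record, 16);
          printf("private at offset: %ld\n", (long)((char *)p - record));
          return 0;
      }
      ```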