1. 22 Nov, 2021 2 commits
  2. 16 Nov, 2021 1 commit
    • Revert "hw/xfree86: Propagate physical dimensions from DRM connector" · 35af1299
      Povilas Kanapickas authored
      Quite a lot of applications currently expect the screen DPI exposed by
      the X server to be 96 even when the real display DPI is different.
      Additionally, Xwayland currently ignores any hardware information
      entirely and sets the DPI to 96. Accordingly, the new behavior, even
      though it fixes a bug, should not be enabled automatically for all users.
      A better solution would be to keep the default DPI as is and enable
      the correct behavior with a command-line option (maybe -dpi auto, or
      similar). For now, let's just revert the bug fix.
      This reverts commit 05b3c681
      Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
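      For context, the behavior being reverted amounts to deriving the DPI from the pixel size and the physical size (in millimetres) reported by the DRM connector. A minimal sketch of that computation, with a hypothetical helper name and the historical 96-DPI fallback the revert preserves:

```c
/* Hypothetical helper: derive horizontal DPI from the pixel width and
 * the physical width in millimetres reported by the DRM connector.
 * Falls back to 96 when no (or a bogus) physical size is available,
 * which is the default this revert keeps for everyone. */
static int dpi_from_physical_size(int width_px, int width_mm)
{
    if (width_mm <= 0)
        return 96;
    /* Integer rounding of width_px * 25.4 / width_mm (25.4 mm per inch). */
    return (width_px * 254 + 5 * width_mm) / (10 * width_mm);
}
```

      A hypothetical `-dpi auto` option, as suggested above, would simply switch between this computation and the constant 96.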
  3. 10 Nov, 2021 1 commit
  4. 08 Nov, 2021 1 commit
  5. 06 Nov, 2021 3 commits
  6. 04 Nov, 2021 6 commits
  7. 03 Nov, 2021 1 commit
  8. 27 Oct, 2021 1 commit
  9. 25 Oct, 2021 3 commits
  10. 22 Oct, 2021 1 commit
  11. 21 Oct, 2021 1 commit
  12. 20 Oct, 2021 1 commit
  13. 19 Oct, 2021 1 commit
    • Fix RandR leasing for more than 1 simultaneously active lease. · f467f85c
      Mario Kleiner authored
      Due to a switched order of parameters in the xorg_list_add()
      call inside ProcRRCreateLease(), adding a new lease for RandR
      output leasing does not actually add the new RRLeasePtr lease
      record to the list of existing leases for an X-Screen. Instead,
      it replaces the existing list with a new list that has the new
      lease as the only element, and probably leaks a bit of memory.
      Therefore the server "forgets" all active leases for a screen,
      except for the last added lease. If multiple leases are created
      in a session, then destruction of all leases but the last one
      will fail in many cases, e.g., during server shutdown in
      RRCloseScreen() or during resource destruction.
      Most importantly, it fails if a client simply close(fd)'es the
      DRM master descriptor to release a lease, quits, gets killed or
      crashes. In this case the kernel will destroy the lease and shut
      down the display output, then send a lease event via udev to the
      ddx, which, e.g., in the modesetting-ddx, will trigger a lease
      validation call. That function is supposed to detect the released
      lease and tell the server to terminate the lease on the server
      side as well, via xf86CrtcLeaseTerminated(), but this doesn't happen for all
      the leases the server has forgotten. The end result is a dead
      video output, as the server won't reinitialize the crtcs
      corresponding to the terminated but forgotten lease.
      This bug was observed when using the amdvlk AMD OSS Vulkan
      driver and trying to lease multiple VkDisplays, and also
      under Mesa radv, as both Mesa Vulkan/WSI/Display and amdvlk
      terminate leases by simply close()ing the lease fd, not by
      sending explicit RandR protocol requests to free leases.
      Leasing worked, but ending a session with multiple active
      leases ended in a lot of unpleasant darkness.
      Fixing the wrong argument order to xorg_list_add() fixes the
      problem. Tested on single-X-Screen and dual-X-Screen setups,
      with one, two or three active leases.
      Please merge this for the upcoming server 21.1 branch.
      Merging into server 1.20 would also make a lot of sense.
      Fixes: e4e34476
      Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
      Cc: Keith Packard <keithp@keithp.com>
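      The mechanics of the bug are easy to demonstrate with a minimal re-implementation of the server's intrusive list (the real `xorg_list` lives in `include/list.h`; this sketch mirrors its semantics, not its exact code):

```c
#include <stddef.h>

/* Minimal stand-in for the server's intrusive doubly-linked list,
 * to show why the argument order of xorg_list_add() matters. */
struct xorg_list { struct xorg_list *prev, *next; };

static void xorg_list_init(struct xorg_list *l) { l->prev = l->next = l; }

/* Inserts `entry` right after `head`. */
static void xorg_list_add(struct xorg_list *entry, struct xorg_list *head)
{
    entry->prev = head;
    entry->next = head->next;
    head->next->prev = entry;
    head->next = entry;
}

static int list_length(struct xorg_list *head)
{
    int n = 0;
    for (struct xorg_list *it = head->next; it != head; it = it->next)
        n++;
    return n;
}
```

      The fix passes each new lease as `entry` and the screen's list as `head`. With the arguments swapped, the screen's list head is spliced into the new lease's own one-element ring, and every previously added lease is orphaned, exactly the "forgets all but the last lease" symptom described above.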
  14. 14 Oct, 2021 1 commit
  15. 12 Oct, 2021 1 commit
    • xwayland: Notify of root size change with XRandR emulation · 246ae00b
      Olivier Fourdan authored and committed
      Some clients (typically Java, but maybe others) rely on ConfigureNotify
      or RRScreenChangeNotify events to tell that the XRandR request has
      taken effect.
      When emulated XRandR is used in Xwayland, compute the emulated root size
      and send the expected ConfigureNotify and RRScreenChangeNotify events
      with the emulated size of the root window to the requesting X11 client.
      Note that the root window size does not actually change, as XRandR
      emulation is achieved by scaling the client window using viewports in
      Wayland, so this event is sort of misleading.
      Also, because Xwayland is using viewports, emulating XRandR does not
      reconfigure the outputs' locations, meaning that the actual size of the
      root window which encompasses all the outputs together may not change
      in a multi-monitor setup. To work around this limitation, when using an
      emulated mode, we report the size of that emulated mode alone as the
      root size for the ConfigureNotify event.
      Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
      Reviewed-by: Hans de Goede <hdegoede@redhat.com>
  16. 08 Oct, 2021 4 commits
    • dix/privates.c: Avoid undefined behaviour after realloc() · f9f705bf
      Alexander Richardson authored and Povilas Kanapickas committed
      Adding the offset between the realloc result and the old allocation to
      update pointers into the new allocation is undefined behaviour: the
      old pointers are no longer valid after realloc() according to the C
      standard. While this works on almost all architectures and compilers,
      it causes problems on architectures that track pointer bounds (e.g.
      CHERI or Arm's Morello): the DevPrivateKey pointers will still have the
      bounds of the previous allocation and therefore any dereference will
      result in a run-time trap.
      I found this due to a crash (dereferencing an invalid capability) while
      trying to run `XVnc` on a CHERI-RISC-V system. With this commit I can
      successfully connect to the XVnc instance running inside a QEMU with a
      VNC viewer on my host.
      This also changes the check whether the allocation was moved to use
      uintptr_t instead of a pointer since according to the C standard:
      "The value of a pointer becomes indeterminate when the object it
      points to (...)"
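      The safe pattern is to convert outstanding pointers to indices before the realloc() and rebuild them from the new base afterwards, rather than adding the `new_base - old_base` difference to stale pointers. A sketch with illustrative names (not the actual dix/privates.c code):

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative record type standing in for the private-key storage. */
typedef struct { int offset; } KeyRec;

/* Grow the keys array from old_n to new_n entries, keeping `*cursor`
 * (a pointer into the array) valid. The index is taken while the old
 * base pointer is still valid, then re-derived from the new base --
 * never by pointer arithmetic across the realloc(), which is undefined
 * behaviour and traps on bounds-tracking hardware like CHERI/Morello. */
static KeyRec *grow_keys(KeyRec *keys, size_t old_n, size_t new_n,
                         KeyRec **cursor)
{
    size_t idx = (size_t)(*cursor - keys);   /* before keys is freed */
    KeyRec *nk = realloc(keys, new_n * sizeof *nk);
    if (!nk)
        return NULL;
    memset(nk + old_n, 0, (new_n - old_n) * sizeof *nk);
    *cursor = nk + idx;                      /* rebuilt from the new base */
    return nk;
}
```

      Comparing `nk` with the old `keys` value (even as `uintptr_t`) is only needed to decide whether anything moved; the index-based rebuild is correct in either case.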
    • xf86: Accept devices with the 'simpledrm' driver. · b9218fad
      nerdopolis authored and Povilas Kanapickas committed
      SimpleDRM 'devices' are fallback devices and do not have a busid,
      so they were being skipped. This change allows simpledrm to work
      with the modesetting driver.
    • modesetting: Consider RandR primary output for selection of sync crtc. · 4b75e657
      Mario Kleiner authored and Povilas Kanapickas committed
      The "sync crtc" is the crtc used to drive the display timing of a
      drawable under DRI2 and DRI3/Present. If a drawable intersects
      multiple video outputs, then normally the crtc is chosen which has
      the largest intersection area with the drawable.
      If multiple outputs / crtcs have exactly the same intersection
      area, then the crtc chosen was simply the first one with maximum
      intersection. In other words, the choice was random, depending on
      the plugging order of displays.
      This adds the ability to choose a preferred output in such a tie
      situation. The RandR output marked as "primary output" is chosen
      on such a tie.
      This new behaviour and its implementation are consistent with other
      video ddx drivers; see amdgpu-ddx, ati-ddx and nouveau-ddx for
      reference. This commit is a straightforward port from amdgpu-ddx.
      Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
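      The selection rule described above can be sketched as follows. Types and names here are illustrative, not the modesetting driver's actual ones: pick the crtc with the largest intersection with the drawable, and on an exact tie prefer the crtc backing the RandR primary output.

```c
/* Axis-aligned box; x2/y2 are exclusive. is_primary marks the crtc of
 * the RandR primary output (hypothetical field for this sketch). */
typedef struct { int x1, y1, x2, y2; int is_primary; } crtc_box_t;

static int intersection_area(const crtc_box_t *a, const crtc_box_t *b)
{
    int x1 = a->x1 > b->x1 ? a->x1 : b->x1;
    int y1 = a->y1 > b->y1 ? a->y1 : b->y1;
    int x2 = a->x2 < b->x2 ? a->x2 : b->x2;
    int y2 = a->y2 < b->y2 ? a->y2 : b->y2;
    return (x2 > x1 && y2 > y1) ? (x2 - x1) * (y2 - y1) : 0;
}

/* Returns the index of the chosen sync crtc, or -1 if none overlaps. */
static int pick_sync_crtc(const crtc_box_t *crtcs, int n,
                          const crtc_box_t *drawable)
{
    int best = -1, best_area = 0;
    for (int i = 0; i < n; i++) {
        int a = intersection_area(&crtcs[i], drawable);
        if (a > best_area ||
            (a == best_area && a > 0 && best >= 0 &&
             crtcs[i].is_primary && !crtcs[best].is_primary))
        {
            best = i;       /* bigger area, or primary wins the tie */
            best_area = a;
        }
    }
    return best;
}
```

      Without the tie-break clause, a drawable straddling two equally overlapping outputs syncs to whichever was enumerated first, which is the plugging-order dependence the commit removes.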
    • modesetting: Handle mixed VRR and non-VRR display setups better. · 017ce263
      Mario Kleiner authored and Povilas Kanapickas committed
      In a setup with both VRR capable and non-VRR capable displays,
      it was so far inconsistent if the driver would allow use of
      VRR support or not, as "is_connector_vrr_capable" was set to
      whatever the capabilities of the last added drm output were.
      In other words, the plugging order of monitors determined the outcome.
      Fix this: Now if at least one display is VRR capable, the driver
      will treat an X-Screen as capable for VRR, plugging order no
      longer matters.
      Tested with a dual-display setup with one VRR monitor and one
      non-VRR monitor. This is also beneficial with the new Option
      While we are at it, also add some so-far-missing description of
      the "VariableRefresh" driver option, copied from amdgpu-ddx.
      Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
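      The fix boils down to OR-reducing the per-output capability instead of overwriting it with each output's flag (where the last one enumerated wins). A tiny sketch with illustrative names:

```c
/* Illustrative per-output record; in the driver this would come from
 * the connector's "vrr_capable" DRM property. */
typedef struct { int vrr_capable; } output_t;

/* An X-Screen is VRR-capable if ANY of its outputs is. The buggy
 * pattern was `capable = outputs[i].vrr_capable;` in a loop, so only
 * the last-plugged output's flag survived. */
static int screen_vrr_capable(const output_t *outputs, int n)
{
    int capable = 0;
    for (int i = 0; i < n; i++)
        capable |= outputs[i].vrr_capable;
    return capable;
}
```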
  17. 07 Oct, 2021 2 commits
    • modesetting: Enable GAMMA_LUT for luts with up to 4096 slots. · 66e5a5bb
      Mario Kleiner authored and Povilas Kanapickas committed
      A LUT size of 4096 slots has been verified to work correctly,
      as tested with amdgpu-kms. Intel Tigerlake Gen12 hardware has a
      very large GAMMA_LUT size of 262145 slots, but also has issues
      with its current GAMMA_LUT implementation as of Linux 5.14.
      Therefore we keep GAMMA_LUT off for large LUTs; this currently
      excludes Intel Icelake, Tigerlake and later.
      This can be overridden via the "UseGammaLUT" boolean xorg.conf option
      to force use of GAMMA_LUT on or off.
      See following link for the Tigerlake situation:
      drm/intel#3916 (comment 1085315)
      Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
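      The gating logic amounts to a small decision function. This is a sketch under stated assumptions: the 4096-slot threshold comes from the commit text, and the tri-state option encoding (-1 auto, 0 off, 1 on) is illustrative, not the driver's actual representation of "UseGammaLUT":

```c
#define MAX_SAFE_GAMMA_LUT_SIZE 4096   /* verified-good size per the commit */

/* Decide whether to use the GAMMA_LUT property or fall back to the
 * legacy gamma lut. option_use_gamma_lut: -1 = auto, 0 = forced off,
 * 1 = forced on (hypothetical encoding for this sketch). */
static int use_gamma_lut(int lut_size, int option_use_gamma_lut)
{
    if (option_use_gamma_lut >= 0)
        return option_use_gamma_lut;          /* explicit user override */
    return lut_size > 0 && lut_size <= MAX_SAFE_GAMMA_LUT_SIZE;
}
```

      Under "auto", Icelake/Tigerlake's 262145-slot LUT fails the size check and the driver stays on the legacy path; the option lets users force either behavior.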
    • xkb: Drop check for XkbSetMapResizeTypes · 8b7f4d32
      Ray Strode authored and committed
      Commit 446ff2d3 added checks to
      prevalidate the size of incoming SetMap requests.
      That commit checks for the XkbSetMapResizeTypes flag to be set before
      allowing key types data to be processed.
      However, key types data can be changed, or even just sent wholesale
      unchanged, without the number of key types changing. The check for
      XkbSetMapResizeTypes rejects those legitimate requests. In particular,
      XkbChangeMap never sets XkbSetMapResizeTypes and so now always fails
      any time XkbKeyTypesMask is in the changed mask.
      This commit drops the check for XkbSetMapResizeTypes in flags when
      prevalidating the request length.
  18. 06 Oct, 2021 1 commit
  19. 05 Oct, 2021 2 commits
    • Use EGL_LINUX_DMA_BUF_EXT to create GBM bo EGLImages · f1572937
      James Jones authored
      Xwayland was passing GBM bos directly to
      eglCreateImageKHR using the EGL_NATIVE_PIXMAP_KHR
      target. Given that the EGL GBM platform spec claims it
      is invalid to create an EGLSurface from a native
      pixmap on the GBM platform, implying there is no
      mapping between GBM objects and EGL's concept of
      native pixmaps, this seems a bit questionable.
      This change modifies the bo import function to
      extract all the required data from the bo and then
      imports it as a dma-buf instead when the dma-buf +
      modifiers path is available.
      Signed-off-by: James Jones <jajones@nvidia.com>
      Reviewed-by: Simon Ser <contact@emersion.fr>
    • xwayland/shm: Avoid integer overflow on large pixmaps · 079c5ccb
      Olivier Fourdan authored and committed
      Xwayland's xwl_shm_create_pixmap() computes the size of the shared
      memory pool to create using a size_t, yet the Wayland protocol uses an
      integer for that size.
      If the pool size becomes larger than INT32_MAX, we end up asking Wayland
      to create a shared memory pool of negative size, which in turn raises
      a protocol error that terminates the Wayland connection, and with it
      Xwayland itself.
      Avoid that issue early by returning a NULL pixmap in that case, which
      will trigger a BadAlloc error but leave Xwayland alive.
      Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
      Reviewed-by: Jonas Ådahl <jadahl@gmail.com>
  20. 27 Sep, 2021 3 commits
    • Revert "modesetting: Only use GAMMA_LUT if its size is 1024" · 545fa90c
      Mario Kleiner authored and Povilas Kanapickas committed
      This reverts commit 617f591f
      The problem described in that commit exists, but the two
      preceding commits, which improve the server's RandR code,
      should avoid the mentioned problems while allowing the
      use of GAMMA_LUT instead of the legacy gamma LUT.
      Use of legacy gamma LUTs is not a good fix, because it reduces
      color output precision on GPUs with more than 1024 GAMMA_LUT
      slots, e.g., AMD, ARM MALI and KOMEDA with 4096-slot LUTs,
      and some MediaTek parts with 512-slot LUTs. On KOMEDA, legacy
      LUTs are completely unsupported by the kms driver, so gamma
      correction gets disabled.
      The situation is especially bad on Intel Icelake and later:
      use of legacy gamma tables causes the kms driver to switch
      to hardware legacy LUTs with 256 slots, 8 bits wide, without
      interpolation. This restricts color output precision to
      8 bpc, and any deep color / HDR output (10 bpc, fp16, fixed point 16)
      becomes impossible. The latest Intel GPU generations would have
      worse color precision than parts that are more than 10 years old.
      Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
    • xfree86: Let xf86RandR12CrtcComputeGamma() deal with non-power-of-2 sizes. · 7326e131
      Mario Kleiner authored and Povilas Kanapickas committed
      The assumption in the upsampling code was that the crtc->gamma_size
      size of the crtc's gamma table is a power of two. This is true for
      almost all current driver + gpu combos at least on Linux, with typical
      sizes of 256, 512, 1024 or 4096 slots.
      However, Intel Gen-11 Icelake and later are outliers, as their gamma
      table has 2^18 + 1 slots: very big, and not a power of two!
      Try to make upsampling behave at least reasonably: replicate the
      last gamma value to fill up the remaining crtc->gamma_red/green/blue
      slots, which would otherwise stay uninitialized. This is important
      because, while the Intel display driver does not actually use all
      2^18+1 values passed as part of a GAMMA_LUT, it does need the
      very last slot, which would not get initialized by the old code.
      This should hopefully create reasonable behaviour with Icelake+,
      but is untested on the actual Intel hw due to lack of suitable hardware.
      Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
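      The fill step described above can be sketched in isolation. This is a simplified stand-in for part of xf86RandR12CrtcComputeGamma(), with a hypothetical signature: after the upsampler has written `computed` entries into a crtc LUT of `gamma_size` slots (which need not be a power of two), the tail is padded with the last computed value so the final slot is always initialized.

```c
#include <stdint.h>

/* Replicate the last computed gamma value into any remaining slots of
 * the crtc lut. On hardware like Icelake (2^18 + 1 slots) the very
 * last slot is consumed by the driver, so it must not stay
 * uninitialized when gamma_size isn't a multiple of the computed span. */
static void fill_gamma_tail(uint16_t *lut, int computed, int gamma_size)
{
    uint16_t last = computed > 0 ? lut[computed - 1] : 0;
    for (int i = computed; i < gamma_size; i++)
        lut[i] = last;
}
```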
    • xfree86: Avoid crash in xf86RandR12CrtcSetGamma() memcpy path. · 966f5674
      Mario Kleiner authored and Povilas Kanapickas committed
      If randrp->palette_size is zero, the memcpy() path can read past the
      end of the randr_crtc's gammaRed/Green/Blue tables if the hw crtc's
      gamma_size is greater than the randr_crtc's gammaSize.
      Avoid this by clamping the to-be-copied size to the smaller of both sizes.
      Note that during regular server startup, the memcpy() path is only
      taken initially twice, but then a suitable palette is created for
      use during a session. Therefore during an actual running X-Session,
      the xf86RandR12CrtcComputeGamma() will be used, which makes sure that
      data is properly up- or down-sampled for mismatching source and
      target crtc gamma sizes.
      This should avoid reading past randr_crtc gamma memory for GPUs
      with a big crtc->gamma_size, e.g., AMD/MALI/KOMEDA with 4096 slots,
      or Intel Icelake and later with 262145 slots.
      Tested against modesetting-ddx and amdgpu-ddx under screen color
      depth 24 (8 bpc) and 30 (10 bpc) to make sure that clamping happens
      as intended.
      This is an alternative fix for the one attempted in commit
      Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
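      The clamp itself is a one-liner around the memcpy(). A sketch with illustrative names (the real code copies the red, green and blue ramps of the structures described above):

```c
#include <string.h>
#include <stdint.h>

/* Copy at most min(hw crtc gamma_size, randr_crtc gammaSize) entries,
 * so the memcpy() can never read past the end of the (possibly
 * smaller) RandR gamma table when palette_size is zero. */
static void copy_gamma_clamped(uint16_t *dst, int crtc_gamma_size,
                               const uint16_t *src, int randr_gamma_size)
{
    int n = crtc_gamma_size < randr_gamma_size ? crtc_gamma_size
                                               : randr_gamma_size;
    memcpy(dst, src, (size_t)n * sizeof *dst);
}
```

      When the sizes mismatch during a running session, the up-/down-sampling in xf86RandR12CrtcComputeGamma() takes over, as noted above; the clamp only has to make the startup memcpy() path safe.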
  21. 23 Sep, 2021 1 commit
    • xwayland/glx: Enable sRGB fbconfigs · 6c1e6429
      Adam Jackson authored and committed
      We turn this on if the GL underneath us can enable GL_FRAMEBUFFER_SRGB.
      We do try to generate both capable and incapable configs, which is to
      keep llvmpipe working until the client side gets smarter about its
      sRGB handling.
  22. 17 Sep, 2021 1 commit
  23. 15 Sep, 2021 1 commit
    • modesetting: Fix dirty updates for sw rotation · db9e9d45
      Patrik Jakobsson authored and Povilas Kanapickas committed
      Rotation is broken for all drm drivers not providing hardware rotation
      support. Drivers that give direct access to vram and do not need dirty
      updates still work, but only by accident. The problem is caused by
      modesetting not sending the correct fb_id to drmModeDirtyFB() and
      passing the damage rects in the rotated state rather than as the crtc
      expects them. This patch takes care of both problems.
      Signed-off-by: Patrik Jakobsson <pjakobsson@suse.de>
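      To illustrate the second half of the fix: damage rects tracked in the rotated (screen) coordinate space must be mapped back into the crtc's unrotated framebuffer space before being handed to drmModeDirtyFB(). A sketch of the 90-degree clockwise case, with an illustrative rect layout (x2/y2 exclusive), not the modesetting driver's actual code:

```c
typedef struct { int x1, y1, x2, y2; } rect_t;

/* Map a damage rect from a screen rotated 90 degrees clockwise back to
 * the unrotated framebuffer. fb_height is the framebuffer's (unrotated)
 * height; the forward mapping is (x, y) -> (fb_height - 1 - y, x). */
static rect_t unrotate_rect_90(rect_t r, int fb_height)
{
    rect_t out;
    out.x1 = r.y1;
    out.x2 = r.y2;
    out.y1 = fb_height - r.x2;
    out.y2 = fb_height - r.x1;
    return out;
}
```

      Sending the rotated-space rects unmapped marks the wrong framebuffer region dirty, which is why software-rotated outputs only updated correctly on drivers that ignore dirty hints.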