1. 25 Jul, 2013 1 commit
    • xfree86: detach scanout pixmaps when detaching output GPUs · bdd1e22c
      Aaron Plattner authored
      Commit 8f4640bd fixed a bit of a
      chicken-and-egg problem by detaching GPU screens when their providers
      are destroyed, which happens before CloseScreen is called.  However,
      this created a new problem: the GPU screen tears down its RandR crtc
      objects during CloseScreen and if one of them is active, it tries to
      detach the scanout pixmap then.  This crashes because
      RRCrtcDetachScanoutPixmap tries to get the master screen's screen
      pixmap, but crtc->pScreen->current_master is already NULL at that point.
      It doesn't make sense for an unbound GPU screen to still be scanning
      out its former master screen's pixmap, so detach them first when the
      provider is destroyed.
      Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
      Reviewed-by: Dave Airlie <airlied@redhat.com>
      Signed-off-by: Keith Packard <keithp@keithp.com>
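The ordering bug described above can be illustrated with a toy model (all types and names here are hypothetical stand-ins, not the real Xorg structures): detaching a scanout pixmap needs the master link, so it must happen before `current_master` is cleared, which is exactly what the fix moves into provider destruction.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the teardown ordering (hypothetical types, not Xorg's). */
typedef struct { int id; } Pixmap;

typedef struct Screen Screen;
typedef struct {
    Screen *pScreen;
    Pixmap *scanout_pixmap;     /* borrowed from the master screen */
} Crtc;

struct Screen {
    Screen *current_master;
    Crtc crtcs[4];
    int ncrtcs;
};

/* Detaching needs the master screen, so it must run while current_master
 * is still non-NULL -- the crash came from doing this too late, during
 * CloseScreen. */
static int detach_scanout(Crtc *crtc)
{
    if (crtc->scanout_pixmap == NULL)
        return 1;                       /* nothing to do */
    if (crtc->pScreen->current_master == NULL)
        return 0;                       /* would crash in the real server */
    crtc->scanout_pixmap = NULL;        /* stand-in for the real detach */
    return 1;
}

/* Fixed order: detach every scanout pixmap first, then drop the link. */
static void destroy_provider(Screen *gpu)
{
    for (int i = 0; i < gpu->ncrtcs; i++)
        assert(detach_scanout(&gpu->crtcs[i]));
    gpu->current_master = NULL;
}
```

Reversing the two steps in `destroy_provider` reproduces the failure mode: `detach_scanout` would find `current_master` already NULL.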
  2. 23 Jul, 2013 15 commits
  3. 22 Jul, 2013 1 commit
    • dix: scale y back instead of x up when pre-scaling coordinates · 21ea7ebb
      Peter Hutterer authored
      The peculiar way we handle coordinates results in relative coordinates on
      absolute devices being added to the last value, then that value is mapped to
      the screen (taking the device dimensions into account). From that mapped
      value we get the final coordinates, both screen and device coordinates.
      To avoid uneven scaling of relative coordinates, they are pre-scaled by
      the ratio of screen aspect to device aspect (derived from the respective
      resolutions) before being mapped. This ensures that a circle drawn on
      the device is a circle on the screen.
      Previously, we used the ratio to scale x up. Synaptics already does its own
      scaling based on the resolution and that is done by scaling y down by the
      ratio. So we can remove the code from the driver and get approximately the
      same behaviour here.
      Minor ABI bump, so we can remove this from synaptics.
      Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
      Tested-by: Emmanuel Benisty <benisty.e@gmail.com>
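The geometry behind "scale y back instead of x up" can be sketched numerically (the dimensions and the simplified ratio here are illustrative assumptions, not the actual dix computation): pre-dividing x by the ratio and pre-multiplying y by it differ only in overall magnitude, and either one makes a device-space circle land on the screen as a circle.

```c
#include <assert.h>
#include <math.h>

typedef struct { double w, h; } Size;

/* Toy stand-in for the pre-scaling factor: screen aspect over device
 * aspect. The real dix folds device resolution into this. */
static double prescale_ratio(Size screen, Size dev)
{
    return (screen.w / screen.h) / (dev.w / dev.h);
}

/* Map a device-space motion to screen space, axis by axis. */
static void map_to_screen(Size screen, Size dev, double dx, double dy,
                          double *sx, double *sy)
{
    *sx = dx * screen.w / dev.w;
    *sy = dy * screen.h / dev.h;
}

/* Old behaviour: scale x up. New behaviour: scale y back (down) instead,
 * matching what the synaptics driver used to do on its own. */
static void prescale_old(double ratio, double *dx, double *dy) { *dx /= ratio; (void)dy; }
static void prescale_new(double ratio, double *dx, double *dy) { *dy *= ratio; (void)dx; }
```

With a 1680x1050 screen and a (hypothetical) 2000x1000 device coordinate space, the ratio is 0.8, and either prescale variant maps an equal-dx/dy motion to equal screen-space components.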
  4. 18 Jul, 2013 9 commits
  5. 17 Jul, 2013 4 commits
  6. 10 Jul, 2013 2 commits
  7. 02 Jul, 2013 1 commit
  8. 18 Jun, 2013 2 commits
    • Revert "DRI2: re-allocate DRI2 drawable if pixmap serial changes" · 77e51d5b
      Eric Anholt authored
      This reverts commit 3209b094.  After a
      long debug session by Paul Berry, it appears that this was the commit
      that has been producing sporadic failures in piglit front buffer
      rendering tests for the last several years.
      GetBuffers may return fresh buffers with invalid contents at a couple
      of reasonable times:
      - When first asked for a non-fake-front buffer.
      - When the drawable size is changed, an Invalidate has been sent, and
        obviously the app needs to redraw the whole buffer.
      - After a glXSwapBuffers(), GL allows the backbuffer to be undefined,
        and an Invalidate is sent to tell the GL that it should grab the
        appropriate new buffers to avoid stalling.
      But with the patch being reverted, GetBuffers would also return fresh
      invalid buffers when the drawable serial number changed, which is
      approximately "whenever, for any reason".  The app is not expecting
      invalid buffer contents "whenever", nor is that behavior valid.  Because
      the GL usually only calls GetBuffers after an Invalidate is sent, and the
      new buffer allocation only happened during a GetBuffers, most apps saw no
      problems.  But apps that do (fake-)frontbuffer rendering do frequently
      ask the server for the front buffer (since we drop the fake front
      allocation when we're not doing front buffer rendering), and if the
      drawable serial got bumped midway through a draw, the server would
      pointlessly ditch the front *and* backbuffer full of important
      drawing, resulting in bad rendering.
      The patch was originally intended to fix a bugzilla report:
          To reproduce, start with a large-ish display (i.e. 1680x1050 on my
          laptop), use the patched glxgears from bug 28252, which adds the
          -override option.  Then run glxgears -override -geometry 640x480
          to create a 640x480 window in the top left corner, which will work
          fine.  Next, run xrandr -s 640x480 and watch the fireworks.
      I've tested with an override-redirect glxgears, both with vblank sync
      enabled and disabled, both with gnome-shell and no window manager at
      all, before and after this patch.  The only problem observed was that
      before and after the revert, sometimes when alt-tabbing to kill my
      gears after completing the test, gnome-shell would get confused about
      the override-redirectness of the glxgears window (according to a log
      message) and apparently not bother doing any further compositing.
      Signed-off-by: Eric Anholt <eric@anholt.net>
      Reviewed-by: Keith Packard <keithp@keithp.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Keith Packard <keithp@keithp.com>
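The policy the revert restores can be modelled in a few lines (hypothetical names, not the real DRI2 code): fresh, undefined buffers are only handed out when the client has been told via Invalidate that its buffers are stale; the reverted patch additionally reallocated on any drawable serial bump, silently discarding front and back contents mid-frame.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of when GetBuffers may hand back fresh, undefined buffers. */
typedef struct {
    unsigned serial;        /* bumped "whenever, for any reason" */
    bool invalidated;       /* an Invalidate event was sent to the client */
    unsigned buffer_gen;    /* which allocation the client currently holds */
} Drawable;

/* Safe policy: only reallocate when the client was warned. Resizes and
 * swaps both send Invalidate first, so those cases are covered. */
static unsigned get_buffers(Drawable *d)
{
    if (d->invalidated) {
        d->buffer_gen++;        /* fresh buffers, contents undefined */
        d->invalidated = false;
    }
    /* The reverted patch also reallocated whenever d->serial changed,
     * which is what broke front-buffer rendering. */
    return d->buffer_gen;
}
```

A serial bump alone leaves the client's buffers (and their contents) intact; only an Invalidate triggers a new generation.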
    • Keith Packard
  9. 10 Jun, 2013 5 commits