1. 23 Mar, 2022 2 commits
  2. 03 Dec, 2021 1 commit
  3. 16 Nov, 2020 1 commit
    • Fix poll_for_response race condition · dbb55e1a
      Frediano Ziglio authored and Peter Hutterer committed
      In poll_for_response it is possible that events are skipped and a
      more up-to-date request reply is returned instead.
      This causes the next poll_for_event call to fail, aborting the
      program.
      
      This was reproduced using a slow SSH tunnel or a program that
      slows down server replies (I used a combination of xtrace and
      strace).
      
      How the race happens:
      - the program enters poll_for_response;
      - poll_for_event is called, but the server has not yet sent the reply;
      - pending_requests is not NULL because we sent a request (see the call
        to append_pending_request in _XSend);
      - xcb_poll_for_reply64 is called from poll_for_response;
      - xcb_poll_for_reply64 reads from the server; at this point the
        server replies with an event (say sequence N) and the reply to our
        last request (say sequence N+1);
      - xcb_poll_for_reply64 returns the reply for the request we asked for;
      - last_request_read is set to sequence N+1 in poll_for_response;
      - poll_for_response returns the reply to the request;
      - poll_for_event is called (for instance from another poll_for_response);
      - the event with sequence N is retrieved;
      - sequence N is widened; however, since the "new" 64-bit number
        computed from N is less than last_request_read, it gets widened
        to N + 2^32 (assuming last_request_read still fits in 32 bits);
      - poll_for_event enters the nested if statement, as req is NULL;
      - we compare the widened N (which no longer fits in 32 bits) with
        request (which does fit in 32 bits), hitting throw_thread_fail_assert.
      
      To avoid the race condition, and to keep the sequence from going
      backwards, check again for new events after getting the reply and,
      if one is present, return that last event, saving the reply to be
      returned later.
      
      To test the race and the fix, it is helpful to add a delay (I used
      "usleep(5000)") before calling xcb_poll_for_reply64.
      
      Original patch written by Frediano Ziglio, see
      xorg/lib/libx11!34
      
      Reworked primarily for readability by Peter Hutterer, see
      xorg/lib/libx11!53
      
      Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
  4. 15 Oct, 2020 1 commit
  5. 28 Sep, 2018 1 commit
  6. 25 Sep, 2018 1 commit
  7. 20 Aug, 2017 1 commit
  8. 14 Aug, 2017 1 commit
  9. 21 Sep, 2015 1 commit
  10. 21 May, 2014 1 commit
  11. 10 May, 2013 1 commit
  12. 11 Mar, 2012 1 commit
  13. 20 May, 2011 1 commit
    • XCB: Add more friendly error messages for common asserts · 761b8aa0
      Daniel Stone authored
      
      This patch adds more friendly error messages for three common classes of
      assertion:
          - missed sequence numbers due to being griefed by another thread
          - unknown requests in queue due to being griefed by another thread
          - extensions dequeuing too much or too little reply data
      
      It adds error messages offering advice (e.g. call XInitThreads() first)
      on stderr, but still generates actual assertions.  Hopefully this means
      it's a little more Googleable and a little less frightening.
      
      Signed-off-by: Daniel Stone <daniel@fooishbar.org>
      Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
  14. 14 Mar, 2011 1 commit
    • Ignore user locks after sleeping in _XReply and _XReadEvents. · fd85aca7
      Jamey Sharp authored
      This bug appears as a hang in applications that wait for replies from
      multiple threads, where one such thread has taken a user lock using
      XLockDisplay.
      
      Prior to this fix, the code could deadlock in this way: If thread 1 goes
      to sleep waiting for a reply, and then thread 2 takes a user lock and
      waits for a reply, then thread 2 will wait for thread 1 to process its
      reply (because responses must be processed in order), but thread 1 will
      wait for thread 2 to drop its user lock.
      
      Fixed by making thread 1 not wait for thread 2 to drop its user lock.
      This makes the semantics of user locks hard to define, but they were
      already hard to define. The new behavior appears to be consistent with
      the way Xlib worked historically, anyway.
      
      Fixes: http://lists.freedesktop.org/archives/xcb/2011-March/006802.html
      
      There was a similar potential for deadlock in _XReadEvents, fixed the
      same way, with the same caveats about user-lock semantics.
      
      Signed-off-by: Jamey Sharp <jamey@minilop.net>
  15. 06 Aug, 2010 1 commit
  16. 21 Jun, 2010 4 commits
  17. 19 Jun, 2010 1 commit
  18. 18 Apr, 2010 4 commits
  19. 16 Apr, 2010 1 commit
  20. 15 Apr, 2010 2 commits
    • Stop returning an int from _XIDHandler and _XSeqSyncFunction · a3f5f1b9
      Josh Triplett authored and Jamey Sharp committed
      
      _XIDHandler and _XSeqSyncFunction originally ran from dpy->synchandler, and
      thus had to return an int.  Now they run only from _XPrivSyncHandler or
      LockDisplay, neither of which needs to check the return value, since it
      was always 0.  Make them both void.
      
      Signed-off-by: Josh Triplett <josh@freedesktop.org>
      Signed-off-by: Jamey Sharp <jamey@minilop.net>
    • Move XID and sync handling from SyncHandle to LockDisplay to fix races. · a6d974dc
      Jamey Sharp authored
      XID and sync handling happened via _XPrivSyncHandler, assigned to
      dpy->synchandler and called from SyncHandle.  _XPrivSyncHandler thus ran
      without the Display lock, so manipulating the Display caused races, and
      these races led to assertions in multithreaded code (demonstrated via
      ico).
      
      In the XTHREADS case, after you've called XInitThreads, we can hook
      LockDisplay and UnlockDisplay.  Use that to run _XIDHandler and
      _XSeqSyncFunction from LockDisplay rather than SyncHandle; we then know
      that we hold the lock, and thus we can avoid races.  We think it makes
      sense to do these both from LockDisplay rather than UnlockDisplay, so
      that you know you have valid sync and a valid XID before you start
      setting up the request you locked to prepare.
      
      In the !XTHREADS case, or if you haven't called XInitThreads, you don't
      get to use Xlib from multiple threads, so we can use the logic we have
      now (with synchandler and savedsynchandler) without any concern about
      races.
      
      This approach gets a bit exciting when the XID and sequence sync
      handlers drop and re-acquire the Display lock. Reacquisition will re-run
      the handlers, but they return immediately unless they have work to do,
      so they can't recurse more than once.  In the worst case, if both of
      them have work to do, we can nest the Display lock three deep.  In the
      case of the _XIDHandler, we drop the lock to call xcb_generate_id, which
      takes the socket back if it needs to request more XIDs, and taking the
      socket back will reacquire the lock; we take care to avoid letting
      _XIDHandler run again and re-enter XCB from the return_socket callback
      (which causes Very Bad Things, and is Not Allowed).
      
      Tested with ico (with 1 and 20 threads), and with several test programs
      for XID and sequence sync.  Tested with and without XInitThreads(), and
      with and without XCB.
      
      Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=23192
      
      Signed-off-by: Jamey Sharp <jamey@minilop.net>
      Signed-off-by: Josh Triplett <josh@freedesktop.org>
  21. 14 Apr, 2010 2 commits
  22. 13 Apr, 2010 2 commits
  23. 14 Oct, 2009 1 commit
  24. 21 Feb, 2009 1 commit
  25. 29 Jan, 2009 1 commit
  26. 04 Nov, 2008 5 commits
    • Support multiple independent internal sync handlers · e6a7b70c
      Jamey Sharp authored
      
      Xlib has several independent tasks that need to be performed with the
      display unlocked. It does this by replacing the existing sync handler with
      one of a variety of internal sync handlers. However, if multiple internal
      sync handlers need to run, then the last one registering wins and
      previously registered internal sync handlers are never invoked. This
      manifested as a bug with DRI applications on Xlib/XCB as that requires
      both an XID handler after every XID allocation, and the periodic sequence
      number handler. The XID handler would win, and the sequence number handler
      would never be invoked.
      
      Fix this by unifying the internal sync handler mechanism into a single
      function that calls all of the known internal sync handlers. They all need
      to deal with being called when not strictly necessary now.
      
      Signed-off-by: Keith Packard <keithp@keithp.com>
      Signed-off-by: Jamey Sharp <jamey@minilop.net>
      Signed-off-by: Josh Triplett <josh@freedesktop.org>
    • Ensure that _XReadEvents always leaves an event in the queue on return · 2dbaaab9
      Keith Packard authored and Jamey Sharp committed
      
      XNextEvent assumes that the event queue will be non-empty on return from
      _XReadEvents, but with multiple event readers running, the previous change
      could leave the queue empty on return from process_responses. Re-invoke
      process_responses until the queue is non-empty.
      
      Signed-off-by: Keith Packard <keithp@keithp.com>
    • Permit only one Xlib thread to block waiting for events · bedfe682
      Keith Packard authored and Jamey Sharp committed
      
      As Xlib queues events internally, we must prevent multiple Xlib threads from
      entering XCB to wait for an event in case the queued event is to be
      delivered to the thread which didn't manage to read it. In other words, let
      only one Xlib thread into xcb_wait_for_event at a time.
      
      Jamey Sharp looked over my shoulder while making this fix and, while hating
      my whitespace conventions, appears happy enough with the actual code.
      
      Signed-off-by: Keith Packard <keithp@keithp.com>
    • Fix XAllocID race: hold the user display lock until we have a new XID. · cc19618d
      Jamey Sharp authored
      Xlib built --without-xcb is also vulnerable to this race, and a similar
      fix might work there too.
      
      Also, use an XID that's truly invalid while waiting for the next XID to be
      requested.
    • Use XCB's new socket handoff mechanism rather than the old XCB Xlib lock. · 54e5c094
      Josh Triplett authored and Jamey Sharp committed
      Previously, Xlib/XCB used XCB's Xlib lock to prevent XCB from sending
      requests between calls to Xlib's LockDisplay and UnlockDisplay macros.
      Xlib/XCB then sent all of its requests using XCB's xcb_send_request, and
      had to flush its requests when unlocking the display.
      
      XCB 1.2 adds a new socket handoff mechanism, xcb_take_socket.  Replace
      much of the existing Xlib/XCB implementation with the use of
      xcb_take_socket to take ownership of the write side of the X connection
      socket, and a return_socket callback which writes any outstanding requests
      with xcb_writev.  This approach allows Xlib/XCB to use the same buffering
      as traditional Xlib did.  In particular, programs which use Xlib/XCB and
      never make XCB calls will never need to hand the socket back to XCB, and
      vice versa.
      
      This allows us to discard large quantities of synchronization code from
      Xlib/XCB, together with the synchronization bugs present in that code.
      Several test cases which previously failed now work perfectly, including
      multi-threaded ico.  In addition, the infamous locking correctness
      assertions, triggered when double-locking or when unlocking without a
      previous lock, no longer exist, because Xlib/XCB no longer has any reason
      to care more about application locking than traditional Xlib does.
      
      Furthermore, the handoff approach provides great improvements to
      performance.  Results from x11perf's XNoOp test, which represented the
      worst case for the lock-based Xlib/XCB:
      
      Traditional Xlib:       average 19100000/sec
      Lock-based Xlib/XCB:    average  3350000/sec
      Handoff-based Xlib/XCB: average 17400000/sec
      
      Thus, for no-ops, the handoff mechanism provides more than a 4x speedup to
      Xlib/XCB, bringing Xlib/XCB within 9% of traditional Xlib no-op
      performance.  Of course, real-world workloads do not use no-op, so your
      mileage may vary.  In particular, since no-ops represent the worst case,
      we expect real workloads to more closely match the performance of
      traditional Xlib.
      
      While removing synchronization code, we changed _XReply to not drop any
      locks when calling xcb_wait_for_reply; previously, we had to carefully
      avoid a deadlock between the Display lock and the XCB Xlib lock. Holding
      the locks reduces implementation complexity and should not impact
      applications.
      
      Commit by Jamey Sharp and Josh Triplett.
      XCB's handoff mechanism inspired by Keith Packard.