1. 15 Jan, 2006 1 commit
  2. 13 Jan, 2006 1 commit
  3. 10 Jan, 2006 1 commit
  4. 07 Jan, 2006 4 commits
  5. 05 Jan, 2006 1 commit
  6. 09 Dec, 2005 1 commit
  7. 30 Nov, 2005 3 commits
  8. 28 Nov, 2005 2 commits
  9. 18 Nov, 2005 1 commit
    • IB/umad: make sure write()s have sufficient data · eabc7793
      Roland Dreier authored
      Make sure that userspace passes in enough data when sending a MAD.  We
      always copy at least sizeof (struct ib_user_mad) + IB_MGMT_RMPP_HDR
      bytes from userspace, so anything less is definitely invalid.  Also,
      if the length is less than this limit, it's possible for the second
      copy_from_user() to get a negative length and trigger a BUG().
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
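      A minimal sketch of the check described in the commit above, assuming a
      simplified handler shape (the real ib_umad_write() does considerably
      more):

          static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
                                       size_t count, loff_t *pos)
          {
                  /* Anything shorter than the fixed header plus the RMPP
                   * header cannot be a valid MAD; without this check the
                   * length passed to the second copy_from_user() could go
                   * negative and trigger a BUG(). */
                  if (count < sizeof (struct ib_user_mad) + IB_MGMT_RMPP_HDR)
                          return -EINVAL;

                  /* ... copy the header, then the remaining payload ... */
                  return count;
          }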
  10. 10 Nov, 2005 5 commits
  11. 07 Nov, 2005 2 commits
  12. 06 Nov, 2005 1 commit
  13. 03 Nov, 2005 1 commit
  14. 02 Nov, 2005 1 commit
  15. 31 Oct, 2005 2 commits
    • [IB] uverbs: Avoid NULL pointer deref on CQ async event · 7162a3e0
      Roland Dreier authored
      Userspace CQs that have no completion event channel attached end up
      with their cq_context set to NULL.  However, asynchronous events like
      "CQ overrun" can still occur on such CQs, so add a uverbs_file member
      to struct ib_ucq_object that we can follow to deliver these events.
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
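      A rough sketch of the idea in the commit above, with simplified
      declarations; deliver_cq_async_event() is a hypothetical stand-in for
      the uverbs event-queueing code:

          struct ib_ucq_object {
                  struct ib_uobject      uobject;
                  struct ib_uverbs_file *uverbs_file;  /* new member */
          };

          static void deliver_cq_async_event(struct ib_uverbs_file *file,
                                             enum ib_event_type type);

          static void cq_async_event_sketch(struct ib_event *event)
          {
                  struct ib_ucq_object *ucq =
                          container_of(event->element.cq->uobject,
                                       struct ib_ucq_object, uobject);

                  /* Follow the new member rather than cq->cq_context,
                   * which is NULL for CQs without a completion channel. */
                  deliver_cq_async_event(ucq->uverbs_file, event->event);
          }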
    • [PATCH] fix missing includes · 4e57b681
      Tim Schmielau authored
      I recently picked up my older work to remove unnecessary #includes of
      sched.h, starting from a patch by Dave Jones that stops module.h from
      including sched.h. This reduces the number of indirect includes of
      sched.h by ~300. Another ~400 pointless direct includes can be removed
      after this disentangling (patch to follow later).
      However, quite a few files rely on those indirect includes and need to
      be fixed up first.
      
      In order to feed the patches through -mm with as little disturbance as
      possible, I've split out the fixes accumulated so far (complete for
      i386 and x86_64, more archs to follow later) and am posting them ahead
      of the real patch.  This keeps this large part of the patch simple,
      since it only adds #includes, and leaves all hunks independent of each
      other.  So if any hunk rejects or gets in the way of other patches,
      just drop it; my scripts will pick it up again in the next round.
      Signed-off-by: Tim Schmielau <tim@physik3.uni-rostock.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
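      An illustrative fixup of the kind this patch consists of (the file and
      function here are hypothetical, not from the actual patch): code that
      calls schedule() but compiled only because <linux/module.h> dragged in
      <linux/sched.h> gains a direct include:

          #include <linux/module.h>
          #include <linux/sched.h>        /* added: don't rely on module.h */

          static int wait_a_bit(void)
          {
                  /* schedule() is declared in <linux/sched.h>; this call
                   * would break once the indirect include via module.h
                   * goes away. */
                  schedule();
                  return 0;
          }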
  16. 29 Oct, 2005 1 commit
  17. 28 Oct, 2005 6 commits
  18. 25 Oct, 2005 2 commits
    • [IB] simplify mad_rmpp.c:alloc_response_msg() · 7cc656ef
      Roland Dreier authored
      Change alloc_response_msg() in mad_rmpp.c to return the struct
      it allocates directly (or an error code a la ERR_PTR), rather than
      returning a status and passing the struct back in a pointer param.
      This simplifies the code and gets rid of warnings like
      
          drivers/infiniband/core/mad_rmpp.c: In function `nack_recv':
          drivers/infiniband/core/mad_rmpp.c:192: warning: `msg' may be used uninitialized in this function
      
      with newer versions of gcc.
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
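      A simplified sketch of the signature change (argument lists are
      abbreviated from the real function):

          /* Before: status code plus out-parameter, which is what let
           * gcc think msg might be used uninitialized on some paths. */
          static int alloc_response_msg_old(struct ib_mad_agent *agent,
                                            struct ib_mad_recv_wc *recv_wc,
                                            struct ib_mad_send_buf **msg);

          /* After: return the buffer directly, or an errno wrapped
           * with ERR_PTR(). */
          static struct ib_mad_send_buf *
          alloc_response_msg(struct ib_mad_agent *agent,
                             struct ib_mad_recv_wc *recv_wc);

          static void nack_recv_sketch(struct ib_mad_agent *agent,
                                       struct ib_mad_recv_wc *recv_wc)
          {
                  struct ib_mad_send_buf *msg = alloc_response_msg(agent, recv_wc);

                  if (IS_ERR(msg))
                          return;         /* PTR_ERR(msg) holds the errno */
                  /* msg is provably initialized on every path below */
          }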
    • [IB] Fix MAD layer DMA mappings to avoid touching data buffer once mapped · 34816ad9
      Sean Hefty authored
      The MAD layer was violating the DMA API by touching data buffers used
      for sends after the DMA mapping was done.  This causes problems on
      non-cache-coherent architectures, because the device doing DMA won't
      see updates to the payload buffers that exist only in the CPU cache.
      
      Fix this by having all MAD consumers use ib_create_send_mad() to
      allocate their send buffers, and moving the DMA mapping into the MAD
      layer so it can be done just before calling send (and after any
      modifications of the send buffer by the MAD layer).
      
      Tested on a non-cache-coherent PowerPC 440SPe system.
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
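      A sketch of the corrected ordering described above; fill_payload() and
      post_send() are hypothetical stand-ins, and in the real code the
      mapping is done inside the MAD layer just before the send is posted:

          static void fill_payload(void *mad);       /* consumer code */
          static int post_send(dma_addr_t addr);     /* driver send path */

          static int send_mad_sketch(struct ib_mad_agent *agent, u32 remote_qpn,
                                     u16 pkey_index, int hdr_len, int data_len)
          {
                  struct ib_mad_send_buf *buf;
                  dma_addr_t dma_addr;

                  /* 1. Consumers allocate their send buffer through the
                   *    MAD layer instead of rolling their own. */
                  buf = ib_create_send_mad(agent, remote_qpn, pkey_index,
                                           0, hdr_len, data_len, GFP_KERNEL);
                  if (IS_ERR(buf))
                          return PTR_ERR(buf);

                  /* 2. All CPU writes to the payload happen while the
                   *    buffer is still unmapped. */
                  fill_payload(buf->mad);

                  /* 3. Map and post; from here until the send completes,
                   *    nothing may touch buf->mad, because on a
                   *    non-cache-coherent machine the device would never
                   *    see such writes. */
                  dma_addr = dma_map_single(agent->device->dma_device, buf->mad,
                                            hdr_len + data_len, DMA_TO_DEVICE);
                  return post_send(dma_addr);
          }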
  19. 24 Oct, 2005 2 commits
  20. 20 Oct, 2005 2 commits