1. 14 Oct 2010, 1 commit
  2. 24 May 2010, 1 commit
  3. 16 May 2010, 1 commit
  4. 30 Mar 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Committed by Tejun Heo
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming their availability.  As
      this conversion needs to touch a large number of source files, the
      following script was used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. gfp.h if only gfp is
        used, slab.h if slab is used.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to place the new include so that its order
        conforms to its surroundings.  It's put in the include block which
        contains core kernel includes, in the same order that the rest are
        ordered - alphabetical, Christmas tree, rev-Xmas-tree, or at the
        end if there doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        an error message indicating which .h file needs to be added to the
        file.
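      The ordering heuristic in the second bullet can be sketched in Python
      (the real logic lives in the slabh-sweep.py linked below; the function
      name and the simplification to alphabetical/reverse order only are
      mine):

```python
def insert_include(block, new):
    """Insert include line `new` into `block` (a list of '#include'
    lines), matching the block's existing order where one is detectable."""
    if block == sorted(block):                    # alphabetical order
        block.append(new)
        block.sort()
    elif block == sorted(block, reverse=True):    # reverse order
        block.append(new)
        block.sort(reverse=True)
    else:                                         # no recognizable order:
        block.append(new)                         # put it at the end
    return block
```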
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some files didn't need the
         inclusion, some needed manual addition, and for others adding it
         to an implementation .h or embedding .c file was more appropriate.
         This step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given that I had only a couple of failures from the build tests in
      step 7, I'm fairly confident about the coverage of this conversion
      patch.  If there is a breakage, it's likely to be something in one of
      the arch headers, which should be easily discoverable on most builds
      of the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  5. 12 Mar 2010, 1 commit
  6. 07 Mar 2010, 1 commit
  7. 07 Sep 2009, 2 commits
    • IB/mad: Allow tuning of QP0 and QP1 sizes · b76aabc3
      Hal Rosenstock committed
      MADs are sent on UD QPs and can be dropped if there are no receives
      posted, so allow the receive queue size to be set with a module
      parameter in case the queue needs to be lengthened.  Send side
      tuning is done for symmetry with the receive side.
      Signed-off-by: Hal Rosenstock <hal.rosenstock@gmail.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
    • IB/mad: Fix possible lock-lock-timer deadlock · 6b2eef8f
      Roland Dreier committed
      Lockdep reported a possible deadlock with cm_id_priv->lock,
      mad_agent_priv->lock and mad_agent_priv->timed_work.timer; this
      happens because the mad module does
      
      	cancel_delayed_work(&mad_agent_priv->timed_work);
      
      while holding mad_agent_priv->lock.  cancel_delayed_work() internally
      does del_timer_sync(&mad_agent_priv->timed_work.timer).
      
      This can turn into a deadlock because mad_agent_priv->lock is taken
      inside cm_id_priv->lock, so we can get the following set of contexts
      that deadlock each other:
      
       A: holding cm_id_priv->lock, waiting for mad_agent_priv->lock
       B: holding mad_agent_priv->lock, waiting for del_timer_sync()
       C: interrupt during mad_agent_priv->timed_work.timer that takes
          cm_id_priv->lock
      
      Fix this by using the new __cancel_delayed_work() interface (which
      internally does del_timer() instead of del_timer_sync()) in all the
      places where we are holding a lock.
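      As a userspace analogue of the fix, Python's threading.Timer shows
      the same distinction: cancel() only marks the timer (like
      del_timer()), while joining the timer thread waits for the callback
      (like del_timer_sync()) and so must happen outside the lock.  All
      names below are illustrative, not the kernel API:

```python
import threading

lock = threading.Lock()
fired = []

def callback():
    # The timer callback takes `lock`, like the interrupt context C above.
    with lock:
        fired.append(True)

t = threading.Timer(5.0, callback)
t.start()

with lock:
    # Safe: cancel() only marks the timer and returns immediately,
    # analogous to del_timer() / __cancel_delayed_work().  Calling
    # t.join() here instead would mirror del_timer_sync() and could
    # deadlock if the callback were already running and blocked on `lock`.
    t.cancel()

t.join()   # wait for the timer thread outside the lock, where it is safe
```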
      
      Addresses: http://bugzilla.kernel.org/show_bug.cgi?id=13757
      Reported-by: Bart Van Assche <bart.vanassche@gmail.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  8. 06 Sep 2009, 1 commit
    • IB: Use DEFINE_SPINLOCK() for static spinlocks · 6276e08a
      Roland Dreier committed
      Rather than just defining static spinlock_t variables and then
      initializing them later in init functions, simply define them with
      DEFINE_SPINLOCK() and remove the calls to spin_lock_init().  This cleans
      up the source a tad and also shrinks the compiled code; e.g. on x86-64:
      
      add/remove: 0/0 grow/shrink: 0/3 up/down: 0/-40 (-40)
      function                                     old     new   delta
      ib_uverbs_init                               336     326     -10
      ib_mad_init_module                           147     137     -10
      ib_sa_init                                   123     103     -20
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  9. 04 Mar 2009, 1 commit
    • IB/mad: Fix ib_post_send_mad() returning 0 with no generate send comp · 4780c195
      Ralph Campbell committed
      If ib_post_send_mad() returns 0, the API guarantees that there will be
      a callback to send_buf->mad_agent->send_handler() so that the sender
      can call ib_free_send_mad().  Otherwise, the ib_mad_send_buf will be
      leaked and the mad_agent reference count will never go to zero and the
      IB device module cannot be unloaded.  The above can happen without
      this patch if process_mad() returns (IB_MAD_RESULT_SUCCESS |
      IB_MAD_RESULT_CONSUMED).
      
      If process_mad() returns IB_MAD_RESULT_SUCCESS and there is no agent
      registered to receive the MAD being sent, handle_outgoing_dr_smp()
      returns zero, which causes a MAD packet at the end of the directed
      route to be incorrectly sent on the wire, but doesn't cause a hang
      since the HCA generates a send completion.
      Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  10. 28 Feb 2009, 2 commits
    • IB/mad: initialize mad_agent_priv before putting on lists · d9620a4c
      Ralph Campbell committed
      There is a potential race in ib_register_mad_agent() where the struct
      ib_mad_agent_private is not fully initialized before it is added to
      the list of agents per IB port. This means the ib_mad_agent_private
      could be seen before the refcount, spin locks, and linked lists are
      initialized.  The fix is to initialize the structure earlier.
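      The initialize-before-publish rule generalizes beyond the kernel.  A
      minimal Python sketch of the pattern, with hypothetical field names
      standing in for the refcount, spin locks and linked lists of struct
      ib_mad_agent_private:

```python
import threading

agents = []                      # shared per-port agent list
agents_lock = threading.Lock()   # guards the list itself

class Agent:
    def __init__(self):
        # Fully initialize first, mirroring the fix: refcount, lock and
        # lists are all valid before anyone else can see the object...
        self.refcount = 1
        self.lock = threading.Lock()
        self.send_list = []
        # ...and only then is the object published to other threads.
        with agents_lock:
            agents.append(self)

a = Agent()
```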
      Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
    • IB/mad: Fix null pointer dereference in local_completions() · 1d9bc6d6
      Ralph Campbell committed
      handle_outgoing_dr_smp() can queue a struct ib_mad_local_private
      *local on the mad_agent_priv->local_work work queue with
      local->mad_priv == NULL if device->process_mad() returns
      IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY and
      (!ib_response_mad(&mad_priv->mad.mad) ||
      !mad_agent_priv->agent.recv_handler).
      
      In this case, local_completions() will be called with local->mad_priv
      == NULL. The code does check for this case and skips calling
      recv_mad_agent->agent.recv_handler() but recv == 0 so
      kmem_cache_free() is called with a NULL pointer.
      
      Also, since recv isn't reinitialized each time through the loop, it
      can cause a memory leak if recv should have been zero.
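      A userspace sketch of the two fixes, with hypothetical names standing
      in for the kernel structures: the per-iteration flag is reset each
      time around the loop, and nothing reaches the free path for a NULL
      mad_priv:

```python
def local_completions(local_list):
    freed = []
    for local in local_list:
        recv = 0                      # re-initialized every iteration (the fix)
        if local["mad_priv"] is not None:
            recv = 1                  # this entry really has a receive buffer
            # ... recv_handler() would be invoked here ...
        if recv:
            # free only what was actually allocated; with `recv` hoisted
            # out of the loop, a stale 1 would "free" None here
            freed.append(local["mad_priv"])
    return freed
```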
      Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com>
  11. 15 Oct 2008, 1 commit
  12. 21 Sep 2008, 1 commit
  13. 24 May 2008, 1 commit
  14. 26 Jan 2008, 4 commits
  15. 04 Aug 2007, 2 commits
  16. 20 Jul 2007, 1 commit
    • mm: Remove slab destructors from kmem_cache_create(). · 20c2df83
      Paul Mundt committed
      Slab destructors have not been supported since Christoph's
      c59def9f change.  They've been BUGs for both slab and slub, and
      slob never supported them either.
      
      This rips out support for the dtor pointer from kmem_cache_create()
      completely and fixes up every single callsite in the kernel (there were
      about 224, not including the slab allocator definitions themselves,
      or the documentation references).
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  17. 10 Jul 2007, 1 commit
  18. 07 May 2007, 1 commit
    • IB: Add CQ comp_vector support · f4fd0b22
      Michael S. Tsirkin committed
      Add a num_comp_vectors member to struct ib_device and extend
      ib_create_cq() to pass in a comp_vector parameter -- this parallels
      the userspace libibverbs API.  Update all hardware drivers to set
      num_comp_vectors to 1 and have all ULPs pass 0 for the comp_vector
      value.  Pass the value of num_comp_vectors to userspace rather than
      hard-coding a value of 1.
      
      We want multiple CQ event vector support (via MSI-X or similar for
      adapters that can generate multiple interrupts), but it's not clear
      how many vectors we want, or how we want to deal with policy issues
      such as how to decide which vector to use or how to set up interrupt
      affinity.  This patch is useful for experimenting, since no core
      changes will be necessary when updating a driver to support multiple
      vectors, and we know that we want to make at least these changes
      anyway.
      Signed-off-by: Michael S. Tsirkin <mst@dev.mellanox.co.il>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  19. 25 Apr 2007, 1 commit
  20. 05 Feb 2007, 1 commit
    • IB: Return qp pointer as part of ib_wc · 062dbb69
      Michael S. Tsirkin committed
      struct ib_wc currently only includes the local QP number: this matches
      the IB spec, but seems mostly useless. The following patch replaces
      this with the pointer to qp itself, and updates all low level drivers
      and all users.
      
      This has the following advantages:
      - Ability to get a per-qp context through wc->qp->qp_context
      - Existing drivers already have the qp pointer ready in poll cq, so
        this change actually saves a tiny bit (one extra memory read) on
        the data path (for ehca it would actually be expensive to find the
        QP pointer when polling a CQ, but ehca does not support SRQ so we
        can leave wc->qp as NULL for ehca)
      - Users that need the QP number can still get it through wc->qp->qp_num
      
      Use case:
      
      In IPoIB connected mode code, I have a common CQ shared by multiple
      QPs.  To track connection usage, I need a way to get at some per-QP
      context upon the completion, and I would like to avoid allocating
      context object per work request just to stick a QP pointer into it.
      With this code, I can just use wc->qp->qp_context.
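      The design point (a pointer beats a number for reaching per-QP state)
      can be illustrated with a small Python analogue; the class and dict
      below are stand-ins, not the verbs API:

```python
class QP:
    def __init__(self, qp_num, qp_context):
        self.qp_num = qp_num          # the number stays reachable via the pointer
        self.qp_context = qp_context  # per-QP state the consumer wants

qp = QP(7, {"conn": "A"})
wc = {"qp": qp}   # the completion now carries the object itself,
                  # where it previously carried only qp_num

# one dereference instead of a qp_num -> qp table lookup:
ctx = wc["qp"].qp_context
```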
      Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  21. 13 Dec 2006, 1 commit
  22. 30 Nov 2006, 1 commit
  23. 22 Nov 2006, 1 commit
  24. 14 Nov 2006, 1 commit
    • IB/mad: Fix race between cancel and receive completion · 39798695
      Roland Dreier committed
      When ib_cancel_mad() is called, it puts the canceled send on a list
      and schedules a "flushed" callback from process context.  However,
      this leaves a window where a receive completion could be processed
      before the send is fully flushed.
      
      This is fine, except that ib_find_send_mad() will find the MAD and
      return it to the receive processing, which results in the sender
      getting both a successful receive and a "flushed" send completion for
      the same request.  Understandably, this confuses the sender, which is
      expecting only one of these two callbacks, and leads to grief such as
      a use-after-free in IPoIB.
      
      Fix this by changing ib_find_send_mad() to return a send struct only
      if the status is still successful (and not "flushed").  The search of
      the send_list already had this check, so this patch just adds the same
      check to the search of the wait_list.
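      A sketch of the corrected lookup in Python, with hypothetical fields;
      the point is only that the wait_list search now applies the same
      status filter the send_list search already had:

```python
def find_send_mad(send_list, wait_list, tid):
    # Return a send only if it has not already been canceled ("flushed"),
    # so the sender never sees both a flushed send completion and a
    # successful receive for the same transaction.
    for s in send_list + wait_list:   # wait_list now gets the same check
        if s["tid"] == tid and s["status"] == "OK":
            return s
    return None
```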
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  25. 27 Sep 2006, 1 commit
  26. 23 Sep 2006, 2 commits
  27. 25 Jul 2006, 1 commit
  28. 27 Jun 2006, 1 commit
  29. 18 Jun 2006, 2 commits
  30. 13 May 2006, 1 commit
    • IB: refcount race fixes · 1b52fa98
      Sean Hefty committed
      Fix race condition during destruction calls to avoid possibility of
      accessing object after it has been freed.  Instead of waking up a wait
      queue directly, which is susceptible to a race where the object is
      freed between the reference count going to 0 and the wake_up(), use a
      completion to wait in the function doing the freeing.
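      The completion pattern maps naturally onto a userspace sketch, with
      threading.Event standing in for struct completion (all names below
      are illustrative, not the kernel API):

```python
import threading

class Obj:
    def __init__(self):
        self.lock = threading.Lock()
        self.refcount = 1
        self.comp = threading.Event()   # ~ init_completion()

    def put(self):
        with self.lock:
            self.refcount -= 1
            last = (self.refcount == 0)
        if last:
            # ~ complete(): the last touch of the object is a signal to
            # the one waiter, not a wake_up() on a wait queue embedded in
            # memory that may already have been freed by that waiter
            self.comp.set()

    def destroy(self):
        self.put()        # drop the caller's reference
        self.comp.wait()  # ~ wait_for_completion(); returns only after
                          # the final put has signalled
        # now it is safe to free the object

o = Obj()
o.destroy()
```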
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  31. 20 Apr 2006, 1 commit
  32. 03 Apr 2006, 1 commit
    • IB/mad: fix oops in cancel_mads · 37289efe
      Michael S. Tsirkin committed
      We have seen the following oops in cancel_mads when restarting
      opensm multiple times:
      
          Call Trace:
            [<c010549b>] show_stack+0x9b/0xb0
            [<c01055ec>] show_registers+0x11c/0x190
            [<c01057cd>] die+0xed/0x160
            [<c031b966>] do_page_fault+0x3f6/0x5d0
            [<c010511f>] error_code+0x4f/0x60
            [<f8ac4e38>] cancel_mads+0x128/0x150 [ib_mad]
            [<f8ac2811>] unregister_mad_agent+0x11/0x130 [ib_mad]
            [<f8ac2a12>] ib_unregister_mad_agent+0x12/0x20 [ib_mad]
            [<f8b10f23>] ib_umad_close+0xf3/0x130 [ib_umad]
            [<c0162937>] __fput+0x187/0x1c0
            [<c01627a9>] fput+0x19/0x20
            [<c0160f7a>] filp_close+0x3a/0x60
            [<c0121ca8>] put_files_struct+0x68/0xa0
            [<c0103cf7>] do_signal+0x47/0x100
            [<c0103ded>] do_notify_resume+0x3d/0x40
            [<c0103f9e>] work_notifysig+0x13/0x25
      
      We traced this back to local_completions unlocking mad_agent_priv->lock
      while still keeping a pointer into local_list. A later call to
      list_del(&local->completion_list) would then corrupt the list.
      
      To fix this, remove the entry from local_list after looking it up but
      before releasing mad_agent_priv->lock, to prevent cancel_mads from
      finding and freeing it.
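      A userspace sketch of the fix (hypothetical names): the entry is
      removed from local_list while the lock is still held, so a concurrent
      cancel path taking the same lock can never find, and free, an entry
      this thread is about to process:

```python
import threading

lock = threading.Lock()           # ~ mad_agent_priv->lock
local_list = ["req1", "req2"]     # ~ the local completion list
completed = []

def local_completions():
    while True:
        with lock:
            if not local_list:
                return
            # ~ list_del(&local->completion_list) *before* unlocking:
            # once we drop the lock, cancel_mads() can no longer see
            # this entry, so it cannot free it under us
            local = local_list.pop(0)
        completed.append(local)   # process outside the lock, as before

local_completions()
```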
      Signed-off-by: Jack Morgenstein <jackm@mellanox.co.il>
      Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>