1. 22 Aug 2012, 2 commits
    • workqueue: deprecate __cancel_delayed_work() · 136b5721
      Authored by Tejun Heo
      Now that cancel_delayed_work() can be safely called from IRQ handlers,
      there's no reason to use __cancel_delayed_work().  Use
      cancel_delayed_work() instead of __cancel_delayed_work() and mark the
      latter deprecated.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Jens Axboe <axboe@kernel.dk>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Roland Dreier <roland@kernel.org>
      Cc: Tomi Valkeinen <tomi.valkeinen@ti.com>
      136b5721
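      A minimal sketch of the call-site change this deprecation implies; the
      work item and IRQ handler here are hypothetical, the point being that
      cancel_delayed_work() is now safe to call from IRQ context:

          #include <linux/interrupt.h>
          #include <linux/workqueue.h>

          static struct delayed_work poll_work;   /* hypothetical work item */

          static irqreturn_t example_irq(int irq, void *dev_id)
          {
                  /* Before: __cancel_delayed_work(&poll_work);
                   * After:  the regular helper works even here. */
                  cancel_delayed_work(&poll_work);
                  return IRQ_HANDLED;
          }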
    • workqueue: use mod_delayed_work() instead of __cancel + queue · e7c2f967
      Authored by Tejun Heo
      Now that mod_delayed_work() is safe to call from IRQ handlers,
      __cancel_delayed_work() followed by queue_delayed_work() can be
      replaced with mod_delayed_work().
      
      Most conversions are straight-forward except for the following.
      
      * net/core/link_watch.c: linkwatch_schedule_work() was doing quite an
        elaborate dance around its delayed_work.  Collapse it such that
        linkwatch_work is queued for immediate execution if LW_URGENT and the
        existing timer is kept otherwise.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Tomi Valkeinen <tomi.valkeinen@ti.com> 
      e7c2f967
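      A sketch of the conversion pattern itself, with illustrative names for
      the workqueue, work item and delay:

          #include <linux/workqueue.h>

          static void example_rearm(struct workqueue_struct *wq,
                                    struct delayed_work *dwork,
                                    unsigned long delay)
          {
                  /* Before this series:
                   *     __cancel_delayed_work(dwork);
                   *     queue_delayed_work(wq, dwork, delay);
                   *
                   * After: one call that adjusts the pending timer, or queues
                   * the work if it was not pending.
                   */
                  mod_delayed_work(wq, dwork, delay);
          }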
  2. 14 Aug 2012, 1 commit
    • workqueue: use mod_delayed_work() instead of cancel + queue · 41f63c53
      Authored by Tejun Heo
      Convert delayed_work users doing cancel_delayed_work() followed by
      queue_delayed_work() to mod_delayed_work().
      
      Most conversions are straight-forward.  Ones worth mentioning are,
      
      * drivers/edac/edac_mc.c: edac_mc_workq_setup() converted to always
        use mod_delayed_work() and cancel loop in
        edac_mc_reset_delay_period() is dropped.
      
      * drivers/platform/x86/thinkpad_acpi.c: No need to remember whether
        watchdog is active or not.  @fan_watchdog_active and related code
        dropped.
      
      * drivers/power/charger-manager.c: Seemingly a lot of
        delayed_work_pending() abuse going on here.
        [delayed_]work_pending() are unsynchronized and racy when used like
        this.  I converted one instance in fullbatt_handler().  Please
        convert the rest so that it invokes workqueue APIs for the intended
        target state rather than trying to game work item pending state
        transitions, e.g. if the timer should be modified, call
        mod_delayed_work(); if it should be canceled, call
        cancel_delayed_work[_sync]().
      
      * drivers/thermal/thermal_sys.c: thermal_zone_device_set_polling()
        simplified.  Note that the round_jiffies() calls in this function are
        meaningless: round_jiffies() works on absolute jiffies, not on the
        relative delay used by delayed_work.
      
      v2: Tomi pointed out that __cancel_delayed_work() users can't be
          safely converted to mod_delayed_work().  They could be calling it
          from irq context and if that happens while delayed_work_timer_fn()
          is running, it could deadlock.  __cancel_delayed_work() users are
          dropped.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
      Acked-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Acked-by: Anton Vorontsov <cbouatmailru@gmail.com>
      Acked-by: David Howells <dhowells@redhat.com>
      Cc: Tomi Valkeinen <tomi.valkeinen@ti.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Doug Thompson <dougthompson@xmission.com>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Roland Dreier <roland@kernel.org>
      Cc: "John W. Linville" <linville@tuxdriver.com>
      Cc: Zhang Rui <rui.zhang@intel.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Johannes Berg <johannes@sipsolutions.net>
      41f63c53
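      A sketch contrasting the two patterns the message describes (the work
      item name is illustrative): testing the pending state is racy, while
      mod_delayed_work() states the intended end result directly:

          #include <linux/workqueue.h>

          static struct delayed_work monitor_work;   /* illustrative work item */

          static void example_poll_soon(unsigned long delay)
          {
                  /* Racy version: the pending bit can change between the
                   * test and the queue call.
                   *
                   *     if (!delayed_work_pending(&monitor_work))
                   *             queue_delayed_work(system_wq, &monitor_work, delay);
                   *
                   * Preferred: state the intended result in a single call.
                   */
                  mod_delayed_work(system_wq, &monitor_work, delay);
          }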
  3. 28 Jul 2012, 2 commits
  4. 12 Jul 2012, 1 commit
  5. 09 Jul 2012, 6 commits
  6. 30 Jun 2012, 1 commit
    • netlink: add netlink_kernel_cfg parameter to netlink_kernel_create · a31f2d17
      Authored by Pablo Neira Ayuso
      This patch adds the following structure:
      
      struct netlink_kernel_cfg {
              unsigned int    groups;
              void            (*input)(struct sk_buff *skb);
              struct mutex    *cb_mutex;
      };
      
      That can be passed to netlink_kernel_create to set optional configurations
      for netlink kernel sockets.
      
      I've populated this structure by looking for NULL and zero parameters in
      the existing code.  The remaining parameters that always need to be set
      are still left in the original interface.
      
      That includes optional parameters for the netlink socket creation. This allows
      easy extensibility of this interface in the future.
      
      This patch also adapts all callers to use this new interface.
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a31f2d17
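      A sketch of a caller after this change, assuming the post-patch
      signature still takes the namespace, protocol unit and owning module
      and adds the cfg pointer; the protocol and input handler below are
      illustrative:

          #include <linux/module.h>
          #include <linux/netlink.h>
          #include <net/net_namespace.h>
          #include <net/sock.h>

          static void example_input(struct sk_buff *skb)
          {
                  /* handle the received netlink message */
          }

          static struct sock *example_create(void)
          {
                  struct netlink_kernel_cfg cfg = {
                          .groups = 1,
                          .input  = example_input,
                          /* .cb_mutex left NULL: use the default locking */
                  };

                  return netlink_kernel_create(&init_net, NETLINK_USERSOCK,
                                               THIS_MODULE, &cfg);
          }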
  7. 27 Jun 2012, 1 commit
  8. 20 Jun 2012, 1 commit
  9. 12 May 2012, 1 commit
  10. 09 May 2012, 5 commits
    • IB/core: Add raw packet QP type · c938a616
      Authored by Or Gerlitz
      IB_QPT_RAW_PACKET allows applications to build a complete packet,
      including L2 headers, when sending; on the receive side, the HW will
      not strip any headers.
      
      This QP type is designed for userspace direct access to Ethernet; for
      example by applications that do TCP/IP themselves.  Only processes
      with the NET_RAW capability are allowed to create raw packet QPs (the
      name "raw packet QP" is supposed to suggest an analogy to AF_PACKET /
      SOL_RAW sockets).
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Reviewed-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      c938a616
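      A sketch of a kernel client creating such a QP (the PD and CQs are
      assumed to exist already; the CAP_NET_RAW check mentioned above applies
      to userspace callers in the uverbs path):

          #include <rdma/ib_verbs.h>

          static struct ib_qp *example_create_raw_qp(struct ib_pd *pd,
                                                     struct ib_cq *send_cq,
                                                     struct ib_cq *recv_cq)
          {
                  struct ib_qp_init_attr attr = {
                          .send_cq     = send_cq,
                          .recv_cq     = recv_cq,
                          .cap         = { .max_send_wr  = 64,
                                           .max_recv_wr  = 64,
                                           .max_send_sge = 1,
                                           .max_recv_sge = 1 },
                          .sq_sig_type = IB_SIGNAL_ALL_WR,
                          .qp_type     = IB_QPT_RAW_PACKET,
                  };

                  return ib_create_qp(pd, &attr);
          }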
    • RDMA/cma: Fix lockdep false positive recursive locking · b6cec8aa
      Authored by Sean Hefty
      The following lockdep problem was reported by Or Gerlitz <ogerlitz@mellanox.com>:
      
          [ INFO: possible recursive locking detected ]
          3.3.0-32035-g1b2649e-dirty #4 Not tainted
          ---------------------------------------------
          kworker/5:1/418 is trying to acquire lock:
            (&id_priv->handler_mutex){+.+.+.}, at: [<ffffffffa0138a41>] rdma_destroy_id+0x33/0x1f0 [rdma_cm]
      
          but task is already holding lock:
            (&id_priv->handler_mutex){+.+.+.}, at: [<ffffffffa0135130>] cma_disable_callback+0x24/0x45 [rdma_cm]
      
          other info that might help us debug this:
           Possible unsafe locking scenario:
      
                 CPU0
                 ----
            lock(&id_priv->handler_mutex);
            lock(&id_priv->handler_mutex);
      
           *** DEADLOCK ***
      
           May be due to missing lock nesting notation
      
          3 locks held by kworker/5:1/418:
            #0:  (ib_cm){.+.+.+}, at: [<ffffffff81042ac1>] process_one_work+0x210/0x4a6
            #1:  ((&(&work->work)->work)){+.+.+.}, at: [<ffffffff81042ac1>] process_one_work+0x210/0x4a6
            #2:  (&id_priv->handler_mutex){+.+.+.}, at: [<ffffffffa0135130>] cma_disable_callback+0x24/0x45 [rdma_cm]
      
          stack backtrace:
          Pid: 418, comm: kworker/5:1 Not tainted 3.3.0-32035-g1b2649e-dirty #4
          Call Trace:
           [<ffffffff8102b0fb>] ? console_unlock+0x1f4/0x204
           [<ffffffff81068771>] __lock_acquire+0x16b5/0x174e
           [<ffffffff8106461f>] ? save_trace+0x3f/0xb3
           [<ffffffff810688fa>] lock_acquire+0xf0/0x116
           [<ffffffffa0138a41>] ? rdma_destroy_id+0x33/0x1f0 [rdma_cm]
           [<ffffffff81364351>] mutex_lock_nested+0x64/0x2ce
           [<ffffffffa0138a41>] ? rdma_destroy_id+0x33/0x1f0 [rdma_cm]
           [<ffffffff81065a78>] ? trace_hardirqs_on_caller+0x11e/0x155
           [<ffffffff81065abc>] ? trace_hardirqs_on+0xd/0xf
           [<ffffffffa0138a41>] rdma_destroy_id+0x33/0x1f0 [rdma_cm]
           [<ffffffffa0139c02>] cma_req_handler+0x418/0x644 [rdma_cm]
           [<ffffffffa012ee88>] cm_process_work+0x32/0x119 [ib_cm]
           [<ffffffffa0130299>] cm_req_handler+0x928/0x982 [ib_cm]
           [<ffffffffa01302f3>] ? cm_req_handler+0x982/0x982 [ib_cm]
           [<ffffffffa0130326>] cm_work_handler+0x33/0xfe5 [ib_cm]
           [<ffffffff81065a78>] ? trace_hardirqs_on_caller+0x11e/0x155
           [<ffffffffa01302f3>] ? cm_req_handler+0x982/0x982 [ib_cm]
           [<ffffffff81042b6e>] process_one_work+0x2bd/0x4a6
           [<ffffffff81042ac1>] ? process_one_work+0x210/0x4a6
           [<ffffffff813669f3>] ? _raw_spin_unlock_irq+0x2b/0x40
           [<ffffffff8104316e>] worker_thread+0x1d6/0x350
           [<ffffffff81042f98>] ? rescuer_thread+0x241/0x241
           [<ffffffff81046a32>] kthread+0x84/0x8c
           [<ffffffff8136e854>] kernel_thread_helper+0x4/0x10
           [<ffffffff81366d59>] ? retint_restore_args+0xe/0xe
           [<ffffffff810469ae>] ? __init_kthread_worker+0x56/0x56
           [<ffffffff8136e850>] ? gs_change+0xb/0xb
      
      The actual locking is fine, since we're dealing with different locks,
      but from the same lock class.  cma_disable_callback() acquires the
      listening id mutex, whereas rdma_destroy_id() acquires the mutex for
      the new connection id.  To fix this, delay the call to
      rdma_destroy_id() until we've released the listening id mutex.
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      b6cec8aa
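      The shape of the fix, sketched with error handling omitted and with
      identifiers only loosely following the rdma_cm internals:

          mutex_lock(&listen_id->handler_mutex);
          /* ... hand the new connection to the upper layer ... */
          mutex_unlock(&listen_id->handler_mutex);

          if (destroy_new_id)
                  rdma_destroy_id(&conn_id->id);  /* outside the listener's mutex */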
    • IB/uverbs: Lock SRQ / CQ / PD objects in a consistent order · 5909ce54
      Authored by Roland Dreier
      Since XRC support was added, the uverbs code has locked SRQ, CQ and PD
      objects needed during QP and SRQ creation in different orders
      depending on the code path.  This leads to the (at least
      theoretical) possibility of deadlock, and triggers the lockdep splat
      below.
      
      Fix this by making sure we always lock the SRQ first, then CQs and
      finally the PD.
      
          ======================================================
          [ INFO: possible circular locking dependency detected ]
          3.4.0-rc5+ #34 Not tainted
          -------------------------------------------------------
          ibv_srq_pingpon/2484 is trying to acquire lock:
           (SRQ-uobj){+++++.}, at: [<ffffffffa00af51b>] idr_read_uobj+0x2f/0x4d [ib_uverbs]
      
          but task is already holding lock:
           (CQ-uobj){+++++.}, at: [<ffffffffa00af51b>] idr_read_uobj+0x2f/0x4d [ib_uverbs]
      
          which lock already depends on the new lock.
      
          the existing dependency chain (in reverse order) is:
      
          -> #2 (CQ-uobj){+++++.}:
                 [<ffffffff81070fd0>] lock_acquire+0xbf/0xfe
                 [<ffffffff81384f28>] down_read+0x34/0x43
                 [<ffffffffa00af51b>] idr_read_uobj+0x2f/0x4d [ib_uverbs]
                 [<ffffffffa00af542>] idr_read_obj+0x9/0x19 [ib_uverbs]
                 [<ffffffffa00b16c3>] ib_uverbs_create_qp+0x180/0x684 [ib_uverbs]
                 [<ffffffffa00ae3dd>] ib_uverbs_write+0xb7/0xc2 [ib_uverbs]
                 [<ffffffff810fe47f>] vfs_write+0xa7/0xee
                 [<ffffffff810fe65f>] sys_write+0x45/0x69
                 [<ffffffff8138cdf9>] system_call_fastpath+0x16/0x1b
      
          -> #1 (PD-uobj){++++++}:
                 [<ffffffff81070fd0>] lock_acquire+0xbf/0xfe
                 [<ffffffff81384f28>] down_read+0x34/0x43
                 [<ffffffffa00af51b>] idr_read_uobj+0x2f/0x4d [ib_uverbs]
                 [<ffffffffa00af542>] idr_read_obj+0x9/0x19 [ib_uverbs]
                 [<ffffffffa00af8ad>] __uverbs_create_xsrq+0x96/0x386 [ib_uverbs]
                 [<ffffffffa00b31b9>] ib_uverbs_detach_mcast+0x1cd/0x1e6 [ib_uverbs]
                 [<ffffffffa00ae3dd>] ib_uverbs_write+0xb7/0xc2 [ib_uverbs]
                 [<ffffffff810fe47f>] vfs_write+0xa7/0xee
                 [<ffffffff810fe65f>] sys_write+0x45/0x69
                 [<ffffffff8138cdf9>] system_call_fastpath+0x16/0x1b
      
          -> #0 (SRQ-uobj){+++++.}:
                 [<ffffffff81070898>] __lock_acquire+0xa29/0xd06
                 [<ffffffff81070fd0>] lock_acquire+0xbf/0xfe
                 [<ffffffff81384f28>] down_read+0x34/0x43
                 [<ffffffffa00af51b>] idr_read_uobj+0x2f/0x4d [ib_uverbs]
                 [<ffffffffa00af542>] idr_read_obj+0x9/0x19 [ib_uverbs]
                 [<ffffffffa00b1728>] ib_uverbs_create_qp+0x1e5/0x684 [ib_uverbs]
                 [<ffffffffa00ae3dd>] ib_uverbs_write+0xb7/0xc2 [ib_uverbs]
                 [<ffffffff810fe47f>] vfs_write+0xa7/0xee
                 [<ffffffff810fe65f>] sys_write+0x45/0x69
                 [<ffffffff8138cdf9>] system_call_fastpath+0x16/0x1b
      
          other info that might help us debug this:
      
          Chain exists of:
            SRQ-uobj --> PD-uobj --> CQ-uobj
      
           Possible unsafe locking scenario:
      
                 CPU0                    CPU1
                 ----                    ----
            lock(CQ-uobj);
                                         lock(PD-uobj);
                                         lock(CQ-uobj);
            lock(SRQ-uobj);
      
           *** DEADLOCK ***
      
          3 locks held by ibv_srq_pingpon/2484:
           #0:  (QP-uobj){+.+...}, at: [<ffffffffa00b162c>] ib_uverbs_create_qp+0xe9/0x684 [ib_uverbs]
           #1:  (PD-uobj){++++++}, at: [<ffffffffa00af51b>] idr_read_uobj+0x2f/0x4d [ib_uverbs]
           #2:  (CQ-uobj){+++++.}, at: [<ffffffffa00af51b>] idr_read_uobj+0x2f/0x4d [ib_uverbs]
      
          stack backtrace:
          Pid: 2484, comm: ibv_srq_pingpon Not tainted 3.4.0-rc5+ #34
          Call Trace:
           [<ffffffff8137eff0>] print_circular_bug+0x1f8/0x209
           [<ffffffff81070898>] __lock_acquire+0xa29/0xd06
           [<ffffffffa00af37c>] ? __idr_get_uobj+0x20/0x5e [ib_uverbs]
           [<ffffffffa00af51b>] ? idr_read_uobj+0x2f/0x4d [ib_uverbs]
           [<ffffffff81070fd0>] lock_acquire+0xbf/0xfe
           [<ffffffffa00af51b>] ? idr_read_uobj+0x2f/0x4d [ib_uverbs]
           [<ffffffff81070eee>] ? lock_release+0x166/0x189
           [<ffffffff81384f28>] down_read+0x34/0x43
           [<ffffffffa00af51b>] ? idr_read_uobj+0x2f/0x4d [ib_uverbs]
           [<ffffffffa00af51b>] idr_read_uobj+0x2f/0x4d [ib_uverbs]
           [<ffffffffa00af542>] idr_read_obj+0x9/0x19 [ib_uverbs]
           [<ffffffffa00b1728>] ib_uverbs_create_qp+0x1e5/0x684 [ib_uverbs]
           [<ffffffff81070fec>] ? lock_acquire+0xdb/0xfe
           [<ffffffff81070c09>] ? lock_release_non_nested+0x94/0x213
           [<ffffffff810d470f>] ? might_fault+0x40/0x90
           [<ffffffff810d470f>] ? might_fault+0x40/0x90
           [<ffffffffa00ae3dd>] ib_uverbs_write+0xb7/0xc2 [ib_uverbs]
           [<ffffffff810fe47f>] vfs_write+0xa7/0xee
           [<ffffffff810ff736>] ? fget_light+0x3b/0x99
           [<ffffffff810fe65f>] sys_write+0x45/0x69
           [<ffffffff8138cdf9>] system_call_fastpath+0x16/0x1b
      Reported-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      5909ce54
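      A generic illustration of the rule; the helper names are hypothetical
      stand-ins for the idr_read_*() calls, and the point is that every code
      path acquires the objects in the same fixed order:

          srq = get_srq_uobj_locked(srq_handle);     /* 1. SRQ first  */
          scq = get_cq_uobj_locked(send_cq_handle);  /* 2. then CQs   */
          rcq = get_cq_uobj_locked(recv_cq_handle);
          pd  = get_pd_uobj_locked(pd_handle);       /* 3. PD last    */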
    • IB/uverbs: Make lockdep output more readable · 3bea57a5
      Authored by Roland Dreier
      Add names for our lockdep classes, so instead of having to decipher
      lockdep output with mysterious names:
      
          Chain exists of:
            key#14 --> key#11 --> key#13
      
      lockdep will give us something nicer:
      
          Chain exists of:
            SRQ-uobj --> PD-uobj --> CQ-uobj
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      3bea57a5
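      A generic sketch of how a lockdep class gets a readable name (not the
      exact uverbs change): one static lock_class_key per object type, applied
      with lockdep_set_class_and_name() when the lock is initialized:

          #include <linux/lockdep.h>
          #include <linux/rwsem.h>

          static struct lock_class_key srq_lock_class;

          static void example_init_srq_uobj(struct rw_semaphore *sem)
          {
                  init_rwsem(sem);
                  lockdep_set_class_and_name(sem, &srq_lock_class, "SRQ-uobj");
          }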
    • IB/core: Use qp->usecnt to track multicast attach/detach · c3bccbfb
      Authored by Or Gerlitz
      Just as we don't allow PDs, CQs, etc. to be destroyed if there are QPs
      that are attached to them, don't let a QP be destroyed if there are
      multicast group(s) attached to it.  Use the existing usecnt field of
      struct ib_qp which was added by commit 0e0ec7e0 ("RDMA/core: Export
      ib_open_qp() to share XRC TGT QPs") to track this.
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      c3bccbfb
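      A sketch of the accounting (the real attach/detach and destroy paths
      differ in detail):

          /* ib_attach_mcast(): on success the QP gains a reference. */
          if (!ret)
                  atomic_inc(&qp->usecnt);

          /* ib_detach_mcast(): the reference is dropped again, so that
           * ib_destroy_qp() can return -EBUSY while usecnt is non-zero. */
          if (!ret)
                  atomic_dec(&qp->usecnt);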
  11. 25 Apr 2012, 2 commits
  12. 21 Apr 2012, 2 commits
  13. 05 Apr 2012, 1 commit
  14. 03 Apr 2012, 1 commit
  15. 02 Apr 2012, 1 commit
  16. 08 Mar 2012, 1 commit
    • RDMA/iwcm: Reject connect requests if cmid is not in LISTEN state · 3eae7c9f
      Authored by Steve Wise
      When destroying a listening cmid, the iwcm first marks the state of
      the cmid as DESTROYING, then releases the lock and calls into the
      iWARP provider to destroy the endpoint.  Since the cmid is not locked,
      it's possible for the iWARP provider to pass a connection request event
      to the iwcm, which will be silently dropped by the iwcm.  This causes
      the iWARP provider to never free up the resources from this connection
      because the assumption is that the iwcm will accept or reject this connection.
      
      The solution is to reject these connection requests.
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      3eae7c9f
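      The gist of the fix, sketched with simplified control flow; the state
      name and lock usage only approximate the iwcm internals:

          spin_lock_irqsave(&listen_id_priv->lock, flags);
          if (listen_id_priv->state != IW_CM_STATE_LISTEN) {
                  spin_unlock_irqrestore(&listen_id_priv->lock, flags);
                  /* Listener is going away: reject instead of dropping the
                   * event, so the provider can release its resources. */
                  iw_cm_reject(cm_id, NULL, 0);
                  return;
          }
          spin_unlock_irqrestore(&listen_id_priv->lock, flags);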
  17. 06 Mar 2012, 2 commits
    • RDMA/ucma: Fix AB-BA deadlock · 186834b5
      Authored by Sean Hefty
      When we destroy a cm_id, we must purge associated events from the
      event queue.  If the cm_id is for a listen request, we also purge
      corresponding pending connect requests.  This requires destroying
      the cm_id's associated with the connect requests by calling
      rdma_destroy_id().  rdma_destroy_id() blocks until all outstanding
      callbacks have completed.
      
      The issue is that we hold file->mut while purging events from the
      event queue.  We also acquire file->mut in our event handler.  Calling
      rdma_destroy_id() while holding file->mut can lead to a deadlock,
      since the event handler callback cannot acquire file->mut, which
      prevents rdma_destroy_id() from completing.
      
      Fix this by moving events to purge from the event queue to a temporary
      list.  We can then release file->mut and call rdma_destroy_id()
      outside of holding any locks.
      
      Bug report by Or Gerlitz <ogerlitz@mellanox.com>:
      
          [ INFO: possible circular locking dependency detected ]
          3.3.0-rc5-00008-g79f1e43-dirty #34 Tainted: G          I
      
          tgtd/9018 is trying to acquire lock:
           (&id_priv->handler_mutex){+.+.+.}, at: [<ffffffffa0359a41>] rdma_destroy_id+0x33/0x1f0 [rdma_cm]
      
          but task is already holding lock:
           (&file->mut){+.+.+.}, at: [<ffffffffa02470fe>] ucma_free_ctx+0xb6/0x196 [rdma_ucm]
      
          which lock already depends on the new lock.
      
      
          the existing dependency chain (in reverse order) is:
      
          -> #1 (&file->mut){+.+.+.}:
                 [<ffffffff810682f3>] lock_acquire+0xf0/0x116
                 [<ffffffff8135f179>] mutex_lock_nested+0x64/0x2e6
                 [<ffffffffa0247636>] ucma_event_handler+0x148/0x1dc [rdma_ucm]
                 [<ffffffffa035a79a>] cma_ib_handler+0x1a7/0x1f7 [rdma_cm]
                 [<ffffffffa0333e88>] cm_process_work+0x32/0x119 [ib_cm]
                 [<ffffffffa03362ab>] cm_work_handler+0xfb8/0xfe5 [ib_cm]
                 [<ffffffff810423e2>] process_one_work+0x2bd/0x4a6
                 [<ffffffff810429e2>] worker_thread+0x1d6/0x350
                 [<ffffffff810462a6>] kthread+0x84/0x8c
                 [<ffffffff81369624>] kernel_thread_helper+0x4/0x10
      
          -> #0 (&id_priv->handler_mutex){+.+.+.}:
                 [<ffffffff81067b86>] __lock_acquire+0x10d5/0x1752
                 [<ffffffff810682f3>] lock_acquire+0xf0/0x116
                 [<ffffffff8135f179>] mutex_lock_nested+0x64/0x2e6
                 [<ffffffffa0359a41>] rdma_destroy_id+0x33/0x1f0 [rdma_cm]
                 [<ffffffffa024715f>] ucma_free_ctx+0x117/0x196 [rdma_ucm]
                 [<ffffffffa0247255>] ucma_close+0x77/0xb4 [rdma_ucm]
                 [<ffffffff810df6ef>] fput+0x117/0x1cf
                 [<ffffffff810dc76e>] filp_close+0x6d/0x78
                 [<ffffffff8102b667>] put_files_struct+0xbd/0x17d
                 [<ffffffff8102b76d>] exit_files+0x46/0x4e
                 [<ffffffff8102d057>] do_exit+0x299/0x75d
                 [<ffffffff8102d599>] do_group_exit+0x7e/0xa9
                 [<ffffffff8103ae4b>] get_signal_to_deliver+0x536/0x555
                 [<ffffffff81001717>] do_signal+0x39/0x634
                 [<ffffffff81001d39>] do_notify_resume+0x27/0x69
                 [<ffffffff81361c03>] retint_signal+0x46/0x83
      
          other info that might help us debug this:
      
           Possible unsafe locking scenario:
      
                 CPU0                    CPU1
                 ----                    ----
            lock(&file->mut);
                                         lock(&id_priv->handler_mutex);
                                         lock(&file->mut);
            lock(&id_priv->handler_mutex);
      
           *** DEADLOCK ***
      
          1 lock held by tgtd/9018:
           #0:  (&file->mut){+.+.+.}, at: [<ffffffffa02470fe>] ucma_free_ctx+0xb6/0x196 [rdma_ucm]
      
          stack backtrace:
          Pid: 9018, comm: tgtd Tainted: G          I  3.3.0-rc5-00008-g79f1e43-dirty #34
          Call Trace:
           [<ffffffff81029e9c>] ? console_unlock+0x18e/0x207
           [<ffffffff81066433>] print_circular_bug+0x28e/0x29f
           [<ffffffff81067b86>] __lock_acquire+0x10d5/0x1752
           [<ffffffff810682f3>] lock_acquire+0xf0/0x116
           [<ffffffffa0359a41>] ? rdma_destroy_id+0x33/0x1f0 [rdma_cm]
           [<ffffffff8135f179>] mutex_lock_nested+0x64/0x2e6
           [<ffffffffa0359a41>] ? rdma_destroy_id+0x33/0x1f0 [rdma_cm]
           [<ffffffff8106546d>] ? trace_hardirqs_on_caller+0x11e/0x155
           [<ffffffff810654b1>] ? trace_hardirqs_on+0xd/0xf
           [<ffffffffa0359a41>] rdma_destroy_id+0x33/0x1f0 [rdma_cm]
           [<ffffffffa024715f>] ucma_free_ctx+0x117/0x196 [rdma_ucm]
           [<ffffffffa0247255>] ucma_close+0x77/0xb4 [rdma_ucm]
           [<ffffffff810df6ef>] fput+0x117/0x1cf
           [<ffffffff810dc76e>] filp_close+0x6d/0x78
           [<ffffffff8102b667>] put_files_struct+0xbd/0x17d
           [<ffffffff8102b5cc>] ? put_files_struct+0x22/0x17d
           [<ffffffff8102b76d>] exit_files+0x46/0x4e
           [<ffffffff8102d057>] do_exit+0x299/0x75d
           [<ffffffff8102d599>] do_group_exit+0x7e/0xa9
           [<ffffffff8103ae4b>] get_signal_to_deliver+0x536/0x555
           [<ffffffff810654b1>] ? trace_hardirqs_on+0xd/0xf
           [<ffffffff81001717>] do_signal+0x39/0x634
           [<ffffffff8135e037>] ? printk+0x3c/0x45
           [<ffffffff8106546d>] ? trace_hardirqs_on_caller+0x11e/0x155
           [<ffffffff810654b1>] ? trace_hardirqs_on+0xd/0xf
           [<ffffffff81361803>] ? _raw_spin_unlock_irq+0x2b/0x40
           [<ffffffff81039011>] ? set_current_blocked+0x44/0x49
           [<ffffffff81361bce>] ? retint_signal+0x11/0x83
           [<ffffffff81001d39>] do_notify_resume+0x27/0x69
           [<ffffffff8118a1fe>] ? trace_hardirqs_on_thunk+0x3a/0x3f
           [<ffffffff81361c03>] retint_signal+0x46/0x83
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      186834b5
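      A sketch of the pattern described above, with illustrative field names:
      splice the events to purge onto a private list while holding file->mut,
      then destroy the ids after dropping the mutex:

          struct ucma_event *ev, *tmp;
          LIST_HEAD(purge);

          mutex_lock(&file->mut);
          list_splice_init(&ctx->pending_events, &purge);  /* illustrative */
          mutex_unlock(&file->mut);

          /* Destroy outside file->mut so event handlers can still take it. */
          list_for_each_entry_safe(ev, tmp, &purge, list) {
                  rdma_destroy_id(ev->cm_id);
                  kfree(ev);
          }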
    • IB: Use central enum for speed instead of hard-coded values · 2e96691c
      Authored by Or Gerlitz
      The kernel IB stack uses one enumeration for IB speed, which wasn't
      explicitly specified in the verbs header file.  Add that enum, and use
      it all over the code.
      
      The IB speed/width notation is also used by iWARP and IBoE HW drivers,
      which use the convention of rate = speed * width to advertise their
      port link rate.
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      2e96691c
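      The enum looks like this in include/rdma/ib_verbs.h (reproduced here for
      reference; see the header for the authoritative definition).  With the
      rate = speed * width convention, a 10G Ethernet port can for instance be
      advertised as QDR (10 Gb/s) times a 1X width:

          enum ib_port_speed {
                  IB_SPEED_SDR   = 1,
                  IB_SPEED_DDR   = 2,
                  IB_SPEED_QDR   = 4,
                  IB_SPEED_FDR10 = 8,
                  IB_SPEED_FDR   = 16,
                  IB_SPEED_EDR   = 32
          };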
  18. 28 Feb 2012, 1 commit
  19. 27 Feb 2012, 1 commit
  20. 26 Feb 2012, 1 commit
  21. 28 Jan 2012, 2 commits
    • RDMA/ucma: Discard all events for new connections until accepted · 9ced69ca
      Authored by Sean Hefty
      After reporting a new connection request to user space, the rdma_ucm
      will discard subsequent events until the user has associated a user
      space identifier with the kernel cm_id.  This is needed to avoid
      reporting a reject/disconnect event to the user for a request that
      they may not have processed.
      
      The user space identifier is set once the user tries to accept the
      connection request.  However, the following race exists in ucma_accept():
      
      	ctx->uid = cmd.uid;
      	<events may be reported now>
      	ret = rdma_accept(ctx->cm_id, ...);
      
      Once ctx->uid has been set, new events may be reported to the user.
      While the above mentioned race is avoided, there is an issue that the
      user _may_ receive a reject/disconnect event if rdma_accept() fails,
      depending on when the event is processed.  To simplify the use of
      rdma_accept(), discard all events unless rdma_accept() succeeds.
      
      This problem was discovered based on questions from Roland Dreier
      <roland@purestorage.com>.
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      9ced69ca
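      A sketch of the intended ordering, assuming the uid assignment and the
      accept are serialized against event delivery by file->mut (the real
      ucma_accept() differs in detail):

          mutex_lock(&file->mut);
          ret = rdma_accept(ctx->cm_id, &conn_param);
          if (!ret)
                  ctx->uid = cmd.uid;  /* deliver events only after success */
          mutex_unlock(&file->mut);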
    • RDMA/core: Fix kernel panic by always initializing qp->usecnt · e47e321a
      Authored by Bernd Schubert
      We have just been investigating kernel panics related to
      cq->ibcq.event_handler() completion calls.  The problem is that
      ib_destroy_qp() fails with -EBUSY.
      
      Further investigation revealed qp->usecnt is not initialized.  This
      counter was introduced in linux-3.2 by commit 0e0ec7e0
      ("RDMA/core: Export ib_open_qp() to share XRC TGT QPs") but it only
      gets initialized for IB_QPT_XRC_TGT, but it is checked in
      ib_destroy_qp() for any QP type.
      
      Fix this by initializing qp->usecnt for every QP we create.
      Signed-off-by: Bernd Schubert <bernd.schubert@itwm.fraunhofer.de>
      Signed-off-by: Sven Breuner <sven.breuner@itwm.fraunhofer.de>
      
      [ Initialize qp->usecnt in uverbs too.  - Sean ]
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      e47e321a
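      The essence of the change is one initialization in the QP creation
      paths; a sketch:

          /* In ib_create_qp() (and the matching uverbs path), after the
           * driver hands back the new QP: */
          atomic_set(&qp->usecnt, 0);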
  22. 26 Jan 2012, 1 commit
  23. 05 Jan 2012, 2 commits
  24. 04 Jan 2012, 1 commit