1. 13 January 2009, 1 commit
  2. 30 October 2008, 1 commit
  3. 29 October 2008, 1 commit
  4. 01 October 2008, 1 commit
    • IPoIB: Use netif_tx_lock() and get rid of private tx_lock, LLTX · 943c246e
      Committed by Roland Dreier
      Currently, IPoIB is an LLTX driver that uses its own IRQ-disabling
      tx_lock.  Not only do we want to get rid of LLTX, this actually causes
      problems because of the skb_orphan() done with this tx_lock held: some
      skb destructors expect to be run with interrupts enabled.
      
      The simplest fix for this is to get rid of the driver-private tx_lock
      and stop using LLTX.  We kill off priv->tx_lock and use
      netif_tx_lock[_bh]() instead; the patch to do this is a tiny bit
      tricky because we need to update places that take priv->lock inside
      the tx_lock to disable IRQs, rather than relying on tx_lock having
      already disabled IRQs.
      
      Also, there are a couple of places where we need to disable BHs to
      make sure we have a consistent context to call netif_tx_lock() (since
      we can no longer use the _irqsave() variants), and we also have to change
      ipoib_send_comp_handler() to call drain_tx_cq() through a timer rather
      than directly, because ipoib_send_comp_handler() runs in interrupt
      context and drain_tx_cq() must run in BH context so it can call
      netif_tx_lock().
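
      A minimal sketch of the resulting locking pattern (illustrative only, not
      the actual patch; the function name is invented, but netif_tx_lock_bh()
      and priv->lock are the ones discussed above):

      static void example_tx_path(struct net_device *dev)
      {
              struct ipoib_dev_priv *priv = netdev_priv(dev);
              unsigned long flags;

              netif_tx_lock_bh(dev);                  /* replaces priv->tx_lock */

              /* tx_lock no longer disables IRQs, so priv->lock must do it */
              spin_lock_irqsave(&priv->lock, flags);
              /* ... update TX state protected by priv->lock ... */
              spin_unlock_irqrestore(&priv->lock, flags);

              netif_tx_unlock_bh(dev);
      }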
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  5. 17 September 2008, 1 commit
    • IPoIB: Fix deadlock on RTNL between bcast join comp and ipoib_stop() · e8224e4b
      Committed by Yossi Etigin
      Taking rtnl_lock in ipoib_mcast_join_complete() causes a deadlock with
      ipoib_stop().  We avoid it by scheduling the piece of code that takes
      the lock on ipoib_workqueue instead of executing it directly.  This
      works because we only flush the ipoib_workqueue with the RTNL not held.
      
      The deadlock happens because ipoib_stop() calls ipoib_ib_dev_down()
      which calls ipoib_mcast_dev_flush(), which calls ipoib_mcast_free(),
      which calls ipoib_mcast_leave(). The latter calls
      ib_sa_free_multicast(), and this waits until the multicast completion
      handler finishes.  This handler is ipoib_mcast_join_complete(), which
      waits for the rtnl_lock(), which was already taken by ipoib_stop().
      
      This bug was introduced in commit a77a57a1 ("IPoIB: Fix deadlock on
      RTNL in ipoib_stop()").
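
      A rough sketch of that deferral (the work item and handler names here are
      assumptions for illustration, not necessarily the actual ones):

      /* Runs on ipoib_workqueue, which is only flushed with RTNL not held,
       * so taking rtnl_lock() here cannot deadlock with ipoib_stop(). */
      static void example_carrier_on_task(struct work_struct *work)
      {
              struct ipoib_dev_priv *priv =
                      container_of(work, struct ipoib_dev_priv, carrier_on_task);

              rtnl_lock();
              netif_carrier_on(priv->dev);
              rtnl_unlock();
      }

      /* In the multicast join completion handler, defer instead of locking: */
      static void example_join_complete(struct ipoib_dev_priv *priv)
      {
              queue_work(ipoib_workqueue, &priv->carrier_on_task);
      }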
      Signed-off-by: Yossi Etigin <yosefe@voltaire.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  6. 20 August 2008, 1 commit
    • IPoIB: Fix deadlock on RTNL in ipoib_stop() · a77a57a1
      Committed by Roland Dreier
      Commit c8c2afe3 ("IPoIB: Use rtnl lock/unlock when changing device
      flags") added a call to rtnl_lock() in ipoib_mcast_join_task(), which
      is run from the ipoib_workqueue.  However, ipoib_stop() (which is run
      inside rtnl_lock()) flushes this workqueue, which leads to a deadlock
      if the join task is pending.
      
      Fix this by simply not flushing the workqueue from ipoib_stop().  It
      turns out that we really don't care about workqueue tasks running
      during or after ipoib_stop(), as long as we make sure to flush the
      workqueue before unregistering a netdev.
      
      This fixes <https://bugs.openfabrics.org/show_bug.cgi?id=1114>.
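
      For illustration, the flush can instead happen on a teardown path where
      the RTNL is not held (a sketch; the driver's actual cleanup function and
      ordering may differ):

      static void example_ipoib_remove(struct net_device *dev)
      {
              /* Safe here: RTNL is not held, so a pending join task that
               * wants rtnl_lock() can still run to completion. */
              flush_workqueue(ipoib_workqueue);

              unregister_netdev(dev);
              free_netdev(dev);
      }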
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  7. 15 July 2008, 8 commits
  8. 21 May 2008, 1 commit
  9. 24 April 2008, 1 commit
  10. 12 March 2008, 1 commit
    • IPoIB: Don't drop multicast sends when they can be queued · b3e2749b
      Committed by Or Gerlitz
      When set_multicast_list() is called the multicast task is restarted
      and the IPOIB_MCAST_STARTED bit is cleared.  As a result, for some
      window of time, multicast packets are neither transmitted nor queued,
      but rather dropped by ipoib_mcast_send().  These dropped packets are
      painful in two cases:
      
       - bonding fail-over which both calls set_multicast_list() on the new
         active slave and sends Gratuitous ARP through that slave.
      
       - IP_DROP_MEMBERSHIP code which both calls set_multicast_list() on the
         device and issues IGMP leave.
      
      In both these cases, depending on the scheduling of the IPoIB
      multicast task, the packets would be dropped.  As a result, in the
      bonding case, the failover would not be detected by the peers until
      their neighbour entry is renewed (which takes a few tens of
      seconds).  In the IGMP case, the IP router doesn't get an IGMP leave
      and would only learn about it from further probes on the group (also a
      delay of at least a few tens of seconds).
      
      Fix this by allowing transmission (or queuing) depending on the
      IPOIB_FLAG_OPER_UP flag instead of the IPOIB_MCAST_STARTED flag.
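
      A sketch of the resulting check (illustrative, reduced to the relevant
      test; it is not the exact diff):

      void example_mcast_send(struct net_device *dev, void *mgid, struct sk_buff *skb)
      {
              struct ipoib_dev_priv *priv = netdev_priv(dev);

              /* Gate on the interface being operationally up rather than on
               * IPOIB_MCAST_STARTED, which is cleared while the multicast
               * task is being restarted by set_multicast_list(). */
              if (!test_bit(IPOIB_FLAG_OPER_UP, &priv->flags)) {
                      ++dev->stats.tx_dropped;
                      dev_kfree_skb_any(skb);
                      return;
              }

              /* ... queue the skb on the mcast group or hand it to the join path ... */
      }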
      Signed-off-by: Olga Shern <olgas@voltaire.com>
      Signed-off-by: Or Gerlitz <ogerlitz@voltaire.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  11. 26 January 2008, 2 commits
  12. 16 October 2007, 1 commit
    • IB/ipoib: Bound the net device to the ipoib_neigh structure · 732a2170
      Committed by Moni Shoua
      IPoIB uses a two-layer neighbouring scheme: for each struct neighbour
      whose device is an IPoIB one, there is a struct ipoib_neigh buddy which is
      created on demand in the tx flow by an ipoib_neigh_alloc(skb->dst->neighbour)
      call.

      When using the bonding driver, neighbours are created by the net stack on behalf
      of the bonding (master) device.  In the tx flow the bonding code gets an skb such
      that skb->dev points to the master device; it changes this skb to point to the
      slave device and calls the slave's hard_start_xmit function.

      Under this scheme, ipoib_neigh_destructor's assumption that for each struct
      neighbour it gets, n->dev is an IPoIB device (and hence netdev_priv(n->dev)
      can be cast to struct ipoib_dev_priv) no longer holds.
      
      To fix it, this patch adds a dev field to struct ipoib_neigh, which is used
      instead of the struct neighbour's dev field when n->dev->flags has the
      IFF_MASTER bit set.
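
      An illustrative sketch of the idea (the field layout and helper are
      assumptions, not the exact driver code):

      struct ipoib_neigh {
              struct neighbour  *neighbour;
              struct net_device *dev;   /* the IPoIB device this neigh belongs to */
              /* ... AH, skb queue, list linkage, etc. ... */
      };

      /* Resolve the device for priv lookups: n->dev may be a bonding master. */
      static struct net_device *example_neigh_dev(struct neighbour *n,
                                                  struct ipoib_neigh *neigh)
      {
              return (n->dev->flags & IFF_MASTER) ? neigh->dev : n->dev;
      }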
      
      Signed-off-by: Moni Shoua <monis at voltaire.com>
      Signed-off-by: Or Gerlitz <ogerlitz at voltaire.com>
      Acked-by: Roland Dreier <rdreier@cisco.com>
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
  13. 11 October 2007, 2 commits
    • [IPoIB]: Convert to netdevice internal stats · de903512
      Committed by Roland Dreier
      Use the stats member of struct net_device in IPoIB, so we can save
      memory by deleting the stats member of struct ipoib_dev_priv, and save
      code by deleting ipoib_get_stats().
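
      For example, a counter update after this change looks roughly like this
      (a sketch, not the actual diff):

      static void example_count_rx(struct net_device *dev, struct sk_buff *skb)
      {
              /* Previously: ++priv->stats.rx_packets, with the counters kept
               * in struct ipoib_dev_priv and exported via ipoib_get_stats(). */
              ++dev->stats.rx_packets;
              dev->stats.rx_bytes += skb->len;
              /* No get_stats() method is needed; the core reports dev->stats. */
      }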
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • IPoIB: Allow setting policy to ignore multicast groups · 335a64a5
      Committed by Or Gerlitz
      The kernel IB stack allows (through the RDMA CM) userspace
      applications to join and use multicast groups from the IPoIB MGID
      range.  This allows multicast traffic to be handled directly from
      userspace QPs, without going through the kernel stack, which gives
      better performance for some applications.
      
      However, to fully interoperate with IP multicast, such userspace
      applications need to participate in IGMP reports and queries, or else
      routers may not forward the multicast traffic to the system where the
      application is running.  The simplest way to do this is to share the
      kernel IGMP implementation by using the IP_ADD_MEMBERSHIP option to
      join multicast groups that are being handled directly in userspace.
      
      However, in such cases, the actual multicast traffic should not also
      be handled by the IPoIB interface, because that would burn resources
      handling multicast packets that will just be discarded in the kernel.
      
      To handle this, this patch makes IPoIB, when joining a multicast group,
      look it up in the database used for IB multicast group reference
      counting; if the group is already handled by user space, then the
      IPoIB kernel driver ignores the group.  This is controlled by
      a per-interface policy flag.  When the flag is set, IPoIB will not
      join and attach its QP to a multicast group which already has an entry
      in the database; when the flag is cleared, IPoIB will behave as before
      this change.
      
      For each IPoIB interface, the /sys/class/net/$intf/umcast attribute
      controls the policy flag.  The default value is off/0.
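
      A sketch of the join-time check implied above (the flag name and the use
      of ib_sa_get_mcmember_rec() as the database lookup are assumptions made
      for illustration):

      static int example_skip_userspace_group(struct ipoib_dev_priv *priv,
                                              union ib_gid *mgid)
      {
              struct ib_sa_mcmember_rec rec;

              if (!test_bit(IPOIB_FLAG_UMCAST, &priv->flags))
                      return 0;       /* policy off: behave as before */

              /* A hit in the SA multicast reference-counting database means
               * another consumer (e.g. a userspace QP via the RDMA CM) has
               * already joined this MGID, so IPoIB should leave it alone. */
              return ib_sa_get_mcmember_rec(priv->ca, priv->port, mgid, &rec) == 0;
      }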
      Signed-off-by: Or Gerlitz <ogerlitz@voltaire.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  14. 10 October 2007, 1 commit
  15. 22 May 2007, 1 commit
  16. 23 March 2007, 1 commit
  17. 09 March 2007, 1 commit
  18. 22 February 2007, 1 commit
  19. 17 February 2007, 1 commit
    • IB/sa: Track multicast join/leave requests · faec2f7b
      Committed by Sean Hefty
      The IB SA tracks multicast join/leave requests on a per port basis and
      does not do any reference counting: if two users of the same port join
      the same group, and one leaves that group, then the SA will remove the
      port from the group even though one user who wants to remain a member
      is left.  Therefore, in order to support multiple users of the
      same multicast group from the same port, we need to perform reference
      counting locally.
      
      To do this, add a multicast submodule to ib_sa to perform reference
      counting of multicast join/leave operations.  Modify ib_ipoib (the
      only in-kernel user of multicast) to use the new interface.
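
      The reference-counting idea, sketched with illustrative names (this is
      not the actual ib_sa multicast API):

      struct example_mcast_group {
              union ib_gid mgid;
              int          refcount;    /* local joins on this port */
      };

      static void example_join(struct example_mcast_group *group)
      {
              if (group->refcount++ == 0) {
                      /* first local user: send the real SA join for this MGID */
              }
      }

      static void example_leave(struct example_mcast_group *group)
      {
              if (--group->refcount == 0) {
                      /* last local user gone: only now send the SA leave */
              }
      }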
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  20. 11 February 2007, 1 commit
    • IPoIB: Connected mode experimental support · 839fcaba
      Committed by Michael S. Tsirkin
      The following patch adds experimental support for IPoIB connected
      mode, as defined by the draft from the IETF ipoib working group.  The
      idea is to increase performance by increasing the MTU from the maximum
      of 2K (theoretically 4K) supported by IPoIB on top of UD.  With this
      code, I'm able to get 800MByte/sec or more with netperf without
      options on a Mellanox 4x back-to-back DDR system.
      
      Some notes on code:
      1. SRQ is used for scalability to large cluster sizes
      2. Only RC connections are used (UC does not support SRQ now)
      3. Retry count is set to 0 since spec draft warns against retries
      4. Each connection is used for data transfers in only one direction, so
         each connection is either active (TX) or passive (RX).  Two sides that
         want to communicate create two connections.
      5. Each active (TX) connection has a separate CQ for send completions -
         this keeps the code simple without CQ resize and other tricks
      6. To detect stale passive side connections (where the remote side is
         down), we keep an LRU list of passive connections (updated once per
         second per connection) and destroy a connection after it has been
         unused for several seconds. The LRU rule makes it possible to avoid
         scanning connections that have recently been active.
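
      A sketch of the staleness handling in note 6 (the data structure and
      timeout are illustrative, not the driver's actual values):

      #define EXAMPLE_STALE_TIME (30 * HZ)      /* "several seconds", illustrative */

      struct example_rx_conn {
              struct list_head list;            /* position in the LRU list */
              unsigned long    last_used;       /* jiffies of last activity */
      };

      /* On activity, at most once a second, move the connection to the LRU tail. */
      static void example_touch(struct list_head *lru, struct example_rx_conn *conn)
      {
              if (time_after(jiffies, conn->last_used + HZ)) {
                      conn->last_used = jiffies;
                      list_move_tail(&conn->list, lru);
              }
      }

      /* Periodic reaper: entries at the head are the least recently used. */
      static void example_reap(struct list_head *lru)
      {
              struct example_rx_conn *conn, *tmp;

              list_for_each_entry_safe(conn, tmp, lru, list) {
                      if (!time_after(jiffies, conn->last_used + EXAMPLE_STALE_TIME))
                              break;    /* the rest of the list is younger */
                      /* ... tear down this stale passive (RX) connection ... */
              }
      }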
      Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  21. 30 November 2006, 1 commit
  22. 22 November 2006, 1 commit
  23. 23 September 2006, 3 commits
  24. 15 September 2006, 1 commit
  25. 25 July 2006, 1 commit
  26. 27 June 2006, 1 commit
  27. 18 June 2006, 2 commits
    • [NET]: Add netif_tx_lock · 932ff279
      Committed by Herbert Xu
      Various drivers use xmit_lock internally to synchronise with their
      transmission routines.  They do so without setting xmit_lock_owner.
      This is fine as long as netpoll is not in use.
      
      With netpoll it is possible for deadlocks to occur if xmit_lock_owner
      isn't set.  This is because a printk that occurs while xmit_lock is held
      and xmit_lock_owner is not set can cause netpoll to attempt to take
      xmit_lock recursively.
      
      While it is possible to resolve this by getting netpoll to use
      trylock, it is suboptimal because netpoll's sole objective is to
      maximise the chance of getting the printk out on the wire.  So
      delaying or dropping the message is to be avoided as much as possible.
      
      So the only alternative is to always set xmit_lock_owner.  The
      following patch does this by introducing the netif_tx_lock family of
      functions that take care of setting/unsetting xmit_lock_owner.
      
      I renamed xmit_lock to _xmit_lock to indicate that it should not be
      used directly.  I didn't provide irq versions of the netif_tx_lock
      functions since xmit_lock is meant to be a BH-disabling lock.
      
      This is pretty much a straight text substitution except for a small
      bug fix in winbond.  It currently uses
      netif_stop_queue/spin_unlock_wait to stop transmission.  This is
      unsafe as an IRQ can potentially wake up the queue.  So it is safer to
      use netif_tx_disable.
      
      The hamradio bits used spin_lock_irq, but that is unnecessary as
      xmit_lock must never be taken in an IRQ handler.
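
      In essence, the new helpers pair taking _xmit_lock with setting the
      owner (a simplified sketch of the behaviour described above, not the
      exact kernel implementation):

      static inline void example_netif_tx_lock(struct net_device *dev)
      {
              spin_lock(&dev->_xmit_lock);
              dev->xmit_lock_owner = smp_processor_id();
      }

      static inline void example_netif_tx_unlock(struct net_device *dev)
      {
              dev->xmit_lock_owner = -1;
              spin_unlock(&dev->_xmit_lock);
      }

      Netpoll can then compare xmit_lock_owner against the current CPU and
      avoid taking the lock recursively.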
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • IPoIB: Fix kernel unaligned access on ia64 · 37c22a77
      Committed by Jack Morgenstein
      Fix misaligned access faults on ia64: never cast a misaligned
      neighbour->ha + 4 pointer to union ib_gid type; pass a void * pointer
      instead.  The memcpy was being optimized to use full word accesses
      because the compiler thought that union ib_gid is always aligned.
      
      The cast in IPOIB_GID_ARG is safe, since it is fixed to access each
      byte separately.
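
      A sketch of the safe pattern (the helper name is illustrative):

      /* Take the hardware address as an untyped byte pointer; memcpy() then
       * makes no alignment assumptions, unlike a dereference through a
       * union ib_gid pointer. */
      static void example_copy_dgid(union ib_gid *dgid, const void *ha_gid)
      {
              memcpy(dgid->raw, ha_gid, sizeof(union ib_gid));
      }

      /* caller: example_copy_dgid(&gid, neigh->ha + 4);  (ha + 4 may be
       * misaligned on ia64, but that is now harmless) */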
      Signed-off-by: Jack Morgenstein <jackm@mellanox.co.il>
      Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  28. 11 April 2006, 1 commit