1. 09 Dec, 2017 (1 commit)
  2. 02 Nov, 2017 (1 commit)
  3. 01 Nov, 2017 (4 commits)
  4. 21 Oct, 2017 (2 commits)
  5. 20 Oct, 2017 (2 commits)
  6. 15 Oct, 2017 (2 commits)
  7. 04 Oct, 2017 (1 commit)
  8. 26 Sep, 2017 (1 commit)
  9. 23 Sep, 2017 (3 commits)
  10. 22 Sep, 2017 (1 commit)
  11. 21 Aug, 2017 (1 commit)
  12. 19 Aug, 2017 (1 commit)
  13. 15 Aug, 2017 (1 commit)
  14. 12 Aug, 2017 (1 commit)
  15. 08 Aug, 2017 (4 commits)
  16. 21 Jul, 2017 (1 commit)
  17. 12 Jul, 2017 (1 commit)
    • cxgb4: fix BUG() on interrupt deallocating path of ULD · 6a146f3a
      Committed by Guilherme G. Piccoli
      Since the introduction of ULD (Upper-Layer Drivers), the MSI-X
      deallocating path changed in cxgb4: the driver frees the ULD
      interrupts when unregistering the ULD or in the PCI shutdown
      handler.
      
      The problem is that if an MSI-X interrupt is not freed before it is
      deallocated in the PCI layer, a BUG() is triggered because a still
      "alive" interrupt is being quiesced.
      
      The below trace was observed when doing a simple unbind of Chelsio's
      adapter PCI function, like:
        "echo 001e:80:00.4 > /sys/bus/pci/drivers/cxgb4/unbind"
      
      Trace:
      
        kernel BUG at drivers/pci/msi.c:352!
        Oops: Exception in kernel mode, sig: 5 [#1]
        ...
        NIP [c0000000005a5e60] free_msi_irqs+0xa0/0x250
        LR [c0000000005a5e50] free_msi_irqs+0x90/0x250
        Call Trace:
        [c0000000005a5e50] free_msi_irqs+0x90/0x250 (unreliable)
        [c0000000005a72c4] pci_disable_msix+0x124/0x180
        [d000000011e06708] disable_msi+0x88/0xb0 [cxgb4]
        [d000000011e06948] free_some_resources+0xa8/0x160 [cxgb4]
        [d000000011e06d60] remove_one+0x170/0x3c0 [cxgb4]
        [c00000000058a910] pci_device_remove+0x70/0x110
        [c00000000064ef04] device_release_driver_internal+0x1f4/0x2c0
        ...
      
      This patch fixes the issue by refactoring the ULD shutdown path in
      the cxgb4 driver so that interrupts are also properly freed and
      disabled in the PCI remove handler.
      
      Fixes: 0fbc81b3 ("Allocate resources dynamically for all cxgb4 ULD's")
      Reported-by: Harsha Thyagaraja <hathyaga@in.ibm.com>
      Signed-off-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6a146f3a
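      The ordering constraint behind the BUG() can be sketched as follows
      (hypothetical driver code, not the actual cxgb4 implementation):
      every requested MSI-X vector must be released with free_irq()
      before pci_disable_msix() runs, because the PCI core refuses to
      tear down a vector that still has a handler attached.

          static void example_remove_one(struct pci_dev *pdev)
          {
                  struct example_adapter *adap = pci_get_drvdata(pdev);
                  int i;

                  /* Free every requested MSI-X vector first ... */
                  for (i = 0; i < adap->msix_count; i++)
                          free_irq(adap->msix_entries[i].vector,
                                   &adap->irq_info[i]);

                  /* ... only then hand the vectors back to the PCI core. */
                  pci_disable_msix(pdev);
          }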
  18. 05 Jul, 2017 (2 commits)
  19. 24 Jun, 2017 (1 commit)
    • cxgb4: Update T6 Buffer Group and Channel Mappings · 193c4c28
      Committed by Arjun Vynipadath
      We were using t4_get_mps_bg_map() both in t4_get_port_stats(), to
      determine which MPS Buffer Groups to report statistics on for a given
      Port, and in t4_sge_alloc_rxq(), to provide a TP Ingress Channel
      Congestion Map.  For T4/T5 these are actually the same values (because
      they are ~somewhat~ related), but for T6 they should return different
      values (T6 has Port 0 associated with MPS Buffer Group 0 (with MPS
      Buffer Group 1 silently cascading off) and Port 1 associated with MPS
      Buffer Group 2 (with 3 cascading off)).
      
      Based on the original work by Casey Leedom <leedom@chelsio.com>
      Signed-off-by: Arjun Vynipadath <arjun@chelsio.com>
      Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      193c4c28
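      For illustration, a minimal sketch of the split the patch implies
      (assumed helper name and simplified return values, not the actual
      Chelsio code): on T6 the buffer-group mask pairs groups {0,1} with
      Port 0 and {2,3} with Port 1, while a simplified T4/T5 case keeps
      a single per-port value.

          static unsigned int example_mps_bg_map(unsigned int chip_ver,
                                                 unsigned int pidx)
          {
                  if (chip_ver >= 6)                    /* T6 and later */
                          return pidx == 0 ? 0x3 : 0xc; /* BGs {0,1} / {2,3} */
                  return 1 << pidx;                     /* simplified T4/T5 */
          }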
  20. 19 Jun, 2017 (3 commits)
    • cxgb4: notify uP to route ctrlq compl to rdma rspq · dec6b331
      Committed by Raju Rangoju
      During module initialisation there is a possible race between the
      ULD and the LLD in which neither notifies the uP about where to
      route the ctrl queue completions. The LLD skips notifying the uP
      because the rdma queues have not been created yet (it leaves that
      to the ULD). When the ULD comes up, it also skips notifying the uP
      because the FULL_INIT_DONE flag is not set yet (the ULD assumes
      the interface is not up yet).

      Consequently, this race between the ULD and the LLD leaves the uP
      unnotified about where to send the ctrl queue completions, leading
      to iwarp RI_RES WR failures.
      
      Here is the race:
      
      CPU 0                                   CPU 1

      - allocates nic rx queues
      - t4_sge_alloc_ctrl_txq()
      (if rdma rsp queues exist,
      tell uP to route ctrl queue
      compl to rdma rspq)
                                      - acquires the mutex_lock
                                      - allocates rdma response queues
                                      - if FULL_INIT_DONE set,
                                        tell uP to route ctrl queue compl
                                        to rdma rspq
                                      - relinquishes mutex_lock
      - acquires the mutex_lock
      - enable_rx()
      - set FULL_INIT_DONE
      - relinquishes mutex_lock
      
      This patch fixes the above issue.
      
      Fixes: e7519f99 ("cxgb4: avoid enabling napi twice to the same queue")
      Signed-off-by: Raju Rangoju <rajur@chelsio.com>
      Acked-by: Steve Wise <swise@opengridcomputing.com>
      CC: Stable <stable@vger.kernel.org> # 4.9+
      Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      dec6b331
    • cxgb4: notify uP to route ctrlq compl to rdma rspq · 91060381
      Committed by Raju Rangoju
      During module initialisation there is a possible race between the
      ULD and the LLD in which neither notifies the uP about where to
      route the ctrl queue completions. The LLD skips notifying the uP
      because the rdma queues have not been created yet (it leaves that
      to the ULD). When the ULD comes up, it also skips notifying the uP
      because the FULL_INIT_DONE flag is not set yet (the ULD assumes
      the interface is not up yet).

      Consequently, this race between the ULD and the LLD leaves the uP
      unnotified about where to send the ctrl queue completions, leading
      to iwarp RI_RES WR failures.
      
      Here is the race:
      
      CPU 0                                   CPU 1

      - allocates nic rx queues
      - t4_sge_alloc_ctrl_txq()
      (if rdma rsp queues exist,
      tell uP to route ctrl queue
      compl to rdma rspq)
                                      - acquires the mutex_lock
                                      - allocates rdma response queues
                                      - if FULL_INIT_DONE set,
                                        tell uP to route ctrl queue compl
                                        to rdma rspq
                                      - relinquishes mutex_lock
      - acquires the mutex_lock
      - enable_rx()
      - set FULL_INIT_DONE
      - relinquishes mutex_lock
      
      This patch fixes the above issue.
      
      Fixes: e7519f99 ("cxgb4: avoid enabling napi twice to the same queue")
      Signed-off-by: Raju Rangoju <rajur@chelsio.com>
      Acked-by: Steve Wise <swise@opengridcomputing.com>
      CC: Stable <stable@vger.kernel.org> # 4.9+
      Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      91060381
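      The shape of the fix can be sketched as below (hypothetical adapter
      fields and notify helper; the real driver logic differs in detail):
      both initialisation paths run the check-and-notify step under the
      same mutex, so whichever side finishes second sees the other's
      state and performs the uP notification instead of both sides
      skipping it.

          static void example_notify_up_if_ready(struct example_adapter *adap)
          {
                  mutex_lock(&adap->uld_mutex);
                  /* Called from both the LLD and ULD paths: notify only
                   * once both the rdma rspq and the nic side are ready.
                   */
                  if (adap->rdma_rspq_ready && adap->nic_rx_ready)
                          example_route_ctrlq_compl(adap); /* hypothetical */
                  mutex_unlock(&adap->uld_mutex);
          }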
    • cxgb4: fix a NULL dereference · d427caee
      Committed by Ganesh Goudar
      Avoid a NULL dereference in setup_sge_queues() when the adapter is
      in non-offload mode.
      
      Fixes: 0fbc81b3 ("chcr/cxgb4i/cxgbit/RDMA/cxgb4: Allocate resources dynamically for all cxgb4 ULD's")
      Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d427caee
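      A minimal sketch of such a guard (assumed field and helper names;
      the actual fix in setup_sge_queues() may differ):

          static void example_setup_sge_queues(struct example_adapter *adap)
          {
                  /* In non-offload mode the ULD rxq state is never
                   * allocated, so bail out instead of dereferencing NULL.
                   */
                  if (!adap->uld_rxq_info)
                          return;

                  example_setup_uld_rxqs(adap->uld_rxq_info); /* hypothetical */
          }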
  21. 16 Jun, 2017 (1 commit)
    • networking: make skb_put & friends return void pointers · 4df864c1
      Committed by Johannes Berg
      It seems like a historic accident that these return unsigned char *,
      and in many places that means casts are required, more often than not.
      
      Make these functions (skb_put, __skb_put and pskb_put) return void *
      and remove all the casts across the tree, adding a (u8 *) cast only
      where the unsigned char pointer was used directly, all done with the
      following spatch:
      
          @@
          expression SKB, LEN;
          typedef u8;
          identifier fn = { skb_put, __skb_put };
          @@
          - *(fn(SKB, LEN))
          + *(u8 *)fn(SKB, LEN)
      
          @@
          expression E, SKB, LEN;
          identifier fn = { skb_put, __skb_put };
          type T;
          @@
          - E = ((T *)(fn(SKB, LEN)))
          + E = fn(SKB, LEN)
      
      which actually doesn't cover pskb_put since there are only three
      users overall.
      
      A handful of stragglers were converted manually, notably a macro in
      drivers/isdn/i4l/isdn_bsdcomp.c and, oddly enough, one of the many
      instances in net/bluetooth/hci_sock.c. In the former file, I also
      had to fix one whitespace problem spatch introduced.
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4df864c1
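      The practical effect is easiest to see in a short before/after
      sketch (struct foo_hdr is a hypothetical example header):

          /* Before: skb_put() returned unsigned char *, forcing a cast. */
          struct foo_hdr *hdr = (struct foo_hdr *)skb_put(skb, sizeof(*hdr));

          /* After: the void * return converts implicitly, no cast needed. */
          struct foo_hdr *hdr = skb_put(skb, sizeof(*hdr));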
  22. 10 Jun, 2017 (1 commit)
  23. 08 Jun, 2017 (3 commits)
    • net: propagate tc filter chain index down the ndo_setup_tc call · a5fcf8a6
      Committed by Jiri Pirko
      We need to push the chain index down to the drivers, so they know
      which chain a rule belongs to. For now, no driver supports multichain
      offload, so only chain 0 is supported. This is needed to prevent
      chain squashes during offload for now. Later this will be used to
      implement multichain offload.
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a5fcf8a6
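      The driver-side pattern this enables can be sketched as follows
      (hypothetical driver; the callback signature is assumed to match
      the one extended by this patch): a driver without multichain
      offload support refuses any chain other than 0 instead of silently
      squashing it onto chain 0.

          static int example_setup_tc(struct net_device *dev, u32 handle,
                                      u32 chain_index, __be16 proto,
                                      struct tc_to_netdev *tc)
          {
                  if (chain_index)
                          return -EOPNOTSUPP;

                  /* ... hardware-specific classifier offload setup ... */
                  return 0;
          }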
    • net: Fix inconsistent teardown and release of private netdev state. · cf124db5
      Committed by David S. Miller
      Network devices can allocate resources and private memory using
      netdev_ops->ndo_init().  However, the release of these resources
      can occur in one of two different places: either
      netdev_ops->ndo_uninit() or netdev->destructor().
      
      The decision of which operation frees the resources depends upon
      whether it is necessary for all netdev refs to be released before it
      is safe to perform the freeing.
      
      netdev_ops->ndo_uninit() presumably can occur right after the
      NETDEV_UNREGISTER notifier completes and the unicast and multicast
      address lists are flushed.
      
      netdev->destructor(), on the other hand, does not run until the
      netdev references all go away.
      
      Further complicating the situation is that netdev->destructor()
      almost universally also does a free_netdev().

      This creates a problem for the logic in register_netdevice(),
      because all callers of register_netdevice() manage the freeing
      of the netdev themselves, and invoke free_netdev(dev) if
      register_netdevice() fails.
      
      If netdev_ops->ndo_init() succeeds, but something else fails inside
      of register_netdevice(), it does call netdev_ops->ndo_uninit().  But
      it is not able to invoke netdev->destructor().
      
      This is because netdev->destructor() will do a free_netdev() and
      then the caller of register_netdevice() will do the same.
      
      However, this means that the resources that would normally be released
      by netdev->destructor() will not be.
      
      Over the years drivers have added local hacks to deal with this, by
      invoking their destructor parts by hand when register_netdevice()
      fails.
      
      Many drivers do not try to deal with this, and instead we have leaks.
      
      Let's close this hole by formalizing the distinction between what
      private things need to be freed up by netdev->destructor() and whether
      the driver needs unregister_netdevice() to perform the free_netdev().
      
      netdev->priv_destructor() performs all actions to free up the private
      resources that used to be freed by netdev->destructor(), except for
      free_netdev().
      
      netdev->needs_free_netdev is a boolean that indicates whether
      free_netdev() should be done at the end of unregister_netdevice().
      
      Now, register_netdevice() can sanely release all resources after
      netdev_ops->ndo_init() succeeds, by invoking both
      netdev_ops->ndo_uninit() and netdev->priv_destructor().
      
      And at the end of unregister_netdevice(), we invoke
      netdev->priv_destructor() and optionally call free_netdev().
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cf124db5
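      A sketch of a hypothetical driver adopting the new split: private
      teardown moves into priv_destructor(), and needs_free_netdev tells
      the core to call free_netdev() itself at the right time.

          static void example_dev_free(struct net_device *dev)
          {
                  struct example_priv *priv = netdev_priv(dev);

                  free_percpu(priv->pcpu_stats); /* private resources only */
                  /* Note: no free_netdev(dev) here any more. */
          }

          static void example_setup(struct net_device *dev)
          {
                  dev->priv_destructor = example_dev_free;
                  dev->needs_free_netdev = true;
          }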
    • cxgb4: Fix tids count for ipv6 offload connection · 1dec4cec
      Committed by Ganesh Goudar
      The adapter consumes two tids for every ipv6 offload connection,
      be it active or passive; calculate the tid usage count accordingly.

      Also change the signatures of the relevant functions to take the
      address family.
      Signed-off-by: Rizwan Ansari <rizwana@chelsio.com>
      Signed-off-by: Varun Prakash <varun@chelsio.com>
      Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1dec4cec
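      A hypothetical accounting sketch of the rule described above: an
      ipv6 offload connection holds two tids, an ipv4 connection one.

          static unsigned int example_tids_per_conn(int family)
          {
                  return family == PF_INET6 ? 2 : 1;
          }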
  24. 07 Jun, 2017 (1 commit)