1. 30 March 2015, 14 commits
    • net: smc91x: make use of 4th parameter to devm_gpiod_get_index · cb6e0b36
      Committed by Uwe Kleine-König
      Since 39b2bbe3 ("gpio: add flags argument to gpiod_get*() functions"),
      which appeared in v3.17-rc1, the gpiod_get* functions take an additional
      parameter that allows specifying the direction and the initial value for
      outputs. Simplify accordingly.

      Moreover, use devm_gpiod_get_index_optional() for still simpler handling.
      Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
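      As a sketch of the simplification (the "power" con_id and the error
      handling here are illustrative, not the driver's actual call sites):

        /* Before 39b2bbe3: fetch the descriptor, then configure it. */
        gpio = devm_gpiod_get_index(dev, "power", 0);
        if (!IS_ERR(gpio))
                gpiod_direction_output(gpio, 0);

        /* After: the 4th parameter carries direction and initial value,
         * and the _optional variant returns NULL when no GPIO is wired
         * up, removing the -ENOENT special case. */
        gpio = devm_gpiod_get_index_optional(dev, "power", 0, GPIOD_OUT_LOW);
        if (IS_ERR(gpio))
                return PTR_ERR(gpio);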
    • hv_netvsc: Implement batching in send buffer · 7c3877f2
      Committed by Haiyang Zhang
      With this patch, we can send out multiple RNDIS data packets in one
      send-buffer slot and one VMBus message, which reduces the overhead
      associated with VMBus messages.
      Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
      Reviewed-by: K. Y. Srinivasan <kys@microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
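      The idea, as a rough sketch (hypothetical structure and names, not the
      driver's actual code):

        /* Stage RNDIS packets into one send-buffer slot; a single VMBus
         * message then describes all staged packets instead of one
         * message per packet. */
        struct slot_state {
                u32 used;        /* bytes already staged in the slot */
                u32 capacity;    /* total slot size */
                u32 pkt_count;   /* packets staged so far */
        };

        static bool stage_packet(struct slot_state *s, u32 pkt_len)
        {
                if (s->used + pkt_len > s->capacity)
                        return false;  /* slot full: flush, then retry */
                /* ...copy the packet into the slot at offset s->used... */
                s->used += pkt_len;
                s->pkt_count++;
                return true;
        }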
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next · 4ef295e0
      Committed by David S. Miller
      Pablo Neira Ayuso says:
      
      ====================
      Netfilter updates for net-next
      
      The following patchset contains Netfilter updates for your net-next tree.
      Basically, nf_tables updates to add the set extension infrastructure and
      to finish the transaction support for sets, from Patrick McHardy. More
      specifically, they are:
      
      1) Move netns to basechain and use recently added possible_net_t, from
         Patrick McHardy.
      
      2) Use LOGLEVEL_<FOO> from nf_log infrastructure, from Joe Perches.
      
      3) Restore nf_log_trace that was accidentally removed during conflict
         resolution.
      
      4) nft_queue does not depend on NETFILTER_XTABLES; from here on, all
         patches are from Patrick McHardy.
      
      5) Use raw_smp_processor_id() in nft_meta.
      
      Then, several patches to prepare the ground for the new set extension
      infrastructure:
      
      6) Pass the object length to the hash callback in rhashtable, as needed
         by the new set extension infrastructure (see the sketch after this
         message).
      
      7) Cleanup patch to restore struct nft_hash as a wrapper for struct
         rhashtable.
      
      8) Another small source code readability cleanup for nft_hash.
      
      9) Convert nft_hash to rhashtable callbacks.
      
      And finally...
      
      10) Add the new set extension infrastructure.
      
      11) Convert the nft_hash and nft_rbtree sets to use it.
      
      12) Batch set element release to avoid several RCU grace periods in a
          row, and add a new function nft_set_elem_destroy() to consolidate
          set element release.
      
      13) Return the set extension data area from nft_lookup.
      
      14) Refactor existing transaction code to add some helper functions
          and document it.
      
      15) Complete the set transaction support, using a similar approach to
          what we already use, to activate/deactivate elements in an atomic
          fashion.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
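      As referenced in (6) above, a sketch of an object hash callback that
      makes use of the length now passed in (the element struct and names
      are illustrative, not the nft_hash code):

        #include <linux/jhash.h>
        #include <linux/rhashtable.h>

        struct my_elem {
                struct rhash_head node;
                u32 keylen;             /* variable-length key */
                u8 key[];
        };

        /* The callback now receives the object length, so variable-size
         * keys can be hashed without a fixed key_len in the params. */
        static u32 my_obj_hashfn(const void *data, u32 len, u32 seed)
        {
                const struct my_elem *e = data;

                return jhash(e->key, e->keylen, seed);
        }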
    • Merge branch 'tipc-next' · ae7633c8
      Committed by David S. Miller
      Ying Xue says:
      
      ====================
      tipc: fix two corner-case issues
      
      The patch set aims at resolving the following two critical issues:
      
      Patch #1: Resolve a deadlock that happens when all links are reset
      Patch #2: Correct a mistaken usage of the RCU lock that protects the
                node list
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tipc: involve reference counter for node structure · 8a0f6ebe
      Committed by Ying Xue
      The TIPC node hash table is protected by an RCU lock on the read side.
      tipc_node_find() is used to look up a node object by node address,
      iterating over the hash table. As the entire traversal performed by
      tipc_node_find() is guarded by the RCU read lock, that part is safe.
      However, when callers use the node object returned by tipc_node_find(),
      no RCU read lock is held, so this is unsafe for the callers of
      tipc_node_find().

      We now introduce a reference counter for the node structure. Before
      tipc_node_find() returns a node object to its caller, it first increases
      the reference counter. Accordingly, once the caller is done with the
      object, it decreases the counter again. This prevents a node still being
      used by one thread from being freed by another thread.
      Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
      Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
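      In outline, the pattern looks like this (condensed; the real table,
      locking, and put path live in net/tipc/node.c):

        struct tipc_node {
                u32 addr;
                struct kref kref;
                struct hlist_node hash;
        };

        static struct hlist_head node_htable[256];      /* condensed */

        struct tipc_node *tipc_node_find(u32 addr)
        {
                struct tipc_node *node;

                rcu_read_lock();
                hlist_for_each_entry_rcu(node, &node_htable[addr & 0xff],
                                         hash) {
                        if (node->addr == addr) {
                                /* pin the node before returning it */
                                kref_get(&node->kref);
                                rcu_read_unlock();
                                return node;
                        }
                }
                rcu_read_unlock();
                return NULL;
        }

        /* Callers release with a put helper that kref_put()s and frees
         * the node once the last reference is gone. */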
    • tipc: fix potential deadlock when all links are reset · b952b2be
      Committed by Ying Xue
      [   60.988363] ======================================================
      [   60.988754] [ INFO: possible circular locking dependency detected ]
      [   60.989152] 3.19.0+ #194 Not tainted
      [   60.989377] -------------------------------------------------------
      [   60.989781] swapper/3/0 is trying to acquire lock:
      [   60.990079]  (&(&n_ptr->lock)->rlock){+.-...}, at: [<ffffffffa0006dca>] tipc_link_retransmit+0x1aa/0x240 [tipc]
      [   60.990743]
      [   60.990743] but task is already holding lock:
      [   60.991106]  (&(&bclink->lock)->rlock){+.-...}, at: [<ffffffffa00004be>] tipc_bclink_lock+0x8e/0xa0 [tipc]
      [   60.991738]
      [   60.991738] which lock already depends on the new lock.
      [   60.991738]
      [   60.992174]
      [   60.992174] the existing dependency chain (in reverse order) is:
      [   60.992174]
      -> #1 (&(&bclink->lock)->rlock){+.-...}:
      [   60.992174]        [<ffffffff810a9c0c>] lock_acquire+0x9c/0x140
      [   60.992174]        [<ffffffff8179c41f>] _raw_spin_lock_bh+0x3f/0x50
      [   60.992174]        [<ffffffffa00004be>] tipc_bclink_lock+0x8e/0xa0 [tipc]
      [   60.992174]        [<ffffffffa0000f57>] tipc_bclink_add_node+0x97/0xf0 [tipc]
      [   60.992174]        [<ffffffffa0011815>] tipc_node_link_up+0xf5/0x110 [tipc]
      [   60.992174]        [<ffffffffa0007783>] link_state_event+0x2b3/0x4f0 [tipc]
      [   60.992174]        [<ffffffffa00193c0>] tipc_link_proto_rcv+0x24c/0x418 [tipc]
      [   60.992174]        [<ffffffffa0008857>] tipc_rcv+0x827/0xac0 [tipc]
      [   60.992174]        [<ffffffffa0002ca3>] tipc_l2_rcv_msg+0x73/0xd0 [tipc]
      [   60.992174]        [<ffffffff81646e66>] __netif_receive_skb_core+0x746/0x980
      [   60.992174]        [<ffffffff816470c1>] __netif_receive_skb+0x21/0x70
      [   60.992174]        [<ffffffff81647295>] netif_receive_skb_internal+0x35/0x130
      [   60.992174]        [<ffffffff81648218>] napi_gro_receive+0x158/0x1d0
      [   60.992174]        [<ffffffff81559e05>] e1000_clean_rx_irq+0x155/0x490
      [   60.992174]        [<ffffffff8155c1b7>] e1000_clean+0x267/0x990
      [   60.992174]        [<ffffffff81647b60>] net_rx_action+0x150/0x360
      [   60.992174]        [<ffffffff8105ec43>] __do_softirq+0x123/0x360
      [   60.992174]        [<ffffffff8105f12e>] irq_exit+0x8e/0xb0
      [   60.992174]        [<ffffffff8179f9f5>] do_IRQ+0x65/0x110
      [   60.992174]        [<ffffffff8179da6f>] ret_from_intr+0x0/0x13
      [   60.992174]        [<ffffffff8100de9f>] arch_cpu_idle+0xf/0x20
      [   60.992174]        [<ffffffff8109dfa6>] cpu_startup_entry+0x2f6/0x3f0
      [   60.992174]        [<ffffffff81033cda>] start_secondary+0x13a/0x150
      [   60.992174]
      -> #0 (&(&n_ptr->lock)->rlock){+.-...}:
      [   60.992174]        [<ffffffff810a8f7d>] __lock_acquire+0x163d/0x1ca0
      [   60.992174]        [<ffffffff810a9c0c>] lock_acquire+0x9c/0x140
      [   60.992174]        [<ffffffff8179c41f>] _raw_spin_lock_bh+0x3f/0x50
      [   60.992174]        [<ffffffffa0006dca>] tipc_link_retransmit+0x1aa/0x240 [tipc]
      [   60.992174]        [<ffffffffa0001e11>] tipc_bclink_rcv+0x611/0x640 [tipc]
      [   60.992174]        [<ffffffffa0008646>] tipc_rcv+0x616/0xac0 [tipc]
      [   60.992174]        [<ffffffffa0002ca3>] tipc_l2_rcv_msg+0x73/0xd0 [tipc]
      [   60.992174]        [<ffffffff81646e66>] __netif_receive_skb_core+0x746/0x980
      [   60.992174]        [<ffffffff816470c1>] __netif_receive_skb+0x21/0x70
      [   60.992174]        [<ffffffff81647295>] netif_receive_skb_internal+0x35/0x130
      [   60.992174]        [<ffffffff81648218>] napi_gro_receive+0x158/0x1d0
      [   60.992174]        [<ffffffff81559e05>] e1000_clean_rx_irq+0x155/0x490
      [   60.992174]        [<ffffffff8155c1b7>] e1000_clean+0x267/0x990
      [   60.992174]        [<ffffffff81647b60>] net_rx_action+0x150/0x360
      [   60.992174]        [<ffffffff8105ec43>] __do_softirq+0x123/0x360
      [   60.992174]        [<ffffffff8105f12e>] irq_exit+0x8e/0xb0
      [   60.992174]        [<ffffffff8179f9f5>] do_IRQ+0x65/0x110
      [   60.992174]        [<ffffffff8179da6f>] ret_from_intr+0x0/0x13
      [   60.992174]        [<ffffffff8100de9f>] arch_cpu_idle+0xf/0x20
      [   60.992174]        [<ffffffff8109dfa6>] cpu_startup_entry+0x2f6/0x3f0
      [   60.992174]        [<ffffffff81033cda>] start_secondary+0x13a/0x150
      [   60.992174]
      [   60.992174] other info that might help us debug this:
      [   60.992174]
      [   60.992174]  Possible unsafe locking scenario:
      [   60.992174]
      [   60.992174]        CPU0                    CPU1
      [   60.992174]        ----                    ----
      [   60.992174]   lock(&(&bclink->lock)->rlock);
      [   60.992174]                                lock(&(&n_ptr->lock)->rlock);
      [   60.992174]                                lock(&(&bclink->lock)->rlock);
      [   60.992174]   lock(&(&n_ptr->lock)->rlock);
      [   60.992174]
      [   60.992174]  *** DEADLOCK ***
      [   60.992174]
      [   60.992174] 3 locks held by swapper/3/0:
      [   60.992174]  #0:  (rcu_read_lock){......}, at: [<ffffffff81646791>] __netif_receive_skb_core+0x71/0x980
      [   60.992174]  #1:  (rcu_read_lock){......}, at: [<ffffffffa0002c35>] tipc_l2_rcv_msg+0x5/0xd0 [tipc]
      [   60.992174]  #2:  (&(&bclink->lock)->rlock){+.-...}, at: [<ffffffffa00004be>] tipc_bclink_lock+0x8e/0xa0 [tipc]
      [   60.992174]
      
      The correct sequence for grabbing n_ptr->lock and bclink->lock is to
      hold the former first and then take the latter, which is exactly what
      happens on CPU1. However, when retransmission on the broadcast link
      fails, bclink->lock is first held in tipc_bclink_rcv(), and n_ptr->lock
      is then taken in link_retransmit_failure(), called from
      tipc_link_retransmit(), as demonstrated on CPU0. As a result, a deadlock
      can occur.

      If the order in which CPU0 takes the two locks is reversed, the deadlock
      risk is removed. Therefore, the node lock originally taken in
      link_retransmit_failure() is moved to tipc_bclink_rcv() so that it is
      obtained before the bclink lock. A precondition for this adjustment is
      that responding to the bclink reset event must be moved from
      tipc_bclink_unlock() to tipc_node_unlock().
      Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
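      Schematically, the broadcast receive path now takes the locks in the
      same order as CPU1 (a hypothetical wrapper; the real function bodies
      differ):

        static void bclink_rcv_sketch(struct net *net,
                                      struct tipc_node *n_ptr)
        {
                tipc_node_lock(n_ptr);   /* was taken later, inside
                                          * link_retransmit_failure() */
                tipc_bclink_lock(net);
                /* ...broadcast receive/retransmit processing... */
                tipc_bclink_unlock(net); /* no longer handles the reset
                                          * event */
                tipc_node_unlock(n_ptr); /* bclink reset is responded to
                                          * here now */
        }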
    • virtio: simplify the using of received in virtnet_poll · faadb05f
      Committed by Li RongQing
      received starts at 0, so there is no need to subtract it from the budget
      or to use "+=" to reassign it.
      Signed-off-by: Li RongQing <roy.qing.li@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
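      The simplification in virtnet_poll() amounts to (before/after sketch):

        /* Before: received starts at 0, so subtracting it from the
         * budget and reassigning with "+=" are both no-ops. */
        unsigned int received = 0;
        received += virtnet_receive(rq, budget - received);

        /* After: assign the result directly. */
        unsigned int received = virtnet_receive(rq, budget);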
    • Merge branch 'be2net-next' · 3556eaaa
      Committed by David S. Miller
      Sathya Perla says:
      
      ====================
      be2net: patch set
      
      Hi David, this patch set includes two feature additions to the be2net
      driver:

      Patch 1 sets up CPU affinity hints for be2net IRQs using the
      cpumask_set_cpu_local_first() API, which first picks the near NUMA cores
      and, when they are exhausted, selects the far NUMA cores.

      Patch 2 sets up XPS queue mapping for be2net's TXQs to avoid TX lock
      contention by default.

      Patch 3 just bumps up the driver version.

      Please consider applying this patch set to the net-next queue. Thanks!
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • 265ec927
    • be2net: setup xps queue mapping · 73f394e6
      Committed by Sathya Perla
      This patch sets up XPS queue mapping on load, so that TX traffic is
      steered to the queue whose IRQs are being processed by the current CPU.
      This helps avoid TX lock contention.
      Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
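      A sketch of the mechanism via netif_set_xps_queue() (the queue-to-CPU
      mapping and the helper name are simplified; the driver derives the
      mask from each queue's IRQ affinity):

        static void be_setup_xps_sketch(struct net_device *netdev,
                                        int num_txqs)
        {
                int i;

                for (i = 0; i < num_txqs; i++) {
                        /* Simplified: steer TXQ i to CPU i. */
                        const struct cpumask *mask =
                                cpumask_of(i % nr_cpu_ids);

                        netif_set_xps_queue(netdev, mask, i);
                }
        }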
    • be2net: assign CPU affinity hints to be2net IRQs · d658d98a
      Committed by Padmanabh Ratnakar
      This patch provides hints to irqbalance to map be2net IRQs to specific
      CPU cores. cpumask_set_cpu_local_first() is used, which first maps IRQs
      to near NUMA cores; when those cores are exhausted, IRQs are mapped to
      far NUMA cores.
      Signed-off-by: Padmanabh Ratnakar <padmanabh.ratnakar@emulex.com>
      Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
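      In outline (a hypothetical helper; the real code keeps one mask per
      event queue in the adapter structure and clears the hints on teardown):

        static int be_set_irq_hints_sketch(int num_vec, int numa_node,
                                           unsigned int *irqs,
                                           cpumask_var_t *masks)
        {
                int i;

                for (i = 0; i < num_vec; i++) {
                        if (!zalloc_cpumask_var(&masks[i], GFP_KERNEL))
                                return -ENOMEM;
                        /* Pick near-NUMA CPUs first; far ones only when
                         * the local node's CPUs are exhausted. */
                        cpumask_set_cpu_local_first(i, numa_node, masks[i]);
                        /* The mask must stay allocated while the hint
                         * is set. */
                        irq_set_affinity_hint(irqs[i], masks[i]);
                }
                return 0;
        }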
    • tcp: tcp_syn_flood_action() can be static · 41d25fe0
      Committed by Eric Dumazet
      After commit 1fb6f159 ("tcp: add tcp_conn_request"),
      tcp_syn_flood_action() is no longer used from IPv6.

      We can make it static by moving it above tcp_conn_request().
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reviewed-by: Octavian Purdila <octavian.purdila@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
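      The change is mechanical; a generic illustration of the pattern (the
      names are placeholders, not the tcp_ipv4.c code):

        #include <linux/types.h>

        /* Defining the helper above its only caller keeps the symbol
         * file-local and avoids a forward declaration. */
        static bool syn_flood_detected(unsigned int backlog)
        {
                return backlog > 1024;
        }

        int conn_request(unsigned int backlog)
        {
                return syn_flood_detected(backlog) ? -1 : 0;
        }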
    • cxgb4: fix boolreturn.cocci warnings · 1fb7cd4e
      Committed by Wu Fengguang
      drivers/net/ethernet/chelsio/cxgb4/cxgb4_fcoe.c:49:9-10: WARNING: return of 0/1 in function 'cxgb_fcoe_sof_eof_supported' with return type bool

      Return statements in functions returning bool should use true/false
      instead of 1/0.
      Generated by: scripts/coccinelle/misc/boolreturn.cocci

      CC: Varun Prakash <varun@chelsio.com>
      Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
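      The kind of fix the script generates (a sketch; the SOF values here
      are illustrative):

        #include <linux/types.h>

        /* boolreturn.cocci: bool functions return true/false, not 1/0. */
        static bool sof_eof_supported_sketch(u8 sof)
        {
                if (sof != 0x2d && sof != 0x35)
                        return false;   /* was: return 0; */
                return true;            /* was: return 1; */
        }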
    • fib6: install fib6 ops in the last step · 85b99092
      Committed by WANG Cong
      We should not install the new ops until we finish all of the setup;
      otherwise we would have to NULL them out on failure.
      Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
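      The general shape of the pattern (illustrative stand-ins, not the
      fib6 code): publish the ops pointer only after every fallible step
      has succeeded, so no error path ever needs to NULL it out.

        struct my_ops { int (*lookup)(void); };

        static const struct my_ops *installed_ops;

        static int alloc_resources(void) { return 0; }  /* stand-ins */
        static int register_tables(void) { return 0; }
        static void free_resources(void) { }

        int setup_and_install(const struct my_ops *ops)
        {
                int err;

                err = alloc_resources();
                if (err)
                        return err;     /* nothing published yet */

                err = register_tables();
                if (err) {
                        free_resources();
                        return err;     /* still nothing to NULL out */
                }

                installed_ops = ops;    /* commit as the very last step */
                return 0;
        }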
  2. 28 March 2015, 12 commits
  3. 26 March 2015, 14 commits