1. 14 Sep, 2014 — 11 commits
    • net: stmmac: fix return value check in socfpga_dwmac_parse_data() · f19f916d
      Authored by Wei Yongjun
      In case of error, the function devm_ioremap_resource() returns
      ERR_PTR() and never returns NULL. The NULL test in the return
      value check should be replaced with IS_ERR().
      Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bonding: adjust locking comments · 8c0bc550
      Authored by Nikolay Aleksandrov
      Now that locks have been removed, remove some unnecessary comments and
      adjust others to reflect reality. Also add a comment to "mode_lock" to
      describe its current users and give a brief summary why they need it.
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bonding: 3ad: convert to bond->mode_lock · e470259f
      Authored by Nikolay Aleksandrov
      Now that we have bond->mode_lock, we can remove the state_machine_lock
      and use it in its place. There are no fast paths requiring the per-port
      spinlocks, so it should be okay to consolidate them into mode_lock.
      Also move it inside the unbinding function as we don't want to expose
      mode_lock outside of the specific modes.
      Suggested-by: Jay Vosburgh <jay.vosburgh@canonical.com>
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bonding: alb: convert to bond->mode_lock · 4bab16d7
      Authored by Nikolay Aleksandrov
      The ALB/TLB specific spinlocks are no longer necessary as we now have
      bond->mode_lock for this purpose, so convert them and remove them from
      struct alb_bond_info.
      Also remove the unneeded lock/unlock functions and use spin_lock/unlock
      directly.
      Suggested-by: Jay Vosburgh <jay.vosburgh@canonical.com>
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bonding: convert curr_slave_lock to a spinlock and rename it · b7435628
      Authored by Nikolay Aleksandrov
      curr_slave_lock is now a misleading name; a much better name is
      mode_lock, as it will be used for each mode's purposes. It is also no
      longer necessary to use a rwlock: a simple spinlock is enough.
      Suggested-by: Jay Vosburgh <jay.vosburgh@canonical.com>
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bonding: clean curr_slave_lock use · 1c72cfdc
      Authored by Nikolay Aleksandrov
      Almost all users of curr_slave_lock already hold RTNL, as we've
      discussed previously, so there's no point in keeping it. The one case
      where the lock must stay is the 3ad code; in fact, it's the only one.
      It's okay to remove it from bond_do_fail_over_mac() as it's called with
      RTNL and drops the curr_slave_lock anyway.
      bond_change_active_slave() is one of the main places where
      curr_slave_lock was used. It's okay to remove it there as all callers
      now take RTNL before calling it; that's why we move the ASSERT_RTNL()
      to the beginning, to catch any potential offenders of this rule.
      The RTNL argument actually applies to all of the places where
      curr_slave_lock has been removed from in this patch.
      Also remove the unnecessary bond_deref_active_protected() macro and use
      rtnl_dereference() instead.
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bonding: alb: remove curr_slave_lock · 62c5f518
      Authored by Nikolay Aleksandrov
      First, in rlb_teach_disabled_mac_on_primary() it's okay to remove
      curr_slave_lock as all callers except bond_alb_monitor() already hold
      RTNL, and if bond_alb_monitor() is executing we can at most get a
      period of bad throughput (very unlikely, though).
      In bond_alb_monitor() it's okay to remove the read_lock as the slave
      list is walked with RCU; the worst that could happen is a second
      transmitter running concurrently, and only for a bounded period,
      currently 10 seconds (bond_alb.h: BOND_ALB_LP_TICKS).
      And bond_alb_handle_active_change() is okay because it's always called
      with RTNL. Removed the ASSERT_RTNL() because it'll be inserted in the
      parent function in a following patch.
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bonding: 3ad: clean up curr_slave_lock usage · 86e74986
      Authored by Nikolay Aleksandrov
      Remove the read_lock in bond_3ad_lacpdu_recv(): when a slave is being
      released, its rx_handler is removed before the 3ad unbind, so even if
      packets arrive they won't see the slave in an inconsistent state.
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • virtio_ring: unify direct/indirect code paths. · b25bd251
      Authored by Rusty Russell
      virtqueue_add() populates the virtqueue descriptor table from the sgs
      given.  If it uses an indirect descriptor table, then it puts a single
      descriptor in the descriptor table pointing to the kmalloc'ed indirect
      table where the sg is populated.
      
      Previously vring_add_indirect() did the allocation and the simple
      linear layout.  We replace that with alloc_indirect() which allocates
      the indirect table then chains it like the normal descriptor table so
      we can reuse the core logic.
      
      This slows down pktgen (which uses direct descriptors) by less than
      half a percent, as well as vring_bench, but it's far neater.
      
      vring_bench before:
      	1061485790-1104800648(1.08254e+09+/-6.6e+06)ns
      vring_bench after:
      	1125610268-1183528965(1.14172e+09+/-8e+06)ns
      
      pktgen before:
         787781-796334(793165+/-2.4e+03)pps 365-369(367.5+/-1.2)Mb/sec (365530384-369498976(3.68028e+08+/-1.1e+06)bps) errors: 0
      
      pktgen after:
         779988-790404(786391+/-2.5e+03)pps 361-366(364.35+/-1.3)Mb/sec (361914432-366747456(3.64885e+08+/-1.2e+06)bps) errors: 0
      
      Now, if we force indirect descriptors by turning off any_header_sg
      in virtio_net.c:
      
      pktgen before:
        713773-721062(718374+/-2.1e+03)pps 331-334(332.95+/-0.92)Mb/sec (331190672-334572768(3.33325e+08+/-9.6e+05)bps) errors: 0
      pktgen after:
        710542-719195(714898+/-2.4e+03)pps 329-333(331.15+/-1.1)Mb/sec (329691488-333706480(3.31713e+08+/-1.1e+06)bps) errors: 0
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • virtio_ring: assume sgs are always well-formed. · eeebf9b1
      Authored by Rusty Russell
      We used to have several callers which just used arrays.  They're
      gone, so we can use sg_next() everywhere, simplifying the code.
      
      On my laptop, this slowed down vring_bench by 15%:
      
      vring_bench before:
      	936153354-967745359(9.44739e+08+/-6.1e+06)ns
      vring_bench after:
      	1061485790-1104800648(1.08254e+09+/-6.6e+06)ns
      
      However, a more realistic test using pktgen on an AMD FX(tm)-8320 saw
      a few percent improvement:
      
      pktgen before:
        767390-792966(785159+/-6.5e+03)pps 356-367(363.75+/-2.9)Mb/sec (356068960-367936224(3.64314e+08+/-3e+06)bps) errors: 0
      
      pktgen after:
         787781-796334(793165+/-2.4e+03)pps 365-369(367.5+/-1.2)Mb/sec (365530384-369498976(3.68028e+08+/-1.1e+06)bps) errors: 0
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • virtio_net: pass well-formed sgs to virtqueue_add_*() · a5835440
      Authored by Rusty Russell
      This is the only driver which doesn't hand virtqueue_add_inbuf and
      virtqueue_add_outbuf a well-formed, well-terminated sg. Fix it,
      so we can make virtqueue_add_* simpler.
      
      pktgen results:
      	modprobe pktgen
      	echo 'add_device eth0' > /proc/net/pktgen/kpktgend_0
      	echo nowait 1 > /proc/net/pktgen/eth0
      	echo count 1000000 > /proc/net/pktgen/eth0
      	echo clone_skb 100000 > /proc/net/pktgen/eth0
      	echo dst_mac 4e:14:25:a9:30:ac > /proc/net/pktgen/eth0
      	echo dst 192.168.1.2 > /proc/net/pktgen/eth0
      	for i in `seq 20`; do echo start > /proc/net/pktgen/pgctrl; tail -n1 /proc/net/pktgen/eth0; done
      
      Before:
        746547-793084(786421+/-9.6e+03)pps 346-367(364.4+/-4.4)Mb/sec (346397808-367990976(3.649e+08+/-4.5e+06)bps) errors: 0
      
      After:
        767390-792966(785159+/-6.5e+03)pps 356-367(363.75+/-2.9)Mb/sec (356068960-367936224(3.64314e+08+/-3e+06)bps) errors: 0
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 13 Sep, 2014 — 3 commits
  3. 12 Sep, 2014 — 11 commits
  4. 11 Sep, 2014 — 6 commits
  5. 10 Sep, 2014 — 9 commits