1. 05 Nov, 2019 1 commit
  2. 05 Sep, 2019 3 commits
  3. 24 Aug, 2019 1 commit
  4. 21 Aug, 2019 3 commits
  5. 01 Aug, 2019 3 commits
  6. 23 Jul, 2019 1 commit
  7. 31 May, 2019 1 commit
  8. 29 May, 2019 1 commit
  9. 05 May, 2019 1 commit
  10. 02 May, 2019 1 commit
  11. 24 Apr, 2019 1 commit
    • net: pass net_device argument to the eth_get_headlen · c43f1255
      Stanislav Fomichev authored
      Update all users of eth_get_headlen to pass the network device, fetch
      the network namespace from it, and pass it down to the flow dissector.
      This commit is a no-op until an administrator installs a BPF flow
      dissector program.
      
      Cc: Maxim Krasnyansky <maxk@qti.qualcomm.com>
      Cc: Saeed Mahameed <saeedm@mellanox.com>
      Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Cc: intel-wired-lan@lists.osuosl.org
      Cc: Yisen Zhuang <yisen.zhuang@huawei.com>
      Cc: Salil Mehta <salil.mehta@huawei.com>
      Cc: Michael Chan <michael.chan@broadcom.com>
      Cc: Igor Russkikh <igor.russkikh@aquantia.com>
      Signed-off-by: Stanislav Fomichev <sdf@google.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  12. 18 Apr, 2019 3 commits
  13. 08 Apr, 2019 1 commit
    • drivers: Remove explicit invocations of mmiowb() · fb24ea52
      Will Deacon authored
      mmiowb() is now implied by spin_unlock() on architectures that require
      it, so there is no reason to call it from driver code. This patch was
      generated using coccinelle:
      
      	@mmiowb@
      	@@
      	- mmiowb();
      
      and invoked as:
      
      $ for d in drivers include/linux/qed sound; do \
      spatch --include-headers --sp-file mmiowb.cocci --dir $d --in-place; done
      
      NOTE: mmiowb() has only ever guaranteed ordering in conjunction with
      spin_unlock(). However, pairing each mmiowb() removal in this patch with
      the corresponding call to spin_unlock() is not at all trivial, so there
      is a small chance that this change may regress any drivers incorrectly
      relying on mmiowb() to order MMIO writes between CPUs using lock-free
      synchronisation. If you've ended up bisecting to this commit, you can
      reintroduce the mmiowb() calls using wmb() instead, which should restore
      the old behaviour on all architectures other than some esoteric ia64
      systems.
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  14. 02 Apr, 2019 1 commit
    • net: move skb->xmit_more hint to softnet data · 6b16f9ee
      Florian Westphal authored
      There are two reasons for this.
      
      First, the xmit_more flag conceptually doesn't fit into the skb, as
      xmit_more is not a property of the skb itself.
      It's only a hint to the driver that the stack is about to transmit
      another packet immediately.
      
      Second, it was only done this way to avoid passing another argument
      to ndo_start_xmit().
      
      We can place xmit_more in the softnet data, next to the device recursion.
      The recursion counter is already written to on each transmit. The "more"
      indicator is placed right next to it.
      
      Drivers can use the netdev_xmit_more() helper instead of skb->xmit_more
      to check the "more packets coming" hint.
      
      skb->xmit_more is retained (but always 0) so as not to cause build
      breakage.
      
      This change takes care of the simple s/skb->xmit_more/netdev_xmit_more()/
      conversions.  The remaining drivers are converted in the next patches.
      Suggested-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  15. 27 Mar, 2019 3 commits
  16. 26 Mar, 2019 5 commits
  17. 25 Mar, 2019 2 commits
  18. 22 Mar, 2019 1 commit
  19. 20 Mar, 2019 1 commit
  20. 26 Feb, 2019 2 commits
  21. 16 Jan, 2019 2 commits
  22. 22 Nov, 2018 1 commit
    • ethernet/intel: consolidate NAPI and NAPI exit · 0bcd952f
      Jesse Brandeburg authored
      While reviewing code, I noticed that Eric Dumazet recommends that
      drivers check the return code of napi_complete_done() and use it
      to decide whether to re-enable interrupts when exiting poll. One
      of the Intel drivers was already fixed (ixgbe).
      
      Upon looking at the Intel drivers as a whole, we are handling our
      polling and NAPI exit in a few different ways based on whether we
      have multiqueue and whether we have Tx cleanup included. Several
      drivers had the bug of exiting NAPI with return 0, which appears
      to mess up the accounting in the stack.
      
      Consolidate all the NAPI routines to use the best-known way of
      exiting and to look mostly alike:
      1) check the return code of napi_complete_done() to control
         interrupt enabling
      2) return the actual amount of work done
      3) return budget immediately if another NAPI poll is needed
      
      Tested the changes on e1000e with a high interrupt rate set; they
      show about an 8% reduction in CPU utilization when busy polling,
      because we aren't re-enabling interrupts when we're about to be
      polled.
      Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  23. 21 Nov, 2018 1 commit