1. 01 Apr, 2018 (1 commit)
  2. 22 Jan, 2018 (1 commit)
  3. 11 Jan, 2018 (1 commit)
  4. 29 Nov, 2017 (1 commit)
  5. 03 Nov, 2017 (1 commit)
  6. 28 Oct, 2017 (1 commit)
  7. 25 Oct, 2017 (1 commit)
    • locking/atomics: COCCINELLE/treewide: Convert trivial ACCESS_ONCE() patterns to READ_ONCE()/WRITE_ONCE() · 6aa7de05
      Committed by Mark Rutland
      
      Please do not apply this to mainline directly; instead, please re-run
      the coccinelle script shown below and apply its output.
      
      For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
      preference to ACCESS_ONCE(), and new code is expected to use one of the
      former. So far, there's been no reason to change most existing uses of
      ACCESS_ONCE(), as these aren't harmful, and changing them results in
      churn.
      
      However, for some features, the read/write distinction is critical to
      correct operation. To distinguish these cases, separate read/write
      accessors must be used. This patch migrates (most) remaining
      ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following
      coccinelle script:
      
      ----
      // Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
      // WRITE_ONCE()
      
      // $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch
      
      virtual patch
      
      @ depends on patch @
      expression E1, E2;
      @@
      
      - ACCESS_ONCE(E1) = E2
      + WRITE_ONCE(E1, E2)
      
      @ depends on patch @
      expression E;
      @@
      
      - ACCESS_ONCE(E)
      + READ_ONCE(E)
      ----
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: davem@davemloft.net
      Cc: linux-arch@vger.kernel.org
      Cc: mpe@ellerman.id.au
      Cc: shuah@kernel.org
      Cc: snitzer@redhat.com
      Cc: thor.thayer@linux.intel.com
      Cc: tj@kernel.org
      Cc: viro@zeniv.linux.org.uk
      Cc: will.deacon@arm.com
      Link: http://lkml.kernel.org/r/1508792849-3115-19-git-send-email-paulmck@linux.vnet.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
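      To make the effect of the semantic patch concrete, here is a minimal
      before/after sketch in C; the `ring` structure and its `head`/`tail`
      fields are hypothetical, not taken from this patch:

      ----
      /* Hypothetical ring-buffer fields, for illustration only. */

      /* Before: ACCESS_ONCE() is used for both loads and stores. */
      head = ACCESS_ONCE(ring->head);        /* load  */
      ACCESS_ONCE(ring->tail) = tail;        /* store */

      /* After the script: the read/write distinction is explicit. */
      head = READ_ONCE(ring->head);          /* load  */
      WRITE_ONCE(ring->tail, tail);          /* store */
      ----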
  8. 15 Aug, 2017 (1 commit)
  9. 05 Jul, 2017 (1 commit)
  10. 04 Feb, 2017 (2 commits)
  11. 05 Jan, 2017 (1 commit)
  12. 19 Nov, 2016 (1 commit)
  13. 17 Nov, 2016 (1 commit)
  14. 19 Sep, 2016 (1 commit)
  15. 19 Aug, 2016 (1 commit)
  16. 31 Jul, 2016 (1 commit)
  17. 27 Apr, 2016 (1 commit)
  18. 12 Apr, 2016 (1 commit)
    • cxgb4: Stop Rx Queues before freeing them up · ebf4dc2b
      Committed by Hariprasad Shenai
      Stop all Ethernet RX Queues before freeing up various Ingress/Egress
      Queues, etc. We were seeing cases of Ingress Queues not getting serviced
      during the shutdown process leading to Ingress Paths jamming up through
      the chip and blocking the shutdown effort itself.
      
      One such case involved the Firmware sending a "Flush Token" through the
      ULP-TX -> ULP-RX path for an Ethernet TX Queue being freed in order to
      make sure there weren't any remaining TX Work Requests in the pipeline.
      But the return path was stalled by Ingress Data unable to be delivered to
      the Host because those Ingress Queues were no longer being serviced.
      
      Based on original work by Casey Leedom <leedom@chelsio.com>
      Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
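      As a rough illustration of the ordering this fix enforces, consider
      the sketch below; the function names are invented for clarity and are
      not the actual cxgb4 symbols:

      ----
      /* Illustrative shutdown ordering (invented names, not cxgb4's API):
       * quiesce the Ethernet RX queues first, so the chip's ingress paths
       * cannot jam while the remaining queues are torn down. */
      static void example_teardown(struct adapter *adap)
      {
              example_stop_eth_rx_queues(adap); /* stop RX before freeing */
              example_free_all_queues(adap);    /* firmware flush tokens
                                                 * can now complete       */
      }
      ----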
  19. 22 Mar, 2016 (2 commits)
  20. 03 Mar, 2016 (2 commits)
  21. 24 Dec, 2015 (6 commits)
  22. 09 Dec, 2015 (4 commits)
  23. 19 Nov, 2015 (2 commits)
    • net: provide generic busy polling to all NAPI drivers · 93d05d4a
      Committed by Eric Dumazet
      NAPI drivers no longer need to observe a particular protocol
      to benefit from busy polling (CONFIG_NET_RX_BUSY_POLL=y).

      napi_hash_add() and napi_hash_del() are now called automatically
      from the core networking stack, from netif_napi_add() and
      netif_napi_del() respectively.
      
      This patch depends on free_netdev() and netif_napi_del() being
      called from process context, which seems to be the norm.
      
      Drivers might still prefer to call napi_hash_del() on their
      own: knowing the lifetime of their NAPI structures, they can
      combine all the RCU grace periods into a single one, while the
      core networking stack has no idea whether such combining is
      possible.

      Once this patch proves not to bring serious regressions, we
      will clean up drivers to either remove napi_hash_del() or
      combine the RCU grace periods appropriately.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
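      For illustration, a minimal sketch of what driver-side NAPI
      registration looks like after this change; the mydrv_* names and
      the private structure are hypothetical:

      ----
      #include <linux/netdevice.h>

      struct mydrv_priv {
              struct napi_struct napi;
              /* ... device state ... */
      };

      /* Hypothetical poll callback; processes up to 'budget' packets. */
      static int mydrv_poll(struct napi_struct *napi, int budget)
      {
              int work_done = 0;
              /* ... receive packets, incrementing work_done ... */
              if (work_done < budget)
                      napi_complete(napi);
              return work_done;
      }

      static void mydrv_setup(struct net_device *dev, struct mydrv_priv *priv)
      {
              /* With this patch, napi_hash_add() runs inside
               * netif_napi_add(); the driver registers nothing else
               * to get busy polling. */
              netif_napi_add(dev, &priv->napi, mydrv_poll, NAPI_POLL_WEIGHT);
      }
      ----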
    • net: move skb_mark_napi_id() into core networking stack · 93f93a44
      Committed by Eric Dumazet
      We would like to automatically provide busy polling support
      to all NAPI drivers, without them having to implement anything.
      
      skb_mark_napi_id() can be called from napi_gro_receive() and
      napi_get_frags().
      
      A few drivers still call skb_mark_napi_id() themselves because
      they use netif_receive_skb(); they should eventually call
      napi_gro_receive() instead. I will leave this to the driver
      maintainers.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
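      A short sketch of the two RX completion patterns being contrasted;
      `napi` and `skb` stand for a driver's local variables:

      ----
      /* Older pattern: the driver marks the NAPI id itself. */
      skb_mark_napi_id(skb, napi);
      netif_receive_skb(skb);

      /* Preferred pattern after this change: napi_gro_receive()
       * (and napi_get_frags()) mark the NAPI id in the core. */
      napi_gro_receive(napi, skb);
      ----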
  24. 29 Sep, 2015 (1 commit)
  25. 10 Sep, 2015 (1 commit)
  26. 13 Aug, 2015 (1 commit)
  27. 21 Jul, 2015 (1 commit)
  28. 12 Jun, 2015 (1 commit)