1. 18 Nov 2016, 1 commit
    • udp: enable busy polling for all sockets · e68b6e50
      Committed by Eric Dumazet
      UDP busy polling is restricted to connected UDP sockets.
      
      This is because sk_busy_loop() only takes care of one NAPI context.
      
      There are cases where it could be extended.
      
      1) Some hosts receive traffic on a single NIC, with one RX queue.
      
      2) Some applications use SO_REUSEPORT and an associated BPF filter
         to split the incoming traffic onto one UDP socket per RX
         queue/thread/CPU.
      
      3) Some UDP sockets are used to send/receive traffic for one flow, but
         they do not bother with connect().
      
      This patch records the napi_id of the first received skb, giving more
      reach to busy polling (a sketch of the idea follows below).
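      
      A minimal sketch of the idea, assuming a helper along these lines
      (the name and exact guards are illustrative, not necessarily the
      patch's code):
      
          /* Illustrative sketch only: record the NAPI id of the first
           * skb a socket receives, so even unconnected UDP sockets can
           * take part in busy polling.
           */
          static inline void sk_mark_napi_id_once(struct sock *sk,
                                                  const struct sk_buff *skb)
          {
          #ifdef CONFIG_NET_RX_BUSY_POLL
                  if (!sk->sk_napi_id)    /* only the first skb sets it */
                          sk->sk_napi_id = skb->napi_id;
          #endif
          }
      
      Such a helper would be called from the UDP enqueue path, before the
      skb is queued to the socket.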
      
      Tested:
      
      lpaa23:~# echo 70 >/proc/sys/net/core/busy_read
      lpaa24:~# echo 70 >/proc/sys/net/core/busy_read
      
      lpaa23:~# for f in `seq 1 10`; do ./super_netperf 1 -H lpaa24 -t UDP_RR -l 5; done
      
      Before patch:
         27867   28870   37324   41060   41215
         36764   36838   44455   41282   43843
      After patch:
         73920   73213   70147   74845   71697
         68315   68028   75219   70082   73707
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Willem de Bruijn <willemb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 17 Nov 2016, 1 commit
  3. 19 Nov 2015, 1 commit
  4. 14 Jan 2014, 1 commit
    • sched, net: Fixup busy_loop_us_clock() · 37089834
      Committed by Peter Zijlstra
      The only valid use of preempt_enable_no_resched() is if the very next
      line is schedule(), or if we know preemption cannot actually be enabled
      by that statement because further preempt_count references are still held.
      
      This busy_poll code looks to be completely and utterly broken:
      sched_clock() can return utter garbage with interrupts enabled (rare,
      but still), and it can drift unbounded between CPUs.
      
      This means that if you get preempted/migrated and your new CPU is
      years behind the previous CPU, we get to busy spin for a _very_ long
      time.
      
      There is a _REASON_ sched_clock() warns about preemptibility:
      papering over it with a preempt_disable()/preempt_enable_no_resched()
      pair is just terminal brain damage on so many levels.
      
      Replace sched_clock() usage with local_clock() which has a bounded
      drift between CPUs (<2 jiffies).
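      
      In sketch form, the fix replaces the preempt-fiddling sched_clock()
      read with a plain local_clock() call; the >> 10 shift is a cheap
      ns-to-us approximation (dividing by 1024 instead of 1000). This is
      illustrative of the change, not a verbatim diff:
      
          /* Safe to call with preemption enabled; local_clock() drift
           * between CPUs is bounded (< 2 jiffies), so a migrated task
           * cannot busy spin for an absurdly long time.
           */
          static inline u64 busy_loop_us_clock(void)
          {
                  return local_clock() >> 10;
          }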
      
      There is a further problem with the entire busy wait poll thing in
      that the spin time is added on top of the syscall timeout instead of
      being counted against it.
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: rui.zhang@intel.com
      Cc: jacob.jun.pan@linux.intel.com
      Cc: Mike Galbraith <bitbucket@online.de>
      Cc: hpa@zytor.com
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: lenb@kernel.org
      Cc: rjw@rjwysocki.net
      Cc: Eliezer Tamir <eliezer.tamir@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/20131119151338.GF3694@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  5. 29 Aug 2013, 1 commit
  6. 10 Aug 2013, 1 commit
  7. 05 Aug 2013, 1 commit
  8. 02 Aug 2013, 2 commits
  9. 11 Jul 2013, 3 commits
  10. 10 Jul 2013, 1 commit
  11. 09 Jul 2013, 1 commit
  12. 03 Jul 2013, 2 commits
  13. 02 Jul 2013, 2 commits
  14. 26 Jun 2013, 1 commit
    • net: poll/select low latency socket support · 2d48d67f
      Committed by Eliezer Tamir
      select/poll busy-poll support.
      
      Split the sysctl value into two separate ones, one for read and one
      for poll, and updated Documentation/sysctl/net.txt accordingly.
      
      Add a new poll flag, POLL_LL. When this flag is set, sock_poll will call
      sk_poll_ll if possible. sock_poll also sets this flag in its return value
      to tell select/poll that it has found a socket capable of busy polling.
      
      When poll/select have nothing to report, call the low-level
      sock_poll again until we are out of time or we find something.
      
      Once the system call finds something, it stops setting POLL_LL, so it can
      return the result to the user ASAP.
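      
      A sketch of the shape this gives sock_poll() (POLL_LL and sk_poll_ll
      come from the message above; sk_valid_ll() and the poll_table key
      check are assumptions for illustration):
      
          static unsigned int sock_poll(struct file *file, poll_table *wait)
          {
                  struct socket *sock = file->private_data;
                  unsigned int ll_flag = 0;
      
                  if (sk_valid_ll(sock->sk)) {
                          /* advertise to select/poll that this socket
                           * can busy poll */
                          ll_flag = POLL_LL;
                          /* busy poll once, only if the syscall asked
                           * for it via the poll_table key */
                          if (wait && (wait->_key & POLL_LL))
                                  sk_poll_ll(sock->sk, 1);
                  }
      
                  return ll_flag | sock->ops->poll(file, sock, wait);
          }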
      Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  15. 18 Jun 2013, 3 commits
  16. 11 Jun 2013, 1 commit