1. 20 Dec 2011, 1 commit
  2. 12 Dec 2011, 1 commit
  3. 23 Nov 2011, 1 commit
  4. 01 Nov 2011, 1 commit
  5. 09 Jul 2011, 1 commit
  6. 07 Jul 2011, 1 commit
  7. 02 Jul 2011, 1 commit
  8. 17 Jun 2011, 1 commit
  9. 12 Jun 2011, 1 commit
  10. 02 Jun 2011, 3 commits
  11. 13 May 2011, 2 commits
  12. 28 Apr 2011, 1 commit
  13. 22 Apr 2011, 1 commit
  14. 20 Apr 2011, 2 commits
  15. 31 Mar 2011, 1 commit
  16. 08 Mar 2011, 1 commit
  17. 23 Feb 2011, 1 commit
  18. 20 Jan 2011, 1 commit
  19. 17 Dec 2010, 1 commit
  20. 11 Dec 2010, 1 commit
  21. 07 Dec 2010, 1 commit
  22. 11 Nov 2010, 1 commit
  23. 04 Oct 2010, 2 commits
  24. 24 Sep 2010, 1 commit
  25. 09 Sep 2010, 1 commit
  26. 07 Sep 2010, 1 commit
  27. 27 Aug 2010, 1 commit
  28. 16 May 2010, 1 commit
  29. 02 May 2010, 1 commit
      net: sock_def_readable() and friends RCU conversion · 43815482
      Authored by Eric Dumazet
      The sk_callback_lock rwlock actually protects the sk->sk_sleep pointer,
      so we need two atomic operations (and the associated cache-line
      dirtying) per incoming packet.
      
      An RCU conversion is pretty much needed:
      
      1) Add a new structure, called "struct socket_wq", to hold all fields
      that will need rcu_read_lock() protection (currently: a
      wait_queue_head_t and a struct fasync_struct pointer).
      
      [Future patch will add a list anchor for wakeup coalescing]
      
      2) Attach one such structure to each "struct socket" created in
      sock_alloc_inode().
      
      3) Respect the RCU grace period when freeing a "struct socket_wq".
      
      4) Replace the sk_sleep pointer in "struct sock" with sk_wq, a pointer
      to "struct socket_wq".
      
      5) Change the sk_sleep() function to use the new sk->sk_wq instead of
      sk->sk_sleep.
      
      6) Change sk_has_sleeper() to wq_has_sleeper(), which must be used
      inside an rcu_read_lock() section.
      
      7) Change all sk_has_sleeper() callers to:
        - Use rcu_read_lock() instead of read_lock(&sk->sk_callback_lock)
        - Use wq_has_sleeper() to wake up tasks when needed
        - Use rcu_read_unlock() instead of read_unlock(&sk->sk_callback_lock)
      
      8) sock_wake_async() is modified to use RCU protection as well.
      
      9) Exceptions:
        macvtap, drivers/net/tun.c, and af_unix use an embedded "struct
      socket_wq" instead of a dynamically allocated one; they don't need RCU
      freeing.
      
      Some cleanups or follow-ups are probably needed (for example, a
      possible conversion of sk_callback_lock to a spinlock).
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      43815482
  30. 01 May 2010, 1 commit
  31. 29 Apr 2010, 2 commits
  32. 28 Apr 2010, 1 commit
      net: sk_add_backlog() take rmem_alloc into account · c377411f
      Authored by Eric Dumazet
      The current socket backlog limit is not enough to really stop DDoS
      attacks, because the user thread spends a long time processing a full
      backlog each round, and a user might spin madly on the socket lock.
      
      We should add the backlog size and the receive-queue size (aka
      rmem_alloc) together to pace writers, and let the user run without
      being slowed down too much.
      
      Introduce a sk_rcvqueues_full() helper to avoid taking the socket lock
      in stress situations.
      
      Under huge stress from a multiqueue/RPS-enabled NIC, a single-flow UDP
      receiver can now process ~200,000 pps (instead of ~100 pps before the
      patch) on an 8-core machine.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c377411f
  33. 21 Apr 2010, 1 commit
  34. 31 Mar 2010, 1 commit