1. 10 November 2011, 1 commit
    • net: add wireless TX status socket option · 6e3e939f
      Johannes Berg authored
      The 802.1X EAPOL handshake that hostapd performs requires
      knowing whether the frame was ACKed by the peer.
      Currently, we fudge this pretty badly by not even
      transmitting the frame as a normal data frame but
      injecting it with radiotap and getting the status
      out of the radiotap monitor interface as well. This is
      rather complex, confuses users (the mon.wlan0 presence)
      and doesn't work with all hardware.
      
      To get rid of that hack, introduce a real wifi TX
      status option for data frame transmissions.
      
      This works similarly to the existing TX timestamping
      in that it reflects the SKB back to the socket's
      error queue with a SCM_WIFI_STATUS cmsg that carries
      an int indicating ACK status (0/1).
      
      Since it is possible that at some point we will
      want to have TX timestamping and wifi status in a
      single errqueue SKB (there's little point in not
      doing that), redefine SO_EE_ORIGIN_TIMESTAMPING
      to SO_EE_ORIGIN_TXSTATUS, which can collect more
      than just the timestamp; keep the old constant
      as an alias, of course. Currently the internal APIs
      don't make that possible, but it wouldn't be hard
      to split them up in a way that does.
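
      As a rough userspace sketch of how a supplicant might consume the
      new status (illustrative only, not part of the patch; the fallback
      #defines are just for older headers):

      #include <stdio.h>
      #include <string.h>
      #include <sys/socket.h>
      #include <sys/uio.h>

      #ifndef SO_WIFI_STATUS
      #define SO_WIFI_STATUS  41              /* value in asm-generic/socket.h */
      #define SCM_WIFI_STATUS SO_WIFI_STATUS
      #endif

      static void report_wifi_status(int fd)
      {
              char data[2048], control[512];
              struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
              struct msghdr msg = {
                      .msg_iov = &iov, .msg_iovlen = 1,
                      .msg_control = control, .msg_controllen = sizeof(control),
              };
              struct cmsghdr *cmsg;
              int on = 1, acked;

              /* Opt in: the kernel reflects TX status to the error queue. */
              setsockopt(fd, SOL_SOCKET, SO_WIFI_STATUS, &on, sizeof(on));

              if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0)
                      return;

              for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
                      if (cmsg->cmsg_level == SOL_SOCKET &&
                          cmsg->cmsg_type == SCM_WIFI_STATUS) {
                              memcpy(&acked, CMSG_DATA(cmsg), sizeof(acked));
                              printf("EAPOL frame %s\n", acked ? "acked" : "lost");
                      }
              }
      }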
      
      Thanks to Neil Horman for helping me figure out
      the functions that add the control messages.
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: John W. Linville <linville@tuxdriver.com>
  2. 09 November 2011, 1 commit
  3. 26 October 2011, 1 commit
  4. 14 October 2011, 1 commit
    • net: more accurate skb truesize · 87fb4b7b
      Eric Dumazet authored
      skb truesize currently accounts for the sk_buff struct and only part
      of the skb head; kmalloc() rounding is also ignored.

      Considering that skb_shared_info is larger than sk_buff, it's time to
      take it into account for better memory accounting.
      
      This patch introduces an SKB_TRUESIZE(X) macro to centralize the
      various assumptions in a single place.
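
      For reference, the macro as introduced looks roughly like this (see
      include/linux/skbuff.h for the authoritative version):

      /* True size of an skb holding X bytes of payload: the payload plus
       * the struct sk_buff and the struct skb_shared_info, each rounded
       * up to a cache-line multiple by SKB_DATA_ALIGN(). */
      #define SKB_TRUESIZE(X) ((X) +                                      \
                               SKB_DATA_ALIGN(sizeof(struct sk_buff)) +   \
                               SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))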
      
      At skb allocation time, we place the skb_shared_info struct at the
      exact end of the skb head, to allow better use of memory (lowering
      the number of reallocations), since kmalloc() gives us power-of-two
      memory blocks.

      Unless SLUB debugging is active, both skb->head and skb_shared_info
      are aligned to cache lines, as before.
      
      Note: This patch might trigger performance regressions in
      misconfigured protocol stacks that now hit per-socket or global
      memory limits they previously never reached. But it's a necessary
      step toward more accurate memory accounting.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Andi Kleen <ak@linux.intel.com>
      CC: Ben Hutchings <bhutchings@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 08 October 2011, 1 commit
  6. 25 August 2011, 1 commit
  7. 02 August 2011, 1 commit
  8. 06 July 2011, 1 commit
  9. 22 June 2011, 1 commit
  10. 31 March 2011, 1 commit
  11. 07 January 2011, 1 commit
  12. 17 December 2010, 1 commit
    • net: fix nulls list corruptions in sk_prot_alloc · fcbdf09d
      Octavian Purdila authored
      Special care is taken inside sk_prot_alloc to avoid overwriting
      skc_node/skc_nulls_node. We should also avoid overwriting
      skc_bind_node/skc_portaddr_node.
      
      The patch fixes the following crash:
      
       BUG: unable to handle kernel paging request at fffffffffffffff0
       IP: [<ffffffff812ec6dd>] udp4_lib_lookup2+0xad/0x370
       [<ffffffff812ecc22>] __udp4_lib_lookup+0x282/0x360
       [<ffffffff812ed63e>] __udp4_lib_rcv+0x31e/0x700
       [<ffffffff812bba45>] ? ip_local_deliver_finish+0x65/0x190
       [<ffffffff812bbbf8>] ? ip_local_deliver+0x88/0xa0
       [<ffffffff812eda35>] udp_rcv+0x15/0x20
       [<ffffffff812bba45>] ip_local_deliver_finish+0x65/0x190
       [<ffffffff812bbbf8>] ip_local_deliver+0x88/0xa0
       [<ffffffff812bb2cd>] ip_rcv_finish+0x32d/0x6f0
       [<ffffffff8128c14c>] ? netif_receive_skb+0x99c/0x11c0
       [<ffffffff812bb94b>] ip_rcv+0x2bb/0x350
       [<ffffffff8128c14c>] netif_receive_skb+0x99c/0x11c0
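
      The fix boils down to clearing the socket in segments so the
      nulls/list pointer slots are skipped; a rough sketch of the pattern
      (the merged helper in net/core/sock.c may differ in details):

      /* Zero a struct sock for reuse while preserving the two list-node
       * pointer slots that concurrent RCU/nulls lookups may traverse. */
      static void sk_prot_clear_portaddr_nulls(struct sock *sk, int size)
      {
              unsigned long nulls1 = offsetof(struct sock, __sk_common.skc_node.next);
              unsigned long nulls2 = offsetof(struct sock, __sk_common.skc_portaddr_node.next);

              if (nulls1 > nulls2)
                      swap(nulls1, nulls2);

              if (nulls1 != 0)
                      memset((char *)sk, 0, nulls1);
              memset((char *)sk + nulls1 + sizeof(void *), 0,
                     nulls2 - nulls1 - sizeof(void *));
              memset((char *)sk + nulls2 + sizeof(void *), 0,
                     size - nulls2 - sizeof(void *));
      }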
      Signed-off-by: Leonard Crestez <lcrestez@ixiacom.com>
      Signed-off-by: Octavian Purdila <opurdila@ixiacom.com>
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 10 December 2010, 1 commit
    • net: optimize INET input path further · 68835aba
      Eric Dumazet authored
      Follow-up to commit b178bb3d (net: reorder struct sock fields)

      Optimize the INET input path a bit further, by:

      1) moving sk_refcnt close to sk_lock.

      This reduces the number of dirtied cache lines by one on 64-bit
      arches (with 64-byte cache lines).

      2) moving inet_daddr & inet_rcv_saddr to the beginning of sk

      (same cache line as hash / family / bound_dev_if / nulls_node)

      This reduces the number of cache lines accessed in lookups by one,
      and doesn't increase the size of inet and timewait socks.
      inet and tw sockets now share the same placeholder for these fields.
      
      Before patch :
      
      offsetof(struct sock, sk_refcnt) = 0x10
      offsetof(struct sock, sk_lock) = 0x40
      offsetof(struct sock, sk_receive_queue) = 0x60
      offsetof(struct inet_sock, inet_daddr) = 0x270
      offsetof(struct inet_sock, inet_rcv_saddr) = 0x274
      
      After patch :
      
      offsetof(struct sock, sk_refcnt) = 0x44
      offsetof(struct sock, sk_lock) = 0x48
      offsetof(struct sock, sk_receive_queue) = 0x68
      offsetof(struct inet_sock, inet_daddr) = 0x0
      offsetof(struct inet_sock, inet_rcv_saddr) = 0x4
      
      compute_score() (udp or tcp) now uses a single cache line per
      ignored item, instead of two.
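
      A quick way to audit such layout changes (an illustrative userspace
      snippet, not part of the patch; the toy struct merely stands in for
      struct sock):

      #include <stddef.h>
      #include <stdio.h>

      /* Toy stand-in: the point is only that offsetof() lets you check
       * which 64-byte cache line each hot field lands on. */
      struct toy_sock {
              unsigned int daddr;      /* want these at offset 0 ..        */
              unsigned int rcv_saddr;  /* .. sharing the lookup cache line */
              char         pad[56];
              int          refcnt;     /* want this next to the lock       */
              int          lock;
      };

      int main(void)
      {
              printf("daddr:  line %zu\n", offsetof(struct toy_sock, daddr) / 64);
              printf("refcnt: line %zu\n", offsetof(struct toy_sock, refcnt) / 64);
              printf("lock:   line %zu\n", offsetof(struct toy_sock, lock) / 64);
              return 0;
      }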
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  14. 08 December 2010, 1 commit
  15. 11 November 2010, 1 commit
  16. 26 October 2010, 1 commit
  17. 08 October 2010, 1 commit
    • net: suppress RCU lockdep false positive in sock_update_classid · 1144182a
      Paul E. McKenney authored
      > ===================================================
      > [ INFO: suspicious rcu_dereference_check() usage. ]
      > ---------------------------------------------------
      > include/linux/cgroup.h:542 invoked rcu_dereference_check() without protection!
      >
      > other info that might help us debug this:
      >
      >
      > rcu_scheduler_active = 1, debug_locks = 0
      > 1 lock held by swapper/1:
      >  #0:  (net_mutex){+.+.+.}, at: [<ffffffff813e9010>]
      > register_pernet_subsys+0x1f/0x47
      >
      > stack backtrace:
      > Pid: 1, comm: swapper Not tainted 2.6.35.4-28.fc14.x86_64 #1
      > Call Trace:
      >  [<ffffffff8107bd3a>] lockdep_rcu_dereference+0xaa/0xb3
      >  [<ffffffff813e04b9>] sock_update_classid+0x7c/0xa2
      >  [<ffffffff813e054a>] sk_alloc+0x6b/0x77
      >  [<ffffffff8140b281>] __netlink_create+0x37/0xab
      >  [<ffffffff813f941c>] ? rtnetlink_rcv+0x0/0x2d
      >  [<ffffffff8140cee1>] netlink_kernel_create+0x74/0x19d
      >  [<ffffffff8149c3ca>] ? __mutex_lock_common+0x339/0x35b
      >  [<ffffffff813f7e9c>] rtnetlink_net_init+0x2e/0x48
      >  [<ffffffff813e8d7a>] ops_init+0xe9/0xff
      >  [<ffffffff813e8f0d>] register_pernet_operations+0xab/0x130
      >  [<ffffffff813e901f>] register_pernet_subsys+0x2e/0x47
      >  [<ffffffff81db7bca>] rtnetlink_init+0x53/0x102
      >  [<ffffffff81db835c>] netlink_proto_init+0x126/0x143
      >  [<ffffffff81db8236>] ? netlink_proto_init+0x0/0x143
      >  [<ffffffff810021b8>] do_one_initcall+0x72/0x186
      >  [<ffffffff81d78ebc>] kernel_init+0x23b/0x2c9
      >  [<ffffffff8100aae4>] kernel_thread_helper+0x4/0x10
      >  [<ffffffff8149e2d0>] ? restore_args+0x0/0x30
      >  [<ffffffff81d78c81>] ? kernel_init+0x0/0x2c9
      >  [<ffffffff8100aae0>] ? kernel_thread_helper+0x0/0x10
      
      The sock_update_classid() function calls task_cls_classid(current),
      but the calling task cannot go away, so there is no danger of
      the associated structures disappearing.  Insert an RCU read-side
      critical section to suppress the false positive.
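
      The change amounts to wrapping the classid lookup in an RCU
      read-side critical section; roughly (paraphrasing the patch):

      void sock_update_classid(struct sock *sk)
      {
              u32 classid;

              rcu_read_lock();        /* purely to satisfy lockdep:
                                       * current cannot go away under us */
              classid = task_cls_classid(current);
              rcu_read_unlock();
              if (classid && classid != sk->sk_classid)
                      sk->sk_classid = classid;
      }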
      Reported-by: Subrata Modak <subrata@linux.vnet.ibm.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  18. 25 September 2010, 1 commit
    • net: fix a lockdep splat · f064af1e
      Eric Dumazet authored
      We have, for each socket:

      One spinlock (sk_slock.slock)
      One rwlock (sk_callback_lock)

      Possible scenarios are:
      
      (A) (this is used in net/sunrpc/xprtsock.c)
      read_lock(&sk->sk_callback_lock) (without blocking BH)
      <BH>
      spin_lock(&sk->sk_slock.slock);
      ...
      read_lock(&sk->sk_callback_lock);
      ...
      
      (B)
      write_lock_bh(&sk->sk_callback_lock)
      stuff
      write_unlock_bh(&sk->sk_callback_lock)
      
      (C)
      spin_lock_bh(&sk->sk_slock)
      ...
      write_lock_bh(&sk->sk_callback_lock)
      stuff
      write_unlock_bh(&sk->sk_callback_lock)
      spin_unlock_bh(&sk->sk_slock)
      
      This (C) case conflicts with (A) :
      
      CPU1 [A]                         CPU2 [C]
      read_lock(callback_lock)
      <BH>                             spin_lock_bh(slock)
      <wait to spin_lock(slock)>
                                       <wait to write_lock_bh(callback_lock)>
      
      We have one problematic (C) use case in inet_csk_listen_stop():
      
      local_bh_disable();
      bh_lock_sock(child); // spin_lock_bh(&sk->sk_slock)
      WARN_ON(sock_owned_by_user(child));
      ...
      sock_orphan(child); // write_lock_bh(&sk->sk_callback_lock)
      
      lockdep is not happy with this, as reported by Tetsuo Handa.

      It seems the only way to deal with this is to use
      read_lock_bh(callback_lock) everywhere.

      Thanks to Jarek for pointing out a bug in my first attempt and
      suggesting this solution.
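
      The conversion pattern for the (A)-style readers looks like this
      (an illustrative kernel-context sketch; the real patch touches
      net/sunrpc/xprtsock.c among others):

      static void example_sk_callback(struct sock *sk)
      {
              /* before: read_lock(&sk->sk_callback_lock);  -- deadlocks vs (C) */
              read_lock_bh(&sk->sk_callback_lock);
              if (sk->sk_user_data)
                      pr_debug("callback ran with BH disabled\n");
              read_unlock_bh(&sk->sk_callback_lock);
      }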
      Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Tested-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Jarek Poplawski <jarkao2@gmail.com>
      Tested-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  19. 10 September 2010, 1 commit
  20. 20 July 2010, 1 commit
  21. 13 July 2010, 1 commit
  22. 17 June 2010, 3 commits
  23. 07 June 2010, 1 commit
  24. 27 May 2010, 1 commit
    • net: fix lock_sock_bh/unlock_sock_bh · 8a74ad60
      Eric Dumazet authored
      This new sock lock primitive was introduced to speed up some
      user-context socket manipulation. But it is unsafe to protect two
      threads, one using regular lock_sock/release_sock and one using
      lock_sock_bh/unlock_sock_bh.

      This patch changes lock_sock_bh to be careful about the 'owned'
      state. If owned is found to be set, we must take the slow path.
      lock_sock_bh() now returns a boolean saying whether the slow path
      was taken, and this boolean is used at unlock_sock_bh time to call
      the appropriate unlock function.

      After this change, BHs are either disabled or enabled during the
      lock_sock_bh/unlock_sock_bh protected section. This might be
      misleading, so we rename these functions to
      lock_sock_fast()/unlock_sock_fast().
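
      The renamed primitives end up roughly as follows (a sketch; see
      net/core/sock.c and include/net/sock.h for the merged versions):

      bool lock_sock_fast(struct sock *sk)
      {
              might_sleep();
              spin_lock_bh(&sk->sk_lock.slock);

              if (!sk->sk_lock.owned)
                      return false;   /* fast path: slock held, BH disabled */

              /* slow path: user owns the socket, fall back to lock_sock() */
              __lock_sock(sk);
              sk->sk_lock.owned = 1;
              spin_unlock(&sk->sk_lock.slock);
              local_bh_enable();
              return true;
      }

      static inline void unlock_sock_fast(struct sock *sk, bool slow)
      {
              if (slow)
                      release_sock(sk);       /* slow path taken above */
              else
                      spin_unlock_bh(&sk->sk_lock.slock);
      }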
      Reported-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Tested-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  25. 24 May 2010, 2 commits
    • tun: Update classid on packet injection · 82862742
      Herbert Xu authored
      This patch makes tun update its socket classid every time we inject
      a packet into the network stack. This is so that any updates the
      admin makes to the process writing packets to tun take effect.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • cls_cgroup: Store classid in struct sock · f8451725
      Herbert Xu authored
      Up until now cls_cgroup has relied on fetching the classid out of
      the currently executing thread. This runs into trouble when packet
      processing is delayed, in which case it may execute in another
      thread's context.

      Furthermore, even when a packet is not delayed we may fail to
      classify it if soft IRQs have been disabled, because this scenario
      is indistinguishable from one where a packet unrelated to the
      current thread is processed by a real soft IRQ.

      In fact, the current semantics are inherently broken, as a single
      skb may be constructed out of the writes of two different tasks.
      A different manifestation of this problem is when the TCP stack
      transmits in response to an incoming ACK; such traffic is currently
      unclassified.
      
      As we already have a concept of packet ownership for accounting
      purposes in the skb->sk pointer, this is a natural place to store
      the classid in a persistent manner.
      
      This patch adds the cls_cgroup classid in struct sock, filling up
      an existing hole on 64-bit :)
      
      The value is set at socket creation time, so all sockets created
      via socket(2) automatically gain the classid of the thread creating
      them. Whenever another process touches the socket by either reading
      from or writing to it, we change the socket classid to that of the
      process if it has a valid (non-zero) classid.
      
      For sockets created on inbound connections through accept(2), we
      inherit the classid of the original listening socket through
      sk_clone, possibly preceding the actual accept(2) call.
      
      In order to minimise risks, I have not made this the authoritative
      classid.  For now it is only used as a backup when we execute
      with soft IRQs disabled.  Once we're completely happy with its
      semantics we can use it as the sole classid.
      
      Footnote: I have rearranged the error path on cls_cgroup module
      creation. If we didn't do this, there would be a window where
      someone could create a tc rule using cls_cgroup before the cgroup
      subsystem has been registered.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  26. 18 May 2010, 1 commit
    • net: add a noref bit on skb dst · 7fee226a
      Eric Dumazet authored
      Use the low-order bit of skb->_skb_dst to mark a dst that is not
      refcounted.

      Rename _skb_dst to _skb_refdst to make sure all uses are caught.

      skb_dst() returns the dst regardless of whether the noref bit is
      set, but with a lockdep check to make sure a noref dst is not handed
      out if the current user is not RCU-protected.

      New skb_dst_set_noref() helper to set a non-refcounted dst on an skb
      (with a lockdep check).

      skb_dst_drop() drops a reference only if the skb dst was refcounted.

      The skb_dst_force() helper is used to force a refcount on the dst
      when the skb is queued and no longer RCU-protected.

      Use skb_dst_force() in __sk_add_backlog(), in __dev_xmit_skb() if
      !IFF_XMIT_DST_RELEASE or the skb is enqueued on a qdisc queue, in
      sock_queue_rcv_skb(), and in __nf_queue().

      Use skb_dst_force() in dev_requeue_skb().

      Note: dst_use_noref() still dirties the dst; we might transform it
      later to do one dirtying per jiffy.
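
      The resulting helpers look roughly like this (sketch of the
      accessors in include/linux/skbuff.h):

      #define SKB_DST_NOREF   1UL
      #define SKB_DST_PTRMASK ~(SKB_DST_NOREF)

      /* Return the dst, whether or not the noref bit is set. */
      static inline struct dst_entry *skb_dst(const struct sk_buff *skb)
      {
              return (struct dst_entry *)(skb->_skb_refdst & SKB_DST_PTRMASK);
      }

      /* Attach a dst without taking a reference; only legal under RCU. */
      static inline void skb_dst_set_noref(struct sk_buff *skb,
                                           struct dst_entry *dst)
      {
              WARN_ON(!rcu_read_lock_held());
              skb->_skb_refdst = (unsigned long)dst | SKB_DST_NOREF;
      }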
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  27. 16 May 2010, 1 commit
    • net: Introduce sk_route_nocaps · a465419b
      Eric Dumazet authored
      TCP-MD5 sessions fail intermittently when the route cache is
      invalidated: ip_queue_xmit() has to find a new route and calls
      sk_setup_caps(sk, &rt->u.dst), destroying the

      sk->sk_route_caps &= ~NETIF_F_GSO_MASK

      that MD5 desperately tries to enforce all along its path (from
      tcp_transmit_skb(), for example).

      So we send a few bad packets, and everything is fine again when
      tcp_transmit_skb() is called for this socket.

      Since ip_queue_xmit() is at a lower level than TCP-MD5, I chose to
      use a socket field, sk_route_nocaps, containing bits to mask out of
      sk_route_caps.
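
      A sketch of the new helper (cf. include/net/sock.h; sk_setup_caps()
      re-applies the mask after every route change):

      /* Record features this socket must never use, so that TCP-MD5's
       * ~NETIF_F_GSO_MASK can no longer be lost on a route change. */
      static inline void sk_nocaps_add(struct sock *sk, int flags)
      {
              sk->sk_route_nocaps |= flags;
              sk->sk_route_caps &= ~flags;
      }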
      Reported-by: Bhaskar Dutta <bhaskie@gmail.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  28. 02 May 2010, 1 commit
    • net: sock_def_readable() and friends RCU conversion · 43815482
      Eric Dumazet authored
      The sk_callback_lock rwlock actually protects the sk->sk_sleep
      pointer, so we need two atomic operations (and the associated
      dirtying) per incoming packet.

      An RCU conversion is pretty much needed:
      
      1) Add a new structure, called "struct socket_wq", to hold all
      fields that will need rcu_read_lock() protection (currently: a
      wait_queue_head_t and a struct fasync_struct pointer).

      [A future patch will add a list anchor for wakeup coalescing]

      2) Attach one such structure to each "struct socket" created in
      sock_alloc_inode().

      3) Respect the RCU grace period when freeing a "struct socket_wq".

      4) Replace the sk_sleep pointer in "struct sock" with sk_wq, a
      pointer to "struct socket_wq".

      5) Change the sk_sleep() function to use the new sk->sk_wq instead
      of sk->sk_sleep.

      6) Change sk_has_sleeper() to wq_has_sleeper(), which must be used
      inside an rcu_read_lock() section (see the sketch below).

      7) Change all sk_has_sleeper() callers to:
        - Use rcu_read_lock() instead of read_lock(&sk->sk_callback_lock)
        - Use wq_has_sleeper() to wake up tasks when needed
        - Use rcu_read_unlock() instead of read_unlock(&sk->sk_callback_lock)

      8) sock_wake_async() is modified to use RCU protection as well.

      9) Exceptions:
        macvtap, drivers/net/tun.c and af_unix use an embedded "struct
      socket_wq" instead of dynamically allocated ones. They don't need
      RCU freeing.

      Some cleanups or follow-ups are probably needed (a possible
      sk_callback_lock conversion to a spinlock, for example).
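
      A sketch of the resulting reader pattern (cf. wq_has_sleeper() in
      include/net/sock.h and sock_def_readable() in net/core/sock.c):

      static inline int wq_has_sleeper(struct socket_wq *wq)
      {
              /* Pairs with the barrier in sock_poll_wait(), so a waiter
               * added just before this test is not missed. */
              smp_mb();
              return wq && waitqueue_active(&wq->wait);
      }

      static void sock_def_readable(struct sock *sk, int len)
      {
              struct socket_wq *wq;

              rcu_read_lock();  /* replaces read_lock(&sk->sk_callback_lock) */
              wq = rcu_dereference(sk->sk_wq);
              if (wq_has_sleeper(wq))
                      wake_up_interruptible_sync_poll(&wq->wait,
                                      POLLIN | POLLRDNORM | POLLRDBAND);
              sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
              rcu_read_unlock();
      }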
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  29. 28 April 2010, 1 commit
    • net: sk_add_backlog() take rmem_alloc into account · c377411f
      Eric Dumazet authored
      The current socket backlog limit is not enough to really stop DDoS
      attacks, because the user thread spends a long time processing a
      full backlog each round and can end up spinning madly on the socket
      lock.

      We should add the backlog size and the receive_queue size (aka
      rmem_alloc) together to pace writers, and let the user thread run
      without being slowed down too much.

      Introduce a sk_rcvqueues_full() helper, to avoid taking the socket
      lock in stress situations.
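
      A sketch of the helper (the merged version in include/net/sock.h
      should be close to this):

      static inline bool sk_rcvqueues_full(const struct sock *sk,
                                           const struct sk_buff *skb)
      {
              /* Charge both the backlog and the regular receive queue
               * against sk_rcvbuf, before the writer grabs the lock. */
              unsigned int qsize = sk->sk_backlog.len +
                                   atomic_read(&sk->sk_rmem_alloc);

              return qsize + skb->truesize > sk->sk_rcvbuf;
      }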
      
      Under huge stress from a multiqueue/RPS-enabled NIC, a single-flow
      UDP receiver can now process ~200,000 pps (instead of ~100 pps
      before the patch) on an 8-core machine.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  30. 21 April 2010, 1 commit
  31. 13 April 2010, 1 commit
    • net: sk_dst_cache RCUification · b6c6712a
      Eric Dumazet authored
      With the latest CONFIG_PROVE_RCU infrastructure, I felt comfortable
      enough to make this work.

      sk->sk_dst_cache is currently protected by a rwlock (sk_dst_lock).

      This rwlock is read-locked for a very small amount of time, and dst
      entries are already freed after an RCU grace period. This calls for
      RCU again :)

      This patch converts sk_dst_lock to a spinlock and uses RCU for
      readers.

      __sk_dst_get() is supposed to be called with rcu_read_lock() held,
      or with the socket locked by the user, so use the appropriate
      rcu_dereference_check() condition (rcu_read_lock_held() ||
      sock_owned_by_user(sk)).

      This patch avoids two atomic ops per tx packet on UDP connected
      sockets, for example, and permits sk_dst_lock to be dirtied much
      less.
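
      The reader side then becomes (sketch, cf. include/net/sock.h):

      static inline struct dst_entry *__sk_dst_get(struct sock *sk)
      {
              /* Legal either inside rcu_read_lock() or with the socket
               * owned by the user; lockdep verifies exactly that. */
              return rcu_dereference_check(sk->sk_dst_cache,
                                           rcu_read_lock_held() ||
                                           sock_owned_by_user(sk));
      }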
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  32. 08 March 2010, 1 commit
  33. 06 March 2010, 2 commits
    • net: backlog functions rename · a3a858ff
      Zhu Yi authored
      sk_add_backlog -> __sk_add_backlog
      sk_add_backlog_limited -> sk_add_backlog
      Signed-off-by: Zhu Yi <yi.zhu@intel.com>
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: add limit for socket backlog · 8eae939f
      Zhu Yi authored
      We got a system OOM while running some UDP netperf tests over the
      loopback device: multiple senders sent streams of UDP packets to a
      single receiver via loopback on the local host. The receiver, of
      course, is not able to handle all the packets in time. But we were
      surprised to find that these packets were not discarded due to the
      receiver's sk->sk_rcvbuf limit. Instead, they kept queueing up on
      sk->sk_backlog and finally ate up all the memory. We believe this is
      a security hole that an unprivileged user can use to crash the
      system.

      The root cause of this problem is that when the receiver is doing
      __release_sock() (i.e. after a userspace recv; kernel udp_recvmsg ->
      skb_free_datagram_locked -> release_sock), it moves skbs from the
      backlog to sk_receive_queue with softirqs enabled. In the above
      case, multiple busy senders will almost turn this into an endless
      loop, and the skbs in the backlog end up eating all the system
      memory.

      The issue is not limited to UDP; any protocol using the socket
      backlog is potentially affected. The patch adds a limit for the
      socket backlog so that the backlog size cannot be expanded
      endlessly.
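
      A sketch of the limited enqueue (hedged: the exact cap expression in
      the merged patch may differ; the helper was later renamed
      sk_add_backlog by the follow-up commit above):

      static inline int sk_add_backlog_limited(struct sock *sk,
                                               struct sk_buff *skb)
      {
              /* Refuse to queue once the backlog holds roughly as much
               * as the receive buffer allows; the caller drops the skb. */
              if (sk->sk_backlog.len >= (sk->sk_rcvbuf << 1))
                      return -ENOBUFS;

              __sk_add_backlog(sk, skb);
              sk->sk_backlog.len += skb->truesize;
              return 0;
      }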
      Reported-by: Alex Shi <alex.shi@intel.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
      Cc: "Pekka Savola (ipv6)" <pekkas@netcore.fi>
      Cc: Patrick McHardy <kaber@trash.net>
      Cc: Vlad Yasevich <vladislav.yasevich@hp.com>
      Cc: Sridhar Samudrala <sri@us.ibm.com>
      Cc: Jon Maloy <jon.maloy@ericsson.com>
      Cc: Allan Stephens <allan.stephens@windriver.com>
      Cc: Andrew Hendry <andrew.hendry@gmail.com>
      Signed-off-by: Zhu Yi <yi.zhu@intel.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  34. 25 February 2010, 1 commit
    • net: Add checking to rcu_dereference() primitives · a898def2
      Paul E. McKenney authored
      Update rcu_dereference() primitives to use new lockdep-based
      checking. The rcu_dereference() in __in6_dev_get() may be
      protected either by rcu_read_lock() or RTNL, per Eric Dumazet.
      The rcu_dereference() in __sk_free() is protected by the fact
      that it is never reached if an update could change it.  Check
      for this by using rcu_dereference_check() to verify that the
      struct sock's ->sk_wmem_alloc counter is zero.
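
      In __sk_free() this becomes roughly (sketch; cf. net/core/sock.c):

      static void example_free_filter(struct sock *sk)
      {
              struct sk_filter *filter;

              /* __sk_free() only runs once sk_wmem_alloc has dropped to
               * zero, so no updater can still be changing sk_filter. */
              filter = rcu_dereference_check(sk->sk_filter,
                              atomic_read(&sk->sk_wmem_alloc) == 0);
              if (filter) {
                      sk_filter_uncharge(sk, filter);
                      rcu_assign_pointer(sk->sk_filter, NULL);
              }
      }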
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1266887105-1528-5-git-send-email-paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  35. 18 February 2010, 1 commit
  36. 18 January 2010, 1 commit