1. 08 Dec 2010, 1 commit
  2. 26 Oct 2010, 1 commit
  3. 08 Oct 2010, 1 commit
    • net: suppress RCU lockdep false positive in sock_update_classid · 1144182a
      Committed by Paul E. McKenney
      > ===================================================
      > [ INFO: suspicious rcu_dereference_check() usage. ]
      > ---------------------------------------------------
      > include/linux/cgroup.h:542 invoked rcu_dereference_check() without protection!
      >
      > other info that might help us debug this:
      >
      >
      > rcu_scheduler_active = 1, debug_locks = 0
      > 1 lock held by swapper/1:
      >  #0:  (net_mutex){+.+.+.}, at: [<ffffffff813e9010>]
      > register_pernet_subsys+0x1f/0x47
      >
      > stack backtrace:
      > Pid: 1, comm: swapper Not tainted 2.6.35.4-28.fc14.x86_64 #1
      > Call Trace:
      >  [<ffffffff8107bd3a>] lockdep_rcu_dereference+0xaa/0xb3
      >  [<ffffffff813e04b9>] sock_update_classid+0x7c/0xa2
      >  [<ffffffff813e054a>] sk_alloc+0x6b/0x77
      >  [<ffffffff8140b281>] __netlink_create+0x37/0xab
      >  [<ffffffff813f941c>] ? rtnetlink_rcv+0x0/0x2d
      >  [<ffffffff8140cee1>] netlink_kernel_create+0x74/0x19d
      >  [<ffffffff8149c3ca>] ? __mutex_lock_common+0x339/0x35b
      >  [<ffffffff813f7e9c>] rtnetlink_net_init+0x2e/0x48
      >  [<ffffffff813e8d7a>] ops_init+0xe9/0xff
      >  [<ffffffff813e8f0d>] register_pernet_operations+0xab/0x130
      >  [<ffffffff813e901f>] register_pernet_subsys+0x2e/0x47
      >  [<ffffffff81db7bca>] rtnetlink_init+0x53/0x102
      >  [<ffffffff81db835c>] netlink_proto_init+0x126/0x143
      >  [<ffffffff81db8236>] ? netlink_proto_init+0x0/0x143
      >  [<ffffffff810021b8>] do_one_initcall+0x72/0x186
      >  [<ffffffff81d78ebc>] kernel_init+0x23b/0x2c9
      >  [<ffffffff8100aae4>] kernel_thread_helper+0x4/0x10
      >  [<ffffffff8149e2d0>] ? restore_args+0x0/0x30
      >  [<ffffffff81d78c81>] ? kernel_init+0x0/0x2c9
      >  [<ffffffff8100aae0>] ? kernel_thread_helper+0x0/0x10
      
      The sock_update_classid() function calls task_cls_classid(current),
      but the calling task cannot go away, so there is no danger of
      the associated structures disappearing.  Insert an RCU read-side
      critical section to suppress the false positive.
      Reported-by: Subrata Modak <subrata@linux.vnet.ibm.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      1144182a
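      A minimal sketch of the shape of this fix, assuming the sock_update_classid()
      body of that era (the exact code in the tree may differ slightly):

          /* The task cannot exit while it runs this code, so the RCU
           * read-side section exists only to satisfy lockdep's
           * rcu_dereference_check(); it changes no locking semantics.
           */
          void sock_update_classid(struct sock *sk)
          {
                  u32 classid;

                  rcu_read_lock();        /* suppress the false positive */
                  classid = task_cls_classid(current);
                  rcu_read_unlock();
                  if (classid && classid != sk->sk_classid)
                          sk->sk_classid = classid;
          }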
  4. 25 Sep 2010, 1 commit
    • net: fix a lockdep splat · f064af1e
      Committed by Eric Dumazet
      We have, for each socket:
      
      One spinlock (sk_slock.slock)
      One rwlock (sk_callback_lock)
      
      Possible scenarios are:
      
      (A) (this is used in net/sunrpc/xprtsock.c)
      read_lock(&sk->sk_callback_lock) (without blocking BH)
      <BH>
      spin_lock(&sk->sk_slock.slock);
      ...
      read_lock(&sk->sk_callback_lock);
      ...
      
      (B)
      write_lock_bh(&sk->sk_callback_lock)
      stuff
      write_unlock_bh(&sk->sk_callback_lock)
      
      (C)
      spin_lock_bh(&sk->sk_slock)
      ...
      write_lock_bh(&sk->sk_callback_lock)
      stuff
      write_unlock_bh(&sk->sk_callback_lock)
      spin_unlock_bh(&sk->sk_slock)
      
      This (C) case conflicts with (A):
      
      CPU1 [A]                         CPU2 [C]
      read_lock(callback_lock)
      <BH>                             spin_lock_bh(slock)
      <wait to spin_lock(slock)>
                                       <wait to write_lock_bh(callback_lock)>
      
      We have one problematic (C) use case in inet_csk_listen_stop():
      
      local_bh_disable();
      bh_lock_sock(child); // spin_lock_bh(&sk->sk_slock)
      WARN_ON(sock_owned_by_user(child));
      ...
      sock_orphan(child); // write_lock_bh(&sk->sk_callback_lock)
      
      lockdep is not happy with this, as reported by Tetsuo Handa
      
      It seems the only way to deal with this is to use read_lock_bh(callback_lock)
      everywhere.
      
      Thanks to Jarek for pointing out a bug in my first attempt and suggesting
      this solution.
      Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Tested-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Jarek Poplawski <jarkao2@gmail.com>
      Tested-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f064af1e
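      A hedged sketch of what an (A)-style reader looks like after this change;
      xs_data_ready_sketch is an illustrative name, not the actual xprtsock.c
      function:

          /* Taking the callback lock with read_lock_bh() keeps softirqs
           * from interrupting us while we hold it, so the (A)/(C)
           * deadlock diagrammed above can no longer occur.
           */
          static void xs_data_ready_sketch(struct sock *sk, int bytes)
          {
                  read_lock_bh(&sk->sk_callback_lock);
                  /* ... inspect sk->sk_user_data, wake the transport ... */
                  read_unlock_bh(&sk->sk_callback_lock);
          }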
  5. 10 Sep 2010, 1 commit
  6. 20 Jul 2010, 1 commit
  7. 13 Jul 2010, 1 commit
  8. 17 Jun 2010, 3 commits
  9. 07 Jun 2010, 1 commit
  10. 27 May 2010, 1 commit
    • net: fix lock_sock_bh/unlock_sock_bh · 8a74ad60
      Committed by Eric Dumazet
      This new sock lock primitive was introduced to speed up some user-context
      socket manipulation. But it is unsafe when protecting two threads, one using
      regular lock_sock/release_sock and one using lock_sock_bh/unlock_sock_bh.
      
      This patch changes lock_sock_bh to be careful against 'owned' state.
      If owned is found to be set, we must take the slow path.
      lock_sock_bh() now returns a boolean to say if the slow path was taken,
      and this boolean is used at unlock_sock_bh time to call the appropriate
      unlock function.
      
      After this change, BHs are either disabled or enabled during the
      lock_sock_bh/unlock_sock_bh protected section, depending on the path
      taken. This might be misleading, so we rename these functions to
      lock_sock_fast()/unlock_sock_fast().
      Reported-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Tested-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8a74ad60
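      A hedged usage sketch of the renamed pair, assuming the signatures
      described above (lock_sock_fast() returning a bool that is consumed by
      unlock_sock_fast()); the caller name is illustrative:

          static void sk_short_update_sketch(struct sock *sk)
          {
                  /* remembers whether the slow path (full lock_sock(),
                   * BHs left enabled) had to be taken */
                  bool slow = lock_sock_fast(sk);

                  /* ... short user-context manipulation of sk state ... */

                  unlock_sock_fast(sk, slow);
          }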
  11. 24 May 2010, 2 commits
    • tun: Update classid on packet injection · 82862742
      Committed by Herbert Xu
      This patch makes tun update its socket classid every time we
      inject a packet into the network stack.  This is so that any
      updates made by the admin to the process writing packets to
      tun take effect.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      82862742
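      A hedged sketch of where the update lands in the tun receive path;
      tun_inject_sketch and the tun->socket member reflect the driver of that
      era but are illustrative here:

          static void tun_inject_sketch(struct tun_struct *tun,
                                        struct sk_buff *skb)
          {
                  /* refresh the classid from the task writing the packet,
                   * so later cgroup changes by the admin take effect */
                  sock_update_classid(tun->socket.sk);
                  netif_rx_ni(skb);
          }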
    • cls_cgroup: Store classid in struct sock · f8451725
      Committed by Herbert Xu
      Up until now cls_cgroup has relied on fetching the classid out of
      the currently executing thread.  This runs into trouble when packet
      processing is delayed, in which case it may execute in another
      thread's context.
      
      Furthermore, even when a packet is not delayed we may fail to
      classify it if soft IRQs have been disabled, because this scenario
      is indistinguishable from one where a packet unrelated to the
      current thread is processed by a real soft IRQ.
      
      In fact, the current semantics are inherently broken, as a single
      skb may be constructed out of the writes of two different tasks.
      A different manifestation of this problem is when the TCP stack
      transmits in response to an incoming ACK.  This is currently
      unclassified.
      
      As we already have a concept of packet ownership for accounting
      purposes in the skb->sk pointer, this is a natural place to store
      the classid in a persistent manner.
      
      This patch adds the cls_cgroup classid in struct sock, filling up
      an existing hole on 64-bit :)
      
      The value is set at socket creation time.  So all sockets created
      via socket(2) automatically gain the ID of the thread creating them.
      Whenever another process touches the socket by either reading or
      writing to it, we will change the socket classid to that of the
      process if it has a valid (non-zero) classid.
      
      For sockets created on inbound connections through accept(2), we
      inherit the classid of the original listening socket through
      sk_clone, possibly preceding the actual accept(2) call.
      
      In order to minimise risks, I have not made this the authoritative
      classid.  For now it is only used as a backup when we execute
      with soft IRQs disabled.  Once we're completely happy with its
      semantics we can use it as the sole classid.
      
      Footnote: I have rearranged the error path on cls_cgroup module
      creation.  If we didn't do this, then there would be a window where
      someone could create a tc rule using cls_cgroup before the cgroup
      subsystem has been registered.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f8451725
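      A hedged sketch of the classifier-side fallback this enables; the names
      and the softirq test are illustrative of the approach, not the exact hunk:

          static u32 classify_sketch(struct sk_buff *skb)
          {
                  u32 classid;

                  rcu_read_lock();
                  classid = task_cls_classid(current);
                  rcu_read_unlock();

                  /* in a softirq, `current` is unrelated to the packet:
                   * fall back to the classid stamped on the owning socket */
                  if (in_serving_softirq())
                          classid = skb->sk ? skb->sk->sk_classid : 0;
                  return classid;
          }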
  12. 18 May 2010, 1 commit
    • net: add a noref bit on skb dst · 7fee226a
      Committed by Eric Dumazet
      Use the low-order bit of skb->_skb_dst to indicate the dst is not refcounted.
      
      Rename _skb_dst to _skb_refdst to make sure all uses are caught.
      
      skb_dst() returns the dst regardless of whether the noref bit is set,
      but with a lockdep check to make sure a noref dst is not handed out if
      the current user is not RCU protected.
      
      New skb_dst_set_noref() helper to set a non-refcounted dst on a skb
      (with lockdep check).
      
      skb_dst_drop() drops a reference only if skb dst was refcounted.
      
      skb_dst_force() helper is used to force a refcount on the dst, when the
      skb is queued and no longer RCU protected.
      
      Use skb_dst_force() in __sk_add_backlog(), __dev_xmit_skb() if
      !IFF_XMIT_DST_RELEASE or skb enqueued on qdisc queue, in
      sock_queue_rcv_skb(), in __nf_queue().
      
      Use skb_dst_force() in dev_requeue_skb().
      
      Note: dst_use_noref() still dirties the dst; we might transform it
      later to do only one dirtying per jiffy.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7fee226a
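      A hedged sketch of the bit-stealing scheme (dst_entry pointers are
      word-aligned, so bit 0 of _skb_refdst is free); the helper names carry a
      _sketch suffix to mark them as illustrative:

          #define SKB_DST_NOREF_SKETCH   1UL
          #define SKB_DST_PTRMASK_SKETCH (~(SKB_DST_NOREF_SKETCH))

          static inline struct dst_entry *skb_dst_sketch(const struct sk_buff *skb)
          {
                  return (struct dst_entry *)
                         (skb->_skb_refdst & SKB_DST_PTRMASK_SKETCH);
          }

          static inline void skb_dst_set_noref_sketch(struct sk_buff *skb,
                                                      struct dst_entry *dst)
          {
                  /* caller must be in an RCU read-side section; the real
                   * helper lockdep-checks this */
                  skb->_skb_refdst = (unsigned long)dst | SKB_DST_NOREF_SKETCH;
          }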
  13. 16 May 2010, 1 commit
    • net: Introduce sk_route_nocaps · a465419b
      Committed by Eric Dumazet
      TCP-MD5 sessions have intermittent failures when the route cache is
      invalidated. ip_queue_xmit() has to find a new route and calls
      sk_setup_caps(sk, &rt->u.dst), destroying the
      
      sk->sk_route_caps &= ~NETIF_F_GSO_MASK
      
      setting that the MD5 code desperately tries to maintain all along its
      path (from tcp_transmit_skb(), for example).
      
      So we send a few bad packets, and everything is fine when
      tcp_transmit_skb() is called again for this socket.
      
      Since ip_queue_xmit() is at a lower level than TCP-MD5, I chose to use a
      socket field, sk_route_nocaps, containing bits to mask on sk_route_caps.
      Reported-by: Bhaskar Dutta <bhaskie@gmail.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a465419b
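      A hedged sketch of how the new field is consumed; sk_setup_caps_sketch
      is illustrative, but the final masking line is the point of the patch:

          static void sk_setup_caps_sketch(struct sock *sk,
                                           struct dst_entry *dst)
          {
                  __sk_dst_set(sk, dst);
                  sk->sk_route_caps = dst->dev->features;
                  /* ... GSO sanity checks elided ... */

                  /* bits vetoed by the socket (e.g. NETIF_F_GSO_MASK for
                   * TCP-MD5) can no longer be resurrected by a route change */
                  sk->sk_route_caps &= ~sk->sk_route_nocaps;
          }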
  14. 02 May 2010, 1 commit
    • net: sock_def_readable() and friends RCU conversion · 43815482
      Committed by Eric Dumazet
      The sk_callback_lock rwlock actually protects the sk->sk_sleep pointer,
      so we need two atomic operations (and the associated cache-line
      dirtying) per incoming packet.
      
      An RCU conversion is pretty much needed:
      
      1) Add a new structure, called "struct socket_wq" to hold all fields
      that will need rcu_read_lock() protection (currently: a
      wait_queue_head_t and a struct fasync_struct pointer).
      
      [Future patch will add a list anchor for wakeup coalescing]
      
      2) Attach one such structure to each "struct socket" created in
      sock_alloc_inode().
      
      3) Respect an RCU grace period when freeing a "struct socket_wq"
      
      4) Replace the sk_sleep pointer in "struct sock" with sk_wq, a pointer
      to "struct socket_wq"
      
      5) Change the sk_sleep() function to use the new sk->sk_wq instead of
      sk->sk_sleep
      
      6) Change sk_has_sleeper() to wq_has_sleeper(), which must be used
      inside an rcu_read_lock() section.
      
      7) Change all sk_has_sleeper() callers to:
        - Use rcu_read_lock() instead of read_lock(&sk->sk_callback_lock)
        - Use wq_has_sleeper() to wake up tasks when needed.
        - Use rcu_read_unlock() instead of read_unlock(&sk->sk_callback_lock)
      
      8) sock_wake_async() is modified to use RCU protection as well.
      
      9) Exceptions:
        macvtap, drivers/net/tun.c, and af_unix use an integrated "struct
      socket_wq" instead of dynamically allocated ones. They don't need RCU
      freeing.
      
      Some cleanups or follow-ups are probably needed (a possible
      sk_callback_lock conversion to a spinlock, for example...).
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      43815482
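      A hedged sketch of a converted wakeup callback after step 7; this is
      close to the shape described above, though the exact poll flags may
      differ:

          static void sock_def_readable_sketch(struct sock *sk, int len)
          {
                  struct socket_wq *wq;

                  rcu_read_lock();  /* replaces read_lock(&sk->sk_callback_lock) */
                  wq = rcu_dereference(sk->sk_wq);
                  if (wq_has_sleeper(wq))
                          wake_up_interruptible_sync_poll(&wq->wait,
                                                          POLLIN | POLLRDNORM);
                  rcu_read_unlock();
          }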
  15. 28 Apr 2010, 1 commit
    • net: sk_add_backlog() take rmem_alloc into account · c377411f
      Committed by Eric Dumazet
      The current socket backlog limit is not enough to really stop DDOS
      attacks, because the user thread spends a lot of time processing a full
      backlog each round, and the user might spin madly on the socket lock.
      
      We should add the backlog size and receive_queue size (aka rmem_alloc)
      together to pace writers, and let the user run without being slowed
      down too much.
      
      Introduce a sk_rcvqueues_full() helper, to avoid taking the socket lock
      in stress situations.
      
      Under huge stress from a multiqueue/RPS enabled NIC, a single-flow UDP
      receiver can now process ~200,000 pps (instead of ~100 pps before the
      patch) on an 8-core machine.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c377411f
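      A hedged sketch of the helper, assuming the accounting described above
      (backlog length plus the already-charged receive queue, both measured
      against sk_rcvbuf):

          static inline bool sk_rcvqueues_full_sketch(const struct sock *sk,
                                                      const struct sk_buff *skb)
          {
                  unsigned int qsize = sk->sk_backlog.len +
                                       atomic_read(&sk->sk_rmem_alloc);

                  /* can be evaluated before bh_lock_sock(): a flooded
                   * socket drops without ever touching its lock */
                  return qsize + skb->truesize > sk->sk_rcvbuf;
          }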
  16. 21 Apr 2010, 1 commit
  17. 13 Apr 2010, 1 commit
    • net: sk_dst_cache RCUification · b6c6712a
      Committed by Eric Dumazet
      With the latest CONFIG_PROVE_RCU infrastructure, I felt comfortable
      enough to make this work.
      
      sk->sk_dst_cache is currently protected by a rwlock (sk_dst_lock)
      
      This rwlock is read-locked for a very short time, and dst entries are
      already freed after an RCU grace period. This calls for RCU again :)
      
      This patch converts sk_dst_lock to a spinlock, and uses RCU for readers.
      
      __sk_dst_get() is supposed to be called with rcu_read_lock() held, or
      with the socket locked by the user, so use the appropriate
      rcu_dereference_check() condition (rcu_read_lock_held() ||
      sock_owned_by_user(sk)).
      
      This patch avoids two atomic ops per tx packet on UDP connected sockets,
      for example, and permits sk_dst_lock to be much less dirtied.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b6c6712a
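      A hedged sketch of the reader side, matching the check quoted above:

          static inline struct dst_entry *__sk_dst_get_sketch(struct sock *sk)
          {
                  /* legal under rcu_read_lock() OR with the socket owned
                   * by the user; lockdep verifies whichever applies */
                  return rcu_dereference_check(sk->sk_dst_cache,
                                               rcu_read_lock_held() ||
                                               sock_owned_by_user(sk));
          }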
  18. 08 Mar 2010, 1 commit
  19. 06 Mar 2010, 2 commits
    • net: backlog functions rename · a3a858ff
      Committed by Zhu Yi
      sk_add_backlog -> __sk_add_backlog
      sk_add_backlog_limited -> sk_add_backlog
      Signed-off-by: Zhu Yi <yi.zhu@intel.com>
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a3a858ff
    • net: add limit for socket backlog · 8eae939f
      Committed by Zhu Yi
      We got a system OOM while running some UDP netperf testing on the
      loopback device. The case is multiple senders sending stream UDP packets
      to a single receiver via loopback on the local host. Of course, the
      receiver is not able to handle all the packets in time. But we
      surprisingly found that these packets were not discarded due to the
      receiver's sk->sk_rcvbuf limit. Instead, they kept queuing to
      sk->sk_backlog and finally ate up all the memory. We believe this is a
      security hole through which a non-privileged user can crash the system.
      
      The root cause of this problem is that when the receiver is doing
      __release_sock() (i.e. after userspace recv; kernel udp_recvmsg ->
      skb_free_datagram_locked -> release_sock), it moves skbs from the
      backlog to sk_receive_queue with softirqs enabled. In the above case,
      multiple busy senders will almost make it an endless loop. The skbs in
      the backlog end up eating all the system memory.
      
      The issue is not limited to UDP. Any protocol using the socket backlog
      is potentially affected. The patch adds a limit for the socket backlog
      so that the backlog size cannot grow endlessly.
      Reported-by: Alex Shi <alex.shi@intel.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
      Cc: "Pekka Savola (ipv6)" <pekkas@netcore.fi>
      Cc: Patrick McHardy <kaber@trash.net>
      Cc: Vlad Yasevich <vladislav.yasevich@hp.com>
      Cc: Sridhar Samudrala <sri@us.ibm.com>
      Cc: Jon Maloy <jon.maloy@ericsson.com>
      Cc: Allan Stephens <allan.stephens@windriver.com>
      Cc: Andrew Hendry <andrew.hendry@gmail.com>
      Signed-off-by: Zhu Yi <yi.zhu@intel.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8eae939f
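      A hedged sketch of the limited enqueue (renamed to sk_add_backlog() by
      the follow-up commit listed above); the threshold policy here is
      illustrative:

          static inline int sk_add_backlog_limited_sketch(struct sock *sk,
                                                          struct sk_buff *skb)
          {
                  /* refuse once the backlog outgrows the receive buffer;
                   * the caller drops the skb and bumps its drop counters */
                  if (sk->sk_backlog.len + skb->truesize > sk->sk_rcvbuf)
                          return -ENOBUFS;

                  __sk_add_backlog(sk, skb);      /* unlimited internal variant */
                  sk->sk_backlog.len += skb->truesize;
                  return 0;
          }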
  20. 25 Feb 2010, 1 commit
    • net: Add checking to rcu_dereference() primitives · a898def2
      Committed by Paul E. McKenney
      Update rcu_dereference() primitives to use new lockdep-based
      checking. The rcu_dereference() in __in6_dev_get() may be
      protected either by rcu_read_lock() or RTNL, per Eric Dumazet.
      The rcu_dereference() in __sk_free() is protected by the fact
      that it is never reached if an update could change it.  Check
      for this by using rcu_dereference_check() to verify that the
      struct sock's ->sk_wmem_alloc counter is zero.
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1266887105-1528-5-git-send-email-paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a898def2
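      A hedged sketch of the __sk_free() case described above:

          static void __sk_free_sketch(struct sock *sk)
          {
                  struct sk_filter *filter;

                  /* the socket is unreachable once sk_wmem_alloc drops to
                   * zero, and the check encodes exactly that assumption */
                  filter = rcu_dereference_check(sk->sk_filter,
                                  atomic_read(&sk->sk_wmem_alloc) == 0);
                  if (filter)
                          sk_filter_uncharge(sk, filter);
                  /* ... remainder of socket teardown elided ... */
          }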
  21. 18 Feb 2010, 1 commit
  22. 18 Jan 2010, 1 commit
  23. 15 Jan 2010, 1 commit
  24. 08 Jan 2010, 1 commit
  25. 06 Nov 2009, 2 commits
  26. 21 Oct 2009, 2 commits
  27. 15 Oct 2009, 1 commit
  28. 13 Oct 2009, 1 commit
    • net: Generalize socket rx gap / receive queue overflow cmsg · 3b885787
      Committed by Neil Horman
      Create a new socket-level option to report the number of queue overflows.
      
      Recently I augmented the AF_PACKET protocol to report the number of
      frames lost on the socket receive queue between any two enqueued frames.
      This value was exported via a SOL_PACKET level cmsg. After I completed
      that work it was requested that this feature be generalized so that any
      datagram-oriented socket could make use of this option. As such I've
      created this patch. It creates a new SOL_SOCKET level option called
      SO_RXQ_OVFL, which when enabled exports a SOL_SOCKET level cmsg that
      reports the number of times the sk_receive_queue overflowed between any
      two given frames. It also augments the AF_PACKET protocol to take
      advantage of this new feature (as it previously did not touch
      sk->sk_drops, which this patch uses to record the overflow count).
      Tested successfully by me.
      
      Notes:
      
      1) Unlike my previous patch, this patch simply records the sk_drops value, which
      is not a number of drops between packets, but rather a total number of drops.
      Deltas must be computed in user space.
      
      2) While this patch currently works with datagram-oriented protocols, it
      will also be accepted by non-datagram-oriented protocols. I'm not sure
      if that's agreeable to everyone, but my argument in favor of doing so is
      that, for those protocols to which this option isn't applicable,
      sk_drops will always be zero, and reporting no drops on a receive queue
      that isn't used by those non-participating protocols seems reasonable to
      me.  This also saves us having to code in a per-protocol opt-in
      mechanism.
      
      3) This applies cleanly to net-next assuming that commit
      97775007 (my af packet cmsg patch) is reverted
      Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3b885787
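      A hedged userspace sketch of consuming the new cmsg, assuming the option
      has been enabled with setsockopt(fd, SOL_SOCKET, SO_RXQ_OVFL, &one,
      sizeof(one)) and headers new enough to define SO_RXQ_OVFL; per note 1,
      the value is a running total of sk_drops, so deltas are computed by the
      caller:

          #include <stdint.h>
          #include <string.h>
          #include <sys/socket.h>
          #include <sys/uio.h>

          static uint32_t recv_with_drops(int fd, void *buf, size_t len)
          {
                  char ctl[CMSG_SPACE(sizeof(uint32_t))];
                  struct iovec iov = { .iov_base = buf, .iov_len = len };
                  struct msghdr msg = {
                          .msg_iov = &iov, .msg_iovlen = 1,
                          .msg_control = ctl, .msg_controllen = sizeof(ctl),
                  };
                  struct cmsghdr *cmsg;
                  uint32_t drops = 0;

                  if (recvmsg(fd, &msg, 0) < 0)
                          return 0;
                  for (cmsg = CMSG_FIRSTHDR(&msg); cmsg;
                       cmsg = CMSG_NXTHDR(&msg, cmsg))
                          if (cmsg->cmsg_level == SOL_SOCKET &&
                              cmsg->cmsg_type == SO_RXQ_OVFL)
                                  memcpy(&drops, CMSG_DATA(cmsg), sizeof(drops));
                  return drops;
          }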
  29. 01 Oct 2009, 2 commits
  30. 22 Sep 2009, 1 commit
  31. 02 Sep 2009, 1 commit
  32. 06 Aug 2009, 2 commits
    • net: implement a SO_DOMAIN getsockoption · 0d6038ee
      Committed by Jan Engelhardt
      This sockopt goes in line with SO_TYPE and SO_PROTOCOL. It makes it
      possible for userspace programs to pass around file descriptors — I
      am referring to arguments-to-functions, but it may even work for the
      fd passing over UNIX sockets — without needing to also pass the
      auxiliary information (PF_INET6/IPPROTO_TCP).
      Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0d6038ee
    • net: implement a SO_PROTOCOL getsockoption · 49c794e9
      Committed by Jan Engelhardt
      Similar to SO_TYPE returning the socket type, SO_PROTOCOL allows
      retrieving the protocol used with a given socket.
      
      I am not quite sure why we have that many copies of socket.h, or why the
      values are not the same on all arches either, but for the arches where
      hex numbers dominate, I use 0x1029 for SO_PROTOCOL, as that seems to be
      the next free unused number across a bunch of operating systems, or so
      Google results make me want to believe. SO_PROTOCOL for the others just
      uses the next free Linux number, 38.
      Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      49c794e9
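      A hedged userspace sketch covering both this SO_PROTOCOL option and the
      SO_DOMAIN option from the previous entry (requires a kernel and libc
      new enough to define both):

          #include <stdio.h>
          #include <netinet/in.h>
          #include <sys/socket.h>

          int main(void)
          {
                  int fd = socket(AF_INET6, SOCK_STREAM, IPPROTO_TCP);
                  int domain, type, protocol;
                  socklen_t len = sizeof(int);

                  /* recover the full socket(2) triple from a bare fd */
                  getsockopt(fd, SOL_SOCKET, SO_DOMAIN,   &domain,   &len);
                  getsockopt(fd, SOL_SOCKET, SO_TYPE,     &type,     &len);
                  getsockopt(fd, SOL_SOCKET, SO_PROTOCOL, &protocol, &len);
                  printf("domain=%d type=%d protocol=%d\n",
                         domain, type, protocol);
                  return 0;
          }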