1. 22 Feb 2012, 1 commit
    • sock: Introduce the SO_PEEK_OFF sock option · ef64a54f
      Committed by Pavel Emelyanov
      This option specifies where to start MSG_PEEK-ing queue data from. When
      set to a negative value, MSG_PEEK works as usual -- it always peeks
      from the head of the queue.

      When some bytes are peeked from the queue and the peeking offset is
      non-negative, the offset is moved forward so that the next peek returns
      the next portion of data.

      When a non-peeking recvmsg occurs and the peeking offset is non-negative,
      the offset is moved backward so that the next peek still returns the proper
      data (i.e. the data that would have been picked if there were no
      non-peeking recv in between).

      The offset is set using a per-proto operation, letting the protocol handle
      the locking issues and check whether the peeking offset feature is
      supported by the protocol the socket belongs to. (A usage sketch follows
      this entry.)
      Signed-off-by: Pavel Emelyanov <xemul@parallels.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ef64a54f
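
A minimal userspace sketch of the behaviour described above, assuming a kernel
and a protocol that implement SO_PEEK_OFF (support is wired up per protocol;
the fallback constant below is the value Linux has used). The AF_UNIX stream
socketpair is only a convenient queue to peek into:

```c
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

#ifndef SO_PEEK_OFF
#define SO_PEEK_OFF 42          /* value historically used on Linux */
#endif

static void read_some(int fd, const char *label, int flags)
{
        char buf[8];
        ssize_t n = recv(fd, buf, 5, flags);

        if (n > 0)
                printf("%-8s %.*s\n", label, (int)n, buf);
}

int main(void)
{
        int sv[2];
        int off = 0;            /* non-negative: enable the peek offset */

        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
                perror("socketpair");
                return 1;
        }
        if (setsockopt(sv[1], SOL_SOCKET, SO_PEEK_OFF, &off, sizeof(off)) < 0) {
                perror("SO_PEEK_OFF not supported here");
                return 1;
        }

        if (write(sv[0], "hello world", 11) != 11)
                return 1;

        /* the first peek returns "hello" and moves the peek offset to 5 ... */
        read_some(sv[1], "peek 1:", MSG_PEEK);
        /* ... so the second peek returns " worl" rather than "hello" again */
        read_some(sv[1], "peek 2:", MSG_PEEK);
        /* a normal read consumes from the head and pulls the offset back */
        read_some(sv[1], "read:", 0);

        close(sv[0]);
        close(sv[1]);
        return 0;
}
```
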
  2. 11 Feb 2012, 1 commit
  3. 23 Jan 2012, 1 commit
    • net: introduce res_counter_charge_nofail() for socket allocations · 0e90b31f
      Committed by Glauber Costa
      There is a case in __sk_mem_schedule() where an allocation
      is beyond the maximum but we are still allowed to proceed.
      It happens under the following condition:
      
      	sk->sk_wmem_queued + size >= sk->sk_sndbuf
      
      The network code won't revert the allocation in this case,
      meaning that at some later point it will try to do so. Since
      this is never communicated to the underlying res_counter
      code, there is an imbalance in the res_counter uncharge operation.
      
      I see two ways of fixing this:
      
      1) storing the information about those allocations somewhere
         in memcg, and then deducting from that first, before
         we start draining the res_counter,
      2) providing a slightly different allocation function for
         the res_counter, that matches the original behavior of
         the network code more closely.
      
      I decided to go for #2 here, believing it to be more elegant,
      since #1 would require us to do basically the same thing, but in a
      more obscure way. (An illustrative sketch of the "nofail" semantics
      follows this entry.)
      Signed-off-by: Glauber Costa <glommer@parallels.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      CC: Tejun Heo <tj@kernel.org>
      CC: Li Zefan <lizf@cn.fujitsu.com>
      CC: Laurent Chavey <chavey@google.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0e90b31f
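
An illustrative userspace model of the two charging behaviours discussed
above. This is not the kernel's res_counter code; it only sketches why a
charge that never fails, but reports crossing the limit, keeps charges and
uncharges balanced:

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative counter, not the kernel's struct res_counter. */
struct counter {
        long usage;
        long limit;
};

/* Ordinary charge: refuse (and account nothing) when the limit would be
 * exceeded. */
static bool charge(struct counter *c, long amount)
{
        if (c->usage + amount > c->limit)
                return false;
        c->usage += amount;
        return true;
}

/* "nofail" charge: always account the amount, even past the limit, and
 * report whether the limit was crossed.  The caller is expected to
 * uncharge later, which keeps charges and uncharges balanced -- the
 * property the commit above is after. */
static bool charge_nofail(struct counter *c, long amount)
{
        bool over = c->usage + amount > c->limit;

        c->usage += amount;
        return over;
}

static void uncharge(struct counter *c, long amount)
{
        c->usage -= amount;
}

int main(void)
{
        struct counter c = { .usage = 90, .limit = 100 };

        if (!charge(&c, 20))
                printf("ordinary charge refused, usage stays at %ld\n", c.usage);

        if (charge_nofail(&c, 20))
                printf("over the limit but charged anyway, usage=%ld\n", c.usage);

        uncharge(&c, 20);       /* balanced: usage is back to 90 */
        printf("after uncharge, usage=%ld\n", c.usage);
        return 0;
}
```
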
  4. 10 Jan 2012, 1 commit
  5. 09 Jan 2012, 1 commit
  6. 08 Jan 2012, 1 commit
    • net: fix sock_clone reference mismatch with tcp memcontrol · f3f511e1
      Committed by Glauber Costa
      Sockets can also be created through sock_clone. Because it copies
      all data in the sock structure, it also copies the memcg-related pointer,
      and all should be fine. However, since we now use reference counts in
      socket creation, we are left with some sockets that have no reference
      counts. It matters when we destroy them, since it leads to a mismatch.
      Signed-off-by: Glauber Costa <glommer@parallels.com>
      CC: David S. Miller <davem@davemloft.net>
      CC: Greg Thelen <gthelen@google.com>
      CC: Hiroyuki Kamezawa <kamezawa.hiroyu@jp.fujitsu.com>
      CC: Laurent Chavey <chavey@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f3f511e1
  7. 23 Dec 2011, 1 commit
  8. 17 Dec 2011, 1 commit
    • net: fix sleeping while atomic problem in sock mem_cgroup. · 36b77a52
      Committed by Glauber Costa
      We can't scan the proto_list to initialize sock cgroups, as the list
      is protected by a rwlock and we cannot sleep while holding it, and we
      also want to keep the code generic enough to avoid calling the
      initialization functions of protocols directly.

      Convert proto_list_lock into a mutex, so we can sleep and do the
      necessary allocations. This lock is seldom taken, so there shouldn't
      be any performance penalty associated with the change.
      Signed-off-by: Glauber Costa <glommer@parallels.com>
      CC: Hiroyuki Kamezawa <kamezawa.hiroyu@jp.fujitsu.com>
      CC: David S. Miller <davem@davemloft.net>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Stephen Rothwell <sfr@canb.auug.org.au>
      CC: Randy Dunlap <rdunlap@xenotime.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      36b77a52
  9. 13 Dec 2011, 3 commits
  10. 29 Nov 2011, 1 commit
  11. 23 Nov 2011, 1 commit
    • net: add network priority cgroup infrastructure (v4) · 5bc1421e
      Committed by Neil Horman
      This patch adds the infrastructure code to create the network priority
      cgroup.  The cgroup, in addition to the standard processes file, creates
      two control files:

      1) prioidx - This is a read-only file that exports the index of this cgroup.
      The value is both arbitrary and unique to a cgroup in this subsystem,
      and is used to index the per-device priority map.

      2) priomap - This is a writeable file.  On read it reports a table of
      2-tuples <name:priority>, where name is the name of a network interface
      and priority is the priority assigned to frames egressing on the named
      interface and originating from a pid in this cgroup.

      This cgroup allows the skb priority to be set before a root qdisc is
      selected.  This is beneficial for DCB-enabled systems, in that it allows
      any application to use DCB-configured priorities without application
      modification. (A hedged userspace sketch follows this entry.)
      Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      CC: Robert Love <robert.w.love@intel.com>
      CC: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5bc1421e
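
A hedged userspace sketch of driving the two control files. The cgroup mount
point and directory are assumptions, and the interface files are referred to
by the names they ended up with in mainline (net_prio.prioidx and
net_prio.ifpriomap), while the text above calls them prioidx and priomap:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        /* assumed paths: the net_prio hierarchy mounted here, with a
         * child cgroup named "myapp" already created */
        const char *cg = "/sys/fs/cgroup/net_prio/myapp";
        char path[256];
        FILE *f;
        unsigned long idx;

        /* read the read-only index the kernel assigned to this cgroup */
        snprintf(path, sizeof(path), "%s/net_prio.prioidx", cg);
        f = fopen(path, "r");
        if (f) {
                if (fscanf(f, "%lu", &idx) == 1)
                        printf("prioidx = %lu\n", idx);
                fclose(f);
        }

        /* map frames egressing eth0 from tasks in this cgroup to priority 5 */
        snprintf(path, sizeof(path), "%s/net_prio.ifpriomap", cg);
        f = fopen(path, "w");
        if (!f) {
                perror("open ifpriomap");
                return EXIT_FAILURE;
        }
        fprintf(f, "eth0 5\n");
        fclose(f);
        return 0;
}
```
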
  12. 10 Nov 2011, 1 commit
    • net: add wireless TX status socket option · 6e3e939f
      Committed by Johannes Berg
      The 802.1X EAPOL handshake that hostapd performs requires
      knowing whether the frame was ack'ed by the peer.
      Currently, we fudge this pretty badly by not even
      transmitting the frame as a normal data frame but
      injecting it with radiotap and getting the status
      out of the radiotap monitor as well. This is rather
      complex, confuses users (the mon.wlan0 presence) and
      doesn't work with all hardware.
      
      To get rid of that hack, introduce a real wifi TX
      status option for data frame transmissions.
      
      This works similarly to the existing TX timestamping
      in that it reflects the SKB back to the socket's
      error queue with an SCM_WIFI_STATUS cmsg that carries
      an int indicating ACK status (0/1).
      
      Since it is possible that at some point we will
      want to have TX timestamping and wifi status in a
      single errqueue SKB (there's little point in not
      doing that), redefine SO_EE_ORIGIN_TIMESTAMPING
      to SO_EE_ORIGIN_TXSTATUS, which can collect more
      than just the timestamp; keep the old constant
      as an alias, of course. Currently the internal APIs
      don't make that possible, but it wouldn't be hard
      to split them up in a way that does. (A hedged
      userspace sketch of the receive side follows this entry.)
      
      Thanks to Neil Horman for helping me figure out
      the functions that add the control messages.
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: John W. Linville <linville@tuxdriver.com>
      6e3e939f
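
A hedged userspace sketch of the receive side described above: enable the
option, then read the ACK status back from the error queue. SO_WIFI_STATUS
and SCM_WIFI_STATUS may be missing from older headers, so the historical
values are provided as a fallback; the packet socket and the "transmit the
EAPOL frame here" step are placeholders for whatever socket the real user
(e.g. hostapd) sends on:

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

#ifndef SO_WIFI_STATUS
#define SO_WIFI_STATUS  41
#endif
#ifndef SCM_WIFI_STATUS
#define SCM_WIFI_STATUS SO_WIFI_STATUS
#endif

/* After sending a frame, poll the error queue for its ACK status. */
static int read_wifi_status(int fd)
{
        char data[256], ctrl[512];
        struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
        struct msghdr msg = {
                .msg_iov = &iov, .msg_iovlen = 1,
                .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
        };
        struct cmsghdr *cmsg;

        if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0)
                return -errno;                  /* nothing queued yet */

        for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
                if (cmsg->cmsg_level == SOL_SOCKET &&
                    cmsg->cmsg_type == SCM_WIFI_STATUS) {
                        int acked;

                        memcpy(&acked, CMSG_DATA(cmsg), sizeof(acked));
                        printf("frame was %s by the peer\n",
                               acked ? "acked" : "not acked");
                        return acked;
                }
        }
        return -EAGAIN;
}

int main(void)
{
        int fd = socket(AF_PACKET, SOCK_RAW, 0);  /* placeholder socket */
        int one = 1;

        if (fd < 0 || setsockopt(fd, SOL_SOCKET, SO_WIFI_STATUS,
                                 &one, sizeof(one)) < 0) {
                perror("SO_WIFI_STATUS");
                return 1;
        }
        /* ... transmit the EAPOL data frame here, then: */
        read_wifi_status(fd);
        return 0;
}
```
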
  13. 09 Nov 2011, 1 commit
  14. 26 Oct 2011, 1 commit
  15. 14 Oct 2011, 1 commit
    • net: more accurate skb truesize · 87fb4b7b
      Committed by Eric Dumazet
      skb truesize currently accounts for the sk_buff struct and part of the
      skb head.  kmalloc() roundings are also ignored.

      Considering that skb_shared_info is larger than sk_buff, it's time to
      take it into account for better memory accounting.
      
      This patch introduces SKB_TRUESIZE(X) macro to centralize various
      assumptions into a single place.
      
      At skb alloc time, we put the skb_shared_info struct at the exact end of
      the skb head, to allow a better use of memory (lowering the number of
      reallocations), since kmalloc() gives us power-of-two memory blocks.
      
      Unless SLAB/SLUB debug is active, both skb->head and skb_shared_info are
      aligned to cache lines, as before.
      
      Note: this patch might trigger performance regressions on misconfigured
      protocol stacks, which may now hit per-socket or global memory limits
      that were previously not reached. But it's a necessary step towards more
      accurate memory accounting. (An illustrative model of the computation
      follows this entry.)
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Andi Kleen <ak@linux.intel.com>
      CC: Ben Hutchings <bhutchings@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      87fb4b7b
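
An illustrative userspace model of the accounting change, not the kernel's
actual macro: the "true size" charged for a buffer becomes the payload plus
both metadata structures, each rounded up to an assumed alignment (the struct
sizes below are made up; the real ones depend on the kernel configuration):

```c
#include <stdio.h>

#define CACHE_LINE      64UL
#define DATA_ALIGN(x)   (((x) + CACHE_LINE - 1) & ~(CACHE_LINE - 1))

/* stand-ins for sizeof(struct sk_buff) and sizeof(struct skb_shared_info);
 * the real values are kernel- and config-dependent */
#define SKBUFF_SZ       232UL
#define SHARED_INFO_SZ  320UL

/* analogous in spirit to the SKB_TRUESIZE(X) macro described above */
#define TRUESIZE(x)     ((x) + DATA_ALIGN(SKBUFF_SZ) + DATA_ALIGN(SHARED_INFO_SZ))

int main(void)
{
        unsigned long payload = 1500;   /* a typical Ethernet-sized payload */

        printf("payload=%lu truesize=%lu\n", payload, TRUESIZE(payload));
        return 0;
}
```
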
  16. 08 Oct 2011, 1 commit
  17. 25 Aug 2011, 1 commit
  18. 02 Aug 2011, 1 commit
  19. 06 Jul 2011, 1 commit
  20. 22 Jun 2011, 1 commit
  21. 31 Mar 2011, 1 commit
  22. 07 Jan 2011, 1 commit
  23. 17 Dec 2010, 1 commit
    • net: fix nulls list corruptions in sk_prot_alloc · fcbdf09d
      Committed by Octavian Purdila
      Special care is taken inside sk_prot_alloc() to avoid overwriting
      skc_node/skc_nulls_node. We should also avoid overwriting
      skc_bind_node/skc_portaddr_node.
      
      The patch fixes the following crash:
      
       BUG: unable to handle kernel paging request at fffffffffffffff0
       IP: [<ffffffff812ec6dd>] udp4_lib_lookup2+0xad/0x370
       [<ffffffff812ecc22>] __udp4_lib_lookup+0x282/0x360
       [<ffffffff812ed63e>] __udp4_lib_rcv+0x31e/0x700
       [<ffffffff812bba45>] ? ip_local_deliver_finish+0x65/0x190
       [<ffffffff812bbbf8>] ? ip_local_deliver+0x88/0xa0
       [<ffffffff812eda35>] udp_rcv+0x15/0x20
       [<ffffffff812bba45>] ip_local_deliver_finish+0x65/0x190
       [<ffffffff812bbbf8>] ip_local_deliver+0x88/0xa0
       [<ffffffff812bb2cd>] ip_rcv_finish+0x32d/0x6f0
       [<ffffffff8128c14c>] ? netif_receive_skb+0x99c/0x11c0
       [<ffffffff812bb94b>] ip_rcv+0x2bb/0x350
       [<ffffffff8128c14c>] netif_receive_skb+0x99c/0x11c0
      Signed-off-by: Leonard Crestez <lcrestez@ixiacom.com>
      Signed-off-by: Octavian Purdila <opurdila@ixiacom.com>
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fcbdf09d
  24. 10 Dec 2010, 1 commit
    • net: optimize INET input path further · 68835aba
      Committed by Eric Dumazet
      Follow-up to commit b178bb3d (net: reorder struct sock fields).

      Optimize the INET input path a bit further, by:
      
      1) moving sk_refcnt close to sk_lock.

      This reduces the number of dirtied cache lines by one on 64-bit arches
      (with a 64-byte cache line size).

      2) moving inet_daddr & inet_rcv_saddr to the beginning of sk

      (same cache line as hash / family / bound_dev_if / nulls_node)

      This reduces the number of cache lines accessed in lookups by one, and
      doesn't increase the size of inet and timewait socks.
      inet and tw sockets now share the same place-holder for these fields.
      
      Before patch :
      
      offsetof(struct sock, sk_refcnt) = 0x10
      offsetof(struct sock, sk_lock) = 0x40
      offsetof(struct sock, sk_receive_queue) = 0x60
      offsetof(struct inet_sock, inet_daddr) = 0x270
      offsetof(struct inet_sock, inet_rcv_saddr) = 0x274
      
      After patch :
      
      offsetof(struct sock, sk_refcnt) = 0x44
      offsetof(struct sock, sk_lock) = 0x48
      offsetof(struct sock, sk_receive_queue) = 0x68
      offsetof(struct inet_sock, inet_daddr) = 0x0
      offsetof(struct inet_sock, inet_rcv_saddr) = 0x4
      
      compute_score() (udp or tcp) now uses a single cache line per ignored
      item, instead of two. (An illustrative offsetof check follows this entry.)
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      68835aba
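
The offsets quoted above are the kind of numbers offsetof() reports; an
illustrative check of the cache-line reasoning, on a made-up struct rather
than the real struct sock:

```c
#include <stddef.h>
#include <stdio.h>

struct example_sock {
        unsigned int  daddr;      /* placed first, like inet_daddr after the patch */
        unsigned int  rcv_saddr;
        char          pad[0x38];  /* stand-in for the fields in between */
        int           refcnt;     /* next to the lock, like sk_refcnt after the patch */
        int           lock;
};

/* which 64-byte cache line does a given offset fall into? */
#define CACHE_LINE(off) ((off) / 64)

int main(void)
{
        size_t off_daddr  = offsetof(struct example_sock, daddr);
        size_t off_refcnt = offsetof(struct example_sock, refcnt);

        printf("daddr:  offset 0x%zx, cache line %zu\n",
               off_daddr, CACHE_LINE(off_daddr));
        printf("refcnt: offset 0x%zx, cache line %zu\n",
               off_refcnt, CACHE_LINE(off_refcnt));
        return 0;
}
```
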
  25. 08 Dec 2010, 1 commit
  26. 11 Nov 2010, 1 commit
  27. 26 Oct 2010, 1 commit
  28. 08 Oct 2010, 1 commit
    • net: suppress RCU lockdep false positive in sock_update_classid · 1144182a
      Committed by Paul E. McKenney
      > ===================================================
      > [ INFO: suspicious rcu_dereference_check() usage. ]
      > ---------------------------------------------------
      > include/linux/cgroup.h:542 invoked rcu_dereference_check() without protection!
      >
      > other info that might help us debug this:
      >
      >
      > rcu_scheduler_active = 1, debug_locks = 0
      > 1 lock held by swapper/1:
      >  #0:  (net_mutex){+.+.+.}, at: [<ffffffff813e9010>]
      > register_pernet_subsys+0x1f/0x47
      >
      > stack backtrace:
      > Pid: 1, comm: swapper Not tainted 2.6.35.4-28.fc14.x86_64 #1
      > Call Trace:
      >  [<ffffffff8107bd3a>] lockdep_rcu_dereference+0xaa/0xb3
      >  [<ffffffff813e04b9>] sock_update_classid+0x7c/0xa2
      >  [<ffffffff813e054a>] sk_alloc+0x6b/0x77
      >  [<ffffffff8140b281>] __netlink_create+0x37/0xab
      >  [<ffffffff813f941c>] ? rtnetlink_rcv+0x0/0x2d
      >  [<ffffffff8140cee1>] netlink_kernel_create+0x74/0x19d
      >  [<ffffffff8149c3ca>] ? __mutex_lock_common+0x339/0x35b
      >  [<ffffffff813f7e9c>] rtnetlink_net_init+0x2e/0x48
      >  [<ffffffff813e8d7a>] ops_init+0xe9/0xff
      >  [<ffffffff813e8f0d>] register_pernet_operations+0xab/0x130
      >  [<ffffffff813e901f>] register_pernet_subsys+0x2e/0x47
      >  [<ffffffff81db7bca>] rtnetlink_init+0x53/0x102
      >  [<ffffffff81db835c>] netlink_proto_init+0x126/0x143
      >  [<ffffffff81db8236>] ? netlink_proto_init+0x0/0x143
      >  [<ffffffff810021b8>] do_one_initcall+0x72/0x186
      >  [<ffffffff81d78ebc>] kernel_init+0x23b/0x2c9
      >  [<ffffffff8100aae4>] kernel_thread_helper+0x4/0x10
      >  [<ffffffff8149e2d0>] ? restore_args+0x0/0x30
      >  [<ffffffff81d78c81>] ? kernel_init+0x0/0x2c9
      >  [<ffffffff8100aae0>] ? kernel_thread_helper+0x0/0x10
      
      The sock_update_classid() function calls task_cls_classid(current),
      but the calling task cannot go away, so there is no danger of
      the associated structures disappearing.  Insert an RCU read-side
      critical section to suppress the false positive.
      Reported-by: Subrata Modak <subrata@linux.vnet.ibm.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      1144182a
  29. 25 Sep 2010, 1 commit
    • net: fix a lockdep splat · f064af1e
      Committed by Eric Dumazet
      We have for each socket :
      
      One spinlock (sk_slock.slock)
      One rwlock (sk_callback_lock)
      
      Possible scenarios are :
      
      (A) (this is used in net/sunrpc/xprtsock.c)
      read_lock(&sk->sk_callback_lock) (without blocking BH)
      <BH>
      spin_lock(&sk->sk_slock.slock);
      ...
      read_lock(&sk->sk_callback_lock);
      ...
      
      (B)
      write_lock_bh(&sk->sk_callback_lock)
      stuff
      write_unlock_bh(&sk->sk_callback_lock)
      
      (C)
      spin_lock_bh(&sk->sk_slock)
      ...
      write_lock_bh(&sk->sk_callback_lock)
      stuff
      write_unlock_bh(&sk->sk_callback_lock)
      spin_unlock_bh(&sk->sk_slock)
      
      This (C) case conflicts with (A) :
      
      CPU1 [A]                         CPU2 [C]
      read_lock(callback_lock)
      <BH>                             spin_lock_bh(slock)
      <wait to spin_lock(slock)>
                                       <wait to write_lock_bh(callback_lock)>
      
      We have one problematic (C) use case in inet_csk_listen_stop() :
      
      local_bh_disable();
      bh_lock_sock(child); // spin_lock_bh(&sk->sk_slock)
      WARN_ON(sock_owned_by_user(child));
      ...
      sock_orphan(child); // write_lock_bh(&sk->sk_callback_lock)
      
      lockdep is not happy with this, as reported by Tetsuo Handa.

      It seems the only way to deal with this is to use read_lock_bh(callback_lock)
      everywhere.

      Thanks to Jarek for pointing out a bug in my first attempt and suggesting
      this solution.
      Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Tested-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Jarek Poplawski <jarkao2@gmail.com>
      Tested-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f064af1e
  30. 10 Sep 2010, 1 commit
  31. 20 Jul 2010, 1 commit
  32. 13 Jul 2010, 1 commit
  33. 17 Jun 2010, 3 commits
  34. 07 Jun 2010, 1 commit
  35. 27 May 2010, 1 commit
    • net: fix lock_sock_bh/unlock_sock_bh · 8a74ad60
      Committed by Eric Dumazet
      This new sock lock primitive was introduced to speed up some user-context
      socket manipulation. But it is unsafe when two threads are involved, one
      using regular lock_sock/release_sock and one using
      lock_sock_bh/unlock_sock_bh.

      This patch changes lock_sock_bh to be careful about the 'owned' state.
      If owned is found to be set, we must take the slow path.
      lock_sock_bh() now returns a boolean to say whether the slow path was
      taken, and this boolean is used at unlock_sock_bh time to call the
      appropriate unlock function.

      After this change, BHs are either disabled or enabled during the
      lock_sock_bh/unlock_sock_bh protected section. This might be misleading,
      so we rename these functions to lock_sock_fast()/unlock_sock_fast().
      (An illustrative userspace model follows this entry.)
      Reported-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Tested-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8a74ad60
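
An illustrative userspace model of the contract described above, not the
kernel primitives themselves: the caller remembers which path was taken and
hands that back so the matching unlock is used:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for struct sock's locking state. */
struct model_sock {
        pthread_mutex_t slock;   /* stands in for sk_lock.slock with BHs off */
        pthread_mutex_t owner;   /* stands in for the "owned by user" path */
        bool owned;
};

/* Fast path: keep "slock" held.  Slow path: the socket is owned by a
 * user-context thread, so drop slock and wait on the owner lock instead,
 * and tell the caller which path was taken. */
static bool lock_sock_fast_model(struct model_sock *sk)
{
        pthread_mutex_lock(&sk->slock);
        if (!sk->owned)
                return false;                    /* fast path */
        pthread_mutex_unlock(&sk->slock);
        pthread_mutex_lock(&sk->owner);          /* slow path */
        return true;
}

static void unlock_sock_fast_model(struct model_sock *sk, bool slow)
{
        if (slow)
                pthread_mutex_unlock(&sk->owner);
        else
                pthread_mutex_unlock(&sk->slock);
}

int main(void)
{
        struct model_sock sk = {
                .slock = PTHREAD_MUTEX_INITIALIZER,
                .owner = PTHREAD_MUTEX_INITIALIZER,
                .owned = false,
        };
        bool slow = lock_sock_fast_model(&sk);

        /* ... the protected section would touch queue counters here ... */
        unlock_sock_fast_model(&sk, slow);
        printf("took the %s path\n", slow ? "slow" : "fast");
        return 0;
}
```
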
  36. 24 May 2010, 1 commit