1. 23 Nov 2011, 1 commit
  2. 17 Nov 2011, 1 commit
  3. 10 Nov 2011, 1 commit
    • ipv4: PKTINFO doesn't need dst reference · d826eb14
      Authored by Eric Dumazet
      On Monday, 07 November 2011 at 15:33 +0100, Eric Dumazet wrote:
      
      > At least, in recent kernels we don't change dst->refcnt in the
      > forwarding path (using NOREF skb->dst).
      >
      > One particular point is the atomic_inc(dst->refcnt) we have to
      > perform when queuing a UDP packet if the socket asked for PKTINFO
      > (for example, a typical DNS server has to set up this option).
      >
      > I have one patch somewhere that stores the information in skb->cb[]
      > and avoids the atomic_{inc|dec}(dst->refcnt).
      >
      
      OK, I found it. I did some extra tests and believe it's ready.
      
      [PATCH net-next] ipv4: IP_PKTINFO doesn't need dst reference
      
      When a socket uses IP_PKTINFO notifications, we currently force a dst
      reference for each received skb. The reader has to access the dst to
      get the needed information (rt_iif & rt_spec_dst) and must then
      release the dst reference.
      
      We also forced a dst reference if the skb was put in the socket
      backlog, even without IP_PKTINFO handling. This happens under
      stress/load.
      
      We can instead store the needed information in skb->cb[], so that
      only the softirq handler actually accesses the dst, improving cache
      hit ratios.
      
      This removes two atomic operations per packet, and false sharing as
      well.
      
      On a benchmark using a single-threaded receiver (doing only recvmsg()
      calls), I can reach 720,000 pps instead of 570,000 pps.
      
      IP_PKTINFO is typically used by DNS servers and by any
      multihoming-aware UDP application.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
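      For context, a minimal self-contained userspace sketch of the
      IP_PKTINFO machinery this patch optimizes (standard Linux socket API;
      the port number and buffer sizes are arbitrary):

          /* Receive one UDP datagram and read the IP_PKTINFO ancillary
           * data: the destination details and incoming interface index
           * that the kernel used to fetch from the skb's dst. */
          #define _GNU_SOURCE
          #include <stdio.h>
          #include <arpa/inet.h>
          #include <netinet/in.h>
          #include <sys/socket.h>

          int main(void)
          {
                  int fd = socket(AF_INET, SOCK_DGRAM, 0);
                  int on = 1;
                  struct sockaddr_in addr = { .sin_family = AF_INET,
                                              .sin_port = htons(5353) };
                  char data[1500], cbuf[CMSG_SPACE(sizeof(struct in_pktinfo))];
                  struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
                  struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                                        .msg_control = cbuf,
                                        .msg_controllen = sizeof(cbuf) };
                  struct cmsghdr *cm;

                  setsockopt(fd, IPPROTO_IP, IP_PKTINFO, &on, sizeof(on));
                  bind(fd, (struct sockaddr *)&addr, sizeof(addr));
                  if (recvmsg(fd, &msg, 0) < 0)
                          return 1;
                  for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm))
                          if (cm->cmsg_level == IPPROTO_IP &&
                              cm->cmsg_type == IP_PKTINFO) {
                                  struct in_pktinfo *pi =
                                          (struct in_pktinfo *)CMSG_DATA(cm);
                                  printf("received on ifindex %d\n",
                                         pi->ipi_ifindex);
                          }
                  return 0;
          }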
  4. 02 Nov 2011, 1 commit
  5. 31 Aug 2011, 1 commit
  6. 18 Aug 2011, 1 commit
  7. 12 Aug 2011, 1 commit
  8. 22 Jul 2011, 1 commit
  9. 22 Jun 2011, 2 commits
    • udp/recvmsg: Clear MSG_TRUNC flag when starting over for a new packet · 9cfaa8de
      Authored by Xufeng Zhang
      Consider this scenario: when the size of the first received UDP
      packet is bigger than the receive buffer, the MSG_TRUNC bit is set in
      msg->msg_flags. However, if a checksum error happens and this is a
      blocking socket, it goes to the try_again loop to receive the next
      packet. If the size of the next UDP packet fits in the receive
      buffer, the MSG_TRUNC flag should not be set; but because the bit is
      not cleared from msg->msg_flags before receiving the next packet,
      MSG_TRUNC is still set, which is wrong.
      
      Fix this problem by clearing the MSG_TRUNC flag when starting over
      for a new packet.
      Signed-off-by: Xufeng Zhang <xufeng.zhang@windriver.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
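      A minimal userspace fragment illustrating the flag being described
      (standard recvmsg() semantics; the buffer size is arbitrary):

          /* If the 4-byte buffer is smaller than the received datagram,
           * recvmsg() truncates the data and sets MSG_TRUNC in msg_flags
           * for this packet, and only for this packet. */
          #include <stdio.h>
          #include <sys/socket.h>

          static void recv_once(int fd)
          {
                  char small[4];
                  struct iovec iov = { .iov_base = small,
                                       .iov_len = sizeof(small) };
                  struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };
                  ssize_t n = recvmsg(fd, &msg, 0);

                  if (n >= 0 && (msg.msg_flags & MSG_TRUNC))
                          printf("datagram truncated to %zd bytes\n", n);
          }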
    • ipv6/udp: Use the correct variable to determine non-blocking condition · 32c90254
      Authored by Xufeng Zhang
      The udpv6_recvmsg() function does not use the correct variable to
      determine whether the socket is in non-blocking operation, which
      leads to unexpected behavior when a UDP checksum error occurs.
      
      Consider a non-blocking UDP receive scenario: when udpv6_recvmsg() is
      called by sock_common_recvmsg(), the MSG_DONTWAIT bit of the flags
      variable in udpv6_recvmsg() is cleared by "flags & ~MSG_DONTWAIT" in
      this call:
      
          err = sk->sk_prot->recvmsg(iocb, sk, msg, size, flags & MSG_DONTWAIT,
                         flags & ~MSG_DONTWAIT, &addr_len);
      
      i.e. with udpv6_recvmsg() getting these values:
      
      	int noblock = flags & MSG_DONTWAIT
      	int flags = flags & ~MSG_DONTWAIT
      
      So, when a UDP checksum error occurs, execution goes to
      csum_copy_err, and then the problem happens:
      
          csum_copy_err:
                  ...............
                  if (flags & MSG_DONTWAIT)
                          return -EAGAIN;
                  goto try_again;
                  ...............
      
      But it will always go to try_again as MSG_DONTWAIT has been cleared
      from flags at call time -- only noblock contains the original value
      of MSG_DONTWAIT, so the test should be:
      
                  if (noblock)
                          return -EAGAIN;
      
      This is also consistent with what the ipv4/udp code does.
      Signed-off-by: Xufeng Zhang <xufeng.zhang@windriver.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 24 May 2011, 1 commit
    • net: convert %p usage to %pK · 71338aa7
      Authored by Dan Rosenberg
      The %pK format specifier is designed to hide exposed kernel pointers,
      specifically via /proc interfaces. Exposing these pointers provides
      an easy target for kernel write vulnerabilities, since they reveal
      the locations of writable structures containing easily triggerable
      function pointers. The behavior of %pK depends on the kptr_restrict
      sysctl.

      If kptr_restrict is set to 0, %pK behaves exactly like the standard
      %p. If kptr_restrict is set to 1 (the default), kernel pointers
      printed with %pK are shown as 0's unless the current user (intended
      to be a reader via seq_printf(), etc.) has CAP_SYSLOG (currently in
      the LSM tree). If kptr_restrict is set to 2, kernel pointers printed
      with %pK are shown as 0's regardless of privileges. Replacing with
      0's was chosen over the default "(null)", which cannot be parsed by
      userland %p, which expects "(nil)".
      
      The supporting code for kptr_restrict and %pK is currently in the -mm
      tree. This patch converts users of %p in net/ to %pK. Cases that
      print pointers to the syslog are not covered, since that would
      eliminate useful information for postmortem debugging, and reading
      the syslog is already optionally protected by the dmesg_restrict
      sysctl.
      Signed-off-by: Dan Rosenberg <drosenberg@vsecurity.com>
      Cc: James Morris <jmorris@namei.org>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Thomas Graf <tgraf@infradead.org>
      Cc: Eugene Teo <eugeneteo@kernel.org>
      Cc: Kees Cook <kees.cook@canonical.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Eric Paris <eparis@parisplace.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
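      An illustrative kernel-style fragment of the conversion this patch
      performs (a sketch, not a standalone program; the show function is
      hypothetical, seq_printf() is the real seq_file API):

          /* Before: seq_printf(f, "sk: %p\n", sp); leaks the address.
           * After: with kptr_restrict >= 1, unprivileged readers of the
           * /proc file see 0 instead of the raw pointer value. */
          static void example_show(struct seq_file *f, struct sock *sp)
          {
                  seq_printf(f, "sk: %pK\n", sp);
          }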
  11. 29 Apr 2011, 1 commit
  12. 23 Apr 2011, 1 commit
  13. 22 Apr 2011, 1 commit
  14. 07 Apr 2011, 1 commit
    • ipv6: Enable RFS sk_rxhash tracking for ipv6 sockets (v2) · 47482f13
      Authored by Neil Horman
      properly record sk_rxhash in ipv6 sockets (v2)
      
      While working on another project, I noticed that flows to sockets I
      had open on a test system weren't being steered properly when RFS
      was enabled. Looking more closely, I found that:
      
      1) The affected sockets were all ipv6
      2) They weren't getting steered because sk->sk_rxhash was never set
      from the incoming skbs on that socket.
      
      This was occurring because there are several points in the IPv4 tcp
      and udp code which save the rxhash value when a new connection is
      established, and the corresponding calls to sock_rps_save_rxhash were
      never added to the ipv6 code paths. This patch adds those calls.
      Tested by myself; it properly enables RFS functionality on ipv6.
      
      Change notes:
      v2:
      	Filtered UDP to only arm RFS on bound sockets (Eric Dumazet)
      Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
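      An illustrative kernel-style fragment of where such a call lands (a
      sketch, not the verbatim patch, assuming the 2011-era helper shape
      sock_rps_save_rxhash(sk, rxhash); the queueing function named below
      is hypothetical):

          /* Record the receive hash when a connected IPv6 UDP socket
           * accepts an skb, mirroring the IPv4 path, so RFS can steer
           * later packets of this flow to the consumer's CPU. */
          static int udpv6_queue_rcv_skb_sketch(struct sock *sk,
                                                struct sk_buff *skb)
          {
                  if (inet_sk(sk)->inet_daddr)    /* connected only (v2) */
                          sock_rps_save_rxhash(sk, skb->rxhash);
                  return __udpv6_queue_rcv_skb(sk, skb);  /* hypothetical */
          }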
  15. 13 Mar 2011, 4 commits
  16. 02 Mar 2011, 1 commit
    • ipv6: Consolidate route lookup sequences. · 68d0c6d3
      Authored by David S. Miller
      Route lookups follow a general pattern in the ipv6 code wherein
      we first find the non-IPSEC route, potentially override the
      flow destination address due to ipv6 options settings, and then
      finally make an IPSEC search using either xfrm_lookup() or
      __xfrm_lookup().
      
      __xfrm_lookup() is used when we want to generate a blackhole route
      if the key manager needs to resolve the IPSEC rules (in this case
      -EREMOTE is returned and the original 'dst' is left unchanged).
      
      Otherwise plain xfrm_lookup() is used and when asynchronous IPSEC
      resolution is necessary, we simply fail the lookup completely.
      
      All of these cases are encapsulated into two routines,
      ip6_dst_lookup_flow() and ip6_sk_dst_lookup_flow(), the latter of
      which handles unconnected UDP datagram sockets.
      Signed-off-by: David S. Miller <davem@davemloft.net>
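      A sketch of the call pattern the new helpers enable (assuming the
      signatures introduced by this commit, with errors and blackhole
      handling folded into an ERR_PTR-style result):

          /* One call replaces the open-coded route lookup + xfrm dance. */
          dst = ip6_dst_lookup_flow(sk, &fl, final_p, true /* can sleep */);
          if (IS_ERR(dst)) {
                  err = PTR_ERR(dst);
                  goto out;
          }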
  17. 25 Jan 2011, 1 commit
  18. 17 Dec 2010, 1 commit
    • net: fix nulls list corruptions in sk_prot_alloc · fcbdf09d
      Authored by Octavian Purdila
      Special care is taken inside sk_prot_alloc to avoid overwriting
      skc_node/skc_nulls_node. We should also avoid overwriting
      skc_bind_node/skc_portaddr_node.
      
      The patch fixes the following crash:
      
       BUG: unable to handle kernel paging request at fffffffffffffff0
       IP: [<ffffffff812ec6dd>] udp4_lib_lookup2+0xad/0x370
       [<ffffffff812ecc22>] __udp4_lib_lookup+0x282/0x360
       [<ffffffff812ed63e>] __udp4_lib_rcv+0x31e/0x700
       [<ffffffff812bba45>] ? ip_local_deliver_finish+0x65/0x190
       [<ffffffff812bbbf8>] ? ip_local_deliver+0x88/0xa0
       [<ffffffff812eda35>] udp_rcv+0x15/0x20
       [<ffffffff812bba45>] ip_local_deliver_finish+0x65/0x190
       [<ffffffff812bbbf8>] ip_local_deliver+0x88/0xa0
       [<ffffffff812bb2cd>] ip_rcv_finish+0x32d/0x6f0
       [<ffffffff8128c14c>] ? netif_receive_skb+0x99c/0x11c0
       [<ffffffff812bb94b>] ip_rcv+0x2bb/0x350
       [<ffffffff8128c14c>] netif_receive_skb+0x99c/0x11c0
      Signed-off-by: Leonard Crestez <lcrestez@ixiacom.com>
      Signed-off-by: Octavian Purdila <opurdila@ixiacom.com>
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  19. 11 Dec 2010, 1 commit
  20. 10 Dec 2010, 1 commit
    • net: optimize INET input path further · 68835aba
      Authored by Eric Dumazet
      Follow-up to commit b178bb3d (net: reorder struct sock fields).
      
      Optimize the INET input path a bit further by:
      
      1) moving sk_refcnt close to sk_lock.

      This reduces the number of dirtied cache lines by one on 64-bit
      arches (with a 64-byte cache line size).
      
      2) moving inet_daddr & inet_rcv_saddr to the beginning of sk

      (the same cache line as hash / family / bound_dev_if / nulls_node)

      This reduces the number of cache lines accessed in lookups by one,
      and does not increase the size of inet and timewait socks. inet and
      tw sockets now share the same place-holder for these fields.
      
      Before patch:
      
      offsetof(struct sock, sk_refcnt) = 0x10
      offsetof(struct sock, sk_lock) = 0x40
      offsetof(struct sock, sk_receive_queue) = 0x60
      offsetof(struct inet_sock, inet_daddr) = 0x270
      offsetof(struct inet_sock, inet_rcv_saddr) = 0x274
      
      After patch:
      
      offsetof(struct sock, sk_refcnt) = 0x44
      offsetof(struct sock, sk_lock) = 0x48
      offsetof(struct sock, sk_receive_queue) = 0x68
      offsetof(struct inet_sock, inet_daddr) = 0x0
      offsetof(struct inet_sock, inet_rcv_saddr) = 0x4
      
      compute_score() (udp or tcp) now uses a single cache line per
      ignored item, instead of two.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
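      A standalone illustration of how such layouts are checked (a toy
      struct with made-up field sizes, not the real struct sock):

          /* Print each field's offset and which 64-byte cache line it
           * lands in; grouping hot fields keeps lookups on fewer lines. */
          #include <stdio.h>
          #include <stddef.h>

          struct toy_sock {
                  unsigned int daddr;       /* hot: moved to offset 0   */
                  unsigned int rcv_saddr;   /* hot: shares cache line 0 */
                  char         cold[200];   /* rarely-touched state     */
                  int          refcnt;      /* hot: kept near the lock  */
                  int          lock;
          };

          #define SHOW(f) printf(#f ": offset %3zu, cache line %zu\n",   \
                                 offsetof(struct toy_sock, f),           \
                                 offsetof(struct toy_sock, f) / 64)

          int main(void)
          {
                  SHOW(daddr);
                  SHOW(rcv_saddr);
                  SHOW(refcnt);
                  SHOW(lock);
                  return 0;
          }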
  21. 17 Nov 2010, 1 commit
  22. 26 Oct 2010, 1 commit
  23. 21 Oct 2010, 2 commits
  24. 09 Sep 2010, 1 commit
    • udp: add rehash on connect() · 719f8358
      Authored by Eric Dumazet
      Commit 30fff923, introduced in linux-2.6.33 (udp: bind()
      optimisation), added a secondary UDP hash, keyed on (local addr,
      local port).
      
      The problem is that the following sequence:

      fd = socket(...)
      connect(fd, &remote, ...)

      not only selects the remote end point (address and port), but also
      sets the local address, while the UDP stack had stored the socket in
      the secondary hash table when its local address was still INADDR_ANY
      (or the ipv6 equivalent).
      
      The sequence is:
       - autobind() : choose a random local port, insert the socket in the
                      hash tables [while the local address is INADDR_ANY]
       - connect()  : set the remote address and port, change the local
                      address to the IP given by a route lookup.
      
      When an incoming UDP frame arrives, if more than 10 sockets are found
      in the primary hash table, we switch to the secondary table, and we
      fail to find the socket because its local address has changed.
      
      One solution to this problem is to rehash the datagram socket if
      needed.

      We add a new rehash(struct sock *sk) method to "struct proto", and
      implement this method for UDP v4 & v6 using a common helper.

      This rehashing only takes care of the secondary hash table, since
      the primary hash (based on local port only) does not change.
      Reported-by: Krzysztof Piotr Oledzki <ole@ans.pl>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Tested-by: Krzysztof Piotr Oledzki <ole@ans.pl>
      Signed-off-by: David S. Miller <davem@davemloft.net>
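      An illustrative fragment of the hook's shape (a sketch based on this
      description, not the verbatim patch; udp_lib_rehash is assumed to be
      the common helper mentioned above, and udp4_portaddr_hash the
      (local addr, local port) hash):

          /* Once connect() has fixed the local address, recompute the
           * secondary hash and move the socket to its new slot. */
          static void udp_v4_rehash_sketch(struct sock *sk)
          {
                  u16 new_hash = udp4_portaddr_hash(sock_net(sk),
                                                    inet_sk(sk)->inet_rcv_saddr,
                                                    inet_sk(sk)->inet_num);

                  udp_lib_rehash(sk, new_hash);
          }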
  25. 02 Jun 2010, 1 commit
  26. 01 Jun 2010, 1 commit
  27. 29 May 2010, 1 commit
  28. 27 May 2010, 1 commit
    • net: fix lock_sock_bh/unlock_sock_bh · 8a74ad60
      Authored by Eric Dumazet
      This new sock lock primitive was introduced to speed up some
      user-context socket manipulation. But it is unsafe with two threads,
      one using regular lock_sock()/release_sock() and one using
      lock_sock_bh()/unlock_sock_bh().
      
      This patch changes lock_sock_bh() to be careful about the 'owned'
      state. If owned is found to be set, we must take the slow path.
      lock_sock_bh() now returns a boolean saying whether the slow path was
      taken, and this boolean is used at unlock_sock_bh() time to call the
      appropriate unlock function.
      
      After this change, BHs may be either disabled or enabled during the
      lock_sock_bh()/unlock_sock_bh() protected section, depending on the
      path taken. Since the names are now misleading, we rename these
      functions to lock_sock_fast()/unlock_sock_fast().
      Reported-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Tested-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
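      The resulting caller pattern, as a sketch (the boolean records
      whether the slow path, i.e. a full lock_sock(), had to be taken):

          /* Fast socket lock for a short, non-sleeping critical section. */
          bool slow = lock_sock_fast(sk);

          /* ... touch receive queue / memory accounting ... */

          unlock_sock_fast(sk, slow);  /* release_sock() if slow was taken */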
  29. 07 May 2010, 1 commit
  30. 29 Apr 2010, 2 commits
    • net: ip_queue_rcv_skb() helper · f84af32c
      Authored by Eric Dumazet
      When queueing an skb to a socket, we can immediately release its dst
      if the target socket does not use IP_CMSG_PKTINFO.

      tcp_data_queue() can drop the dst too.

      This lets us benefit from a hot cache line and avoids having the
      receiver, possibly on another cpu, dirty this cache line itself.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
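      A sketch of the helper's idea (a reconstruction from this
      description, not the verbatim kernel code):

          /* Drop the skb's dst before queueing when the receiver will
           * never look at it, so its refcount cache line stays clean. */
          static int ip_queue_rcv_skb_sketch(struct sock *sk, struct sk_buff *skb)
          {
                  if (!(inet_sk(sk)->cmsg_flags & IP_CMSG_PKTINFO))
                          skb_dst_drop(skb);
                  return sock_queue_rcv_skb(sk, skb);
          }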
    • net: speedup udp receive path · 4b0b72f7
      Authored by Eric Dumazet
      Since commit 95766fff ([UDP]: Add memory accounting.), each received
      packet needs one extra sock_lock()/sock_release() pair.

      This added latency because of possible backlog handling. Later,
      ticket spinlocks added yet another latency source in case of DDOS.
      
      This patch introduces the lock_sock_bh() and unlock_sock_bh()
      synchronization primitives, avoiding one atomic operation and backlog
      processing.

      skb_free_datagram_locked() uses them instead of the full-blown
      lock_sock()/release_sock(). The skb is orphaned inside the locked
      section for proper socket memory reclaim, and finally freed outside
      of it.

      The UDP receive path now takes the socket spinlock only once.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  31. 28 Apr 2010, 1 commit
    • net: sk_add_backlog() take rmem_alloc into account · c377411f
      Authored by Eric Dumazet
      The current socket backlog limit is not enough to really stop DDOS
      attacks, because the user thread spends a lot of time processing a
      full backlog each round, and might spin madly on the socket lock.
      
      We should add the backlog size and the receive_queue size (aka
      rmem_alloc) to pace writers, and let the user run without being
      slowed down too much.
      
      Introduce an sk_rcvqueues_full() helper, to avoid taking the socket
      lock in stress situations.
      
      Under huge stress from a multiqueue/RPS-enabled NIC, a single-flow
      UDP receiver can now process ~200,000 pps (instead of ~100 pps
      before the patch) on an 8-core machine.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
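      A sketch of the helper this commit introduces (a reconstruction from
      this description, not the verbatim kernel code):

          /* Account both the backlog and the receive queue (rmem_alloc)
           * against sk_rcvbuf before touching the socket spinlock. */
          static inline bool sk_rcvqueues_full(const struct sock *sk,
                                               const struct sk_buff *skb)
          {
                  unsigned int qsize = sk->sk_backlog.len +
                                       atomic_read(&sk->sk_rmem_alloc);

                  return qsize > (unsigned int)sk->sk_rcvbuf;
          }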
  32. 24 Apr 2010, 2 commits
  33. 21 Apr 2010, 1 commit