  1. 29 Sep 2014, 1 commit
  2. 05 Sep 2014, 1 commit
  3. 28 Jul 2014, 2 commits
  4. 17 Jul 2014, 1 commit
    • net: clean up some sparse endianness warnings in ipv6.h · 1373a773
      Jeff Layton authored
      sparse is throwing warnings when building sunrpc modules due to some
      endianness shenanigans in ipv6.h. Specifically:
      
        CHECK   net/sunrpc/addr.c
      include/net/ipv6.h:573:17: warning: restricted __be64 degrades to integer
      include/net/ipv6.h:577:34: warning: restricted __be32 degrades to integer
      include/net/ipv6.h:573:17: warning: restricted __be64 degrades to integer
      include/net/ipv6.h:577:34: warning: restricted __be32 degrades to integer
      
      Sprinkle some endianness fixups to silence them. These are all
      resolved at compile time, so they should not add any extra work at
      runtime.
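
      As an illustration of the class of fixup (a hedged sketch, not the
      exact hunk from this commit): sparse warns when a restricted
      __be32/__be64 value is used as a plain integer, and the usual cure
      is a __force cast, or comparing against a cpu_to_be*() constant,
      so the byte-order annotation stays intact at zero runtime cost.

        /* Hypothetical helper, for illustration only: without the
         * __force casts, sparse would warn that __be32 "degrades to
         * integer" in the OR expression. */
        static inline bool is_v4mapped_words(const struct in6_addr *a)
        {
                return ((__force u32)a->s6_addr32[0] |
                        (__force u32)a->s6_addr32[1] |
                        (__force u32)(a->s6_addr32[2] ^
                                      cpu_to_be32(0x0000ffff))) == 0;
        }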
      Signed-off-by: Jeff Layton <jlayton@primarydata.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 09 Jul 2014, 1 commit
    • net: provide stubs for ip6_set_txhash and ip6_make_flowlabel · a37934fc
      Florian Fainelli authored
      Commit cb1ce2ef ("ipv6: Implement automatic flow label generation
      on transmit") introduced ip6_make_flowlabel, while commit b73c3d0e
      ("net: Save TX flow hash in sock and set in skbuf on xmit") introduced
      ip6_set_txhash.
      
      ip6_set_txhash() uses sk_v6_daddr, which references
      __sk_common.skc_v6_daddr from struct sock_common, which is gated
      with IS_ENABLED(CONFIG_IPV6).
      
      ip6_make_flowlabel() uses the ipv6 member from struct net which is
      also gated with IS_ENABLED(CONFIG_IPV6).
      
      When CONFIG_IPV6 is disabled, we hit a build failure like this when
      the compiler attempts to inline these functions:
      
        CC [M]  drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.o
      In file included from include/net/inet_sock.h:27:0,
                       from include/net/ip.h:30,
                       from drivers/net/ethernet/broadcom/cnic.c:37:
      include/net/ipv6.h: In function 'ip6_set_txhash':
      include/net/sock.h:327:33: error: 'struct sock_common' has no member named 'skc_v6_daddr'
       #define sk_v6_daddr  __sk_common.skc_v6_daddr
                                       ^
      include/net/ipv6.h:696:49: note: in expansion of macro 'sk_v6_daddr'
        keys.dst = (__force __be32)ipv6_addr_hash(&sk->sk_v6_daddr);
                                                       ^
      In file included from include/net/inetpeer.h:15:0,
                       from include/net/route.h:28,
                       from include/net/ip.h:31,
                       from drivers/net/ethernet/broadcom/cnic.c:37:
      include/net/ipv6.h: In function 'ip6_make_flowlabel':
      include/net/ipv6.h:706:37: error: 'struct net' has no member named 'ipv6'
        if (!flowlabel && (autolabel || net->ipv6.sysctl.auto_flowlabels)) {
                                           ^
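
      The fix wraps these helpers in a CONFIG_IPV6 guard and provides
      no-op stubs otherwise. A minimal sketch of the shape, paraphrased
      from this description rather than copied from the diff:

        #if IS_ENABLED(CONFIG_IPV6)
        /* ... real ip6_set_txhash() / ip6_make_flowlabel() bodies ... */
        #else
        static inline void ip6_set_txhash(struct sock *sk)
        {
        }

        static inline __be32 ip6_make_flowlabel(struct net *net,
                                                struct sk_buff *skb,
                                                __be32 flowlabel,
                                                bool autolabel)
        {
                /* with IPv6 off, just pass any explicit label through */
                return flowlabel;
        }
        #endif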
      
      Fixes: cb1ce2ef ("ipv6: Implement automatic flow label generation on transmit")
      Fixes: b73c3d0e ("net: Save TX flow hash in sock and set in skbuf on xmit")
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  6. 08 Jul 2014, 2 commits
    • ipv6: Implement automatic flow label generation on transmit · cb1ce2ef
      Tom Herbert authored
      Automatically generate flow labels for IPv6 packets on transmit.
      The flow label is computed based on skb_get_hash, and is only set
      automatically when it would otherwise be zero (i.e. the flow label
      manager hasn't set one). This supports the transmit-side
      functionality of RFC 6438.
      
      Added an IPv6 sysctl, auto_flowlabels, to enable/disable this
      behavior system-wide, and added the IPV6_AUTOFLOWLABEL socket
      option to enable this functionality per socket.
      
      By default, auto flow labels are disabled to avoid possible
      conflicts with the flow label manager; however, if this feature
      proves useful we may want to enable it by default.
      
      It should also be noted that FreeBSD has already implemented automatic
      flow labels (including the sysctl and socket option). In FreeBSD,
      automatic flow labels default to enabled.
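
      A hedged sketch of the generation path (the sysctl field and the
      20-bit mask follow the description and the build log quoted in the
      later stubs fix; details are approximate):

        static inline __be32 ip6_make_flowlabel(struct net *net,
                                                struct sk_buff *skb,
                                                __be32 flowlabel,
                                                bool autolabel)
        {
                /* only auto-generate when no label was chosen and either
                 * the socket option or the sysctl asked for it */
                if (!flowlabel &&
                    (autolabel || net->ipv6.sysctl.auto_flowlabels)) {
                        __be32 hash = (__force __be32)skb_get_hash(skb);

                        /* keep the 20 bits of the flow label field */
                        flowlabel = hash & htonl(0x000FFFFF);
                }
                return flowlabel;
        }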
      
      Performance impact:
      
      Running super_netperf with 200 flows for TCP_RR and UDP_RR over
      IPv6. Note that in the UDP case, __skb_get_hash will be called for
      every packet, which explains the slight regression. In the TCP
      case the hash is saved in the socket, so there is no regression.
      
      Automatic flow labels disabled:
      
        TCP_RR:
          86.53% CPU utilization
          127/195/322 90/95/99% latencies
          1.40498e+06 tps
      
        UDP_RR:
          90.70% CPU utilization
          118/168/243 90/95/99% latencies
          1.50309e+06 tps
      
      Automatic flow labels enabled:
      
        TCP_RR:
          85.90% CPU utilization
          128/199/337 90/95/99% latencies
          1.40051e+06 tps
      
        UDP_RR:
          92.61% CPU utilization
          115/164/236 90/95/99% latencies
          1.4687e+06 tps
      Signed-off-by: Tom Herbert <therbert@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: Save TX flow hash in sock and set in skbuf on xmit · b73c3d0e
      Tom Herbert authored
      For a connected socket we can precompute the flow hash for setting
      in skb->hash on output. This is a performance advantage over
      calculating skb->hash for every packet on the connection. The
      computation is done using the common hash algorithm, to be
      consistent with computations done for packets of the connection in
      other states where there is no socket (e.g. time-wait, syn-recv,
      syn-cookies).
      
      This patch adds sk_txhash to the sock structure. inet_set_txhash
      and ip6_set_txhash functions are added; they are called at the
      points in TCP and UDP where the socket moves to established state.

      skb_set_hash_from_sk is a function which sets skb->hash from the
      sock txhash value. It is called in the UDP and TCP transmit paths
      when transmitting within the context of a socket.
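
      A hedged sketch of the transmit-side helper (field names are from
      the message; the body is approximate):

        static inline void skb_set_hash_from_sk(struct sk_buff *skb,
                                                struct sock *sk)
        {
                /* sk_txhash was precomputed when the connection moved to
                 * established state (inet_set_txhash/ip6_set_txhash) */
                if (sk->sk_txhash)
                        skb->hash = sk->sk_txhash;
        }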
      
      Tested: ran super_netperf with 200 TCP_RR streams over a vxlan
      interface (in this case skb_get_hash is called on every TX packet
      to create a UDP source port).
      
      Before fix:
      
        95.02% CPU utilization
        154/256/505 90/95/99% latencies
        1.13042e+06 tps
      
        Time in functions:
          0.28% skb_flow_dissect
          0.21% __skb_get_hash
      
      After fix:
      
        94.95% CPU utilization
        156/254/485 90/95/99% latencies
        1.15447e+06 tps
      
        Neither __skb_get_hash nor skb_flow_dissect appear in perf
      Signed-off-by: Tom Herbert <therbert@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 03 Jun 2014, 1 commit
    • inetpeer: get rid of ip_id_count · 73f156a6
      Eric Dumazet authored
      Ideally, we would generate the IP ID using a per-destination-IP
      generator.
      
      Linux kernels used the inet_peer cache for this purpose, but this
      had a huge cost on servers with MTU discovery disabled.
      
      1) each inet_peer struct consumes 192 bytes
      
      2) inetpeer cache uses a binary tree of inet_peer structs,
         with a nominal size of ~66000 elements under load.
      
      3) lookups in this tree hit a lot of cache lines, as the tree depth
         is about 20.

      4) If the server handles many TCP flows, there is a high
         probability of not finding the inet_peer, allocating a fresh
         one, and inserting it in the tree with the same initial
         ip_id_count (cf. secure_ip_id()).

      5) We garbage collect inet_peer entries aggressively.
      
      IP ID generation does not have to be 'perfect'.

      The goal is to avoid duplicates over a short period of time, so
      that reassembly units have a chance to complete reassembly of
      fragments belonging to one message before receiving other
      fragments with a recycled ID.
      
      We simply use an array of generators, and a Jenkins hash of the
      dst IP as the key.
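
      A hedged sketch of the scheme (array size and names illustrative;
      the random hash seed is omitted):

        #define IP_IDENTS_SZ 2048          /* illustrative array size */
        static atomic_t ip_idents[IP_IDENTS_SZ];

        static u32 ip_idents_reserve(u32 hash, int segs)
        {
                atomic_t *id = ip_idents + hash % IP_IDENTS_SZ;

                /* reserve 'segs' consecutive IDs and return the first */
                return atomic_add_return(segs, id) - segs;
        }

        static void ip_select_ident_segs(struct iphdr *iph, int segs)
        {
                /* the Jenkins hash of the destination picks a generator */
                u32 hash = jhash_1word((__force u32)iph->daddr, 0);

                iph->id = htons(ip_idents_reserve(hash, segs));
        }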
      
      ipv6_select_ident() is put back into net/ipv6/ip6_output.c where it
      belongs (it is only used from this file)
      
      secure_ip_id() and secure_ipv6_id() are no longer needed.
      
      Rename ip_select_ident_more() to ip_select_ident_segs() to avoid
      unnecessary decrement/increment of the number of segments.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 14 May 2014, 1 commit
    • net: add a sysctl to reflect the fwmark on replies · e110861f
      Lorenzo Colitti authored
      Kernel-originated IP packets that have no user socket associated
      with them (e.g., ICMP errors and echo replies, TCP RSTs, etc.)
      are emitted with a mark of zero. Add a sysctl to make them have
      the same mark as the packet they are replying to.
      
      This allows an administrator who wishes to do so to use mark-based
      routing, firewalling, etc. for these replies by marking the
      original packets inbound.
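
      A hedged sketch of the decision (the helper name is illustrative;
      the per-netns sysctl flag is the one this patch describes):

        static inline u32 reply_fwmark(struct net *net,
                                       const struct sk_buff *skb_in)
        {
                /* reflect the mark of the packet being answered when
                 * fwmark_reflect is on, else keep the historical mark
                 * of zero */
                return net->ipv4.sysctl_fwmark_reflect ? skb_in->mark : 0;
        }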
      
      Tested using user-mode linux:
       - ICMP/ICMPv6 echo replies and errors.
       - TCP RST packets (IPv4 and IPv6).
      Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 01 May 2014, 1 commit
  10. 16 Apr 2014, 1 commit
  11. 22 Jan 2014, 1 commit
  12. 20 Jan 2014, 1 commit
  13. 16 Jan 2014, 1 commit
  14. 02 Jan 2014, 1 commit
  15. 10 Dec 2013, 2 commits
  16. 06 Dec 2013, 2 commits
  17. 24 Nov 2013, 1 commit
  18. 09 Nov 2013, 1 commit
  19. 24 Oct 2013, 1 commit
  20. 20 Oct 2013, 1 commit
  21. 22 Sep 2013, 1 commit
  22. 01 Sep 2013, 1 commit
  23. 24 Aug 2013, 1 commit
  24. 20 Jun 2013, 1 commit
  25. 26 May 2013, 1 commit
    • net: ipv6: Add IPv6 support to the ping socket. · 6d0bfe22
      Lorenzo Colitti authored
      This adds the ability to send ICMPv6 echo requests without a
      raw socket. The equivalent ability for ICMPv4 was added in
      2011.
      
      Instead of having separate code paths for IPv4 and IPv6, make
      most of the code in net/ipv4/ping.c dual-stack and only add a
      few IPv6-specific bits (like the protocol definition) to a new
      net/ipv6/ping.c. Hopefully this will reduce divergence and/or
      duplication of bugs in the future.
      
      Caveats:
      
      - Setting options via ancillary data (e.g., using IPV6_PKTINFO
        to specify the outgoing interface) is not yet supported.
      - There are no separate security settings for IPv4 and IPv6;
        everything is controlled by /proc/sys/net/ipv4/ping_group_range.
      - The proc interface does not yet display IPv6 ping sockets
        properly.
      
      Tested with a patched copy of ping6 and using raw socket calls.
      Compiles and works with all of CONFIG_IPV6={n,m,y}.
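
      For context, a userspace sketch of what the new socket type allows
      (unprivileged, assuming the caller's group falls within
      ping_group_range; error handling trimmed):

        #include <netinet/icmp6.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int ping6_loopback_once(void)
        {
                struct sockaddr_in6 dst = {
                        .sin6_family = AF_INET6,
                        .sin6_addr   = IN6ADDR_LOOPBACK_INIT,
                };
                struct icmp6_hdr hdr = { .icmp6_type = ICMP6_ECHO_REQUEST };
                int fd = socket(AF_INET6, SOCK_DGRAM, IPPROTO_ICMPV6);

                if (fd < 0)
                        return -1;
                /* the kernel fills in the checksum and the echo id */
                sendto(fd, &hdr, sizeof(hdr), 0,
                       (struct sockaddr *)&dst, sizeof(dst));
                close(fd);
                return 0;
        }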
      Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  26. 25 Mar 2013, 1 commit
  27. 09 Mar 2013, 1 commit
  28. 08 Mar 2013, 1 commit
  29. 22 Feb 2013, 1 commit
  30. 31 Jan 2013, 2 commits
  31. 30 Jan 2013, 1 commit
  32. 22 Jan 2013, 1 commit
  33. 18 Jan 2013, 2 commits
    • ipv6: fix ipv6_prefix_equal64_half mask conversion · 512613d7
      Fabio Baltieri authored
      Fix the 64bit optimized version of ipv6_prefix_equal to convert the
      bitmask to network byte order only after the bit-shift.
      
      The bug was introduced in:
      
      38675170 ipv6: 64bit version of ipv6_prefix_equal().
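
      A hedged illustration of the ordering issue (simplified from the
      real helper; pbits is assumed to be 1..63):

        static inline bool prefix_tail_equal(__be64 a, __be64 b,
                                             unsigned int pbits)
        {
                /* shift in host order first, then convert: the mask must
                 * cover the leading pbits bits in network byte order.
                 * The buggy version effectively byte-swapped first and
                 * shifted second, masking the wrong bits on
                 * little-endian. */
                __be64 mask = cpu_to_be64(~0ULL << (64 - pbits));

                return ((a ^ b) & mask) == 0;
        }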
      Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: increase fragment memory usage limits · c2a93660
      Jesper Dangaard Brouer authored
      Increase the memory usage limits for incomplete IP fragments.
      
      Arguing for the new high/low threshold values:
      
       High threshold = 4 MBytes
       Low  threshold = 3 MBytes
      
      The fragmentation memory accounting code tries to account for the
      real memory usage by measuring both the size of the frag queue
      struct (inet_frag_queue (ipv4:ipq/ipv6:frag_queue)) and the SKBs'
      truesize.
      
      We want to be able to handle/hold on to enough fragments to ensure
      good performance, without letting incomplete fragments hurt
      scalability by causing the number of inet_frag_queue entries to
      grow too much (resulting in longer searches for frag queues).
      
      For IPv4, how much memory does the largest frag consume?
      
      The maximum-size fragment is 64K, which is approx 44 fragments
      with MTU (1500) sized packets. sizeof(struct ipq) is 200. A
      1500-byte packet results in a truesize of 2944 (not 2048 as I
      first assumed):
      
        (44*2944)+200 = 129736 bytes
      
      The current default high thresh of 262144 bytes is obviously
      problematic, as only two 64K fragments can fit in the queue at the
      same time.
      
      How many 64K fragments can we fit into 4 MBytes?
      
        4*2^20/((44*2944)+200) = 32.33 fragments in queues
      
      An attacker could send separate/distinct fake fragment packets per
      queue, causing us to allocate one inet_frag_queue per packet, thus
      attacking the hash table and its lists.
      
      How many frag queues do we need to store, and given a current hash
      size of 64, what is the average list length?
      
      Using one MTU-sized fragment per inet_frag_queue, each consuming
      (2944+200) = 3144 bytes:
      
        4*2^20/(2944+200) = 1334 frag queues -> 21 avg list length
      
      An attacker could send small fragments; the smallest packet I
      could send resulted in a truesize of 896 bytes (I'm a little
      surprised by this).
      
        4*2^20/(896+200)  = 3827 frag queues -> 59 avg list length
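
      A quick standalone check of the arithmetic above (the 64-entry
      hash size comes from the text):

        #include <stdio.h>

        int main(void)
        {
                const double budget = 4.0 * (1 << 20);   /* 4 MBytes */
                const int ipq = 200, mtu_ts = 2944, min_ts = 896;

                /* one 64K fragment queue = 44 MTU-sized fragments */
                printf("64K queues : %.2f\n",
                       budget / (44 * mtu_ts + ipq));
                printf("MTU queues : %.0f (avg list %.1f)\n",
                       budget / (mtu_ts + ipq),
                       budget / (mtu_ts + ipq) / 64);
                printf("tiny queues: %.0f (avg list %.1f)\n",
                       budget / (min_ts + ipq),
                       budget / (min_ts + ipq) / 64);
                return 0;
        }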
      
      When increasing these numbers, we also need to follow up with
      improvements that will help scalability. Simply increasing the
      hash size is not enough, as the current implementation does not
      have per-hash-bucket locking.
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  34. 17 Jan 2013, 1 commit