1. 18 Dec 2013, 1 commit
    • lib: introduce arch optimized hash library · 71ae8aac
      Authored by Francesco Fusco
      We introduce a new hashing library that is meant to be used in
      contexts where speed is more important than uniformity of the
      hashed values. The hash library leverages architecture-specific
      implementations to achieve high performance and falls back to
      jhash() for the generic case.
      
      On Intel-based x86 architectures, the library can exploit the crc32l
      instruction, part of the Intel SSE4.2 instruction set, if the
      instruction is supported by the processor. This implementation
      is twice as fast as the jhash() implementation on an i7 processor.
      
      Additional architectures, such as ARM64, provide instructions for
      accelerating the computation of CRC, so they could be added as well
      in follow-up work (a rough sketch of the dispatch pattern follows
      this entry).
      Signed-off-by: Francesco Fusco <ffusco@redhat.com>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: Thomas Graf <tgraf@redhat.com>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: David S. Miller <davem@davemloft.net>
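      To make the dispatch pattern concrete, here is a minimal standalone
      sketch (not the kernel code): the CRC32-based path is compiled in
      when SSE4.2 is targeted, and hash_generic() is a made-up stand-in
      for jhash(). Note the kernel selects the fast path at boot from CPU
      features rather than at compile time as this sketch does.

        /* Build the fast path with: gcc -msse4.2 */
        #include <stddef.h>
        #include <stdint.h>
        #include <string.h>

        static uint32_t hash_generic(const void *data, size_t len, uint32_t seed)
        {
                const unsigned char *p = data;
                uint32_t h = seed;

                while (len--)                   /* placeholder for jhash() */
                        h = h * 31 + *p++;
                return h;
        }

        uint32_t arch_fast_hash(const void *data, size_t len, uint32_t seed)
        {
        #if defined(__SSE4_2__)
                const unsigned char *p = data;
                uint32_t crc = seed;

                while (len >= 4) {              /* crc32l: 32 bits per step */
                        uint32_t w;

                        memcpy(&w, p, sizeof(w));
                        crc = __builtin_ia32_crc32si(crc, w);
                        p += 4;
                        len -= 4;
                }
                while (len--)                   /* crc32b for the tail bytes */
                        crc = __builtin_ia32_crc32qi(crc, *p++);
                return crc;
        #else
                return hash_generic(data, len, seed);
        #endif
        }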
  2. 14 Dec 2013, 10 commits
  3. 13 Dec 2013, 2 commits
    • net-gro: Prepare GRO stack for the upcoming tunneling support · 299603e8
      Authored by Jerry Chu
      This patch modifies the GRO stack to avoid the use of "network_header"
      and associated macros like ip_hdr() and ipv6_hdr(), in order to allow
      an arbitrary number of IP hdrs (v4 or v6) to be used in the
      encapsulation chain. This lays the foundation for various IP
      tunneling support (IP-in-IP, GRE, VXLAN, SIT, ...) to be added later.
      
      With this patch, GRO stack traversal is now mostly based on
      skb_gro_offset rather than special hdr offsets saved in the skb (e.g.,
      skb->network_header). As a result, all but the top layer (i.e.,
      the transport layer) must have hdrs of the same length in order for
      a pkt to be considered for aggregation. Therefore, when adding a new
      encap layer (e.g., for tunneling), one must check for and skip flows
      (e.g., by setting NAPI_GRO_CB(p)->same_flow to 0) that have a
      different hdr length (see the sketch after this entry).
      
      Note that unlike the network header, the transport header can and
      will continue to be set by the GRO code since there will be at
      most one "transport layer" in the encap chain.
      Signed-off-by: H.K. Jerry Chu <hkchu@google.com>
      Suggested-by: Eric Dumazet <edumazet@google.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
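      As an illustration of that rule, a hypothetical encap layer's
      gro_receive callback (the foo_* names and inner call are made up;
      the callback signature matches this era's kernel) would look
      roughly like:

        /* Any held flow whose encap header length differs from this
         * packet's cannot aggregate with it, so clear same_flow first. */
        static struct sk_buff **foo_gro_receive(struct sk_buff **head,
                                                struct sk_buff *skb)
        {
                unsigned int hlen = foo_encap_hdrlen(skb); /* made-up helper */
                struct sk_buff *p;

                for (p = *head; p; p = p->next) {
                        if (!NAPI_GRO_CB(p)->same_flow)
                                continue;
                        if (foo_encap_hdrlen(p) != hlen)
                                NAPI_GRO_CB(p)->same_flow = 0;
                }

                skb_gro_pull(skb, hlen); /* advance skb_gro_offset past our hdr */
                return inner_proto_gro_receive(head, skb); /* made-up inner call */
        }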
    • macvlan: Remove custom receive and forward handlers · 2f6a1b66
      Authored by Vlad Yasevich
      Now that macvlan and macvtap use the same receive and
      forward handlers, we can remove the custom ones completely and use
      netif_rx() and dev_forward_skb() directly (see the sketch after
      this entry).
      Signed-off-by: Vlad Yasevich <vyasevic@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
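      For illustration, the simplified delivery reduces to the two generic
      helpers; this is a hedged sketch with an invented wrapper, not the
      patch itself:

        /* Local destinations go through dev_forward_skb(); everything
         * else is handed to the local stack with netif_rx(). */
        static int macvlan_deliver(struct sk_buff *skb, struct net_device *dev,
                                   bool local)
        {
                if (local)
                        return dev_forward_skb(dev, skb);
                skb->dev = dev;
                return netif_rx(skb);
        }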
  4. 12 Dec 2013, 2 commits
    • ipv6: router reachability probing · 7e980569
      Authored by Jiri Benc
      RFC 4191 states in 3.5:
      
         When a host avoids using any non-reachable router X and instead sends
         a data packet to another router Y, and the host would have used
         router X if router X were reachable, then the host SHOULD probe each
         such router X's reachability by sending a single Neighbor
         Solicitation to that router's address.  A host MUST NOT probe a
         router's reachability in the absence of useful traffic that the host
         would have sent to the router if it were reachable.  In any case,
         these probes MUST be rate-limited to no more than one per minute per
         router.
      
      Currently, when the neighbour corresponding to a router falls into
      NUD_FAILED, it is never considered again. Introduce a new rt6_nud_state
      value, RT6_NUD_FAIL_PROBE, which indicates the route should not be used
      but should be probed with a single NS. The probe is rate-limited by the
      existing code. To better distinguish the meanings of the failure values,
      rename RT6_NUD_FAIL_SOFT to RT6_NUD_FAIL_DO_RR (see the sketch after
      this entry).
      Signed-off-by: Jiri Benc <jbenc@redhat.com>
      Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
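      A sketch of the resulting state set and its use, reconstructed from
      the description above (exact values and call sites are illustrative):

        enum rt6_nud_state {
                RT6_NUD_FAIL_HARD = -3,   /* do not use, do not probe */
                RT6_NUD_FAIL_PROBE = -2,  /* do not use, send a single NS probe */
                RT6_NUD_FAIL_DO_RR = -1,  /* do not use, round-robin to next router */
                RT6_NUD_SUCCEED = 1,      /* reachable, usable */
        };

        /* In the router-selection path, roughly: */
        if (rt6_check_neigh(rt) == RT6_NUD_FAIL_PROBE)
                rt6_probe(rt);            /* existing, rate-limited NS probe */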
    • ipv4: fix wildcard search with inet_confirm_addr() · b601fa19
      Authored by Nicolas Dichtel
      The help text of this function says: "in_dev: only on this interface,
      0=any interface", but since commit 39a6d063 ("[NETNS]: Process
      inet_confirm_addr in the correct namespace."), the code assumes that it
      will never be NULL. This function is never called with in_dev == NULL,
      but it is exported and may be used by an external module.
      
      Because this patch restores the ability to call inet_confirm_addr() with
      in_dev == NULL, I partially revert the above commit, as suggested by
      Julian (a sketch of the restored wildcard path follows this entry).
      
      Cc: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Reviewed-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: David S. Miller <davem@davemloft.net>
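      Roughly, the restored wildcard path looks like this (a hedged
      reconstruction from the description; confirm_addr_indev() is the
      per-device helper in devinet.c):

        __be32 inet_confirm_addr(struct net *net, struct in_device *in_dev,
                                 __be32 dst, __be32 local, int scope)
        {
                __be32 addr = 0;
                struct net_device *dev;

                if (in_dev)
                        return confirm_addr_indev(in_dev, dst, local, scope);

                /* in_dev == NULL: scan every device in the namespace */
                rcu_read_lock();
                for_each_netdev_rcu(net, dev) {
                        in_dev = __in_dev_get_rcu(dev);
                        if (in_dev) {
                                addr = confirm_addr_indev(in_dev, dst,
                                                          local, scope);
                                if (addr)
                                        break;
                        }
                }
                rcu_read_unlock();

                return addr;
        }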
  5. 11 Dec 2013, 4 commits
  6. 10 Dec 2013, 15 commits
  7. 07 Dec 2013, 6 commits
    • ether_addr_equal: Optimize implementation, remove unused compare_ether_addr · 0d74c42f
      Authored by Joe Perches
      Add a new check for CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS to reduce
      the number of ORs used in the ether_addr_equal comparison, very
      slightly improving function performance (see the sketch after this
      entry).
      
      Simplify the ether_addr_equal_64bits implementation.
      Integrate and remove the zap_last_2bytes helper, as it is now
      used only once.
      
      Remove the now unused compare_ether_addr function.
      
      Update the unaligned-memory-access documentation to remove the
      compare_ether_addr description and show how unaligned accesses
      could occur with ether_addr_equal.
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
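      The optimized comparison is along these lines (closely following the
      description; the exact kernel code may differ in detail):

        static inline bool ether_addr_equal(const u8 *addr1, const u8 *addr2)
        {
        #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
                /* One 32-bit and one 16-bit load per address, a single OR. */
                u32 fold = ((*(const u32 *)addr1) ^ (*(const u32 *)addr2)) |
                           ((*(const u16 *)(addr1 + 4)) ^
                            (*(const u16 *)(addr2 + 4)));

                return fold == 0;
        #else
                /* Alignment-safe fallback: three u16 XORs, two ORs. */
                const u16 *a = (const u16 *)addr1;
                const u16 *b = (const u16 *)addr2;

                return ((a[0] ^ b[0]) | (a[1] ^ b[1]) | (a[2] ^ b[2])) == 0;
        #endif
        }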
    • ipv6 addrconf: introduce IFA_F_MANAGETEMPADDR to tell kernel to manage temporary addresses · 53bd6749
      Authored by Jiri Pirko
      Creating an address with this flag set will result in the kernel taking
      care of temporary addresses in the same way as if the address had been
      created by the kernel itself (after an RA is received). This allows
      userspace applications implementing autoconfiguration (NetworkManager,
      for example) to implement IPv6 address privacy.
      Signed-off-by: Jiri Pirko <jiri@resnulli.us>
      Signed-off-by: Thomas Haller <thaller@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv6 addrconf: extend ifa_flags to u32 · 479840ff
      Authored by Jiri Pirko
      There is no more space in the u8 ifa_flags. So do what davem suggested
      and add another netlink attribute, called IFA_FLAGS, to carry more
      flags (see the sketch after this entry).
      Signed-off-by: Jiri Pirko <jiri@resnulli.us>
      Signed-off-by: Thomas Haller <thaller@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
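      On the receive side, the kernel can prefer the new attribute and fall
      back to the legacy field; a minimal sketch (the helper name is
      invented):

        static u32 ifa_flags_from_msg(struct nlattr *tb[], struct ifaddrmsg *ifm)
        {
                /* New u32 IFA_FLAGS attribute wins; else the old u8 field. */
                return tb[IFA_FLAGS] ? nla_get_u32(tb[IFA_FLAGS])
                                     : ifm->ifa_flags;
        }

      IFA_F_MANAGETEMPADDR from the previous entry is one of the flags that
      only fits once this wider attribute exists.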
    • net: introduce dev_consume_skb_any() · e6247027
      Authored by Eric Dumazet
      Some network drivers use the dev_kfree_skb_any() and dev_kfree_skb_irq()
      helpers to free skbs, both for dropped packets and TX-completed ones.
      
      We need to separate the two causes to get better diagnostics
      from dropwatch or "perf record -e skb:kfree_skb".
      
      This patch provides two new helpers, dev_consume_skb_any() and
      dev_consume_skb_irq(), to be used for consumed skbs (see the sketch
      after this entry).
      
      __dev_kfree_skb_irq() is slightly optimized to remove one
      atomic_dec_and_test() from the fast path, and to use this_cpu_{r|w}
      accessors.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
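      Intended driver usage, as a hedged sketch (the foo_* driver types are
      invented; the free helpers are the ones discussed above):

        /* Hypothetical driver TX-completion slot, for illustration only. */
        struct foo_tx_slot {
                struct sk_buff *skb;
                bool tx_ok;
        };

        static void foo_tx_complete(struct foo_tx_slot *slot)
        {
                if (slot->tx_ok)
                        dev_consume_skb_any(slot->skb); /* delivered, not a drop */
                else
                        dev_kfree_skb_any(slot->skb);   /* real drop: stays
                                                         * visible to dropwatch */
                slot->skb = NULL;
        }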
    • net: phy: breakdown PHY_*_FEATURES defines · e9fbdf17
      Authored by Florian Fainelli
      Break down the PHY_*_FEATURES masks into per-speed defines so that we
      can easily re-use them individually (see the sketch after this entry).
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
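      The split is along these lines (reconstructed from the description;
      the exact grouping in the patch may differ slightly):

        #define PHY_DEFAULT_FEATURES  (SUPPORTED_Autoneg | \
                                       SUPPORTED_TP | \
                                       SUPPORTED_MII)

        #define PHY_10BT_FEATURES     (SUPPORTED_10baseT_Half | \
                                       SUPPORTED_10baseT_Full)
        #define PHY_100BT_FEATURES    (SUPPORTED_100baseT_Half | \
                                       SUPPORTED_100baseT_Full)
        #define PHY_1000BT_FEATURES   (SUPPORTED_1000baseT_Half | \
                                       SUPPORTED_1000baseT_Full)

        /* The existing composite masks are then rebuilt from the pieces: */
        #define PHY_BASIC_FEATURES    (PHY_10BT_FEATURES | \
                                       PHY_100BT_FEATURES | \
                                       PHY_DEFAULT_FEATURES)
        #define PHY_GBIT_FEATURES     (PHY_BASIC_FEATURES | \
                                       PHY_1000BT_FEATURES)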
    • tcp: auto corking · f54b3111
      Authored by Eric Dumazet
      With the introduction of TCP Small Queues, TSO auto sizing, and TCP
      pacing, we can implement Automatic Corking in the kernel, to help
      applications doing small write()/sendmsg() to TCP sockets.
      
      The idea is to change tcp_push() to check if the current skb payload is
      under the skb's optimal size (a multiple of MSS bytes).
      
      If under 'size_goal', and at least one packet is still in Qdisc or
      NIC TX queues, set the TCP Small Queue Throttled bit, so that the push
      will be delayed up to TX completion time.
      
      This delay might allow the application to coalesce more bytes
      in the skb in following write()/sendmsg()/sendfile() system calls.
      
      The exact duration of the delay depends on the dynamics
      of the system, and might be zero if no packet for this flow
      is actually held in a Qdisc or NIC TX ring.
      
      Using FQ/pacing is a way to increase the probability of
      autocorking being triggered.
      
      Add a new sysctl (/proc/sys/net/ipv4/tcp_autocorking) to control
      this feature; it defaults to 1 (enabled).
      
      Add a new SNMP counter: nstat -a | grep TcpExtTCPAutoCorking
      This counter is incremented every time we detect that an skb was
      underused and its flush was deferred (see the sketch after this entry).
      
      Tested:
      
      Interesting effects when using line-buffered commands under ssh.
      
      Excellent performance results in terms of CPU usage and total throughput.
      
      lpq83:~# echo 1 >/proc/sys/net/ipv4/tcp_autocorking
      lpq83:~# perf stat ./super_netperf 4 -t TCP_STREAM -H lpq84 -- -m 128
      9410.39
      
       Performance counter stats for './super_netperf 4 -t TCP_STREAM -H lpq84 -- -m 128':
      
            35209.439626 task-clock                #    2.901 CPUs utilized
                   2,294 context-switches          #    0.065 K/sec
                     101 CPU-migrations            #    0.003 K/sec
                   4,079 page-faults               #    0.116 K/sec
          97,923,241,298 cycles                    #    2.781 GHz                     [83.31%]
          51,832,908,236 stalled-cycles-frontend   #   52.93% frontend cycles idle    [83.30%]
          25,697,986,603 stalled-cycles-backend    #   26.24% backend  cycles idle    [66.70%]
         102,225,978,536 instructions              #    1.04  insns per cycle
                                                   #    0.51  stalled cycles per insn [83.38%]
          18,657,696,819 branches                  #  529.906 M/sec                   [83.29%]
              91,679,646 branch-misses             #    0.49% of all branches         [83.40%]
      
            12.136204899 seconds time elapsed
      
      lpq83:~# echo 0 >/proc/sys/net/ipv4/tcp_autocorking
      lpq83:~# perf stat ./super_netperf 4 -t TCP_STREAM -H lpq84 -- -m 128
      6624.89
      
       Performance counter stats for './super_netperf 4 -t TCP_STREAM -H lpq84 -- -m 128':
            40045.864494 task-clock                #    3.301 CPUs utilized
                     171 context-switches          #    0.004 K/sec
                      53 CPU-migrations            #    0.001 K/sec
                   4,080 page-faults               #    0.102 K/sec
         111,340,458,645 cycles                    #    2.780 GHz                     [83.34%]
          61,778,039,277 stalled-cycles-frontend   #   55.49% frontend cycles idle    [83.31%]
          29,295,522,759 stalled-cycles-backend    #   26.31% backend  cycles idle    [66.67%]
         108,654,349,355 instructions              #    0.98  insns per cycle
                                                   #    0.57  stalled cycles per insn [83.34%]
          19,552,170,748 branches                  #  488.244 M/sec                   [83.34%]
             157,875,417 branch-misses             #    0.81% of all branches         [83.34%]
      
            12.130267788 seconds time elapsed
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
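      The gist of the corking decision, as a sketch reconstructed from the
      description above (names follow the 3.13-era TCP code; treat the
      details as illustrative):

        static bool tcp_should_autocork(struct sock *sk, struct sk_buff *skb,
                                        int size_goal)
        {
                /* Cork only if: the payload is below the optimal size, the
                 * sysctl is enabled, this skb is not the only one in the
                 * write queue, and memory of this flow is still held by
                 * lower layers (Qdisc/NIC), i.e. a TX completion is
                 * pending that can flush us later. */
                return skb->len < size_goal &&
                       sysctl_tcp_autocorking &&
                       skb != tcp_write_queue_head(sk) &&
                       atomic_read(&sk->sk_wmem_alloc) > skb->truesize;
        }

      When this returns true, tcp_push() sets the TSQ throttled bit instead
      of pushing, so the flush is retried at TX completion time.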