1. 16 July 2016, 2 commits
  2. 14 July 2016, 2 commits
  3. 28 June 2016, 1 commit
    • drivers: net: stmmac: reworking the PCS code. · 70523e63
      Authored by Giuseppe CAVALLARO
      The 3.xx and 4.xx Synopsys GMACs have a very similar embedded PCS
      module and share almost the same registers, for example:
        AN_Control, AN_Status, AN_Advertisement, AN_Link_Partner_Ability,
        AN_Expansion, TBI_Extended_Status.
      
      Just the RGMII/SMII Control/Status register differs.
      
      This patch therefore reorganizes and enhances the PCS support.
      It removes the existing support from dwmac1000/dwmac4_core.c and
      moves the basic PCS functions into a new file, stmmac_pcs.h
      (a hedged sketch of the shared-helper idea follows this entry).
      
      The patch also reworks the available APIs so that they can be shared
      across different hardware and easily extended to support new features.
      Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
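      A minimal userspace sketch of the shared-helper idea described above:
      both GMAC cores call a single PCS routine, and only the base offset of
      the PCS block differs per core. The register names follow the commit
      message, but every offset, bit value and function name here is an
      assumption, not the actual contents of stmmac_pcs.h.

        /*
         * Userspace model: a simulated MMIO array stands in for the GMAC
         * register window; offsets and bit masks are hypothetical.
         */
        #include <stdint.h>
        #include <stdio.h>

        #define PCS_AN_CTRL   0x00              /* AN_Control (hypothetical offset) */
        #define PCS_AN_STATUS 0x04              /* AN_Status (hypothetical offset) */
        #define PCS_AN_ADV    0x08              /* AN_Advertisement (hypothetical) */
        #define PCS_AN_LPA    0x0c              /* AN_Link_Partner_Ability (hypothetical) */

        #define PCS_AN_CTRL_ANE (1u << 12)      /* enable auto-negotiation (hypothetical bit) */
        #define PCS_AN_CTRL_RAN (1u << 9)       /* restart auto-negotiation (hypothetical bit) */

        static uint32_t mmio[0x1000 / 4];       /* fake ioremap()'d register space */

        static void reg_write(unsigned int off, uint32_t val) { mmio[off / 4] = val; }
        static uint32_t reg_read(unsigned int off) { return mmio[off / 4]; }

        /*
         * Shared helper usable by both the 3.xx and 4.xx cores: the AN logic
         * lives in one place and only 'pcs_base' differs between the cores.
         */
        static void pcs_restart_ane(unsigned int pcs_base)
        {
            uint32_t ctrl = reg_read(pcs_base + PCS_AN_CTRL);

            ctrl |= PCS_AN_CTRL_ANE | PCS_AN_CTRL_RAN;
            reg_write(pcs_base + PCS_AN_CTRL, ctrl);
        }

        int main(void)
        {
            pcs_restart_ane(0x00c0);            /* dwmac1000 PCS block (hypothetical base) */
            pcs_restart_ane(0x00e0);            /* dwmac4 PCS block (hypothetical base) */
            printf("AN_Control is now 0x%x\n", reg_read(0x00c0 + PCS_AN_CTRL));
            return 0;
        }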
  4. 24 June 2016, 1 commit
    • netfilter: conntrack: allow increasing bucket size via sysctl too · 3183ab89
      Authored by Florian Westphal
      There is no need to restrict this to a module parameter.
      
      We export a copy of the real hash size: when the user alters the value,
      we allocate the new table, copy the entries, etc. before updating the
      real size to the requested one (see the resize-order sketch after this
      entry).
      
      This is also needed because the real size is used by concurrent readers
      and cannot be changed without synchronizing the conntrack generation
      seqcnt.
      
      We only allow changing this value from the initial net namespace.
      
      Tested using http-client-benchmark vs. httpterm while concurrently
      running:
      
      while true; do
        echo $RANDOM > /proc/sys/net/netfilter/nf_conntrack_buckets
      done
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
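      A minimal userspace sketch of the resize order described above: build
      and fill the new bucket array first, and only then publish the new size,
      so readers never see a size that does not match the table they walk.
      The structure and function names (ct_table, ct_resize) are hypothetical;
      the real netfilter code additionally takes the conntrack locks and bumps
      the generation seqcount around the switch.

        #include <stdio.h>
        #include <stdlib.h>

        struct ct_entry {
            unsigned int key;
            struct ct_entry *next;
        };

        struct ct_table {
            struct ct_entry **buckets;
            unsigned int size;                  /* the "real" size readers use */
        };

        static int ct_resize(struct ct_table *t, unsigned int new_size)
        {
            struct ct_entry **nb = calloc(new_size, sizeof(*nb));
            unsigned int i;

            if (!nb)
                return -1;

            /* Rehash every entry into the new bucket array first. */
            for (i = 0; i < t->size; i++) {
                struct ct_entry *e = t->buckets[i];

                while (e) {
                    struct ct_entry *next = e->next;
                    unsigned int b = e->key % new_size;

                    e->next = nb[b];
                    nb[b] = e;
                    e = next;
                }
            }

            /*
             * Publish the new table and size last, so the visible size
             * always matches the bucket array it describes.
             */
            free(t->buckets);
            t->buckets = nb;
            t->size = new_size;
            return 0;
        }

        int main(void)
        {
            struct ct_table t = {
                .buckets = calloc(8, sizeof(struct ct_entry *)),
                .size = 8,
            };

            if (t.buckets && ct_resize(&t, 32) == 0)
                printf("resized to %u buckets\n", t.size);
            return 0;
        }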
  5. 17 June 2016, 1 commit
  6. 08 June 2016, 1 commit
    • net: sched: do not acquire qdisc spinlock in qdisc/class stats dump · edb09eb1
      Authored by Eric Dumazet
      Large tc dumps (tc -s {qdisc|class} sh dev ethX) done by the Google BwE
      host agent [1] are problematic at scale:
      
      For each qdisc/class found in the dump, we currently take the root qdisc
      spinlock in order to get stats. Sampling stats every 5 seconds from
      thousands of HTB classes is a challenge when the root qdisc spinlock is
      under high pressure. Not only do the dumps take time, they also slow
      down the fast path (packet enqueue/dequeue) by 10% to 20% in some cases.
      
      An audit of existing qdiscs showed that sch_fq_codel is the only qdisc
      that might need the qdisc lock in fq_codel_dump_stats() and
      fq_codel_dump_class_stats().
      
      In v2 of this patch, I now use the Qdisc running seqcount to provide
      consistent reads of the packets/bytes counters, regardless of 32-bit or
      64-bit architecture (a model of this read pattern follows this entry).
      
      I also changed the rate estimators to use the same infrastructure so
      that they no longer need to take the root qdisc lock.
      
      [1] http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43838.pdf
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Cong Wang <xiyou.wangcong@gmail.com>
      Cc: Jamal Hadi Salim <jhs@mojatatu.com>
      Cc: John Fastabend <john.fastabend@gmail.com>
      Cc: Kevin Athey <kda@google.com>
      Cc: Xiaotian Pei <xiaotian@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
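      A simplified userspace model of the read pattern the commit relies on:
      the dump side rereads the counters until it observes a stable sequence
      number instead of taking the qdisc lock, so the 64-bit packets/bytes
      pair stays consistent. The names here are hypothetical and the memory
      ordering is glossed over; the kernel uses seqcount_t with its
      read_seqcount_begin()/read_seqcount_retry() helpers.

        #include <stdatomic.h>
        #include <stdint.h>
        #include <stdio.h>

        struct q_stats {
            atomic_uint seq;                    /* odd while a writer is mid-update */
            uint64_t packets;
            uint64_t bytes;
        };

        /* Fast path (enqueue/dequeue): bump seq around the counter update. */
        static void q_stats_update(struct q_stats *s, unsigned int len)
        {
            atomic_fetch_add_explicit(&s->seq, 1, memory_order_acq_rel);
            s->packets += 1;
            s->bytes += len;
            atomic_fetch_add_explicit(&s->seq, 1, memory_order_acq_rel);
        }

        /* Dump path: retry until an even, unchanged sequence number is seen. */
        static void q_stats_read(struct q_stats *s, uint64_t *pkts, uint64_t *bytes)
        {
            unsigned int start;

            do {
                start = atomic_load_explicit(&s->seq, memory_order_acquire);
                *pkts = s->packets;
                *bytes = s->bytes;
            } while ((start & 1) ||
                     start != atomic_load_explicit(&s->seq, memory_order_acquire));
        }

        int main(void)
        {
            struct q_stats s = { 0 };
            uint64_t p, b;

            q_stats_update(&s, 1500);
            q_stats_read(&s, &p, &b);
            printf("%llu packets, %llu bytes\n",
                   (unsigned long long)p, (unsigned long long)b);
            return 0;
        }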
  7. 30 May 2016, 1 commit
  8. 26 May 2016, 3 commits
  9. 17 May 2016, 1 commit
  10. 09 May 2016, 1 commit
  11. 07 May 2016, 1 commit
  12. 05 May 2016, 1 commit
  13. 29 April 2016, 1 commit
  14. 28 April 2016, 1 commit
  15. 27 April 2016, 2 commits
  16. 26 April 2016, 2 commits
  17. 15 April 2016, 1 commit
  18. 14 April 2016, 1 commit
  19. 12 April 2016, 1 commit
    • net: ipv4: Consider failed nexthops in multipath routes · a6db4494
      Authored by David Ahern
      Multipath route lookups should take knowledge about next hops into
      account and not select a hop that is known to have failed.
      
      Example:
      
                           [h2]                   [h3]   15.0.0.5
                            |                      |
                           3|                     3|
                          [SP1]                  [SP2]--+
                           1  2                   1     2
                           |  |     /-------------+     |
                           |   \   /                    |
                           |     X                      |
                           |    / \                     |
                           |   /   \---------------\    |
                           1  2                     1   2
               12.0.0.2  [TOR1] 3-----------------3 [TOR2] 12.0.0.3
                           4                         4
                            \                       /
                              \                    /
                               \                  /
                                -------|   |-----/
                                       1   2
                                      [TOR3]
                                        3|
                                         |
                                        [h1]  12.0.0.1
      
      Host h1 with IP 12.0.0.1 has 2 paths to host h3 at 15.0.0.5:
      
          root@h1:~# ip ro ls
          ...
          12.0.0.0/24 dev swp1  proto kernel  scope link  src 12.0.0.1
          15.0.0.0/16
                  nexthop via 12.0.0.2  dev swp1 weight 1
                  nexthop via 12.0.0.3  dev swp1 weight 1
          ...
      
      If the link between tor3 and tor1 is down and the link between tor1
      and tor2 is down as well, then tor1 is effectively cut off from h1.
      Yet the route lookups in h1 keep alternating between the 2 routes:
      ping 15.0.0.5 gets one and ssh 15.0.0.5 gets the other. Connections
      that attempt to use the 12.0.0.2 nexthop fail since that neighbor is
      not reachable:
      
          root@h1:~# ip neigh show
          ...
          12.0.0.3 dev swp1 lladdr 00:02:00:00:00:1b REACHABLE
          12.0.0.2 dev swp1  FAILED
          ...
      
      The failed path can be avoided by considering known neighbor information
      when selecting next hops. If the neighbor lookup fails, we have no
      knowledge about the nexthop, so give it a shot. If there is an entry,
      then only select the nexthop if its state is sane. This is similar to
      what fib_detect_death does (a sketch of this selection rule follows
      this entry).
      
      To maintain backward compatibility use of the neighbor information is
      based on a new sysctl, fib_multipath_use_neigh.
      Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
      Reviewed-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: David S. Miller <davem@davemloft.net>
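      A minimal userspace sketch of the selection rule stated above: a nexthop
      with no neighbor entry is still eligible ("give it a shot"), while a
      nexthop whose entry is in a failed state is skipped. The state values
      and function names are illustrative, not the kernel's fib code, and the
      hash-based starting index only stands in for the real multipath hashing.

        #include <stdbool.h>
        #include <stdio.h>

        enum neigh_state { NEIGH_NONE, NEIGH_REACHABLE, NEIGH_STALE, NEIGH_FAILED };

        struct nexthop {
            const char *via;
            enum neigh_state neigh;             /* result of a neighbor-table lookup */
        };

        static bool nexthop_usable(const struct nexthop *nh)
        {
            if (nh->neigh == NEIGH_NONE)
                return true;                    /* no knowledge about it: try it */
            return nh->neigh != NEIGH_FAILED;   /* skip hops known to be dead */
        }

        /* Pick the first usable hop, starting from a hash-chosen index. */
        static const struct nexthop *select_nexthop(const struct nexthop *nhs,
                                                    int n, unsigned int hash)
        {
            for (int i = 0; i < n; i++) {
                const struct nexthop *nh = &nhs[(hash + i) % n];

                if (nexthop_usable(nh))
                    return nh;
            }
            return &nhs[hash % n];              /* all hops look bad: fall back */
        }

        int main(void)
        {
            struct nexthop nhs[] = {
                { "12.0.0.2", NEIGH_FAILED },   /* the unreachable hop from the example */
                { "12.0.0.3", NEIGH_REACHABLE },
            };
            const struct nexthop *nh = select_nexthop(nhs, 2, 0);

            printf("selected nexthop via %s\n", nh->via);
            return 0;
        }

      The behavior is opt-in through the fib_multipath_use_neigh sysctl the
      patch introduces; presumably it is toggled like any other sysctl, e.g.
      sysctl -w net.ipv4.fib_multipath_use_neigh=1.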
  20. 09 April 2016, 2 commits
  21. 06 April 2016, 1 commit
  22. 05 April 2016, 3 commits
  23. 03 April 2016, 1 commit
  24. 31 March 2016, 1 commit
  25. 25 March 2016, 1 commit
  26. 22 March 2016, 2 commits
  27. 15 March 2016, 2 commits
  28. 10 March 2016, 1 commit
  29. 03 March 2016, 1 commit