1. 09 Jun 2016, 15 commits
    • sched: remove qdisc->drop · a09ceb0e
      Florian Westphal authored
      After the removal of TCA_CBQ_OVL_STRATEGY from the cbq scheduler, there
      are no more callers of ->drop() outside of other ->drop() functions,
      i.e. nothing calls them.
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • sched: remove qdisc_reshape_fail · c3a173d7
      Florian Westphal authored
      After the removal of TCA_CBQ_POLICE in the cbq scheduler,
      qdisc->reshape_fail is always NULL, i.e. qdisc_reshape_fail is now the
      same as qdisc_drop.
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • cbq: remove TCA_CBQ_POLICE support · dd47c1fa
      Florian Westphal authored
      iproute2 doesn't implement any cbq option that results in this
      attribute being sent to the kernel.
      
      To make use of it, user would have to
      
      - patch iproute2
      - add a class
      - attach a qdisc to the class (default pfifo doesn't work as
        q->handle is 0 and cbq_set_police() is a no-op in this case)
      - re-'add' the same class (tc class change ...) again
      - user must also specify a defmap (e.g. 'split 1:0 defmap 3f'), since
        this 'police' feature relies on its presence
      - the added qdisc must be one of bfifo, pfifo or netem
      
      If all of these conditions are met and _some_ leaf qdiscs, namely
      p/bfifo, netem, plug or tbf, would drop a packet, the kernel calls back
      into cbq, which will attempt to re-queue the skb into a different class
      as indicated by the parent's defmap entry for TC_PRIO_BESTEFFORT.
      
      [ i.e. we behave as if tc_classify returned TC_ACT_RECLASSIFY ].
      
      This feature, which isn't documented or implemented in iproute2, and
      isn't implemented consistently (most qdiscs, like sfq, codel, etc.,
      drop right away instead of attempting this reclassification), is the
      sole reason for the reshape_fail and __parent members in the Qdisc
      struct.
      
      So remove TCA_CBQ_POLICE support from the kernel, reject it via
      EOPNOTSUPP so userspace knows we don't support it, and then remove the
      no-longer-needed infrastructure in a followup commit.
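      A minimal userspace sketch of the new behavior (the attribute id and
      the attribute-table layout are illustrative stand-ins, not the actual
      cbq code):

          #include <errno.h>
          #include <stdio.h>

          #define TCA_CBQ_POLICE 7   /* illustrative id; see uapi pkt_sched.h */
          #define TCA_CBQ_MAX    8

          /* tb[] models a parsed netlink attribute table: non-NULL = present */
          static int cbq_change_class(void *tb[])
          {
              if (tb[TCA_CBQ_POLICE])
                  return -EOPNOTSUPP;   /* reject instead of silently ignoring */
              /* ... normal class setup would continue here ... */
              return 0;
          }

          int main(void)
          {
              void *tb[TCA_CBQ_MAX] = { 0 };
              int opt = 1;

              tb[TCA_CBQ_POLICE] = &opt;
              printf("%d\n", cbq_change_class(tb));   /* prints -95 on Linux */
              return 0;
          }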
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • cbq: remove TCA_CBQ_OVL_STRATEGY support · c3498d34
      Florian Westphal authored
      Since the initial revision of cbq in 2004, iproute2 has never
      implemented support for TCA_CBQ_OVL_STRATEGY, which is what needs to be
      set to activate the class->drop() call (the TC_CBQ_OVL_DROP strategy
      value must be set by userspace).
      
      David Miller says:
         It seems really safe to kill this thing off, flag an error if someone
         tries to set the attribute, and therefore kill off all of the
         non-default cbq_ovl_*() functions.
      
      A followup commit can then remove all .drop qdisc methods since this
      removed the only caller.
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ip6gre: Allow live link address change · 76e48f9f
      Shweta Choudaha authored
      The ip6 GRE tap device should not be forced into the down state to
      change its MAC address; it should allow live address changes, as the
      ipv4 GRE tap device does.
      Signed-off-by: Shweta Choudaha <schoudah@brocade.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: Add l3mdev rule · 96c63fa7
      David Ahern authored
      Currently, VRFs require 1 oif and 1 iif rule per address family per
      VRF. As the number of VRF devices increases it brings scalability
      issues with the increasing rule list. All of the VRF rules have the
      same format with the exception of the specific table id to direct the
      lookup. Since the table id is available from the oif or iif in the
      lookup, the VRF rules can be consolidated to a single rule that pulls
      the table from the VRF device.
      
      This patch introduces a new rule attribute l3mdev. The l3mdev rule
      means the table id used for the lookup is pulled from the L3 master
      device (e.g., VRF) rather than being statically defined. With the
      l3mdev rule all of the basic VRF FIB rules are reduced to 1 l3mdev
      rule per address family (IPv4 and IPv6).
      
      If an admin wishes to insert higher priority rules for specific VRFs
      those rules will co-exist with the l3mdev rule. This capability means
      current VRF scripts will co-exist with this new simpler implementation.
      
      Currently, the rules list for both ipv4 and ipv6 looks like this:
          $ ip ru ls
          1000:       from all oif vrf1 lookup 1001
          1000:       from all iif vrf1 lookup 1001
          1000:       from all oif vrf2 lookup 1002
          1000:       from all iif vrf2 lookup 1002
          1000:       from all oif vrf3 lookup 1003
          1000:       from all iif vrf3 lookup 1003
          1000:       from all oif vrf4 lookup 1004
          1000:       from all iif vrf4 lookup 1004
          1000:       from all oif vrf5 lookup 1005
          1000:       from all iif vrf5 lookup 1005
          1000:       from all oif vrf6 lookup 1006
          1000:       from all iif vrf6 lookup 1006
          1000:       from all oif vrf7 lookup 1007
          1000:       from all iif vrf7 lookup 1007
          1000:       from all oif vrf8 lookup 1008
          1000:       from all iif vrf8 lookup 1008
          ...
          32765:      from all lookup local
          32766:      from all lookup main
          32767:      from all lookup default
      
      With the l3mdev rule the list is just the following regardless of the
      number of VRFs:
          $ ip ru ls
          1000:       from all lookup [l3mdev table]
          32765:      from all lookup local
          32766:      from all lookup main
          32767:      from all lookup default
      
      (Note: the above pretty print of the rule is based on an iproute2
             prototype. Actual verbiage may change.)
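      As a rough userspace model of the lookup change (the struct fields and
      function are hypothetical; only the dispatch logic mirrors the patch):

          #include <stdio.h>

          /* a rule either names a table statically or defers to the
           * L3 master device (VRF) associated with the flow's iif/oif */
          struct net_device { const char *name; int vrf_table; };
          struct fib_rule   { int l3mdev; int table; };

          static int rule_table(const struct fib_rule *r,
                                const struct net_device *dev)
          {
              if (r->l3mdev && dev)
                  return dev->vrf_table;   /* pulled from the VRF device */
              return r->table;             /* statically defined table id */
          }

          int main(void)
          {
              struct net_device vrf1 = { "vrf1", 1001 }, vrf2 = { "vrf2", 1002 };
              struct fib_rule rule = { .l3mdev = 1, .table = 0 };

              /* one rule now serves every VRF */
              printf("%s -> table %d\n", vrf1.name, rule_table(&rule, &vrf1));
              printf("%s -> table %d\n", vrf2.name, rule_table(&rule, &vrf2));
              return 0;
          }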
      Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tipc: change node timer unit from jiffies to ms · 5ca509fc
      Jon Paul Maloy authored
      The node keepalive interval is recalculated at each timer expiration
      to catch any changes in the link tolerance, and stored in a field in
      struct tipc_node. We use jiffies as unit for the stored value.
      
      This is suboptimal, because it makes the calculation unnecessarily
      complex, involving two unit conversions. The conversions also lead to
      a rounding error that causes the link "abort limit" to be 3 in the
      normal case, instead of 4, as intended. This in turn leads to
      unnecessary link resets when the network is pushed close to its limit,
      e.g., in an environment with hundreds of nodes or namespaces.
      
      In this commit, we do instead let the keepalive value be calculated and
      stored in milliseconds, so that there is only one conversion and the
      rounding error is eliminated.
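      A toy calculation (assuming HZ=250, i.e. a 4 ms jiffy; this is not the
      actual TIPC code) showing how the jiffies round trip could turn the
      intended abort limit of 4 into 3:

          #include <stdio.h>

          #define HZ 250   /* assumed kernel config: 1 jiffy = 4 ms */

          static unsigned long msecs_to_jiffies(unsigned long ms)
          {
              return (ms * HZ + 999) / 1000;   /* rounds up, as the kernel does */
          }

          static unsigned long jiffies_to_msecs(unsigned long j)
          {
              return j * 1000 / HZ;
          }

          int main(void)
          {
              unsigned long tol = 1500;          /* link tolerance in ms */
              unsigned long intv_ms = tol / 4;   /* intended keepalive interval */

              /* old scheme: store in jiffies, convert back on use */
              unsigned long intv_j = msecs_to_jiffies(intv_ms);   /* 94 jiffies */
              unsigned long abort_old = tol / jiffies_to_msecs(intv_j);

              /* new scheme: store in ms, no round trip */
              unsigned long abort_new = tol / intv_ms;

              printf("old=%lu new=%lu\n", abort_old, abort_new);   /* old=3 new=4 */
              return 0;
          }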
      
      We also remove a redundant "keepalive" field in struct tipc_link. This
      is a remnant of the previous implementation.
      Acked-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tipc: correct error in node fsm · c4282ca7
      Jon Paul Maloy authored
      Commit 88e8ac70 ("tipc: reduce transmission rate of reset messages
      when link is down") revealed a flaw in the node FSM, as defined in
      the log of commit 66996b6c ("tipc: extend node FSM").
      
      We see the following scenario:
      1: Node B receives a RESET message from node A before its link endpoint
         is fully up, i.e., the node FSM is in state SELF_UP_PEER_COMING. This
         event will not change the node FSM state, but the (distinct) link FSM
         will move to state RESETTING.
      2: As an effect of the previous event, the local endpoint on B will
         declare node A lost, and post the event SELF_DOWN to its node
         FSM. This moves the FSM state to SELF_DOWN_PEER_LEAVING, meaning
         that no messages will be accepted from A until it receives another
         RESET message that confirms that A's endpoint has been reset. This
         is wasteful, since we know this as a fact already from the first
         received RESET, but worse is that the link instance's FSM has not
         wasted this information, but instead moved on to state ESTABLISHING,
         meaning that it repeatedly sends out ACTIVATE messages to the reset
         peer A.
      3: Node A will receive one of the ACTIVATE messages, move its link FSM
         to state ESTABLISHED, and start repeatedly sending out STATE messages
         to node B.
      4: Node B will consistently drop these messages, since it can only
         accept a RESET according to its node FSM.
      5: After four lost STATE messages node A will reset its link and start
         repeatedly sending out RESET messages to B.
      6: Because of the reduced send rate for RESET messages, it is very
         likely that A will receive an ACTIVATE (which is sent out at a much
         higher frequency) before it gets the chance to send a RESET, and A
         may hence quickly move back to state ESTABLISHED and continue sending
         out STATE messages, which will again be dropped by B.
      7: GOTO 5.
      8: After having repeated the cycle 5-7 a number of times, node A will
         by chance get in between with sending a RESET, and the situation is
         resolved.
      
      Unfortunately, we have seen that it may take a substantial amount of
      time before this vicious loop is broken, sometimes in the order of
      minutes.
      
      We correct this with a small change to the node FSM: when a node in
      state SELF_UP_PEER_COMING receives a SELF_DOWN event, it now moves
      directly back to state SELF_DOWN_PEER_DOWN, instead of to
      SELF_DOWN_PEER_LEAVING as before. This is logically consistent, since
      we don't need to wait for RESET confirmation from an endpoint that we
      already know has been reset. It also means that node B in the scenario
      above will not be dropping incoming STATE messages, and the link can
      come up immediately.
      
      Finally, a symmetry comparison reveals that the FSM has a similar
      error when receiving the event PEER_DOWN in state PEER_UP_SELF_COMING.
      Instead of moving to PEER_DOWN_SELF_LEAVING, it should move directly
      to SELF_DOWN_PEER_DOWN. Although we have never seen any negative
      effect of this logical error, we choose to fix this one, too.
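      The two corrected transitions, as a stand-alone toy model (the state
      and event names come from the FSM below; everything else is
      scaffolding):

          #include <stdio.h>

          enum state { SELF_DOWN_PEER_DOWN, SELF_UP_PEER_COMING,
                       PEER_UP_SELF_COMING };
          enum event { SELF_DOWN_EVT, PEER_DOWN_EVT };

          static enum state next_state(enum state s, enum event e)
          {
              /* previously both cases ended in a *_LEAVING state, waiting
               * for a RESET confirmation we already know must come */
              if (s == SELF_UP_PEER_COMING && e == SELF_DOWN_EVT)
                  return SELF_DOWN_PEER_DOWN;
              if (s == PEER_UP_SELF_COMING && e == PEER_DOWN_EVT)
                  return SELF_DOWN_PEER_DOWN;
              return s;   /* all other transitions omitted in this sketch */
          }

          int main(void)
          {
              printf("%d\n", next_state(SELF_UP_PEER_COMING, SELF_DOWN_EVT)
                             == SELF_DOWN_PEER_DOWN);   /* 1 */
              return 0;
          }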
      
      The node FSM looks as follows after those changes:
      
                                 +----------------------------------------+
                                 |                           PEER_DOWN_EVT|
                                 |                                        |
        +------------------------+----------------+                       |
        |SELF_DOWN_EVT           |                |                       |
        |                        |                |                       |
        |              +-----------+          +-----------+               |
        |              |NODE_      |          |NODE_      |               |
        |   +----------|FAILINGOVER|<---------|SYNCHING   |-----------+   |
        |   |SELF_     +-----------+ FAILOVER_+-----------+   PEER_   |   |
        |   |DOWN_EVT   |          A BEGIN_EVT  A         |   DOWN_EVT|   |
        |   |           |          |            |         |           |   |
        |   |           |          |            |         |           |   |
        |   |           |FAILOVER_ |FAILOVER_   |SYNCH_   |SYNCH_     |   |
        |   |           |END_EVT   |BEGIN_EVT   |BEGIN_EVT|END_EVT    |   |
        |   |           |          |            |         |           |   |
        |   |           |          |            |         |           |   |
        |   |           |         +--------------+        |           |   |
        |   |           +-------->|   SELF_UP_   |<-------+           |   |
        |   |   +-----------------|   PEER_UP    |----------------+   |   |
        |   |   |SELF_DOWN_EVT    +--------------+   PEER_DOWN_EVT|   |   |
        |   |   |                    A        A                   |   |   |
        |   |   |                    |        |                   |   |   |
        |   |   |         PEER_UP_EVT|        |SELF_UP_EVT        |   |   |
        |   |   |                    |        |                   |   |   |
        V   V   V                    |        |                   V   V   V
      +------------+       +-----------+    +-----------+       +------------+
      |SELF_DOWN_  |       |SELF_UP_   |    |PEER_UP_   |       |PEER_DOWN   |
      |PEER_LEAVING|       |PEER_COMING|    |SELF_COMING|       |SELF_LEAVING|
      +------------+       +-----------+    +-----------+       +------------+
             |               |       A        A       |                |
             |               |       |        |       |                |
             |       SELF_   |       |SELF_   |PEER_  |PEER_           |
             |       DOWN_EVT|       |UP_EVT  |UP_EVT |DOWN_EVT        |
             |               |       |        |       |                |
             |               |       |        |       |                |
             |               |    +--------------+    |                |
             |PEER_DOWN_EVT  +--->|  SELF_DOWN_  |<---+   SELF_DOWN_EVT|
             +------------------->|  PEER_DOWN   |<--------------------+
                                  +--------------+
      Acked-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: dsa: Initialize CPU port ethtool ops per tree · 0c73c523
      Florian Fainelli authored
      Now that we can properly support multiple distinct trees in the
      system, the global variable dsa_cpu_port_ethtool_ops gets clobbered as
      soon as the second switch tree is probed, and we don't want that.
      
      We need to move this to be dynamically allocated, and since we can't
      really be comparing addresses anymore to determine first time
      initialization versus any other times, just move this to dsa.c and
      dsa2.c where the remainder of the dst/ds initialization happens.
      
      The operations teardown restores the master netdev's ethtool_ops to
      its original ethtool_ops pointer (typically within the Ethernet
      driver).
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Reviewed-by: Andrew Lunn <andrew@lunn.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: dsa: Add initialization helper for CPU port ethtool_ops · af42192c
      Florian Fainelli authored
      Add a helper function: dsa_cpu_port_ethtool_init() which initializes a
      custom ethtool_ops structure with custom DSA ethtool operations for CPU
      ports. This is a preliminary change to move the initialization outside
      of net/dsa/slave.c.
      Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: dsa: Provide a slave MII bus if needed · 1eb59443
      Florian Fainelli authored
      Mimic what net/dsa/dsa.c does and provide a slave MII bus by default,
      which will be created if the driver implements a phy_read method.
      Reviewed-by: Andrew Lunn <andrew@lunn.ch>
      Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: dsa: Initialize ds->enabled_port_mask and ds->phys_mii_mask · 6e830d8f
      Florian Fainelli authored
      Some drivers rely on these two bitmasks to contain the correct values
      for them to successfully probe and initialize at drv->setup() time, so
      calculate correct values to put in both masks as early as possible in
      dsa_get_ports_dn().
      Reviewed-by: Andrew Lunn <andrew@lunn.ch>
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: dsa: Provide unique DSA slave MII bus names · 0b7b498d
      Florian Fainelli authored
      In case we have multiple trees and switches with the same index, we
      need to add another discriminating id: the switch tree. A sketch of
      the resulting naming follows.
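      (Userspace illustration; the "dsa-%d.%d" format and the buffer size
      are assumptions, not necessarily the kernel's exact choices.)

          #include <stdio.h>

          #define MII_BUS_ID_SIZE 61   /* assumed to match the kernel define */

          int main(void)
          {
              char id[MII_BUS_ID_SIZE];
              int tree = 1, sw_index = 0;

              /* "dsa-0" alone collides across trees; qualify with the tree id */
              snprintf(id, sizeof(id), "dsa-%d.%d", tree, sw_index);
              puts(id);   /* dsa-1.0 */
              return 0;
          }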
      Reviewed-by: Andrew Lunn <andrew@lunn.ch>
      Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: sched: fix missing doc annotations · 123b3652
      Eric Dumazet authored
      "make htmldocs" complains otherwise:
      
      .//net/core/gen_stats.c:168: warning: No description found for parameter 'running'
      .//include/linux/netdevice.h:1867: warning: No description found for parameter 'qdisc_running_key'
      
      Fixes: f9eb8aea ("net_sched: transform qdisc running bit into a seqcount")
      Fixes: edb09eb1 ("net: sched: do not acquire qdisc spinlock in qdisc/class stats dump")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: kbuild test robot <fengguang.wu@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: Reduce queue allocation to one in kdump kernel · 40e4e713
      Hariprasad Shenai authored
      When in a kdump kernel, reduce memory usage by using only a single
      Queue Set for multiqueue devices, so make
      netif_get_num_default_rss_queues() return one when in a kdump kernel.
      Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 08 Jun 2016, 12 commits
    • ila: Perform only one translation in forwarding path · 707a2ca4
      Tom Herbert authored
      When setting up ILA in a router we noticed that the encapsulation is
      invoked twice: once in the route input path and again upon route
      output. To resolve this we add a flag, set_csum_neutral, to
      ila_update_ipv6_locator. If this flag is set and the checksum-neutral
      bit is also set, we assume that checksum-neutral translation has
      already been performed and take no further action. The flag is set
      only in the ila_output path; it is not set for ila_input and ila_xlat.
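      A toy sketch of the guard (the struct and function names here are
      illustrative, not the driver's):

          #include <stdbool.h>
          #include <stdio.h>

          struct pkt { bool csum_neutral; };   /* stand-in for the skb's bit */

          static bool ila_should_translate(const struct pkt *p,
                                           bool set_csum_neutral)
          {
              /* the output path passes set_csum_neutral=true: a packet already
               * marked checksum-neutral was translated on input, so skip it */
              return !(set_csum_neutral && p->csum_neutral);
          }

          int main(void)
          {
              struct pkt p = { .csum_neutral = true };

              printf("input:  translate=%d\n", ila_should_translate(&p, false));
              printf("output: translate=%d\n", ila_should_translate(&p, true));
              return 0;
          }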
      
      Tested:
      
      Used 3 netns to emulate a router and two hosts. The router translates
      SIR addresses between the two destinations in the other two netns.
      Verified ping and netperf are functional.
      Signed-off-by: Tom Herbert <tom@herbertland.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: accept RST if SEQ matches right edge of right-most SACK block · e00431bc
      Pau Espin Pedrol authored
      RFC 5961 advises to only accept RST packets containing a seq number
      matching the next expected seq number instead of the whole receive
      window in order to avoid spoofing attacks.
      
      However, this situation is not optimal when SACK is in use at the time
      the RST is sent. I recently ran into a scenario in which packet
      losses were high while uploading data to a server, and userspace was
      willing to frequently terminate connections by sending a RST. In
      this case, the ACK sent on the receiver side (rcv_nxt) is frozen waiting
      for a lost packet retransmission and SACK blocks are used to let the
      client continue uploading data. At some point later on, the client sends
      the RST (snd_nxt), which matches the next expected seq number of the
      right-most SACK block on the receiver side which is going forward
      receiving data.
      
      In this scenario, as RFC 5961 defines, the RST SEQ doesn't match the
      frozen main ACK at receiver side and thus gets dropped and a challenge
      ACK is sent, which usually gets lost due to network conditions. The main
      consequence is that the connection stays alive for a while even if it
      made sense to accept the RST. This can get really bad if lots of
      connections like this one are created in a few seconds, easily
      exhausting all of the server's resources.
      
      For security reasons, not all SACK blocks are checked (there could be a
      big amount of SACK blocks => acceptable SEQ numbers). Furthermore, it
      wouldn't make sense to check for RST in blocks other than the right-most
      received one because the sender is not expected to be sending new data
      after the RST. For simplicity, only up to the 4 most recently updated
      SACK blocks (the selective_acks[4] field) are compared to find the
      right-most block, as those are usually the ones most likely to
      contain it.
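      The matching logic, reduced to a stand-alone sketch (before() is the
      kernel-style wraparound-safe comparison; the surrounding scaffolding
      is invented for illustration):

          #include <stdbool.h>
          #include <stdint.h>
          #include <stdio.h>

          struct tcp_sack_block { uint32_t start_seq, end_seq; };

          /* true if seq a is earlier than b, modulo 2^32 */
          static bool before(uint32_t a, uint32_t b)
          {
              return (int32_t)(a - b) < 0;
          }

          /* accept the RST if its SEQ equals the right edge of the
           * right-most of the (up to 4) most recent SACK blocks */
          static bool rst_matches_sack_right_edge(uint32_t seq,
                                                  const struct tcp_sack_block *sp,
                                                  int num_sacks)
          {
              uint32_t max_end;
              int i;

              if (num_sacks <= 0)
                  return false;
              max_end = sp[0].end_seq;
              for (i = 1; i < num_sacks; i++)
                  if (before(max_end, sp[i].end_seq))
                      max_end = sp[i].end_seq;
              return seq == max_end;
          }

          int main(void)
          {
              struct tcp_sack_block sacks[] = { { 1000, 2000 }, { 3000, 4500 } };

              printf("%d\n", rst_matches_sack_right_edge(4500, sacks, 2)); /* 1 */
              printf("%d\n", rst_matches_sack_right_edge(2000, sacks, 2)); /* 0 */
              return 0;
          }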
      
      This patch was tested on a 3.18 kernel and proved to improve the
      situation in the scenario described above.
      Signed-off-by: Pau Espin Pedrol <pau.espin@tessares.net>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Tested-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: vrf: ipv6 support for local traffic to local addresses · b4869aa2
      David Ahern authored
      Add support for locally originated traffic to VRF-local IPv6 addresses.
      Similar to IPv4 a local dst is set on the skb and the packet is
      reinserted with a call to netif_rx. With this patch, ping, tcp and udp
      packets to a local IPv6 address are successfully routed:
      
          $ ip addr show dev eth1
          4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master red state UP group default qlen 1000
              link/ether 02:e0:f9:1c:b9:74 brd ff:ff:ff:ff:ff:ff
              inet 10.100.1.1/24 brd 10.100.1.255 scope global eth1
                 valid_lft forever preferred_lft forever
              inet6 2100:1::1/120 scope global
                 valid_lft forever preferred_lft forever
              inet6 fe80::e0:f9ff:fe1c:b974/64 scope link
                 valid_lft forever preferred_lft forever
      
          $ ping6 -c1 -I red 2100:1::1
          ping6: Warning: source address might be selected on device other than red.
          PING 2100:1::1(2100:1::1) from 2100:1::1 red: 56 data bytes
          64 bytes from 2100:1::1: icmp_seq=1 ttl=64 time=0.098 ms
      
      ip6_input is exported so the VRF driver can use it for the dst input
      function. The dst_alloc function for IPv4 defaults to setting the
      input and output functions; IPv6's does not. VRF does not need to
      duplicate the Rx path, so just export the ipv6 input function.
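      Conceptually, the VRF driver just points its local dst at the exported
      function; a trivial model of that indirection (the types are opaque
      stand-ins):

          #include <stdio.h>

          struct sk_buff;   /* opaque stand-in */
          struct dst_entry { int (*input)(struct sk_buff *); };

          static int ip6_input(struct sk_buff *skb)
          {
              puts("IPv6 local delivery path");   /* models the real ip6_input */
              return 0;
          }

          int main(void)
          {
              /* VRF sets a local dst whose input hook is the exported
               * ip6_input, so re-injected packets take the normal
               * local-delivery path */
              struct dst_entry local_dst = { .input = ip6_input };

              return local_dst.input(NULL);
          }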
      Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • gue: Implement direct IP encapsulation · c1e48af7
      Tom Herbert authored
      This patch implements direct encapsulation of IPv4 and IPv6 packets
      in UDP. This is done as version 1 of GUE, as explained in I-D
      draft-ietf-nvo3-gue-03.
      
      Changes here are only in the receive path; fou with IPxIPx already
      supports the transmit side. Both the normal receive path and the GRO
      path are modified to check the GUE version, and to check the IP
      version when the GUE version is 1.
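      A sketch of the demux on the first payload byte, following the
      draft's layout (this is an illustration, not the fou receive code):

          #include <stdint.h>
          #include <stdio.h>

          /* The top two bits of the first byte carry the GUE version. For
           * version 1 the whole first nibble is the IP version of the
           * directly encapsulated packet: 4 (0100b) or 6 (0110b), both of
           * which start with binary 01. */
          static int gue_demux(uint8_t first_byte)
          {
              if ((first_byte >> 6) == 0)
                  return 0;                 /* GUE v0: parse the GUE header */

              switch (first_byte >> 4) {    /* GUE v1: direct IP encapsulation */
              case 4:  return 4;            /* hand off as IPv4 */
              case 6:  return 6;            /* hand off as IPv6 */
              default: return -1;           /* invalid */
              }
          }

          int main(void)
          {
              printf("%d %d\n", gue_demux(0x45), gue_demux(0x60));   /* 4 6 */
              return 0;
          }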
      
      Tested:
      
      IPIP with direct GUE encap
        1 TCP_STREAM
          4530 Mbps
        200 TCP_RR
          1297625 tps
          135/232/444 90/95/99% latencies
      
      IP4IP6 with direct GUE encap
        1 TCP_STREAM
          4903 Mbps
        200 TCP_RR
          1184481 tps
          149/253/473 90/95/99% latencies
      
      IP6IP6 direct GUE encap
        1 TCP_STREAM
          5146 Mbps
        200 TCP_RR
          1202879 tps
          146/251/472 90/95/99% latencies
      
      SIT with direct GUE encap
        1 TCP_STREAM
          6111 Mbps
        200 TCP_RR
          1250337 tps
          139/241/467 90/95/99% latencies
      Signed-off-by: Tom Herbert <tom@herbertland.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: sched: do not acquire qdisc spinlock in qdisc/class stats dump · edb09eb1
      Eric Dumazet authored
      Large tc dumps (tc -s {qdisc|class} sh dev ethX) done by Google's BwE
      host agent [1] are problematic at scale:
      
      For each qdisc/class found in the dump, we currently lock the root
      qdisc spinlock in order to get stats. Sampling stats every 5 seconds
      from thousands of HTB classes is a challenge when the root qdisc
      spinlock is under high pressure. Not only do the dumps take time, they
      also slow down the fast path (queue/dequeue packets) by 10% to 20% in
      some cases.
      
      An audit of existing qdiscs showed that sch_fq_codel is the only qdisc
      that might need the qdisc lock in fq_codel_dump_stats() and
      fq_codel_dump_class_stats().
      
      In v2 of this patch, I now use the Qdisc running seqcount to provide
      consistent reads of packets/bytes counters, regardless of 32/64 bit arches.
      
      I also changed rate estimators to use the same infrastructure
      so that they no longer need to lock root qdisc lock.
      
      [1] http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43838.pdf
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Cong Wang <xiyou.wangcong@gmail.com>
      Cc: Jamal Hadi Salim <jhs@mojatatu.com>
      Cc: John Fastabend <john.fastabend@gmail.com>
      Cc: Kevin Athey <kda@google.com>
      Cc: Xiaotian Pei <xiaotian@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net_sched: transform qdisc running bit into a seqcount · f9eb8aea
      Eric Dumazet authored
      Instead of using a single bit (__QDISC___STATE_RUNNING) in
      sch->__state, use a seqcount.

      This adds lockdep support, but more importantly it will allow us to
      sample qdisc/class statistics without having to grab the qdisc root
      lock.
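      The reader/writer idea in a minimal userspace form (C11 atomics stand
      in for the kernel's seqcount primitive; memory ordering is simplified
      to sequential consistency):

          #include <stdatomic.h>
          #include <stdint.h>
          #include <stdio.h>

          static atomic_uint seq;   /* even = stable, odd = write in progress */
          static _Atomic uint64_t packets, bytes;

          static void writer_update(uint64_t p, uint64_t b)
          {
              atomic_fetch_add(&seq, 1);   /* now odd */
              atomic_fetch_add(&packets, p);
              atomic_fetch_add(&bytes, b);
              atomic_fetch_add(&seq, 1);   /* even again: consistent */
          }

          static void reader_snapshot(uint64_t *p, uint64_t *b)
          {
              unsigned int s;

              do {
                  while ((s = atomic_load(&seq)) & 1)
                      ;                    /* wait out an in-flight writer */
                  *p = atomic_load(&packets);
                  *b = atomic_load(&bytes);
              } while (atomic_load(&seq) != s);   /* write slipped in: retry */
          }

          int main(void)
          {
              uint64_t p, b;

              writer_update(1, 1500);
              reader_snapshot(&p, &b);     /* no lock taken by the reader */
              printf("%llu pkts, %llu bytes\n",
                     (unsigned long long)p, (unsigned long long)b);
              return 0;
          }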
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Cong Wang <xiyou.wangcong@gmail.com>
      Cc: Jamal Hadi Salim <jhs@mojatatu.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net sched actions: introduce timestamp for firsttime use · 53eb440f
      Jamal Hadi Salim authored
      Useful to know when the action was first used, for accounting (and
      debugging).
      Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/sched: cls_flower: Introduce support for SKIP SW flag · e69985c6
      Amir Vadai authored
      In order to make a filter processed only by hardware, the skip_sw flag
      should be supplied. This is an addition to the already existing
      skip_hw flag (with which the filter is processed by software only). If
      no flag is specified, the filter is processed by both software and
      hardware.
      
      If only hardware offloaded filters exist, fl_classify() will return
      without doing anything.
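      The flag combinations reduce to a small validity check; a sketch using
      uapi-style flag values (treat the exact names and the check's location
      as assumptions):

          #include <errno.h>
          #include <stdio.h>

          #define TCA_CLS_FLAGS_SKIP_HW (1 << 0)   /* software path only */
          #define TCA_CLS_FLAGS_SKIP_SW (1 << 1)   /* hardware path only */

          static int check_cls_flags(unsigned int flags)
          {
              /* both flags together would mean "process nowhere": reject */
              if ((flags & TCA_CLS_FLAGS_SKIP_HW) &&
                  (flags & TCA_CLS_FLAGS_SKIP_SW))
                  return -EINVAL;
              return 0;
          }

          int main(void)
          {
              printf("%d\n", check_cls_flags(TCA_CLS_FLAGS_SKIP_SW));   /* 0 */
              printf("%d\n", check_cls_flags(TCA_CLS_FLAGS_SKIP_HW |
                                             TCA_CLS_FLAGS_SKIP_SW)); /* -22 */
              return 0;
          }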
      
      A following userspace patch will be sent once kernel patch is accepted.
      
      Example:
      
      tc filter add dev enp0s9 protocol ip prio 20 parent ffff: \
      	flower \
      		ip_proto 6 \
      		indev enp0s9 \
      		skip_sw \
      	action skbedit mark 0x1234
      Signed-off-by: Amir Vadai <amirva@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Acked-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: get rid of spin_trylock() in net_tx_action() · 3bcb846c
      Eric Dumazet authored
      Note: Tom Herbert posted almost the same patch 3 months back, but for
      different reasons.
      
      The reasons we want to get rid of this spin_trylock() are :
      
      1) Under high qdisc pressure, the spin_trylock() has almost no
      chance to succeed.
      
      2) We loop multiple times in the softirq handler, eventually reaching
      the max retry count (10), and we schedule ksoftirqd.

      Since we want to adhere more strictly to ksoftirqd being woken up in
      the future (https://lwn.net/Articles/687617/), it is better to avoid
      spurious wakeups.

      3) Calls to __netif_reschedule() dirty the cache line containing
      q->next_sched, slowing down the owner of the qdisc.

      4) RT kernels cannot use the spin_trylock() here.
      
      With the help of busylock, we get the qdisc spinlock fast enough, and
      the trylock trick brings only a performance penalty.
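      The shape of the change, modeled with pthreads (the real code operates
      on the qdisc root spinlock and __netif_reschedule(); this only
      illustrates the control flow):

          #include <pthread.h>
          #include <stdio.h>

          static pthread_mutex_t root_lock = PTHREAD_MUTEX_INITIALIZER;
          static int reschedules;

          static void tx_action_old(void)
          {
              /* trylock: under contention this fails, reschedules the qdisc
               * (dirtying its cache line) and may end up waking ksoftirqd */
              if (pthread_mutex_trylock(&root_lock) != 0) {
                  reschedules++;
                  return;
              }
              /* ... run the qdisc ... */
              pthread_mutex_unlock(&root_lock);
          }

          static void tx_action_new(void)
          {
              /* plain lock: busylock already bounds the wait, so just take it */
              pthread_mutex_lock(&root_lock);
              /* ... run the qdisc ... */
              pthread_mutex_unlock(&root_lock);
          }

          int main(void)
          {
              tx_action_old();
              tx_action_new();
              printf("reschedules: %d\n", reschedules);
              return 0;
          }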
      
      Depending on qdisc setup, I observed a gain of up to 19 % in qdisc
      performance (1016600 pps instead of 853400 pps, using prio+tbf+fq_codel)
      
      ("mpstat -I SCPU 1" is much happier now)
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Tom Herbert <tom@herbertland.com>
      Acked-by: Tom Herbert <tom@herbertland.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 06 Jun 2016, 1 commit
    • net: disable fragment reassembly if high_thresh is zero · 30759219
      Michal Kubeček authored
      Before commit 6d7b857d ("net: use lib/percpu_counter API for
      fragmentation mem accounting"), setting the reassembly high threshold
      to 0 prevented fragment reassembly, as the first fragment would always
      be evicted before the second could be added to the queue. While
      inefficient, some users apparently relied on this behavior.
      
      Since the commit mentioned above, a percpu counter is used for
      reassembly memory accounting, and its high batch size avoids taking
      the slow path in most common scenarios. As a result, a whole
      full-sized packet can be reassembled without the percpu counter's main
      counter changing its value, so that even with high_thresh set to 0,
      fragmented packets can still be reassembled and processed.
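      A toy percpu counter showing why the global value can stay at zero
      (the batch size and layout are invented for the example):

          #include <stdio.h>

          #define NCPU  4
          #define BATCH 32768   /* assumed batch; the real one differs */

          struct pcpu_counter {
              long count;         /* global value the fast path compares */
              long local[NCPU];   /* per-CPU deltas */
          };

          static void pcpu_add(struct pcpu_counter *c, int cpu, long delta)
          {
              long v = c->local[cpu] + delta;

              if (v >= BATCH || v <= -BATCH) {   /* fold into the global count */
                  c->count += v;
                  c->local[cpu] = 0;
              } else {
                  c->local[cpu] = v;
              }
          }

          int main(void)
          {
              struct pcpu_counter mem = { 0 };

              /* charge two fragments of a packet on CPU 0 */
              pcpu_add(&mem, 0, 1500);
              pcpu_add(&mem, 0, 1500);

              /* the global count is still 0, so a "mem > high_thresh" test
               * never fires even when high_thresh is 0, which is why the
               * explicit check is needed */
              printf("global count = %ld\n", mem.count);
              return 0;
          }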
      
      Add an explicit check preventing reassembly if the high threshold is
      zero.
      Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 05 Jun 2016, 11 commits
  5. 04 Jun 2016, 1 commit