1. 04 March 2019, 1 commit
    • sch_cake: Permit use of connmarks as tin classifiers · 0b5c7efd
      Committed by Kevin Darbyshire-Bryant
      Add flag 'FWMARK' to enable use of firewall connmarks as tin selector.
      The connmark (skbuff->mark) needs to be in the range 1->tin_cnt, i.e.
      for diffserv3 the mark needs to be 1->3.
      
      Background
      
      Typically CAKE uses DSCP as the basis for tin selection.  DSCP values
      are relatively easily changed as part of the egress path, usually with
      iptables & the mangle table, ingress is more challenging.  CAKE is often
      used on the WAN interface of a residential gateway where passthrough of
      DSCP from the ISP is either missing or set to unhelpful values thus use
      of ingress DSCP values for tin selection isn't helpful in that
      environment.
      
      An approach to solving the ingress tin selection problem is to use
      CAKE's understanding of tc filters.  Naive tc filters could match on
      source/destination port numbers and force tin selection that way, but
      multiple filters don't scale particularly well as each filter must be
      traversed whether it matches or not. e.g. a simple example to map 3
      firewall marks to tins:
      
      MAJOR=$( tc qdisc show dev $DEV | head -1 | awk '{print $3}' )
      tc filter add dev $DEV parent $MAJOR protocol all handle 0x01 fw action skbedit priority ${MAJOR}1
      tc filter add dev $DEV parent $MAJOR protocol all handle 0x02 fw action skbedit priority ${MAJOR}2
      tc filter add dev $DEV parent $MAJOR protocol all handle 0x03 fw action skbedit priority ${MAJOR}3
      
      Another option is to use eBPF cls_act with tc filters e.g.
      
      MAJOR=$( tc qdisc show dev $DEV | head -1 | awk '{print $3}' )
      tc filter add dev $DEV parent $MAJOR bpf da obj my-bpf-fwmark-to-class.o
      
      This has the disadvantages of a) needing someone to write & maintain
      the bpf program, b) a bpf toolchain to compile it and c) needing to
      hardcode the major number in the bpf program so it matches the cake
      instance (or forcing the cake instance to a particular major number)
      since the major number cannot be passed to the bpf program via tc
      command line.
      
      As already hinted at by the previous examples, it would be helpful
      to associate tins with something that survives the Internet path and
      ideally allows tin selection on both egress and ingress.  Netfilter's
      conntrack permits setting an identifying mark on a connection which
      can also be restored to an ingress packet with tc action connmark e.g.
      
      tc filter add dev eth0 parent ffff: protocol all prio 10 u32 \
      	match u32 0 0 flowid 1:1 action connmark action mirred egress redirect dev ifb1
      
      Since tc's connmark action restores the connmark into skb->mark, all of
      the previous solutions build upon it and, in one form or another, copy
      that mark to the skb->priority field, where CAKE picks it up.
      
      This change cuts out at least one of the (less intuitive &
      non-scalable) middlemen and permits direct access to skb->mark.
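      A sketch of the resulting usage (hedged: the iproute2 keyword for the
      new flag is assumed to be 'fwmark'):
      
      # Set a connmark 1..3 on egress; conntrack carries it for the
      # return path, where it can be restored with 'action connmark'.
      iptables -t mangle -A POSTROUTING -p udp --dport 53 -j CONNMARK --set-mark 3
      # Let CAKE use skb->mark directly as the tin selector.
      tc qdisc replace dev eth0 root cake bandwidth 20Mbit diffserv3 fwmark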
      Signed-off-by: Kevin Darbyshire-Bryant <ldir@darbyshire-bryant.me.uk>
      Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 26 February 2019, 1 commit
  3. 17 November 2018, 2 commits
    • net: sched: gred: allow manipulating per-DP RED flags · 72111015
      Committed by Jakub Kicinski
      Allow users to set and dump RED flags (ECN enabled and harddrop)
      on a per-virtual-queue basis.  Validation of attributes is split
      from changes to make sure we won't have to undo previous operations
      when we find out the configuration is invalid.
      
      The objective is to allow changing per-Qdisc parameters without
      overwriting the per-vq configured flags.
      
      Old user space will not pass the TCA_GRED_VQ_FLAGS attribute and
      per-Qdisc flags will always get propagated to the virtual queues.
      
      New user space which wants to make use of per-vq flags should set
      per-Qdisc flags to 0 and then configure per-vq flags as it
      sees fit.  Once per-vq flags are set per-Qdisc flags can't be
      changed to non-zero.  Vice versa - if the per-Qdisc flags are
      non-zero the TCA_GRED_VQ_FLAGS attribute has to either be omitted
      or set to the same value as per-Qdisc flags.
      
      Update per-Qdisc parameters:
      per-Qdisc | per-VQ | result
              0 |      0 | all vq flags updated
              0 |  non-0 | error (vq flags in use)
          non-0 |      0 | -- impossible --
          non-0 |  non-0 | all vq flags updated
      
      Update per-VQ state (flags parameter not specified):
         no change to flags
      
      Update per-VQ state (flags parameter set):
      per-Qdisc | per-VQ | result
              0 |   any  | per-vq flags updated
          non-0 |      0 | -- impossible --
          non-0 |  non-0 | error (per-Qdisc flags in use)
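      As a hypothetical invocation (assuming matching iproute2 support for
      per-DP 'ecn'/'harddrop' keywords), one could leave the per-Qdisc flags
      at 0 and enable ECN on DP 0 only:
      
      tc qdisc add dev eth0 root gred setup vqs 4 default 1
      tc qdisc change dev eth0 root gred limit 60KB min 15KB max 45KB \
              burst 20 avpkt 1000 bandwidth 10Mbit DP 0 probability 0.02 ecn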
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Reviewed-by: John Hurley <john.hurley@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: sched: gred: provide a better structured dump and expose stats · 80e22e96
      Committed by Jakub Kicinski
      Currently all GRED's virtual queue data is dumped in a single
      array in a single attribute.  This makes it pretty much impossible
      to add new fields.  In order to expose more detailed stats add a
      new set of attributes.  We can now expose the 64 bit value of bytesin
      and all the mark stats which were not part of the original design.
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Reviewed-by: John Hurley <john.hurley@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 12 November 2018, 1 commit
  5. 05 October 2018, 1 commit
    • tc: Add support for configuring the taprio scheduler · 5a781ccb
      Committed by Vinicius Costa Gomes
      This traffic scheduler allows traffic class states (transmission
      allowed/not allowed, in the simplest case) to be scheduled, according
      to a pre-generated time sequence. This is the basis of the IEEE
      802.1Qbv specification.
      
      Example configuration:
      
      tc qdisc replace dev enp3s0 parent root handle 100 taprio \
                num_tc 3 \
      	  map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
      	  queues 1@0 1@1 2@2 \
      	  base-time 1528743495910289987 \
      	  sched-entry S 01 300000 \
      	  sched-entry S 02 300000 \
      	  sched-entry S 04 300000 \
      	  clockid CLOCK_TAI
      
      The configuration format is similar to mqprio. The main difference is
      the presence of a schedule, built by multiple "sched-entry"
      definitions; each entry has the following format:
      
           sched-entry <CMD> <GATE MASK> <INTERVAL>
      
      The only supported <CMD> is "S", which means "SetGateStates",
      following the IEEE 802.1Qbv-2015 definition (Table 8-6). <GATE MASK>
      is a bitmask where each bit is associated with a traffic class, so
      bit 0 (the least significant bit) being "on" means that traffic class
      0 is "active" for that schedule entry. <INTERVAL> is a time duration
      in nanoseconds that specifies for how long that state defined by <CMD>
      and <GATE MASK> should be held before moving to the next entry.
      
      This schedule is circular, that is, after the last entry is executed
      it starts from the first one, indefinitely.
      
      The other parameters can be defined as follows:
      
       - base-time: specifies the instant when the schedule starts. If
         'base-time' is a time in the past, the schedule will start at
      
                base-time + (N * cycle-time)
      
         where N is the smallest integer so the resulting time is greater
         than "now", and "cycle-time" is the sum of all the intervals of the
         entries in the schedule (see the worked example below);
      
       - clockid: specifies the reference clock to be used;
      
      The parameters should be similar to what the IEEE 802.1Q family of
      specifications defines.
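      For illustration, the start-time selection above can be reproduced with
      plain shell arithmetic (a sketch assuming base-time lies in the past;
      the ~37s TAI/UTC offset is ignored for brevity):
      
      CYCLE=$((3 * 300000))             # sum of the sched-entry intervals, in ns
      BASE=1528743495910289987          # base-time from the example above
      NOW=$(date +%s%N)                 # current time in ns (GNU date)
      N=$(( (NOW - BASE) / CYCLE + 1 )) # smallest N with BASE + N*CYCLE > now
      echo "schedule starts at $((BASE + N * CYCLE))"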
      Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  6. 03 September 2018, 1 commit
    • net/sched: fix type of htb statistics · b9de3963
      Committed by Florent Fourcot
      tokens and ctokens are defined as s64 in htb_class structure,
      and clamped to 32bits value during netlink dumps:
      
      cl->xstats.tokens = clamp_t(s64, PSCHED_NS2TICKS(cl->tokens),
                                  INT_MIN, INT_MAX);
      
      Defining them as u32 works, since userspace (tc) prints them as
      signed int, but a correct definition from the beginning is probably
      better.
      
      At the same time, the 'giants' structure member has been unused for
      years, so update the comment to mark it unused.
      Signed-off-by: Florent Fourcot <florent.fourcot@wifirst.fr>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 25 July 2018, 1 commit
    • net/sched: add skbprio scheduler · aea5f654
      Committed by Nishanth Devarajan
      Skbprio (SKB Priority Queue) is a queueing discipline that prioritizes packets
      according to their skb->priority field. Under congestion, already-enqueued lower
      priority packets will be dropped to make space available for higher priority
      packets. Skbprio was conceived as a solution for denial-of-service defenses that
      need to route packets with different priorities as a means to overcome DoS
      attacks.
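      A minimal usage sketch, assuming the iproute2 side exposes the qdisc
      as 'skbprio' with a 'limit' parameter:
      
      # Prioritize by skb->priority; under congestion the lowest priority
      # packets already queued are dropped first.
      tc qdisc add dev eth0 root skbprio limit 64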
      
      v5
      *Do not reference qdisc_dev(sch)->tx_queue_len for setting limit. Instead set
      default sch->limit to 64.
      
      v4
      *Drop Documentation/networking/sch_skbprio.txt doc file to move it to tc man
      page for Skbprio, in iproute2.
      
      v3
      *Drop max_limit parameter in struct skbprio_sched_data and instead use
      sch->limit.
      
      *Reference qdisc_dev(sch)->tx_queue_len only once, during initialisation for
      qdisc (previously being referenced every time qdisc changes).
      
      *Move qdisc's detailed description from in-code to Documentation/networking.
      
      *When qdisc is saturated, enqueue incoming packet first before dequeueing
      lowest priority packet in queue - improves usage of call stack registers.
      
      *Introduce and use overlimit stat to keep track of number of dropped packets.
      
      v2
      *Use skb->priority field rather than DS field. Rename queueing discipline as
      SKB Priority Queue (previously Gatekeeper Priority Queue).
      
      *Queueing discipline is made classful to expose Skbprio's internal priority
      queues.
      Signed-off-by: Nishanth Devarajan <ndev2021@gmail.com>
      Reviewed-by: Sachin Paryani <sachin.paryani@gmail.com>
      Reviewed-by: Cody Doucette <doucette@bu.edu>
      Reviewed-by: Michel Machado <michel@digirati.com.br>
      Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 11 July 2018, 1 commit
    • sched: Add Common Applications Kept Enhanced (cake) qdisc · 046f6fd5
      Committed by Toke Høiland-Jørgensen
      sch_cake targets the home router use case and is intended to squeeze the
      most bandwidth and latency out of even the slowest ISP links and routers,
      while presenting an API simple enough that even an ISP can configure it.
      
      Example of use on a cable ISP uplink:
      
      tc qdisc add dev eth0 cake bandwidth 20Mbit nat docsis ack-filter
      
      To shape a cable download link (ifb and tc-mirred setup elided)
      
      tc qdisc add dev ifb0 cake bandwidth 200mbit nat docsis ingress wash
      
      CAKE is filled with:
      
      * A hybrid Codel/Blue AQM algorithm, "Cobalt", tied to an FQ_Codel
        derived Flow Queuing system, which autoconfigures based on the bandwidth.
      * A novel "triple-isolate" mode (the default) which balances per-host
        and per-flow FQ even through NAT.
      * A deficit-based shaper that can also be used in an unlimited mode.
      * 8-way set-associative hashing to reduce flow collisions to a minimum.
      * A reasonable interpretation of various diffserv latency/loss tradeoffs.
      * Support for zeroing diffserv markings for entering and exiting traffic.
      * Support for interacting well with Docsis 3.0 shaper framing.
      * Extensive support for DSL framing types.
      * Support for ack filtering.
      * Extensive statistics for measuring loss, ECN markings, and latency
        variation.
      
      A paper describing the design of CAKE is available at
      https://arxiv.org/abs/1804.07617, and will be published at the 2018 IEEE
      International Symposium on Local and Metropolitan Area Networks (LANMAN).
      
      This patch adds the base shaper and packet scheduler, while subsequent
      commits add the optional (configurable) features. The full userspace API
      and most data structures are included in this commit, but options not
      understood in the base version will be ignored.
      
      Various versions have been baking as an out-of-tree build for kernel
      versions going back to 3.10, as the embedded router world has been
      running a few years behind mainline Linux. A stable version has been
      generally available on lede-17.01 and later.
      
      sch_cake replaces a combination of iptables, tc filter, htb and fq_codel
      in the sqm-scripts, with sane defaults and vastly simpler configuration.
      
      CAKE's principal author is Jonathan Morton, with contributions from
      Kevin Darbyshire-Bryant, Toke Høiland-Jørgensen, Sebastian Moeller,
      Ryan Mounce, Tony Ambardar, Dean Scarff, Nils Andreas Svee, Dave Täht,
      and Loganaden Velvindron.
      
      Testing from Pete Heist, Georgios Amanakis, and the many other members of
      the cake@lists.bufferbloat.net mailing list.
      
      tc -s qdisc show dev eth2
       qdisc cake 8017: root refcnt 2 bandwidth 1Gbit diffserv3 triple-isolate split-gso rtt 100.0ms noatm overhead 38 mpu 84
       Sent 51504294511 bytes 37724591 pkt (dropped 6, overlimits 64958695 requeues 12)
        backlog 0b 0p requeues 12
        memory used: 1053008b of 15140Kb
        capacity estimate: 970Mbit
        min/max network layer size:           28 /    1500
        min/max overhead-adjusted size:       84 /    1538
        average network hdr offset:           14
                          Bulk  Best Effort        Voice
         thresh      62500Kbit        1Gbit      250Mbit
         target          5.0ms        5.0ms        5.0ms
         interval      100.0ms      100.0ms      100.0ms
         pk_delay          5us          5us          6us
         av_delay          3us          2us          2us
         sp_delay          2us          1us          1us
         backlog            0b           0b           0b
         pkts          3164050     25030267      9530280
         bytes      3227519915  35396974782  12879808898
         way_inds            0            8            0
         way_miss           21          366           25
         way_cols            0            0            0
         drops               5            0            1
         marks               0            0            0
         ack_drop            0            0            0
         sp_flows            1            3            0
         bk_flows            0            1            1
         un_flows            0            0            0
         max_len         68130        68130        68130
      Tested-by: Pete Heist <peteheist@gmail.com>
      Tested-by: Georgios Amanakis <gamanakis@gmail.com>
      Signed-off-by: Dave Taht <dave.taht@gmail.com>
      Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 04 July 2018, 2 commits
    • net/sched: Add HW offloading capability to ETF · 88cab771
      Committed by Jesus Sanchez-Palencia
      Add infra so etf qdisc supports HW offload of time-based transmission.
      
      For hw offload, the time sorted list is still used, so packets are
      always dequeued in order of txtime.
      
      Example:
      
      $ tc qdisc replace dev enp2s0 parent root handle 100 mqprio num_tc 3 \
                 map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 queues 1@0 1@1 2@2 hw 0
      
      $ tc qdisc add dev enp2s0 parent 100:1 etf offload delta 100000 \
      	   clockid CLOCK_REALTIME
      
      In this example, the Qdisc will use HW offload for the control of the
      transmission time through the network adapter. The hrtimer used for
      packets scheduling inside the qdisc will use the clockid CLOCK_REALTIME
      as reference and packets leave the Qdisc "delta" (100000) nanoseconds
      before their transmission time. Because this will be using HW offload and
      since dynamic clocks are not supported by the hrtimer, the system clock
      and the PHC clock must be synchronized for this mode to behave as
      expected.
      Signed-off-by: Jesus Sanchez-Palencia <jesus.sanchez-palencia@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/sched: Introduce the ETF Qdisc · 25db26a9
      Committed by Vinicius Costa Gomes
      The ETF (Earliest TxTime First) qdisc uses the information added
      earlier in this series (the socket option SO_TXTIME and the new
      role of sk_buff->tstamp) to schedule packets transmission based
      on absolute time.
      
      For some workloads, just bandwidth enforcement is not enough, and
      precise control of the transmission of packets is necessary.
      
      Example:
      
      $ tc qdisc replace dev enp2s0 parent root handle 100 mqprio num_tc 3 \
                 map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 queues 1@0 1@1 2@2 hw 0
      
      $ tc qdisc add dev enp2s0 parent 100:1 etf delta 100000 \
                 clockid CLOCK_TAI
      
      In this example, the Qdisc will provide SW best-effort for the control
      of the transmission time to the network adapter. The timestamp in the
      socket will be in reference to the clockid CLOCK_TAI, and packets
      will leave the qdisc "delta" (100000) nanoseconds before their
      transmission time.
      
      The ETF qdisc will buffer packets sorted by their txtime. It will drop
      packets on enqueue() if their skbuff clockid does not match the clock
      reference of the Qdisc. Moreover, on dequeue(), a packet will be dropped
      if it expires while being enqueued.
      
      The qdisc also supports the SO_TXTIME deadline mode. For this mode, it
      will dequeue a packet as soon as possible and change the skb timestamp
      to 'now' during etf_dequeue().
      
      Note that both the qdisc's and the SO_TXTIME ABIs allow for a clockid
      to be configured, but it's been decided that usage of CLOCK_TAI should
      be enforced until we decide to allow for other clockids to be used.
      The rationale here is that PTP times are usually in the TAI scale, thus
      no other clocks should be necessary. For now, the qdisc will return
      EINVAL if any clocks other than CLOCK_TAI are used.
      Signed-off-by: Jesus Sanchez-Palencia <jesus.sanchez-palencia@intel.com>
      Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 28 June 2018, 1 commit
    • netem: slotting with non-uniform distribution · 0a9fe5c3
      Committed by Yousuk Seung
      Extend slotting with support for non-uniform distributions. This is
      similar to netem's non-uniform distribution delay feature.
      
      Commit f043efeae2f1 ("netem: support delivering packets in delayed
      time slots") added the slotting feature to approximate the behaviors
      of media with packet aggregation but only supported a uniform
      distribution for delays between transmission attempts. Tests with TCP
      BBR with emulated wifi links with non-uniform distributions produced
      more useful results.
      
      Syntax:
         slot dist DISTRIBUTION DELAY JITTER [packets MAX_PACKETS] \
            [bytes MAX_BYTES]
      
      The syntax and use of the distribution table is the same as in the
      non-uniform distribution delay feature. A file DISTRIBUTION must be
      present in TC_LIB_DIR (e.g. /usr/lib/tc) containing numbers scaled by
      NETEM_DIST_SCALE. A random value x is selected from the table and the
      resulting delay is DELAY + (x * JITTER). Correlation between values is
      not supported.
      
      Examples:
        Normal distribution delay with mean = 800us and stdev = 100us.
        > tc qdisc add dev eth0 root netem slot dist normal 800us 100us
      
        Optionally set the max slot size in bytes and/or packets.
        > tc qdisc add dev eth0 root netem slot dist normal 800us 100us \
          bytes 64k packets 42
      Signed-off-by: Yousuk Seung <ysseung@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  11. 16 December 2017, 1 commit
  12. 13 November 2017, 2 commits
    • netem: support delivering packets in delayed time slots · 836af83b
      Committed by Dave Taht
      Slotting is a crude approximation of the behaviors of shared media such
      as cable, wifi, and LTE, which gather up a bunch of packets within a
      varying delay window and deliver them, relative to that, nearly all at
      once.
      
      It works within the existing loss, duplication, jitter and delay
      parameters of netem. Some amount of inherent latency must be specified,
      regardless.
      
      The new "slot" parameter specifies a minimum and maximum delay between
      transmission attempts.
      
      The "bytes" and "packets" parameters can be used to limit the amount of
      information transferred per slot.
      
      Examples of use:
      
      tc qdisc add dev eth0 root netem delay 200us \
               slot 800us 10ms bytes 64k packets 42
      
      A more correct example, using stacked netem instances and a packet limit
      to emulate a tail drop wifi queue with slots and variable packet
      delivery, with a 200Mbit isochronous underlying rate, and 20ms path
      delay:
      
      tc qdisc add dev eth0 root handle 1: netem delay 20ms rate 200mbit \
               limit 10000
      tc qdisc add dev eth0 parent 1:1 handle 10:1 netem delay 200us \
               slot 800us 10ms bytes 64k packets 42 limit 512
      Signed-off-by: Dave Taht <dave.taht@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • netem: add uapi to express delay and jitter in nanoseconds · 99803171
      Committed by Dave Taht
      netem userspace has long relied on a horrible /proc/net/psched hack
      to translate the current notion of "ticks" to nanoseconds.
      
      Expressing latency and jitter instead, in well defined nanoseconds,
      increases the dynamic range of emulated delays and jitter in netem.
      
      It will also ease a transition where reducing a tick to nsec
      equivalence would constrain the max delay in prior versions of
      netem to only 4.3 seconds.
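      A hedged usage sketch, assuming an iproute2 that parses an 'ns' time
      suffix and emits the new nanosecond attributes:
      
      # 10.5us delay with 250ns jitter, below the old tick resolution
      tc qdisc add dev eth0 root netem delay 10500ns 250ns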
      Signed-off-by: Dave Taht <dave.taht@gmail.com>
      Suggested-by: Eric Dumazet <edumazet@google.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 08 November 2017, 1 commit
  14. 02 November 2017, 1 commit
    • License cleanup: add SPDX license identifier to uapi header files with no license · 6f52b16c
      Committed by Greg Kroah-Hartman
      Many user space API headers are missing licensing information, which
      makes it hard for compliance tools to determine the correct license.
      
      By default, files without license information fall under the default
      license of the kernel, which is GPLv2.  Marking them GPLv2 would exclude
      them from being included in non-GPLv2 code, which is obviously not
      intended. The user space API headers fall under the syscall exception
      which is in the kernel's COPYING file:
      
         NOTE! This copyright does *not* cover user programs that use kernel
         services by normal system calls - this is merely considered normal use
         of the kernel, and does *not* fall under the heading of "derived work".
      
      otherwise syscall usage would not be possible.
      
      Update the files which contain no license information with an SPDX
      license identifier.  The chosen identifier is 'GPL-2.0 WITH
      Linux-syscall-note' which is the officially assigned identifier for the
      Linux syscall exception.  SPDX license identifiers are a legally binding
      shorthand, which can be used instead of the full boiler plate text.
      
      This patch is based on work done by Thomas Gleixner and Kate Stewart and
      Philippe Ombredanne.  See the previous patch in this series for the
      methodology of how this patch was researched.
      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  15. 28 October 2017, 1 commit
  16. 17 October 2017, 1 commit
  17. 14 October 2017, 1 commit
    • mqprio: Introduce new hardware offload mode and shaper in mqprio · 4e8b86c0
      Committed by Amritha Nambiar
      The offload types currently supported in mqprio are 0 (no offload) and
      1 (offload only TCs) by setting these values for the 'hw' option. If
      offloads are supported by setting the 'hw' option to 1, the default
      offload mode is 'dcb' where only the TC values are offloaded to the
      device. This patch introduces a new hardware offload mode called
      'channel' with 'hw' set to 1 in mqprio which makes full use of the
      mqprio options, the TCs, the queue configurations and the QoS parameters
      for the TCs. This is achieved through a new netlink attribute for the
      'mode' option which takes values such as 'dcb' (default) and 'channel'.
      The 'channel' mode also supports QoS attributes for traffic class such as
      minimum and maximum values for bandwidth rate limits.
      
      This patch enables configuring additional HW shaper attributes associated
      with a traffic class. Currently the shaper for bandwidth rate limiting is
      supported which takes options such as minimum and maximum bandwidth rates
      and are offloaded to the hardware in the 'channel' mode. The min and max
      limits for bandwidth rates are provided by the user along with the TCs
      and the queue configurations when creating the mqprio qdisc. The interface
      can be extended to support new HW shapers in future through the 'shaper'
      attribute.
      
      Introduces a new data structure 'tc_mqprio_qopt_offload' for offloading
      mqprio queue options and use this to be shared between the kernel and
      device driver. This contains a copy of the existing data structure
      for mqprio queue options. This new data structure can be extended when
      adding new attributes for traffic class such as mode, shaper, shaper
      parameters (bandwidth rate limits). The existing data structure for mqprio
      queue options will be shared between the kernel and userspace.
      
      Example:
        # tc qdisc add dev eth0 root mqprio num_tc 2 map 0 0 0 0 1 1 1 1 \
          queues 4@0 4@4 hw 1 mode channel shaper bw_rlimit \
          min_rate 1Gbit 2Gbit max_rate 4Gbit 5Gbit
      
      To dump the bandwidth rates:
      
      qdisc mqprio 804a: root  tc 2 map 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0
                   queues:(0:3) (4:7)
                   mode:channel
                   shaper:bw_rlimit   min_rate:1Gbit 2Gbit   max_rate:4Gbit 5Gbit
      Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  18. 16 March 2017, 1 commit
  19. 23 September 2016, 1 commit
    • net_sched: sch_fq: account for schedule/timers drifts · fefa569a
      Committed by Eric Dumazet
      It looks like the following patch can make FQ very precise, even in VM
      or stressed hosts. It matters at high pacing rates.
      
      We take into account the difference between the time that was programmed
      when the last packet was sent and the current time (a drift of tens of
      usecs is often observed).
      
      Add an EWMA of the unthrottle latency to help diagnostics.
      
      This latency is the difference between the current time and the oldest
      packet in the delayed RB-tree. This accounts for the high resolution
      timer latency, but can differ under stress, as fq_check_throttled() can
      opportunistically be called from a dequeue() called after an enqueue()
      for a different flow.
      
      Tested:
      // Start a 10Gbit flow
      $ netperf --google-pacing-rate 1250000000 -H lpaa24 -l 10000 -- -K bbr &
      
      Before patch :
      $ sar -n DEV 10 5 | grep eth0 | grep Average
      Average:         eth0  17106.04 756876.84   1102.75 1119049.02      0.00      0.00      0.52
      
      After patch :
      $ sar -n DEV 10 5 | grep eth0 | grep Average
      Average:         eth0  17867.00 800245.90   1151.77 1183172.12      0.00      0.00      0.52
      
      A new iproute2 tc can output the 'unthrottle latency' :
      
      $ tc -s qd sh dev eth0 | grep latency
        0 gc, 0 highprio, 32490767 throttled, 2382 ns latency
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  20. 21 September 2016, 1 commit
  21. 09 May 2016, 1 commit
    • fq_codel: add memory limitation per queue · 95b58430
      Committed by Eric Dumazet
      On small embedded routers, one wants to control maximal amount of
      memory used by fq_codel, instead of controlling number of packets or
      bytes, since GRO/TSO make these not practical.
      
      Assuming skb->truesize is accurate, we have to keep track of
      skb->truesize sum for skbs in queue.
      
      This patch adds a new TCA_FQ_CODEL_MEMORY_LIMIT attribute.
      
      I chose a default value of 32 MBytes, which looks reasonable even
      for heavy duty usages. (Prior fq_codel users should not be hurt
      when they upgrade their kernels)
      
      Two fields are added to tc_fq_codel_qd_stats to report :
       - Current memory usage
       - Number of drops caused by memory limits
      
      # tc qd replace dev eth1 root est 1sec 4sec fq_codel memory_limit 4M
      ..
      # tc -s -d qd sh dev eth1
      qdisc fq_codel 8008: root refcnt 257 limit 10240p flows 1024
       quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
       Sent 2083566791363 bytes 1376214889 pkt (dropped 4994406, overlimits 0
      requeues 21705223)
       rate 9841Mbit 812549pps backlog 3906120b 376p requeues 21705223
        maxpacket 68130 drop_overlimit 4994406 new_flow_count 28855414
        ecn_mark 0 memory_used 4190048 drop_overmemory 4994406
        new_flows_len 1 old_flows_len 177
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Cc: Dave Täht <dave.taht@gmail.com>
      Cc: Sebastian Möller <moeller0@gmx.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  22. 04 May 2016, 1 commit
    • fq_codel: add batch ability to fq_codel_drop() · 9d18562a
      Committed by Eric Dumazet
      In presence of inelastic flows and stress, we can call
      fq_codel_drop() for every packet entering fq_codel qdisc.
      
      fq_codel_drop() is quite expensive, as it does a linear scan
      of 4 KB of memory to find a fat flow.
      Once found, it drops the oldest packet of this flow.
      
      Instead of dropping a single packet, try to drop 50% of the backlog
      of this fat flow, with a configurable limit of 64 packets per round.
      
      TCA_FQ_CODEL_DROP_BATCH_SIZE is the new attribute to make this
      limit configurable.
      
      With this strategy the 4 KB search is amortized to a single cache line
      per drop [1], so fq_codel_drop() no longer appears at the top of kernel
      profile in presence of few inelastic flows.
      
      [1] Assuming a 64byte cache line, and 1024 buckets
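      A hedged usage sketch, assuming iproute2 exposes the new attribute as
      'drop_batch':
      
      tc qdisc replace dev eth0 root fq_codel limit 10240 flows 1024 drop_batch 64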
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: Dave Taht <dave.taht@gmail.com>
      Cc: Jonathan Morton <chromatix99@gmail.com>
      Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Acked-by: Dave Taht
      Signed-off-by: David S. Miller <davem@davemloft.net>
  23. 26 April 2016, 1 commit
  24. 11 January 2016, 1 commit
    • net, sched: add clsact qdisc · 1f211a1b
      Committed by Daniel Borkmann
      This work adds a generalization of the ingress qdisc as a qdisc holding
      only classifiers. The clsact qdisc works on ingress, but also on egress.
      In both cases, its execution happens without taking the qdisc lock, and
      the main difference for the egress part compared to the prior version of
      [1] is that this can be applied with _any_ underlying real egress qdisc
      (also classless ones).
      
      Besides solving the use-case of [1], that is, allowing for more programmability
      on assigning skb->priority for the mqprio case that is supported by most
      popular 10G+ NICs, it also opens up a lot more flexibility for other tc
      applications. The main work on classification can already be done at clsact
      egress time if the use-case allows and state stored for later retrieval
      f.e. again in skb->priority with major/minors (which is checked by most
      classful qdiscs before consulting tc_classify()) and/or in other skb fields
      like skb->tc_index for some light-weight post-processing to get to the
      eventual classid in case of a classful qdisc. Another use case is that
      the clsact egress part allows to have a central egress counterpart to
      the ingress classifiers, so that classifiers can easily share state (e.g.
      in cls_bpf via eBPF maps) for ingress and egress.
      
      Currently, default setups like mq + pfifo_fast would require for this to
      use, for example, prio qdisc instead (to get a tc_classify() run) and to
      duplicate the egress classifier for each queue. With clsact, it allows
      for leaving the setup as is, it can additionally assign skb->priority to
      put the skb in one of pfifo_fast's bands and it can share state with maps.
      Moreover, we can access the skb's dst entry (f.e. to retrieve tclassid)
      w/o the need to perform a skb_dst_force() to hold on to it any longer. In
      lwt case, we can also use this facility to setup dst metadata via cls_bpf
      (bpf_skb_set_tunnel_key()) without needing a real egress qdisc just for
      that (case of IFF_NO_QUEUE devices, for example).
      
      The realization can be done without any changes to the scheduler core
      framework. All it takes is that we have two a-priori defined minors/child
      classes, where we can mux between ingress and egress classifier list
      (dev->ingress_cl_list and dev->egress_cl_list, latter stored close to
      dev->_tx to avoid extra cacheline miss for moderate loads). The egress
      part is a bit similar modelled to handle_ing() and patched to a noop in
      case the functionality is not used. Both handlers are now called
      sch_handle_ingress() and sch_handle_egress(), code sharing among the two
      doesn't seem practical as there are various minor differences in both
      paths, so that making them conditional in a single handler would rather
      slow things down.
      
      Full compatibility to ingress qdisc is provided as well. Since both
      piggyback on TC_H_CLSACT, only one of them (ingress/clsact) can exist
      per netdevice, and thus ingress qdisc specific behaviour can be retained
      for user space. This means, either a user does 'tc qdisc add dev foo ingress'
      and configures ingress qdisc as usual, or the 'tc qdisc add dev foo clsact'
      alternative, where both, ingress and egress classifier can be configured
      as in the below example. ingress qdisc supports attaching classifier to any
      minor number whereas clsact has two fixed minors for muxing between the
      lists, therefore to not break user space setups, they are better done as
      two separate qdiscs.
      
      I decided to extend the sch_ingress module with clsact functionality so
      that commonly used code can be reused, the module is being aliased with
      sch_clsact so that it can be auto-loaded properly. The alternative would
      have been to add a flag when initializing ingress to alter its behaviour
      plus aliasing to a different name (as it's more than just ingress).
      However, the first would end up, based on the flag, choosing the new/old
      behaviour by calling different function implementations to handle each
      anyway, and the latter would require registering the ingress qdisc once
      again under a different alias. So, this really begs to provide a minimal,
      cleaner approach to have Qdisc_ops and Qdisc_class_ops of their own that
      share callbacks used by both.
      
      Example, adding qdisc:
      
         # tc qdisc add dev foo clsact
         # tc qdisc show dev foo
         qdisc mq 0: root
         qdisc pfifo_fast 0: parent :1 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
         qdisc pfifo_fast 0: parent :2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
         qdisc pfifo_fast 0: parent :3 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
         qdisc pfifo_fast 0: parent :4 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
         qdisc clsact ffff: parent ffff:fff1
      
      Adding filters (deleting, etc works analogous by specifying ingress/egress):
      
         # tc filter add dev foo ingress bpf da obj bar.o sec ingress
         # tc filter add dev foo egress  bpf da obj bar.o sec egress
         # tc filter show dev foo ingress
         filter protocol all pref 49152 bpf
         filter protocol all pref 49152 bpf handle 0x1 bar.o:[ingress] direct-action
         # tc filter show dev foo egress
         filter protocol all pref 49152 bpf
         filter protocol all pref 49152 bpf handle 0x1 bar.o:[egress] direct-action
      
      A 'tc filter show dev foo' or 'tc filter show dev foo parent ffff:' will
      show an empty list for clsact. Either using the parent names (ingress/egress)
      or specifying the full major/minor will then show the related filter lists.
      
      Prior work on a mqprio prequeue() facility [1] was done mainly by John Fastabend.
      
        [1] http://patchwork.ozlabs.org/patch/512949/
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  25. 13 May 2015, 1 commit
    • net_sched: gred: add TCA_GRED_LIMIT attribute · a3eb95f8
      Committed by David Ward
      In a GRED qdisc, if the default "virtual queue" (VQ) does not have drop
      parameters configured, then packets for the default VQ are not subjected
      to RED and are only dropped if the queue is larger than the net_device's
      tx_queue_len. This behavior is useful for WRED mode, since these packets
      will still influence the calculated average queue length and (therefore)
      the drop probability for all of the other VQs. However, for some drivers
      tx_queue_len is zero. In other cases the user may wish to make the limit
      the same for all VQs (including the default VQ with no drop parameters).
      
      This change adds a TCA_GRED_LIMIT attribute to set the GRED queue limit,
      in bytes, during qdisc setup. (This limit is in bytes to be consistent
      with the drop parameters.) The default limit is the same as for a bfifo
      queue (tx_queue_len * psched_mtu). If the drop parameters of any VQ are
      configured with a smaller limit than the GRED queue limit, that VQ will
      still observe the smaller limit instead.
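      A hedged usage sketch, assuming iproute2 exposes the new attribute as
      'limit' at setup time:
      
      # byte-based limit applying to every VQ, the default VQ included
      tc qdisc add dev eth0 root gred setup vqs 4 default 1 limit 60KB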
      Signed-off-by: David Ward <david.ward@ll.mit.edu>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  26. 11 May 2015, 1 commit
    • codel: add ce_threshold attribute · 80ba92fa
      Committed by Eric Dumazet
      For DCTCP or similar ECN based deployments on fabrics with shallow
      buffers, hosts are responsible for a good part of the buffering.
      
      This patch adds an optional ce_threshold to codel & fq_codel qdiscs,
      so that DCTCP can have feedback from queuing in the host.
      
      A DCTCP enabled egress port simply has a queue occupancy threshold
      above which ECT packets get CE marked.
      
      In codel language this translates to a sojourn time, so that one doesn't
      have to worry about bytes or bandwidth but delays.
      
      This makes the host an active participant in the health of the whole
      network.
      
      This also helps experimenting with DCTCP in a setup without a DCTCP
      compliant fabric.
      
      In the following example, ce_threshold is set to 1ms, and we can see
      from 'ldelay xxx us' that TCP is not trying to go around the 5ms codel
      target.
      
      Queue has more capacity to absorb inelastic bursts (say from UDP
      traffic), as queues are maintained to an optimal level.
      
      lpaa23:~# ./tc -s -d qd sh dev eth1
      qdisc mq 1: dev eth1 root
       Sent 87910654696 bytes 58065331 pkt (dropped 0, overlimits 0 requeues 42961)
       backlog 3108242b 364p requeues 42961
      qdisc codel 8063: dev eth1 parent 1:1 limit 1000p target 5.0ms ce_threshold 1.0ms interval 100.0ms
       Sent 7363778701 bytes 4863809 pkt (dropped 0, overlimits 0 requeues 5503)
       rate 2348Mbit 193919pps backlog 255866b 46p requeues 5503
        count 0 lastcount 0 ldelay 1.0ms drop_next 0us
        maxpacket 68130 ecn_mark 0 drop_overlimit 0 ce_mark 72384
      qdisc codel 8064: dev eth1 parent 1:2 limit 1000p target 5.0ms ce_threshold 1.0ms interval 100.0ms
       Sent 7636486190 bytes 5043942 pkt (dropped 0, overlimits 0 requeues 5186)
       rate 2319Mbit 191538pps backlog 207418b 64p requeues 5186
        count 0 lastcount 0 ldelay 694us drop_next 0us
        maxpacket 68130 ecn_mark 0 drop_overlimit 0 ce_mark 69873
      qdisc codel 8065: dev eth1 parent 1:3 limit 1000p target 5.0ms ce_threshold 1.0ms interval 100.0ms
       Sent 11569360142 bytes 7641602 pkt (dropped 0, overlimits 0 requeues 5554)
       rate 3041Mbit 251096pps backlog 210446b 59p requeues 5554
        count 0 lastcount 0 ldelay 889us drop_next 0us
        maxpacket 68130 ecn_mark 0 drop_overlimit 0 ce_mark 37780
      ...
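      For reference, a per-queue configuration matching the dump above might
      be applied as follows (a sketch; the mq child handle is assumed):
      
      tc qdisc replace dev eth1 parent 1:1 codel limit 1000 target 5ms ce_threshold 1ms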
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Florian Westphal <fw@strlen.de>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Glenn Judd <glenn.judd@morganstanley.com>
      Cc: Nandita Dukkipati <nanditad@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  27. 05 February 2015, 1 commit
    • pkt_sched: fq: better control of DDOS traffic · 06eb395f
      Committed by Eric Dumazet
      FQ has a fast path for skbs attached to a socket, as it does not
      have to compute a flow hash. But for other packets, FQ being non
      stochastic means that hosts exposed to random Internet traffic can
      allocate millions of flow structures (104 bytes each) pretty easily.
      Not only can the host OOM, but lookups in RB trees can take too much
      cpu and memory resources.
      
      This patch adds a new attribute, orphan_mask, that adds the
      possibility of having a stochastic hash for orphaned skbs.
      
      Its default value is 1024 slots, to mimic SFQ behavior.
      
      Note: This does not apply to locally generated TCP traffic,
      and no locally generated traffic will share a flow structure
      with another perfect or stochastic flow.
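      A hedged usage sketch, assuming iproute2 exposes the attribute as
      'orphan_mask':
      
      # keep the default 1024 slots explicit (mask of 1023)
      tc qdisc replace dev eth0 root fq orphan_mask 1023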
      
      This patch also handles the specific case of SYNACK messages:
      
      They are attached to the listener socket, and therefore all map
      to a single hash bucket. If the listener has set SO_MAX_PACING_RATE,
      hoping to have newly accepted sockets inherit this rate, SYNACKs
      might be paced and even dropped.
      
      This is very similar to an internal patch Google has used for more
      than one year.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  28. 07 January 2014, 1 commit
    • net: pkt_sched: PIE AQM scheme · d4b36210
      Committed by Vijay Subramanian
      Proportional Integral controller Enhanced (PIE) is a scheduler to address the
      bufferbloat problem.
      
      From the IETF draft below:
      " Bufferbloat is a phenomenon where excess buffers in the network cause high
      latency and jitter. As more and more interactive applications (e.g. voice over
      IP, real time video streaming and financial transactions) run in the Internet,
      high latency and jitter degrade application performance. There is a pressing
      need to design intelligent queue management schemes that can control latency and
      jitter; and hence provide desirable quality of service to users.
      
      We present here a lightweight design, PIE(Proportional Integral controller
      Enhanced) that can effectively control the average queueing latency to a target
      value. Simulation results, theoretical analysis and Linux testbed results have
      shown that PIE can ensure low latency and achieve high link utilization under
      various congestion situations. The design does not require per-packet
      timestamp, so it incurs very small overhead and is simple enough to implement
      in both hardware and software.  "
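      A hedged configuration sketch (keywords assumed from the iproute2 PIE
      support):
      
      tc qdisc add dev eth0 root pie limit 1000 target 20ms tupdate 30ms ecn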
      
      Many thanks to Dave Taht for extensive feedback, reviews, testing and
      suggestions. Thanks also to Stephen Hemminger and Eric Dumazet for reviews and
      suggestions.  Naeem Khademi and Dave Taht independently contributed to ECN
      support.
      
      For more information, please see technical paper about PIE in the IEEE
      Conference on High Performance Switching and Routing 2013. A copy of the paper
      can be found at ftp://ftpeng.cisco.com/pie/.
      
      Please also refer to the IETF draft submission at
      http://tools.ietf.org/html/draft-pan-tsvwg-pie-00
      
      All relevant code, documents and test scripts and results can be found at
      ftp://ftpeng.cisco.com/pie/.
      
      For problems with the iproute2/tc or Linux kernel code, please contact Vijay
      Subramanian (vijaynsu@cisco.com or subramanian.vijay@gmail.com) Mythili Prabhu
      (mysuryan@cisco.com)
      Signed-off-by: Vijay Subramanian <subramanian.vijay@gmail.com>
      Signed-off-by: Mythili Prabhu <mysuryan@cisco.com>
      CC: Dave Taht <dave.taht@bufferbloat.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  29. 01 January 2014, 1 commit
  30. 27 December 2013, 1 commit
  31. 20 December 2013, 1 commit
    • net-qdisc-hhf: Heavy-Hitter Filter (HHF) qdisc · 10239edf
      Committed by Terry Lam
      This patch implements the first size-based qdisc that attempts to
      differentiate between small flows and heavy-hitters.  The goal is to
      catch the heavy-hitters and move them to a separate queue with less
      priority so that bulk traffic does not affect the latency of critical
      traffic.  Currently "less priority" means less weight (2:1 in
      particular) in a Weighted Deficit Round Robin (WDRR) scheduler.
      
      In essence, this patch addresses the "delay-bloat" problem due to
      bloated buffers. In some systems, large queues may be necessary for
      obtaining CPU efficiency, or due to the presence of unresponsive
      traffic like UDP, or just a large number of connections with each
      having a small amount of outstanding traffic. In these circumstances,
      HHF aims to reduce the HoL blocking for latency sensitive traffic,
      while not impacting the queues built up by bulk traffic.  HHF can also
      be used in conjunction with other AQM mechanisms such as CoDel.
      
      To capture heavy-hitters, we implement the "multi-stage filter" design
      in the following paper:
      C. Estan and G. Varghese, "New Directions in Traffic Measurement and
      Accounting", in ACM SIGCOMM, 2002.
      
      Some configurable qdisc settings through 'tc':
      - hhf_reset_timeout: period to reset counter values in the multi-stage
                           filter (default 40ms)
      - hhf_admit_bytes:   threshold to classify heavy-hitters
                           (default 128KB)
      - hhf_evict_timeout: threshold to evict idle heavy-hitters
                           (default 1s)
      - hhf_non_hh_weight: Weighted Deficit Round Robin (WDRR) weight for
                           non-heavy-hitters (default 2)
      - hh_flows_limit:    max number of heavy-hitter flow entries
                           (default 2048)
      
      Note that the ratio between hhf_admit_bytes and hhf_reset_timeout
      reflects the bandwidth of heavy-hitters that we attempt to capture
      (25Mbps with the above default settings).
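      A hedged usage sketch, mapping the settings above onto assumed tc
      keywords:
      
      tc qdisc add dev eth0 root hhf hh_limit 2048 reset_timeout 40ms \
              admit_bytes 128kb evict_timeout 1s non_hh_weight 2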
      
      The false negative rate (heavy-hitter flows getting away unclassified)
      is zero by the design of the multi-stage filter algorithm.
      With 100 heavy-hitter flows, using four hashes and 4000 counters yields
      a false positive rate (non-heavy-hitters mistakenly classified as
      heavy-hitters) of less than 1e-4.
      Signed-off-by: Terry Lam <vtlam@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  32. 16 November 2013, 2 commits
    • pkt_sched: fq: fix pacing for small frames · f52ed899
      Committed by Eric Dumazet
      For performance reasons, sch_fq tried hard to not setup timers for every
      sent packet, using a quantum based heuristic : A delay is setup only if
      the flow exhausted its credit.
      
      Problem is that application limited flows can refill their credit
      for every queued packet, and they can evade pacing.
      
      This problem can also be triggered when TCP flows use small MSS values,
      as TSO auto sizing builds packets that are smaller than the default fq
      quantum (3028 bytes).
      
      This patch adds a 40 ms delay to guard flow credit refill.
      
      Fixes: afe4fd06 ("pkt_sched: fq: Fair Queue packet scheduler")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Maciej Żenczykowski <maze@google.com>
      Cc: Willem de Bruijn <willemb@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • pkt_sched: fq: warn users using defrate · 65c5189a
      Committed by Eric Dumazet
      Commit 7eec4174 ("pkt_sched: fq: fix non TCP flows pacing")
      obsoleted TCA_FQ_FLOW_DEFAULT_RATE without notice for the users.
      
      Suggested by David Miller
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  33. 10 November 2013, 1 commit
  34. 21 September 2013, 1 commit
  35. 30 August 2013, 1 commit
    • pkt_sched: fq: Fair Queue packet scheduler · afe4fd06
      Committed by Eric Dumazet
      - Uses perfect flow match (not stochastic hash like SFQ/FQ_codel)
      - Uses the new_flow/old_flow separation from FQ_codel
      - New flows get an initial credit allowing IW10 without added delay.
      - Special FIFO queue for high prio packets (no need for PRIO + FQ)
      - Uses a hash table of RB trees to locate the flows at enqueue() time
      - Smart on demand gc (at enqueue() time, RB tree lookup evicts old
        unused flows)
      - Dynamic memory allocations.
      - Designed to allow millions of concurrent flows per Qdisc.
      - Small memory footprint : ~8K per Qdisc, and 104 bytes per flow.
      - Single high resolution timer for throttled flows (if any).
      - One RB tree to link throttled flows.
      - Ability to have a max rate per flow. We might add a socket option
        to add per socket limitation.
      
      Attempts have been made to add TCP pacing in TCP stack, but this
      seems to add complex code to an already complex stack.
      
      TCP pacing is welcomed for flows having idle times, as the cwnd
      permits TCP stack to queue a possibly large number of packets.
      
      This removes the 'slow start after idle' choice, which badly hits
      large BDP flows and applications delivering chunks of data
      as video streams.
      
      Nicely spaced packets :
      Here interface is 10Gbit, but flow bottleneck is ~20Mbit
      
      cwin is big, yet FQ avoids the typical bursts generated by TCP
      (as in netperf TCP_RR -- -r 100000,100000)
      
      15:01:23.545279 IP A > B: . 78193:81089(2896) ack 65248 win 3125 <nop,nop,timestamp 1115 11597805>
      15:01:23.545394 IP B > A: . ack 81089 win 3668 <nop,nop,timestamp 11597985 1115>
      15:01:23.546488 IP A > B: . 81089:83985(2896) ack 65248 win 3125 <nop,nop,timestamp 1115 11597805>
      15:01:23.546565 IP B > A: . ack 83985 win 3668 <nop,nop,timestamp 11597986 1115>
      15:01:23.547713 IP A > B: . 83985:86881(2896) ack 65248 win 3125 <nop,nop,timestamp 1115 11597805>
      15:01:23.547778 IP B > A: . ack 86881 win 3668 <nop,nop,timestamp 11597987 1115>
      15:01:23.548911 IP A > B: . 86881:89777(2896) ack 65248 win 3125 <nop,nop,timestamp 1115 11597805>
      15:01:23.548949 IP B > A: . ack 89777 win 3668 <nop,nop,timestamp 11597988 1115>
      15:01:23.550116 IP A > B: . 89777:92673(2896) ack 65248 win 3125 <nop,nop,timestamp 1115 11597805>
      15:01:23.550182 IP B > A: . ack 92673 win 3668 <nop,nop,timestamp 11597989 1115>
      15:01:23.551333 IP A > B: . 92673:95569(2896) ack 65248 win 3125 <nop,nop,timestamp 1115 11597805>
      15:01:23.551406 IP B > A: . ack 95569 win 3668 <nop,nop,timestamp 11597991 1115>
      15:01:23.552539 IP A > B: . 95569:98465(2896) ack 65248 win 3125 <nop,nop,timestamp 1115 11597805>
      15:01:23.552576 IP B > A: . ack 98465 win 3668 <nop,nop,timestamp 11597992 1115>
      15:01:23.553756 IP A > B: . 98465:99913(1448) ack 65248 win 3125 <nop,nop,timestamp 1115 11597805>
      15:01:23.554138 IP A > B: P 99913:100001(88) ack 65248 win 3125 <nop,nop,timestamp 1115 11597805>
      15:01:23.554204 IP B > A: . ack 100001 win 3668 <nop,nop,timestamp 11597993 1115>
      15:01:23.554234 IP B > A: . 65248:68144(2896) ack 100001 win 3668 <nop,nop,timestamp 11597993 1115>
      15:01:23.555620 IP B > A: . 68144:71040(2896) ack 100001 win 3668 <nop,nop,timestamp 11597993 1115>
      15:01:23.557005 IP B > A: . 71040:73936(2896) ack 100001 win 3668 <nop,nop,timestamp 11597993 1115>
      15:01:23.558390 IP B > A: . 73936:76832(2896) ack 100001 win 3668 <nop,nop,timestamp 11597993 1115>
      15:01:23.559773 IP B > A: . 76832:79728(2896) ack 100001 win 3668 <nop,nop,timestamp 11597993 1115>
      15:01:23.561158 IP B > A: . 79728:82624(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
      15:01:23.562543 IP B > A: . 82624:85520(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
      15:01:23.563928 IP B > A: . 85520:88416(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
      15:01:23.565313 IP B > A: . 88416:91312(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
      15:01:23.566698 IP B > A: . 91312:94208(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
      15:01:23.568083 IP B > A: . 94208:97104(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
      15:01:23.569467 IP B > A: . 97104:100000(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
      15:01:23.570852 IP B > A: . 100000:102896(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
      15:01:23.572237 IP B > A: . 102896:105792(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
      15:01:23.573639 IP B > A: . 105792:108688(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
      15:01:23.575024 IP B > A: . 108688:111584(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
      15:01:23.576408 IP B > A: . 111584:114480(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
      15:01:23.577793 IP B > A: . 114480:117376(2896) ack 100001 win 3668 <nop,nop,timestamp 11597994 1115>
      
      TCP timestamps show that most packets from B were queued in the same ms
      timeframe (TSval 1159799{3,4}), but FQ managed to send them right
      in time to avoid a big burst.
      
      In slow start or steady state, very few packets are throttled [1]
      
      FQ gets a bunch of tunables as :
      
        limit : max number of packets on whole Qdisc (default 10000)
      
        flow_limit : max number of packets per flow (default 100)
      
        quantum : the credit per RR round (default is 2 MTU)
      
        initial_quantum : initial credit for new flows (default is 10 MTU)
      
        maxrate : max per flow rate (default : unlimited)
      
        buckets : number of RB trees (default : 1024) in hash table.
                     (consumes 8 bytes per bucket)
      
        [no]pacing : disable/enable pacing (default is enable)
      
      All of them can be changed on a live qdisc.
      
      $ tc qd add dev eth0 root fq help
      Usage: ... fq [ limit PACKETS ] [ flow_limit PACKETS ]
                    [ quantum BYTES ] [ initial_quantum BYTES ]
                    [ maxrate RATE  ] [ buckets NUMBER ]
                    [ [no]pacing ]
      
      $ tc -s -d qd
      qdisc fq 8002: dev eth0 root refcnt 32 limit 10000p flow_limit 100p buckets 256 quantum 3028 initial_quantum 15140
       Sent 216532416 bytes 148395 pkt (dropped 0, overlimits 0 requeues 14)
       backlog 0b 0p requeues 14
        511 flows, 511 inactive, 0 throttled
        110 gc, 0 highprio, 0 retrans, 1143 throttled, 0 flows_plimit
      
      [1] Except if initial srtt is overestimated, as if using
      cached srtt in tcp metrics. We'll provide a fix for this issue.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  36. 15 August 2013, 1 commit
    • net_sched: restore "linklayer atm" handling · 8a8e3d84
      Committed by Jesper Dangaard Brouer
      commit 56b765b7 ("htb: improved accuracy at high rates")
      broke the "linklayer atm" handling.
      
       tc class add ... htb rate X ceil Y linklayer atm
      
      The linklayer setting is implemented by modifying the rate table
      which is sent to the kernel.  No direct parameter was
      transferred to the kernel indicating the linklayer setting.
      
      The commit 56b765b7 ("htb: improved accuracy at high rates")
      removed the use of the rate table system.
      
      To stay compatible with older iproute2 utils, this patch detects
      the linklayer by parsing the rate table.  It also allows future
      versions of iproute2 to send the linklayer parameter to the
      kernel directly.  This is done by using the __reserved field in
      struct tc_ratespec to convey the chosen linklayer option, but
      only using the lower 4 bits of this field.
      
      Linklayer detection is limited to speeds below 100Mbit/s, because
      at high rates the rtab gets too inaccurate, so bad that several
      fields contain the same values, resembling the ATM pattern.  Fields
      even start to contain a "0" time to send, e.g. at 1000Mbit/s
      sending a 96-byte packet costs "0", thus the rtab has been more
      broken than we first realized.
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>