1. 22 Sep 2016 (25 commits)
  2. 21 Sep 2016 (15 commits)
    • Merge branch 'mlxse-resource-query' · 2d7a8926
      David S. Miller committed
      Jiri Pirko says:
      
      ====================
      mlxsw: Replace HW-related consts with resource query results
      
      Nogah says:
      
      Many of the ASIC's properties can be read from the HW with a resource query.
      This patchset adds new resources to the resource query and implements
      their use, replacing the constants we currently rely on.
      These resources are LAG, KVD and router related.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
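The idea behind this series, reading device limits from a HW resource query instead of hard-coding them as driver constants, can be sketched as follows. This is illustrative Python, not the mlxsw driver's C API; the resource IDs, values, and fallback table are hypothetical.

```python
# Old-style compile-time constants, kept only as fallbacks in this sketch.
FALLBACK_CONSTS = {
    "MAX_LAG": 64,
    "MAX_LAG_MEMBERS": 16,
    "MAX_RIFS": 1000,
}

def query_resources(hw_records):
    """hw_records: iterable of (resource_id, value) pairs reported by HW."""
    limits = dict(FALLBACK_CONSTS)
    for res_id, value in hw_records:
        limits[res_id] = value  # HW-reported value overrides the constant
    return limits

# A device reporting larger tables than the old constants assumed:
caps = query_resources([("MAX_LAG", 128), ("MAX_RIFS", 2000)])
print(caps["MAX_LAG"], caps["MAX_RIFS"], caps["MAX_LAG_MEMBERS"])  # 128 2000 16
```

The point of the pattern: a single driver binary adapts to whatever table sizes each ASIC actually reports, with the constant surviving only as a fallback.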
    • mlxsw: spectrum: Implement max rif resource · 8f8a62d4
      Nogah Frankel committed
      Replace the max RIF const with the result from the resource query.
      Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
      Reviewed-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: pci: Add max router interface resource · 274df7fb
      Nogah Frankel committed
      Add the max number of RIFs (router interfaces) to the resource query.
      Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
      Reviewed-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: pci: Add some miscellaneous resources · e44d49cb
      Nogah Frankel committed
      Add max system ports, max regions and max VLAN groups to the resource query.
      Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
      Reviewed-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: spectrum: Implement max virtual routers resource · 9497c042
      Nogah Frankel committed
      Replace the max virtual routers const with the result from the
      resource query.
      Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
      Reviewed-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: pci: Add max virtual routers resource · b8a09f0a
      Nogah Frankel committed
      Add the max number of virtual routers to the resource query.
      Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
      Reviewed-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: profile: Add KVD resources to profile config · 403547d3
      Nogah Frankel committed
      Use resources from the resource query to determine values for
      the profile configuration.
      Add the KVD section sizes determined this way to the resources struct.
      Change the profile struct and values to match these changes.
      Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
      Reviewed-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: pci: Add KVD size related resources · 2acd10c5
      Nogah Frankel committed
      Add the KVD size, and the minimum sizes for the single and double
      sections, to the resource query.
      Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
      Reviewed-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: spectrum: LAG resources: use resource data instead of consts · ce0bd2b0
      Nogah Frankel committed
      Use max LAG and max ports per LAG from the resource query result
      instead of storing them as consts.
      Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
      Reviewed-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: pci: Add LAG related resources to the resource query · 9f7f797c
      Nogah Frankel committed
      Add max LAG and max ports per LAG to the resource query.
      Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
      Reviewed-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: spectrum: Make offloads stats functions static · 4bdcc6ca
      Or Gerlitz committed
      The offloads stats functions are local to this file; make them static.
      
      Fixes: fc1bbb0f ('mlxsw: spectrum: Implement offload stats ndo [..]')
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'tcp-bbr' · a624f93c
      David S. Miller committed
      Neal Cardwell says:
      
      ====================
      tcp: BBR congestion control algorithm
      
      This patch series implements a new TCP congestion control algorithm:
      BBR (Bottleneck Bandwidth and RTT). A paper with a detailed
      description of BBR will be published in ACM Queue, September-October
      2016, as "BBR: Congestion-Based Congestion Control". BBR is widely
      deployed in production at Google.
      
      The patch series starts with a set of supporting infrastructure
      changes, including a few that extend the congestion control
      framework. The last patch adds BBR as a TCP congestion control
      module. Please see individual patches for the details.
      
      - v3 -> v4:
       - Updated tcp_bbr.c in "tcp_bbr: add BBR congestion control"
         to use const to qualify all the constant parameters.
         Thanks to Stephen Hemminger.
       - In "tcp_bbr: add BBR congestion control", remove the bbr_rate_kbps()
         function, which had a 64-bit divide that would be problematic on some
         architectures, and just use bbr_rate_bytes_per_sec() directly.
         Thanks to Kenneth Klette Jonassen for suggesting this.
       - In "tcp: switch back to proper tcp_skb_cb size check in tcp_init()",
         switched from sizeof(skb->cb) to FIELD_SIZEOF.
         Thanks to Lance Richardson for suggesting this.
       - Updated "tcp_bbr: add BBR congestion control" commit message with
         performance data, more details about deployment at Google, and
         another reminder to use fq with BBR.
       - Updated tcp_bbr.c in "tcp_bbr: add BBR congestion control"
         to use MODULE_LICENSE("Dual BSD/GPL").
      
      - v2 -> v3: fix another issue caught by build bots:
       - adjust rate_sample struct initialization syntax to allow gcc-4.4 to compile
         the "tcp: track data delivery rate for a TCP connection" patch; also
         adjusted some similar syntax in "tcp_bbr: add BBR congestion control"
      
      - v1 -> v2: fix issues caught by build bots:
       - fix "tcp: export data delivery rate" to use rate64 instead of rate,
         so there is a 64-bit numerator for the do_div call
       - fix conflicting definitions for minmax caused by
         "tcp: use windowed min filter library for TCP min_rtt estimation"
         with a new commit:
         tcp: cdg: rename struct minmax in tcp_cdg.c to avoid a naming conflict
       - fix warning about the use of __packed in
         "tcp: track data delivery rate for a TCP connection",
         which involves the addition of a new commit:
         tcp: switch back to proper tcp_skb_cb size check in tcp_init()
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp_bbr: add BBR congestion control · 0f8782ea
      Neal Cardwell committed
      This commit implements a new TCP congestion control algorithm: BBR
      (Bottleneck Bandwidth and RTT). A detailed description of BBR will be
      published in ACM Queue, Vol. 14 No. 5, September-October 2016, as
      "BBR: Congestion-Based Congestion Control".
      
      BBR has significantly increased throughput and reduced latency for
      connections on Google's internal backbone networks and google.com and
      YouTube Web servers.
      
      BBR requires only changes on the sender side, not in the network or
      the receiver side. Thus it can be incrementally deployed on today's
      Internet, or in datacenters.
      
      The Internet has predominantly used loss-based congestion control
      (largely Reno or CUBIC) since the 1980s, relying on packet loss as the
      signal to slow down. While this worked well for many years, loss-based
      congestion control is unfortunately outdated in today's networks. On
      today's Internet, loss-based congestion control causes the infamous
      bufferbloat problem, often causing seconds of needless queuing delay,
      since it fills the bloated buffers in many last-mile links. On today's
      high-speed long-haul links using commodity switches with shallow
      buffers, loss-based congestion control has abysmal throughput because
      it over-reacts to losses caused by transient traffic bursts.
      
      In 1981 Kleinrock and Gale showed that the optimal operating point for
      a network maximizes delivered bandwidth while minimizing delay and
      loss, not only for single connections but for the network as a
      whole. Finding that optimal operating point has been elusive, since
      any single network measurement is ambiguous: network measurements are
      the result of both bandwidth and propagation delay, and those two
      cannot be measured simultaneously.
      
      While it is impossible to disambiguate any single bandwidth or RTT
      measurement, a connection's behavior over time tells a clearer
      story. BBR uses a measurement strategy designed to resolve this
      ambiguity. It combines these measurements with a robust servo loop
      using recent control systems advances to implement a distributed
      congestion control algorithm that reacts to actual congestion, not
      packet loss or transient queue delay, and is designed to converge with
      high probability to a point near the optimal operating point.
      
      In a nutshell, BBR creates an explicit model of the network pipe by
      sequentially probing the bottleneck bandwidth and RTT. On the arrival
      of each ACK, BBR derives the current delivery rate of the last round
      trip, and feeds it through a windowed max-filter to estimate the
      bottleneck bandwidth. Conversely it uses a windowed min-filter to
      estimate the round trip propagation delay. The max-filtered bandwidth
      and min-filtered RTT estimates form BBR's model of the network pipe.
      
      Using its model, BBR sets control parameters to govern sending
      behavior. The primary control is the pacing rate: BBR applies a gain
      multiplier to transmit faster or slower than the observed bottleneck
      bandwidth. The conventional congestion window (cwnd) is now the
      secondary control; the cwnd is set to a small multiple of the
      estimated BDP (bandwidth-delay product) in order to allow full
      utilization and bandwidth probing while bounding the potential amount
      of queue at the bottleneck.
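The model and controls described above can be sketched compactly. This is illustrative Python, not the kernel's tcp_bbr.c; the window lengths and gain values are illustrative placeholders, not BBR's actual tuning.

```python
from collections import deque

class BbrModelSketch:
    """Max-filtered bandwidth + min-filtered RTT -> pacing rate and cwnd."""
    def __init__(self, bw_window=10, rtt_window=10):
        self.bw_samples = deque(maxlen=bw_window)    # delivery-rate samples (bits/s)
        self.rtt_samples = deque(maxlen=rtt_window)  # round-trip samples (seconds)
        self.pacing_gain = 1.0   # >1 probes for bandwidth, <1 drains queue
        self.cwnd_gain = 2.0     # cwnd is a small multiple of the estimated BDP

    def on_ack(self, delivery_rate_bps, rtt_s):
        # Each ACK contributes one delivery-rate sample and one RTT sample.
        self.bw_samples.append(delivery_rate_bps)
        self.rtt_samples.append(rtt_s)

    @property
    def btl_bw(self):
        return max(self.bw_samples)   # windowed max-filter: bottleneck bandwidth

    @property
    def rt_prop(self):
        return min(self.rtt_samples)  # windowed min-filter: propagation delay

    def pacing_rate(self):
        return self.pacing_gain * self.btl_bw

    def cwnd_bytes(self):
        bdp_bytes = self.btl_bw * self.rt_prop / 8  # bandwidth-delay product
        return self.cwnd_gain * bdp_bytes

m = BbrModelSketch()
m.on_ack(10_000_000, 0.050)  # 10 Mbps sample, 50 ms RTT
m.on_ack(9_000_000, 0.040)   # queue drained: lower rate, lower RTT
print(m.btl_bw, m.rt_prop)   # 10000000 0.04
```

Note how the two filters resolve the ambiguity described above: the bandwidth estimate keeps the highest recent sample while the RTT estimate keeps the lowest, so a sample taken while a queue exists inflates neither.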
      
      When a BBR connection starts, it enters STARTUP mode and applies a
      high gain to perform an exponential search to quickly probe the
      bottleneck bandwidth (doubling its sending rate each round trip, like
      slow start). However, instead of continuing until it fills up the
      buffer (i.e. a loss), or until delay or ACK spacing reaches some
      threshold (like Hystart), it uses its model of the pipe to estimate
      when that pipe is full: it estimates the pipe is full when it notices
      the estimated bandwidth has stopped growing. At that point it exits
      STARTUP and enters DRAIN mode, where it reduces its pacing rate to
      drain the queue it estimates it has created.
      
      Then BBR enters steady state. In steady state, PROBE_BW mode cycles
      between first pacing faster to probe for more bandwidth, then pacing
      slower to drain any queue it created if no more bandwidth was
      available, and then cruising at the estimated bandwidth to utilize the
      pipe without creating excess queue. Occasionally, on an as-needed
      basis, it sends significantly slower to probe for RTT (PROBE_RTT
      mode).
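The mode progression described in the last two paragraphs can be sketched as a small state machine. This is an illustration of the described behavior, not the module's code; the stall test, the PROBE_RTT trigger (omitted here), and the gain values are illustrative.

```python
# PROBE_BW cycles its pacing gain: probe faster, drain, then cruise at 1.0.
PROBE_BW_GAINS = [1.25, 0.75, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

def next_mode(mode, bw_growth_stalled=False, queue_drained=False):
    if mode == "STARTUP":
        # The pipe is judged full when the bandwidth estimate stops growing.
        return "DRAIN" if bw_growth_stalled else "STARTUP"
    if mode == "DRAIN":
        # Pace below the bandwidth estimate until STARTUP's queue is gone.
        return "PROBE_BW" if queue_drained else "DRAIN"
    return "PROBE_BW"  # steady state (occasional PROBE_RTT episodes omitted)

def probe_bw_gain(round_count):
    return PROBE_BW_GAINS[round_count % len(PROBE_BW_GAINS)]

mode = "STARTUP"
mode = next_mode(mode)                           # estimate still growing
mode = next_mode(mode, bw_growth_stalled=True)   # plateau: exit to DRAIN
mode = next_mode(mode, queue_drained=True)       # queue gone: steady state
print(mode, probe_bw_gain(0), probe_bw_gain(1))  # PROBE_BW 1.25 0.75
```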
      
      BBR has been fully deployed on Google's wide-area backbone networks
      and we're experimenting with BBR on Google.com and YouTube on a global
      scale.  Replacing CUBIC with BBR has resulted in significant
      improvements in network latency and application (RPC, browser, and
      video) metrics. For more details please refer to our upcoming ACM
      Queue publication.
      
      Example performance results, to illustrate the difference between BBR
      and CUBIC:
      
      Resilience to random loss (e.g. from shallow buffers):
        Consider a netperf TCP_STREAM test lasting 30 secs on an emulated
        path with a 10Gbps bottleneck, 100ms RTT, and 1% packet loss
        rate. CUBIC gets 3.27 Mbps, and BBR gets 9150 Mbps (2798x higher).
      
      Low latency with the bloated buffers common in today's last-mile links:
        Consider a netperf TCP_STREAM test lasting 120 secs on an emulated
        path with a 10Mbps bottleneck, 40ms RTT, and 1000-packet bottleneck
        buffer. Both fully utilize the bottleneck bandwidth, but BBR
        achieves this with a median RTT 25x lower (43 ms instead of 1.09
        secs).
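As a quick sanity check, the multiples quoted in both examples follow directly from the raw numbers:

```python
# Resilience example: 9150 Mbps (BBR) vs 3.27 Mbps (CUBIC).
cubic_mbps, bbr_mbps = 3.27, 9150.0
speedup = bbr_mbps / cubic_mbps

# Latency example: median RTT 1.09 s (CUBIC) vs 43 ms (BBR).
cubic_rtt_ms, bbr_rtt_ms = 1090.0, 43.0
rtt_ratio = cubic_rtt_ms / bbr_rtt_ms

print(round(speedup), round(rtt_ratio))  # 2798 25
```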
      
      Our long-term goal is to improve the congestion control algorithms
      used on the Internet. We are hopeful that BBR can help advance the
      efforts toward this goal, and motivate the community to do further
      research.
      
      Test results, performance evaluations, feedback, and BBR-related
      discussions are very welcome in the public e-mail list for BBR:
      
        https://groups.google.com/forum/#!forum/bbr-dev
      
      NOTE: BBR *must* be used with the fq qdisc ("man tc-fq") with pacing
      enabled, since pacing is integral to the BBR design and
      implementation. BBR without pacing would not function properly, and
      may incur unnecessary high packet loss rates.
      Signed-off-by: Van Jacobson <vanj@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Nandita Dukkipati <nanditad@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: increase ICSK_CA_PRIV_SIZE from 64 bytes to 88 · 7e744171
      Neal Cardwell committed
      The TCP CUBIC module already uses 64 bytes.
      The upcoming TCP BBR module uses 88 bytes.
      Signed-off-by: Van Jacobson <vanj@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Nandita Dukkipati <nanditad@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: new CC hook to set sending rate with rate_sample in any CA state · c0402760
      Yuchung Cheng committed
      This commit introduces an optional new "omnipotent" hook,
      cong_control(), for congestion control modules. The cong_control()
      function is called at the end of processing an ACK (i.e., after
      updating sequence numbers, the SACK scoreboard, and loss
      detection). At that moment we have precise delivery rate information
      the congestion control module can use to control the sending behavior
      (using cwnd, TSO skb size, and pacing rate) in any CA state.
      
      This function can also be used by a congestion control that prefers
      not to use the default cwnd reduction approach (i.e., the PRR
      algorithm) during CA_Recovery to control the cwnd and sending rate
      during loss recovery.
      
      We take advantage of the fact that recent changes defer the
      retransmission or transmission of new data (e.g. by F-RTO) in recovery
      until the new tcp_cong_control() function is run.
      
      With this commit, we only run tcp_update_pacing_rate() if the
      congestion control is not using this new API. New congestion controls
      which use the new API do not want the TCP stack to run the default
      pacing rate calculation and overwrite whatever pacing rate they have
      chosen at initialization time.
      Signed-off-by: Van Jacobson <vanj@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Nandita Dukkipati <nanditad@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
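The dispatch behavior this commit describes, calling the new hook when present and otherwise falling back to the default cwnd reduction and pacing-rate calculation, can be sketched as follows. This is illustrative Python, not the kernel's C code; the class and function names are hypothetical.

```python
class RateSample:
    """Precise delivery-rate information available at the end of ACK processing."""
    def __init__(self, delivered_bytes, interval_s):
        self.delivered_bytes = delivered_bytes
        self.interval_s = interval_s

class LegacyCC:
    cong_control = None  # legacy module: stack keeps computing the pacing rate

class OmnipotentCC:
    """A module using the new hook owns cwnd and pacing in every CA state."""
    def __init__(self):
        self.pacing_rate = 0
    def cong_control(self, rs):
        self.pacing_rate = rs.delivered_bytes / rs.interval_s

def tcp_cong_control(ca, rs, default_pacing):
    if getattr(ca, "cong_control", None):
        ca.cong_control(rs)        # new API: do NOT overwrite its pacing rate
        return ca.pacing_rate
    return default_pacing(rs)      # legacy path: default pacing calculation

bbr_like = OmnipotentCC()
rate = tcp_cong_control(bbr_like, RateSample(1_000_000, 0.1), lambda rs: 0)
print(rate)  # 10000000.0
```

The key design point mirrored here is the last paragraph above: when the hook exists, the stack must skip its own pacing-rate update, or it would clobber the module's choice.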