1. 22 October 2015: 7 commits
  2. 21 October 2015: 10 commits
    • Adding switchdev ageing notification on port bridged · 6ac311ae
      Authored by Elad Raz
      Configure the ageing time in the HW for a newly bridged device
      
      CC: Scott Feldman <sfeldma@gmail.com>
      CC: Jiri Pirko <jiri@resnulli.us>
      Signed-off-by: Elad Raz <eladr@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Acked-by: Scott Feldman <sfeldma@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'tcp-rack' · eb9fae32
      Authored by David S. Miller
      Yuchung Cheng says:
      
      ====================
      RACK loss detection
      
      RACK (Recent ACK) loss recovery uses the notion of time instead of
      packet sequence (FACK) or counts (dupthresh).
      
      It's inspired by the FACK heuristic in tcp_mark_lost_retrans(): when a
      limited transmit (new data packet) is sacked in recovery, then any
      retransmission sent before that newly sacked packet was sent must have
      been lost, since at least one round trip time has elapsed.
      
      But that existing heuristic from tcp_mark_lost_retrans()
      has several limitations:
        1) it can't detect tail drops since it depends on limited transmit
        2) it's disabled upon reordering (assumes no reordering)
        3) it's only enabled in fast recovery but not timeout recovery
      
      RACK addresses these limitations with a core idea: an unacknowledged
      packet P1 is deemed lost if a packet P2 that was sent later is
      s/acked, since at least one round trip has passed.
      
      Since RACK cares about the time sequence instead of the data sequence
      of packets, it can detect tail drops when a later retransmission is
      s/acked, while FACK or dupthresh can't. For reordering, RACK uses a
      dynamically adjusted reordering window ("reo_wnd") to reduce false
      positives at every (small) degree of reordering, similar to the delayed
      Early Retransmit. (A toy sketch of this rule follows this entry.)
      
      In the current patch set RACK is only a supplemental loss detection
      and does not trigger fast recovery. However we are developing RACK
      to replace or consolidate FACK/dupthresh, early retransmit, and
      thin-dupack. These heuristics all implicitly bear the time notion.
      For example, the delayed Early Retransmit is simply applying RACK
      to trigger the fast recovery with small inflight.
      
      RACK requires measuring the minimum RTT. Tracking a global min is less
      robust due to traffic engineering pathing changes. Therefore it uses a
      windowed filter by Kathleen Nichols. The min RTT can also be useful
      for various other purposes like congestion control or stat monitoring.
      
      This patch has been used on Google servers for well over 1 year. RACK
      has also been implemented in the QUIC protocol. We are submitting an
      IETF draft as well.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
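      A toy C sketch of the core rule stated in the cover letter above: an
      unacked packet is deemed lost once some packet sent later has been
      s/acked and a reordering window has elapsed since the unacked packet's
      transmit time. All names and numbers below are invented for
      illustration; this is not the kernel implementation.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      struct pkt {
          uint64_t sent_us;   /* transmit timestamp (microseconds) */
          bool     delivered; /* ACKed or SACKed? */
      };

      /* newest transmit time among packets known to be delivered */
      static uint64_t newest_delivered_xmit(const struct pkt *p, int n)
      {
          uint64_t m = 0;
          for (int i = 0; i < n; i++)
              if (p[i].delivered && p[i].sent_us > m)
                  m = p[i].sent_us;
          return m;
      }

      int main(void)
      {
          struct pkt pkts[] = {
              { 1000, false },  /* P1: lost, never delivered */
              { 2000, true  },  /* P2: SACKed                */
              { 3000, true  },  /* P3: SACKed                */
          };
          uint64_t reo_wnd = 500;   /* illustrative reordering window */
          uint64_t mstamp  = newest_delivered_xmit(pkts, 3);

          for (int i = 0; i < 3; i++)
              if (!pkts[i].delivered && pkts[i].sent_us + reo_wnd < mstamp)
                  printf("packet sent at %llu us deemed lost\n",
                         (unsigned long long)pkts[i].sent_us);
          return 0;
      }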
    • tcp: use RACK to detect losses · 4f41b1c5
      Authored by Yuchung Cheng
      This patch implements the second half of RACK, which uses the most
      recent transmit time among all delivered packets to detect losses.
      
      tcp_rack_mark_lost() is called upon receiving a dubious ACK.
      It then checks whether a not-yet-sacked packet was sent at least
      "reo_wnd" before the send time of the most recently delivered packet.
      If so, the packet is deemed lost.
      
      The "reo_wnd" reordering window starts with 1msec for fast loss
      detection and changes to min-RTT/4 when reordering is observed.
      We found 1msec accommodates well on tiny degree of reordering
      (<3 pkts) on faster links. We use min-RTT instead of SRTT because
      reordering is more of a path property but SRTT can be inflated by
      self-inflicated congestion. The factor of 4 is borrowed from the
      delayed early retransmit and seems to work reasonably well.
      
      Since RACK is still experimental, it is now used as a supplemental
      loss detection on top of existing algorithms. It is only effective
      after the fast recovery starts or after the timeout occurs. The
      fast recovery is still triggered by FACK and/or dupack threshold
      instead of RACK.
      
      We introduce a new sysctl net.ipv4.tcp_recovery for future
      experiments of loss recoveries. For now RACK can be disabled by
      setting it to 0.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
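      A minimal sketch of the reo_wnd choice described above: 1msec by
      default for fast detection, min-RTT/4 once reordering has actually
      been observed. The function and parameter names are illustrative,
      not the kernel's.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      static uint64_t reo_wnd_us(bool reordering_seen, uint64_t min_rtt_us)
      {
          /* factor of 4 borrowed from the delayed early retransmit */
          return reordering_seen ? min_rtt_us / 4 : 1000;
      }

      int main(void)
      {
          printf("no reordering: %llu us\n",
                 (unsigned long long)reo_wnd_us(false, 20000));
          printf("reordering:    %llu us\n",
                 (unsigned long long)reo_wnd_us(true, 20000));
          return 0;
      }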
    • tcp: track the packet timings in RACK · 659a8ad5
      Authored by Yuchung Cheng
      This patch is the first half of the RACK loss recovery.
      
      RACK loss recovery uses the notion of time instead
      of packet sequence (FACK) or counts (dupthresh). It's inspired by the
      previous FACK heuristic in tcp_mark_lost_retrans(): when a limited
      transmit (new data packet) is sacked, then any currently retransmitted
      sequence below the newly sacked sequence must have been lost,
      since at least one round trip time has elapsed.
      
      But it has several limitations:
      1) can't detect tail drops since it depends on limited transmit
      2) is disabled upon reordering (assumes no reordering)
      3) only enabled in fast recovery but not timeout recovery
      
      RACK (Recent ACK) addresses these limitations with the notion
      of time instead: a packet P1 is lost if a later packet P2 is s/acked,
      as at least one round trip has passed.
      
      Since RACK cares about the time sequence instead of the data sequence
      of packets, it can detect tail drops when a later retransmission is
      s/acked while FACK or dupthresh can't. For reordering, RACK uses a
      dynamically adjusted reordering window ("reo_wnd") to reduce false
      positives at every (small) degree of reordering.
      
      This patch implements tcp_advanced_rack() which tracks the
      most recent transmission time among the packets that have been
      delivered (ACKed or SACKed) in tp->rack.mstamp. This timestamp
      is the key to determine which packet has been lost.
      
      Consider an example where the sender sends four packets:
      T1: P1 (lost)
      T2: P2
      T3: P3
      T4: P4
      T100: sack of P2. rack.mstamp = T2
      T101: retransmit P1
      T102: sack of P2,P3,P4. rack.mstamp = T4
      T205: ACK of P4 since the hole is repaired. rack.mstamp = T101
      
      We need to be careful about spurious retransmission because it may
      falsely advance tp->rack.mstamp by an RTT or an RTO, causing RACK
      to falsely mark all packets lost, just like a spurious timeout.
      
      We identify spurious retransmission by the ACK's TS echo value.
      If TS option is not applicable but the retransmission is acknowledged
      less than min-RTT ago, it is likely to be spurious. We refrain from
      using the transmission time of these spurious retransmissions.
      
      The second half is implemented in the next patch, which marks packets
      lost using the RACK timestamp.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
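      A toy sketch of the "advance" step described above: remember the
      newest transmit time among delivered packets, but skip a
      retransmission whose delivery looks spurious so its timestamp cannot
      falsely advance the mark by an RTT or an RTO. Names are hypothetical;
      this is not the kernel code.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      struct rack { uint64_t mstamp; }; /* newest trusted transmit time */

      static void rack_advance_sketch(struct rack *r, uint64_t xmit_us,
                                      bool retrans, bool likely_spurious)
      {
          if (retrans && likely_spurious)
              return;                    /* don't trust this timestamp */
          if (xmit_us > r->mstamp)
              r->mstamp = xmit_us;
      }

      int main(void)
      {
          struct rack r = { 0 };
          rack_advance_sketch(&r, 2000, false, false); /* SACK of P2 (sent T2) */
          rack_advance_sketch(&r, 4000, false, false); /* SACK of P4 (sent T4) */
          rack_advance_sketch(&r, 9000, true,  true);  /* spurious rexmit: skip */
          printf("rack.mstamp = %llu\n", (unsigned long long)r.mstamp); /* 4000 */
          return 0;
      }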
    • tcp: skb_mstamp_after helper · 625a5e10
      Authored by Yuchung Cheng
      A helper to prepare for the first main RACK patch.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: add tcp_tsopt_ecr_before helper · 77c63127
      Authored by Yuchung Cheng
      A helper to prepare for the main RACK patch.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: remove tcp_mark_lost_retrans() · af82f4e8
      Authored by Yuchung Cheng
      Remove the existing lost retransmit detection because RACK subsumes
      it completely. This also stops overloading the ack_seq field of
      the skb control block.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: track min RTT using windowed min-filter · f6722583
      Authored by Yuchung Cheng
      Kathleen Nichols' algorithm for tracking the minimum RTT of a
      data stream over some measurement window. It uses constant space
      and constant time per update. Yet it almost always delivers
      the same minimum as an implementation that has to keep all
      the data in the window. The measurement window is tunable via the
      sysctl net.ipv4.tcp_min_rtt_wlen, with a default value of 5 minutes.
      
      The algorithm keeps track of the best, 2nd best & 3rd best min
      values, maintaining an invariant that the measurement time of
      the n'th best >= n-1'th best. It also makes sure that the three
      values are widely separated in the time window since that bounds
      the worst-case error when the data is monotonically increasing
      over the window.
      
      Upon getting a new min, we can forget everything earlier because
      it has no value - the new min is less than everything else in the
      window by definition and it's the most recent. So we restart fresh
      on every new min and overwrite the 2nd & 3rd choices. The same
      property holds for the 2nd & 3rd best.
      
      Therefore we have to maintain two invariants to maximize the
      information in the samples, one on values (1st.v <= 2nd.v <=
      3rd.v) and the other on times (now-win <= 1st.t <= 2nd.t <= 3rd.t <=
      now). These invariants determine the structure of the code.
      
      The RTT input to the windowed filter is the minimum RTT measured
      from ACK or SACK, or as the last resort from TCP timestamps.
      
      The accessor tcp_min_rtt() returns the minimum RTT seen in the
      window. ~0U indicates it is not available. The minimum is 1usec
      even if the true RTT is below that.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
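      A simplified userspace sketch of the windowed min-filter idea
      described above: keep the best, 2nd best and 3rd best minima with
      their timestamps so the filter can forget a stale minimum in constant
      time. This is a sketch of the idea under the stated invariants, with
      a simplified aging step; it is not the kernel implementation.

      #include <stdint.h>
      #include <stdio.h>

      struct sample { uint64_t t; uint32_t v; };
      static struct sample s[3];   /* best, 2nd best, 3rd best minima */

      static uint32_t win_min_update(uint64_t now, uint64_t win, uint32_t meas)
      {
          struct sample cur = { now, meas };

          /* new global min, or the whole window has aged out: restart */
          if (meas <= s[0].v || now - s[2].t > win) {
              s[0] = s[1] = s[2] = cur;
              return s[0].v;
          }
          if (meas <= s[1].v)
              s[1] = s[2] = cur;        /* also improves the 3rd choice */
          else if (meas <= s[2].v)
              s[2] = cur;

          /* the current best has left the window: promote the runners-up */
          if (now - s[0].t > win) {
              s[0] = s[1];
              s[1] = s[2];
              s[2] = cur;
          }
          return s[0].v;
      }

      int main(void)
      {
          s[0].v = s[1].v = s[2].v = ~0U;   /* ~0U means "not available" */
          uint64_t win = 300;
          printf("%u\n", (unsigned)win_min_update(10, win, 50));  /* 50 */
          printf("%u\n", (unsigned)win_min_update(20, win, 80));  /* still 50 */
          printf("%u\n", (unsigned)win_min_update(400, win, 70)); /* 50 aged out: 70 */
          return 0;
      }

      Keeping three time-ordered candidates is what lets the filter discard
      an expired minimum without storing every sample in the window.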
    • tcp: apply Karn's check on RTTs used for congestion control · 9e45a3e3
      Authored by Yuchung Cheng
      Currently ca_seq_rtt_us does not use Karn's check. Fix that by
      checking whether any acked packet is a retransmit, for both the RTT
      used for RTT estimation and the RTT used for congestion control.
      
      Fixes: 5b08e47c ("tcp: prefer packet timing to TS-ECR for RTT")
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
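      A tiny sketch of Karn's check as described above: an RTT sample is
      discarded whenever the newly acked data includes a retransmission,
      because such an ACK is ambiguous about which transmission it reports.
      Hypothetical names only.

      #include <stdbool.h>
      #include <stdio.h>

      /* return a usable RTT sample, or -1 when the ACK covered a retransmit */
      static long rtt_sample_us(long seq_rtt_us, bool acked_a_retrans)
      {
          return acked_a_retrans ? -1 : seq_rtt_us;
      }

      int main(void)
      {
          printf("%ld\n", rtt_sample_us(30000, false)); /* 30000: usable  */
          printf("%ld\n", rtt_sample_us(30000, true));  /* -1: discarded  */
          return 0;
      }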
    • Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue · c8fdc324
      Authored by David S. Miller
      Jeff Kirsher says:
      
      ====================
      Intel Wired LAN Driver Updates 2015-10-19
      
      This series contains updates to i40e and i40evf only.
      
      Kiran adds a spinlock around code accessing the VSI MAC filter list to
      ensure that we are synchronizing access to the filter list; otherwise
      we can end up with multiple accesses at the same time, which can leave
      the VSI MAC filter list in an unstable or corrupted state (a generic
      locking sketch follows this entry).

      Jesse fixes overlong BIT defines, where the RSS enabling call was
      mistakenly missed.  Also fixes a bug where the enable function was
      enabling the interrupt twice while trying to update the two interrupt
      throttle rate thresholds for Rx and Tx, while refactoring the IRQ
      enable function to simplify reading the flow.  Also addressed the high
      CPU utilization of some small streaming workloads in which the driver
      should reduce CPU usage.
      
      Anjali fixes two X722 issues with respect to EEPROM checksum
      verification and reading NVM version info.  Also fixed a case where a
      mask value was accidentally replaced with a bit mask, causing Flow
      Director sideband to be broken.
      
      Alex Duyck fixes areas of the drivers which run from hard interrupt
      context or with interrupts already disabled in netpoll, to use
      napi_schedule_irqoff() instead of napi_schedule().
      
      Mitch fixes the VF drivers to not give up too easily when they are not
      able to communicate with the PF driver.
      
      Carolyn fixes a problem where our tools' MAC loopback test would fail
      after driver unbind because the hardware was configured for multiqueue
      and the unbind operation did not clear this configuration.  Also fixed
      an issue where the NVMUpdate tool gets bad data from the PHY when using
      the PHY NVM feature, because of contention on the MDIO interface caused
      by PHY capability calls from the driver during regular operation.
      
      Catherine fixed an issue where we were checking if autoneg was allowed
      to change before checking if autoneg was changing; these checks need to
      be in the reverse order.
      
      Jean Sacren fixes up a function header comment to align the kernel-docs
      with the actual code.
      
      v2: Cleaned up the use of spin_is_locked() in patch 1 based on feedback
          from David Miller, since it always evaluates to zero on uni-processor
          builds
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
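      A generic userspace sketch of the first fix above, with a pthread
      mutex standing in for the in-kernel spinlock: every reader and writer
      of a shared filter list takes the same lock, so concurrent updates
      cannot leave the list half-modified. The list and all names are
      illustrative, not the i40e code.

      #include <pthread.h>
      #include <stdio.h>

      struct filter { struct filter *next; unsigned char mac[6]; };

      static struct filter   *filter_list;
      static pthread_mutex_t  filter_lock = PTHREAD_MUTEX_INITIALIZER;

      static void add_filter(struct filter *f)
      {
          pthread_mutex_lock(&filter_lock);
          f->next = filter_list;        /* list only touched under the lock */
          filter_list = f;
          pthread_mutex_unlock(&filter_lock);
      }

      static int count_filters(void)
      {
          int n = 0;
          pthread_mutex_lock(&filter_lock);
          for (struct filter *f = filter_list; f; f = f->next)
              n++;
          pthread_mutex_unlock(&filter_lock);
          return n;
      }

      int main(void)
      {
          static struct filter a, b;
          add_filter(&a);
          add_filter(&b);
          printf("%d filters\n", count_filters());
          return 0;
      }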
  3. 20 October 2015: 19 commits
  4. 19 October 2015: 4 commits
    • net: bcmgenet: Fix early link interrupt enabling · 37850e37
      Authored by Florian Fainelli
      Link interrupts are enabled in init_umac(), which is too early for us to
      process them since we do not yet have a valid PHY device pointer. On
      BCM7425 chips for instance, we will crash calling phy_mac_interrupt()
      because phydev is NULL.
      
      Fix this by moving the link interrupt enabling into
      bcmgenet_netif_start(), under a dedicated function,
      bcmgenet_link_intr_enable(), and, while at it, update the comments
      surrounding the code.
      
      Fixes: 6cc8e6d4 ("net: bcmgenet: Delay PHY initialization to bcmgenet_open()")
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
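      A small userspace sketch of the ordering issue described above, with
      a plain callback standing in for the link interrupt: if the callback
      is enabled before the PHY pointer is set up, the handler dereferences
      NULL; enabling it only after the pointer is valid avoids the crash.
      All names are hypothetical, not the bcmgenet code.

      #include <stdio.h>

      struct phy { const char *name; };

      static struct phy *phydev;      /* becomes valid only after "probe" */
      static int link_irq_enabled;

      static void link_interrupt(void)
      {
          if (!link_irq_enabled)
              return;
          /* would be a NULL dereference if enabled before phydev is set */
          printf("link change on %s\n", phydev->name);
      }

      int main(void)
      {
          static struct phy p = { "phy0" };

          phydev = &p;             /* 1. connect the PHY first          */
          link_irq_enabled = 1;    /* 2. only then enable the interrupt */
          link_interrupt();
          return 0;
      }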
    • Merge tag 'wireless-drivers-for-davem-2015-10-17' of... · afc050dd
      Authored by David S. Miller
      Merge tag 'wireless-drivers-for-davem-2015-10-17' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers
      
      Kalle Valo says:
      
      ====================
      iwlwifi:
      
      * mvm: flush fw_dump_wk when mvm fails to start
      * mvm: init card correctly on ctkill exit check
      * pci: add a few more PCI subvendor IDs for the 7265 series
      * fix firmware filename for 3160
      * mvm: clear csa countdown when AP is stopped
      * mvm: fix D3 firmware PN programming
      * dvm: fix D3 firmware PN programming
      * mvm: fix D3 CCMP TX PN assignment
      
      rtlwifi:
      
      * rtl8821ae: Fix system lockups on boot
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next · 371f1c7e
      Authored by David S. Miller
      Pablo Neira Ayuso says:
      
      ====================
      Netfilter/IPVS updates for net-next
      
      The following patchset contains Netfilter/IPVS updates for your net-next
      tree. Most relevantly, updates for the nfnetlink_log to integrate with
      conntrack, fixes for cttimeout and improvements for nf_queue core, they are:
      
      1) Remove useless ifdef around static inline function in IPVS, from
         Eric W. Biederman.
      
      2) Simplify the conntrack support for nfnetlink_queue: Merge
         nfnetlink_queue_ct.c file into nfnetlink_queue_core.c, then rename it back
         to nfnetlink_queue.c
      
      3) Use y2038 safe timestamp from nfnetlink_queue.
      
      4) Get rid of dead function definition in nf_conntrack, from Flavio
         Leitner.
      
      5) Attach conntrack support for nfnetlink_log.c, from Ken-ichirou MATSUZAWA.
         This adds a new NETFILTER_NETLINK_GLUE_CT Kconfig switch that
         controls enabling both nfqueue and nflog integration with conntrack.
         The userspace application can request this via NFULNL_CFG_F_CONNTRACK
         configuration flag.
      
      6) Remove unused netns variables in IPVS, from Eric W. Biederman and
         Simon Horman.
      
      7) Don't put back the refcount on the cttimeout object from xt_CT on success.
      
      8) Fix crash on cttimeout policy object removal. We have to flush out
         the cttimeout extension area of the conntrack so it does not refer
         to a nonexistent object that was just removed.
      
      9) Make sure the rcu_callback has completed before the
         nfnetlink_cttimeout module is removed.
      
      10) Fix compilation warning in br_netfilter when no nf_defrag_ipv4 and
          nf_defrag_ipv6 are enabled. Patch from Arnd Bergmann.
      
      11) Autoload ctnetlink dependencies when NFULNL_CFG_F_CONNTRACK is
          requested. Again from Ken-ichirou MATSUZAWA.
      
      12) Don't use pointer to previous hook when reinjecting traffic via
          nf_queue with NF_REPEAT verdict since it may be already gone. This
          also avoids a deadloop if the userspace application keeps returning
          NF_REPEAT.
      
      13) A bunch of cleanups for netfilter IPv4 and IPv6 code from Ian Morris.
      
      14) Consolidate logger instance existence check in nfulnl_recv_config().
      
      15) Fix broken atomicity when applying configuration updates to logger
          instances in nfnetlink_log.
      
      16) Get rid of the .owner attribute in our hook object. We don't need
          this anymore since we're dropping pending packets that have escaped
          from the kernel when unregistering the hook. Patch from Florian Westphal.
      
      17) Remove unnecessary rcu_read_lock() from nf_reinject code, we always
          assume RCU read side lock from .call_rcu in nfnetlink. Also from Florian.
      
      18) Use static inline functions instead of macros to define NF_HOOK() and
          NF_HOOK_COND() when netfilter support is not enabled, from Arnd Bergmann.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • RDS: fix rds-ping deadlock over TCP transport · 7b4b0009
      Authored by santosh.shilimkar@oracle.com
      Sowmini found a hang with rds-ping while testing RDS over TCP. It's
      a corner case and doesn't always happen. The issue is not reproducible
      with the IB transport. It's clear from the dump below why we see it
      with RDS over TCP.
      
       [<ffffffff8153b7e5>] do_tcp_setsockopt+0xb5/0x740
       [<ffffffff8153bec4>] tcp_setsockopt+0x24/0x30
       [<ffffffff814d57d4>] sock_common_setsockopt+0x14/0x20
       [<ffffffffa096071d>] rds_tcp_xmit_prepare+0x5d/0x70 [rds_tcp]
       [<ffffffffa093b5f7>] rds_send_xmit+0xd7/0x740 [rds]
       [<ffffffffa093bda2>] rds_send_pong+0x142/0x180 [rds]
       [<ffffffffa0939d34>] rds_recv_incoming+0x274/0x330 [rds]
       [<ffffffff810815ae>] ? ttwu_queue+0x11e/0x130
       [<ffffffff814dcacd>] ? skb_copy_bits+0x6d/0x2c0
       [<ffffffffa0960350>] rds_tcp_data_recv+0x2f0/0x3d0 [rds_tcp]
       [<ffffffff8153d836>] tcp_read_sock+0x96/0x1c0
       [<ffffffffa0960060>] ? rds_tcp_recv_init+0x40/0x40 [rds_tcp]
       [<ffffffff814d6a90>] ? sock_def_write_space+0xa0/0xa0
       [<ffffffffa09604d1>] rds_tcp_data_ready+0xa1/0xf0 [rds_tcp]
       [<ffffffff81545249>] tcp_data_queue+0x379/0x5b0
       [<ffffffffa0960cdb>] ? rds_tcp_write_space+0xbb/0x110 [rds_tcp]
       [<ffffffff81547fd2>] tcp_rcv_established+0x2e2/0x6e0
       [<ffffffff81552602>] tcp_v4_do_rcv+0x122/0x220
       [<ffffffff81553627>] tcp_v4_rcv+0x867/0x880
       [<ffffffff8152e0b3>] ip_local_deliver_finish+0xa3/0x220
      
      This happens because the rds_send_xmit() chain wants to take the
      sock_lock, which is already taken by tcp_v4_rcv() on its
      way to rds_tcp_data_ready(). This is a consequence of commit
      db6526dc ("RDS: use rds_send_xmit() state instead of
      RDS_LL_SEND_FULL"), which tries to opportunistically finish the
      send request in the same thread context.

      Because of this recursive lock hang with RDS over TCP, the send
      work from rds_send_pong() needs to be deferred to a worker to
      avoid the lockup. Given that RDS ping is more of a connectivity
      test than a performance-critical path, this should be OK even for
      a transport like IB.
      Reported-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
      Acked-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
      Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
      Acked-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
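      A single-threaded sketch of the fix described above: instead of
      transmitting the pong inline from the receive path (which already
      holds the socket lock and would deadlock taking it again), the pong
      is queued and sent later from worker context where the lock is free.
      Hypothetical names; this only illustrates the ordering, not the RDS
      code.

      #include <stdbool.h>
      #include <stdio.h>

      static bool sock_locked;
      static bool pong_pending;

      static void send_pong(void)
      {
          if (sock_locked) {          /* inline send would deadlock */
              pong_pending = true;    /* defer to the send worker   */
              return;
          }
          printf("pong transmitted\n");
      }

      static void recv_ping(void)
      {
          sock_locked = true;         /* receive path holds the lock */
          send_pong();                /* must not re-take it here    */
          sock_locked = false;
      }

      static void send_worker(void)
      {
          if (pong_pending) {
              pong_pending = false;
              send_pong();            /* lock is free in worker context */
          }
      }

      int main(void)
      {
          recv_ping();
          send_worker();
          return 0;
      }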