1. 30 November 2011, 2 commits
    • net: Add queue state xoff flag for stack · 73466498
      Authored by Tom Herbert
      Create separate queue state flags so that either the stack or drivers
      can turn on XOFF.  Add a set of functions used in the stack to determine
      whether a queue is really stopped (by either the stack or the driver).
      Signed-off-by: Tom Herbert <therbert@google.com>
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
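The two-owner scheme can be sketched in plain C. This is a userland model only: the flag names mirror the kernel's QUEUE_STATE_* constants introduced by the patch, but the struct and helper below are illustrative stand-ins, not the kernel implementation.

```c
#include <assert.h>

/* One XOFF bit owned by the driver, one owned by the stack.
 * Names echo the kernel's flags; this is a sketch, not kernel code. */
enum {
    QUEUE_STATE_DRV_XOFF   = 1 << 0,  /* driver stopped the queue */
    QUEUE_STATE_STACK_XOFF = 1 << 1,  /* stack stopped the queue  */
};

#define QUEUE_STATE_ANY_XOFF (QUEUE_STATE_DRV_XOFF | QUEUE_STATE_STACK_XOFF)

struct txq {
    unsigned int state;
};

/* "Really stopped" means either side has raised its XOFF bit. */
static int xmit_stopped(const struct txq *q)
{
    return (q->state & QUEUE_STATE_ANY_XOFF) != 0;
}
```

Because each side owns its own bit, the stack can stop a queue without clobbering (or being clobbered by) the driver's stop state.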
    • dql: Dynamic queue limits · 75957ba3
      Authored by Tom Herbert
      Implementation of dynamic queue limits (dql).  This is a library which
      allows a queue limit to be dynamically managed.  The goal of dql is
      to keep the queue limit, i.e. the number of objects that may be
      queued, as small as possible without allowing the queue to be starved.
      
      dql would be used with a queue which has these properties:
      
      1) Objects are queued up to some limit which can be expressed as a
         count of objects.
      2) Periodically a completion process executes which retires consumed
         objects.
      3) Starvation occurs when the limit has been reached, all queued data has
         actually been consumed but completion processing has not yet run,
         so queuing new data is blocked.
      4) Minimizing the amount of queued data is desirable.
      
      A canonical example of such a queue would be a NIC HW transmit queue.
      
      The queue limit is dynamic, it will increase or decrease over time
      depending on the workload.  The queue limit is recalculated each time
      completion processing is done.  Increases occur when the queue is
      starved and can exponentially increase over successive intervals.
      Decreases occur when more data is being maintained in the queue than
      needed to prevent starvation.  The number of extra objects, or "slack",
      is measured over successive intervals, and to avoid hysteresis the
      limit is only reduced by the minimum slack seen over a configurable
      time period.
      
      The dql API provides routines to manage the queue:
      - dql_init is called to initialize the dql structure
      - dql_reset is called to reset dynamic values
      - dql_queued is called when objects are enqueued
      - dql_avail returns the availability in the queue
      - dql_completed is called when objects have been consumed from the queue
      
      Configuration consists of:
      - max_limit, the maximum limit
      - min_limit, the minimum limit
      - slack_hold_time, the time over which slack is measured before reducing
        the queue limit
      Signed-off-by: Tom Herbert <therbert@google.com>
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
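The grow/shrink loop described above can be sketched in plain C. This is a simplified userland model: the field and function names echo the kernel's struct dql API, but the adjustment policy below is reduced to its essentials (a tick counter stands in for slack_hold_time) and is not the kernel implementation.

```c
#include <assert.h>

/* Simplified model of dynamic queue limits. */
struct dql {
    unsigned int limit;            /* current queue limit (objects)  */
    unsigned int num_queued;       /* total objects ever queued      */
    unsigned int num_completed;    /* total objects ever retired     */
    unsigned int min_limit, max_limit;
    unsigned int lowest_slack;     /* min slack seen this interval   */
    unsigned int slack_ticks;      /* completions since last shrink  */
    unsigned int slack_hold_ticks; /* stand-in for slack_hold_time   */
};

static unsigned int clamp_limit(const struct dql *d, unsigned int v)
{
    if (v < d->min_limit) return d->min_limit;
    if (v > d->max_limit) return d->max_limit;
    return v;
}

static void dql_queued(struct dql *d, unsigned int count)
{
    d->num_queued += count;
}

/* How many more objects the limit currently allows to be queued. */
static int dql_avail(const struct dql *d)
{
    return (int)(d->limit - (d->num_queued - d->num_completed));
}

static void dql_completed(struct dql *d, unsigned int count, int starved)
{
    d->num_completed += count;
    if (starved) {
        /* Queue was blocked while data sat unretired: grow quickly. */
        d->limit = clamp_limit(d, d->limit * 2 + 1);
        d->lowest_slack = ~0u;
        d->slack_ticks = 0;
        return;
    }
    /* Slack = limit beyond what was actually needed this interval. */
    unsigned int inflight = d->num_queued - d->num_completed;
    unsigned int slack = d->limit > inflight ? d->limit - inflight : 0;
    if (slack < d->lowest_slack)
        d->lowest_slack = slack;
    if (++d->slack_ticks >= d->slack_hold_ticks) {
        /* Shrink only by the minimum slack seen over the hold period,
         * which damps oscillation of the limit. */
        d->limit = clamp_limit(d, d->limit - d->lowest_slack);
        d->lowest_slack = ~0u;
        d->slack_ticks = 0;
    }
}
```

A starved completion grows the limit multiplicatively, while shrinking is bounded by the smallest observed slack, mirroring the asymmetry described in the commit message.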
  2. 29 November 2011, 34 commits
  3. 28 November 2011, 4 commits
    • net/irda: convert drivers/net/irda/* to use module_platform_driver() · 8b7ff200
      Authored by Axel Lin
      This patch converts the drivers in drivers/net/irda/* to use the
      module_platform_driver() macro which makes the code smaller and a bit
      simpler.
      
      Cc: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
      Signed-off-by: Axel Lin <axel.lin@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
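The boilerplate the macro removes can be sketched in a userland model. The struct, the registration functions, and the demo_driver name below are stand-ins, not the kernel's definitions; in the kernel, module_platform_driver() generates the equivalent of this init/exit pair plus the module_init()/module_exit() lines.

```c
#include <assert.h>

/* Stand-ins for the kernel's platform driver registration API. */
struct platform_driver {
    const char *name;
};

static int registered;

static int platform_driver_register(struct platform_driver *drv)
{
    (void)drv;
    registered = 1;
    return 0;
}

static void platform_driver_unregister(struct platform_driver *drv)
{
    (void)drv;
    registered = 0;
}

static struct platform_driver demo_driver = { .name = "demo" };

/* Before the conversion, each driver spelled out this pair by hand;
 * module_platform_driver(demo_driver) generates the equivalent. */
static int demo_driver_init(void)
{
    return platform_driver_register(&demo_driver);
}

static void demo_driver_exit(void)
{
    platform_driver_unregister(&demo_driver);
}
```

Collapsing this repeated pattern into one macro invocation is what makes each converted driver "smaller and a bit simpler".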
    • tcp: skip cwnd moderation in TCP_CA_Open in tcp_try_to_open · 8cd6d616
      Authored by Neal Cardwell
      The problem: Senders were overriding cwnd values picked during an undo
      by calling tcp_moderate_cwnd() in tcp_try_to_open().
      
      The fix: Don't moderate cwnd in tcp_try_to_open() if we're in
      TCP_CA_Open, since doing so is generally unnecessary and specifically
      would override a DSACK-based undo of a cwnd reduction made in fast
      recovery.
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
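The shape of the fix can be sketched in a toy C model. The state names follow the kernel's, but the struct and the moderation rule below are simplified stand-ins for tcp_sock and tcp_moderate_cwnd(), not the actual implementation.

```c
#include <assert.h>

enum tcp_ca_state { TCP_CA_Open, TCP_CA_Disorder, TCP_CA_Recovery };

/* Minimal stand-in for the fields this logic touches. */
struct sock_model {
    enum tcp_ca_state ca_state;
    unsigned int snd_cwnd;
    unsigned int packets_out;
};

/* Stand-in for tcp_moderate_cwnd(): clamp cwnd near packets in flight. */
static void moderate_cwnd(struct sock_model *sk)
{
    unsigned int cap = sk->packets_out + 1;
    if (sk->snd_cwnd > cap)
        sk->snd_cwnd = cap;
}

/* After the patch: moderate only when NOT in TCP_CA_Open, so a cwnd
 * restored by a DSACK-based undo is left alone. */
static void try_to_open(struct sock_model *sk)
{
    if (sk->ca_state != TCP_CA_Open)
        moderate_cwnd(sk);
}
```

In TCP_CA_Open the undo-restored cwnd survives; in any other state the moderation still applies as before.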
    • tcp: allow undo from reordered DSACKs · f698204b
      Authored by Neal Cardwell
      Previously, SACK-enabled connections hung around in TCP_CA_Disorder
      state while snd_una==high_seq, just waiting to accumulate DSACKs and
      hopefully undo a cwnd reduction. This could and did lead to the
      following unfortunate scenario: if some incoming ACKs advance snd_una
      beyond high_seq then we were setting undo_marker to 0 and moving to
      TCP_CA_Open, so if (due to reordering in the ACK return path) we
      shortly thereafter received a DSACK then we were no longer able to
      undo the cwnd reduction.
      
      The change: Simplify the congestion avoidance state machine by
      removing the behavior where SACK-enabled connections hung around in
      the TCP_CA_Disorder state just waiting for DSACKs. Instead, when
      snd_una advances to high_seq or beyond we typically move to
      TCP_CA_Open immediately and allow an undo in either TCP_CA_Open or
      TCP_CA_Disorder if we later receive enough DSACKs.
      
      Other patches in this series will provide other changes that are
      necessary to fully fix this problem.
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
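The simplified state machine can be sketched as a toy C model. The state and field names are illustrative stubs, not the kernel's tcp_sock; the point is only that moving to TCP_CA_Open no longer clears the undo state.

```c
#include <assert.h>
#include <stdbool.h>

enum ca_state { CA_Open, CA_Disorder, CA_Recovery };

/* Minimal stand-in for the fields involved. */
struct conn {
    enum ca_state state;
    unsigned int snd_una, high_seq;
    bool undo_marker;           /* is an undo still permitted? */
};

/* Called as ACKs advance snd_una.  After the patch, reaching high_seq
 * moves the connection to CA_Open but keeps undo_marker set, instead
 * of lingering in CA_Disorder or zeroing the marker. */
static void ack_advanced(struct conn *c)
{
    if (c->state != CA_Open && c->snd_una >= c->high_seq)
        c->state = CA_Open;     /* undo_marker intentionally kept */
}

/* A DSACK arriving later (e.g. reordered on the return path) can
 * still trigger the undo from either CA_Open or CA_Disorder. */
static bool dsack_can_undo(const struct conn *c)
{
    return c->undo_marker &&
           (c->state == CA_Open || c->state == CA_Disorder);
}
```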
    • tcp: use SACKs and DSACKs that arrive on ACKs below snd_una · e95ae2f2
      Authored by Neal Cardwell
      The bug: When the ACK field is below snd_una (which can happen when
      ACKs are reordered), senders ignored DSACKs (preventing undo) and did
      not call tcp_fastretrans_alert, so they did not increment
      prr_delivered to reflect newly-SACKed sequence ranges, and did not
      call tcp_xmit_retransmit_queue, thus passing up chances to send out
      more retransmitted and new packets based on any newly-SACKed packets.
      
      The change: When the ACK field is below snd_una (the "old_ack" goto
      label), call tcp_fastretrans_alert to allow undo based on any
      newly-arrived DSACKs and try to send out more packets based on
      newly-SACKed packets.
      
      Other patches in this series will provide other changes that are
      necessary to fully fix this problem.
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Acked-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
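The control-flow change at the "old_ack" path can be sketched in C. The two helpers below are counting stubs for tcp_fastretrans_alert() and tcp_xmit_retransmit_queue(); the real functions and their arguments are far richer than this model.

```c
#include <assert.h>
#include <stdbool.h>

struct ack_stats {
    int alerts;  /* times fastretrans_alert ran   */
    int xmits;   /* times the retransmit queue ran */
};

/* Stubs standing in for tcp_fastretrans_alert() and
 * tcp_xmit_retransmit_queue(); here they only count invocations. */
static void fastretrans_alert(struct ack_stats *s) { s->alerts++; }
static void xmit_retransmit_queue(struct ack_stats *s) { s->xmits++; }

/* After the patch: an old ACK (ack below snd_una) that carries
 * SACK/DSACK blocks still feeds the recovery machinery instead of
 * being dropped on the floor. */
static void process_ack(unsigned int ack, unsigned int snd_una,
                        bool carries_sack, struct ack_stats *s)
{
    if (ack < snd_una) {            /* the "old_ack" path */
        if (carries_sack) {
            fastretrans_alert(s);   /* may undo via a DSACK */
            xmit_retransmit_queue(s);
        }
        return;
    }
    /* ... normal ACK processing elided ... */
}
```

A reordered old ACK with DSACK information now reaches both helpers; an old ACK with no SACK data is still ignored.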