1. 30 September 2005, 6 commits
    • [ATM]: [lec] reset retry counter when new arp issued · 75b895c1
      Committed by Scott Talbert
      From: Scott Talbert <scott.talbert@lmco.com>
      Signed-off-by: Chas Williams <chas@cmf.nrl.navy.mil>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [ATM]: [lec] attempt to support cisco failover · 4a7097fc
      Committed by Scott Talbert
      From: Scott Talbert <scott.talbert@lmco.com>
      Signed-off-by: Chas Williams <chas@cmf.nrl.navy.mil>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Don't over-clamp window in tcp_clamp_window() · 09e9ec87
      Committed by Alexey Kuznetsov
      From: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
      
      Handle better the case where the sender sends full sized
      frames initially, then moves to a mode where it trickles
      out small amounts of data at a time.
      
      This known problem is even mentioned in the comments
      above tcp_grow_window() in tcp_input.c, specifically:
      
      ...
       * The scheme does not work when sender sends good segments opening
       * window and then starts to feed us spagetti. But it should work
       * in common situations. Otherwise, we have to rely on queue collapsing.
      ...
      
      When the sender sends full sized frames, the "struct sk_buff" overhead
      from each packet is small, so we'll advertise a larger window.
      If the sender moves to a mode where small segments are sent, this
      ratio tilts to the other extreme and we start overrunning
      the socket buffer space.
      
      tcp_clamp_window() tries to address this, but its clamping of
      tp->window_clamp is a wee bit too aggressive for this particular case.
      
      Fix confirmed by Ion Badulescu.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Revert 6b251858 · 01ff367e
      Committed by David S. Miller
      But retain the comment fix.
      
      Alexey Kuznetsov has explained the situation as follows:
      
      --------------------
      
      I think the fix is incorrect. Look, the RFC function init_cwnd(mss) is
      not continuous: e.g. for mss=1095 it needs an initial window of 1095*4,
      but for mss=1096 it is 1096*3. We do not know exactly what mss the
      sender used for its calculations. If we advertised 1096 (and calculated
      an initial window of 3*1096), the sender could limit its mss to some
      value < 1096 and would then need a window of his_mss*4 > 3*1096 to send
      the initial burst.
      
      See?
      
      So, the honest function for the initial rcv_wnd derived from
      tcp_init_cwnd() is:
      
      	init_rcv_wnd(mss)=
      	  min { init_cwnd(mss1)*mss1 for mss1 <= mss }
      
      It is something sort of:
      
      	if (mss < 1096)
      		return mss*4;
      	if (mss < 1096*2)
      		return 1096*4;
      	return mss*2;
      
      (I just scrawled a graph on a piece of paper; it is difficult to see
      or to explain without it.)
      
      I selected it differently, giving more window than is strictly
      required.  The initial receive window must be large enough to allow a
      sender following the RFC (or just setting initial cwnd to 2) to send
      its initial burst.  But beyond that it is arbitrary, so I decided to
      give slack space of one segment.
      
      Actually, the logic was:
      
      If mss is low/normal (<= ethernet), set the window to receive more than
      the initial burst allowed by the RFC under the worst conditions,
      i.e. mss*4. This gives slack space of 1 segment for ethernet frames.
      
      For msses slightly more than an ethernet frame, take 3. Try to give
      slack space of 1 frame again.
      
      If mss is huge, force 2*mss. No slack space.
      
      Value 1460*3 is really confusing. The minimal one is 1096*2, but beyond
      that it is an arbitrary value. It was meant to be ~4096. 1460*3 is
      just the magic number from the RFC, and 1460*3 = 1095*4 is the magic
      :-), so I guess it was typed in by hand.
      
      --------------------
      Signed-off-by: David S. Miller <davem@davemloft.net>
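      The discontinuity Kuznetsov describes, and the piecewise window he
      sketches, can be checked with a small C sketch. Here init_cwnd_segs
      follows RFC 3390's min(4*MSS, max(2*MSS, 4380)) expressed in whole
      segments; both function names are illustrative helpers, not kernel
      APIs:

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* RFC 3390 initial congestion window, in whole segments:
       * min(4*MSS, max(2*MSS, 4380 bytes)) / MSS.  Illustrative only. */
      static int init_cwnd_segs(int mss)
      {
      	int bytes = 4 * mss;
      	int cap = (2 * mss > 4380) ? 2 * mss : 4380;

      	if (bytes > cap)
      		bytes = cap;
      	return bytes / mss;
      }

      /* The piecewise initial receive window from the mail above. */
      static int init_rcv_wnd(int mss)
      {
      	if (mss < 1096)
      		return mss * 4;
      	if (mss < 1096 * 2)
      		return 1096 * 4;
      	return mss * 2;
      }

      int main(void)
      {
      	/* One byte of mss drops the initial burst from 4 to 3 segments. */
      	assert(init_cwnd_segs(1095) == 4);
      	assert(init_cwnd_segs(1096) == 3);

      	/* The advertised window always covers the RFC burst for that mss. */
      	for (int mss = 536; mss <= 9000; mss++)
      		assert(init_rcv_wnd(mss) >= init_cwnd_segs(mss) * mss);

      	printf("ok\n");
      	return 0;
      }
      ```

      The loop confirms the "slack space" claim: for ethernet (mss 1460) the
      window is 1096*4 = 4384 against a burst of 3*1460 = 4380.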
    • [PATCH] fix TASK_STOPPED vs TASK_NONINTERACTIVE interaction · aa55a086
      Committed by Oleg Nesterov
      do_signal_stop:
      
      	for_each_thread(t) {
      		if (t->state < TASK_STOPPED)
      			++sig->group_stop_count;
      	}
      
      However, TASK_NONINTERACTIVE > TASK_STOPPED, so this loop will not
      count TASK_INTERRUPTIBLE | TASK_NONINTERACTIVE threads.
      
      See also wait_task_stopped(), which checks ->state > TASK_STOPPED.
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      
      [ We really probably should always use the appropriate bitmasks to test
        task states, not do it like this. Using something like
      
      	#define TASK_RUNNABLE (TASK_RUNNING | TASK_INTERRUPTIBLE | \
      				TASK_UNINTERRUPTIBLE | TASK_NONINTERACTIVE)
      
        and then doing "if (task->state & TASK_RUNNABLE)" or similar. But the
        ordering of the task states is historical, and keeping the ordering
        does make sense regardless. ]
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
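      The counting bug can be reproduced outside the kernel with the 2.6-era
      state values, copied here for illustration; TASK_RUNNABLE is the
      hypothetical mask from Linus's note, not an existing kernel macro:

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Task-state values as in 2.6-era <linux/sched.h> (illustrative copy). */
      #define TASK_RUNNING         0
      #define TASK_INTERRUPTIBLE   1
      #define TASK_UNINTERRUPTIBLE 2
      #define TASK_STOPPED         4
      #define TASK_TRACED          8
      #define TASK_NONINTERACTIVE  64

      /* Hypothetical mask in the spirit of Linus's suggestion.  Note that
       * TASK_RUNNING is 0, so a pure mask test cannot single that state out. */
      #define TASK_RUNNABLE (TASK_RUNNING | TASK_INTERRUPTIBLE | \
                             TASK_UNINTERRUPTIBLE | TASK_NONINTERACTIVE)

      int main(void)
      {
      	long state = TASK_INTERRUPTIBLE | TASK_NONINTERACTIVE; /* 65 */

      	/* The ordering test from do_signal_stop() misses this thread,
      	 * because 65 is not < 4 ... */
      	assert(!(state < TASK_STOPPED));

      	/* ... while a bitmask test classifies it as sleeping/runnable. */
      	assert(state & TASK_RUNNABLE);

      	printf("ok\n");
      	return 0;
      }
      ```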
    • Merge master.kernel.org:/home/rmk/linux-2.6-serial · b20fd650
      Committed by Linus Torvalds
  2. 29 September 2005, 29 commits
  3. 28 September 2005, 5 commits