1. 22 Oct 2006: 5 commits
  2. 21 Oct 2006: 21 commits
  3. 20 Oct 2006: 8 commits
  4. 19 Oct 2006: 6 commits
    • [TCP]: Bound TSO defer time · ae8064ac
      John Heffner authored
      This patch limits the amount of time you will defer sending a TSO segment
      to less than two clock ticks, or the time between two acks, whichever is
      longer.
      
      On slow links, deferring causes significant bursts.  See attached plots,
      which show RTT through a 1 Mbps link with a 100 ms RTT and ~100 ms queue
      for (a) non-TSO, (b) current TSO, and (c) patched TSO.  This burstiness
      causes significant jitter, tends to overflow queues early (bad for short
      queues), and makes delay-based congestion control more difficult.
      
      I believe that deferring by a couple of clock ticks will have a
      relatively small impact on performance.
      Signed-off-by: John Heffner <jheffner@psc.edu>
      Signed-off-by: David S. Miller <davem@davemloft.net>
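
      A hedged sketch of the bound described above; this is not the code from
      the patch, and tso_defer_start, inter_ack_jiffies, and
      tso_may_keep_deferring() are hypothetical names used only to illustrate
      "defer for at most max(two clock ticks, the inter-ACK interval)":

          #include <stdbool.h>

          struct tso_defer_state {
              unsigned long tso_defer_start;  /* jiffies at first deferral, 0 = not deferring */
          };

          /* Keep deferring only while less than max(2 ticks, inter-ACK time) has elapsed. */
          static bool tso_may_keep_deferring(struct tso_defer_state *s,
                                             unsigned long now_jiffies,
                                             unsigned long inter_ack_jiffies)
          {
              unsigned long limit = inter_ack_jiffies > 2 ? inter_ack_jiffies : 2;

              if (!s->tso_defer_start) {
                  /* First deferral of this segment: start the clock. */
                  s->tso_defer_start = now_jiffies ? now_jiffies : 1;
                  return true;
              }
              if (now_jiffies - s->tso_defer_start < limit)
                  return true;   /* still within the bound, keep deferring */

              s->tso_defer_start = 0;   /* bound exceeded: send the segment now */
              return false;
          }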
    • [IPv4] fib: Remove unused fib_config members · b52f070c
      Thomas Graf authored
      Signed-off-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [IPV6]: Remove struct pol_chain. · e320af1d
      Ville Nuorvala authored
      Struct pol_chain has existed since at least the 2.2 kernel, but isn't used
      anymore.  As IPv6 policy routing is implemented in a totally different
      way in the current kernel, just get rid of it.
      Signed-off-by: Ville Nuorvala <vnuorval@tcs.hut.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TIPC]: Added subscription cancellation capability · eb409460
      Lijun Chen authored
      This patch allows a TIPC application to cancel an existing
      topology service subscription by re-requesting the subscription
      with the TIPC_SUB_CANCEL filter bit set.  (All other bits of
      the cancel request must match the original subscription request.)
      Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
      Signed-off-by: Per Liden <per.liden@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
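
      A hedged userspace sketch of the cancellation flow described above, not
      code from the patch: it assumes topsrv_sd is a socket already connected
      to the TIPC topology service and that orig holds the exact struct
      tipc_subscr originally submitted on that connection; the fallback
      TIPC_SUB_CANCEL value is an assumption for headers that predate this
      patch.

          #include <unistd.h>
          #include <linux/tipc.h>

          #ifndef TIPC_SUB_CANCEL
          #define TIPC_SUB_CANCEL 0x4   /* assumed value; check linux/tipc.h */
          #endif

          /* Re-submit the original subscription with only the cancel bit added. */
          static int cancel_tipc_subscription(int topsrv_sd,
                                              const struct tipc_subscr *orig)
          {
              struct tipc_subscr cancel = *orig;  /* all other bits must match */

              cancel.filter |= TIPC_SUB_CANCEL;

              if (write(topsrv_sd, &cancel, sizeof(cancel)) != (ssize_t)sizeof(cancel))
                  return -1;
              return 0;
          }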
    • PCI Hotplug: move pci_hotplug.h to include/linux/ · 7a54f25c
      Greg Kroah-Hartman authored
      This makes it possible to build PCI hotplug drivers outside of the main
      kernel tree, and Sam keeps telling me to move local header files to
      their proper places...
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
    • PCI: optionally sort device lists breadth-first · 6b4b78fe
      Matt Domsch authored
      Problem:
      New Dell PowerEdge servers have 2 embedded ethernet ports, which are
      labeled NIC1 and NIC2 on the chassis, in the BIOS setup screens, and
      in the printed documentation.  Assuming no other add-in ethernet ports
      in the system, Linux 2.4 kernels name these eth0 and eth1
      respectively.  Many people have come to expect this naming.  Linux 2.6
      kernels name these eth1 and eth0 respectively (backwards from
      expectations).  I also have reports that various Sun and HP servers
      have similar behavior.
      
      
      Root cause:
      Linux 2.4 kernels walk the pci_devices list, which happens to be
      sorted in breadth-first order (or pcbios_find_device order on i386,
      which most often is breadth-first also).  2.6 kernels have both the
      pci_devices list and the pci_bus_type.klist_devices list, the latter
      is what is walked at driver load time to match the pci_id tables; this
      klist happens to be in depth-first order.
      
      On systems where, for physical routing reasons, NIC1 appears on a
      lower bus number than NIC2, but NIC2's bridge is discovered first in
      the depth-first ordering, NIC2 will be discovered before NIC1.  If the
      list were sorted breadth-first, NIC1 would be discovered before NIC2.
      
      A PowerEdge 1955 system has the following topology which easily
      exhibits the difference between depth-first and breadth-first device
      lists.
      
      -[0000:00]-+-00.0  Intel Corporation 5000P Chipset Memory Controller Hub
                 +-02.0-[0000:03-08]--+-00.0-[0000:04-07]--+-00.0-[0000:05-06]----00.0-[0000:06]----00.0  Broadcom Corporation NetXtreme II BCM5708S Gigabit Ethernet (labeled NIC2, 2.4 kernel name eth1, 2.6 kernel name eth0)
                 +-1c.0-[0000:01-02]----00.0-[0000:02]----00.0  Broadcom Corporation NetXtreme II BCM5708S Gigabit Ethernet (labeled NIC1, 2.4 kernel name eth0, 2.6 kernel name eth1)
      
      
      Other factors, such as device driver load order and the presence of
      PCI slots at various points in the bus hierarchy further complicate
      this problem; I'm not trying to solve those here, just restore the
      device order, and thus basic behavior, that 2.4 kernels had.
      
      
      Solution:
      
      The solution can come in multiple steps.
      
      Suggested fix #1: kernel
      The patch below optionally sorts the two device lists into breadth-first
      ordering to maintain compatibility with 2.4 kernels.  It adds two new
      command line options:
        pci=bfsort
        pci=nobfsort
      to force the sort order, or not, as you wish.  It also adds DMI checks
      for the specific Dell systems which exhibit "backwards" ordering, to
      make them "right".
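
      A minimal sketch of what breadth-first ordering means for the device
      list; this is not the comparator from the patch itself, just an
      illustration that keys each device by (domain, bus, devfn) so every
      device on bus N sorts ahead of any device on bus N+1, no matter where
      its bridge sits in the depth-first scan:

          struct pci_key {
              unsigned int domain;  /* PCI segment/domain number */
              unsigned int bus;     /* bus number */
              unsigned int devfn;   /* device/function, as encoded in config space */
          };

          /* qsort()-style comparator: negative if a sorts before b. */
          static int pci_bf_cmp(const struct pci_key *a, const struct pci_key *b)
          {
              if (a->domain != b->domain)
                  return a->domain < b->domain ? -1 : 1;
              if (a->bus != b->bus)
                  return a->bus < b->bus ? -1 : 1;
              if (a->devfn != b->devfn)
                  return a->devfn < b->devfn ? -1 : 1;
              return 0;
          }

      With the patch applied, booting with pci=bfsort forces this ordering and
      pci=nobfsort disables it, independent of the DMI checks for the affected
      Dell systems.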
      
      
      Suggested fix #2: udev rules from userland
      Many people also have the expectation that embedded NICs are always
      discovered before add-in NICs (which this patch does not try to do).
      Using the PCI IRQ Routing Table provided by system BIOS, it's easy to
      determine which PCI devices are embedded, or if add-in, which PCI slot
      they're in.  I'm working on a tool that would allow udev to name
      ethernet devices in ascending embedded, slot 1 .. slot N order,
      subsorted by PCI bus/dev/fn breadth-first.  It will also be possible to
      use it independently of udev for those distributions that don't use
      udev in their installers.
      
      Suggested fix #3: system board routing rules
      One can constrain the system board layout to put NIC1 ahead of NIC2
      regardless of breadth-first or depth-first discovery order.  This adds
      a significant level of complexity to board routing, and may not be
      possible in all instances (witness the above systems from several
      major manufacturers).  I don't want to encourage this particular train
      of thought too far, at the expense of not doing #1 or #2 above.
      
      
      Feedback appreciated.  Patch tested on a Dell PowerEdge 1955 blade
      with 2.6.18.
      
      You'll also note I took some liberty and temporarily broke the klist
      abstraction to simplify and speed up the sort algorithm.  I think
      that's both safe and appropriate in this instance.
      Signed-off-by: Matt Domsch <Matt_Domsch@dell.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
      