1. 28 Mar 2008, 4 commits
  2. 27 Mar 2008, 1 commit
    • [VLAN]: Reduce memory consumed by vlan_groups · 67727184
      Committed by Pavel Emelyanov
      Currently each vlan_group contains 8 pointers to arrays of 512
      struct net_device pointers each  :)  Such a construction "in many
      cases ... wastes memory".
      
      My proposal is to allow some of these array pointers to be NULL,
      meaning that there are no devices in that range. When a new device
      is added to the vlan_group, the appropriate array is allocated.
      
      The check in vlan_group_get_device() is safe, since the pointer
      vg->vlan_devices_arrays[x] can only switch from NULL to non-NULL.
      vlan_group_prealloc_vid() is guarded by the rtnl lock and is
      also safe.
      
      I've checked (I hope) all the places that use these arrays and
      found that register_vlan_dev() is the only place that can put a
      vlan device on an empty vlan_group.
      
      A rough calculation shows that after the patch a setup with a
      single vlan dev (or up to 512 vlans with sequential vids) will
      occupy approximately 8 times less memory.
      
      The question I have is: does this patch make sense, or are
      totally new structures required to store the vlan_devs?
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
      67727184
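      The change described above amounts to a two-level lookup table whose
      second-level arrays are allocated lazily. A minimal userspace-style
      sketch of that pattern (simplified types, sizes, and helper names;
      not the actual kernel code):

          #include <stdlib.h>

          #define VLAN_N_VID        4096
          #define GROUP_ARRAY_LEN   512
          #define GROUP_ARRAY_COUNT (VLAN_N_VID / GROUP_ARRAY_LEN)

          struct net_device;                     /* opaque for this sketch */

          struct vlan_group {
                  /* second-level arrays start out NULL: "no devices here" */
                  struct net_device **vlan_devices_arrays[GROUP_ARRAY_COUNT];
          };

          /* Lookup is safe while arrays are missing: a NULL slot just means
           * that vid range holds no devices, and a slot only ever goes from
           * NULL to non-NULL. */
          static struct net_device *vlan_group_get_device(struct vlan_group *vg,
                                                          unsigned int vid)
          {
                  struct net_device **array =
                          vg->vlan_devices_arrays[vid / GROUP_ARRAY_LEN];

                  return array ? array[vid % GROUP_ARRAY_LEN] : NULL;
          }

          /* Allocate the covering array before a device is inserted; in the
           * kernel this runs under the rtnl lock. */
          static int vlan_group_prealloc_vid(struct vlan_group *vg, unsigned int vid)
          {
                  unsigned int idx = vid / GROUP_ARRAY_LEN;

                  if (vg->vlan_devices_arrays[idx])
                          return 0;
                  vg->vlan_devices_arrays[idx] =
                          calloc(GROUP_ARRAY_LEN, sizeof(struct net_device *));
                  return vg->vlan_devices_arrays[idx] ? 0 : -1;
          }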
  3. 26 Mar 2008, 20 commits
  4. 25 Mar 2008, 4 commits
  5. 24 Mar 2008, 3 commits
  6. 23 Mar 2008, 1 commit
  7. 22 Mar 2008, 1 commit
    • [TCP]: TCP_DEFER_ACCEPT updates - process as established · ec3c0982
      Committed by Patrick McManus
      Change the TCP_DEFER_ACCEPT implementation so that it transitions a
      connection to ESTABLISHED after the handshake is complete instead of
      leaving it in SYN-RECV until some data arrives. Place the connection
      in the accept queue when the first data packet arrives from the slow
      path.
      
      Benefits:

       - an established connection is now reset if it never makes it
         to the accept queue

       - the diagnostic state of ESTABLISHED matches the packet traces
         showing a completed handshake

       - TCP_DEFER_ACCEPT timeouts are expressed in seconds and can now be
         enforced with reasonable accuracy instead of rounding up to the
         next exponential back-off of the syn-ack retry.
      Signed-off-by: Patrick McManus <mcmanus@ducksong.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ec3c0982
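      For context, TCP_DEFER_ACCEPT is the listener-side socket option whose
      second-granularity timeout the commit above makes enforceable. A small
      userspace sketch of how a server typically enables it (standard sockets
      API; error handling omitted for brevity):

          #include <arpa/inet.h>
          #include <netinet/in.h>
          #include <netinet/tcp.h>
          #include <string.h>
          #include <sys/socket.h>

          int make_deferred_listener(unsigned short port)
          {
                  int fd = socket(AF_INET, SOCK_STREAM, 0);
                  int defer_secs = 5;    /* wait up to ~5s for the first data */
                  struct sockaddr_in addr;

                  /* accept() is not woken for a connection until data arrives;
                   * defer_secs bounds how long the kernel waits for it. */
                  setsockopt(fd, IPPROTO_TCP, TCP_DEFER_ACCEPT,
                             &defer_secs, sizeof(defer_secs));

                  memset(&addr, 0, sizeof(addr));
                  addr.sin_family = AF_INET;
                  addr.sin_addr.s_addr = htonl(INADDR_ANY);
                  addr.sin_port = htons(port);
                  bind(fd, (struct sockaddr *)&addr, sizeof(addr));
                  listen(fd, 128);
                  return fd;
          }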
  8. 21 Mar 2008, 1 commit
    • [NET]: Add per-connection option to set max TSO frame size · 82cc1a7a
      Committed by Peter P Waskiewicz Jr
      Update: My mailer ate one of Jarek's feedback mails...  Fixed the
      parameter in netif_set_gso_max_size() to be u32, not u16.  Fixed the
      whitespace issue due to a patch import botch.  Changed the types from
      u32 to unsigned int to be more consistent with other variables in the
      area.  Also brought the patch up to the latest net-2.6.26 tree.
      
      Update: Made the gso_max_size container 32 bits, not 16.  Moved
      gso_max_size within netdev to a less hot-path location.  Made the
      names more consistent between the sock and netdev layers, and added
      a define for the max GSO size.
      
      Update: Respun for net-2.6.26 tree.
      
      Update: changed max_gso_frame_size and sk_gso_max_size from signed to
      unsigned - thanks Stephen!
      
      This patch adds the ability for device drivers to control the size of
      the TSO frames being sent to them, per TCP connection.  By setting the
      netdevice's gso_max_size value, the socket layer will set the GSO
      frame size based on that value.  This will propagate into the TCP
      layer, and TSOs of that size will be sent to the hardware.
      
      This can be desirable to help tune the bursty nature of TSO on a
      per-adapter basis, where one may have 1 GbE and 10 GbE devices
      coexisting in a system, one running multiqueue and the other not, etc.
      
      This can also be desirable for devices that cannot support full 64 KB
      TSOs, but still want to benefit from some level of segmentation
      offloading.
      Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      82cc1a7a
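      The driver-facing half of this is a single helper on the net device.
      A hedged sketch of a driver's setup path capping TSO at 16 KB (the
      helper is the one added here; the surrounding driver function and the
      16 KB limit are illustrative only):

          #include <linux/netdevice.h>

          #define MYDRV_MAX_TSO_SIZE (16 * 1024)  /* hypothetical HW limit */

          static void mydrv_setup_netdev(struct net_device *netdev)
          {
                  /* Ask the stack not to build GSO/TSO frames larger than the
                   * hardware can segment; the socket layer copies this into
                   * sk_gso_max_size and TCP sizes its skbs accordingly. */
                  netif_set_gso_max_size(netdev, MYDRV_MAX_TSO_SIZE);
          }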
  9. 18 Mar 2008, 2 commits
  10. 17 Mar 2008, 3 commits
    • devres: implement pcim_iomap_regions_request_all() · 916fbfb7
      Committed by Tejun Heo
      Some drivers need to reserve all PCI BARs to prevent other drivers
      from misusing unoccupied BARs.  pcim_iomap_regions_request_all()
      requests all BARs and iomaps the specified BARs.
      Signed-off-by: Tejun Heo <htejun@gmail.com>
      Cc: Greg Kroah-Hartman <gregkh@suse.de>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: Jeff Garzik <jeff@garzik.org>
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
      916fbfb7
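      A hedged sketch of the intended use in a managed PCI probe path (the
      driver name and BAR layout are illustrative; the pcim_* calls are the
      devres API the commit above extends):

          #include <linux/pci.h>

          static int mydrv_probe(struct pci_dev *pdev, const struct pci_device_id *id)
          {
                  void __iomem *regs;
                  int rc;

                  rc = pcim_enable_device(pdev);
                  if (rc)
                          return rc;

                  /* Request every BAR so no other driver can grab the unused
                   * ones, but only ioremap BAR 0, which holds our registers. */
                  rc = pcim_iomap_regions_request_all(pdev, 1 << 0, "mydrv");
                  if (rc)
                          return rc;

                  regs = pcim_iomap_table(pdev)[0];
                  /* ... program the device through 'regs'; devres unmaps and
                   * releases everything automatically on detach ... */
                  return 0;
          }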
    • virtio: fix race in enable_cb · 4265f161
      Committed by Christian Borntraeger
      There is a race in virtio_net, dealing with disabling/enabling the callback.
      I saw the following oops:
      
      kernel BUG at /space/kvm/drivers/virtio/virtio_ring.c:218!
      illegal operation: 0001 [#1] SMP
      Modules linked in: sunrpc dm_mod
      CPU: 2 Not tainted 2.6.25-rc1zlive-host-10623-gd358142-dirty #99
      Process swapper (pid: 0, task: 000000000f85a610, ksp: 000000000f873c60)
      Krnl PSW : 0404300180000000 00000000002b81a6 (vring_disable_cb+0x16/0x20)
                 R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:0 CC:3 PM:0 EA:3
      Krnl GPRS: 0000000000000001 0000000000000001 0000000010005800 0000000000000001
                 000000000f3a0900 000000000f85a610 0000000000000000 0000000000000000
                 0000000000000000 000000000f870000 0000000000000000 0000000000001237
                 000000000f3a0920 000000000010ff74 00000000002846f6 000000000fa0bcd8
      Krnl Code: 00000000002b819a: a7110001           tmll    %r1,1
                 00000000002b819e: a7840004           brc     8,2b81a6
                 00000000002b81a2: a7f40001           brc     15,2b81a4
                >00000000002b81a6: a51b0001           oill    %r1,1
                 00000000002b81aa: 40102000           sth     %r1,0(%r2)
                 00000000002b81ae: 07fe               bcr     15,%r14
                 00000000002b81b0: eb7ff0380024       stmg    %r7,%r15,56(%r15)
                 00000000002b81b6: a7f13e00           tmll    %r15,15872
      Call Trace:
      ([<000000000fa0bcd0>] 0xfa0bcd0)
       [<00000000002b8350>] vring_interrupt+0x5c/0x6c
       [<000000000010ab08>] do_extint+0xb8/0xf0
       [<0000000000110716>] ext_no_vtime+0x16/0x1a
       [<0000000000107e72>] cpu_idle+0x1c2/0x1e0
      
      The problem can be triggered with a high amount of host->guest traffic.
      I think it's the following race:
      
      poll says netif_rx_complete
      poll calls enable_cb
      enable_cb opens the interrupt mask
      a new packet comes, an interrupt is triggered----\
      enable_cb sees that there is more work           |
      enable_cb disables the interrupt                 |
             .                                         V
             .                            interrupt is delivered
             .                            skb_recv_done does atomic napi test, ok
       some waiting                       disable_cb is called->check fails->bang!
             .
      poll would do napi check
      poll would do disable_cb
      
      The fix is to let enable_cb not disable the interrupt again, but to
      expect the caller to do the cleanup if it returns false. In that case,
      the interrupt is only disabled if the napi test_set_bit was successful.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (cleaned up doco)
      4265f161
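      The caller-side contract after this fix: when enable_cb() reports that
      more buffers arrived while re-arming, the poll routine itself disables
      the callback and reschedules NAPI. A hedged sketch of that pattern
      using today's virtqueue_*/napi_* names (the 2.6.25 code went through
      the vq_ops indirection, and the rx helpers here are hypothetical):

          #include <linux/netdevice.h>
          #include <linux/virtio.h>

          static struct virtqueue *mynet_rx_vq(struct napi_struct *napi);     /* hypothetical */
          static int mynet_receive_buffers(struct virtqueue *vq, int budget); /* hypothetical */

          static int mynet_poll(struct napi_struct *napi, int budget)
          {
                  struct virtqueue *vq = mynet_rx_vq(napi);
                  int done = mynet_receive_buffers(vq, budget);

                  if (done < budget) {
                          napi_complete_done(napi, done);

                          /* Re-arm the callback; if new buffers slipped in while
                           * we were finishing, back out and poll again rather
                           * than racing with the interrupt handler. */
                          if (unlikely(!virtqueue_enable_cb(vq)) &&
                              napi_schedule_prep(napi)) {
                                  virtqueue_disable_cb(vq);
                                  __napi_schedule(napi);
                          }
                  }
                  return done;
          }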
    • smc91x: introduce platform data flags V2 · 3e947943
      Committed by Magnus Damm
      This patch introduces struct smc91x_platdata and modifies the driver
      so the bus width is checked at run time using SMC_nBIT() instead of
      SMC_CAN_USE_nBIT.

      V2 keeps the static configuration lean by using SMC_DYNAMIC_BUS_CONFIG.
      Signed-off-by: Magnus Damm <damm@igel.co.jp>
      Acked-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
      3e947943
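      The platform-data side of this reads like an ordinary board file: the
      machine code declares which bus widths the wiring supports, and the
      driver checks the flags at run time. A hedged sketch (the register base
      and IRQ are made up; struct smc91x_platdata and the SMC91X_USE_* flags
      are the ones this series introduces):

          #include <linux/ioport.h>
          #include <linux/platform_device.h>
          #include <linux/smc91x.h>

          /* This board wires the chip to a 16-bit bus only. */
          static struct smc91x_platdata myboard_smc91x_pdata = {
                  .flags = SMC91X_USE_16BIT,
          };

          static struct resource myboard_smc91x_resources[] = {
                  { .start = 0x20000300, .end = 0x2000030f, .flags = IORESOURCE_MEM },
                  { .start = 42,         .end = 42,         .flags = IORESOURCE_IRQ },
          };

          static struct platform_device myboard_smc91x_device = {
                  .name          = "smc91x",
                  .id            = -1,
                  .dev           = { .platform_data = &myboard_smc91x_pdata },
                  .num_resources = ARRAY_SIZE(myboard_smc91x_resources),
                  .resource      = myboard_smc91x_resources,
          };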