1. 11 Oct 2007 (4 commits)
    • ibmveth: Remove dead frag processing code · 3449a2ab
      Brian King authored
      Removes dead frag processing code from ibmveth. Since NETIF_F_SG was
      not set, this code was never executed. Also, since the ibmveth
      interface can only handle 6 fragments, core networking code would need
      to be modified in order to efficiently enable this support.
      Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
      3449a2ab
    • ibmveth: Implement ethtool hooks to enable/disable checksum offload · 5fc7e01c
      Brian King authored
      This patch adds the appropriate ethtool hooks to allow for enabling/disabling
      of hypervisor assisted checksum offload for TCP.
      Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
      5fc7e01c
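      For reference, a minimal sketch of such hooks against the 2.6.23-era
      ethtool_ops (the get_tx_csum/set_tx_csum fields are the real interface
      of that era; the veth_* names and private flag here are illustrative,
      not the driver's actual code):

      #include <linux/ethtool.h>
      #include <linux/netdevice.h>

      /* hypothetical private data; the real driver keeps more state */
      struct veth_priv {
      	u32 csum_offload;	/* hypervisor-assisted checksum enabled? */
      };

      static u32 veth_get_tx_csum(struct net_device *dev)
      {
      	struct veth_priv *priv = netdev_priv(dev);

      	return priv->csum_offload;
      }

      static int veth_set_tx_csum(struct net_device *dev, u32 data)
      {
      	struct veth_priv *priv = netdev_priv(dev);

      	/* a real driver would renegotiate the offload setting with the
      	 * hypervisor here before flipping the feature flag */
      	priv->csum_offload = !!data;
      	if (data)
      		dev->features |= NETIF_F_IP_CSUM;
      	else
      		dev->features &= ~NETIF_F_IP_CSUM;
      	return 0;
      }

      static const struct ethtool_ops veth_ethtool_ops = {
      	.get_tx_csum	= veth_get_tx_csum,
      	.set_tx_csum	= veth_set_tx_csum,
      };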
    • ibmveth: Enable TCP checksum offload · f4ff2872
      Brian King authored
      This patchset enables TCP checksum offload support for IPv4
      on ibmveth. This completely eliminates the generation and checking of
      the checksum for packets that are completely virtual and never
      touch a physical network. A simple TCP_STREAM netperf run on
      a virtual network with maximum mtu set yielded a ~30% increase
      in throughput. This feature is enabled by default on systems that
      support it, but can be disabled with a module option.
      Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f4ff2872
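      A default-on feature gated by a module option generally looks like
      this sketch (the parameter name is illustrative, not necessarily the
      driver's real option):

      #include <linux/module.h>

      /* illustrative: enabled by default, can be turned off at load time,
       * e.g. "modprobe ibmveth rx_csum=0" if the real option matches */
      static unsigned int rx_csum = 1;
      module_param(rx_csum, uint, 0444);
      MODULE_PARM_DESC(rx_csum, "Enable checksum offload (1=on, 0=off, default on)");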
    • [NET]: Make NAPI polling independent of struct net_device objects. · bea3348e
      Stephen Hemminger authored
      Several devices have multiple independent RX queues per net
      device, and some have a single interrupt doorbell for several
      queues.
      
      In either case, it's easier to support layouts like that if the
      structure representing the poll is independent from the net
      device itself.
      
      The signature of the ->poll() call back goes from:
      
      	int foo_poll(struct net_device *dev, int *budget)
      
      to
      
      	int foo_poll(struct napi_struct *napi, int budget)
      
      The caller is returned the number of RX packets processed (or
      the number of "NAPI credits" consumed if you want to get
      abstract).  The callee no longer messes around bumping
      dev->quota, *budget, etc. because that is all handled in the
      caller upon return.
      
      The napi_struct is to be embedded in the device driver private data
      structures.
      
      Furthermore, it is the driver's responsibility to disable all NAPI
      instances in its ->stop() device close handler.  Since the
      napi_struct is privatized into the driver's private data structures,
      only the driver knows how to get at all of the napi_struct instances
      it may have per-device.
      
      With lots of help and suggestions from Rusty Russell, Roland Dreier,
      Michael Chan, Jeff Garzik, and Jamal Hadi Salim.
      
      Bug fixes from Thomas Graf, Roland Dreier, Peter Zijlstra,
      Joseph Fannin, Scott Wood, Hans J. Koch, and Michael Chan.
      
      [ Ported to current tree and all drivers converted.  Integrated
        Stephen's follow-on kerneldoc additions, and restored poll_list
        handling to the old style to fix mutual exclusion issues.  -DaveM ]
      Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bea3348e
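      For reference, a minimal sketch of a driver written against the new
      contract (foo_* names are placeholders; in this era netif_rx_complete()
      took both the net_device and the napi_struct):

      #include <linux/netdevice.h>

      /* the napi_struct is embedded in the driver's own private data */
      struct foo_priv {
      	struct napi_struct napi;
      	struct net_device *dev;
      };

      /* new-style poll: budget arrives by value, work done is returned */
      static int foo_poll(struct napi_struct *napi, int budget)
      {
      	struct foo_priv *priv = container_of(napi, struct foo_priv, napi);
      	int work_done = 0;

      	/* ... process up to 'budget' RX packets, counting work_done ... */

      	if (work_done < budget) {
      		/* caught up: leave polled mode, then re-enable the IRQ */
      		netif_rx_complete(priv->dev, napi);
      	}
      	return work_done;
      }

      static int foo_open(struct net_device *dev)
      {
      	struct foo_priv *priv = netdev_priv(dev);

      	napi_enable(&priv->napi);
      	return 0;
      }

      static int foo_stop(struct net_device *dev)
      {
      	struct foo_priv *priv = netdev_priv(dev);

      	/* only the driver knows its napi instances, so it must
      	 * disable every one of them on close */
      	napi_disable(&priv->napi);
      	return 0;
      }

      /* at probe time: netif_napi_add(dev, &priv->napi, foo_poll, 16); */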
  2. 08 Aug 2007 (1 commit)
    • ibmveth: Fix rx pool deactivate oops · 76b9cfcc
      Brian King authored
      This fixes the following oops which can occur when trying to deallocate
      receive buffer pools using sysfs with the ibmveth driver.
      
      NIP: d00000000024f954 LR: d00000000024fa58 CTR: c0000000000d7478
      REGS: c00000000ffef9f0 TRAP: 0300   Not tainted  (2.6.22-ppc64)
      MSR: 8000000000009032 <EE,ME,IR,DR>  CR: 24242442  XER: 00000010
      DAR: 00000000000007f0, DSISR: 0000000042000000
      TASK = c000000002f91360[2967] 'bash' THREAD: c00000001398c000 CPU: 2
      GPR00: 0000000000000000 c00000000ffefc70 d000000000262d30 c00000001c4087a0
      GPR04: 00000003000000fe 0000000000000000 000000000000000f c000000000579d80
      GPR08: 0000000000365688 c00000001c408998 00000000000007f0 0000000000000000
      GPR12: d000000000251e88 c000000000579d80 00000000200957ec 0000000000000000
      GPR16: 00000000100b8808 00000000100feb30 0000000000000000 0000000010084828
      GPR20: 0000000000000000 000000001014d4d0 0000000000000010 c00000000ffefeb0
      GPR24: c00000001c408000 0000000000000000 c00000001c408000 00000000ffffb054
      GPR28: 00000000000000fe 0000000000000003 d000000000262700 c00000001c4087a0
      NIP [d00000000024f954] .ibmveth_remove_buffer_from_pool+0x38/0x108 [ibmveth]
      LR [d00000000024fa58] .ibmveth_rxq_harvest_buffer+0x34/0x78 [ibmveth]
      Call Trace:
      [c00000000ffefc70] [c0000000000280a8] .dma_iommu_unmap_single+0x14/0x28 (unreliable)
      [c00000000ffefd00] [d00000000024fa58] .ibmveth_rxq_harvest_buffer+0x34/0x78 [ibmveth]
      [c00000000ffefd80] [d000000000250e40] .ibmveth_poll+0xd8/0x434 [ibmveth]
      [c00000000ffefe40] [c00000000032da8c] .net_rx_action+0xdc/0x248
      [c00000000ffefef0] [c000000000068b4c] .__do_softirq+0xa8/0x164
      [c00000000ffeff90] [c00000000002789c] .call_do_softirq+0x14/0x24
      [c00000001398f6f0] [c00000000000c04c] .do_softirq+0x68/0xac
      [c00000001398f780] [c000000000068ca0] .irq_exit+0x54/0x6c
      [c00000001398f800] [c00000000000c8e4] .do_IRQ+0x170/0x1ac
      [c00000001398f890] [c000000000004790] hardware_interrupt_entry+0x18/0x1c
         Exception: 501 at .plpar_hcall_norets+0x24/0x94
          LR = .veth_pool_store+0x15c/0x298 [ibmveth]
      [c00000001398fb80] [d000000000250b2c] .veth_pool_store+0x5c/0x298 [ibmveth] (unreliable)
      [c00000001398fc30] [c000000000145530] .sysfs_write_file+0x140/0x1d8
      [c00000001398fcf0] [c0000000000de89c] .vfs_write+0x120/0x208
      [c00000001398fd90] [c0000000000df2c8] .sys_write+0x4c/0x8c
      [c00000001398fe30] [c0000000000086ac] syscall_exit+0x0/0x40
      Instruction dump:
      fba1ffe8 fbe1fff8 789d0022 f8010010 f821ff71 789c0020 1d3d00a8 7b8a1f24
      38000000 7c7f1b78 7d291a14 e9690128 <7c0a592a> e8030000 e9690120 80a90100
      Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
      76b9cfcc
  3. 04 Dec 2006 (1 commit)
  4. 01 Aug 2006 (1 commit)
    • [POWERPC] clean up pseries hcall interfaces · b9377ffc
      Anton Blanchard authored
      Our pseries hcall interfaces are out of control:
      
      	plpar_hcall_norets
      	plpar_hcall
      	plpar_hcall_8arg_2ret
      	plpar_hcall_4out
      	plpar_hcall_7arg_7ret
      	plpar_hcall_9arg_9ret
      
      Create 3 interfaces to cover all cases:
      
      	plpar_hcall_norets:	7 arguments, no returns
      	plpar_hcall:		6 arguments, 4 returns
      	plpar_hcall9:		9 arguments, 9 returns
      
      There are only 2 cases in the kernel that need plpar_hcall9, hopefully
      we can keep it that way.
      
      Pass in a buffer to stash return parameters so we avoid the &dummy1,
      &dummy2 madness.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      --
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      b9377ffc
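      A usage sketch of the consolidated convention (H_EXAMPLE is a made-up
      opcode; plpar_hcall() and PLPAR_HCALL_BUFSIZE are the interfaces this
      commit introduces):

      #include <asm/hvcall.h>

      static long example_hcall(unsigned long unit_address)
      {
      	/* return values land in a caller-supplied buffer, replacing
      	 * the old &dummy1, &dummy2 style */
      	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
      	long rc;

      	rc = plpar_hcall(H_EXAMPLE, retbuf, unit_address);
      	if (rc == H_SUCCESS)
      		/* up to four return values: retbuf[0]..retbuf[3] */
      		return retbuf[0];
      	return rc;
      }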
  5. 31 Jul 2006 (1 commit)
  6. 24 May 2006 (2 commits)
    • [netdrvr ibmlana, ibmveth] trim trailing whitespace · d7fbeba6
      Jeff Garzik authored
      d7fbeba6
    • [PATCH] ibmveth change buffer pools dynamically · 860f242e
      Santiago Leon authored
      This patch provides a sysfs interface to change some properties of the
      ibmveth buffer pools (size of the buffers, number of buffers per pool,
      and whether a pool is active).  Ethernet drivers use ethtool to provide
      this type of functionality.  However, the buffers in the ibmveth driver
      can have an arbitrary size (not only the regular, mini, and jumbo sizes
      that ethtool can change), and ibmveth can also have an arbitrary number
      of buffer pools.
      
      Under heavy load we have seen dropped packets, which obviously kills
      TCP performance.  We have created several fixes that mitigate this
      issue, but we definitely need a way of changing the number of buffers
      for an adapter dynamically.  Also, changing the size of the buffers
      allows users to change the MTU to something big (bigger than a jumbo
      frame), greatly improving performance on partition-to-partition
      transfers.
      
      The patch creates directories pool1...pool4 in the device directory in
      sysfs, each with files: num, size, and active (which default to the
      values in the mainline version).
      
      Comments and suggestions are welcome...
      --
      Santiago A. Leon
      Power Linux Development
      IBM Linux Technology Center
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
      860f242e
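      A hedged sketch of how per-pool files like these can be wired up
      through a kobject of that era (the num/size/active names come from the
      patch description; the struct layout and handlers are illustrative):

      #include <linux/kernel.h>
      #include <linux/kobject.h>
      #include <linux/string.h>
      #include <linux/sysfs.h>

      struct veth_pool {
      	struct kobject kobj;
      	u32 num;	/* buffers in the pool */
      	u32 size;	/* bytes per buffer */
      	u32 active;
      };

      static struct attribute veth_num_attr    = { .name = "num",    .mode = 0644 };
      static struct attribute veth_size_attr   = { .name = "size",   .mode = 0644 };
      static struct attribute veth_active_attr = { .name = "active", .mode = 0644 };

      static struct attribute *veth_pool_attrs[] = {
      	&veth_num_attr, &veth_size_attr, &veth_active_attr, NULL,
      };

      static ssize_t veth_pool_show(struct kobject *kobj,
      			      struct attribute *attr, char *buf)
      {
      	struct veth_pool *pool = container_of(kobj, struct veth_pool, kobj);

      	if (!strcmp(attr->name, "num"))
      		return sprintf(buf, "%u\n", pool->num);
      	if (!strcmp(attr->name, "size"))
      		return sprintf(buf, "%u\n", pool->size);
      	return sprintf(buf, "%u\n", pool->active);
      }

      static ssize_t veth_pool_store(struct kobject *kobj, struct attribute *attr,
      			       const char *buf, size_t count)
      {
      	struct veth_pool *pool = container_of(kobj, struct veth_pool, kobj);
      	unsigned long val = simple_strtoul(buf, NULL, 10);

      	/* a real driver must quiesce, free, and remap buffers here */
      	if (!strcmp(attr->name, "num"))
      		pool->num = val;
      	else if (!strcmp(attr->name, "size"))
      		pool->size = val;
      	else
      		pool->active = !!val;
      	return count;
      }

      static struct sysfs_ops veth_pool_ops = {
      	.show  = veth_pool_show,
      	.store = veth_pool_store,
      };

      /* each pool's kobject is initialized with this ktype and added
       * under the device's sysfs directory as pool1..pool4 */
      static struct kobj_type veth_pool_ktype = {
      	.sysfs_ops	= &veth_pool_ops,
      	.default_attrs	= veth_pool_attrs,
      };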
  7. 29 Oct 2005 (3 commits)
    • [PATCH] ibmveth lockless TX · 60296d9e
      Santiago Leon authored
      This patch adds the lockless TX feature to the ibmveth driver.  The
      hypervisor has its own locking so the only change that is necessary is
      to protect the statistics counters.
      Signed-off-by: Santiago Leon <santil@us.ibm.com>
      Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
      60296d9e
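      In outline, a lockless TX path of that era looks like this sketch
      (names are illustrative; NETIF_F_LLTX tells the core not to take the
      xmit lock, so the driver serializes only the counters itself):

      #include <linux/netdevice.h>
      #include <linux/skbuff.h>
      #include <linux/spinlock.h>

      struct veth_tx_priv {
      	spinlock_t stats_lock;		/* protects the counters only */
      	struct net_device_stats stats;
      };

      /* at setup: dev->features |= NETIF_F_LLTX; */

      static int veth_start_xmit(struct sk_buff *skb, struct net_device *dev)
      {
      	struct veth_tx_priv *priv = netdev_priv(dev);
      	unsigned long flags;

      	/* ... hand the frame to the hypervisor, which serializes
      	 * concurrent sends itself, so no xmit lock is needed ... */

      	spin_lock_irqsave(&priv->stats_lock, flags);
      	priv->stats.tx_packets++;
      	priv->stats.tx_bytes += skb->len;
      	spin_unlock_irqrestore(&priv->stats_lock, flags);

      	dev_kfree_skb(skb);
      	return 0;
      }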
    • [PATCH] ibmveth fix buffer replenishing · e2adbcb4
      Santiago Leon authored
      This patch moves the allocation of RX skb buffers out of a workqueue
      and into the RX processing path, where it is now done directly.  This
      change was suggested by Dave Miller when the driver was starving the
      RX buffers and deadlocking under heavy traffic:
      
      > Allocating RX SKBs via tasklet is, IMHO, the worst way to
      > do it.  It is no surprise that there are starvation cases.
      >
      > If tasklets or work queues get delayed in any way, you lose,
      > and it's very easy for a card to catch up with the driver RX'ing
      > packets very fast, no matter how aggressive you make the
      > replenishing.  By the time you detect that you need to be
      > "more aggressive" it is already too late.
      > The only pseudo-reliable way is to allocate at RX processing time.
      >
      Signed-off-by: Santiago Leon <santil@us.ibm.com>
      Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
      e2adbcb4
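      The resulting RX path has roughly this shape (a sketch with
      illustrative names): the refill happens inline, so a failed allocation
      is simply retried on the next pass rather than racing a deferred
      tasklet or workqueue.

      #include <linux/netdevice.h>
      #include <linux/skbuff.h>

      struct veth_rx_priv {
      	int buffers_posted;
      	int pool_size;
      	int buff_size;
      };

      /* illustrative stub: does the device have completed RX buffers? */
      static int veth_rx_pending(struct veth_rx_priv *priv)
      {
      	return 0;
      }

      static void veth_replenish(struct veth_rx_priv *priv)
      {
      	/* refill synchronously; on allocation failure just stop and
      	 * retry on the next RX pass instead of deferring the work */
      	while (priv->buffers_posted < priv->pool_size) {
      		struct sk_buff *skb = dev_alloc_skb(priv->buff_size);

      		if (!skb)
      			break;
      		/* ... DMA-map skb->data and post it to the device ... */
      		priv->buffers_posted++;
      	}
      }

      static int veth_rx(struct veth_rx_priv *priv, int budget)
      {
      	int done = 0;

      	while (done < budget && veth_rx_pending(priv)) {
      		/* ... harvest a completed buffer, pass the skb up ... */
      		priv->buffers_posted--;
      		done++;
      	}
      	veth_replenish(priv);	/* at RX processing time, as suggested */
      	return done;
      }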
    • [PATCH] ibmveth fix buffer pool management · b6d35182
      Santiago Leon authored
      This patch changes the way the ibmveth driver handles the receive
      buffers.  The old code mallocs and maps all the buffers in the pools
      regardless of MTU size, and it also limits the number of buffer pools
      to three.  This patch makes the driver allocate and map only the
      buffers needed to support the current MTU.  It also replaces the
      hardcoded buffer pool count, sizes, and element counts with arrays to
      make them easier to change (with the hope of making them runtime
      parameters in the future).
      Signed-off-by: Santiago Leon <santil@us.ibm.com>
      Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
      b6d35182
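      A sketch of the table-driven idea (counts and sizes here are
      illustrative, and the arrays are assumed sorted by buffer size): pool
      parameters live in arrays, and a larger pool is mapped only when the
      current MTU actually needs buffers that big.

      #include <linux/if_ether.h>

      #define VETH_NUM_POOLS	3

      /* illustrative tuning tables, one entry per pool, smallest first */
      static int pool_count[VETH_NUM_POOLS] = { 256, 768, 256 };
      static int pool_size[VETH_NUM_POOLS]  = { 512, 4096, 16384 };

      struct veth_adapter {
      	int pool_active[VETH_NUM_POOLS];
      };

      static void veth_activate_pools(struct veth_adapter *adapter, int mtu)
      {
      	int i, need = mtu + ETH_HLEN;

      	for (i = 0; i < VETH_NUM_POOLS; i++)
      		/* the smallest pool is always useful; a bigger pool is
      		 * mapped only if the next smaller one cannot hold a
      		 * full frame at the current MTU */
      		adapter->pool_active[i] = (i == 0 || pool_size[i - 1] < need);
      }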
  8. 17 Apr 2005 (1 commit)
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      1da177e4