1. 10 Jun 2007 (5 commits)
    • NetXen: Fix compile failure seen on PPC architecture · 0d04761d
      Mithlesh Thukral committed
      NetXen: Add NETXEN prefixes to macros to clean them up.
      This is a cleanup patch which adds a NETXEN prefix to some standalone
      macro names. These caused compile errors when the NetXen driver was
      backported to 2.6.9 on the PPC architecture, because macros such as
      USER_START are already defined in arch/ppc64/mm/hash_utils.c.
      Signed-off-by: Andy Gospodarek <andy@greyhouse.net>
      Signed-off-by: Wen Xiong <wenxiong@us.ibm.com>
      Acked-by: Mithlesh Thukral <mithlesh@netxen.com>
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
      0d04761d
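The rename described above can be sketched as follows. This is a minimal userspace illustration, not the driver's actual code: the macro values and the `user_region_fits` helper are hypothetical; only the `USER_START` → `NETXEN_USER_START` rename itself comes from the commit.

```c
#include <assert.h>

/* Before the patch the driver used bare names like USER_START, which
 * collided with an identically named macro in arch/ppc64/mm/hash_utils.c
 * when backported to 2.6.9 on PPC. Prefixing keeps the driver's names
 * out of the namespace shared with arch headers. Values are illustrative. */
#define NETXEN_USER_START       0x3E8000UL  /* was: USER_START */
#define NETXEN_FLASH_TOTAL_SIZE 0x400000UL  /* illustrative flash size */

/* Hypothetical helper showing the prefixed macros in use. */
static int user_region_fits(unsigned long size)
{
    return NETXEN_USER_START + size <= NETXEN_FLASH_TOTAL_SIZE;
}
```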
    • NetXen: Fix ping issue after reboot on Blades with 3.4.19 firmware · 3e2facef
      Mithlesh Thukral committed
      NetXen: Fix initialization and subsequent ping issue on 3.4.19 firmware
      This patch fixes the ping problem seen on X/P Blades after the adapter's
      firmware was moved to 3.4.19. After the interface was configured and
      brought up, ping failed: the NetXen adapter would not accept broadcast
      ARP packets, and only a manual MAC address entry in the ARP table made
      ping work.
      The NetXen adapter should finish initialization during system boot, but
      it does not initialize correctly there, so the firmware has to be
      reloaded in the probe routine. The netxen_config_0 and netxen_config_1
      registers are also re-initialized.
      
      Signed-off-by: Wen Xiong <wenxiong@us.ibm.com>
      Signed-off-by: Mithlesh Thukral <mithlesh@netxen.com>
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
      3e2facef
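The probe-time logic this commit describes can be sketched as below. This is a hedged userspace model, not the real driver: the struct layout, the `netxen_probe` / `netxen_load_firmware` bodies, and the register value `0x1` are all placeholders; only the idea (reload firmware in probe, re-initialize netxen_config_0/netxen_config_1) is from the commit message.

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal model of the adapter state touched in probe. Illustrative only. */
struct netxen_adapter {
    bool fw_loaded;
    unsigned int config_0;
    unsigned int config_1;
};

/* Stand-in for the real firmware download. */
static void netxen_load_firmware(struct netxen_adapter *ad)
{
    ad->fw_loaded = true;
}

static int netxen_probe(struct netxen_adapter *ad)
{
    /* Do not trust boot-time initialization: reload the firmware
     * every time probe runs. */
    netxen_load_firmware(ad);

    /* Re-initialize netxen_config_0 / netxen_config_1 so broadcast
     * (e.g. ARP) packets are accepted. 0x1 is a placeholder value. */
    ad->config_0 = 0x1;
    ad->config_1 = 0x1;
    return ad->fw_loaded ? 0 : -1;
}
```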
    • b4fea61a
    • ibmveth: Automatically enable larger rx buffer pools for larger mtu · ce6eea58
      Brian King committed
      Currently, ibmveth maintains several rx buffer pools, which can be
      modified through sysfs. The larger pools are not allocated by default,
      so jumbo frames cannot be supported without first activating them, and
      attempts to change the mtu fail. This patch makes ibmveth automatically
      allocate these larger buffer pools when the mtu is changed.
      Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
      ce6eea58
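The behavior the patch adds can be sketched like this. It is a userspace approximation under stated assumptions: the pool sizes, the per-buffer overhead constant, and the `ibmveth_change_mtu` body here are illustrative, not the driver's actual values or code; the commit only states that pools large enough for the new mtu get activated automatically.

```c
#include <assert.h>
#include <stdbool.h>

#define IBMVETH_NUM_POOLS 5
#define IBMVETH_BUFF_OH   22   /* assumed per-buffer overhead */

struct rx_pool {
    int buff_size;
    bool active;
};

/* Activate every pool up to and including the first one whose buffer
 * size can hold the new mtu plus overhead; fail if none can. */
static int ibmveth_change_mtu(struct rx_pool pools[], int new_mtu)
{
    int need = new_mtu + IBMVETH_BUFF_OH;
    for (int i = 0; i < IBMVETH_NUM_POOLS; i++) {
        if (pools[i].buff_size >= need) {
            for (int j = 0; j <= i; j++)
                pools[j].active = true;
            return 0;
        }
    }
    return -1; /* no pool large enough for this mtu */
}
```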
    • ibmveth: Fix h_free_logical_lan error on pool resize · 4aa9c93e
      Brian King committed
      When attempting to activate additional rx buffer pools on an ibmveth
      interface that was not yet up, the error below was seen. This patch
      fixes the problem by closing and re-opening the interface to activate
      the resize only if the interface is already open.
      
      (drivers/net/ibmveth.c:597 ua:30000004) ERROR: h_free_logical_lan failed with fffffffffffffffc, continuing with close
      Unable to handle kernel paging request for data at address 0x00000ff8
      Faulting instruction address: 0xd0000000002540e0
      Oops: Kernel access of bad area, sig: 11 [#1]
      SMP NR_CPUS=128 NUMA PSERIES LPAR
      Modules linked in: ip6t_REJECT xt_tcpudp ipt_REJECT xt_state iptable_mangle
      iptable_nat ip_nat iptable_filter ip6table_mangle ip_conntrack nfnetlink
      ip_tables ip6table_filter ip6_tables x_tables ipv6 apparmor aamatch_pcre
      loop dm_mod ibmveth sg ibmvscsic sd_mod scsi_mod
      NIP: D0000000002540E0 LR: D0000000002540D4 CTR: 80000000001AF404
      REGS: c00000001cd27870 TRAP: 0300   Not tainted  (2.6.16.46-0.4-ppc64)
      MSR: 8000000000009032 <EE,ME,IR,DR>  CR: 24242422  XER: 00000007
      DAR: 0000000000000FF8, DSISR: 0000000040000000
      TASK = c00000001ca7b4e0[1636] 'sh' THREAD: c00000001cd24000 CPU: 0
      GPR00: D0000000002540D4 C00000001CD27AF0 D000000000265650 C00000001C936500
      GPR04: 8000000000009032 FFFFFFFFFFFFFFFF 0000000000000007 000000000002C2EF
      GPR08: FFFFFFFFFFFFFFFF 0000000000000000 C000000000652A10 C000000000652AE0
      GPR12: 0000000000004000 C0000000004A3300 00000000100A0000 0000000000000000
      GPR16: 00000000100B8808 00000000100C0F60 0000000000000000 0000000010084878
      GPR20: 0000000000000000 00000000100C0CB0 00000000100AF498 0000000000000002
      GPR24: 00000000100BA488 C00000001C936760 D000000000258DD0 C00000001C936000
      GPR28: 0000000000000000 C00000001C936500 D000000000265180 C00000001C936000
      NIP [D0000000002540E0] .ibmveth_close+0xc8/0xf4 [ibmveth]
      LR [D0000000002540D4] .ibmveth_close+0xbc/0xf4 [ibmveth]
      Call Trace:
      [C00000001CD27AF0] [D0000000002540D4] .ibmveth_close+0xbc/0xf4 [ibmveth] (unreliable)
      [C00000001CD27B80] [D0000000002545FC] .veth_pool_store+0xd0/0x260 [ibmveth]
      [C00000001CD27C40] [C00000000012E0E8] .sysfs_write_file+0x118/0x198
      [C00000001CD27CF0] [C0000000000CDAF0] .vfs_write+0x130/0x218
      [C00000001CD27D90] [C0000000000CE52C] .sys_write+0x4c/0x8c
      [C00000001CD27E30] [C00000000000871C] syscall_exit+0x0/0x40
      Instruction dump:
      419affd8 2fa30000 419e0020 e93d0000 e89e8040 38a00255 e87e81b0 80c90018
      48001531 e8410028 e93d00e0 7fa3eb78 <e8090ff8> f81d0430 4bfffdc9 38210090
      Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
      Signed-off-by: Jeff Garzik <jeff@garzik.org>
      4aa9c93e
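The fix can be sketched as follows. This is a userspace model with hypothetical names: `veth_pool_resize`, the `veth_dev` struct, and the call counters are illustrative; in the real driver the path is `veth_pool_store()` calling `ibmveth_close()`/`ibmveth_open()`, and the guard corresponds to checking `netif_running()`.

```c
#include <assert.h>
#include <stdbool.h>

struct veth_dev {
    bool running;     /* models netif_running(netdev) */
    int pool_size;
    int close_calls;  /* counts ibmveth_close() invocations */
    int open_calls;   /* counts ibmveth_open() invocations */
};

/* Before the fix, close/open ran unconditionally, so resizing a pool
 * on a never-opened interface called h_free_logical_lan on state that
 * did not exist, producing the error and oops above. Bounce the
 * interface only when it is actually up. */
static int veth_pool_resize(struct veth_dev *dev, int new_size)
{
    if (dev->running)
        dev->close_calls++;      /* stand-in for ibmveth_close() */
    dev->pool_size = new_size;   /* apply the resize */
    if (dev->running)
        dev->open_calls++;       /* stand-in for ibmveth_open() */
    return 0;
}
```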
  2. 09 Jun 2007 (35 commits)