1. 20 Nov, 2013 1 commit
  2. 16 Nov, 2013 1 commit
  3. 15 Nov, 2013 8 commits
    • alx: Reset phy speed after resume · b54629e2
      Committed by hahnjo
      This fixes bug 62491 (https://bugzilla.kernel.org/show_bug.cgi?id=62491).
      After resume, some users got the following error flooding the kernel log:
      alx 0000:02:00.0: invalid PHY speed/duplex: 0xffff
      Signed-off-by: Jonas Hahnfeld <linux@hahnjo.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b54629e2
    • net:fec: fix WARNING caused by lack of calls to dma_mapping_error() · d842a31f
      Committed by Duan Fugang-B38611
      The driver fails to check the result of DMA mapping, which triggers the
      following warning (with CONFIG_DMA_API_DEBUG enabled in the kernel config):
      
      ------------[ cut here ]------------
      WARNING: at lib/dma-debug.c:937 check_unmap+0x43c/0x7d8()
      fec 2188000.ethernet: DMA-API: device driver failed to check map
      error[device address=0x00000000383a8040] [size=2048 bytes] [mapped as single]
      
      Modules linked in:
      CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.17-16827-g9cdb0ba-dirty #188
      [<80013c4c>] (unwind_backtrace+0x0/0xf8) from [<80011704>] (show_stack+0x10/0x14)
      [<80011704>] (show_stack+0x10/0x14) from [<80025614>] (warn_slowpath_common+0x4c/0x6c)
      [<80025614>] (warn_slowpath_common+0x4c/0x6c) from [<800256c8>] (warn_slowpath_fmt+0x30/0x40)
      [<800256c8>] (warn_slowpath_fmt+0x30/0x40) from [<8026bfdc>] (check_unmap+0x43c/0x7d8)
      [<8026bfdc>] (check_unmap+0x43c/0x7d8) from [<8026c584>] (debug_dma_unmap_page+0x6c/0x78)
      [<8026c584>] (debug_dma_unmap_page+0x6c/0x78) from [<8038049c>] (fec_enet_rx_napi+0x254/0x8a8)
      [<8038049c>] (fec_enet_rx_napi+0x254/0x8a8) from [<804dc8c0>] (net_rx_action+0x94/0x160)
      [<804dc8c0>] (net_rx_action+0x94/0x160) from [<8002c758>] (__do_softirq+0xe8/0x1d0)
      [<8002c758>] (__do_softirq+0xe8/0x1d0) from [<8002c8e8>] (do_softirq+0x4c/0x58)
      [<8002c8e8>] (do_softirq+0x4c/0x58) from [<8002cb50>] (irq_exit+0x90/0xc8)
      [<8002cb50>] (irq_exit+0x90/0xc8) from [<8000ea88>] (handle_IRQ+0x3c/0x94)
      [<8000ea88>] (handle_IRQ+0x3c/0x94) from [<8000855c>] (gic_handle_irq+0x28/0x5c)
      [<8000855c>] (gic_handle_irq+0x28/0x5c) from [<8000de00>] (__irq_svc+0x40/0x50)
      Exception stack(0x815a5f38 to 0x815a5f80)
      5f20:                                                       815a5f80 3b9aca00
      5f40: 0fe52383 00000002 0dd8950e 00000002 81e7b080 00000000 00000000 815ac4d8
      5f60: 806032ec 00000000 00000017 815a5f80 80059028 8041fc4c 60000013 ffffffff
      [<8000de00>] (__irq_svc+0x40/0x50) from [<8041fc4c>] (cpuidle_enter_state+0x50/0xf0)
      [<8041fc4c>] (cpuidle_enter_state+0x50/0xf0) from [<8041fd94>] (cpuidle_idle_call+0xa8/0x14c)
      [<8041fd94>] (cpuidle_idle_call+0xa8/0x14c) from [<8000edac>] (arch_cpu_idle+0x10/0x4c)
      [<8000edac>] (arch_cpu_idle+0x10/0x4c) from [<800582f8>] (cpu_startup_entry+0x60/0x130)
      [<800582f8>] (cpu_startup_entry+0x60/0x130) from [<80bc7a48>] (start_kernel+0x2d0/0x328)
      [<80bc7a48>] (start_kernel+0x2d0/0x328) from [<10008074>] (0x10008074)
      ---[ end trace c6edec32436e0042 ]---
      
      dma-debug added new interfaces for debugging DMA mapping errors; please refer
      to: http://lwn.net/Articles/516640/
      
      After a DMA mapping, the driver must call dma_mapping_error() to check for a
      mapping failure; otherwise map_err_type remains MAP_ERR_NOT_CHECKED, and
      check_unmap() treats the mapping as unchecked and dumps the error message
      above. So, add dma_mapping_error() checks to fix the WARNING (a minimal
      sketch of the pattern follows this entry).
      
      Also, since the RX DMA buffers are reused and the driver copies each received
      frame into an skb, fec_enet_rx() should not map and unmap them for every
      packet; use dma_sync_single_for_cpu()/dma_sync_single_for_device() instead of
      dma_map_single()/dma_unmap_single().
      
      There is another potential issue: fec_enet_rx() passes the DMA address to
      __va(). Physical and DMA addresses are *not* the same thing. They may differ
      if the device is behind an IOMMU, if bounce buffering was required, or simply
      because there is a fixed offset between the device and host physical
      addresses. This patch fixes that as well.
      
      =============================================
      V2: add net_ratelimit() to limit the map-error message.
          use dma_sync_single_for_cpu() instead of dma_map_single().
          fix passing DMA addresses to __va() to obtain virtual addresses.
      V1: initial send
      =============================================
      Signed-off-by: Fugang Duan <B38611@freescale.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d842a31f
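      The sketch below is not the fec driver's actual code; it only illustrates, in
      a self-contained form, the two patterns this commit message describes: check
      dma_map_single() with dma_mapping_error() (with a rate-limited error print),
      and sync rather than remap a reused RX buffer around the copy into the skb.
      EXAMPLE_RX_BUF_SIZE and both function names are hypothetical.

      #include <linux/dma-mapping.h>
      #include <linux/net.h>
      #include <linux/netdevice.h>
      #include <linux/skbuff.h>

      #define EXAMPLE_RX_BUF_SIZE 2048   /* hypothetical buffer size */

      /* Map an RX buffer once, and actually check the mapping result. */
      static int example_map_rx_buffer(struct device *dev, void *cpu_addr,
                                       dma_addr_t *handle)
      {
              *handle = dma_map_single(dev, cpu_addr, EXAMPLE_RX_BUF_SIZE,
                                       DMA_FROM_DEVICE);
              if (dma_mapping_error(dev, *handle)) {
                      if (net_ratelimit())
                              dev_err(dev, "RX DMA mapping failed\n");
                      return -ENOMEM;
              }
              return 0;       /* map_err_type is now MAP_ERR_CHECKED */
      }

      /* Per packet: sync to the CPU, copy from the CPU address (not __va(dma)),
       * then hand the buffer back to the device. */
      static void example_handle_rx(struct device *dev, dma_addr_t handle,
                                    void *cpu_addr, struct sk_buff *skb, int len)
      {
              dma_sync_single_for_cpu(dev, handle, EXAMPLE_RX_BUF_SIZE,
                                      DMA_FROM_DEVICE);
              skb_copy_to_linear_data(skb, cpu_addr, len);
              dma_sync_single_for_device(dev, handle, EXAMPLE_RX_BUF_SIZE,
                                         DMA_FROM_DEVICE);
      }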
    • ixp4xx_eth: Validate hwtstamp_config completely before applying it · a5be8cd3
      Committed by Ben Hutchings
      hwtstamp_ioctl() should validate all fields of hwtstamp_config
      before making any changes.  Currently it sets the TX configuration
      before validating the rx_filter field.
      
      Untested as I don't have a cross-compiler to hand.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a5be8cd3
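      As a rough, self-contained sketch of the validate-then-apply pattern that this
      commit (and the ti_cpsw, stmmac, pch_gbe, e1000e and tg3 fixes below) moves
      towards -- not any driver's actual code -- the handler below rejects every
      unsupported field before touching driver or hardware state. The two filter
      cases shown and example_apply_hwtstamp() are assumptions for illustration.

      #include <linux/errno.h>
      #include <linux/net_tstamp.h>
      #include <linux/netdevice.h>
      #include <linux/uaccess.h>

      static void example_apply_hwtstamp(struct net_device *dev,
                                         const struct hwtstamp_config *cfg)
      {
              /* Placeholder for the driver/hardware programming step. */
      }

      static int example_hwtstamp_set(struct net_device *dev, struct ifreq *ifr)
      {
              struct hwtstamp_config cfg;

              if (copy_from_user(&cfg, ifr->ifr_data, sizeof(cfg)))
                      return -EFAULT;

              if (cfg.flags)                  /* no flags defined yet */
                      return -EINVAL;

              switch (cfg.tx_type) {          /* validate everything first... */
              case HWTSTAMP_TX_OFF:
              case HWTSTAMP_TX_ON:
                      break;
              default:
                      return -ERANGE;
              }

              switch (cfg.rx_filter) {
              case HWTSTAMP_FILTER_NONE:
              case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
                      break;
              default:
                      return -ERANGE;
              }

              example_apply_hwtstamp(dev, &cfg);      /* ...then apply */

              return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? -EFAULT : 0;
      }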
    • ti_cpsw: Validate hwtstamp_config completely before applying it · 2ee91e54
      Committed by Ben Hutchings
      cpsw_hwtstamp_ioctl() should validate all fields of hwtstamp_config,
      and the hardware version, before making any changes.  Currently it
      sets the TX configuration before validating the rx_filter field
      or that the hardware supports timestamping.
      
      Also correct the error code for hardware versions that don't
      support timestamping.  ENOTSUPP is used by the NFS implementation
      and is not part of userland API; we want EOPNOTSUPP (which glibc
      also calls ENOTSUP, with one 'P').
      
      Untested as I don't have a cross-compiler to hand.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Acked-by: Mugunthan V N <mugunthanvnm@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2ee91e54
    • stmmac: Validate hwtstamp_config completely before applying it · 5f3da328
      Committed by Ben Hutchings
      stmmac_hwtstamp_ioctl() should validate all fields of hwtstamp_config
      before making any changes.  Currently it sets the TX configuration
      before validating the rx_filter field.
      
      Compile-tested only.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5f3da328
    • pch_gbe: Validate hwtstamp_config completely before applying it · 810abe9b
      Committed by Ben Hutchings
      hwtstamp_ioctl() should validate all fields of hwtstamp_config
      before making any changes.  Currently it sets the TX configuration
      before validating the rx_filter field.
      
      Compile-tested only.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      810abe9b
    • e1000e: Validate hwtstamp_config completely before applying it · 62d7e3a2
      Committed by Ben Hutchings
      e1000e_hwtstamp_ioctl() should validate all fields of hwtstamp_config
      before making any changes.  Currently it copies the configuration to
      the e1000_adapter structure before validating it at all.
      
      Change e1000e_config_hwtstamp() to take a pointer to the
      hwtstamp_config and to copy the config after validating it.
      
      Compile-tested only.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      62d7e3a2
    • tg3: Validate hwtstamp_config completely before applying it · 58b187c6
      Committed by Ben Hutchings
      tg3_hwtstamp_ioctl() should validate all fields of hwtstamp_config
      before making any changes.  Currently it sets the TX configuration
      before validating the rx_filter field.
      
      Compile-tested only.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Acked-by: Nithin Nayak Sujir <nsujir@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      58b187c6
  4. 14 Nov, 2013 1 commit
  5. 13 Nov, 2013 1 commit
  6. 12 Nov, 2013 3 commits
  7. 11 Nov, 2013 2 commits
  8. 09 Nov, 2013 3 commits
  9. 08 Nov, 2013 10 commits
  10. 07 Nov, 2013 4 commits
  11. 06 Nov, 2013 1 commit
  12. 05 Nov, 2013 5 commits
    • net/mlx4_core: Implement resource quota enforcement · 146f3ef4
      Committed by Jack Morgenstein
      Implements the resource-quota grant decision made when resources are
      requested, for the following resource types: QPs, CQs, SRQs, MPTs, MTTs,
      VLANs, MACs, and Counters.
      
      When granting a resource, the quota system increases the allocated-count
      for that slave.
      
      When the slave later frees the resource, its allocated-count is reduced.
      
      A spinlock is used to protect the integrity of each resource's free-pool counter.
      (One slave may be in the process of being granted a resource while another
      slave has crashed, initiating cleanup of that slave's resource quotas).
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      146f3ef4
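      A minimal sketch of the grant/free bookkeeping this commit describes, with
      illustrative names and error codes (it is not the mlx4 resource-tracker
      code): each slave first draws from its guaranteed pool, then from a shared
      free pool whose counter is protected by a spinlock; frees reverse the
      accounting.

      #include <linux/errno.h>
      #include <linux/spinlock.h>

      struct example_quota {
              spinlock_t lock;        /* protects free_pool */
              int free_pool;          /* shared first-come-first-serve pool */
              int guaranteed;         /* per-slave guaranteed minimum */
              int quota;              /* per-slave maximum (guaranteed + free pool) */
              int allocated[64];      /* per-slave allocated count */
      };

      static int example_grant(struct example_quota *q, int slave)
      {
              int err = 0;

              spin_lock(&q->lock);
              if (q->allocated[slave] >= q->quota) {
                      err = -EDQUOT;                  /* over this slave's quota */
              } else if (q->allocated[slave] >= q->guaranteed) {
                      if (q->free_pool > 0)
                              q->free_pool--;         /* draw from the shared pool */
                      else
                              err = -ENOMEM;          /* free pool exhausted */
              }
              if (!err)
                      q->allocated[slave]++;
              spin_unlock(&q->lock);
              return err;
      }

      static void example_release(struct example_quota *q, int slave)
      {
              spin_lock(&q->lock);
              if (q->allocated[slave]-- > q->guaranteed)
                      q->free_pool++;                 /* return to the shared pool */
              spin_unlock(&q->lock);
      }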
    • net/mlx4_core: Fix quota handling in the QUERY_FUNC_CAP wrapper · eb456a68
      Committed by Jack Morgenstein
      In current kernels, the mlx4 driver running on a VM does not
      differentiate between max resource numbers for the HCA and
      max quotas -- it simply takes the quota values passed to it
      as max-resource values.
      
      However, the driver actually requires the VFs to be aware of
      the actual number of resources that the HCA was initialized with,
      for QPs, CQs, SRQs and MPTs.
      
      For QPs, CQs and SRQs, the reason is that in completion handling
      the driver must know which of the 24 bits are the actual resource
      number, and which are "padding" bits.
      
      For MPTs, also, the driver assumes knowledge of the number of MPTs
      in the system.
      
      The previous commit fixes the quota logic on the VM for the quota values
      passed to it by QUERY_FUNC_CAPS.
      
      For QPs, CQs, SRQs, and MPTs, it takes the max resource numbers
      from QUERY_HCA (and not QUERY_FUNC_CAPS).  The quotas passed
      in QUERY_FUNC_CAPS are used to report max resource number values
      in the response to ib_query_device.
      
      However, the Hypervisor driver must consider that VMs
      may be running previous kernels, and compatibility must be preserved.
      
      To resolve the incompatibility with previous kernels running on VMs,
      we deprecated the quota fields in mlx4_QUERY_FUNC_CAP.  In the
      deprecated fields, we pass the max-resource values from INIT_HCA.
      
      The quota fields are moved to a new location, and the current kernel
      driver takes the proper values from that location. There is
      also a new flag in dword 0, bit 28 of the mlx4_QUERY_FUNC_CAP mailbox;
      if this flag is set, the (VM) driver takes the quota values from the
      new location.
      
      VMs running previous kernels will work properly, except that the max resource
      numbers reported in ib_query_device for these resources will be
      too high.  The Hypervisor driver will, however, enforce the quotas
      for these VMs.
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      eb456a68
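      As an illustration only (the field offsets and macro names below are
      assumptions, not the real mlx4 mailbox layout), the compatibility check
      described above could look like this: bit 28 of dword 0 tells a new VM
      driver whether the quota values live in the new location, or whether the
      old fields carry the max-resource values from INIT_HCA.

      #include <asm/byteorder.h>
      #include <linux/types.h>

      #define EX_FUNC_CAP_FLAG_QUOTAS   (1U << 28)  /* assumed flag name */
      #define EX_OLD_QP_QUOTA_DWORD     4           /* assumed offsets */
      #define EX_NEW_QP_QUOTA_DWORD     16

      static u32 example_read_qp_quota(const __be32 *mailbox)
      {
              u32 flags = be32_to_cpu(mailbox[0]);          /* dword 0 */

              if (flags & EX_FUNC_CAP_FLAG_QUOTAS)
                      return be32_to_cpu(mailbox[EX_NEW_QP_QUOTA_DWORD]);

              /* Old location: on new firmware this holds the max-resource
               * value from INIT_HCA, not a quota. */
              return be32_to_cpu(mailbox[EX_OLD_QP_QUOTA_DWORD]);
      }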
    • mlx4: Structures and init/teardown for VF resource quotas · 5a0d0a61
      Committed by Jack Morgenstein
      This is step #1 for implementing SRIOV resource quotas for VFs.
      
      Quotas are implemented per resource type for VFs and the PF, to prevent
      any entity from simply grabbing all the resources for itself and leaving
      the other entities unable to obtain such resources.
      
      Resources which are allocated using quotas:  QPs, CQs, SRQs, MPTs, MTTs, MAC,
                                                   VLAN, and Counters.
      
      The quota system works as follows:
      Each entity (VF or PF) is given a max number of a given resource (its quota),
      and a guaranteed minimum number for each resource (starvation prevention).
      
      For QPs, CQs, SRQs, MPTs and MTTs:
      50% of the available quantity for the resource is divided equally among
      the PF and all the active VFs (i.e., the number of VFs in the mlx4_core module
      parameter "num_vfs"). This 50% represents the "guaranteed minimum" pool.
      The other 50% is the "free pool", allocated on a first-come-first-serve basis.
      For each VF/PF, resources are first allocated from its "guaranteed-minimum"
      pool. When that pool is exhausted, the driver attempts to allocate from
      the resource "free-pool".
      
      The quota (i.e., max) for the VFs and the PF is:
        The free-pool amount (50% of the real max) + the guaranteed minimum
      
      For MACs:
        Guarantee 2 MACs per VF/PF per port. As a result, since we have only
        128 MACs per port, reduce the allowable number of VFs from 64 to 63.
        Any remaining MACs are put into a free pool.
      
      For VLANs:
        For the PF, the per-port quota is 128 and guarantee is 64
           (to allow the PF to register at least a VLAN per VF in VST mode).
        For the VFs, the per-port quota is 64 and the guarantee is 0.
            We assume that VGT VFs are trusted not to abuse the VLAN resource.
      
      For Counters:
        For all functions (PF and VFs), the quota is 128 and the guarantee is 0.
      
      In this patch, we define the needed structures, which are added to the
      resource-tracker struct.  In addition, we do initialization
      for the resource quota, and adjust the query_device response to use quotas
      rather than resource maxima.
      
      As part of the implementation, we introduce a new field in
      mlx4_dev: quotas.  This field holds the resource quotas used
      to report maxima to the upper layers (ib_core, via query_device).
      
      The HCA maxima of these values are passed to the VFs (via
      QUERY_HCA) so that they may continue to use these in handling
      QPs, CQs, SRQs and MPTs.
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5a0d0a61
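      A worked example of the split described above, using assumed numbers (say
      512K QPs and num_vfs = 7, i.e. 8 entities including the PF): the guaranteed
      pool is 50% of 512K = 256K, giving 32K guaranteed QPs per entity; the other
      256K form the free pool; and each entity's quota is 256K + 32K = 288K.  The
      same arithmetic as a sketch:

      /* Illustration only; the real driver derives these from INIT_HCA/QUERY_HCA. */
      static void example_qp_quota_split(int max_qps, int num_vfs)
      {
              int entities   = num_vfs + 1;                /* all VFs plus the PF */
              int guaranteed = (max_qps / 2) / entities;   /* per-entity guaranteed minimum */
              int free_pool  = max_qps / 2;                /* first-come-first-serve pool */
              int quota      = free_pool + guaranteed;     /* per-entity maximum */

              (void)quota;    /* e.g. 512K QPs, 7 VFs: guaranteed 32K, quota 288K */
      }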
    • net/mlx4_core: Fix checking order in MR table init · a30f1bc5
      Committed by Jack Morgenstein
      In procedure mlx4_init_mr_table(), slaves should do no processing,
      but should return success. This initialization is hypervisor-only.
      
      However, the check for num_mpts being a power-of-2 was performed
      before the check to return immediately if the driver is for a slave.
      This resulted in spurious failures.
      
      The order of performing the checks is reversed, so that if the
      driver is for a slave, no processing is done and success is returned.
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a30f1bc5
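      A minimal sketch of the reordered checks (the function body and names are
      illustrative, not the actual mlx4_init_mr_table()): the slave early-return
      now comes before the power-of-two sanity check, so slaves no longer fail
      spuriously.

      #include <linux/errno.h>
      #include <linux/log2.h>
      #include <linux/mlx4/device.h>

      static int example_init_mr_table(struct mlx4_dev *dev, int num_mpts)
      {
              if (mlx4_is_slave(dev))         /* hypervisor-only initialization */
                      return 0;

              if (!is_power_of_2(num_mpts))   /* sanity check, hypervisor side only */
                      return -EINVAL;

              /* ... hypervisor-side MR table setup would follow here ... */
              return 0;
      }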
    • net/mlx4_core: Don't fail reg/unreg vlan for older guests · 2c957ff2
      Committed by Jack Morgenstein
      In upstream kernels under SRIOV, the vlan register/unregister calls
      were NOPs (doing nothing and returning OK). We detect these old
      calls from guests (via the comm channel), since previously the
      port number in mlx4_register_vlan was passed (improperly) in the
      out_param. This has been corrected so that the port number is now
      passed in bits 8..15 of the in_modifier field.
      
      For old calls, these bits will be zero, so if the passed port
      number is zero, we can still look at the out_param field to see
      if it contains a valid port number. If yes, the VM is running
      an old driver.
      
      Since for old drivers, the register/unregister_vlan wrappers were
      NOPs, we continue this policy -- the reason being that upstream
      had an additional bug in eth driver running on guests (where
      procedure mlx4_en_vlan_rx_kill_vid() had the following code:
      
      if (!mlx4_find_cached_vlan(mdev->dev, priv->port, vid, &idx))
              mlx4_unregister_vlan(mdev->dev, priv->port, idx);
      else
              en_err(priv, "could not find vid %d in cache\n", vid);
      
      On a VM, mlx4_find_cached_vlan() will always fail, since the
      vlan cache is located on the Hypervisor; on guests it is empty.
      
      Therefore, if we allow upstream guests to register vlans, we will
      have vlan leakage since the unregister will never be performed.
      Leaving vlan reg/unreg for old guest drivers as a NOP is not a
      feature regression, since in upstream the register/unregister
      vlan wrapper is a NOP.
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2c957ff2
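      A sketch of the detection described above, with assumed names and an assumed
      two-port device (this is not the actual mlx4 command wrapper code): new
      guests put the port in bits 8..15 of in_modifier; if those bits are zero but
      out_param holds a plausible port number, the caller is an old guest and the
      vlan reg/unreg wrapper stays a NOP for it.

      #include <linux/types.h>

      static int example_vlan_wrapper_port(u32 in_modifier, u64 out_param,
                                           bool *old_guest)
      {
              int port = (in_modifier >> 8) & 0xff;

              *old_guest = false;
              if (!port && out_param >= 1 && out_param <= 2) {
                      port = out_param;       /* old driver passed the port here */
                      *old_guest = true;      /* keep reg/unreg a NOP for it */
              }
              return port;
      }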