1. 23 Sep 2014 (34 commits)
  2. 20 Sep 2014 (6 commits)
    •
      udp_tunnel: Only build ip6_udp_tunnel.c when IPV6 is selected · 6d967f87
      Andy Zhou authored
      Functions supplied in ip6_udp_tunnel.c are only needed when IPV6 is
      selected. When IPV6 is not selected, those functions are stubbed out
      in udp_tunnel.h.
      
      ==================================================================
       net/ipv6/ip6_udp_tunnel.c:15:5: error: redefinition of 'udp_sock_create6'
           int udp_sock_create6(struct net *net, struct udp_port_cfg *cfg,
       In file included from net/ipv6/ip6_udp_tunnel.c:9:0:
            include/net/udp_tunnel.h:36:19: note: previous definition of 'udp_sock_create6' was here
             static inline int udp_sock_create6(struct net *net, struct udp_port_cfg *cfg,
      ==================================================================
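
      As described above, the duplicate definition comes from the inline
      stub in udp_tunnel.h.  A minimal sketch of that stub pattern follows;
      the exact header contents and the stub's return value are assumptions,
      since only the first lines appear in the build log:

      #if IS_ENABLED(CONFIG_IPV6)
      int udp_sock_create6(struct net *net, struct udp_port_cfg *cfg,
                           struct socket **sockp);
      #else
      /* Stub used when IPV6 is not selected; ip6_udp_tunnel.c must then
       * not be built, or its real definition collides with this one. */
      static inline int udp_sock_create6(struct net *net, struct udp_port_cfg *cfg,
                                         struct socket **sockp)
      {
              return 0;       /* illustrative; IPv6 UDP tunnels unavailable */
      }
      #endif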
      
      Fixes: fd384412 ("udp_tunnel: Seperate ipv6 functions into its own file")
      Reported-by: kbuild test robot <fengguang.wu@intel.com>
      Signed-off-by: Andy Zhou <azhou@nicira.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6d967f87
    •
      Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next · 6c62f606
      David S. Miller authored
      Jeff Kirsher says:
      
      ====================
      Intel Wired LAN Driver Updates 2014-09-18
      
      This series contains updates to ixgbe and ixgbevf.
      
      Ethan Zhao cleans up ixgbe and ixgbevf by removing bd_number from the
      adapter struct because it is no longer useful.
      
      Mark fixes an ixgbe issue where, if a hardware transmit timestamp is
      requested, an uninitialized workqueue entry may be scheduled.  A check
      for a PTP clock was added to avoid that.
      
      Jacob provides a number of cleanups for ixgbe.  Since we may call
      ixgbe_acquire_msix_vectors() prior to registering our netdevice, we
      should not use the netdevice-specific printk, but use e_dev_warn()
      instead.  Similar to how ixgbevf handles acquiring MSI-X vectors, we
      can return an error code instead of relying on a flag being set;
      this makes it clearer that we have failed to set up MSI-X mode and
      will make it easier to consolidate the MSI-X related code into a
      single function (a rough sketch of the pattern follows this message).
      Disabling DCB is not an error, since we can still function; we just
      have to let the user know, so use e_dev_warn() instead of e_err().
      Warnings were added for other features that are disabled when we are
      without MSI-X support, and flags that are no longer used or needed
      were cleaned up.
      ====================
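
      The "return an error code instead of a flag" pattern referred to above
      might look roughly like this.  This is a simplified, hypothetical
      sketch (example_acquire_msix_vectors is a made-up name), assuming the
      usual ixgbe adapter fields; e_dev_warn() and pci_enable_msix_range()
      are the existing driver/kernel helpers:

      static int example_acquire_msix_vectors(struct ixgbe_adapter *adapter,
                                              int vectors)
      {
              /* The netdevice may not be registered yet, so warn via the
               * device rather than the netdev-specific printk helpers. */
              int err = pci_enable_msix_range(adapter->pdev,
                                              adapter->msix_entries,
                                              vectors, vectors);
              if (err < 0) {
                      e_dev_warn("Failed to allocate MSI-X interrupts, falling back\n");
                      return err;     /* caller decides how to degrade */
              }
              return 0;
      }
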
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6c62f606
    •
      Merge branch 'mlx4-next' · 58310b3f
      David S. Miller authored
      Or Gerlitz says:
      
      ====================
      mlx4: CQE/EQE stride support
      
      This series from Ido Shamay is intended for archs with cache lines
      larger than 64 bytes.
      
      Since our CQE/EQEs are generally 64B in those systems, HW will write
      twice to the same cache line consecutively, causing pipe locks due to
      the hazard prevention mechanism. For elements in a cyclic buffer, writes
      are consecutive, so entries smaller than a cache line should be
      avoided, especially if they are written at a high rate.
      
      Reduce consecutive writes to the same cache line in CQs/EQs, by allowing the
      driver to increase the distance between entries so that each will reside
      in a different cache line.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      58310b3f
    •
      net/mlx4_en: Add mlx4_en_get_cqe helper · b1b6b4da
      Ido Shamay authored
      This function derives the base address of the CQE from the CQE size,
      and calculates the real CQE context segment in it from the factor
      (as before). Before this change, the code used the factor to
      calculate the base address of the CQE as well.
      
      The factor indicates in which segment of the CQE stride the CQE information
      is located. For 32-byte strides, the segment is 0, and for 64-byte strides,
      the segment is 1 (bytes 32..63). Using the factor was fine as long as we had
      only 32- and 64-byte strides. However, with larger strides, the factor is zero,
      and so cannot be used to calculate the base of the CQE.
      
      The helper pulls the CQE from the buffer in the same way as all other
      components that read the CQE buffer (the mlx4_ib driver and libmlx4).
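
      A minimal sketch of such a helper, modelled on the description above
      (the exact upstream definition may differ):

      static inline struct mlx4_cqe *mlx4_en_get_cqe(void *buf, int idx,
                                                     int cqe_sz)
      {
              /* The base address depends only on the stride size. */
              return buf + idx * cqe_sz;
      }

      Callers would then apply the factor only to pick the context segment
      inside the stride, e.g. (illustrative):

              cqe = mlx4_en_get_cqe(cq->buf, index, priv->cqe_size) + factor;

      With a 64B stride, factor = 1 skips the first 32B (one struct
      mlx4_cqe); with 32B, 128B or 256B strides, factor = 0 and the context
      sits at the start of the entry.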
      Signed-off-by: Ido Shamay <idos@mellanox.com>
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b1b6b4da
    •
      net/mlx4_core: Cache line EQE size support · 43c816c6
      Ido Shamay authored
      Enable the mlx4 interrupt handler to work with the EQE stride feature.
      The feature may be enabled when the cache line is larger than 64B.
      The EQE size will then be the cache line size, and the context
      segment resides in the [0-31] byte range.
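
      An illustrative sketch of locating the EQE context under these rules
      (hypothetical helper name and simplified arguments, not the actual
      mlx4_core code):

      static void *example_get_eqe(void *eq_buf, u32 index, u32 nent,
                                   u32 eqe_size)
      {
              /* Cyclic buffer; nent is a power of two. */
              u32 off = (index & (nent - 1)) * eqe_size;

              /* Classic 64B EQEs keep the context in bytes 32..63; strided
               * (cache-line sized) EQEs keep it in bytes 0..31. */
              return eq_buf + off + (eqe_size == 64 ? 32 : 0);
      }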
      Signed-off-by: Ido Shamay <idos@mellanox.com>
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      43c816c6
    •
      net/mlx4_core: Enable CQE/EQE stride support · 77507aa2
      Ido Shamay authored
      This feature is intended for archs with cache lines larger than 64B.
      
      Since our CQE/EQEs are generally 64B in those systems, HW will write
      twice to the same cache line consecutively, causing pipe locks due to
      the hazard prevention mechanism. For elements in a cyclic buffer, writes
      are consecutive, so entries smaller than a cache line should be
      avoided, especially if they are written at a high rate.
      
      Reduce consecutive writes to the same cache line in CQs/EQs, by allowing the
      driver to increase the distance between entries so that each will reside
      in a different cache line. Until the introduction of this feature, there
      were two types of CQE/EQE:
      
      1. 32B stride and context in the [0-31] segment
      2. 64B stride and context in the [32-63] segment
      
      This feature introduces two additional types:
      
      3. 128B stride and context in the [0-31] segment (128B cache line)
      4. 256B stride and context in the [0-31] segment (256B cache line)
      
      Modify the mlx4_core driver to query the device for the CQE/EQE cache
      line stride capability and to enable that capability when the host
      cache line size is larger than 64 bytes (supported cache lines are
      128B and 256B).
      
      The mlx4 IB driver and libmlx4 need not be aware of this change. The PF
      context behaviour is changed to require this change in VF drivers
      running on such archs.
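
      A minimal sketch of the capability decision described above, assuming
      a hypothetical helper name (the real mlx4_core code queries its own
      dev_cap flags and HCA parameters); L1_CACHE_BYTES comes from
      <linux/cache.h>:

      static int example_pick_cqe_eqe_stride(bool dev_supports_stride)
      {
              /* Enable the stride only when the device reports support and
               * the host cache line is one of the supported sizes. */
              if (dev_supports_stride &&
                  (L1_CACHE_BYTES == 128 || L1_CACHE_BYTES == 256))
                      return L1_CACHE_BYTES;  /* context stays in bytes 0..31 */

              return 64;      /* classic 64B (or 32B) CQE/EQE layout */
      }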
      Signed-off-by: Ido Shamay <idos@mellanox.com>
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      77507aa2