1. 15 Jun 2014, 1 commit
  2. 03 Jun 2014, 1 commit
  3. 24 May 2014, 1 commit
    • net-next:v4: Add support to configure SR-IOV VF minimum and maximum Tx rate through ip tool. · ed616689
      Authored by Sucheta Chakraborty
      o min_tx_rate puts a lower limit on the VF bandwidth. The VF is
        guaranteed to have a bandwidth of at least this value.
        max_tx_rate puts a cap on the VF bandwidth. The VF can have a
        bandwidth of up to this value.
      
      o A new handler, set_vf_rate, for the attr IFLA_VF_RATE has been
        introduced, which takes 4 arguments:
        netdev, VF number, min_tx_rate, max_tx_rate
        (a driver-side sketch of the new hook follows this entry).
      
      o ndo_set_vf_rate replaces the ndo_set_vf_tx_rate handler.
      
      o Drivers that currently implement ndo_set_vf_tx_rate should now
        implement ndo_set_vf_rate instead, and should reject an attempt to
        set a minimum bandwidth greater than 0 via IFLA_VF_TX_RATE when
        IFLA_VF_RATE is not yet implemented by the driver.
      
      o If the user enters only one of min_tx_rate or max_tx_rate, then
        userland should read back the other value from the driver and set
        both for IFLA_VF_RATE.
        Drivers that have not yet implemented IFLA_VF_RATE should always
        return min_tx_rate as 0 when read from the ip tool.
      
      o If both IFLA_VF_TX_RATE and IFLA_VF_RATE options are specified, then
        IFLA_VF_RATE should override.
      
      o The idea is to present rate values to the user consistently.
      
      o Usage example: -
      
        ./ip link set p4p1 vf 0 rate 900
      
        ./ip link show p4p1
        32: p4p1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode
        DEFAULT qlen 1000
          link/ether 00:0e:1e:08:b0:f0 brd ff:ff:ff:ff:ff:ff
          vf 0 MAC 3e:a0:ca:bd:ae:5a, tx rate 900 (Mbps), max_tx_rate 900Mbps
          vf 1 MAC f6:c6:7c:3f:3d:6c
          vf 2 MAC 56:32:43:98:d7:71
          vf 3 MAC d6:be:c3:b5:85:ff
          vf 4 MAC ee:a9:9a:1e:19:14
          vf 5 MAC 4a:d0:4c:07:52:18
          vf 6 MAC 3a:76:44:93:62:f9
          vf 7 MAC 82:e9:e7:e3:15:1a
      
        ./ip link set p4p1 vf 0 max_tx_rate 300 min_tx_rate 200
      
        ./ip link show p4p1
        32: p4p1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode
        DEFAULT qlen 1000
          link/ether 00:0e:1e:08:b0:f0 brd ff:ff:ff:ff:ff:ff
          vf 0 MAC 3e:a0:ca:bd:ae:5a, tx rate 300 (Mbps), max_tx_rate 300Mbps,
          min_tx_rate 200Mbps
          vf 1 MAC f6:c6:7c:3f:3d:6c
          vf 2 MAC 56:32:43:98:d7:71
          vf 3 MAC d6:be:c3:b5:85:ff
          vf 4 MAC ee:a9:9a:1e:19:14
          vf 5 MAC 4a:d0:4c:07:52:18
          vf 6 MAC 3a:76:44:93:62:f9
          vf 7 MAC 82:e9:e7:e3:15:1a
      
        ./ip link set p4p1 vf 0 max_tx_rate 600 rate 300
      
        ./ip link show p4p1
        32: p4p1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode
        DEFAULT qlen 1000
          link/ether 00:0e:1e:08:b0:f0 brd ff:ff:ff:ff:ff:ff
          vf 0 MAC 3e:a0:ca:bd:ae:5a, tx rate 600 (Mbps), max_tx_rate 600Mbps,
          min_tx_rate 200Mbps
          vf 1 MAC f6:c6:7c:3f:3d:6c
          vf 2 MAC 56:32:43:98:d7:71
          vf 3 MAC d6:be:c3:b5:85:ff
          vf 4 MAC ee:a9:9a:1e:19:14
          vf 5 MAC 4a:d0:4c:07:52:18
          vf 6 MAC 3a:76:44:93:62:f9
          vf 7 MAC 82:e9:e7:e3:15:1a
      Signed-off-by: Sucheta Chakraborty <sucheta.chakraborty@qlogic.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
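
      For reference, a minimal driver-side sketch of the new hook; the
      foo_* names and the firmware call are hypothetical placeholders, and
      only the .ndo_set_vf_rate signature comes from this change:

        #include <linux/errno.h>
        #include <linux/netdevice.h>

        /* Hypothetical per-VF bookkeeping; a real driver keeps this in its
         * adapter structure. */
        struct foo_vf {
                int min_tx_rate;        /* Mbps, 0 = no guarantee */
                int max_tx_rate;        /* Mbps, 0 = no cap */
        };

        struct foo_adapter {
                int num_vfs;
                struct foo_vf *vf;
        };

        /* Placeholder for the device-specific firmware/register programming. */
        int foo_fw_set_vf_bw(struct foo_adapter *adapter, int vf,
                             int min_tx_rate, int max_tx_rate);

        static int foo_ndo_set_vf_rate(struct net_device *netdev, int vf,
                                       int min_tx_rate, int max_tx_rate)
        {
                struct foo_adapter *adapter = netdev_priv(netdev);

                if (vf >= adapter->num_vfs)
                        return -EINVAL;
                if (max_tx_rate && min_tx_rate > max_tx_rate)
                        return -EINVAL;

                adapter->vf[vf].min_tx_rate = min_tx_rate;
                adapter->vf[vf].max_tx_rate = max_tx_rate;
                return foo_fw_set_vf_bw(adapter, vf, min_tx_rate, max_tx_rate);
        }

        static const struct net_device_ops foo_netdev_ops = {
                /* ... */
                .ndo_set_vf_rate = foo_ndo_set_vf_rate, /* replaces .ndo_set_vf_tx_rate */
        };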
  4. 23 May 2014, 1 commit
    • vlan: more careful checksum features handling · da08143b
      Authored by Michal Kubeček
      When combining real_dev's features and vlan_features, a simple
      bitwise AND is used. This doesn't work well for checksum offloading
      features: if one set has NETIF_F_HW_CSUM and the other
      NETIF_F_IP_CSUM and/or NETIF_F_IPV6_CSUM, we end up with no checksum
      offloading at all. However, from the logical point of view (how
      can_checksum_protocol() works), NETIF_F_HW_CSUM covers the
      functionality of NETIF_F_IP_CSUM and NETIF_F_IPV6_CSUM, so the
      result should be IP/IPV6 checksumming.
      
      Add helper function netdev_intersect_features() implementing
      this logic and use it in vlan_dev_fix_features().
      Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
      Signed-off-by: David S. Miller <davem@davemloft.net>
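
      The described logic amounts to something like the following sketch
      (the in-tree helper may use different feature-mask macros, but the
      idea is the same):

        #include <linux/netdev_features.h>

        /* If either side advertises the generic NETIF_F_HW_CSUM, treat it
         * as also covering the protocol-specific checksum bits before
         * taking the bitwise AND, so a HW_CSUM device combined with an
         * IP_CSUM/IPV6_CSUM one still yields IP/IPV6 checksum offload. */
        static netdev_features_t intersect_features_sketch(netdev_features_t f1,
                                                           netdev_features_t f2)
        {
                const netdev_features_t proto_csum = NETIF_F_IP_CSUM |
                                                     NETIF_F_IPV6_CSUM;

                if (f1 & NETIF_F_HW_CSUM)
                        f1 |= proto_csum;
                if (f2 & NETIF_F_HW_CSUM)
                        f2 |= proto_csum;

                return f1 & f2;
        }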
  5. 17 May 2014, 3 commits
    • bonding: Fix stacked device detection in arp monitoring · 44a40855
      Authored by Vlad Yasevich
      Prior to commit fbd929f2
      	bonding: support QinQ for bond arp interval
      
      the arp monitoring code allowed for proper detection of devices
      stacked on top of vlans.  Since the above commit, the code can still
      detect a device stacked on top of a single vlan, but not a device
      stacked on top of a Q-in-Q configuration.  The search will only set
      the inner vlan tag if the route device is the vlan device.  However,
      this is not always the case, as it is possible to extend the stacked
      configuration.
      
      With this patch it is possible to provision devices on top of a
      Q-in-Q vlan configuration that should be used as a source of ARP
      monitoring information.
      
      For example:
      ip link add link bond0 vlan10 type vlan proto 802.1q id 10
      ip link add link vlan10 vlan100 type vlan proto 802.1q id 100
      ip link add link vlan100 type macvlan
      
      Note:  This patch limits the number of stacked VLANs to 2, just like
      before.  The original, however, had another issue: if we had more
      than 2 levels of VLANs, we would end up generating incorrectly tagged
      traffic.  This is no longer possible.
      
      Fixes: fbd929f2 (bonding: support QinQ for bond arp interval)
      CC: Jay Vosburgh <j.vosburgh@gmail.com>
      CC: Veaceslav Falico <vfalico@redhat.com>
      CC: Andy Gospodarek <andy@greyhouse.net>
      CC: Ding Tianhong <dingtianhong@huawei.com>
      CC: Patrick McHardy <kaber@trash.net>
      Signed-off-by: Vlad Yasevich <vyasevic@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: Allow for more than a single subclass for netif_addr_lock · 25175ba5
      Authored by Vlad Yasevich
      Currently netif_addr_lock_nested assumes that there can be only
      a single nesting level between 2 devices.  However, if we
      have multiple devices of the same type stacked, this fails.
      For example:
       eth0 <-- vlan0.10 <-- vlan0.10.20
      
      A more complicated configuration may stack more than one type of
      device in a different order.
      Ex:
        eth0 <-- vlan0.10 <-- macvlan0 <-- vlan1.10.20 <-- macvlan1
      
      This patch adds an ndo_* function that allows each stackable
      device to report its nesting level.  If the device doesn't
      provide this function, a default subclass of 1 is used.
      Signed-off-by: Vlad Yasevich <vyasevic@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
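
      Assuming the new hook is the ndo_get_lock_subclass callback, the
      reworked address-list locking amounts to roughly this sketch:

        #include <linux/netdevice.h>
        #include <linux/spinlock.h>

        /* Instead of always nesting one level deep, ask the stacked device
         * how deeply it is nested and use that as the lockdep subclass. */
        static void addr_lock_nested_sketch(struct net_device *dev)
        {
                int subclass = SINGLE_DEPTH_NESTING;    /* default when no hook */

                if (dev->netdev_ops->ndo_get_lock_subclass)
                        subclass = dev->netdev_ops->ndo_get_lock_subclass(dev);

                spin_lock_nested(&dev->addr_list_lock, subclass);
        }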
    • net: Find the nesting level of a given device by type. · 4085ebe8
      Authored by Vlad Yasevich
      Multiple devices in the kernel can be stacked/nested and they
      need to know their nesting level for the purposes of lockdep.
      This patch provides a generic function that determines the nesting
      level of a particular device by its type (e.g. vlan, macvlan, etc).
      We only care about nesting of the same type of devices.
      
      For example:
        eth0 <- vlan0.10 <- macvlan0 <- vlan1.20
      
      The nesting level of vlan1.20 would be 1, since there is another vlan
      in the stack under it.
      Signed-off-by: Vlad Yasevich <vyasevic@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
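
      A sketch of the recursive walk this describes; netdev_for_each_lower_dev()
      is the existing lower-device iterator, and is_my_type() stands in for
      a per-type check such as is_vlan_dev():

        #include <linux/netdevice.h>

        /* The nesting level of a device is one more than the deepest
         * same-type device found below it (and 0 if there is none);
         * devices of other types in the stack do not add to the count. */
        static int nest_level_sketch(struct net_device *dev,
                                     bool (*is_my_type)(struct net_device *dev))
        {
                struct net_device *lower;
                struct list_head *iter;
                int max_nest = -1, nest;

                netdev_for_each_lower_dev(dev, lower, iter) {
                        nest = nest_level_sketch(lower, is_my_type);
                        if (nest > max_nest)
                                max_nest = nest;
                }

                if (is_my_type(dev))
                        max_nest++;

                return max_nest;
        }

      Applied to vlan1.20 with an is_vlan_dev() style check, this comes out
      at level 1, matching the example above.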
  6. 14 May 2014, 1 commit
  7. 08 May 2014, 1 commit
  8. 21 Apr 2014, 1 commit
  9. 18 Apr 2014, 1 commit
  10. 04 Apr 2014, 1 commit
  11. 01 Apr 2014, 4 commits
  12. 30 Mar 2014, 2 commits
  13. 29 Mar 2014, 2 commits
  14. 18 Mar 2014, 1 commit
    • netpoll: Remove dead packet receive code (CONFIG_NETPOLL_TRAP) · 9c62a68d
      Authored by Eric W. Biederman
      The netpoll packet receive code only becomes active if the netpoll
      rx_skb_hook is implemented, and there is not a single implementation
      of the netpoll rx_skb_hook in the kernel.
      
      All of the out-of-tree implementations I have found call
      netpoll_poll, which was removed from the kernel in 2011, so this
      change should not add any additional breakage.
      
      There are problems with the netpoll packet receive code.  __netpoll_rx
      does not call dev_kfree_skb_irq or dev_kfree_skb_any in hard irq
      context.  netpoll_neigh_reply leaks every skb it receives.  Reception
      of packets does not work successfully on stacked devices (aka bonding,
      team, bridge, and vlans).
      
      Given that the netpoll packet receive code is buggy, there are no
      out-of-tree users that will be merged soon, and the code has not
      been used in tree for a decade, let's just remove it.
      
      Reverting this commit can serve as a starting point for anyone
      who wants to resurrect netpoll packet reception support.
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  15. 12 Mar 2014, 1 commit
  16. 28 Feb 2014, 1 commit
    • net: move net_device priv_flags out from UAPI · 7aa98047
      Authored by Luis R. Rodriguez
      These flags are internal to the kernel and not part of the userspace
      ABI; they're unstable anyway and can be shuffled at will (see
      080e4130) so any userspace application relying on them is on crack.
      
      Test compiled with allyesconfig.
      
      mcgrof@drvbp1 /pub/mem/mcgrof/net-next (git::master)$ make allyesconfig
      mcgrof@drvbp1 /pub/mem/mcgrof/net-next (git::master)$ time make -j 20
      ...
        BUILD   arch/x86/boot/bzImage
      Setup is 16992 bytes (padded to 17408 bytes).
      System is 56153 kB
      CRC 721d2751
      Kernel: arch/x86/boot/bzImage is ready  (#1)
      real    19m35.744s
      user    280m37.984s
      sys     27m54.104s
      
      Cc: netdev@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Florian Fainelli <f.fainelli@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  17. 27 Feb 2014, 1 commit
  18. 19 Feb 2014, 1 commit
  19. 17 Feb 2014, 2 commits
  20. 15 Feb 2014, 1 commit
  21. 14 Feb 2014, 1 commit
  22. 23 Jan 2014, 1 commit
  23. 22 Jan 2014, 1 commit
    • net: Add GRO support for UDP encapsulating protocols · b582ef09
      Authored by Or Gerlitz
      Add GRO handlers for protocols that do UDP encapsulation, with the intent of
      being able to coalesce packets which encapsulate packets belonging to
      the same TCP session.
      
      For GRO purposes, the destination UDP port takes the role of the ether type
      field in the ethernet header or the next protocol in the IP header.
      
      The UDP GRO handler will only attempt to coalesce packets whose
      destination port is registered to have a GRO handler.
      
      Use a mark on the skb GRO CB data to disallow (flush) running the udp
      gro receive code twice on a packet. This solves the problem of udp
      encapsulated packets whose inner VM packet is udp and happens to
      carry a port which has registered offloads.
      Signed-off-by: Shlomo Pongratz <shlomop@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
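
      The registration side, as used by the UDP tunnel drivers converted in
      this series, looks roughly like the sketch below; my_encap_* is a
      hypothetical encapsulation driver and the callbacks are left as stubs:

        #include <linux/skbuff.h>
        #include <net/protocol.h>
        #include <net/udp.h>

        /* The UDP GRO core looks up the destination port of an incoming
         * packet and, if an offload is registered for it, calls these
         * callbacks to coalesce the encapsulated flow. */
        static struct sk_buff **my_encap_gro_receive(struct sk_buff **head,
                                                     struct sk_buff *skb)
        {
                /* parse the encapsulation header here and hand the inner
                 * packet to the next-layer GRO handler; NULL means flush */
                return NULL;
        }

        static int my_encap_gro_complete(struct sk_buff *skb, int nhoff)
        {
                /* fix up the outer headers of the merged packet */
                return 0;
        }

        static struct udp_offload my_encap_udp_offload = {
                .callbacks = {
                        .gro_receive  = my_encap_gro_receive,
                        .gro_complete = my_encap_gro_complete,
                },
        };

        static int my_encap_register(__be16 port)   /* network byte order */
        {
                my_encap_udp_offload.port = port;
                return udp_add_offload(&my_encap_udp_offload);
        }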
  24. 18 Jan 2014, 1 commit
  25. 17 Jan 2014, 2 commits
  26. 16 Jan 2014, 1 commit
    • net: rename sysfs symlinks on device name change · 5bb025fa
      Authored by Veaceslav Falico
      Currently, we don't rename the upper/lower_ifc symlinks in
      /sys/class/net/*/ , which might result in stale/duplicate links/names.
      
      Fix this by adding netdev_adjacent_rename_links(dev, oldname) which renames
      all the upper/lower interface's links to dev from the upper/lower_oldname
      to the new name.
      
      We don't need a rollback because only we control these symlinks, and
      if we fail to rename them, sysfs will complain anyway.
      Reported-by: Ding Tianhong <dingtianhong@huawei.com>
      CC: Ding Tianhong <dingtianhong@huawei.com>
      CC: "David S. Miller" <davem@davemloft.net>
      CC: Eric Dumazet <edumazet@google.com>
      CC: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      CC: Cong Wang <amwang@redhat.com>
      Signed-off-by: Veaceslav Falico <vfalico@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
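
      For illustration, a sketch of the naming scheme involved, using a
      hypothetical helper built on sysfs_rename_link(); the in-tree
      netdev_adjacent_rename_links() walks the adjacency lists and adjusts
      every such link:

        #include <linux/kernel.h>
        #include <linux/netdevice.h>
        #include <linux/sysfs.h>

        /* The adjacency symlinks are named after the peer interface
         * ("upper_<ifname>" / "lower_<ifname>"), so when a device is
         * renamed, every neighbour's link pointing at it has to move to
         * the new name. */
        static int rename_upper_link_sketch(struct kobject *neigh_kobj,
                                            struct kobject *dev_kobj,
                                            const char *oldname,
                                            const char *newname)
        {
                char old_link[IFNAMSIZ + 8], new_link[IFNAMSIZ + 8];

                snprintf(old_link, sizeof(old_link), "upper_%s", oldname);
                snprintf(new_link, sizeof(new_link), "upper_%s", newname);

                return sysfs_rename_link(neigh_kobj, dev_kobj,
                                         old_link, new_link);
        }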
  27. 11 Jan 2014, 1 commit
    • net: core: explicitly select a txq before doing l2 forwarding · f663dd9a
      Authored by Jason Wang
      Currently, the tx queue is selected implicitly in ndo_dfwd_start_xmit().
      This causes several issues:
      
      - NETIF_F_LLTX was removed for macvlan, so the txq lock was taken for
        macvlan instead of the lower device, which misses the necessary txq
        synchronization for the lower device, such as txq stopping or
        freezing required by the dev watchdog or the control path.
      - dev_hard_start_xmit() was called with a NULL txq, which bypasses the
        net device watchdog.
      - dev_hard_start_xmit() does not check txq everywhere, which will lead
        to a crash when tso is disabled for the lower device.
      
      Fix this by explicitly introducing a new param for .ndo_select_queue() for just
      selecting queues in the case of l2 forwarding offload. netdev_pick_tx() was also
      extended to accept this parameter and dev_queue_xmit_accel() was used to do l2
      forwarding transmission.
      
      With these fixes, NETIF_F_LLTX could be preserved for macvlan and
      there's no need to check txq against NULL in dev_hard_start_xmit().
      Also, there's no need to keep a dedicated ndo_dfwd_start_xmit() and we
      can just reuse the code of dev_queue_xmit() to do the transmission.
      
      In the future, this will also be required for macvtap l2 forwarding
      support, since it provides a necessary synchronization method.
      
      Cc: John Fastabend <john.r.fastabend@intel.com>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: e1000-devel@lists.sourceforge.net
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Acked-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
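
      A driver-side sketch of the extended hook; the foo_* context and the
      queue arithmetic are hypothetical, while the extra accel_priv
      argument is what this change adds to .ndo_select_queue():

        #include <linux/netdevice.h>
        #include <linux/skbuff.h>

        /* Hypothetical per-station context handed out by
         * ndo_dfwd_add_station() and passed back here as accel_priv for
         * l2-forwarded traffic. */
        struct foo_fwd {
                u16 base_queue;
                u16 num_queues;         /* assumed >= 1 */
        };

        /* Placeholder for the driver's normal queue selection. */
        u16 foo_default_select_queue(struct net_device *dev, struct sk_buff *skb);

        static u16 foo_select_queue(struct net_device *dev, struct sk_buff *skb,
                                    void *accel_priv)
        {
                struct foo_fwd *fwd = accel_priv;

                /* Steer l2-forwarded traffic into the queue range reserved
                 * for the offloaded station; everything else takes the
                 * usual path. */
                if (fwd)
                        return fwd->base_queue +
                               (skb_get_hash(skb) % fwd->num_queues);

                return foo_default_select_queue(dev, skb);
        }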
  28. 08 Jan 2014, 1 commit
    • net-gre-gro: Add GRE support to the GRO stack · bf5a755f
      Authored by Jerry Chu
      This patch builds on top of commit 299603e8
      ("net-gro: Prepare GRO stack for the upcoming tunneling support") to
      add support for standard GRE (RFC1701/RFC2784/RFC2890) to the GRO
      stack. It also serves as an example for supporting other
      encapsulation protocols in the GRO stack in the future.
      
      The patch supports version 0 and all the flags (key, csum, seq#) but
      will flush any pkt with the S (seq#) flag. This is because the S flag
      is not supported by GSO, and a GRO pkt may end up in the forwarding
      path, thus requiring GSO support to break it up correctly.
      
      Currently the "packet_offload" structure only contains L3 (ETH_P_IP/
      ETH_P_IPV6) GRO offload support, so the encapped pkts are limited to
      IP pkts (i.e., w/o L2 hdr). But support for other protocol types can
      easily be added, as can support for GRE variations like NVGRE.
      
      The patch also supports csum offload. Specifically, if the csum flag is on
      and the h/w is capable of checksumming the payload (CHECKSUM_COMPLETE),
      the code will take advantage of the csum computed by the h/w when
      validating the GRE csum.
      
      Note that commit 60769a5d "ipv4: gre:
      add GRO capability" already introduced GRO capability to IPv4 GRE
      tunnels, using the gro_cells infrastructure. But GRO is done after
      the GRE hdr has been removed (i.e., decapped). This patch applies
      GRO when pkts first come in (before hitting the GRE tunnel code).
      There is some performance advantage to applying GRO as early as
      possible. Also, this approach is transparent to other subsystems
      like Open vSwitch, where GRE decap is handled outside of the IP
      stack, hence making it harder for the gro_cells stuff to apply. On
      the other hand, some NICs are still not capable of hashing on the
      inner hdr of a GRE pkt (RSS). In that case the GRO processing of
      pkts from the same remote host will all happen on the same CPU and
      the performance may be suboptimal.
      
      I'm including some rough preliminary performance numbers below. Note
      that the performance will be highly dependent on traffic load and
      mix, as usual. Moreover it also depends on NIC offload features,
      hence the following is by no means a comprehensive study. Local
      testing and tuning will be needed to decide the best setting.
      
      All tests spawned 50 copies of netperf TCP_STREAM and ran for 30 secs.
      (super_netperf 50 -H 192.168.1.18 -l 30)
      
      An IP GRE tunnel with only the key flag on (e.g., ip tunnel add gre1
      mode gre local 10.246.17.18 remote 10.246.17.17 ttl 255 key 123)
      is configured.
      
      The GRO support for pkts AFTER decap are controlled through the device
      feature of the GRE device (e.g., ethtool -K gre1 gro on/off).
      
      1.1 ethtool -K gre1 gro off; ethtool -K eth0 gro off
      thruput: 9.16Gbps
      CPU utilization: 19%
      
      1.2 ethtool -K gre1 gro on; ethtool -K eth0 gro off
      thruput: 5.9Gbps
      CPU utilization: 15%
      
      1.3 ethtool -K gre1 gro off; ethtool -K eth0 gro on
      thruput: 9.26Gbps
      CPU utilization: 12-13%
      
      1.4 ethtool -K gre1 gro on; ethtool -K eth0 gro on
      thruput: 9.26Gbps
      CPU utilization: 10%
      
      The following tests were performed on a different NIC that is capable of
      csum offload. I.e., the h/w is capable of computing IP payload csum
      (CHECKSUM_COMPLETE).
      
      2.1 ethtool -K gre1 gro on (hence will use gro_cells)
      
      2.1.1 ethtool -K eth0 gro off; csum offload disabled
      thruput: 8.53Gbps
      CPU utilization: 9%
      
      2.1.2 ethtool -K eth0 gro off; csum offload enabled
      thruput: 8.97Gbps
      CPU utilization: 7-8%
      
      2.1.3 ethtool -K eth0 gro on; csum offload disabled
      thruput: 8.83Gbps
      CPU utilization: 5-6%
      
      2.1.4 ethtool -K eth0 gro on; csum offload enabled
      thruput: 8.98Gbps
      CPU utilization: 5%
      
      2.2 ethtool -K gre1 gro off
      
      2.2.1 ethtool -K eth0 gro off; csum offload disabled
      thruput: 5.93Gbps
      CPU utilization: 9%
      
      2.2.2 ethtool -K eth0 gro off; csum offload enabled
      thruput: 5.62Gbps
      CPU utilization: 8%
      
      2.2.3 ethtool -K eth0 gro on; csum offload disabled
      thruput: 7.69Gbps
      CPU utilization: 8%
      
      2.2.4 ethtool -K eth0 gro on; csum offload enabled
      thruput: 8.96Gbps
      CPU utilization: 5-6%
      Signed-off-by: H.K. Jerry Chu <hkchu@google.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
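
      A sketch of the header checks this implies; gre_base_hdr and the
      GRE_* flag masks are existing definitions, while the full handler
      additionally validates the optional checksum and hands the inner
      packet to the matching ETH_P_IP/ETH_P_IPV6 GRO callback:

        #include <linux/if_tunnel.h>
        #include <net/gre.h>

        /* Only GRE version 0 is coalesced; packets carrying the routing or
         * sequence-number bits are flushed, since GSO cannot regenerate the
         * seq# when the merged packet is segmented again (e.g. on the
         * forwarding path). */
        static bool gre_gro_can_coalesce_sketch(const struct gre_base_hdr *greh)
        {
                if (greh->flags & GRE_VERSION)  /* version != 0 */
                        return false;
                if (greh->flags & GRE_SEQ)      /* seq# present: flush */
                        return false;
                if (greh->flags & GRE_ROUTING)
                        return false;
                return true;                    /* key and csum are handled */
        }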
  29. 05 Jan 2014, 1 commit
  30. 04 Jan 2014, 1 commit
  31. 03 Jan 2014, 1 commit
    • ipv4: fix tunneled VM traffic over hw VXLAN/GRE GSO NIC · 7a7ffbab
      Authored by Wei-Chun Chao
      VM to VM GSO traffic is broken if it goes through VXLAN or GRE
      tunnel and the physical NIC on the host supports hardware VXLAN/GRE
      GSO offload (e.g. bnx2x and next-gen mlx4).
      
      Two issues -
      (VXLAN) VM traffic has SKB_GSO_DODGY and SKB_GSO_UDP_TUNNEL with
      SKB_GSO_TCP/UDP set depending on the inner protocol. The GSO header
      integrity check fails in udp4_ufo_fragment if the inner protocol is
      TCP. Also, gso_segs is calculated incorrectly using skb->len, which
      includes the tunnel header. Fix: the robust check should only be
      applied to the inner packet.
      
      (VXLAN & GRE) Once the GSO header integrity check passes, NULL segs
      is returned and the original skb is sent to hardware. However, the
      tunnel header has already been pulled. Fix: the tunnel header needs
      to be restored so that hardware can perform GSO properly on the
      original packet.
      Signed-off-by: Wei-Chun Chao <weichunc@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
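
      A sketch of what the restore step for the second issue involves,
      assuming the caller saved the original protocol, the mac header
      offset/length, and the number of outer bytes pulled before software
      segmentation was attempted; the actual fix lives in the UDP/GRE GSO
      paths:

        #include <linux/skbuff.h>

        static void gso_restore_tunnel_header_sketch(struct sk_buff *skb,
                                                     __be16 protocol,
                                                     int pulled_hlen,
                                                     u16 mac_offset, int mac_len)
        {
                skb->protocol = protocol;
                skb->encapsulation = 0;       /* back to an un-decapped skb */
                skb_push(skb, pulled_hlen);   /* put the outer bytes back */
                skb_reset_transport_header(skb);
                skb->mac_header = mac_offset;
                skb->network_header = skb->mac_header + mac_len;
                skb->mac_len = mac_len;
        }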