1. 27 Sep 2010 (1 commit)
  2. 18 Sep 2010 (1 commit)
  3. 17 Sep 2010 (1 commit)
  4. 16 Sep 2010 (2 commits)
  5. 09 Sep 2010 (1 commit)
  6. 08 Sep 2010 (1 commit)
    • net: fix tx queue selection for bridged devices implementing select_queue · deabc772
      Committed by Helmut Schaa
      When a net device implements the select_queue callback and is part of
      a bridge, frames coming from the bridge already have a tx queue associated
      with the socket (introduced in commit a4ee3ce3,
      "net: Use sk_tx_queue_mapping for connected sockets"). The call to
      sk_tx_queue_get will then return the tx queue used by the bridge instead
      of calling the select_queue callback.
      
      In the case of mac80211 this broke QoS, which is implemented via the
      select_queue callback. Furthermore, it introduced problems with rt2x00
      because frames with the same TID and RA sometimes appeared on different
      tx queues, which the hw cannot handle correctly.
      
      Fix this by always calling select_queue first if it is available, and only
      afterwards falling back to the socket tx queue mapping.
      Signed-off-by: Helmut Schaa <helmut.schaa@googlemail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
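
      A minimal sketch of the corrected ordering, assuming the surrounding
      dev_pick_tx() helpers of this tree (sk_tx_queue_get, skb_tx_hash,
      netdev_get_tx_queue); simplified, not the literal diff:

          static struct netdev_queue *dev_pick_tx(struct net_device *dev,
                                                  struct sk_buff *skb)
          {
                  const struct net_device_ops *ops = dev->netdev_ops;
                  int queue_index;

                  if (ops->ndo_select_queue) {
                          /* driver policy (e.g. mac80211 QoS mapping) decides first */
                          queue_index = ops->ndo_select_queue(dev, skb);
                  } else {
                          /* only without a driver hook use the cached socket mapping */
                          queue_index = sk_tx_queue_get(skb->sk);
                          if (queue_index < 0)
                                  queue_index = skb_tx_hash(dev, skb);
                  }

                  skb_set_queue_mapping(skb, queue_index);
                  return netdev_get_tx_queue(dev, queue_index);
          }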
  7. 03 Sep 2010 (1 commit)
  8. 02 Sep 2010 (1 commit)
  9. 27 Aug 2010 (1 commit)
    • gro: __napi_gro_receive() optimizations · 40d0802b
      Committed by Eric Dumazet
      compare_ether_header() can have a special implementation on 64 bit
      arches if CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is defined.
      
      __napi_gro_receive() and vlan_gro_common() can then avoid a conditional
      branch when performing the device match.
      
      On x86_64, __napi_gro_receive() now has 38 instructions instead of 53.
      
      As gcc-4.4.3 still chooses not to inline it, add the inline keyword to this
      performance-critical function.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
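
      On such arches the whole 14-byte Ethernet header can be compared with two
      unaligned long loads instead of a byte-wise compare; a sketch of the idea
      (guarded, in the real header, by CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
      and a 64-bit long):

          static inline unsigned long compare_ether_header(const void *a, const void *b)
          {
                  unsigned long fold;

                  /* bytes 0..7: destination MAC plus start of source MAC */
                  fold  = *(const unsigned long *)a ^ *(const unsigned long *)b;
                  /* bytes 6..13: rest of source MAC plus the protocol field */
                  fold |= *(const unsigned long *)(a + 6) ^ *(const unsigned long *)(b + 6);

                  return fold;    /* zero iff both headers are identical */
          }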
  10. 23 Aug 2010 (2 commits)
  11. 22 Aug 2010 (1 commit)
  12. 20 Aug 2010 (3 commits)
  13. 19 Aug 2010 (1 commit)
  14. 18 Aug 2010 (1 commit)
  15. 17 Aug 2010 (1 commit)
    • core: Factor out flow calculation from get_rps_cpu · bfb564e7
      Committed by Krishna Kumar
      Factor out flow calculation code from get_rps_cpu, since other
      functions can use the same code.
      
      Revisions:
      
      v2 (Ben): Separate flow calculation out and use in select queue.
      v3 (Arnd): Don't re-implement MIN.
      v4 (Changli): skb->data points to ethernet header in macvtap, and
      	make a fast path. Tested macvtap with this patch.
      v5 (Changli):
      	- Cache skb->rxhash in skb_get_rxhash
      	- macvtap may not have pow(2) queues, so change code for
      	  queue selection.
          (Arnd):
      	- Use first available queue if all fails.
      Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
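
      A sketch of the v5 caching wrapper (names as in this patch to the best of
      recollection; the actual header parsing lives in the out-of-line helper):

          static inline __u32 skb_get_rxhash(struct sk_buff *skb)
          {
                  /* compute the flow hash at most once and cache it in the skb */
                  if (!skb->rxhash)
                          skb->rxhash = __skb_get_rxhash(skb);

                  return skb->rxhash;
          }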
  16. 08 Aug 2010 (1 commit)
  17. 06 Aug 2010 (1 commit)
  18. 03 Aug 2010 (2 commits)
  19. 01 Aug 2010 (1 commit)
  20. 26 Jul 2010 (1 commit)
  21. 20 Jul 2010 (1 commit)
  22. 19 Jul 2010 (1 commit)
    • net: support time stamping in phy devices. · c1f19b51
      Committed by Richard Cochran
      This patch adds a new networking option to allow hardware time stamps
      from PHY devices. When enabled, likely candidates among incoming and
      outgoing network packets are offered to the PHY driver for possible
      time stamping. When accepted by the PHY driver, incoming packets are
      deferred for later delivery by the driver.
      
      The patch also adds phylib driver methods for the SIOCSHWTSTAMP ioctl
      and callbacks for transmit and receive time stamping. Drivers may
      optionally implement these functions.
      Signed-off-by: Richard Cochran <richard.cochran@omicron.at>
      Signed-off-by: David S. Miller <davem@davemloft.net>
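
      A sketch of the shape of the new phylib hooks; the exact field names here
      are recalled rather than quoted, so treat them as illustrative:

          struct phy_driver {
                  /* ... existing fields ... */

                  /* configure PHY time stamping via the SIOCSHWTSTAMP ioctl */
                  int  (*hwtstamp)(struct phy_device *phydev, struct ifreq *ifr);

                  /* returns true if the PHY kept the skb for deferred rx delivery */
                  bool (*rxtstamp)(struct phy_device *dev, struct sk_buff *skb, int type);

                  /* offer an outgoing packet for tx time stamping */
                  void (*txtstamp)(struct phy_device *dev, struct sk_buff *skb, int type);
          };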
  23. 15 Jul 2010 (2 commits)
    • net: fix problem in reading sock TX queue · b0f77d0e
      Committed by Tom Herbert
      Fix a problem in reading the tx_queue recorded in a socket.  In
      dev_pick_tx, the TX queue is read by doing a check with
      sk_tx_queue_recorded on the socket, followed by a sk_tx_queue_get.
      The problem is that there is no mutual exclusion across these
      calls on the socket, so the queue in the sock can be invalidated
      after sk_tx_queue_recorded is called, in which case sk_tx_queue_get
      returns -1.  That sets 65535 in queue_index and thus dev_pick_tx
      returns 65536, which is a bogus queue and can cause a crash in
      dev_queue_xmit.
      
      We fix this by only calling sk_tx_queue_get, which does the proper
      checks.  The interface is that sk_tx_queue_get returns the TX queue
      if the sock argument is non-NULL and a TX queue is recorded, else it
      returns -1.  sk_tx_queue_recorded is no longer used, so it can be
      completely removed.
      Signed-off-by: Tom Herbert <therbert@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
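
      The fixed accessor collapses the two steps into one NULL-safe check;
      a sketch:

          static inline int sk_tx_queue_get(const struct sock *sk)
          {
                  /* -1 when there is no socket or no queue has been recorded */
                  return sk ? sk->sk_tx_queue_mapping : -1;
          }

      dev_pick_tx can then simply test for a negative return instead of calling
      sk_tx_queue_recorded first.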
    • net: skb_tx_hash() fix relative to skb_orphan_try() · 87fd308c
      Committed by Eric Dumazet
      Commit fc6055a5 (net: Introduce skb_orphan_try()) added early
      orphaning of skbs.

      This unfortunately introduced a performance regression in skb_tx_hash()
      in the case of stacked devices (bonding, vlans, ...).
      
      Since skb->sk is now NULL, we cannot access sk->sk_hash anymore to
      spread tx packets to multiple NIC queues on multiqueue devices.
      
      skb_tx_hash() in this case only uses skb->protocol, which is the same
      value for all flows.
      
      skb_orphan_try() can copy sk->sk_hash into skb->rxhash and skb_tx_hash()
      can use this saved sk_hash value to compute its internal hash value.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
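
      A sketch of the change in skb_orphan_try() (simplified; the tx-timestamp
      guard around the orphan is assumed to match that era's helper):

          static inline void skb_orphan_try(struct sk_buff *skb)
          {
                  struct sock *sk = skb->sk;

                  if (sk && !skb_tx(skb)->flags) {
                          /* skb_tx_hash() cannot reach sk->sk_hash once the skb
                           * is orphaned, so preserve it in skb->rxhash first.
                           */
                          if (!skb->rxhash)
                                  skb->rxhash = sk->sk_hash;
                          skb_orphan(skb);
                  }
          }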
  24. 10 Jul 2010 (2 commits)
    • net: Document that dev_get_stats() returns the given pointer · d7753516
      Committed by Ben Hutchings
      Document that dev_get_stats() returns the same stats pointer it was
      given.  Remove const qualification from the returned pointer since the
      caller may do what it likes with that structure.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
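
      With the pointer passed back non-const, callers can use and adjust the
      result inline; a small usage sketch (the caller and its missed_rx counter
      are hypothetical):

          struct rtnl_link_stats64 temp;
          struct rtnl_link_stats64 *stats = dev_get_stats(dev, &temp);

          /* the caller owns the storage, so post-processing it is fine */
          stats->rx_errors += missed_rx;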
    • net: Get rid of rtnl_link_stats64 / net_device_stats union · 3cfde79c
      Committed by Ben Hutchings
      In commit be1f3c2c "net: Enable 64-bit
      net device statistics on 32-bit architectures" I redefined struct
      net_device_stats so that it could be used in a union with struct
      rtnl_link_stats64, avoiding the need for explicit copying or
      conversion between the two.  However, this is unsafe because the
      interface requires no locking, and no lock is consistently held
      around calls to dev_get_stats() and use of the statistics structure
      it returns.
      
      In commit 28172739 "net: fix 64 bit
      counters on 32 bit arches" Eric Dumazet dealt with that problem by
      requiring callers of dev_get_stats() to provide storage for the
      result.  This means that the net_device::stats64 field and the padding
      in struct net_device_stats are now redundant, so remove them.
      
      Update the comment on net_device_ops::ndo_get_stats64 to reflect its
      new usage.
      
      Change dev_txq_stats_fold() to use struct rtnl_link_stats64, since
      that is what all its callers are really using and it is no longer
      going to be compatible with struct net_device_stats.
      
      Eric Dumazet suggested the separate function for the structure
      conversion.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
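
      A sketch of the separate conversion helper suggested by Eric Dumazet,
      assuming matching field order between the two structs (on 64-bit the
      layouts are identical, so a memcpy suffices):

          static void netdev_stats_to_stats64(struct rtnl_link_stats64 *stats64,
                                              const struct net_device_stats *netdev_stats)
          {
          #if BITS_PER_LONG == 64
                  BUILD_BUG_ON(sizeof(*stats64) != sizeof(*netdev_stats));
                  memcpy(stats64, netdev_stats, sizeof(*stats64));
          #else
                  /* widen each unsigned long counter to u64, field by field */
                  size_t i, n = sizeof(*stats64) / sizeof(u64);
                  const unsigned long *src = (const unsigned long *)netdev_stats;
                  u64 *dst = (u64 *)stats64;

                  for (i = 0; i < n; i++)
                          dst[i] = src[i];
          #endif
          }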
  25. 08 Jul 2010 (1 commit)
    • net: fix 64 bit counters on 32 bit arches · 28172739
      Committed by Eric Dumazet
      There is a small possibility that a reader gets incorrect values on 32-bit
      arches. SNMP applications could catch incorrect counters when a
      32-bit high part is changed by another stats consumer/provider.
      
      One way to solve this is to add a rtnl_link_stats64 param to all
      ndo_get_stats64() methods, and also add such a parameter to
      dev_get_stats().
      
      The rule is that we are not allowed to use dev->stats64 as temporary
      storage for 64-bit stats; a caller-provided area (usually on the stack)
      must be used instead.
      
      Old drivers (providing only the get_stats() method) need no changes.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
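
      A sketch of the resulting calling convention (simplified; legacy drivers
      keep working through a conversion like the helper sketched under commit
      3cfde79c above):

          const struct rtnl_link_stats64 *dev_get_stats(struct net_device *dev,
                                                        struct rtnl_link_stats64 *storage)
          {
                  const struct net_device_ops *ops = dev->netdev_ops;

                  if (ops->ndo_get_stats64) {
                          memset(storage, 0, sizeof(*storage));
                          return ops->ndo_get_stats64(dev, storage);
                  }

                  /* legacy drivers: widen their unsigned long counters */
                  netdev_stats_to_stats64(storage, ops->ndo_get_stats ?
                                          ops->ndo_get_stats(dev) : &dev->stats);
                  return storage;
          }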
  26. 05 Jul 2010 (1 commit)
  27. 03 Jul 2010 (1 commit)
    • net: decreasing real_num_tx_queues needs to flush qdisc · f0796d5c
      Committed by John Fastabend
      Reducing real_num_tx_queues needs to flush the qdisc; otherwise
      skbs with queue_mappings greater than real_num_tx_queues can
      be sent to the underlying driver.
      
      The flow for this is,
      
      dev_queue_xmit()
      	dev_pick_tx()
      		skb_tx_hash()  => hash using real_num_tx_queues
      		skb_set_queue_mapping()
      	...
      	qdisc_enqueue_root() => enqueue skb on txq from hash
      ...
      dev->real_num_tx_queues -= n
      ...
      sch_direct_xmit()
      	dev_hard_start_xmit()
      		ndo_start_xmit(skb,dev) => skb queue set with old hash
      
      skbs are enqueued on the qdisc with skb->queue_mapping set such that
      0 < queue_mapping < real_num_tx_queues.  When the driver
      decreases real_num_tx_queues, skbs may be dequeued from the
      qdisc with a queue_mapping greater than real_num_tx_queues.
      
      This fixes a case in ixgbe where this was occurring with DCB
      and FCoE. Because the driver uses queue_mapping to map
      skbs to tx descriptor rings, we could potentially map skbs to
      rings that no longer exist.
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Tested-by: Ross Brattain <ross.b.brattain@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
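
      A sketch of a shrink-aware setter capturing the fix (a hedged
      reconstruction; qdisc_reset_all_tx_gt() here stands for flushing every
      tx qdisc above the new queue count):

          void netif_set_real_num_tx_queues(struct net_device *dev, unsigned int txq)
          {
                  if (txq < 1 || txq > dev->num_tx_queues)
                          return;         /* out of range: ignore */

                  if (txq < dev->real_num_tx_queues) {
                          dev->real_num_tx_queues = txq;
                          /* purge skbs already hashed onto the disappearing queues */
                          qdisc_reset_all_tx_gt(dev, txq);
                  } else {
                          dev->real_num_tx_queues = txq;
                  }
          }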
  28. 01 Jul 2010 (1 commit)
  29. 24 Jun 2010 (1 commit)
  30. 16 Jun 2010 (2 commits)
  31. 13 Jun 2010 (1 commit)
    • net: Enable 64-bit net device statistics on 32-bit architectures · be1f3c2c
      Committed by Ben Hutchings
      Use struct rtnl_link_stats64 as the statistics structure.
      
      On 32-bit architectures, insert 32 bits of padding after/before each
      field of struct net_device_stats to make its layout compatible with
      struct rtnl_link_stats64.  Add an anonymous union in net_device; move
      stats into the union and add struct rtnl_link_stats64 stats64.
      
      Add net_device_ops::ndo_get_stats64, implementations of which will
      return a pointer to struct rtnl_link_stats64.  Drivers that implement
      this operation must not update the structure asynchronously.
      
      Change dev_get_stats() to call ndo_get_stats64 if available, and to
      return a pointer to struct rtnl_link_stats64.  Change callers of
      dev_get_stats() accordingly.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
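
      A sketch of the dispatch this gives dev_get_stats() at this point in the
      series (it predates the caller-provided storage introduced by commit
      28172739 above; the cast relies on the padded union layout described in
      this message):

          const struct rtnl_link_stats64 *dev_get_stats(struct net_device *dev)
          {
                  const struct net_device_ops *ops = dev->netdev_ops;

                  if (ops->ndo_get_stats64)
                          return ops->ndo_get_stats64(dev);
                  if (ops->ndo_get_stats)
                          return (struct rtnl_link_stats64 *)ops->ndo_get_stats(dev);

                  /* default counters share storage with stats64 via the union */
                  return &dev->stats64;
          }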
  32. 11 Jun 2010 (1 commit)
    • net: deliver skbs on inactive slaves to exact matches · 597a264b
      Committed by John Fastabend
      Currently, the accelerated receive path for VLANs will
      drop packets if the real device is an inactive slave and
      the packet is not one of the special types tested for in
      skb_bond_should_drop().  This behavior differs from the
      non-accelerated path and from pkts over a bonded vlan.
      
      For example,
      
      vlanx -> bond0 -> ethx
      
      will be dropped in the vlan path and not delivered to any
      packet handlers at all.  However,
      
      bond0 -> vlanx -> ethx
      
      and
      
      bond0 -> ethx
      
      will be delivered to handlers that match the exact dev,
      because the VLAN path checks the real_dev, which is not a
      slave, and netif_receive_skb() doesn't drop frames but only
      delivers them to exact matches.
      
      This patch adds a sk_buff flag which is used for tagging
      skbs that would previously have been dropped, and allows the
      skb to continue on to netif_receive_skb().  There we add
      logic to check for the deliver_no_wcard flag and, if it
      is set, only deliver to handlers that match exactly.  This
      makes both paths above consistent and gives pkt handlers
      a way to identify skbs that come from inactive slaves.
      Without this patch, in some configurations skbs will be
      delivered to handlers with exact matches and in others
      be dropped outright in the vlan path.
      
      I have tested the following 4 configurations in failover modes
      and load balancing modes.
      
      # bond0 -> ethx
      
      # vlanx -> bond0 -> ethx
      
      # bond0 -> vlanx -> ethx
      
      # bond0 -> ethx
                  |
        vlanx -> --
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
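
      A sketch of the mechanism, illustrative rather than the literal diff (the
      flag bit follows the patch; the exact-match check shown here condenses the
      real delivery logic in the receive path):

          /* in struct sk_buff: */
          __u8    deliver_no_wcard:1;     /* frame arrived on an inactive slave:
                                           * deliver only to exact-dev matches */

          /* in the protocol-handler delivery loop of the receive path: */
          list_for_each_entry_rcu(ptype, &ptype_all, list) {
                  if (skb->deliver_no_wcard && ptype->dev != skb->dev)
                          continue;       /* skip wildcard handlers */
                  ret = deliver_skb(skb, ptype, orig_dev);
          }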