1. 18 May 2010 (1 commit)
    • net: Add netlink support for virtual port management (was iovnl) · 57b61080
      Scott Feldman authored
      Add new netdev ops ndo_{set|get}_vf_port to allow setting of
      port-profile on a netdev interface.  Extends netlink socket RTM_SETLINK/
      RTM_GETLINK with two new sub msgs called IFLA_VF_PORTS and IFLA_PORT_SELF
      (added to the end of the IFLA_cmd list).  These are both nested attributes
      using this layout:
      
                    [IFLA_NUM_VF]
                    [IFLA_VF_PORTS]
                            [IFLA_VF_PORT]
                                    [IFLA_PORT_*], ...
                            [IFLA_VF_PORT]
                                    [IFLA_PORT_*], ...
                            ...
                    [IFLA_PORT_SELF]
                            [IFLA_PORT_*], ...
      
      These attributes are designed to be set and get symmetrically.  VF_PORTS
      is a list of VF_PORTs, one for each VF, when dealing with an SR-IOV
      device.  PORT_SELF is for the PF of the SR-IOV device, in case it wants
      to also have a port-profile, or for the case where the VF==PF, like in
      enic patch 2/2 of this patch set.
      
      A port-profile is used to configure/enable the external switch virtual port
      backing the netdev interface, not to configure the host-facing side of the
      netdev.  A port-profile is an identifier known to the switch.  How port-
      profiles are installed on the switch or how available port-profiles are
      made known to the host is outside the scope of this patch.
      
      There are two types of port-profile specs in the netlink msg.  The first spec
      is for the 802.1Qbg (pre-)standard VDP protocol.  The second spec is for devices
      that run a protocol similar to VDP but in firmware, thus hiding the protocol
      details.  In either case, the specs have much in common, so it makes sense to
      define the netlink msg as the union of the two specs.  For example, both specs
      have a notion of associating/deassociating a port-profile.  And both specs
      require some information from the hypervisor manager, such as a client port
      instance ID.
      
      The general flow is that the port-profile is applied to a host netdev interface
      using RTM_SETLINK, the receiver of the RTM_SETLINK msg communicates with the
      switch, and the switch virtual port backing the host netdev interface is
      configured/enabled based on the settings defined by the port-profile.  What
      those settings comprise, and how those settings are managed is again
      outside the scope of this patch, since this patch only deals with the
      first step in the flow.
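      
      As a rough, hypothetical sketch (the attribute names come from this patch;
      the callback shape and the my_port_profile() helper are illustrative
      assumptions), a driver's get-side callback could fill one nested
      IFLA_VF_PORT section of an RTM_GETLINK reply like this:
      
              /* dump one VF's port-profile state into the reply skb */
              static int my_ndo_get_vf_port(struct net_device *dev, int vf,
                                            struct sk_buff *skb)
              {
                      const char *profile = my_port_profile(dev, vf); /* hypothetical */
      
                      if (nla_put_u32(skb, IFLA_PORT_VF, vf))
                              return -EMSGSIZE;
                      if (nla_put(skb, IFLA_PORT_PROFILE,
                                  strlen(profile) + 1, profile))
                              return -EMSGSIZE;
                      return 0;
              }
      
      rtnetlink then wraps these attributes in the IFLA_VF_PORT/IFLA_VF_PORTS
      (or IFLA_PORT_SELF) nests shown in the layout above.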
      Signed-off-by: Scott Feldman <scofeldm@cisco.com>
      Signed-off-by: Roopa Prabhu <roprabhu@cisco.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 16 May 2010 (1 commit)
    • net: Consistent skb timestamping · 3b098e2d
      Eric Dumazet authored
      With the inclusion of RPS, skb timestamping is not consistent in the RX path.
      
      If netif_receive_skb() is used, it is deferred until after RPS dispatch.
      
      If netif_rx() is used, it is done before RPS dispatch.
      
      This can give strange tcpdump timestamp results.
      
      I think timestamping should be done as soon as possible in the receive
      path, to get meaningful values (i.e. timestamps taken at the time the packet
      was delivered by the NIC driver to our stack), even if NAPI can already
      defer timestamping a bit (RPS can help to reduce the gap).
      
      Tom Herbert prefers to sample timestamps after RPS dispatch.  In case
      sampling is expensive (HPET/acpi_pm on x86), this makes sense.
      
      Let admins switch from one mode to another, using a new
      sysctl, /proc/sys/net/core/netdev_tstamp_prequeue
      
      Its default value (1) means timestamps are taken as soon as possible,
      before backlog queueing, giving accurate timestamps.
      
      Setting it to 0 defers timestamp sampling until the backlog is processed,
      after RPS dispatch, lowering the load on the pre-RPS cpu.
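      
      A minimal sketch of the idea (not the exact code in this patch): the
      early-timestamp helper only stamps the skb when the new sysctl asks for
      prequeue stamping, leaving the deferred path to stamp it after dispatch.
      
              /* sketch: stamp before backlog queueing only if requested */
              static inline void net_timestamp_check(struct sk_buff *skb)
              {
                      if (netdev_tstamp_prequeue && !skb->tstamp.tv64)
                              __net_timestamp(skb);
              }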
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 06 May 2010 (1 commit)
    • netpoll: add generic support for bridge and bonding devices · 0e34e931
      WANG Cong authored
      This whole patchset is for adding netpoll support to bridge and bonding
      devices. I already tested it for bridge, bonding, bridge over bonding,
      and bonding over bridge. It looks fine now.
      
      To make bridge and bonding support netpoll, we need to adjust
      some netpoll generic code. This patch does the following things:
      
      1) introduce two new priv_flags for struct net_device:
         IFF_IN_NETPOLL, which identifies that we are processing a netpoll;
         IFF_DISABLE_NETPOLL, which is used to disable netpoll support for a
         device at run-time;
      
      2) introduce one new method for netdev_ops:
         ->ndo_netpoll_cleanup() is used to clean up netpoll when a device is
           removed.
      
      3) introduce netpoll_poll_dev() which takes a struct net_device * parameter;
         export netpoll_send_skb() and netpoll_poll_dev() which will be used later;
      
      4) hide a pointer to struct netpoll in struct netpoll_info, ditto.
      
      5) introduce ->real_dev for struct netpoll.
      
      6) introduce a new status NETDEV_BONDING_DESLAE, which is used to disable
         netconsole before releasing a slave, to avoid deadlocks.
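      
      As a hedged sketch of how a stacked device is expected to use the new
      hook and flag (the my_* name is illustrative, not from this patchset):
      
              /* called when netpoll is detached from the stacked device */
              static void my_ndo_netpoll_cleanup(struct net_device *dev)
              {
                      /* refuse further netpoll activity on this device ... */
                      dev->priv_flags |= IFF_DISABLE_NETPOLL;
                      /* ... then release whatever per-slave netpoll state the
                       * device kept (details depend on the driver) */
              }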
      
      Cc: David Miller <davem@davemloft.net>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: WANG Cong <amwang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 03 May 2010 (2 commits)
  5. 28 April 2010 (2 commits)
  6. 20 April 2010 (2 commits)
    • rps: cleanups · e36fa2f7
      Eric Dumazet authored
      struct softnet_data holds many queues, so consistently using the name "sd"
      instead of "queue" is better.
      
      Adds a rps_ipi_queued() helper to clean up enqueue_to_backlog().
      
      Adds a _and_irq_disable suffix to net_rps_action() name, as David
      suggested.
      
      incr_input_queue_head() becomes input_queue_head_incr()
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • rps: shortcut net_rps_action() · 88751275
      Eric Dumazet authored
      net_rps_action() is a bit expensive on NR_CPUS=64..4096 kernels, even if
      RPS is not active.
      
      Tom Herbert used two bitmasks to hold information needed to send IPI,
      but a single LIFO list seems more appropriate.
      
      Move all RPS logic into net_rps_action() to clean up net_rx_action()
      (removing two #ifdefs).
      
      Move rps_remote_softirq_cpus into softnet_data to share its first cache
      line, filling an existing hole.
      
      In a future patch, we could call net_rps_action() from process_backlog()
      to make sure we send IPI before handling this cpu backlog.
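      
      As a hedged sketch of the single-list idea (field names follow the
      description above and are illustrative): each remote softnet_data that
      needs an IPI is pushed onto a per-CPU LIFO list, which net_rps_action()
      later walks.
      
              /* push a remote CPU's softnet_data onto the local IPI list;
               * called at most once per remote CPU per softirq round */
              static void rps_ipi_link(struct softnet_data *mysd,
                                       struct softnet_data *remsd)
              {
                      remsd->rps_ipi_next = mysd->rps_ipi_list;
                      mysd->rps_ipi_list  = remsd;
              }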
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 17 April 2010 (1 commit)
    • rfs: Receive Flow Steering · fec5e652
      Tom Herbert authored
      This patch implements receive flow steering (RFS).  RFS steers
      received packets for layer 3 and 4 processing to the CPU where
      the application for the corresponding flow is running.  RFS is an
      extension of Receive Packet Steering (RPS).
      
      The basic idea of RFS is that when an application calls recvmsg
      (or sendmsg) the application's running CPU is stored in a hash
      table that is indexed by the connection's rxhash, which is stored in
      the socket structure.  The rxhash is passed in skbs received on
      the connection from netif_receive_skb.  For each received packet,
      the associated rxhash is used to look up the CPU in the hash table;
      if a valid CPU is set, then the packet is steered to that CPU using
      the RPS mechanisms.
      
      The complication of this simple approach is that it could potentially
      allow OOO (out-of-order) packets.  If threads are thrashing around CPUs
      or multiple threads are trying to read from the same sockets, a quickly
      changing CPU value in the hash table could cause rampant OOO packets --
      we consider this a non-starter.
      
      To avoid OOO packets, this solution implements two types of hash
      tables: rps_sock_flow_table and rps_dev_flow_table.
      
      rps_sock_flow_table is a global hash table.  Each entry is just a CPU
      number and it is populated in recvmsg and sendmsg as described above.
      This table contains the "desired" CPUs for flows.
      
      rps_dev_flow_table is specific to each device queue.  Each entry
      contains a CPU and a tail queue counter.  The CPU is the "current"
      CPU for a matching flow.  The tail queue counter holds the value
      of a tail queue counter for the associated CPU's backlog queue at
      the time of last enqueue for a flow matching the entry.
      
      Each backlog queue has a queue head counter which is incremented
      on dequeue, and so a queue tail counter is computed as queue head
      count + queue length.  When a packet is enqueued on a backlog queue,
      the current value of the queue tail counter is saved in the hash
      entry of the rps_dev_flow_table.
      
      And now the trick: when selecting the CPU for RPS (get_rps_cpu)
      the rps_sock_flow table and the rps_dev_flow table for the RX queue
      are consulted.  When the desired CPU for the flow (found in the
      rps_sock_flow table) does not match the current CPU (found in the
      rps_dev_flow table), the current CPU is changed to the desired CPU
      if one of the following is true:
      
      - The current CPU is unset (equal to RPS_NO_CPU)
      - Current CPU is offline
      - The current CPU's queue head counter >= queue tail counter in the
      rps_dev_flow table.  This checks if the queue tail has advanced
      beyond the last packet that was enqueued using this table entry.
      This guarantees that all packets queued using this entry have been
      dequeued, thus preserving in order delivery.
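      
      Condensed into a sketch (pseudo-C of the rule above, not the literal
      get_rps_cpu() code; field and symbol names are taken from the
      description above or assumed for illustration):
      
              /* may the flow move from its current CPU to the desired one
               * without risking out-of-order delivery? */
              static bool may_switch_cpu(u16 cur_cpu, unsigned int last_qtail)
              {
                      if (cur_cpu == RPS_NO_CPU || !cpu_online(cur_cpu))
                              return true;
                      /* the head has caught up with the tail recorded at the
                       * last enqueue: every packet steered through this entry
                       * has already been dequeued */
                      return (int)(per_cpu(softnet_data, cur_cpu).input_queue_head -
                                   last_qtail) >= 0;
              }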
      
      Making each queue have its own rps_dev_flow table has two advantages:
      1) the tail queue counters will be written on each receive, so
      keeping the table local to the interrupting CPU is good for locality.  2)
      this allows lockless access to the table -- the CPU number and queue
      tail counter need to be accessed together under mutual exclusion
      from netif_receive_skb; we assume that this is only called from
      device napi_poll, which is non-reentrant.
      
      This patch implements RFS for TCP and connected UDP sockets.
      It should be usable for other flow oriented protocols.
      
      There are two configuration parameters for RFS.  The
      "rps_flow_entries" kernel init parameter sets the number of
      entries in the rps_sock_flow_table, the per rxqueue sysfs entry
      "rps_flow_cnt" contains the number of entries in the rps_dev_flow
      table for the rxqueue.  Both are rounded up to a power of two.
      
      The obvious benefit of RFS (over just RPS) is that it achieves
      CPU locality between the receive processing for a flow and the
      application's processing; this can result in increased performance
      (higher pps, lower latency).
      
      The benefits of RFS are dependent on cache hierarchy, application
      load, and other factors.  On simple benchmarks, we don't necessarily
      see improvement and sometimes see degradation.  However, for more
      complex benchmarks and for applications where cache pressure is
      much higher this technique seems to perform very well.
      
      Below are some benchmark results which show the potential benefit of
      this patch.  The netperf test has 500 instances of netperf TCP_RR
      test with 1 byte req. and resp.  The RPC test is a request/response
      test similar in structure to the netperf RR test, with 100 threads on
      each host, but it does more work in userspace than netperf.
      
      e1000e on 8 core Intel
         No RFS or RPS		104K tps at 30% CPU
         No RFS (best RPS config):    290K tps at 63% CPU
         RFS				303K tps at 61% CPU
      
      RPC test	tps	CPU%	50/90/99% usec latency	Latency StdDev
        No RFS/RPS	103K	48%	757/900/3185		4472.35
        RPS only:	174K	73%	415/993/2468		491.66
        RFS		223K	73%	379/651/1382		315.61
      Signed-off-by: Tom Herbert <therbert@google.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 15 April 2010 (1 commit)
  9. 13 April 2010 (1 commit)
  10. 08 April 2010 (1 commit)
  11. 04 April 2010 (2 commits)
  12. 31 March 2010 (1 commit)
  13. 26 March 2010 (1 commit)
  14. 22 March 2010 (1 commit)
  15. 19 March 2010 (3 commits)
  16. 17 March 2010 (1 commit)
    • rps: Receive Packet Steering · 0a9627f2
      Tom Herbert authored
      This patch implements software receive side packet steering (RPS).  RPS
      distributes the load of received packet processing across multiple CPUs.
      
      Problem statement: Protocol processing done in the NAPI context for received
      packets is serialized per device queue and becomes a bottleneck under high
      packet load.  This substantially limits pps that can be achieved on a single
      queue NIC and provides no scaling with multiple cores.
      
      This solution queues packets early on in the receive path on the backlog queues
      of other CPUs.   This allows protocol processing (e.g. IP and TCP) to be
      performed on packets in parallel.   For each device (or each receive queue in
      a multi-queue device) a mask of CPUs is set to indicate the CPUs that can
      process packets. A CPU is selected on a per packet basis by hashing contents
      of the packet header (e.g. the TCP or UDP 4-tuple) and using the result to index
      into the CPU mask.  The IPI mechanism is used to raise networking receive
      softirqs between CPUs.  This effectively emulates in software what a multi-queue
      NIC can provide, but is generic, requiring no device support.
      
      Many devices now provide a hash over the 4-tuple on a per packet basis
      (e.g. the Toeplitz hash).  This patch allows drivers to set the HW reported hash
      in an skb field, and that value in turn is used to index into the RPS maps.
      Using the HW generated hash can avoid cache misses on the packet when
      steering it to a remote CPU.
      
      The CPU mask is set on a per device and per queue basis in the sysfs variable
      /sys/class/net/<device>/queues/rx-<n>/rps_cpus.  This is a set of canonical
      bit maps for receive queues in the device (numbered by <n>).  If a device
      does not support multi-queue, a single variable is used for the device (rx-0).
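      
      For illustration, the per-packet CPU choice then boils down to something
      like the sketch below (map layout as described above; this is not the
      literal dev.c code):
      
              /* pick a CPU from the queue's rps_cpus map using the flow hash */
              static int rps_pick_cpu(const struct rps_map *map, u32 rxhash)
              {
                      if (!map || !map->len)
                              return -1;      /* RPS not configured: stay local */
                      /* scale the 32-bit hash onto the number of mapped CPUs */
                      return map->cpus[((u64)rxhash * map->len) >> 32];
              }
      
      For example, writing 0f to /sys/class/net/eth0/queues/rx-0/rps_cpus maps
      the queue onto CPUs 0-3, and the hash spreads flows across them.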
      
      Generally, we have found this technique increases pps capabilities of a single
      queue device with good CPU utilization.  Optimal settings for the CPU mask
      seem to depend on architectures and cache hierarchy.  Below are some results
      running 500 instances of netperf TCP_RR test with 1 byte req. and resp.
      Results show cumulative transaction rate and system CPU utilization.
      
      e1000e on 8 core Intel
         Without RPS: 108K tps at 33% CPU
         With RPS:    311K tps at 64% CPU
      
      forcedeth on 16 core AMD
         Without RPS: 156K tps at 15% CPU
         With RPS:    404K tps at 49% CPU
      
      bnx2x on 16 core AMD
         Without RPS  567K tps at 61% CPU (4 HW RX queues)
         Without RPS  738K tps at 96% CPU (8 HW RX queues)
         With RPS:    854K tps at 76% CPU (4 HW RX queues)
      
      Caveats:
      - The benefits of this patch are dependent on architecture and cache hierarchy.
      Tuning the masks to get best performance is probably necessary.
      - This patch adds overhead in the path for processing a single packet.  In
      a lightly loaded server this overhead may eliminate the advantages of
      increased parallelism, and possibly cause some relative performance degradation.
      We have found that masks that are cache aware (share same caches with
      the interrupting CPU) mitigate much of this.
      - The RPS masks can be changed dynamically; however, whenever the mask is
      changed this introduces the possibility of generating out of order packets.
      It's probably best not to change the masks too frequently.
      Signed-off-by: Tom Herbert <therbert@google.com>
      
       include/linux/netdevice.h |   32 ++++-
       include/linux/skbuff.h    |    3 +
       net/core/dev.c            |  335 +++++++++++++++++++++++++++++++++++++--------
       net/core/net-sysfs.c      |  225 ++++++++++++++++++++++++++++++-
       net/core/skbuff.c         |    2 +
       5 files changed, 538 insertions(+), 59 deletions(-)
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  17. 27 February 2010 (3 commits)
    • dev: support deferring device flag change notifications · bd380811
      Patrick McHardy authored
      Split dev_change_flags() into two functions: __dev_change_flags() to
      perform the actual changes and __dev_notify_flags() to invoke netdevice
      notifiers. This will be used by rtnl_link to defer netlink notifications
      until the device has been fully configured.
      
      This changes ordering of some operations, in particular:
      
      - netlink notifications are sent after all changes have been performed.
        As a side effect this suppresses one unnecessary netlink message when
        the IFF_UP and other flags are changed simultaneously.
      
      - The NETDEV_UP/NETDEV_DOWN and NETDEV_CHANGE notifiers are invoked
        after all changes have been performed. Their relative order is unchanged.
      
      - net_dmaengine_put() is invoked before the NETDEV_DOWN notifier instead
        of afterwards. This should not make any difference since both RX and TX
        are already shut down at this point.
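      
      A hedged sketch of what the split leaves behind in dev_change_flags()
      (simplified; the real function also works out which flags actually
      changed before notifying):
      
              int dev_change_flags(struct net_device *dev, unsigned int flags)
              {
                      int old_flags = dev->flags;
                      int ret;
      
                      ret = __dev_change_flags(dev, flags);   /* apply changes */
                      if (ret < 0)
                              return ret;
      
                      __dev_notify_flags(dev, old_flags);     /* notify afterwards */
                      return ret;
              }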
      Signed-off-by: Patrick McHardy <kaber@trash.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • rtnetlink: handle rtnl_link netlink notifications manually · a2835763
      Patrick McHardy authored
      In order to support specifying device flags during device creation,
      we must be able to roll back device registration in case setting the
      flags fails without sending any notifications related to the device
      to userspace.
      
      This patch changes rollback_registered_many() and register_netdevice()
      to manually send netlink notifications for devices not handled by
      rtnl_link and allows deferring notifications for devices handled by
      rtnl_link until setup is complete.
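      
      The gist, as an illustrative fragment (condition simplified): only
      devices not managed by rtnl_link get the immediate notification, since
      rtnl_link sends it itself once the link is fully set up.
      
              /* in the registration/rollback paths (sketch) */
              if (!dev->rtnl_link_ops)
                      rtmsg_ifinfo(RTM_NEWLINK, dev, ~0U);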
      Signed-off-by: Patrick McHardy <kaber@trash.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • netdevice.h: check for CONFIG_WLAN instead of CONFIG_WLAN_80211 · caf66e58
      John W. Linville authored
      In "wireless: remove WLAN_80211 and WLAN_PRE80211 from Kconfig" I
      inadvertently missed a line in include/linux/netdevice.h.  I thereby
      effectively reverted "net: Set LL_MAX_HEADER properly for wireless." by
      accident. :-(  Now we should check there for CONFIG_WLAN instead.
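      
      The check being fixed looks roughly like this (the LL_MAX_HEADER values
      shown are placeholders, not the exact numbers in netdevice.h); with the
      stale CONFIG_WLAN_80211 symbol in place of CONFIG_WLAN, the first branch
      could never be taken:
      
              #if defined(CONFIG_WLAN) || defined(CONFIG_AX25) || defined(CONFIG_AX25_MODULE)
              #define LL_MAX_HEADER 96        /* leave room for wireless headers */
              #else
              #define LL_MAX_HEADER 32
              #endif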
      Signed-off-by: John W. Linville <linville@tuxdriver.com>
      Reported-by: Christoph Egger <siccegge@stud.informatik.uni-erlangen.de>
      Cc: stable@kernel.org
  18. 13 February 2010 (3 commits)
  19. 11 February 2010 (1 commit)
  20. 05 February 2010 (1 commit)
    • net: use helpers to access mc list V2 · 6683ece3
      Jiri Pirko authored
      This patch introduces similar helpers to those already done for the uc list.
      However, multicast lists are not list_head lists but are made manually.  The
      three macros added by this patch will make the transition of mc_list to
      list_head smooth, in two steps:
      
      1) convert all drivers to use these macros (with the original iterator of type
         "struct dev_mc_list")
      2) once all drivers are converted, convert list type and iterators to "struct
         netdev_hw_addr" in one patch.
      
      From now on, drivers can (and should) use "netdev_for_each_mc_addr" to iterate
      over the addresses with an iterator of type "struct netdev_hw_addr".  The macros
      "netdev_mc_count" and "netdev_mc_empty" read the list's length and check for
      emptiness.  This is the state which should be reached in all drivers.
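      
      A hedged example of the end state a converted driver should reach
      (my_write_mc_filter() is a placeholder for driver-specific code):
      
              static void my_set_rx_mode(struct net_device *dev)
              {
                      struct netdev_hw_addr *ha;
      
                      if (netdev_mc_empty(dev))
                              return;
      
                      netdev_for_each_mc_addr(ha, dev)
                              my_write_mc_filter(dev, ha->addr);
              }
      
      During step 1 the iterator is still "struct dev_mc_list *" (with the
      address in dmi_addr); after step 2 it becomes the form shown above.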
      Signed-off-by: Jiri Pirko <jpirko@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  21. 04 February 2010 (1 commit)
  22. 26 January 2010 (1 commit)
  23. 23 January 2010 (1 commit)
  24. 20 January 2010 (1 commit)
  25. 04 December 2009 (1 commit)
  26. 02 December 2009 (1 commit)
  27. 27 November 2009 (1 commit)
  28. 18 November 2009 (2 commits)
    • linkwatch: linkwatch_forget_dev() to speedup device dismantle · e014debe
      Eric Dumazet authored
      Herbert Xu wrote:
      > On Tue, Nov 17, 2009 at 04:26:04AM -0800, David Miller wrote:
      >> Really, the link watch stuff is just due for a redesign.  I don't
      >> think a simple hack is going to cut it this time, sorry Eric :-)
      >
      > I have no objections against any redesigns, but since the only
      > caller of linkwatch_forget_dev runs in process context with the
      > RTNL, it could also legally emit those events.
      
      Thanks guys, here is an updated version then, before the linkwatch surgery?
      
      In this version, I force the event to be sent synchronously.
      
      [PATCH net-next-2.6] linkwatch: linkwatch_forget_dev() to speedup device dismantle
      
      time ip link del eth3.103 ; time ip link del eth3.104 ; time ip link del eth3.105
      
      real	0m0.266s
      user	0m0.000s
      sys	0m0.001s
      
      real	0m0.770s
      user	0m0.000s
      sys	0m0.000s
      
      real	0m1.022s
      user	0m0.000s
      sys	0m0.000s
      
      One problem of the current scheme in the vlan dismantle phase is the
      hold on the device taken by the following chain:
      
      vlan_dev_stop() ->
      	netif_carrier_off(dev) ->
      		linkwatch_fire_event(dev) ->
      			dev_hold() ...
      
      And __linkwatch_run_queue() runs up to one second later...
      
      A generic fix to this problem is to add a linkwatch_forget_dev() method
      to unlink the device from the list of watched devices.
      
      dev->link_watch_next becomes dev->link_watch_list (and uses a bit more memory),
      so the device can be unlinked in O(1).
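      
      In sketch form (locking simplified, and the real patch also emits the
      pending event synchronously before dropping the reference):
      
              /* remove dev from the list of devices with a pending link event */
              void linkwatch_forget_dev(struct net_device *dev)
              {
                      unsigned long flags;
      
                      spin_lock_irqsave(&lweventlist_lock, flags);
                      if (!list_empty(&dev->link_watch_list))
                              list_del_init(&dev->link_watch_list);
                      spin_unlock_irqrestore(&lweventlist_lock, flags);
              }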
      
      After patch :
      time ip link del eth3.103 ; time ip link del eth3.104 ; time ip link del eth3.105
      
      real    0m0.024s
      user    0m0.000s
      sys     0m0.000s
      
      real    0m0.032s
      user    0m0.000s
      sys     0m0.001s
      
      real    0m0.033s
      user    0m0.000s
      sys     0m0.000s
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: add dev_txq_stats_fold() helper · d83345ad
      Eric Dumazet authored
      Some drivers' ndo_get_stats() methods need to perform tx queue stats folding.
      
      Move the folding from dev_get_stats() to a new dev_txq_stats_fold() function.
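      
      A hedged usage sketch for a driver that keeps per-tx-queue counters
      (my_get_stats() is illustrative, not from this patch):
      
              static struct net_device_stats *my_get_stats(struct net_device *dev)
              {
                      struct net_device_stats *stats = &dev->stats;
      
                      /* fold per-txq tx_bytes/tx_packets/tx_dropped into *stats */
                      dev_txq_stats_fold(dev, stats);
                      return stats;
              }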
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  29. 16 November 2009 (1 commit)
    • net: Optimize hard_start_xmit() return checking · 9a1654ba
      Jarek Poplawski authored
      Recent changes in the TX error propagation require additional checking
      and masking of values returned from hard_start_xmit(), mainly to
      separate cases where the skb was consumed.  This can be simplified by
      changing the order of the NETDEV_TX and NET_XMIT codes, because the latter
      are treated similarly to negative (ERRNO) values.
      
      After this change much simpler dev_xmit_complete() is also used in
      sch_direct_xmit(), so it is moved to netdevice.h.
      
      Additionally NET_RX definitions in netdevice.h are moved up from
      between TX codes to avoid confusion while reading the TX comment.
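      
      The simplification rests on the reordered codes; the helper itself is
      roughly (a sketch, see netdevice.h for the exact version):
      
              static inline bool dev_xmit_complete(int rc)
              {
                      /* after the reordering, every "skb was consumed" result
                       * (NETDEV_TX_OK, negative errors, NET_XMIT_* codes) sits
                       * below NET_XMIT_MASK; only the NETDEV_TX_* busy codes
                       * remain above it */
                      if (likely(rc < NET_XMIT_MASK))
                              return true;
                      return false;
              }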
      Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>