1. 16 May 2010, 2 commits
    • net: Consistent skb timestamping · 3b098e2d
      Eric Dumazet committed
      With RPS inclusion, skb timestamping is not consistent in the RX path.
      
      If netif_receive_skb() is used, it is deferred until after RPS dispatch.
      
      If netif_rx() is used, it is done before RPS dispatch.
      
      This can give strange tcpdump timestamp results.
      
      I think timestamping should be done as soon as possible in the receive
      path, to get meaningful values (i.e. timestamps taken at the time the
      packet was delivered by the NIC driver to our stack), even if NAPI can
      already defer timestamping a bit (RPS can help reduce the gap).
      
      Tom Herbert prefers to sample timestamps after RPS dispatch. In case
      sampling is expensive (HPET/acpi_pm on x86), this makes sense.
      
      Let admins switch from one mode to the other using a new
      sysctl, /proc/sys/net/core/netdev_tstamp_prequeue
      
      Its default value (1) means timestamps are taken as soon as possible,
      before backlog queueing, giving accurate timestamps.
      
      Setting it to 0 samples timestamps when the backlog is processed,
      after RPS dispatch, lowering the load on the pre-RPS CPU.
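      
      Illustrative sketch of how the two modes plug into the RX path (a
      condensed form, not the exact diff; netstamp_needed is the
      pre-existing enable counter):
      
      static inline void net_timestamp_check(struct sk_buff *skb)
      {
      	if (!skb->tstamp.tv64 && atomic_read(&netstamp_needed))
      		__net_timestamp(skb);
      }
      
      /* netif_rx() / netif_receive_skb() entry: */
      	if (netdev_tstamp_prequeue)
      		net_timestamp_check(skb);	/* before RPS/backlog */
      
      /* backlog (post-RPS) processing: */
      	if (!netdev_tstamp_prequeue)
      		net_timestamp_check(skb);	/* after RPS dispatch */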
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: adjust handle_macvlan to pass port struct to hook · a14462f1
      Jiri Pirko committed
      Currently there is a NULL check here and again in the hook. Looking at
      the bridge bits, which are similar, the port structure is
      rcu_dereference()d right away in handle_bridge() and passed to the
      hook. This looks nicer.
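      
      A condensed sketch of the resulting shape (illustrative of the
      2.6.34-era RX path, not the exact diff):
      
      static struct sk_buff *handle_macvlan(struct sk_buff *skb,
      				      struct packet_type **pt_prev,
      				      int *ret, struct net_device *orig_dev)
      {
      	struct macvlan_port *port;
      
      	port = rcu_dereference(skb->dev->macvlan_port);
      	if (!port)
      		return skb;	/* the single NULL check lives here now */
      
      	if (*pt_prev) {
      		*ret = deliver_skb(skb, *pt_prev, orig_dev);
      		*pt_prev = NULL;
      	}
      	return macvlan_handle_frame_hook(port, skb);	/* hook gets port */
      }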
      Signed-off-by: Jiri Pirko <jpirko@redhat.com>
      Acked-by: Patrick McHardy <kaber@trash.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 07 May 2010, 1 commit
    • rps: Various optimizations · eecfd7c4
      Eric Dumazet committed
      Introduce a ____napi_schedule() helper for callers in IRQ-disabled
      contexts. rps_trigger_softirq() becomes a leaf function.
      
      Use container_of() in process_backlog() instead of accessing the
      per_cpu address.
      
      Use a custom inlined version of __napi_complete() in process_backlog()
      to avoid one locked instruction:
      
       only the current cpu owns and manipulates this napi,
       and NAPI_STATE_SCHED is the only possible flag set on the backlog.
       We can use a plain write instead of clear_bit(),
       and we don't need an smp_mb() memory barrier: since RPS is on,
       the backlog is protected by a spinlock.
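      
      Condensed sketch of the lock-free completion (the helper name is
      hypothetical; in the patch this is open-coded in process_backlog()):
      
      /* irqs disabled, sd->input_pkt_queue.lock held */
      static void backlog_napi_complete(struct napi_struct *napi)
      {
      	list_del(&napi->poll_list);
      	/* Plain write instead of clear_bit(): NAPI_STATE_SCHED is the
      	 * only flag ever set on the backlog napi, only this cpu touches
      	 * it, and the queue spinlock already orders the access. */
      	napi->state = 0;
      }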
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 06 May 2010, 2 commits
    • veth: Don't kfree_skb() after dev_forward_skb() · 6ec82562
      Eric Dumazet committed
      In case of congestion, netif_rx() frees the skb, so we must assume
      dev_forward_skb() also consumes the skb.
      
      Bug introduced by commit 44540960
      (veth: move loopback logic to common location)
      
      We must change dev_forward_skb() to always consume the skb, and veth
      to not double-free it.
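      
      Sketch of the contract after the fix (simplified; the real function
      has more checks):
      
      int dev_forward_skb(struct net_device *dev, struct sk_buff *skb)
      {
      	if (!(dev->flags & IFF_UP) || skb->len > dev->mtu) {
      		kfree_skb(skb);		/* consume on the error path too */
      		return NET_RX_DROP;
      	}
      	skb->dev = dev;
      	return netif_rx(skb);	/* netif_rx() frees skb on congestion */
      }
      
      /* veth xmit: no kfree_skb() afterwards, whatever the return value */
      	dev_forward_skb(rcv, skb);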
      
      Bug report: http://marc.info/?l=linux-netdev&m=127310770900442&w=3
      Reported-by: Martín Ferrari <martin.ferrari@gmail.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • netpoll: add generic support for bridge and bonding devices · 0e34e931
      WANG Cong committed
      This whole patchset is for adding netpoll support to bridge and bonding
      devices. I already tested it for bridge, bonding, bridge over bonding,
      and bonding over bridge. It looks fine now.
      
      To make bridge and bonding support netpoll, we need to adjust
      some netpoll generic code. This patch does the following things:
      
      1) introduce two new priv_flags for struct net_device:
         IFF_IN_NETPOLL, which indicates we are processing a netpoll;
         IFF_DISABLE_NETPOLL, which is used to disable netpoll support for
         a device at run-time;
      
      2) introduce one new method for netdev_ops:
         ->ndo_netpoll_cleanup(), used to clean up netpoll state when a
           device is removed (see the sketch after this list);
      
      3) introduce netpoll_poll_dev() which takes a struct net_device * parameter;
         export netpoll_send_skb() and netpoll_poll_dev() which will be used later;
      
      4) hide a pointer to struct netpoll in struct netpoll_info, ditto.
      
      5) introduce ->real_dev for struct netpoll.
      
      6) introduce a new status, NETDEV_BONDING_DESLAVE, which is used to
         disable netconsole before releasing a slave, to avoid deadlocks.
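      
      A sketch of how a stacked driver might wire up the new pieces
      (illustrative; br_netpoll_cleanup and br_dev_xmit stand in for
      whatever the driver actually provides):
      
      static const struct net_device_ops br_netdev_ops = {
      	.ndo_start_xmit		= br_dev_xmit,
      	.ndo_netpoll_cleanup	= br_netpoll_cleanup,	/* new hook */
      };
      
      /* refusing netpoll for a device at run-time: */
      	dev->priv_flags |= IFF_DISABLE_NETPOLL;
      
      /* before releasing a bonding slave: */
      	call_netdevice_notifiers(NETDEV_BONDING_DESLAVE, slave_dev);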
      
      Cc: David Miller <davem@davemloft.net>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: WANG Cong <amwang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 05 May 2010, 1 commit
    • net: __alloc_skb() speedup · ec7d2f2c
      Eric Dumazet committed
      With the following patch I can reach the maximum rate of my
      pktgen+udpsink simulator:
      - 'old' machine: dual quad-core E5450 @ 3.00GHz
      - 64 UDP rx flows (differing only by destination port)
      - RPS enabled, NIC interrupts serviced on cpu0
      - rps dispatched on 7 other cores (~130,000 IPIs per second)
      - SLAB allocator (faster than SLUB in this workload)
      - tg3 NIC
      - 1,080,000 pps without a single drop at NIC level
      
      The idea is to add two prefetchw() calls in __alloc_skb(): one to
      prefetch the first sk_buff cache line, the second to prefetch the
      shinfo part.
      
      Also use one memset() to initialize all skb_shared_info fields instead
      of setting them one by one, reducing the instruction count via
      long-word moves.
      
      All skb_shared_info fields before 'dataref' are cleared in
      __alloc_skb().
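      
      Condensed sketch of the relevant part of __alloc_skb() after the
      patch (error handling elided):
      
      	skb = kmem_cache_alloc_node(cache, gfp_mask & ~__GFP_DMA, node);
      	prefetchw(skb);			/* first sk_buff cache line */
      
      	data = kmalloc_node_track_caller(size + sizeof(struct skb_shared_info),
      					 gfp_mask, node);
      	prefetchw(data + size);		/* shinfo sits at the end of data */
      
      	shinfo = skb_shinfo(skb);
      	memset(shinfo, 0, offsetof(struct skb_shared_info, dataref));
      	atomic_set(&shinfo->dataref, 1);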
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 04 May 2010, 1 commit
  6. 03 May 2010, 1 commit
    • net: fix softnet_stat · dee42870
      Changli Gao committed
      The per-CPU variable softnet_data.total was shared between IRQ and
      SoftIRQ context without any protection, and enqueue_to_backlog()
      should update the netdev_rx_stat of the target CPU.
      
      This patch renames softnet_data.total to softnet_data.processed: the
      number of packets processed in upper levels (IP stacks).
      
      softnet_stat data is moved into softnet_data.
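      
      Sketch of the consolidated counters (field set as described above;
      the exact layout in the tree may differ):
      
      struct softnet_data {
      	/* ... queues, napi, etc. ... */
      	/* stats, formerly the separate softnet_stat array */
      	unsigned int	processed;	/* was .total */
      	unsigned int	time_squeeze;
      	unsigned int	cpu_collision;
      	unsigned int	dropped;
      };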
      Signed-off-by: Changli Gao <xiaosuo@gmail.com>
      ----
       include/linux/netdevice.h |   17 +++++++----------
       net/core/dev.c            |   26 ++++++++++++--------------
       net/sched/sch_generic.c   |    2 +-
       3 files changed, 20 insertions(+), 25 deletions(-)
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 02 May 2010, 2 commits
    • net: Inline skb_pull() in eth_type_trans(). · 47d29646
      David S. Miller committed
      In commit 6be8ac2f ("[NET]: uninline skb_pull, de-bloats a lot")
      we uninlined skb_pull.
      
      But in some critical paths it makes sense to inline it, and doing so
      helps performance significantly.
      
      Create an skb_pull_inline() so that we can do this in a way that also
      serves as annotation.
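      
      A sketch of the idea (close to, but not guaranteed identical to, the
      actual patch):
      
      /* include/linux/skbuff.h */
      static inline unsigned char *skb_pull_inline(struct sk_buff *skb,
      					     unsigned int len)
      {
      	return unlikely(len > skb->len) ? NULL : __skb_pull(skb, len);
      }
      
      /* net/core/skbuff.c keeps the out-of-line entry point */
      unsigned char *skb_pull(struct sk_buff *skb, unsigned int len)
      {
      	return skb_pull_inline(skb, len);
      }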
      
      Based upon a patch by Eric Dumazet.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: sock_def_readable() and friends RCU conversion · 43815482
      Eric Dumazet committed
      The sk_callback_lock rwlock actually protects the sk->sk_sleep
      pointer, so we need two atomic operations (and the associated
      dirtying) per incoming packet.
      
      An RCU conversion is pretty much needed:
      
      1) Add a new structure, called "struct socket_wq", to hold all fields
      that will need rcu_read_lock() protection (currently: a
      wait_queue_head_t and a struct fasync_struct pointer).
      
      [A future patch will add a list anchor for wakeup coalescing]
      
      2) Attach one such structure to each "struct socket" created in
      sock_alloc_inode().
      
      3) Respect the RCU grace period when freeing a "struct socket_wq".
      
      4) Replace the sk_sleep pointer in "struct sock" with sk_wq, a pointer
      to "struct socket_wq".
      
      5) Change the sk_sleep() function to use the new sk->sk_wq instead of
      sk->sk_sleep.
      
      6) Change sk_has_sleeper() to wq_has_sleeper(), which must be used
      inside an rcu_read_lock() section.
      
      7) Change all sk_has_sleeper() callers to:
        - Use rcu_read_lock() instead of read_lock(&sk->sk_callback_lock)
        - Use wq_has_sleeper() to wake up tasks if needed
        - Use rcu_read_unlock() instead of read_unlock(&sk->sk_callback_lock)
      
      8) sock_wake_async() is modified to use RCU protection as well.
      
      9) Exceptions:
        macvtap, drivers/net/tun.c and af_unix use an integrated "struct
      socket_wq" instead of dynamically allocated ones. They don't need RCU
      freeing.
      
      Some cleanups or follow-ups are probably needed (a possible
      sk_callback_lock conversion to a spinlock, for example).
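      
      Sketch of the converted wakeup pattern (illustrative):
      
      static void sock_def_readable(struct sock *sk, int len)
      {
      	struct socket_wq *wq;
      
      	rcu_read_lock();
      	wq = rcu_dereference(sk->sk_wq);
      	if (wq_has_sleeper(wq))
      		wake_up_interruptible_sync_poll(&wq->wait,
      						POLLIN | POLLRDNORM);
      	sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
      	rcu_read_unlock();
      }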
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 29 April 2010, 1 commit
    • net: speedup udp receive path · 4b0b72f7
      Eric Dumazet committed
      Since commit 95766fff ([UDP]: Add memory accounting.),
      each received packet needs one extra sock_lock()/sock_release() pair.
      
      This added latency because of possible backlog handling. Then later,
      ticket spinlocks added yet another latency source in case of DDoS.
      
      This patch introduces the lock_sock_bh() and unlock_sock_bh()
      synchronization primitives, avoiding one atomic operation and backlog
      processing.
      
      skb_free_datagram_locked() uses them instead of the full-blown
      lock_sock()/release_sock(). The skb is orphaned inside the locked
      section for proper socket memory reclaim, and finally freed outside
      of it.
      
      The UDP receive path now takes the socket spinlock only once.
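      
      The new primitives reduce to the socket spinlock alone (sketch):
      
      static inline void lock_sock_bh(struct sock *sk)
      {
      	spin_lock_bh(&sk->sk_lock.slock);
      }
      
      static inline void unlock_sock_bh(struct sock *sk)
      {
      	spin_unlock_bh(&sk->sk_lock.slock);
      }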
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 28 April 2010, 4 commits
  10. 26 April 2010, 2 commits
    • net: rtnetlink: decouple rtnetlink address families from real address families · 25239cee
      Patrick McHardy committed
      Decouple rtnetlink address families from real address families in socket.h to
      be able to add rtnetlink interfaces to code that is not a real address family
      without increasing AF_MAX/NPROTO.
      
      This will be used to add support for multicast route dumping from all tables
      as the proc interface can't be extended to support anything but the main table
      without breaking compatibility.
      
      This partially undoes the patch that introduced independent families
      for routing rules, and converts ipmr routing rules to a new rtnetlink
      family. As in that patch, values up to 127 are reserved for real
      address families; values above that may be used arbitrarily.
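      
      Sketch of the resulting split (the exact constants are illustrative):
      
      /* values <= 127 remain real address families (AF_*) */
      #define RTNL_FAMILY_IPMR	128	/* first rtnetlink-only family */
      #define RTNL_FAMILY_MAX	128
      
      /* rtnetlink handler tables are then sized by RTNL_FAMILY_MAX + 1
       * instead of NPROTO / AF_MAX */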
      Signed-off-by: Patrick McHardy <kaber@trash.net>
    • net: fib_rules: mark arguments to fib_rules_register const and __net_initdata · 3d0c9c4e
      Patrick McHardy committed
      fib_rules_register() duplicates the template passed to it without
      modification, so mark the argument as const. Additionally, the
      templates are only needed when instantiating a new namespace, so mark
      them as __net_initdata, which means they can be discarded when
      CONFIG_NET_NS=n.
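      
      The shape of the change, sketched (assumed signature and template
      name, for illustration only):
      
      struct fib_rules_ops *
      fib_rules_register(const struct fib_rules_ops *tmpl, struct net *net);
      
      static struct fib_rules_ops __net_initdata fib4_rules_ops_template = {
      	.family	= AF_INET,
      	/* ... */
      };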
      Signed-off-by: Patrick McHardy <kaber@trash.net>
  11. 25 April 2010, 2 commits
  12. 23 April 2010, 2 commits
  13. 22 April 2010, 3 commits
  14. 21 April 2010, 3 commits
    • net: Fix an RCU warning in dev_pick_tx() · 05d17608
      David Howells committed
      Fix the following RCU warning in dev_pick_tx():
      
      ===================================================
      [ INFO: suspicious rcu_dereference_check() usage. ]
      ---------------------------------------------------
      net/core/dev.c:1993 invoked rcu_dereference_check() without protection!
      
      other info that might help us debug this:
      
      rcu_scheduler_active = 1, debug_locks = 0
      2 locks held by swapper/0:
       #0:  (&idev->mc_ifc_timer){+.-...}, at: [<ffffffff81039e65>] run_timer_softirq+0x17b/0x278
       #1:  (rcu_read_lock_bh){.+....}, at: [<ffffffff812ea3eb>] dev_queue_xmit+0x14e/0x4dc
      
      stack backtrace:
      Pid: 0, comm: swapper Not tainted 2.6.34-rc5-cachefs #4
      Call Trace:
       <IRQ>  [<ffffffff810516c4>] lockdep_rcu_dereference+0xaa/0xb2
       [<ffffffff812ea4f6>] dev_queue_xmit+0x259/0x4dc
       [<ffffffff812ea3eb>] ? dev_queue_xmit+0x14e/0x4dc
       [<ffffffff81052324>] ? trace_hardirqs_on+0xd/0xf
       [<ffffffff81035362>] ? local_bh_enable_ip+0xbc/0xc1
       [<ffffffff812f0954>] neigh_resolve_output+0x24b/0x27c
       [<ffffffff8134f673>] ip6_output_finish+0x7c/0xb4
       [<ffffffff81350c34>] ip6_output2+0x256/0x261
       [<ffffffff81052324>] ? trace_hardirqs_on+0xd/0xf
       [<ffffffff813517fb>] ip6_output+0xbbc/0xbcb
       [<ffffffff8135bc5d>] ? fib6_force_start_gc+0x2b/0x2d
       [<ffffffff81368acb>] mld_sendpack+0x273/0x39d
       [<ffffffff81368858>] ? mld_sendpack+0x0/0x39d
       [<ffffffff81052099>] ? mark_held_locks+0x52/0x70
       [<ffffffff813692fc>] mld_ifc_timer_expire+0x24f/0x288
       [<ffffffff81039ed6>] run_timer_softirq+0x1ec/0x278
       [<ffffffff81039e65>] ? run_timer_softirq+0x17b/0x278
       [<ffffffff813690ad>] ? mld_ifc_timer_expire+0x0/0x288
       [<ffffffff81035531>] ? __do_softirq+0x69/0x140
       [<ffffffff8103556a>] __do_softirq+0xa2/0x140
       [<ffffffff81002e0c>] call_softirq+0x1c/0x28
       [<ffffffff81004b54>] do_softirq+0x38/0x80
       [<ffffffff81034f06>] irq_exit+0x45/0x47
       [<ffffffff810177c3>] smp_apic_timer_interrupt+0x88/0x96
       [<ffffffff810028d3>] apic_timer_interrupt+0x13/0x20
       <EOI>  [<ffffffff810488dd>] ? __atomic_notifier_call_chain+0x0/0x86
       [<ffffffff810096bf>] ? mwait_idle+0x6e/0x78
       [<ffffffff810096b6>] ? mwait_idle+0x65/0x78
       [<ffffffff810011cb>] cpu_idle+0x4d/0x83
       [<ffffffff81380b05>] rest_init+0xb9/0xc0
       [<ffffffff81380a4c>] ? rest_init+0x0/0xc0
       [<ffffffff8168dcf0>] start_kernel+0x392/0x39d
       [<ffffffff8168d2a3>] x86_64_start_reservations+0xb3/0xb7
       [<ffffffff8168d38b>] x86_64_start_kernel+0xe4/0xeb
      
      An rcu_dereference() should be an rcu_dereference_bh().
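      
      The shape of the fix, sketched (illustrative; dev_queue_xmit()
      already runs under rcu_read_lock_bh(), and the dereferenced pointer
      here is assumed from the backtrace):
      
      -	struct dst_entry *dst = rcu_dereference(sk->sk_dst_cache);
      +	struct dst_entry *dst = rcu_dereference_bh(sk->sk_dst_cache);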
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: Remove two unnecessary exports (skbuff). · ccb7c773
      Rami Rosen committed
      There is no need to export skb_under_panic() and skb_over_panic() in
      skbuff.c, since these functions are used only in skbuff.c; this patch
      removes the two exports. It also marks these functions as 'static'
      and removes their extern declarations from include/linux/skbuff.h.
      Signed-off-by: Rami Rosen <ramirose@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: sk_sleep() helper · aa395145
      Eric Dumazet committed
      Define a new function to return the waitqueue of a "struct sock".
      
      static inline wait_queue_head_t *sk_sleep(struct sock *sk)
      {
      	return sk->sk_sleep;
      }
      
      Change all read occurrences of sk_sleep to a call to this function.
      
      Needed for a future RCU conversion; sk_sleep won't be a directly
      available field.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  15. 20 April 2010, 5 commits
  16. 18 April 2010, 2 commits
    • net: Introduce skb_orphan_try() · fc6055a5
      Eric Dumazet committed
      A transmitted skb might be attached to a socket and a destructor, for
      memory accounting purposes.
      
      Traditionally, this destructor is called at tx completion time, when
      the skb is freed.
      
      When tx completion is performed by a cpu other than the sender, this
      forces some cache lines to change ownership. XPS was an attempt to
      give tx completion back to the initial cpu.
      
      David's idea is to call the destructor right before giving the skb to
      the device (the call to ndo_start_xmit()). Because device queues are
      usually small, orphaning the skb before tx completion is not a big
      deal. Some drivers already do this; we can do it at the upper level.
      
      There is one known exception to this early orphaning, namely tx
      timestamping: it needs to keep a reference to the socket until the
      device can provide a hardware or software timestamp.
      
      This patch adds a skb_orphan_try() helper, to centralize all
      exceptions to early orphaning in one spot, and uses it in
      dev_hard_start_xmit().
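      
      The helper, sketched (the tx-timestamping case keeps the socket
      reference):
      
      static inline void skb_orphan_try(struct sk_buff *skb)
      {
      	if (!skb_tx(skb)->flags)	/* no tx timestamp requested */
      		skb_orphan(skb);	/* run destructor, drop socket ref */
      }
      
      /* dev_hard_start_xmit(), right before ndo_start_xmit(): */
      	skb_orphan_try(skb);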
      
      "tbench 16" results on a Nehalem machine (2 X5570  @ 2.93GHz)
      before: Throughput 4428.9 MB/sec 16 procs
      after: Throughput 4448.14 MB/sec 16 procs
      
      UDP should get even better results, its destructor being more complex
      since SOCK_USE_WRITE_QUEUE is not set (four atomic ops instead of
      one).
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: remove time limit in process_backlog() · 9958da05
      Eric Dumazet committed
      - There is no point in enforcing a time limit in process_backlog(),
      since other napi instances don't follow the same rule: we could exit
      after processing only one packet. The normal quota of 64 packets per
      napi instance should be the norm, and net_rx_action() already has its
      own time limit.
      Note: /proc/sys/net/core/dev_weight can be used to tune this default
      value of 64.
      
      - Use DEFINE_PER_CPU_ALIGNED for softnet_data definition.
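      
      For reference, the alignment change is a one-liner (sketch):
      
      DEFINE_PER_CPU_ALIGNED(struct softnet_data, softnet_data);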
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  17. 17 April 2010, 2 commits
    • rps: rps_sock_flow_table is mostly read · 8770acf0
      Eric Dumazet committed
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • rfs: Receive Flow Steering · fec5e652
      Tom Herbert committed
      This patch implements receive flow steering (RFS).  RFS steers
      received packets for layer 3 and 4 processing to the CPU where
      the application for the corresponding flow is running.  RFS is an
      extension of Receive Packet Steering (RPS).
      
      The basic idea of RFS is that when an application calls recvmsg
      (or sendmsg), the application's running CPU is stored in a hash
      table indexed by the connection's rxhash, which is stored in
      the socket structure.  The rxhash is carried in skbs received on
      the connection from netif_receive_skb.  For each received packet,
      the associated rxhash is used to look up the CPU in the hash table;
      if a valid CPU is set, the packet is steered to that CPU using
      the RPS mechanisms.
      
      The complication with this simple approach is that it would
      potentially allow out-of-order (OOO) packets.  If threads are
      thrashing around CPUs or multiple threads are trying to read from
      the same sockets, a quickly changing CPU value in the hash table
      could cause rampant OOO packets -- we consider this a non-starter.
      
      To avoid OOO packets, this solution implements two types of hash
      tables: rps_sock_flow_table and rps_dev_flow_table.
      
      rps_sock_flow_table is a global hash table.  Each entry is just a CPU
      number, and it is populated in recvmsg and sendmsg as described above.
      This table contains the "desired" CPUs for flows.
      
      rps_dev_flow_table is specific to each device queue.  Each entry
      contains a CPU and a tail queue counter.  The CPU is the "current"
      CPU for a matching flow.  The tail queue counter holds the value
      of a tail queue counter for the associated CPU's backlog queue at
      the time of last enqueue for a flow matching the entry.
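      
      Sketch of the two entry types described above (field names
      illustrative):
      
      struct rps_sock_flow_table {	/* global: "desired" CPUs */
      	unsigned int	mask;
      	u16		ents[0];	/* ents[rxhash & mask] = CPU */
      };
      
      struct rps_dev_flow {		/* per rx queue: "current" CPUs */
      	u16		cpu;
      	unsigned int	last_qtail;	/* backlog tail count at last
      					 * enqueue for this flow */
      };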
      
      Each backlog queue has a queue head counter which is incremented
      on dequeue, and so a queue tail counter is computed as queue head
      count + queue length.  When a packet is enqueued on a backlog queue,
      the current value of the queue tail counter is saved in the hash
      entry of the rps_dev_flow_table.
      
      And now the trick: when selecting the CPU for RPS (get_rps_cpu)
      the rps_sock_flow table and the rps_dev_flow table for the RX queue
      are consulted.  When the desired CPU for the flow (found in the
      rps_sock_flow table) does not match the current CPU (found in the
      rps_dev_flow table), the current CPU is changed to the desired CPU
      if one of the following is true:
      
      - The current CPU is unset (equal to RPS_NO_CPU)
      - The current CPU is offline
      - The current CPU's queue head counter >= queue tail counter in the
      rps_dev_flow table.  This checks whether the queue tail has advanced
      beyond the last packet that was enqueued using this table entry,
      guaranteeing that all packets queued using this entry have been
      dequeued, thus preserving in-order delivery.
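      
      In pseudo-C, the switch in get_rps_cpu() reads roughly (sketch;
      queue_head_count() is a stand-in helper, field names illustrative):
      
      	next_cpu = sock_flow_table->ents[skb->rxhash & sock_flow_table->mask];
      	if (rflow->cpu != next_cpu &&
      	    (rflow->cpu == RPS_NO_CPU || !cpu_online(rflow->cpu) ||
      	     queue_head_count(rflow->cpu) >= rflow->last_qtail))
      		rflow->cpu = next_cpu;	/* old backlog fully drained: no OOO */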
      
      Making each queue have its own rps_dev_flow table has two advantages:
      1) the tail queue counters will be written on each receive, so
      keeping the table local to the interrupting CPU is good for locality.
      2) this allows lockless access to the table -- the CPU number and
      queue tail counter need to be accessed together under mutual
      exclusion from netif_receive_skb; we assume that this is only called
      from device napi_poll, which is non-reentrant.
      
      This patch implements RFS for TCP and connected UDP sockets.
      It should be usable for other flow-oriented protocols.
      
      There are two configuration parameters for RFS.  The
      "rps_flow_entries" kernel init parameter sets the number of
      entries in the rps_sock_flow_table; the per-rxqueue sysfs entry
      "rps_flow_cnt" contains the number of entries in the rps_dev_flow
      table for the rxqueue.  Both are rounded to a power of two.
      
      The obvious benefit of RFS (over just RPS) is that it achieves
      CPU locality between the receive processing for a flow and the
      application's processing; this can result in increased performance
      (higher pps, lower latency).
      
      The benefits of RFS are dependent on cache hierarchy, application
      load, and other factors.  On simple benchmarks, we don't necessarily
      see improvement and sometimes see degradation.  However, for more
      complex benchmarks and for applications where cache pressure is
      much higher, this technique seems to perform very well.
      
      Below are some benchmark results which show the potential benefit of
      this patch.  The netperf test has 500 instances of the netperf TCP_RR
      test with 1-byte requests and responses.  The RPC test is a
      request/response test similar in structure to the netperf RR test,
      with 100 threads on each host, but it does more work in userspace
      than netperf.
      
      e1000e on 8 core Intel
         No RFS or RPS		104K tps at 30% CPU
         No RFS (best RPS config):    290K tps at 63% CPU
         RFS				303K tps at 61% CPU
      
      RPC test	tps	CPU%	50/90/99% usec latency	Latency StdDev
        No RFS/RPS	103K	48%	757/900/3185		4472.35
        RPS only:	174K	73%	415/993/2468		491.66
        RFS		223K	73%	379/651/1382		315.61
      Signed-off-by: Tom Herbert <therbert@google.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  18. 15 April 2010, 2 commits
  19. 14 April 2010, 2 commits