1. 06 May 2010 (1 commit)
    • netpoll: add generic support for bridge and bonding devices · 0e34e931
      Committed by WANG Cong
      This whole patchset adds netpoll support to bridge and bonding
      devices. I have tested it with bridge, bonding, bridge over bonding,
      and bonding over bridge; it all looks fine now.
      
      To make bridge and bonding support netpoll, we need to adjust
      some netpoll generic code. This patch does the following things:
      
      1) introduce two new priv_flags for struct net_device:
         IFF_IN_NETPOLL, which indicates that we are processing a netpoll;
         IFF_DISABLE_NETPOLL, which disables netpoll support for a device
         at run-time;
      
      2) introduce one new method for netdev_ops:
         ->ndo_netpoll_cleanup() is used to clean up netpoll when a device is
           removed.
      
      3) introduce netpoll_poll_dev() which takes a struct net_device * parameter;
         export netpoll_send_skb() and netpoll_poll_dev() which will be used later;
      
      4) hide a pointer to struct netpoll in struct netpoll_info, ditto.
      
      5) introduce ->real_dev for struct netpoll.
      
      6) introduce a new status, NETDEV_BONDING_DESLAVE, which is used to
         disable netconsole before releasing a slave, to avoid deadlocks.
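
      A minimal sketch of how a stacked device might use the new hooks
      (illustrative only: the function names below are hypothetical, and
      the real patch reference-counts npinfo rather than freeing it
      directly):

          #include <linux/netdevice.h>
          #include <linux/netpoll.h>

          /* Items 2 and 4: a master device tears down its netpoll state
           * when it is removed; the struct netpoll pointer is carried
           * inside struct netpoll_info. */
          static void example_netpoll_cleanup(struct net_device *dev)
          {
                  struct netpoll_info *npinfo = dev->npinfo;

                  if (!npinfo)
                          return;
                  dev->npinfo = NULL;
                  kfree(npinfo->netpoll);
                  kfree(npinfo);
          }

          /* Item 1: a slave that cannot do netpoll opts out at run-time. */
          static void example_disable_netpoll(struct net_device *slave)
          {
                  slave->priv_flags |= IFF_DISABLE_NETPOLL;
          }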
      
      Cc: David Miller <davem@davemloft.net>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: WANG Cong <amwang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0e34e931
  2. 05 May 2010 (1 commit)
    • net: __alloc_skb() speedup · ec7d2f2c
      Committed by Eric Dumazet
      With the following patch I can reach the maximum rate of my
      pktgen+udpsink simulator:
      - 'old' machine: dual quad-core E5450 @ 3.00GHz
      - 64 UDP rx flows (differing only by destination port)
      - RPS enabled, NIC interrupts serviced on cpu0
      - RPS dispatched to the 7 other cores (~130,000 IPIs per second)
      - SLAB allocator (faster than SLUB in this workload)
      - tg3 NIC
      - 1,080,000 pps without a single drop at the NIC level.
      
      The idea is to add two prefetchw() calls in __alloc_skb(), one to
      prefetch the first sk_buff cache line, the second to prefetch the
      shinfo part.
      
      Also use one memset() to initialize all skb_shared_info fields
      instead of setting them one by one, reducing the number of
      instructions by using long-word moves.
      
      All skb_shared_info fields before 'dataref' are cleared in
      __alloc_skb().
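
      A compressed sketch of the two changes inside __alloc_skb(),
      assuming 'skb', 'data' and the cache-aligned 'size' from the
      surrounding code (this mirrors the description, not the literal
      diff):

          skb = kmem_cache_alloc_node(cache, gfp_mask & ~__GFP_DMA, node);
          if (!skb)
                  goto out;
          prefetchw(skb);             /* warm first sk_buff cache line */

          data = kmalloc_node_track_caller(size +
                          sizeof(struct skb_shared_info), gfp_mask, node);
          if (!data)
                  goto nodata;
          prefetchw(data + size);     /* warm the skb_shared_info area */
          ...
          /* one memset (long-word moves) instead of field-by-field
           * stores; everything before 'dataref' is cleared, then
           * dataref itself is set */
          shinfo = skb_shinfo(skb);
          memset(shinfo, 0, offsetof(struct skb_shared_info, dataref));
          atomic_set(&shinfo->dataref, 1);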
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ec7d2f2c
  3. 04 May 2010 (1 commit)
  4. 03 May 2010 (3 commits)
    • tun: add ioctl to modify vnet header size · d9d52b51
      Committed by Michael S. Tsirkin
      virtio added a mergeable buffers mode where 2 bytes of extra info
      are put after the vnet header but before the actual data (tun does
      not need this data). In hindsight, it would have been better to add
      the new info *before* the packet: as it is, users need a lot of
      tricky code to skip the extra 2 bytes in the middle of the iovec,
      and in fact applications seem to get it wrong and only work with a
      specific iovec layout. The fact that we might need to split the
      iovec also means we might in theory overflow the iovec max size.
      
      This patch adds a simpler way for applications to handle this, and
      future-proofs the interface against further extensions, by making
      the size of the virtio net header configurable from userspace. As a
      result, the tun driver will simply skip the extra 2 bytes on both
      input and output.
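
      For illustration, userspace could negotiate the larger header like
      this (a hedged sketch; TUNSETVNETHDRSZ is the ioctl added here, and
      the surrounding tun setup with IFF_VNET_HDR is assumed):

          #include <sys/ioctl.h>
          #include <linux/if_tun.h>
          #include <linux/virtio_net.h>

          /* 'fd' is an open tun fd already configured with IFF_VNET_HDR. */
          static int use_mergeable_rxbuf_hdr(int fd)
          {
                  /* virtio_net_hdr plus the 2 extra bytes (num_buffers) */
                  int len = sizeof(struct virtio_net_hdr_mrg_rxbuf);

                  return ioctl(fd, TUNSETVNETHDRSZ, &len);
          }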
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      d9d52b51
    • net: fix softnet_stat · dee42870
      Committed by Changli Gao
      The per-cpu variable softnet_data.total was shared between IRQ and
      SoftIRQ context without any protection, and enqueue_to_backlog
      should update the netdev_rx_stat of the target CPU.
      
      This patch renames softnet_data.total to softnet_data.processed:
      the number of packets processed in upper levels (the IP stacks).
      
      The softnet_stat data is moved into softnet_data.
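
      A sketch of the resulting per-cpu layout, assuming the old
      netdev_rx_stat counters move over unchanged apart from the rename
      (other members of the real struct are omitted):

          struct softnet_data {
                  /* ... queues, poll list, etc. ... */
                  unsigned int processed;   /* was netdev_rx_stat.total */
                  unsigned int time_squeeze;
                  unsigned int cpu_collision;
                  unsigned int dropped;
                  /* ... */
          };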
      Signed-off-by: Changli Gao <xiaosuo@gmail.com>
      ----
       include/linux/netdevice.h |   17 +++++++----------
       net/core/dev.c            |   26 ++++++++++++--------------
       net/sched/sch_generic.c   |    2 +-
       3 files changed, 20 insertions(+), 25 deletions(-)
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      dee42870
  5. 02 May 2010 (2 commits)
    • net: Inline skb_pull() in eth_type_trans(). · 47d29646
      Committed by David S. Miller
      In commit 6be8ac2f ("[NET]: uninline skb_pull, de-bloats a lot")
      we uninlined skb_pull.
      
      But in some critical paths it makes sense to inline it, and doing
      so helps performance significantly.
      
      Create an skb_pull_inline() so that we can do this in a way that
      also serves as annotation.
      
      Based upon a patch by Eric Dumazet.
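
      The helper is tiny; a sketch consistent with the description above
      (same logic as skb_pull(), just visible to the compiler):

          static inline unsigned char *skb_pull_inline(struct sk_buff *skb,
                                                       unsigned int len)
          {
                  skb->len -= len;
                  BUG_ON(skb->len < skb->data_len);
                  return skb->data += len;
          }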
      Signed-off-by: David S. Miller <davem@davemloft.net>
      47d29646
    • net: sock_def_readable() and friends RCU conversion · 43815482
      Committed by Eric Dumazet
      The sk_callback_lock rwlock actually protects the sk->sk_sleep
      pointer, so we need two atomic operations (and the associated
      cache-line dirtying) per incoming packet.
      
      An RCU conversion is pretty much needed:
      
      1) Add a new structure, called "struct socket_wq" to hold all fields
      that will need rcu_read_lock() protection (currently: a
      wait_queue_head_t and a struct fasync_struct pointer).
      
      [Future patch will add a list anchor for wakeup coalescing]
      
      2) Attach one such structure to each "struct socket" created in
      sock_alloc_inode().
      
      3) Respect the RCU grace period when freeing a "struct socket_wq".
      
      4) Replace the sk_sleep pointer in "struct sock" with sk_wq, a
      pointer to "struct socket_wq".
      
      5) Change the sk_sleep() function to use the new sk->sk_wq instead
      of sk->sk_sleep.
      
      6) Replace sk_has_sleeper() with wq_has_sleeper(), which must be
      used inside an rcu_read_lock() section.
      
      7) Change all sk_has_sleeper() callers to:
        - Use rcu_read_lock() instead of read_lock(&sk->sk_callback_lock)
        - Use wq_has_sleeper() to decide whether to wake up tasks
        - Use rcu_read_unlock() instead of read_unlock(&sk->sk_callback_lock)
      
      8) sock_wake_async() is modified to use RCU protection as well.
      
      9) Exceptions:
        macvtap, drivers/net/tun.c and af_unix use an integrated
      "struct socket_wq" instead of dynamically allocated ones; they
      don't need RCU freeing.
      
      Some cleanups or follow-ups are probably needed (a possible
      sk_callback_lock conversion to a spinlock, for example).
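
      The resulting wakeup pattern, sketched for sock_def_readable()
      (consistent with items 6 and 7 above; the poll flags shown are
      illustrative):

          static void sock_def_readable(struct sock *sk, int len)
          {
                  struct socket_wq *wq;

                  rcu_read_lock();
                  wq = rcu_dereference(sk->sk_wq);
                  /* checked under RCU, not sk_callback_lock */
                  if (wq_has_sleeper(wq))
                          wake_up_interruptible_sync_poll(&wq->wait,
                                          POLLIN | POLLRDNORM);
                  sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
                  rcu_read_unlock();
          }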
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      43815482
  6. 29 April 2010 (1 commit)
  7. 28 April 2010 (4 commits)
  8. 27 April 2010 (4 commits)
  9. 26 April 2010 (1 commit)
    • net: rtnetlink: decouple rtnetlink address families from real address families · 25239cee
      Committed by Patrick McHardy
      Decouple rtnetlink address families from the real address families
      in socket.h, to be able to add rtnetlink interfaces to code that is
      not a real address family without increasing AF_MAX/NPROTO.
      
      This will be used to add support for multicast route dumping from
      all tables, as the proc interface can't be extended to support
      anything but the main table without breaking compatibility.
      
      This partially undoes the patch that introduced independent families
      for routing rules, and converts ipmr routing rules to a new
      rtnetlink family. As in that patch, values up to 127 are reserved
      for real address families; values above that may be used arbitrarily.
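
      A minimal sketch of the resulting convention (the ipmr conversion in
      this series uses a value in the new range; exact names and values
      are per the patch series):

          /* include/linux/rtnetlink.h (sketch): values up to 127 are
           * real address families; rtnetlink-only families sit above. */
          #define RTNL_FAMILY_IPMR        128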
      Signed-off-by: Patrick McHardy <kaber@trash.net>
      25239cee
  10. 24 April 2010 (2 commits)
  11. 23 April 2010 (4 commits)
    • remove DCB_PROTO_VERSION as we don't do netlink versioning · 286d1e7f
      Committed by Scott Feldman
      Signed-off-by: Scott Feldman <scofeldm@cisco.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      286d1e7f
    • X25: Add if_x25.h and x25 to device identifiers · 5ebfbc06
      Committed by Andrew Hendry
      V2 (feedback from John Hughes):
      - Add a header for userspace implementations such as xot/xoe to use
      - Use explicit values for interface stability
      - No changes to the driver patches
      
      V1:
      - Use identifiers instead of magic numbers for the X25 layer 3 to
        device interface.
      - Also fixed checkpatch notes on the updated code.
      
      [ Add new user header to include/linux/Kbuild  -DaveM ]
      Signed-off-by: Andrew Hendry <andrew.hendry@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5ebfbc06
    • net: Socket filter ancillary data access for skb->dev->type · 40eaf962
      Committed by Paul LeoNerd Evans
      Add an SKF_AD_HATYPE field to the packet ancillary data area, giving
      access to skb->dev->type, as reported in the sll_hatype field.
      
      When capturing packets on a PF_PACKET/SOCK_RAW socket bound to all
      interfaces, there doesn't appear to be a way for the filter program
      to find out the underlying hardware type the packet was captured on.
      This patch adds that ability.
      
      This patch also handles the case where skb->dev can be NULL, such as
      on netlink sockets.
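
      A sketch of a classic BPF program using the new field (attachment
      via the usual SO_ATTACH_FILTER is assumed):

          #include <linux/filter.h>
          #include <linux/if_arp.h>

          /* Accept a packet only if it arrived on an Ethernet device. */
          static struct sock_filter hatype_eth[] = {
                  /* A = skb->dev->type, via the ancillary data offset */
                  BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                           SKF_AD_OFF + SKF_AD_HATYPE),
                  BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, ARPHRD_ETHER, 0, 1),
                  BPF_STMT(BPF_RET | BPF_K, 0xffff),  /* match: accept */
                  BPF_STMT(BPF_RET | BPF_K, 0),       /* else: drop   */
          };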
      Signed-off-by: Paul Evans <leonerd@leonerd.org.uk>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      40eaf962
    • IPv6: Generic TTL Security Mechanism (final version) · e802af9c
      Committed by Stephen Hemminger
      This patch adds IPv6 support for the RFC 5082 Generalized TTL
      Security Mechanism.
      
      A note to users of mapped addresses: the IPv6 and IPv4 socket
      options are separate. The server has to deal with both the IPv4 and
      IPv6 socket options, and the client has to handle the difference for
      each family.
      
      On client:
              int ttl = 255;
              getaddrinfo(argv[1], argv[2], &hint, &result);
      
              for (rp = result; rp != NULL; rp = rp->ai_next) {
                      s = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
                      if (s < 0) continue;
      
                      if (rp->ai_family == AF_INET) {
                              setsockopt(s, IPPROTO_IP, IP_TTL, &ttl, sizeof(ttl));
                      } else if (rp->ai_family == AF_INET6) {
                              setsockopt(s, IPPROTO_IPV6, IPV6_UNICAST_HOPS,
                                         &ttl, sizeof(ttl));
                      }
      
                      if (connect(s, rp->ai_addr, rp->ai_addrlen) == 0) {
                         ...
      
      On server:
              int minttl = 255 - maxhops;
      
              getaddrinfo(NULL, port, &hints, &result);
              for (rp = result; rp != NULL; rp = rp->ai_next) {
                      s = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
                      if (s < 0) continue;
      
                      if (rp->ai_family == AF_INET6)
                              setsockopt(s, IPPROTO_IPV6, IPV6_MINHOPCOUNT,
                                         &minttl, sizeof(minttl));
                      setsockopt(s, IPPROTO_IP, IP_MINTTL, &minttl, sizeof(minttl));
      
                      if (bind(s, rp->ai_addr, rp->ai_addrlen) == 0)
                              break;
      ...
      Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e802af9c
  12. 22 April 2010 (2 commits)
  13. 21 April 2010 (1 commit)
  14. 20 April 2010 (3 commits)
  15. 19 April 2010 (1 commit)
  16. 17 April 2010 (2 commits)
    • rfs: Receive Flow Steering · fec5e652
      Committed by Tom Herbert
      This patch implements receive flow steering (RFS).  RFS steers
      received packets for layer 3 and 4 processing to the CPU where
      the application for the corresponding flow is running.  RFS is an
      extension of Receive Packet Steering (RPS).
      
      The basic idea of RFS is that when an application calls recvmsg
      (or sendmsg) the application's running CPU is stored in a hash
      table that is indexed by the connection's rxhash which is stored in
      the socket structure.  The rxhash is passed in skb's received on
      the connection from netif_receive_skb.  For each received packet,
      the associated rxhash is used to look up the CPU in the hash table,
      if a valid CPU is set then the packet is steered to that CPU using
      the RPS mechanisms.
      
      The complication with this simple approach is that it could allow
      out-of-order (OOO) packets. If threads are bouncing around CPUs, or
      multiple threads are trying to read from the same sockets, a quickly
      changing CPU value in the hash table could cause rampant OOO
      packets; we consider this a non-starter.
      
      To avoid OOO packets, this solution implements two types of hash
      tables: rps_sock_flow_table and rps_dev_flow_table.
      
      rps_sock_flow_table is a global hash table.  Each entry is just a
      CPU number and it is populated in recvmsg and sendmsg as described
      above.  This table contains the "desired" CPUs for flows.
      
      rps_dev_flow_table is specific to each device queue.  Each entry
      contains a CPU and a tail queue counter.  The CPU is the "current"
      CPU for a matching flow.  The tail queue counter holds the value
      of a tail queue counter for the associated CPU's backlog queue at
      the time of last enqueue for a flow matching the entry.
      
      Each backlog queue has a queue head counter which is incremented
      on dequeue, and so a queue tail counter is computed as queue head
      count + queue length.  When a packet is enqueued on a backlog queue,
      the current value of the queue tail counter is saved in the hash
      entry of the rps_dev_flow_table.
      
      And now the trick: when selecting the CPU for RPS (get_rps_cpu()),
      the rps_sock_flow table and the rps_dev_flow table for the RX queue
      are consulted.  When the desired CPU for the flow (found in the
      rps_sock_flow table) does not match the current CPU (found in the
      rps_dev_flow table), the current CPU is changed to the desired CPU
      if one of the following is true:
      
      - The current CPU is unset (equal to RPS_NO_CPU)
      - The current CPU is offline
      - The current CPU's queue head counter >= the queue tail counter
      recorded in the rps_dev_flow table.  This checks whether the queue
      tail has advanced beyond the last packet that was enqueued using
      this table entry, which guarantees that all packets queued using
      this entry have been dequeued, thus preserving in-order delivery.
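
      A condensed sketch of that rule, using the names from the text (a
      fragment of get_rps_cpu(), not the literal diff):

          u16 next_cpu = sock_flow_table->ents[hash & sock_flow_table->mask];
          u16 tcpu = rflow->cpu;          /* "current" CPU for the flow */

          if (next_cpu != tcpu &&
              (tcpu == RPS_NO_CPU || !cpu_online(tcpu) ||
               (int)(per_cpu(softnet_data, tcpu).input_queue_head -
                     rflow->last_qtail) >= 0)) {
                  /* Every packet enqueued via this entry has been
                   * dequeued, so switching CPUs cannot reorder the flow. */
                  rflow->cpu = next_cpu;
          }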
      
      Making each queue have its own rps_dev_flow table has two
      advantages: 1) the tail queue counters are written on each receive,
      so keeping the table local to the interrupting CPU is good for
      locality; 2) it allows lockless access to the table: the CPU number
      and queue tail counter need to be read together, and the only
      mutual exclusion required is that netif_receive_skb is assumed to
      be called only from device napi_poll, which is non-reentrant.
      
      This patch implements RFS for TCP and connected UDP sockets.
      It should be usable for other flow oriented protocols.
      
      There are two configuration parameters for RFS.  The
      "rps_flow_entries" kernel init parameter sets the number of entries
      in the rps_sock_flow_table; the per-rxqueue sysfs entry
      "rps_flow_cnt" contains the number of entries in the rps_dev_flow
      table for that rxqueue.  Both are rounded up to a power of two.
      
      The obvious benefit of RFS (over just RPS) is that it achieves
      CPU locality between the receive processing for a flow and the
      application's processing; this can result in increased performance
      (higher pps, lower latency).
      
      The benefits of RFS are dependent on cache hierarchy, application
      load, and other factors.  On simple benchmarks, we don't necessarily
      see improvement and sometimes see degradation.  However, for more
      complex benchmarks and for applications where cache pressure is
      much higher this technique seems to perform very well.
      
      Below are some benchmark results which show the potential benefit
      of this patch.  The netperf test has 500 instances of the netperf
      TCP_RR test with 1-byte requests and responses.  The RPC test is a
      request/response test similar in structure to the netperf RR test,
      with 100 threads on each host, but it does more work in userspace
      than netperf.
      
      e1000e on 8 core Intel
         No RFS or RPS		104K tps at 30% CPU
         No RFS (best RPS config):    290K tps at 63% CPU
         RFS				303K tps at 61% CPU
      
      RPC test	tps	CPU%	50/90/99% usec latency	Latency StdDev
        No RFS/RPS	103K	48%	757/900/3185		4472.35
        RPS only:	174K	73%	415/993/2468		491.66
        RFS		223K	73%	379/651/1382		315.61
      Signed-off-by: Tom Herbert <therbert@google.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fec5e652
    • wl1251: add support for dedicated IRQ line · a02a2956
      Committed by Grazvydas Ignotas
      wl1251 has a WLAN_IRQ pin for generating interrupts to the host
      processor, which is mandatory in SPI mode and optional in SDIO mode
      (which can use SDIO interrupts instead). However, TI recommends
      using a dedicated IRQ line for SDIO too.
      
      Add support for using a dedicated interrupt line with SDIO, but also
      keep the ability to switch to SDIO interrupts in case it's needed.
      Signed-off-by: Grazvydas Ignotas <notasas@gmail.com>
      Reviewed-by: Bob Copeland <me@bobcopeland.com>
      Signed-off-by: John W. Linville <linville@tuxdriver.com>
      a02a2956
  17. 15 April 2010 (2 commits)
  18. 14 April 2010 (5 commits)
    • stmmac: new descriptor field for the driver's platform · e326e850
      Committed by Giuseppe CAVALLARO
      The new enh_desc field selects the enhanced descriptor structure.
      There are several scenarios: some chips (mac10/100 or gmac) want to
      use the enhanced descriptors; others want the normal ones.
      For example, on ST platforms MAC10/100 uses the normal descriptor
      structure and the GMAC uses the enhanced one.
      It is useful to get this information from the platform.
      It could also be decided at run-time by looking at the chip's ID
      number, but chips with the same ID may want to use different
      descriptor structures.
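
      A hedged sketch of how a board file might set the new field (the
      board data below is hypothetical; struct and field names per the
      commit):

          /* ST board with a GMAC: select the enhanced descriptor layout. */
          static struct plat_stmmacenet_data my_gmac_data = {
                  .bus_id   = 0,
                  .enh_desc = 1,  /* 1 = enhanced descriptors, 0 = normal */
          };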
      Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e326e850
    • rcu: Better explain the condition parameter of rcu_dereference_check() · c08c68dd
      Committed by David Howells
      Better explain the condition parameter of rcu_dereference_check(),
      which describes the conditions under which the dereference is
      permitted to take place (and incorporate Yong Zhang's suggestion).
      The condition is only checked when lockdep proving is enabled.
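
      For illustration (gp and gp_lock are hypothetical), the condition
      names every context that legitimizes the dereference:

          /* Legal under rcu_read_lock() OR with the update-side lock
           * held; lockdep verifies this condition when enabled. */
          p = rcu_dereference_check(gp,
                                    rcu_read_lock_held() ||
                                    lockdep_is_held(&gp_lock));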
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: eric.dumazet@gmail.com
      LKML-Reference: <1270852752-25278-2-git-send-email-paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c08c68dd
    • rcu: Add rcu_access_pointer and rcu_dereference_protected · b62730ba
      Committed by Paul E. McKenney
      This patch adds variants of rcu_dereference() that handle
      situations where the RCU-protected data structure cannot change,
      perhaps due to our holding the update-side lock, or where the
      RCU-protected pointer is only to be fetched, not dereferenced.
      These are needed due to some performance concerns with using
      rcu_dereference() where it is not required, aside from the need
      for lockdep/sparse checking.
      
      The new rcu_access_pointer() primitive is for the case where the
      pointer is to be fetched and not dereferenced.  This primitive may
      be used without protection, RCU or otherwise, because it uses
      ACCESS_ONCE().
      
      The new rcu_dereference_protected() primitive is for the case
      where updates are prevented, for example, by holding the
      update-side lock.  This primitive does neither ACCESS_ONCE() nor
      smp_read_barrier_depends(), so it can only be used when updates are
      somehow prevented.
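
      A short sketch of both primitives (struct foo, gp and gp_lock are
      hypothetical):

          struct foo { int field; };
          static struct foo __rcu *gp;
          static DEFINE_SPINLOCK(gp_lock);

          /* Fetched but never dereferenced: no protection required. */
          static bool gp_is_set(void)
          {
                  return rcu_access_pointer(gp) != NULL;
          }

          /* Update side: gp cannot change while gp_lock is held, so the
           * cheap variant (no ACCESS_ONCE(), no barrier) is safe. */
          static void gp_set_field(int v)
          {
                  struct foo *p;

                  spin_lock(&gp_lock);
                  p = rcu_dereference_protected(gp,
                                  lockdep_is_held(&gp_lock));
                  if (p)
                          p->field = v;
                  spin_unlock(&gp_lock);
          }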
      Suggested-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      Cc: eric.dumazet@gmail.com
      LKML-Reference: <1270852752-25278-1-git-send-email-paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b62730ba
    • ipv4: ipmr: support multiple tables · f0ad0860
      Committed by Patrick McHardy
      This patch adds support for multiple independent multicast routing
      instances, named "tables".
      
      Userspace multicast routing daemons can bind to a specific table instance by
      issuing a setsockopt call using a new option MRT_TABLE. The table number is
      stored in the raw socket data and affects all following ipmr setsockopt(),
      getsockopt() and ioctl() calls. By default, a single table (RT_TABLE_DEFAULT)
      is created with a default routing rule pointing to it. Newly created pimreg
      devices have the table number appended ("pimregX"), with the exception of
      devices created in the default table, which are named just "pimreg" for
      compatibility reasons.
      
      Packets are directed to a specific table instance using routing
      rules, similar to how regular routing rules work. Currently iif,
      oif and mark are supported as keys; source and destination
      addresses could be supported additionally.
      
      Example usage:
      
      - bind pimd/xorp/... to a specific table:
      
      uint32_t table = 123;
      setsockopt(fd, IPPROTO_IP, MRT_TABLE, &table, sizeof(table));
      
      - create routing rules directing packets to the new table:
      
      # ip mrule add iif eth0 lookup 123
      # ip mrule add oif eth0 lookup 123
      Signed-off-by: Patrick McHardy <kaber@trash.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f0ad0860
    • P