1. 10 Dec 2010, 2 commits
    • net: optimize INET input path further · 68835aba
      Committed by Eric Dumazet
      Follow-up to commit b178bb3d (net: reorder struct sock fields)
      
      Optimize the INET input path a bit further by:
      
      1) moving sk_refcnt close to sk_lock.
      
      This reduces the number of dirtied cache lines by one on 64-bit arches
      (with a 64-byte cache line size).
      
      2) moving inet_daddr & inet_rcv_saddr to the beginning of sk
      
      (same cache line as hash / family / bound_dev_if / nulls_node)
      
      This reduces the number of cache lines accessed in lookups by one,
      and does not increase the size of inet and timewait socks.
      inet and tw sockets now share the same placeholder for these fields.
      
      Before the patch:
      
      offsetof(struct sock, sk_refcnt) = 0x10
      offsetof(struct sock, sk_lock) = 0x40
      offsetof(struct sock, sk_receive_queue) = 0x60
      offsetof(struct inet_sock, inet_daddr) = 0x270
      offsetof(struct inet_sock, inet_rcv_saddr) = 0x274
      
      After the patch:
      
      offsetof(struct sock, sk_refcnt) = 0x44
      offsetof(struct sock, sk_lock) = 0x48
      offsetof(struct sock, sk_receive_queue) = 0x68
      offsetof(struct inet_sock, inet_daddr) = 0x0
      offsetof(struct inet_sock, inet_rcv_saddr) = 0x4
      
      compute_score() (udp or tcp) now uses a single cache line per ignored
      item, instead of two.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      68835aba
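      A minimal standalone sketch of the layout idea above, using a
      hypothetical simplified struct rather than the real struct sock:
      fields that lookups read together are grouped at the front so they
      share one 64-byte cache line, which offsetof() makes easy to verify.

          #include <stdio.h>
          #include <stddef.h>
          #include <stdint.h>

          /* Hypothetical, simplified socket layout: the lookup keys are
           * grouped at the front so a lookup touches offsets 0..63, i.e.
           * a single 64-byte cache line. */
          struct demo_sock {
              uint32_t daddr;             /* destination address */
              uint32_t rcv_saddr;         /* bound local address */
              uint32_t hash;
              uint16_t family;
              uint16_t bound_dev_if;
              char     other_state[200];  /* rarely-read fields follow */
          };

          int main(void)
          {
              printf("daddr @ %zu, rcv_saddr @ %zu, hash @ %zu\n",
                     offsetof(struct demo_sock, daddr),
                     offsetof(struct demo_sock, rcv_saddr),
                     offsetof(struct demo_sock, hash));
              return 0;
          }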
    • net: Abstract away all dst_entry metrics accesses. · defb3519
      Committed by David S. Miller
      Use helper functions to hide all direct accesses, especially writes,
      to dst_entry metrics values.
      
      This will allow us to:
      
      1) More easily change how the metrics are stored.
      
      2) Implement COW for metrics.
      
      In particular this will help us put metrics into the inetpeer
      cache if that is what we end up doing.  We can make the _metrics
      member a pointer instead of an array, initially have it point
      at the read-only metrics in the FIB, and then on the first set
      grab an inetpeer entry and point the _metrics member there.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      defb3519
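      A hedged sketch of the accessor pattern this commit describes, with
      simplified stand-in names rather than the actual kernel helpers: once
      every read and write goes through an inline function, the backing
      _metrics storage can later be switched to a COW pointer without
      touching any caller.

          #include <stdio.h>
          #include <stdint.h>

          #define DEMO_METRICS_MAX 16

          /* Simplified stand-in for struct dst_entry: _metrics is private
           * and only ever touched through the helpers below. */
          struct demo_dst {
              uint32_t _metrics[DEMO_METRICS_MAX];
          };

          static inline uint32_t demo_dst_metric(const struct demo_dst *dst,
                                                 int metric)
          {
              return dst->_metrics[metric];
          }

          static inline void demo_dst_metric_set(struct demo_dst *dst,
                                                 int metric, uint32_t val)
          {
              /* Because every write funnels through here, _metrics could
               * later become a pointer: point at read-only FIB metrics,
               * copy on the first write (COW), then store into the copy. */
              dst->_metrics[metric] = val;
          }

          int main(void)
          {
              struct demo_dst dst = { {0} };

              demo_dst_metric_set(&dst, 3, 1460);
              printf("metric 3 = %u\n", (unsigned)demo_dst_metric(&dst, 3));
              return 0;
          }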
  2. 09 Dec 2010, 3 commits
  3. 07 Dec 2010, 6 commits
  4. 03 Dec 2010, 5 commits
  5. 02 Dec 2010, 4 commits
  6. 01 Dec 2010, 5 commits
  7. 30 Nov 2010, 3 commits
  8. 29 Nov 2010, 5 commits
  9. 28 Nov 2010, 1 commit
    • rtnl: make link af-specific updates atomic · cf7afbfe
      Committed by Thomas Graf
      As David correctly pointed out, updates to af-specific attributes
      are currently not atomic. If multiple changes are requested and
      one of them fails, previous updates may already have been applied,
      leaving the link behind in an undefined state.
      
      This patch splits the function parse_link_af() into two functions,
      validate_link_af() and set_link_af(). validate_link_af() is called
      from validate_linkmsg() to check for errors as early as possible,
      before any changes to the link have been made. set_link_af() is
      called to commit the changes later.
      
      This method is not fail-proof: while it is currently sufficient to
      make set_link_af() infallible and thus 100% atomic, the validation
      approach will not be able to detect all error scenarios in the
      future. There will likely always be errors that depend on state
      which is, for example, not protected by rtnl_mutex and may therefore
      change between validation and setting.
      
      Also, instead of silently ignoring unknown address families, and
      config blocks for address families which did not register a set
      function, the errors EAFNOSUPPORT and EOPNOTSUPP respectively are
      returned, to avoid committing 4 out of 5 update requests without
      notifying the user.
      Signed-off-by: Thomas Graf <tgraf@infradead.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cf7afbfe
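      A generic sketch of the validate-then-commit split described above,
      with hypothetical names rather than the rtnetlink code: every request
      is validated before any state changes, and the commit phase is
      written so it cannot fail, which is what makes the whole update
      atomic.

          #include <stdio.h>
          #include <stddef.h>
          #include <errno.h>

          #define MAX_FAMILY 8

          struct af_req { int family; int value; };

          /* Phase 1: may fail, must not change any state. */
          static int validate_af_req(const struct af_req *req)
          {
              if (req->family < 0 || req->family >= MAX_FAMILY)
                  return -EAFNOSUPPORT;   /* unknown address family */
              if (req->value < 0)
                  return -EOPNOTSUPP;     /* no setter registered */
              return 0;
          }

          /* Phase 2: infallible by construction, applies one change. */
          static void commit_af_req(const struct af_req *req, int *state)
          {
              state[req->family] = req->value;
          }

          static int apply_all(const struct af_req *reqs, size_t n,
                               int *state)
          {
              size_t i;
              int err;

              for (i = 0; i < n; i++) {     /* validate everything first */
                  err = validate_af_req(&reqs[i]);
                  if (err)
                      return err;           /* nothing committed yet */
              }
              for (i = 0; i < n; i++)       /* then commit; cannot fail */
                  commit_af_req(&reqs[i], state);
              return 0;
          }

          int main(void)
          {
              int state[MAX_FAMILY] = {0};
              struct af_req reqs[] = { {2, 100}, {10, 5} }; /* 2nd invalid */

              int err = apply_all(reqs, 2, state);
              printf("err=%d, state[2]=%d (untouched on failure)\n",
                     err, state[2]);
              return 0;
          }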
  10. 25 Nov 2010, 6 commits
    • cfg80211: allow using CQM event to notify packet loss · c063dbf5
      Committed by Johannes Berg
      This adds the ability for drivers to use CQM events to notify about
      packet loss for specific stations (which could be the AP in the
      managed-mode case). Since the threshold might be determined by the
      driver (it isn't passed in right now), the packet count is passed
      out of the driver to userspace in the event.
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: John W. Linville <linville@tuxdriver.com>
      c063dbf5
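      An illustrative driver-side sketch with hypothetical names
      (report_cqm_pktloss stands in for the cfg80211 notifier this commit
      adds): the driver tracks consecutive lost frames per station against
      its own threshold and reports the count upward, so userspace sees
      the number in the event.

          #include <stdio.h>
          #include <stdint.h>
          #include <stdbool.h>

          /* Hypothetical per-station state kept by a driver. */
          struct sta_stats {
              uint32_t consecutive_lost;
              uint32_t loss_threshold;   /* chosen by the driver itself */
          };

          /* Stand-in for the cfg80211 notifier: the real event carries
           * the peer address and packet count up to userspace. */
          static void report_cqm_pktloss(const uint8_t *peer,
                                         uint32_t num_packets)
          {
              printf("CQM: %u packets lost to peer %02x:...\n",
                     (unsigned)num_packets, peer[0]);
          }

          static void tx_status(struct sta_stats *sta, const uint8_t *peer,
                                bool acked)
          {
              if (acked) {
                  sta->consecutive_lost = 0;
                  return;
              }
              /* The threshold is driver-determined, so the count itself
               * is included in the event for userspace. */
              if (++sta->consecutive_lost >= sta->loss_threshold) {
                  report_cqm_pktloss(peer, sta->consecutive_lost);
                  sta->consecutive_lost = 0;
              }
          }

          int main(void)
          {
              struct sta_stats sta = { 0, 3 };
              const uint8_t ap[6] = { 0x02, 0, 0, 0, 0, 1 };

              for (int i = 0; i < 4; i++)
                  tx_status(&sta, ap, false);  /* fires once at 3 losses */
              return 0;
          }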
    • 79b1c460
    • cfg80211/mac80211: improve ad-hoc multicast rate handling · dd5b4cc7
      Committed by Felix Fietkau
      - store the multicast rate as an index instead of the raw rate value
        (reduces CPU overhead in a hot path)
      - validate the rate values (each must match a bitrate in at least
        one sband)
      Signed-off-by: Felix Fietkau <nbd@openwrt.org>
      Signed-off-by: John W. Linville <linville@tuxdriver.com>
      dd5b4cc7
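      A small standalone sketch of the two points above, with hypothetical
      types rather than the mac80211 structures: the configured rate is
      validated against a band's bitrate table once, and the matching
      index is stored so the transmit hot path avoids a per-packet search.

          #include <stdio.h>

          struct demo_sband {
              const int *bitrates;   /* units of 100 kbps: 10 == 1 Mbps */
              int n_bitrates;
          };

          /* Return the rate index if 'rate' matches a bitrate in this
           * band, or -1 if it is not valid for the band. */
          static int find_rate_idx(const struct demo_sband *sband, int rate)
          {
              int i;

              for (i = 0; i < sband->n_bitrates; i++)
                  if (sband->bitrates[i] == rate)
                      return i;
              return -1;
          }

          int main(void)
          {
              static const int rates_2ghz[] = { 10, 20, 55, 110, 60, 90 };
              struct demo_sband sband = { rates_2ghz, 6 };

              /* Validate once at configuration time and keep the index;
               * the TX hot path then uses the index directly. */
              int mcast_rate_idx = find_rate_idx(&sband, 55);
              printf("multicast rate index: %d\n", mcast_rate_idx);
              return 0;
          }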
    • Revert "nl80211/mac80211: Report signal average" · ccb14354
      Committed by John W. Linville
      This reverts commit 86107fd1.
      
      The reverted patch inadvertently changed the userland ABI.
      Signed-off-by: John W. Linville <linville@tuxdriver.com>
      ccb14354
    • xps: Transmit Packet Steering · 1d24eb48
      Committed by Tom Herbert
      This patch implements transmit packet steering (XPS) for multiqueue
      devices.  XPS selects a transmit queue during packet transmission
      based on configuration, by mapping the CPU transmitting the packet
      to a queue.  This is the transmit-side analogue to RPS: where RPS
      selects a CPU based on the receive queue, XPS selects a queue based
      on the CPU.  (There was previously an XPS patch from Eric Dumazet,
      but that might more appropriately be called transmit completion
      steering.)
      
      Each transmit queue can be associated with a number of CPUs which will
      use the queue to send packets.  This is configured as a CPU mask on a
      per-queue basis in:
      
      /sys/class/net/eth<n>/queues/tx-<n>/xps_cpus
      
      The mappings are stored per device in an inverted data structure
      that maps CPUs to queues.  In the netdevice structure this is an
      array with one entry per possible CPU, where each entry holds an
      array of queue indexes for the queues that CPU can use (see the
      sketch after this commit message).
      
      The benefits of XPS are improved locality in the per-queue data
      structures.  Also, transmit completions are more likely to be done
      nearer to the sending thread, so this should promote locality back
      to the socket on free (e.g. UDP).  The benefits of XPS depend on the
      cache hierarchy, application load, and other factors.  XPS would
      nominally be configured so that a queue is only shared by CPUs which
      share a cache; the degenerate configuration would be that each CPU
      has its own queue.
      
      Below are some benchmark results which show the potential benefit of
      this patch.  The netperf test has 500 instances of the netperf
      TCP_RR test with 1-byte requests and responses.
      
      bnx2x on 16 core AMD
         XPS (16 queues, 1 TX queue per CPU)  1234K at 100% CPU
         No XPS (16 queues)                   996K at 100% CPU
      Signed-off-by: Tom Herbert <therbert@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1d24eb48
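      A hedged sketch of the inverted CPU-to-queues table described in the
      message above, simplified and without the RCU protection the kernel
      would need: the per-device table is indexed by CPU id, and each
      entry lists the TX queues that CPU may use, so queue selection on
      transmit is a direct lookup.

          #include <stdio.h>

          #define DEMO_NR_CPUS    16
          #define DEMO_MAX_QUEUES  4

          struct demo_xps_map {
              int len;                      /* number of usable queues */
              int queues[DEMO_MAX_QUEUES];  /* queue indexes for this CPU */
          };

          struct demo_xps_dev_maps {
              struct demo_xps_map cpu_map[DEMO_NR_CPUS];  /* by CPU id */
          };

          /* Pick a TX queue for the transmitting CPU: a direct lookup,
           * spread by a flow hash when several queues are allowed. */
          static int demo_pick_tx_queue(const struct demo_xps_dev_maps *maps,
                                        int cpu, unsigned int hash)
          {
              const struct demo_xps_map *map = &maps->cpu_map[cpu];

              if (map->len == 0)
                  return -1;    /* no XPS mapping: caller falls back */
              return map->queues[hash % map->len];
          }

          int main(void)
          {
              struct demo_xps_dev_maps maps = {0};

              /* CPU 2 may use queues 2 and 3, as if configured through
               * the per-queue xps_cpus masks shown above. */
              maps.cpu_map[2].len = 2;
              maps.cpu_map[2].queues[0] = 2;
              maps.cpu_map[2].queues[1] = 3;
              printf("cpu 2 -> queue %d\n",
                     demo_pick_tx_queue(&maps, 2, 12345));
              return 0;
          }

      Configuration itself happens through the per-queue sysfs masks shown
      in the message; the kernel rebuilds this inverted per-CPU table from
      those masks.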
    • xps: Improvements in TX queue selection · 3853b584
      Committed by Tom Herbert
      In dev_pick_tx, don't do the work of calculating a queue index or
      setting the index in the sock unless the device has more than one
      queue.  This allows the sock to be set only with a queue index of a
      multi-queue device, which is desirable when devices are stacked, as
      in a tunnel.
      
      We also allow the mapping of a socket to a queue to be changed.  To
      maintain in-order packet transmission, a flag (ooo_okay) has been
      added to the sk_buff structure.  If a transport layer sets this flag
      on a packet, the transmit queue can be changed for the socket.
      Presumably, the transport would set this if there were no
      possibility of creating OOO packets (for instance, when there are no
      packets in flight for the socket).  This patch includes the
      modification in TCP output for setting this flag.
      Signed-off-by: Tom Herbert <therbert@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3853b584
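      A simplified sketch of the ooo_okay idea, not the exact kernel code:
      the socket's cached queue mapping is only rewritten when the
      transport has marked the packet as safe to remap, for instance when
      nothing is in flight.

          #include <stdio.h>
          #include <stdbool.h>

          struct demo_skb {
              bool ooo_okay;    /* transport: reordering is impossible */
          };

          struct demo_sock {
              int tx_queue;             /* cached mapping, -1 if unset */
              int packets_in_flight;
          };

          /* Transport side: safe to remap only when nothing is in
           * flight for this socket. */
          static void transport_mark(struct demo_skb *skb,
                                     const struct demo_sock *sk)
          {
              skb->ooo_okay = (sk->packets_in_flight == 0);
          }

          /* Device side: recompute the cached queue only when allowed. */
          static int pick_tx(struct demo_sock *sk,
                             const struct demo_skb *skb,
                             int preferred_queue)
          {
              if (sk->tx_queue < 0 || skb->ooo_okay)
                  sk->tx_queue = preferred_queue;
              return sk->tx_queue;
          }

          int main(void)
          {
              struct demo_sock sk = { -1, 1 };
              struct demo_skb skb;

              transport_mark(&skb, &sk);  /* busy: ooo_okay stays false */
              printf("queue %d\n", pick_tx(&sk, &skb, 5)); /* maps to 5 */

              sk.packets_in_flight = 0;
              transport_mark(&skb, &sk);  /* idle: remap allowed */
              printf("queue %d\n", pick_tx(&sk, &skb, 7)); /* remaps to 7 */
              return 0;
          }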