1. 31 Jul, 2013 — 3 commits
  2. 11 Jun, 2013 — 1 commit
  3. 26 Apr, 2013 — 2 commits
  4. 25 Apr, 2013 — 1 commit
  5. 28 Feb, 2013 — 1 commit
    •
      hlist: drop the node parameter from iterators · b67bfe0d
      Authored by Sasha Levin
      I'm not sure why, but the hlist for-each-entry iterators were not conceived like the list one:
      
              list_for_each_entry(pos, head, member)
      
      The hlist ones were greedy and wanted an extra parameter:
      
              hlist_for_each_entry(tpos, pos, head, member)
      
      Why did they need an extra pos parameter? I'm not quite sure. Not only
      do they not really need it, it also prevents the iterator from looking
      exactly like the list iterator, which is unfortunate.
      
      Besides the semantic patch, there was some manual work required:
      
       - Fix up the actual hlist iterators in linux/list.h
       - Fix up the declaration of other iterators based on the hlist ones.
       - A very small number of places were using the 'node' parameter; these
       were modified to use 'obj->member' instead.
       - Coccinelle didn't handle the hlist_for_each_entry_safe iterator
       properly, so those had to be fixed up manually.
      
      The semantic patch, which is mostly the work of Peter Senna Tschudin, is here:
      
      @@
      iterator name hlist_for_each_entry, hlist_for_each_entry_continue,
      hlist_for_each_entry_from, hlist_for_each_entry_rcu,
      hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh,
      for_each_busy_worker, ax25_uid_for_each, ax25_for_each,
      inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each,
      sk_for_each_rcu, sk_for_each_from, sk_for_each_safe,
      sk_for_each_bound, hlist_for_each_entry_safe,
      hlist_for_each_entry_continue_rcu, nr_neigh_for_each,
      nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe,
      for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;
      
      type T;
      expression a,c,d,e;
      identifier b;
      statement S;
      @@
      
      -T b;
          <+... when != b
      (
      hlist_for_each_entry(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue(a,
      - b,
      c) S
      |
      hlist_for_each_entry_from(a,
      - b,
      c) S
      |
      hlist_for_each_entry_rcu(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_rcu_bh(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue_rcu_bh(a,
      - b,
      c) S
      |
      for_each_busy_worker(a, c,
      - b,
      d) S
      |
      ax25_uid_for_each(a,
      - b,
      c) S
      |
      ax25_for_each(a,
      - b,
      c) S
      |
      inet_bind_bucket_for_each(a,
      - b,
      c) S
      |
      sctp_for_each_hentry(a,
      - b,
      c) S
      |
      sk_for_each(a,
      - b,
      c) S
      |
      sk_for_each_rcu(a,
      - b,
      c) S
      |
      sk_for_each_from
      -(a, b)
      +(a)
      S
      + sk_for_each_from(a) S
      |
      sk_for_each_safe(a,
      - b,
      c, d) S
      |
      sk_for_each_bound(a,
      - b,
      c) S
      |
      hlist_for_each_entry_safe(a,
      - b,
      c, d, e) S
      |
      hlist_for_each_entry_continue_rcu(a,
      - b,
      c) S
      |
      nr_neigh_for_each(a,
      - b,
      c) S
      |
      nr_neigh_for_each_safe(a,
      - b,
      c, d) S
      |
      nr_node_for_each(a,
      - b,
      c) S
      |
      nr_node_for_each_safe(a,
      - b,
      c, d) S
      |
      - for_each_gfn_sp(a, c, d, b) S
      + for_each_gfn_sp(a, c, d) S
      |
      - for_each_gfn_indirect_valid_sp(a, c, d, b) S
      + for_each_gfn_indirect_valid_sp(a, c, d) S
      |
      for_each_host(a,
      - b,
      c) S
      |
      for_each_host_safe(a,
      - b,
      c, d) S
      |
      for_each_mesh_entry(a,
      - b,
      c, d) S
      )
          ...+>
      
      [akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
      [akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
      [akpm@linux-foundation.org: checkpatch fixes]
      [akpm@linux-foundation.org: fix warnings]
      [akpm@linux-foundation.org: redo intrusive kvm changes]
      Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
      Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b67bfe0d
  6. 16 Feb, 2013 — 4 commits
  7. 05 Feb, 2013 — 3 commits
  8. 01 Feb, 2013 — 1 commit
  9. 24 Jan, 2013 — 2 commits
  10. 22 Nov, 2012 — 1 commit
  11. 01 Nov, 2012 — 2 commits
  12. 23 Oct, 2012 — 1 commit
    •
      ixgbe: Fix possible memory leak in ixgbe_set_ringparam · 1f4702aa
      Authored by Alexander Duyck
      We were not correctly freeing the temporary rings on error in
      ixgbe_set_ringparam.  In order to correct this I am unwinding a number of
      earlier changes to get things back to the original working form, with
      modifications for the current ring layouts.
      
      This approach has multiple advantages including a smaller memory footprint,
      and the fact that the interface is stopped while we are allocating the rings
      meaning that there is less potential for some sort of memory corruption on the
      ring.
      
      The only disadvantage I see with this approach is that on an Rx allocation
      failure we will report an error and only update the Tx rings.  However, the
      adapter should be fully functional in this state, and the likelihood of such
      an error is very low.  In addition it is not unreasonable to expect the
      user to need to recheck the ring configuration should they experience an
      error setting the ring sizes.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      1f4702aa
  13. 03 Oct, 2012 — 1 commit
  14. 19 Jul, 2012 — 1 commit
  15. 11 Jul, 2012 — 1 commit
  16. 20 Jun, 2012 — 1 commit
  17. 10 May, 2012 — 2 commits
  18. 04 May, 2012 — 1 commit
  19. 27 Apr, 2012 — 1 commit
  20. 20 Mar, 2012 — 1 commit
  21. 19 Mar, 2012 — 1 commit
  22. 17 Mar, 2012 — 1 commit
    •
      ixgbe: Replace standard receive path with a page based receive · f800326d
      Authored by Alexander Duyck
      This patch replaces the existing Rx hot path in the ixgbe driver with a new
      implementation that is based on performing a double-buffered receive.  The
      ixgbe driver already had something similar in place for its packet-split
      path; however, in that case we were still receiving the header for the
      packet into the sk_buff.  The big change here is that the entire receive path
      will receive into pages only, and then pull the header out of the page and
      copy it into the sk_buff data.  There are several motivations behind this
      approach.
      
      First, this allows us to avoid several cache misses as we were taking a
      set of cache misses for allocating the sk_buff and then another set for
      receiving data into the sk_buff.  We are able to avoid these misses on
      receive now as we allocate the sk_buff when data is available.
      
      Second we are able to see a considerable performance gain when an IOMMU is
      enabled because we are no longer unmapping every buffer on receive.
      Instead we can delay the unmap until we are unable to use the page, and
      instead we can simply call sync_single_range on the half of the page that
      contains new data.
      
      Finally we are able to drop a considerable amount of code from the driver
      as we no longer have to support 2 different receive modes, packet split and
      one buffer.  This allows us to optimize the Rx path further since less
      branching is required.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Tested-by: Ross Brattain <ross.b.brattain@intel.com>
      Tested-by: Stephen Ko <stephen.s.ko@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      f800326d
  23. 14 Mar, 2012 — 1 commit
  24. 13 Mar, 2012 — 3 commits
    •
      ixgbe: Simplify logic for ethtool loopback frame creation and testing · 3832b26e
      Authored by Alexander Duyck
      This change makes it a bit easier to do the loopback frame creation and
      testing.  Previously we were doing an AND to drop the last bit, and then
      dividing the frame_size by 2 in order to get locations for frame bytes and
      testing.  Instead we can simplify it by just shifting the frame size one bit
      to the right and using that for the frame offsets.
      
      This change also replaces all instances of rx_buffer_info with just
      rx_buffer since that is closer to the name of the actual structure being
      used and can save a few extra characters.
      
      In addition I have updated the logic for cleaning up a test frame so that
      we pass an rx_buffer instead of the sk_buff.  The main motivation behind
      this is changes that will replace the sk_buff with just a page in the
      future.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Tested-by: Stephen Ko <stephen.s.ko@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      3832b26e
    •
      ixgbe: Allocate rings as part of the q_vector · de88eeeb
      Authored by Alexander Duyck
      This patch makes the rings a part of the q_vector directly instead of
      indirectly.  Specifically on x86 systems this helps to avoid any cache
      set conflicts between the q_vector, the tx_rings, and the rx_rings as the
      critical stride is 4K and in order to cross that boundary you would need to
      have over 15 rings on a single q_vector.
      
      In addition this allows for smarter allocations when Flow Director is
      enabled.  Previously Flow Director would set the irq_affinity hints based
      on the CPU, but was still using a node-interleaving approach, which on some
      systems would end up with the two values mismatched.  With the new approach
      we can set the affinity for the irq_vector and use that CPU to determine
      the node value for the q_vector and the rings.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Tested-by: Stephen Ko <stephen.s.ko@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      de88eeeb
    •
      ixgbe: remove tie between NAPI work limits and interrupt moderation · 35551c47
      Authored by Jeff Kirsher
      As noted by Ben Hutchings and David Miller, work limits for NAPI
      should not be tied to interrupt moderation parameters.  This
      should be handled by NAPI, possibly through sysfs.
      
      Neil Horman & Stephen Hemminger are working on a solution for
      NAPI currently.  In the meantime, remove this tie between
      work limits and interrupt moderation.
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      CC: Ben Hutchings <bhutchings@solarflare.com>
      Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
      35551c47
  25. 11 Feb, 2012 — 1 commit
  26. 09 Feb, 2012 — 1 commit
    •
      ixgbe: ethtool: stats user buffer overrun · 9cc00b51
      Authored by John Fastabend
      If the number of tx/rx queues changes, the ethtool ioctl
      ETHTOOL_GSTATS may overrun the userspace buffer. This
      occurs because the general practice in user space for
      querying stats is to issue an ETHTOOL_GSSET cmd to learn the
      buffer size needed, allocate the buffer, then call
      ETHTOOL_GSTRINGS and ETHTOOL_GSTATS. If the number of
      real_num_queues is changed, or flow control attributes
      are changed, after ETHTOOL_GSSET but before the
      ETHTOOL_GSTRINGS/ETHTOOL_GSTATS, a user space buffer
      overrun occurs.
      
      To fix the overrun always return the max buffer size
      needed from get_sset_count() then return all strings
      and stats from get_strings()/get_ethtool_stats().
      
      This _will_ change the output from the ioctl() call,
      which could break applications and script parsing in
      theory. I believe these changes should not break existing
      tools, because the only changes will be more {tx|rx}_queues
      entries, and the {tx|rx}_pb_* stats will always be returned.
      Existing scripts already need to handle a changing number
      of queues, because this occurs today depending on the system
      and current features. The {tx|rx}_pb_* stats are at the
      end of the output and should be handled by scripts today
      regardless.
      
      Finally, get_ethtool_stats and get_strings are free-form
      outputs; tools parsing these outputs should be defensive
      anyway. In the end these updates are better than
      having a tool segfault because of a buffer overrun.
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      9cc00b51
  27. 03 Feb, 2012 — 1 commit