1. 26 Mar 2018, 9 commits
  2. 12 Mar 2018, 1 commit
  3. 16 Nov 2017, 1 commit
    • mm: remove __GFP_COLD · 453f85d4
      Committed by Mel Gorman
      As the page free path makes no distinction between cache hot and cold
      pages, there is no real useful ordering of pages in the free list that
      allocation requests can take advantage of.  Judging from the users of
      __GFP_COLD, it is likely that a number of them are the result of copying
      other sites instead of actually measuring the impact.  Remove the
      __GFP_COLD parameter which simplifies a number of paths in the page
      allocator.
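
      As a hedged illustration (the call site and flags below are generic,
      not taken from the patch), the conversion at a typical allocation
      site looks like:

          /* before: caller hints that a cache-cold page is fine */
          page = alloc_pages(GFP_KERNEL | __GFP_COLD, 0);

          /* after: the hint is gone; the allocator no longer
           * distinguishes cache-hot from cache-cold pages */
          page = alloc_pages(GFP_KERNEL, 0);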
      
      This is potentially controversial but bear in mind that the size of the
      per-cpu pagelists versus modern cache sizes means that the whole per-cpu
      list can often fit in the L3 cache.  Hence, there is only a potential
      benefit for microbenchmarks that alloc/free pages in a tight loop.  It's
      even worse when THP is taken into account which has little or no chance
      of getting a cache-hot page as the per-cpu list is bypassed and the
      zeroing of multiple pages will thrash the cache anyway.
      
      The truncate microbenchmarks are not shown as this patch affects the
      allocation path and not the free path.  A page fault microbenchmark was
      tested but it showed no significant difference, which is not surprising
      given that the __GFP_COLD branches are a minuscule percentage of the
      fault path.
      
      Link: http://lkml.kernel.org/r/20171018075952.10627-9-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      453f85d4
  4. 24 Oct 2017, 1 commit
  5. 16 Aug 2017, 4 commits
  6. 15 Aug 2017, 8 commits
  7. 19 Jun 2017, 1 commit
  8. 16 Jun 2017, 1 commit
    • networking: introduce and use skb_put_data() · 59ae1d12
      Committed by Johannes Berg
      A common pattern with skb_put() is to just memcpy() some data
      into the new space; introduce skb_put_data() for this.
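
      For illustration, a hand-written conversion of one such call site
      (the variable names are generic, not from a specific driver):

          /* before: reserve space, then copy into it by hand */
          ptr = skb_put(skb, len);
          memcpy(ptr, data, len);

          /* after: one call reserves the space and copies the data */
          skb_put_data(skb, data, len);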
      
      A spatch similar to the one for skb_put_zero() converts many
      of the places using it:
      
          @@
          identifier p, p2;
          expression len, skb, data;
          type t, t2;
          @@
          (
          -p = skb_put(skb, len);
          +p = skb_put_data(skb, data, len);
          |
          -p = (t)skb_put(skb, len);
          +p = skb_put_data(skb, data, len);
          )
          (
          p2 = (t2)p;
          -memcpy(p2, data, len);
          |
          -memcpy(p, data, len);
          )
      
          @@
          type t, t2;
          identifier p, p2;
          expression skb, data;
          @@
          t *p;
          ...
          (
          -p = skb_put(skb, sizeof(t));
          +p = skb_put_data(skb, data, sizeof(t));
          |
          -p = (t *)skb_put(skb, sizeof(t));
          +p = skb_put_data(skb, data, sizeof(t));
          )
          (
          p2 = (t2)p;
          -memcpy(p2, data, sizeof(*p));
          |
          -memcpy(p, data, sizeof(*p));
          )
      
          @@
          expression skb, len, data;
          @@
          -memcpy(skb_put(skb, len), data, len);
          +skb_put_data(skb, data, len);
      
      (again, manually post-processed to retain some comments)

      Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      59ae1d12
  9. 07 Apr 2017, 1 commit
    • liquidio: fix Octeon core watchdog timeout false alarm · bb54be58
      Committed by Felix Manlunas
      Detection of watchdog timeout of Octeon cores is flawed and susceptible to
      false alarms.  Refactor by removing the detection code, and in its place,
      leverage existing code that monitors for an indication from the NIC
      firmware that an Octeon core crashed; expand the meaning of the indication
      to "an Octeon core crashed or its watchdog timer expired".  Detection of
      watchdog timeout is now delegated to an exception handler in the NIC
      firmware; this is free of false alarms.
      
      Also if there's an Octeon core crash or watchdog timeout:
      (1) Disable VF Ethernet links.
      (2) Decrement the module refcount by an amount equal to the number of
          active VFs of the NIC whose Octeon core crashed or had a watchdog
          timeout.  The refcount will continue to reflect the active VFs of
          other liquidio NIC(s) (if present) whose Octeon cores are faultless.
      
      Item (2) is needed to avoid the case of not being able to unload the driver
      because the module refcount is stuck at some non-zero number.  There is
      code that, in normal cases, decrements the refcount upon receiving a
      message from the firmware that a VF driver was unloaded.  But in
      exceptional cases like an Octeon core crash or watchdog timeout, arrival of
      that particular message from the firmware might be unreliable.  That normal
      case code is changed to not touch the refcount in the exceptional case to
      avoid contention (over the refcount) with the liquidio_watchdog kernel
      thread, which will carry out item (2).
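
      A minimal sketch of item (2), assuming a hypothetical active_vfs
      counter on the per-NIC state (the field name is illustrative, not
      taken from the patch):

          int i;

          /* on an Octeon core crash or watchdog timeout, drop one
           * module reference per active VF so the refcount does not
           * get stuck and block unloading the driver */
          for (i = 0; i < oct->active_vfs; i++)
                  module_put(THIS_MODULE);
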
      Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com>
      Signed-off-by: Derek Chickles <derek.chickles@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bb54be58
  10. 23 Mar 2017, 1 commit
  11. 22 Mar 2017, 1 commit
  12. 10 Mar 2017, 1 commit
    • liquidio: improve UDP TX performance · 67e303e0
      Committed by VSR Burru
      Improve UDP TX performance by:
      * reducing the ring size from 2K to 512
      * replacing the numerous streaming DMA allocations for info buffers and
        gather lists with one large consistent DMA allocation per ring
      
      BQL is not effective here.  We reduced the ring size because of the
      heavy recurring overhead of dma_map_single().  With iommu=on,
      dma_map_single() in the PF Tx data path was taking a long time
      (~700 usec) for every ~250 packets.  Debugging the intel_iommu code
      showed that the PF driver was using too many static IO virtual
      address mapping entries (for gather list entries and info buffers):
      about 100K entries for two PFs, each using 8 rings.  Also, finding
      an empty entry (in the rbtree of the device domain's iova mapping
      in the kernel) during the Tx path periodically becomes a bottleneck;
      the loop to find an empty entry goes through over 40K iterations,
      which is too costly and was the major overhead.  Overhead is low
      when this loop quits quickly.
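
      A hedged sketch of the second change (the device pointer, sizes,
      and names are illustrative, not from the patch): one consistent DMA
      allocation per ring at setup time replaces the per-packet streaming
      mappings.

          dma_addr_t d, block_dma;

          /* before: streaming mapping on the hot path, per packet */
          d = dma_map_single(dev, info_buf, info_len, DMA_TO_DEVICE);

          /* after: one coherent block per ring, allocated once at
           * setup, holding all info buffers and gather lists */
          void *block = dma_alloc_coherent(dev, num_entries * entry_size,
                                           &block_dma, GFP_KERNEL);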
      
      Netperf benchmark numbers before and after patch:
      
      PF UDP TX
      +--------+--------+------------+------------+---------+
      |        |        |  Before    |  After     |         |
      | Number |        |  Patch     |  Patch     |         |
      |  of    | Packet | Throughput | Throughput | Percent |
      | Flows  |  Size  |  (Gbps)    |  (Gbps)    | Change  |
      +--------+--------+------------+------------+---------+
      |        |   360  |   0.52     |   0.93     |  +78.9  |
      |   1    |  1024  |   1.62     |   2.84     |  +75.3  |
      |        |  1518  |   2.44     |   4.21     |  +72.5  |
      +--------+--------+------------+------------+---------+
      |        |   360  |   0.45     |   1.59     | +253.3  |
      |   4    |  1024  |   1.34     |   5.48     | +308.9  |
      |        |  1518  |   2.27     |   8.31     | +266.1  |
      +--------+--------+------------+------------+---------+
      |        |   360  |   0.40     |   1.61     | +302.5  |
      |   8    |  1024  |   1.64     |   4.24     | +158.5  |
      |        |  1518  |   2.87     |   6.52     | +127.2  |
      +--------+--------+------------+------------+---------+
      
      VF UDP TX
      +--------+--------+------------+------------+---------+
      |        |        |  Before    |  After     |         |
      | Number |        |  Patch     |  Patch     |         |
      |  of    | Packet | Throughput | Throughput | Percent |
      | Flows  |  Size  |  (Gbps)    |  (Gbps)    | Change  |
      +--------+--------+------------+------------+---------+
      |        |   360  |   1.28     |   1.49     |  +16.4  |
      |   1    |  1024  |   4.44     |   4.39     |   -1.1  |
      |        |  1518  |   6.08     |   6.51     |   +7.1  |
      +--------+--------+------------+------------+---------+
      |        |   360  |   2.35     |   2.35     |    0.0  |
      |   4    |  1024  |   6.41     |   8.07     |  +25.9  |
      |        |  1518  |   9.56     |   9.54     |   -0.2  |
      +--------+--------+------------+------------+---------+
      |        |   360  |   3.41     |   3.65     |   +7.0  |
      |   8    |  1024  |   9.35     |   9.34     |   -0.1  |
      |        |  1518  |   9.56     |   9.57     |   +0.1  |
      +--------+--------+------------+------------+---------+
      Signed-off-by: VSR Burru <veerasenareddy.burru@cavium.com>
      Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com>
      Signed-off-by: Derek Chickles <derek.chickles@cavium.com>
      Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      67e303e0
  13. 09 Dec 2016, 1 commit
  14. 16 Nov 2016, 2 commits
  15. 18 Oct 2016, 1 commit
  16. 03 Sep 2016, 2 commits
  17. 01 Sep 2016, 1 commit
  18. 05 Jul 2016, 3 commits