1. 29 Sep 2014 (2 commits)
  2. 22 Aug 2014 (1 commit)
  3. 05 Aug 2014 (1 commit)
  4. 23 Jun 2014 (1 commit)
  5. 11 Jun 2014 (2 commits)
  6. 13 May 2014 (1 commit)
  7. 18 Apr 2014 (1 commit)
  8. 25 Mar 2014 (1 commit)
    • cxgb4: Call dev_kfree/consume_skb_any instead of [dev_]kfree_skb. · a7525198
      Committed by Eric W. Biederman
      Replace kfree_skb with dev_consume_skb_any in free_tx_desc, which can be
      called in hard irq and other contexts. dev_consume_skb_any is used
      because this function consumes successfully transmitted skbs.

      Replace dev_kfree_skb with dev_kfree_skb_any in t4_eth_xmit, which can
      be called in hard irq and other contexts, on paths that drop the skb.

      Replace dev_kfree_skb with dev_consume_skb_any in t4_eth_xmit, which can
      be called in hard irq and other contexts, on paths that successfully
      transmit the skb.
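
      A minimal sketch of the pattern (the xmit function and its drop test
      are hypothetical; dev_kfree_skb_any() and dev_consume_skb_any() are
      the real APIs being adopted):

       #include <linux/netdevice.h>
       #include <linux/skbuff.h>

       static netdev_tx_t example_xmit(struct sk_buff *skb,
       				struct net_device *dev)
       {
       	if (unlikely(skb->len == 0)) {
       		/* Drop path: the packet never reached the hardware,
       		 * so use the "drop" flavor, safe in any context. */
       		dev_kfree_skb_any(skb);
       		return NETDEV_TX_OK;
       	}

       	/* ... hand the packet to the hardware here ... */

       	/* Success path: the packet was consumed, not dropped, so use
       	 * the "consume" flavor to keep drop monitoring accurate. */
       	dev_consume_skb_any(skb);
       	return NETDEV_TX_OK;
       }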
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
  9. 15 Mar 2014 (1 commit)
    • cxgb4/iw_cxgb4: Doorbell Drop Avoidance Bug Fixes · 05eb2389
      Committed by Steve Wise
      The current logic suffers from a slow response time when disabling user
      DB usage, and also fails to avoid DB FIFO drops under heavy load. This
      commit fixes these deficiencies and makes the avoidance logic more
      effective, by notifying the ULDs of potential DB problems more
      efficiently and by implementing a smoother flow control algorithm in
      iw_cxgb4, which is the ULD that puts the most load on the DB FIFO.
      
      Design:
      
      cxgb4:
      
      Direct ULD callback from the DB FULL/DROP interrupt handler.  This allows
      the ULD to stop doing user DB writes as quickly as possible.
      
      While user DB usage is disabled, the LLD accumulates DB write events
      for its queues.  Once DB usage is re-enabled, a single DB write is
      done for each queue with its accumulated write count.  This reduces
      the load put on the DB FIFO when re-enabling.
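
      A sketch of this accumulate-and-coalesce idea (all names are invented
      and locking is elided; the real LLD state machine is more involved):

       #include <linux/io.h>
       #include <linux/types.h>

       struct txq_state {
       	bool		db_disabled;	/* set while DB usage is stopped */
       	unsigned int	db_pidx_inc;	/* writes deferred while stopped */
       	void __iomem	*db_reg;	/* this queue's doorbell register */
       };

       static void ring_db(struct txq_state *q, unsigned int n)
       {
       	if (q->db_disabled)
       		q->db_pidx_inc += n;	/* defer: just count the writes */
       	else
       		writel(n, q->db_reg);	/* normal case: ring immediately */
       }

       static void reenable_db(struct txq_state *q)
       {
       	q->db_disabled = false;
       	if (q->db_pidx_inc) {
       		/* one coalesced DB write instead of many small ones */
       		writel(q->db_pidx_inc, q->db_reg);
       		q->db_pidx_inc = 0;
       	}
       }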
      
      iw_cxgb4:
      
      Instead of marking each QP to indicate DB writes are disabled, we create
      a device-global status page that each user process maps.  This allows
      iw_cxgb4 to set a single bit to disable all DB writes for all user QPs,
      instead of traversing the IDR of all the active QPs.  If libcxgb4
      doesn't support this, then we fall back to the old approach of marking
      each QP.  Thus we allow the new driver to work with an older libcxgb4.
      
      When the LLD upcalls iw_cxgb4 indicating DB FULL, we disable all DB
      writes via the status page and transition the DB state to STOPPED.  As
      user processes see that DB writes are disabled, they call into iw_cxgb4
      to submit their DB write events.  Since the DB state is STOPPED, the QP
      trying to write gets enqueued on a new DB "flow control" list.  As
      subsequent DB writes are submitted for a flow-controlled QP, the number
      of writes is accumulated for each QP on the flow control list.  So all
      the user QPs that are actively ringing the DB get put on this list, and
      the number of writes they request is accumulated.
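
      A sketch of the STOPPED-state enqueue (the types, fields, and states
      below are invented stand-ins for the driver's real structures):

       #include <linux/io.h>
       #include <linux/list.h>
       #include <linux/spinlock.h>

       enum db_state { NORMAL, STOPPED, FLOW_CONTROL };

       struct fc_qp {
       	struct list_head fc_entry;	/* flow control list linkage */
       	unsigned int	 db_inc;	/* accumulated DB write count */
       	void __iomem	*db_reg;	/* this QP's doorbell register */
       };

       struct dev_state {
       	spinlock_t	 lock;
       	struct list_head flow_control_list;
       	enum db_state	 db_state;
       };

       static void queue_db_write(struct dev_state *dev, struct fc_qp *qp,
       			   unsigned int inc)
       {
       	spin_lock(&dev->lock);
       	qp->db_inc += inc;		/* accumulate this QP's writes */
       	if (list_empty(&qp->fc_entry))	/* first deferred write: enqueue */
       		list_add_tail(&qp->fc_entry, &dev->flow_control_list);
       	spin_unlock(&dev->lock);
       }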
      
      When the LLD upcalls iw_cxgb4 indicating DB EMPTY, which happens in a
      workqueue context, we change the DB state to FLOW_CONTROL and begin
      resuming all the QPs that are on the flow control list.  This logic
      runs until the flow control list is empty or we exit FLOW_CONTROL mode
      (due to a DB DROP upcall, for example).  QPs are removed from this
      list, and their accumulated DB write counts are written to the DB FIFO.
      QPs are removed in sets, called chunks in the code, with a chunk size
      of 64.  So 64 QPs are resumed at a time, and before the next chunk is
      resumed, the logic waits (blocks) for the DB FIFO to drain.  This
      prevents resuming too quickly and overflowing the FIFO.  Once the flow
      control list is empty, the DB state transitions back to NORMAL and user
      QPs are again allowed to write directly to the user DB register.
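
      A sketch of the chunked resume loop, reusing the invented types from
      the sketch above (wait_for_db_fifo_drain() is a hypothetical helper;
      the real driver also handles DROP/recovery transitions and locking):

       #define DB_FC_RESUME_SIZE 64	/* chunk size: QPs resumed per pass */

       void wait_for_db_fifo_drain(struct dev_state *dev);	/* blocks */

       static void resume_queues(struct dev_state *dev)
       {
       	while (!list_empty(&dev->flow_control_list) &&
       	       dev->db_state == FLOW_CONTROL) {
       		int count = DB_FC_RESUME_SIZE;

       		/* Resume one chunk of QPs, writing each QP's
       		 * accumulated count to the DB FIFO. */
       		while (count-- && !list_empty(&dev->flow_control_list)) {
       			struct fc_qp *qp;

       			qp = list_first_entry(&dev->flow_control_list,
       					      struct fc_qp, fc_entry);
       			list_del_init(&qp->fc_entry);
       			writel(qp->db_inc, qp->db_reg);
       			qp->db_inc = 0;
       		}
       		/* Block until the DB FIFO drains before the next chunk,
       		 * so resuming doesn't overflow it. */
       		wait_for_db_fifo_drain(dev);
       	}
       	if (list_empty(&dev->flow_control_list))
       		dev->db_state = NORMAL;
       }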
      
      The algorithm is designed such that if the DB write load is high
      enough, all the DB writes get submitted by the kernel using this
      flow-controlled approach to avoid DB drops.  As the load lightens,
      though, we return to normal DB writes made directly by user
      applications.
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 14 Mar 2014 (4 commits)
  11. 19 Feb 2014 (2 commits)
  12. 04 Jan 2014 (1 commit)
    • cxgb4: allow large buffer size to have page size · 940d9d34
      Committed by Thadeu Lima de Souza Cascardo
      Since commit 52367a76
      ("cxgb4/cxgb4vf: Code cleanup to enable T4 Configuration File support"),
      we have failures like this during cxgb4 probe:
      
      cxgb4 0000:01:00.4: bad SGE FL page buffer sizes [65536, 65536]
      cxgb4: probe of 0000:01:00.4 failed with error -22
      
      This happens whenever software parameters are used without a
      configuration file, which is the case when the hardware was already
      initialized (after kexec, or after csiostor is loaded).
      
      These values are in fact acceptable: they render fl_pg_order equal to
      0, which is exactly what a hard init produces when the page size is
      equal to or larger than 65536.
      
      Accepting fl_large_pg equal to fl_small_pg solves the issue, and
      shouldn't cause any trouble besides a possible performance reduction
      when smaller pages are used.  That, in turn, can be fixed with a
      configuration file.
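
      A simplified sketch of the relaxed check, modeled on the validation
      in t4_sge_init_soft() (treat the exact shape as an approximation): a
      large FL buffer equal to the small one is now treated as "no large
      buffer", leaving fl_pg_order at 0 instead of failing the probe with
      -EINVAL.

       #include <linux/errno.h>
       #include <linux/log2.h>
       #include <linux/mm.h>

       static int check_fl_page_sizes(u32 fl_small_pg, u32 fl_large_pg,
       			       unsigned int *fl_pg_order)
       {
       	/* Only use the large buffer if it is strictly larger than
       	 * the small one; otherwise ignore it. */
       	if (fl_large_pg <= fl_small_pg)
       		fl_large_pg = 0;

       	/* The small buffer must equal PAGE_SIZE; the large one must
       	 * be 0 (per above) or a power of 2. */
       	if (fl_small_pg != PAGE_SIZE ||
       	    (fl_large_pg & (fl_large_pg - 1)) != 0)
       		return -EINVAL;

       	*fl_pg_order = fl_large_pg ? ilog2(fl_large_pg) - PAGE_SHIFT : 0;
       	return 0;
       }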
      Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 19 Dec 2013 (1 commit)
  14. 04 Dec 2013 (1 commit)
  15. 05 Jun 2013 (1 commit)
  16. 20 Apr 2013 (1 commit)
  17. 14 Mar 2013 (2 commits)
  18. 27 Nov 2012 (1 commit)
  19. 28 Sep 2012 (1 commit)
  20. 06 Sep 2012 (1 commit)
  21. 01 Aug 2012 (1 commit)
    • netvm: propagate page->pfmemalloc from skb_alloc_page to skb · 0614002b
      Committed by Mel Gorman
      The skb->pfmemalloc flag gets set to true iff the PFMEMALLOC reserves
      were used during the slab allocation of data in __alloc_skb.  If page
      splitting is used, it is possible that pages will be allocated from
      the PFMEMALLOC reserve without this information being propagated to
      the skb.  This patch propagates page->pfmemalloc from pages allocated
      for fragments to the skb.
      
      It works by reintroducing and expanding the skb_alloc_page() API to
      take an skb.  If the page was allocated from pfmemalloc reserves, the
      flag is automatically copied.  If the driver allocates the page before
      the skb, it should call skb_propagate_pfmemalloc() after the skb is
      allocated to ensure the flag is copied properly.
      
      Failure to do so is not critical.  The resulting driver may perform
      more slowly if it is used for swap-over-NBD or swap-over-NFS, but it
      should not fail.
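
      A sketch of the page-before-skb case in a driver RX refill path
      (skb_propagate_pfmemalloc() is the API from this patch; the function
      and sizes around it are illustrative):

       #include <linux/gfp.h>
       #include <linux/netdevice.h>
       #include <linux/skbuff.h>

       static struct sk_buff *rx_refill(struct net_device *netdev, gfp_t gfp)
       {
       	struct page *page = alloc_page(gfp);
       	struct sk_buff *skb;

       	if (!page)
       		return NULL;

       	skb = netdev_alloc_skb(netdev, 128);
       	if (!skb) {
       		put_page(page);
       		return NULL;
       	}
       	skb_fill_page_desc(skb, 0, page, 0, PAGE_SIZE);

       	/* The page was allocated before the skb existed, so copy
       	 * page->pfmemalloc into skb->pfmemalloc by hand. */
       	skb_propagate_pfmemalloc(page, skb);
       	return skb;
       }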
      
      [davem@davemloft.net: API rename and consistency]
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Acked-by: David S. Miller <davem@davemloft.net>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Eric B Munson <emunson@mgebm.net>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  22. 07 Jun 2012 (1 commit)
    • ethernet: Remove casts to same type · 64699336
      Committed by Joe Perches
      Adding casts of objects to the same type is unnecessary
      and confusing for a human reader.
      
      For example, this cast:
      
              int y;
              int *p = (int *)&y;
      
      I used the coccinelle script below to find and remove these
      unnecessary casts.  I manually removed the conversions this
      script produces of casts with __force, __iomem and __user.
      
      @@
      type T;
      T *p;
      @@
      
      -       (T *)p
      +       p
      
      A function in atl1e_main.c was passed a const pointer
      when it actually modified elements of the structure.
      
      Change the argument to a non-const pointer.
      
      A function in stmmac needed a __force to avoid a sparse
      warning.  Added it.
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  23. 19 May 2012 (2 commits)
  24. 23 Nov 2011 (1 commit)
    • net: remove netdev_alloc_page and use __GFP_COLD · 1f2149c1
      Committed by Eric Dumazet
      Given we no longer use the struct net_device *dev argument, and this
      interface brings little benefit, remove netdev_{alloc|free}_page() to
      debloat include/linux/skbuff.h a bit.

      (Some drivers used a mix of these interfaces and alloc_pages().)

      When allocating a page given to a device for DMA transfer (device to
      memory), it makes sense to use a cold one (__GFP_COLD).
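
      A sketch of the replacement pattern (__GFP_COLD is the API of this
      era; it was removed from later kernels):

       #include <linux/gfp.h>

       /* RX buffer refill: ask for a cache-cold page, since the device,
        * not the CPU, will be the first to touch it. */
       static struct page *rx_alloc_page(gfp_t gfp)
       {
       	return alloc_page(gfp | __GFP_COLD);
       }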
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      CC: Dimitris Michailidis <dm@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  25. 01 Nov 2011 (1 commit)
  26. 21 Oct 2011 (1 commit)
  27. 19 Oct 2011 (1 commit)
  28. 11 Aug 2011 (1 commit)
  29. 23 May 2011 (2 commits)
    • Add appropriate <linux/prefetch.h> include for prefetch users · 70c71606
      Committed by Paul Gortmaker
      After discovering that wide use of prefetch on modern CPUs
      could be a net loss instead of a win, net drivers which were
      relying on the implicit inclusion of prefetch.h via the list
      headers showed up in the resulting cleanup fallout.  Give
      them an explicit include via the following $0.02 script.
      
       =========================================
       #!/bin/bash
       MANUAL=""
       for i in `git grep -l 'prefetch(.*)' .` ; do
       	grep -q '<linux/prefetch.h>' $i
       	if [ $? = 0 ] ; then
       		continue
       	fi
      
       	(	echo '?^#include <linux/?a'
       		echo '#include <linux/prefetch.h>'
       		echo .
       		echo w
       		echo q
       	) | ed -s $i > /dev/null 2>&1
       	if [ $? != 0 ]; then
       		echo $i needs manual fixup
       		MANUAL="$i $MANUAL"
       	fi
       done
       echo ------------------- 8\<----------------------
       echo vi $MANUAL
       =========================================
       Signed-off-by: Paul <paul.gortmaker@windriver.com>
      [ Fixed up some incorrect #include placements, and added some
        non-network drivers and the fib_trie.c case    - Linus ]
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • drivers/net: add prefetch header for prefetch users · c0cba59e
      Committed by Paul Gortmaker
      After discovering that wide use of prefetch on modern CPUs
      could be a net loss instead of a win, net drivers which were
      relying on the implicit inclusion of prefetch.h via the list
      headers showed up in the resulting cleanup fallout.  Give
      them an explicit include via the following $0.02 script.
      
       =========================================
       #!/bin/bash
       MANUAL=""
       for i in `git grep -l 'prefetch(.*)' .` ; do
       	grep -q '<linux/prefetch.h>' $i
       	if [ $? = 0 ] ; then
       		continue
       	fi
      
       	(	echo '?^#include <linux/?a'
       		echo '#include <linux/prefetch.h>'
       		echo .
       		echo w
       		echo q
       	) | ed -s $i > /dev/null 2>&1
       	if [ $? != 0 ]; then
       		echo $i needs manual fixup
       		MANUAL="$i $MANUAL"
       	fi
       done
       echo ------------------- 8\<----------------------
       echo vi $MANUAL
       =========================================
       Signed-off-by: Paul <paul.gortmaker@windriver.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
  30. 18 Apr 2011 (1 commit)
  31. 17 Dec 2010 (1 commit)