1. 19 Jul 2010, 2 commits
  2. 14 Jul 2010, 2 commits
  3. 03 Jun 2010, 3 commits
  4. 26 May 2010, 1 commit
  5. 20 May 2010, 1 commit
  6. 19 May 2010, 5 commits
    • crypto: skcipher - Add ablkcipher_walk interfaces · bf06099d
      David S. Miller committed
      These are akin to the blkcipher_walk helpers.
      
      The main differences in the async variant are:
      
      1) Only physical walking is supported.  We can't hold on to
         kmap mappings across the async operation to support virtual
         ablkcipher_walk operations anyway.
      
      2) Bounce buffers used for async mode need to be persistent and
         freed at a later point in time when the async op completes.
         Therefore we maintain a list of writeback buffers and require
         that the ablkcipher_walk user call the 'complete' operation
         so we can copy the bounce buffers out to the real buffers and
         free up the bounce buffer chunks.
      
      These interfaces will be used by the new Niagara2 crypto driver.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      bf06099d
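
      A minimal sketch of how a driver might use these helpers, assuming
      the ablkcipher_walk_init/_phys/_done/_complete interface from
      <crypto/algapi.h>; my_hw_queue_chunk() and the completion plumbing
      are hypothetical placeholders, not code from this commit.

      #include <crypto/algapi.h>

      /* Hypothetical: queue one physically-addressed chunk to hardware. */
      static int my_hw_queue_chunk(struct ablkcipher_walk *walk);

      static int my_encrypt(struct ablkcipher_request *req)
      {
              struct ablkcipher_walk walk;
              int err;

              /* Physical-only walk over the request's scatterlists. */
              ablkcipher_walk_init(&walk, req->dst, req->src, req->nbytes);
              err = ablkcipher_walk_phys(req, &walk);

              while (!err && walk.nbytes) {
                      /* walk.src/walk.dst now hold page + offset pairs
                       * that can be handed to DMA-capable hardware. */
                      err = my_hw_queue_chunk(&walk);

                      /* Third argument: bytes left unprocessed in this
                       * chunk, or a negative error. */
                      err = ablkcipher_walk_done(req, &walk, err);
              }
              return err;
      }

      /* From the async completion path (the walk would live in a
       * per-request context so it survives until then), run the
       * 'complete' step: pending bounce buffers are copied out to the
       * real destination buffers and freed. */
      static void my_async_done(struct ablkcipher_walk *walk)
      {
              ablkcipher_walk_complete(walk);
      }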
    • crypto: testmgr - Add testing for async hashing and update/final · a8f1a052
      David S. Miller committed
      Extend testmgr such that it tests async hash algorithms,
      and that for both sync and async hashes it tests both
      ->digest() and ->update()/->final() sequences.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      a8f1a052
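
      A rough sketch of the two sequences being exercised, written against
      the generic ahash API; the algorithm name and the omitted async
      completion waiting are illustrative assumptions, not testmgr's
      actual code.

      #include <crypto/hash.h>
      #include <linux/err.h>
      #include <linux/gfp.h>
      #include <linux/scatterlist.h>
      #include <linux/types.h>

      /* Hash one buffer via ->digest() and then again via an
       * init()/update()/final() sequence.  For async tfms these calls may
       * return -EINPROGRESS; waiting on the request's completion is
       * elided for brevity. */
      static int demo_hash_both_ways(const u8 *buf, unsigned int len, u8 *out)
      {
              struct crypto_ahash *tfm;
              struct ahash_request *req;
              struct scatterlist sg;
              int err;

              tfm = crypto_alloc_ahash("sha1", 0, 0);
              if (IS_ERR(tfm))
                      return PTR_ERR(tfm);

              req = ahash_request_alloc(tfm, GFP_KERNEL);
              if (!req) {
                      crypto_free_ahash(tfm);
                      return -ENOMEM;
              }

              sg_init_one(&sg, buf, len);
              ahash_request_set_crypt(req, &sg, out, len);

              /* One-shot path. */
              err = crypto_ahash_digest(req);

              /* Chunked path: same data through update()/final(). */
              if (!err)
                      err = crypto_ahash_init(req);
              if (!err)
                      err = crypto_ahash_update(req);
              if (!err)
                      err = crypto_ahash_final(req);

              ahash_request_free(req);
              crypto_free_ahash(tfm);
              return err;
      }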
    • crypto: tcrypt - Add speed tests for async hashing · beb63da7
      David S. Miller committed
      These are invoked in the 'mode' range of 400 to 499.
      
      The cost of async vs. sync for the software algorithm implementations
      varies.  It can be as low as 16 cycles per operation but as much as a
      couple of hundred.
      
      Here are two runs of md5 testing, async then sync:
      
      testing speed of async md5
      test  0 (   16 byte blocks,   16 bytes per update,   1 updates):   2448 cycles/operation,  153 cycles/byte
      test  1 (   64 byte blocks,   16 bytes per update,   4 updates):   4992 cycles/operation,   78 cycles/byte
      test  2 (   64 byte blocks,   64 bytes per update,   1 updates):   3808 cycles/operation,   59 cycles/byte
      test  3 (  256 byte blocks,   16 bytes per update,  16 updates):  14000 cycles/operation,   54 cycles/byte
      test  4 (  256 byte blocks,   64 bytes per update,   4 updates):   8480 cycles/operation,   33 cycles/byte
      test  5 (  256 byte blocks,  256 bytes per update,   1 updates):   7280 cycles/operation,   28 cycles/byte
      test  6 ( 1024 byte blocks,   16 bytes per update,  64 updates):  50016 cycles/operation,   48 cycles/byte
      test  7 ( 1024 byte blocks,  256 bytes per update,   4 updates):  22496 cycles/operation,   21 cycles/byte
      test  8 ( 1024 byte blocks, 1024 bytes per update,   1 updates):  21232 cycles/operation,   20 cycles/byte
      test  9 ( 2048 byte blocks,   16 bytes per update, 128 updates): 117184 cycles/operation,   57 cycles/byte
      test 10 ( 2048 byte blocks,  256 bytes per update,   8 updates):  43008 cycles/operation,   21 cycles/byte
      test 11 ( 2048 byte blocks, 1024 bytes per update,   2 updates):  40176 cycles/operation,   19 cycles/byte
      test 12 ( 2048 byte blocks, 2048 bytes per update,   1 updates):  39888 cycles/operation,   19 cycles/byte
      test 13 ( 4096 byte blocks,   16 bytes per update, 256 updates): 194176 cycles/operation,   47 cycles/byte
      test 14 ( 4096 byte blocks,  256 bytes per update,  16 updates):  84096 cycles/operation,   20 cycles/byte
      test 15 ( 4096 byte blocks, 1024 bytes per update,   4 updates):  78336 cycles/operation,   19 cycles/byte
      test 16 ( 4096 byte blocks, 4096 bytes per update,   1 updates):  77120 cycles/operation,   18 cycles/byte
      test 17 ( 8192 byte blocks,   16 bytes per update, 512 updates): 403056 cycles/operation,   49 cycles/byte
      test 18 ( 8192 byte blocks,  256 bytes per update,  32 updates): 166112 cycles/operation,   20 cycles/byte
      test 19 ( 8192 byte blocks, 1024 bytes per update,   8 updates): 154768 cycles/operation,   18 cycles/byte
      test 20 ( 8192 byte blocks, 4096 bytes per update,   2 updates): 151904 cycles/operation,   18 cycles/byte
      test 21 ( 8192 byte blocks, 8192 bytes per update,   1 updates): 155456 cycles/operation,   18 cycles/byte
      
      testing speed of md5
      test  0 (   16 byte blocks,   16 bytes per update,   1 updates):   2208 cycles/operation,  138 cycles/byte
      test  1 (   64 byte blocks,   16 bytes per update,   4 updates):   5008 cycles/operation,   78 cycles/byte
      test  2 (   64 byte blocks,   64 bytes per update,   1 updates):   3600 cycles/operation,   56 cycles/byte
      test  3 (  256 byte blocks,   16 bytes per update,  16 updates):  14080 cycles/operation,   55 cycles/byte
      test  4 (  256 byte blocks,   64 bytes per update,   4 updates):   8560 cycles/operation,   33 cycles/byte
      test  5 (  256 byte blocks,  256 bytes per update,   1 updates):   7040 cycles/operation,   27 cycles/byte
      test  6 ( 1024 byte blocks,   16 bytes per update,  64 updates):  50592 cycles/operation,   49 cycles/byte
      test  7 ( 1024 byte blocks,  256 bytes per update,   4 updates):  22736 cycles/operation,   22 cycles/byte
      test  8 ( 1024 byte blocks, 1024 bytes per update,   1 updates):  24960 cycles/operation,   24 cycles/byte
      test  9 ( 2048 byte blocks,   16 bytes per update, 128 updates):  99312 cycles/operation,   48 cycles/byte
      test 10 ( 2048 byte blocks,  256 bytes per update,   8 updates):  43520 cycles/operation,   21 cycles/byte
      test 11 ( 2048 byte blocks, 1024 bytes per update,   2 updates):  40704 cycles/operation,   19 cycles/byte
      test 12 ( 2048 byte blocks, 2048 bytes per update,   1 updates):  39552 cycles/operation,   19 cycles/byte
      test 13 ( 4096 byte blocks,   16 bytes per update, 256 updates): 196720 cycles/operation,   48 cycles/byte
      test 14 ( 4096 byte blocks,  256 bytes per update,  16 updates):  85152 cycles/operation,   20 cycles/byte
      test 15 ( 4096 byte blocks, 1024 bytes per update,   4 updates):  79408 cycles/operation,   19 cycles/byte
      test 16 ( 4096 byte blocks, 4096 bytes per update,   1 updates):  76816 cycles/operation,   18 cycles/byte
      test 17 ( 8192 byte blocks,   16 bytes per update, 512 updates): 391520 cycles/operation,   47 cycles/byte
      test 18 ( 8192 byte blocks,  256 bytes per update,  32 updates): 168464 cycles/operation,   20 cycles/byte
      test 19 ( 8192 byte blocks, 1024 bytes per update,   8 updates): 156912 cycles/operation,   19 cycles/byte
      test 20 ( 8192 byte blocks, 4096 bytes per update,   2 updates): 154016 cycles/operation,   18 cycles/byte
      test 21 ( 8192 byte blocks, 8192 bytes per update,   1 updates): 153856 cycles/operation,   18 cycles/byte
      
      We can ditch the sync hash code at some point if we feel that makes
      sense.  For now I've left it there.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      beb63da7
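
      (cycles/byte is simply cycles/operation divided by the block size,
      e.g. 2448 / 16 = 153 for the first async test above.)  A rough
      sketch of the kind of timing loop involved, as an illustration
      rather than tcrypt's actual code; request setup and async completion
      waiting are assumed to be handled by the caller.

      #include <crypto/hash.h>
      #include <linux/kernel.h>
      #include <linux/scatterlist.h>
      #include <linux/timex.h>        /* get_cycles() */

      /* Time one ->digest() over a blen-byte scatterlist and report the
       * per-operation and per-byte cycle counts. */
      static void demo_time_digest(struct ahash_request *req,
                                   struct scatterlist *sg, u8 *out,
                                   unsigned int blen)
      {
              cycles_t start, end;

              ahash_request_set_crypt(req, sg, out, blen);

              start = get_cycles();
              crypto_ahash_digest(req);
              end = get_cycles();

              pr_info("%lu cycles/operation, %lu cycles/byte\n",
                      (unsigned long)(end - start),
                      (unsigned long)(end - start) / blen);
      }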
    • crypto: scatterwalk - Fix scatterwalk_done() test · 85c6201a
      David S. Miller committed
      We are done with the scatter-gather entry when the walk offset goes
      past sg->offset + sg->length, not when it crosses a page boundary.
      
      There is a similarly queer test in the second half of
      scatterwalk_pagedone() that probably needs some scrutiny.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      85c6201a
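
      In other words, the termination test should key off the extent of
      the scatterlist entry, not page boundaries.  A minimal illustration
      of the corrected condition (not the actual scatterwalk.c code):

      #include <linux/scatterlist.h>
      #include <linux/types.h>

      /* The walk is finished with this scatter-gather entry only once the
       * walk offset reaches the end of the entry, no matter how many page
       * boundaries it crossed along the way. */
      static inline bool sg_entry_exhausted(struct scatterlist *sg,
                                            unsigned int walk_offset)
      {
              return walk_offset >= sg->offset + sg->length;
      }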
    • crypto: shash - Remove usage of CRYPTO_MINALIGN · 18eb8ea6
      Herbert Xu committed
      The macro CRYPTO_MINALIGN is not meant to be used directly.  This
      patch replaces it with crypto_tfm_ctx_alignment.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      18eb8ea6
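
      For illustration, the kind of substitution this implies when a
      buffer must be aligned for use as a tfm context; ctx_align() is a
      hypothetical helper, not code from the patch.

      #include <linux/crypto.h>
      #include <linux/kernel.h>       /* PTR_ALIGN() */

      /* crypto_tfm_ctx_alignment() is the supported way to obtain the
       * required context alignment; CRYPTO_MINALIGN is an internal
       * detail and should not be used directly. */
      static void *ctx_align(void *buf)
      {
              return PTR_ALIGN(buf, crypto_tfm_ctx_alignment());
      }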
  7. 18 May 2010, 1 commit
  8. 05 May 2010, 1 commit
  9. 03 May 2010, 1 commit
  10. 26 Apr 2010, 1 commit
  11. 30 Mar 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Tejun Heo committed
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h,
      which in turn includes gfp.h, making everything defined by the two
      files universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the
      following script is used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. gfp.h if only gfp is
        used, slab.h if slab is used.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order
        conforms to its surroundings.  It's put in the include block which
        contains core kernel includes, in the same order that the rest are
        ordered - alphabetical, Christmas tree, rev-Xmas-tree or at the
        end if there doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        out an error message indicating which .h file needs to be added
        to the file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.
         This step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from the build
      tests in step 7, I'm fairly confident about the coverage of this
      conversion patch.  If there is a breakage, it's likely to be
      something in one of the arch headers, which should be easily
      discoverable on most builds of the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
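
      For example, a file that previously picked up kmalloc() and
      GFP_KERNEL for free through the percpu.h -> slab.h -> gfp.h chain
      now has to spell out its own dependencies.  A minimal, hypothetical
      illustration of what the sweep leaves behind:

      /* example.c: include what you use instead of relying on percpu.h
       * (via sched.h or module.h) dragging slab.h and gfp.h in. */
      #include <linux/slab.h>         /* kmalloc(), kfree() */
      #include <linux/gfp.h>          /* GFP_KERNEL */

      static void *make_buffer(size_t len)
      {
              return kmalloc(len, GFP_KERNEL);
      }

      static void drop_buffer(void *p)
      {
              kfree(p);
      }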
  12. 29 Mar 2010, 1 commit
  13. 24 Mar 2010, 1 commit
  14. 18 Mar 2010, 1 commit
  15. 10 Mar 2010, 2 commits
  16. 03 Mar 2010, 2 commits
  17. 02 Mar 2010, 2 commits
  18. 17 Feb 2010, 1 commit
    • percpu: add __percpu sparse annotations to what's left · a29d8b8e
      Tejun Heo committed
      Add __percpu sparse annotations to places which didn't make it in one
      of the previous patches.  All conversions are trivial.
      
      These annotations are to make sparse consider percpu variables to be
      in a different address space and warn if accessed without going
      through percpu accessors.  This patch doesn't affect normal builds.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Borislav Petkov <borislav.petkov@amd.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Neil Brown <neilb@suse.de>
      a29d8b8e
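
      A small sketch of what the annotation buys, assuming a dynamically
      allocated per-cpu counter with illustrative names:

      #include <linux/percpu.h>

      /* __percpu marks the pointer as living in the per-cpu address
       * space, so sparse warns if it is dereferenced without going
       * through a per-cpu accessor.  Normal builds are unaffected. */
      static int __percpu *hit_counter;

      static int counters_init(void)
      {
              hit_counter = alloc_percpu(int);
              if (!hit_counter)
                      return -ENOMEM;

              /* Correct: access through a per-cpu accessor. */
              (*per_cpu_ptr(hit_counter, 0))++;

              /*
               * Would trigger a sparse address-space warning:
               *      (*hit_counter)++;
               */
              return 0;
      }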
  19. 16 Feb 2010, 11 commits