1. 26 Jun 2006, 2 commits
  2. 21 Mar 2006, 6 commits
  3. 08 Feb 2006, 1 commit
  4. 10 Jan 2006, 10 commits
  5. 07 Jan 2006, 5 commits
  6. 30 Oct 2005, 3 commits
  7. 07 Sep 2005, 1 commit
  8. 02 Sep 2005, 2 commits
  9. 28 Jul 2005, 1 commit
  10. 15 Jul 2005, 1 commit
  11. 07 Jul 2005, 8 commits
    • [CRYPTO] Add faster DES code from Dag Arne Osvik · e1d5dea1
      Committed by Dag Arne Osvik
      I've made a new implementation of DES to replace the old one in the kernel.
      It provides faster encryption on all tested processors apart from the original
      Pentium, and key setup is many times faster.
      
      Speed relative to old kernel implementation:

      Processor                des_setkey  des_encrypt  des3_ede_setkey  des3_ede_encrypt
      Pentium 120 MHz          6.8         0.82         7.2              0.86
      Pentium III 1.266 GHz    5.6         1.19         5.8              1.34
      Pentium M 1.3 GHz        5.7         1.15         6.0              1.31
      Pentium 4 2.266 GHz      5.8         1.24         6.0              1.40
      Pentium 4E 3 GHz         5.4         1.27         5.5              1.48
      StrongARM 1110 206 MHz   4.3         1.03         4.4              1.14
      Athlon XP 2 GHz          7.8         1.44         8.1              1.61
      Athlon 64 2 GHz          7.8         1.34         8.3              1.49
      Signed-off-by: Dag Arne Osvik <da@osvik.no>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [CRYPTO] Remove unused iv field from context structure · a9df3597
      Committed by Herbert Xu
      The iv field in des_ctx/des3_ede_ctx/serpent_ctx has never been used.
      This was noticed by Dag Arne Osvik.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
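      The cleanup is mechanical; a minimal sketch of what it amounts to, with
      field names and sizes assumed for illustration rather than copied from
      crypto/des.c:

          #include <linux/types.h>

          /* Before (illustrative layout): the iv member is dead weight. */
          struct des_ctx_before {
                  u8 iv[8];        /* declared but never read or written */
                  u32 expkey[32];  /* the expanded key the cipher actually uses */
          };

          /* After: only the state the cipher needs. */
          struct des_ctx_after {
                  u32 expkey[32];
          };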
    • [CRYPTO] Add x86_64 asm AES · a2a892a2
      Committed by Andreas Steinmetz
      Implementation:
      ===============
      The encrypt/decrypt code is based on an x86 implementation I did a while
      ago and never published. That unpublished implementation does include an
      assembler-based key schedule and precomputed tables. For simplicity and
      best acceptance, however, I took Gladman's in-kernel code for table
      generation and key schedule for the kernel port of my assembler code and
      modified it to produce the key schedule in the form required by my
      assembler implementation. File locations and Kconfig entries are kept
      similar to the i586 AES assembler implementation.
      It may seem a little strange to use 32-bit I/O and registers in the
      assembler implementation, but this gives the best code size. My
      implementation takes one more instruction per round than Gladman's x86
      assembler, but it requires no stack for local variables or saved
      registers and is less serialized than Gladman's code.
      Note that all comparisons to Gladman's code were done after my code was
      implemented. I used only FIPS PUB 197 for the implementation, so my
      implementation is independent work.
      If anybody has a better assembler solution for x86_64, I'll be pleased
      to have my code replaced with it.
      
      Testing:
      ========
      The implementation passes the in-kernel crypto testing module and I'm
      running it without any problems on my laptop where it is mainly used for
      dm-crypt.
      
      Microbenchmark:
      ===============
      The microbenchmark was done in userspace with compile flags similar to
      those used during the kernel build.
      Encrypt/decrypt is about 35% faster than the generic C implementation.
      As the generic C and my assembler implementations are both table-driven,
      I don't really expect that there is much room for further improvement,
      though I'll be glad to be corrected here.
      The key schedule is about 5% slower than the generic C implementation.
      This is due to the fact that some more work has to be done in the key
      schedule routine to fit the schedule to the assembler implementation.
      
      Code Size:
      ==========
      Encrypt and decrypt are together about 2.1 Kbytes smaller than the
      generic C implementation which is important with regard to L1 cache
      usage. The key schedule routine is about 100 bytes larger than the
      generic C implementation.
      
      Data Size:
      ==========
      There's no difference in data size requirements between the assembler
      implementation and the generic C implementation.
      
      License:
      ========
      Gladman's code is dual BSD/GPL, whereas my assembler code is GPLv2 only
      (I'm not going to change the license for my code). So I had to change
      the module license for the x86_64 aes module from 'Dual BSD/GPL' to
      'GPL' to reflect the most restrictive license within the module.
      Signed-off-by: Andreas Steinmetz <ast@domdv.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
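      To see how such an arch-specific assembler cipher plugs into the crypto
      layer, here is a minimal registration sketch in the style of the 2.6-era
      API. The entry-point names (aes_enc_blk/aes_dec_blk), the context layout
      and the exact callback prototypes are assumptions for illustration, not
      copied from the actual arch/x86_64/crypto files:

          #include <linux/module.h>
          #include <linux/crypto.h>

          struct aes_ctx {                      /* layout assumed for illustration */
                  u32 key_enc[60];
                  u32 key_dec[60];
                  u32 key_length;
          };

          /* Entry points whose bodies live in the .S file (names assumed). */
          asmlinkage void aes_enc_blk(void *ctx, u8 *dst, const u8 *src);
          asmlinkage void aes_dec_blk(void *ctx, u8 *dst, const u8 *src);

          static int aes_set_key(void *ctx, const u8 *key, unsigned int keylen,
                                 u32 *flags)
          {
                  /* Gladman-derived table generation and key schedule,
                   * massaged into the layout the assembler rounds expect. */
                  return 0;
          }

          static struct crypto_alg aes_alg = {
                  .cra_name       = "aes",
                  .cra_flags      = CRYPTO_ALG_TYPE_CIPHER,
                  .cra_blocksize  = 16,
                  .cra_ctxsize    = sizeof(struct aes_ctx),
                  .cra_module     = THIS_MODULE,
                  .cra_list       = LIST_HEAD_INIT(aes_alg.cra_list),
                  .cra_u          = { .cipher = {
                          .cia_min_keysize = 16,
                          .cia_max_keysize = 32,
                          .cia_setkey      = aes_set_key,
                          .cia_encrypt     = aes_enc_blk,
                          .cia_decrypt     = aes_dec_blk,
                  } },
          };

          static int __init aes_init(void)
          {
                  return crypto_register_alg(&aes_alg);
          }

          static void __exit aes_fini(void)
          {
                  crypto_unregister_alg(&aes_alg);
          }

          module_init(aes_init);
          module_exit(aes_fini);
          MODULE_LICENSE("GPL");    /* GPL-only, per the License section above */

      Because the algorithm is registered under the generic name "aes", any
      in-kernel user of the crypto API picks up the assembler version
      transparently once the module is loaded.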
    • [CRYPTO] Add null short circuit to crypto_free_tfm · a61cc448
      Committed by Jesper Juhl
      As far as I'm aware there's a general consensus that functions responsible
      for freeing resources should be able to cope with being passed a NULL
      pointer. This makes sense as it removes the need for every caller to check
      for NULL, thus eliminating the bugs that happen when some forget (it is
      safer to check centrally in the freeing function), and it also makes for
      smaller code overall by dropping all those NULL checks.
      This patch makes it safe to pass the crypto_free_tfm() function a NULL
      pointer. Once this patch is applied we can start removing the NULL checks
      from the callers.
      Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
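      The shape of the change is simply an early return at the top of the free
      routine; a minimal sketch (the teardown body is summarized, not quoted
      from crypto/api.c):

          void crypto_free_tfm(struct crypto_tfm *tfm)
          {
                  /* New: treat NULL as a no-op, mirroring kfree(), so callers
                   * can drop their own "if (tfm)" guards. */
                  if (unlikely(!tfm))
                          return;

                  /* Existing teardown continues unchanged: tear down the ops,
                   * drop the reference on the algorithm and free the tfm. */
          }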
    • [CRYPTO] Handle unaligned iv from encrypt_iv/decrypt_iv · 915e8561
      Committed by Herbert Xu
      Even though cit_iv is now always aligned, the user can still supply an
      unaligned iv through crypto_cipher_encrypt_iv/crypto_cipher_decrypt_iv.
      This patch will check the alignment of the user-supplied iv and copy
      it if necessary.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
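      The underlying test is just a mask against the pointer value; a minimal
      sketch of the idea, with the helper name and scratch-buffer handling
      assumed rather than taken from crypto/cipher.c:

          /* Return an iv pointer that satisfies the algorithm's alignment
           * mask, bouncing through an aligned scratch buffer when the
           * caller's iv does not. */
          static u8 *cipher_align_iv(struct crypto_tfm *tfm, u8 *iv, u8 *scratch)
          {
                  unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);

                  if (!((unsigned long)iv & alignmask))
                          return iv;        /* already aligned: use as-is */

                  memcpy(scratch, iv, crypto_tfm_alg_blocksize(tfm));
                  return scratch;           /* copied back after the operation */
          }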
    • [CRYPTO] Ensure cit_iv is aligned correctly · fbdae9f3
      Committed by Herbert Xu
      This patch ensures that cit_iv is aligned according to cra_alignmask
      by allocating it as part of the tfm structure.  As a side effect the
      crypto layer will also guarantee that the tfm ctx area has enough space
      to be aligned by cra_alignmask.  This allows us to remove the extra
      space reservation from the Padlock driver.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
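      The trick is purely arithmetic: round the context size up to the
      alignment mask so that the iv placed right after it starts on an aligned
      boundary. A minimal sketch (function name assumed; not the actual
      crypto/cipher.c code):

          /* Size of the per-tfm area: the ctx, padded so that the iv placed
           * right after it is aligned to cra_alignmask + 1. */
          static unsigned int ctx_plus_iv_size(struct crypto_alg *alg)
          {
                  unsigned int align = alg->cra_alignmask + 1;
                  unsigned int len = (alg->cra_ctxsize + align - 1) & ~(align - 1);

                  return len + alg->cra_blocksize;    /* iv is one block long */
          }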
    • [CRYPTO] Make crypto_alg_lookup static · 176c3652
      Committed by Adrian Bunk
      This patch makes a needlessly global function static.
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
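      The change is a one-word visibility fix, roughly of this shape (the
      parameter list shown is assumed, not quoted from crypto/api.c):

          /* Before: a global symbol, visible beyond crypto/api.c. */
          struct crypto_alg *crypto_alg_lookup(const char *name);

          /* After: only callable from within crypto/api.c. */
          static struct crypto_alg *crypto_alg_lookup(const char *name);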
    • [CRYPTO] Add alignmask for low-level cipher implementations · 95477377
      Committed by Herbert Xu
      The VIA Padlock device requires the input and output buffers to
      be aligned on 16-byte boundaries.  This patch adds the alignmask
      attribute for low-level cipher implementations to indicate their
      alignment requirements.
      
      The mid-level crypt() function will copy the input/output buffers
      if they are not aligned correctly before they are passed to the
      low-level implementation.
      
      Strictly speaking, some of the software implementations require
      the buffers to be aligned on 4-byte boundaries as they do 32-bit
      loads.  However, it is not clear whether it is better to copy
      the buffers or pay the penalty for unaligned loads/stores.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
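      From a driver's point of view the new attribute is one extra field in its
      crypto_alg declaration; a minimal sketch of how a 16-byte requirement
      such as Padlock's would be expressed (all fields other than the mask are
      illustrative and abridged, not the actual padlock-aes.c hunk):

          static struct crypto_alg padlock_aes_alg = {
                  .cra_name       = "aes",
                  .cra_flags      = CRYPTO_ALG_TYPE_CIPHER,
                  .cra_blocksize  = 16,
                  .cra_alignmask  = 15,   /* buffers must sit on 16-byte boundaries */
                  /* remaining fields (ctxsize, module, cipher ops) as before */
          };

      The mid-level crypt() path then tests the source, destination and iv
      pointers against this mask and copies through aligned scratch space when
      the test fails.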