- 11 January 2008, 40 commits
-
Committed by Herbert Xu
Unfortunately the generic chaining hasn't been ported to all architectures yet, and notably not s390. So this patch restores the chaining that we have been using previously, which does work everywhere. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Herbert Xu
The scatterwalk infrastructure is used by algorithms, so it needs to move out of crypto for future users that may live in drivers/crypto or asm/*/crypto. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Herbert Xu
This patch changes gcm/authenc to return EBADMSG instead of EINVAL for ICV mismatches. This convention has already been adopted by IPsec. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
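For callers, the practical effect is that an authentication failure can now be told apart from a usage error by the return value. A minimal sketch, assuming an AEAD transform and a fully set-up request (the helper name is made up for illustration):

```c
#include <linux/crypto.h>
#include <linux/kernel.h>

/* Hedged sketch: distinguish a tampered message from a setup error. */
static int decrypt_and_verify(struct aead_request *req)
{
	int err = crypto_aead_decrypt(req);

	if (err == -EBADMSG) {
		/* ICV mismatch: the ciphertext or ICV was modified */
		pr_debug("authentication failed\n");
		return err;
	}

	/* 0 on success; -EINVAL and friends indicate genuine usage errors */
	return err;
}
```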
-
Committed by Herbert Xu
The crypto_aead convention for ICVs is to include them directly in the output. If we decided to change this in the future, we would make the ICV (if the algorithm has an explicit one) available in the request itself. For now no algorithm needs this, so this patch changes gcm to conform to this convention. It also adjusts the tcrypt aead tests to take this into account. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Herbert Xu
Currently the gcm(aes) tests can only be run together with all the other ciphers. This patch makes them available on their own as test number 35. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Herbert Xu
The previous code incorrectly included the hash in the verification, which also meant that we'd crash and burn when it came to actually verifying the hash, since we'd go past the end of the SG list. This patch fixes that by subtracting authsize from cryptlen at the start. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Herbert Xu
Having enckeylen as a template parameter makes it a pain for hardware devices that implement ciphers with many key sizes, since each one would have to be registered separately. Since the authenc algorithm is mainly used for legacy purposes, where its key is going to be constructed out of two separate keys, we can in fact embed this value in the key itself. This patch does so by prepending an rtnetlink header to the key that contains the encryption key length. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
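A minimal sketch of how such a composite key could be laid out: an rtnetlink attribute carrying the encryption key length, then the authentication key, then the encryption key. The attribute type value and the param struct mirror the in-tree authenc code but should be treated as assumptions; the helper itself is hypothetical.

```c
#include <asm/byteorder.h>
#include <linux/kernel.h>
#include <linux/rtnetlink.h>
#include <linux/string.h>
#include <linux/types.h>

struct crypto_authenc_key_param {
	__be32 enckeylen;	/* length of the trailing encryption key */
};

/* Hypothetical helper: pack authkey || enckey into one authenc key blob. */
static unsigned int build_authenc_key(u8 *key,
				      const u8 *authkey, unsigned int authkeylen,
				      const u8 *enckey, unsigned int enckeylen)
{
	struct rtattr *rta = (struct rtattr *)key;
	struct crypto_authenc_key_param *param;

	rta->rta_type = 1;	/* CRYPTO_AUTHENC_KEYA_PARAM (assumed value) */
	rta->rta_len = RTA_LENGTH(sizeof(*param));
	param = RTA_DATA(rta);
	param->enckeylen = cpu_to_be32(enckeylen);

	key += RTA_SPACE(sizeof(*param));
	memcpy(key, authkey, authkeylen);
	memcpy(key + authkeylen, enckey, enckeylen);

	return RTA_SPACE(sizeof(*param)) + authkeylen + enckeylen;
}
```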
-
Committed by Herbert Xu
As it stands, authsize is an algorithm parameter which cannot be changed at run time. This is inconvenient because hardware that implements such algorithms would have to register each authsize it supports separately. Since authsize is a property common to all AEAD algorithms, we can add a function setauthsize that sets it at run time, just like setkey. This patch does exactly that and also changes authenc so that authsize is no longer a parameter of its template. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
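A hedged usage sketch of the run-time call this introduces; the algorithm name and the 12-byte ICV (a common choice for IPsec with HMAC-SHA1) are illustrative, not part of the patch:

```c
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/errno.h>

/* Hypothetical helper: allocate an authenc AEAD and set its ICV size at run time. */
static struct crypto_aead *setup_authenc(const u8 *key, unsigned int keylen)
{
	struct crypto_aead *tfm = crypto_alloc_aead("authenc(hmac(sha1),cbc(aes))", 0, 0);

	if (IS_ERR(tfm))
		return tfm;

	if (crypto_aead_setauthsize(tfm, 12) ||	/* truncate the ICV to 12 bytes */
	    crypto_aead_setkey(tfm, key, keylen)) {
		crypto_free_aead(tfm);
		return ERR_PTR(-EINVAL);
	}

	return tfm;
}
```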
-
Committed by Herbert Xu
Since alignment masks are always one less than a power of two, we can use binary OR to find their maximum. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
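A one-function illustration of why this works: masks of the form 2^k - 1 are contiguous runs of low bits, so OR-ing two of them keeps the longer run, which is exactly the larger mask.

```c
/* Illustration only, not kernel code. */
static unsigned int max_alignmask(unsigned int a, unsigned int b)
{
	/* e.g. a = 0x3 (4-byte alignment), b = 0x7 (8-byte alignment) -> 0x7 */
	return a | b;
}
```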
-
Committed by Denis Cheng
The utilities implemented in lib/hexdump.c are more handy; please use them instead. Signed-off-by: Denis Cheng <crquan@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
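A hedged sketch of the lib/hexdump.c helper being switched to; the prefix string and the formatting parameters are illustrative:

```c
#include <linux/kernel.h>
#include <linux/types.h>

/* Dump a buffer: 16 bytes per row, 1-byte groups, offset prefixes, no ASCII column. */
static void dump_buffer(const u8 *buf, size_t len)
{
	print_hex_dump(KERN_DEBUG, "tcrypt: ", DUMP_PREFIX_OFFSET,
		       16, 1, buf, len, false);
}
```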
-
Committed by Jan Glauber
Add test vectors to tcrypt for AES in CBC mode with key sizes 192 and 256. The test vectors are copied from NIST SP 800-38A. Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Tan Swee Heng
This patch adds a large AES CTR mode test vector. The test vector is 4100 bytes in size. It was generated using a C++ program that called Crypto++. Note that this patch considerably increases the size of "struct cipher_testvec" and hence the size of tcrypt.ko. Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Tan Swee Heng
Currently the number of entries in a cipher test vector template is limited by TVMEMSIZE/sizeof(struct cipher_testvec). This patch circumvents the problem by pointing cipher_tv at each entry in the template, rather than at the template itself. Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Herbert Xu
When the data spans a page boundary, CTR may incorrectly process a partial block in the middle, because the blkcipher walking code may supply partial blocks in the middle as long as the total length of the supplied data is more than a block. CTR is supposed to return any unused partial block to the walker in that case. This patch fixes this by doing exactly that: returning partial blocks to the walker unless we received less than a block's worth of data to start with. This also allows us to optimise the bulk of the processing, since we no longer have to worry about partial blocks until the very end. Thanks to Tan Swee Heng for fixes and actually testing this :) Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Mikko Herranen
Add GCM/GMAC support to cryptoapi. GCM (Galois/Counter Mode) is an AEAD mode of operation for any block cipher with a block size of 16. The typical example is AES-GCM. Signed-off-by: Mikko Herranen <mh1@iki.fi> Reviewed-by: Mika Kukkonen <mika.kukkonen@nsn.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
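A hedged sketch of driving the new gcm(aes) algorithm through the AEAD request interface of this era; the helper is hypothetical, and the key, IV and scatterlists are assumed to have been set up by the caller:

```c
#include <linux/crypto.h>
#include <linux/kernel.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Hypothetical helper: one in-place AES-GCM encryption. */
static int gcm_encrypt_one(struct crypto_aead *tfm,
			   struct scatterlist *assoc, unsigned int assoclen,
			   struct scatterlist *sg, unsigned int nbytes, u8 *iv)
{
	struct aead_request *req = aead_request_alloc(tfm, GFP_KERNEL);
	int err;

	if (!req)
		return -ENOMEM;

	/* no completion callback: assumes a synchronous implementation */
	aead_request_set_callback(req, 0, NULL, NULL);
	aead_request_set_assoc(req, assoc, assoclen);
	/* output is the ciphertext followed by the ICV, per the AEAD convention */
	aead_request_set_crypt(req, sg, sg, nbytes, iv);

	err = crypto_aead_encrypt(req);
	aead_request_free(req);
	return err;
}
```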
-
Committed by Mikko Herranen
Add AEAD support to tcrypt, needed by GCM. Signed-off-by: Mikko Herranen <mh1@iki.fi> Reviewed-by: Mika Kukkonen <mika.kukkonen@nsn.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Denys Vlasenko
Analogously to the camellia7 patch, move the "absorb kw2 to other subkeys" and "absorb kw4 to other subkeys" code parts into camellia_setup_tail(). This further reduces source and object code size at the cost of two branches in the key setup code. Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Denys Vlasenko
Move the "key XOR is end of F-function" code part into camellia_setup_tail(); it is sufficiently similar between camellia_setup128 and camellia_setup256. Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Denys Vlasenko
This patch unifies the encrypt/decrypt routines for different key lengths. It reduces the module size by ~25%, with a tiny (less than 1%) speed impact. It also collapses encrypt/decrypt into a more readable (visually shorter) form using macros. Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Denys Vlasenko
Remove unused macro parameters. Use (u8)(expr) instead of (expr) & 0xff, which helps gcc realize it can use simpler instructions. Move the CAMELLIA_FLS macro closer to the encrypt/decrypt routines. Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Herbert Xu
This patch replaces the custom inc/xor in CTR with the generic functions. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
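A hedged sketch of the kind of per-block loop this enables once the mode uses the generic primitives; the walker plumbing and the trailing partial block are omitted, and the helper is hypothetical:

```c
#include <crypto/algapi.h>
#include <linux/crypto.h>

/* Hypothetical: encrypt 'nblocks' full blocks of 'data' in place in CTR mode. */
static void ctr_full_blocks(struct crypto_cipher *tfm, u8 *ctrblk,
			    u8 *keystream, u8 *data,
			    unsigned int nblocks, unsigned int bsize)
{
	while (nblocks--) {
		crypto_cipher_encrypt_one(tfm, keystream, ctrblk);
		crypto_xor(data, keystream, bsize);	/* data ^= E_k(counter block) */
		crypto_inc(ctrblk, bsize);		/* big-endian counter + 1 */
		data += bsize;
	}
}
```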
-
Committed by Herbert Xu
This patch replaces the custom xor in CBC with the generic crypto_xor. It changes the operations for in-place encryption slightly to avoid calling crypto_xor with tmpbuf, since it is not necessarily aligned. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Herbert Xu
All common block ciphers have a block size that's a power of 2. In fact, all of our block ciphers obey this rule. If we require this, then CBC can be optimised to avoid an expensive divide on in-place decryption. I've also changed the saving of the first IV in the in-place decryption case to the last IV, because that lets us use walk->iv (which is already aligned) for the xor operation where alignment is required. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
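An illustration of the arithmetic being relied on here (not the actual CBC code): with a power-of-two block size, a remainder can be computed with a mask instead of a divide.

```c
/* Illustration only: nbytes % bsize when bsize is a power of two. */
static unsigned int partial_bytes(unsigned int nbytes, unsigned int bsize)
{
	return nbytes & (bsize - 1);	/* equivalent to nbytes % bsize */
}
```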
-
Committed by Herbert Xu
This patch replaces the custom xor in CBC with the generic crypto_xor. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Herbert Xu
With the addition of more stream ciphers, we need to curb the proliferation of ad-hoc xor functions. This patch creates a generic pair of functions, crypto_inc and crypto_xor, which perform big-endian increment and exclusive-or, respectively. For optimum performance they both use u32 operations, so the arguments must be aligned as u32 even though they are of type u8 *. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
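A byte-at-a-time reference sketch of what the two helpers compute (the in-kernel versions work on u32 words for speed, hence the alignment requirement noted above):

```c
#include <linux/types.h>

/* Reference semantics of crypto_xor(): dst ^= src over 'size' bytes. */
static void xor_bytes(u8 *dst, const u8 *src, unsigned int size)
{
	while (size--)
		*dst++ ^= *src++;
}

/* Reference semantics of crypto_inc(): add 1 to a big-endian 'size'-byte integer. */
static void inc_be(u8 *a, unsigned int size)
{
	while (size--)
		if (++a[size] != 0)	/* stop once there is no carry */
			break;
}
```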
-
Committed by Tan Swee Heng
This patch implements the Salsa20 stream cipher using the blkcipher interface. The core cipher code comes from Daniel Bernstein's submission to eSTREAM: http://www.ecrypt.eu.org/stream/svn/viewcvs.cgi/ecrypt/trunk/submissions/salsa20/full/ref/ The test vectors come from: http://www.ecrypt.eu.org/stream/svn/viewcvs.cgi/ecrypt/trunk/submissions/salsa20/full/ It has been tested successfully with "modprobe tcrypt mode=34" on a UML instance. Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Herbert Xu
Up until now, ablkcipher algorithms have been identified as type BLKCIPHER with the ASYNC bit set. This is suboptimal because ablkcipher refers to two things. On the one hand it refers to the top-level ablkcipher interface with requests. On the other hand it refers to an algorithm type underneath. As it is, you cannot request a synchronous block cipher algorithm with the ablkcipher interface on top. This is a problem because we want to be able to eventually phase out the blkcipher top-level interface. This patch fixes this by making ABLKCIPHER its own type, just as we have distinct types for HASH and DIGEST. The type is associated with the algorithm implementation only. Which top-level interface is used for synchronous block ciphers is then determined by the mask that's used: if it's a specific mask then the old blkcipher interface is given, otherwise we go with the new ablkcipher interface. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
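A hedged sketch of what this looks like on the allocation side; the type/mask combinations follow the common idiom of the time and should be treated as assumptions rather than as the patch's exact rules:

```c
#include <linux/crypto.h>

/* May return either a synchronous or an asynchronous implementation. */
static struct crypto_ablkcipher *any_cbc_aes(void)
{
	return crypto_alloc_ablkcipher("cbc(aes)", 0, 0);
}

/* Masking out ASYNC insists on a synchronous cipher via the old blkcipher interface. */
static struct crypto_blkcipher *sync_cbc_aes(void)
{
	return crypto_alloc_blkcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
}
```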
-
Committed by Herbert Xu
This patch converts the crypto scatterwalk code to use the generic scatterlist chaining rather than the version specific to crypto. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Jonathan Lynch
Resubmitting this patch, which extends sha256_generic.c to support SHA-224 as described in FIPS 180-2 and RFC 3874. HMAC-SHA-224 as described in RFC 4231 is then supported through the hmac interface. The patch includes test vectors for SHA-224 and HMAC-SHA-224. SHA-224 could be chosen as a hash algorithm when 112 bits of security strength is required. The patch was generated against the 2.6.24-rc1 kernel and tested against 2.6.24-rc1-git14, which includes the fix for the scatter-gather implementation for HMAC. Signed-off-by: Jonathan Lynch <jonathan.lynch@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
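A hedged sketch using the hash API of this era to exercise the new algorithm; the names "sha224" and "hmac(sha224)" come from the patch, while the helper and its parameters are illustrative:

```c
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

/* Hypothetical helper: compute a 28-byte HMAC-SHA-224 digest of 'sg'. */
static int hmac_sha224_digest(const u8 *key, unsigned int keylen,
			      struct scatterlist *sg, unsigned int nbytes,
			      u8 *out)
{
	struct hash_desc desc;
	int err;

	desc.tfm = crypto_alloc_hash("hmac(sha224)", 0, CRYPTO_ALG_ASYNC);
	if (IS_ERR(desc.tfm))
		return PTR_ERR(desc.tfm);
	desc.flags = 0;

	err = crypto_hash_setkey(desc.tfm, key, keylen);
	if (!err)
		err = crypto_hash_digest(&desc, sg, nbytes, out);

	crypto_free_hash(desc.tfm);
	return err;
}
```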
-
Committed by Sebastian Siewior
The setkey() function can be shared with the generic algorithm. Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Sebastian Siewior
No other block mode is 'M' (built as a module) by default. Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Sebastian Siewior
The setkey() function can be shared with the generic algorithm. Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Sebastian Siewior
This patch exports four tables and the set_key() routine. These resources can be shared by other AES implementations (aes-x86_64 for instance). The decryption key has been turned around (deckey[0] is the first piece of the key instead of deckey[keylen+20]). The encrypt/decrypt functions now look identical (except that they use different tables and keys). Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Sebastian Siewior
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Joy Latten
This patch adds countersize to CTR mode. The template is now ctr(algo,noncesize,ivsize,countersize). For example, ctr(aes,4,8,4) indicates that the counter block will be composed of a salt/nonce that is 4 bytes, an IV that is 8 bytes and a counter that is 4 bytes. When noncesize + ivsize < blocksize, CTR initializes the last (blocksize - ivsize - noncesize) bytes of the block to zero. Otherwise the counter block is composed of the IV (and nonce if necessary). If noncesize + ivsize == blocksize, this indicates that the user is passing in the entire counter block; countersize then indicates how many bytes of the counter block to use as the counter for incrementing. CTR increments the counter portion by 1 and begins encryption with that value. Note that CTR assumes the counter portion of the block that will be incremented is stored in big endian. Signed-off-by: Joy Latten <latten@austin.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
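A worked layout for the example above, ctr(aes,4,8,4), written as an illustrative struct rather than actual kernel code:

```c
#include <linux/types.h>

/* Illustration of the 16-byte AES counter block for ctr(aes,4,8,4). */
struct ctr_aes_4_8_4_block {
	u8 nonce[4];	/* 4-byte salt/nonce                                       */
	u8 iv[8];	/* 8-byte per-message IV                                   */
	__be32 counter;	/* 4-byte big-endian counter, incremented by 1 per block   */
};
/* 4 + 8 + 4 == 16 == AES block size */
```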
-
Committed by Denys Vlasenko
Move the huge unrolled pieces of code (3 screenfuls) at the end of the 128/256 key setup routines into a common camellia_setup_tail() and convert them to a loop there. The loop is still unrolled six times, so the performance hit is very small but the code-size win is big. Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com> Acked-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Denys Vlasenko
Optimize GETU32 to use a 4-byte memcpy (modern gcc will convert such a memcpy to a single move instruction on i386). The original GETU32 did four byte fetches and shifted/XORed those together. Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com> Acked-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
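A sketch of the idea (not the exact macro from the patch): the original combined four single-byte loads with shifts and XORs, while the memcpy form lets the compiler emit one load.

```c
#include <asm/byteorder.h>
#include <linux/string.h>
#include <linux/types.h>

/* Before (schematically):
 *   ((u32)p[0] << 24) ^ ((u32)p[1] << 16) ^ ((u32)p[2] << 8) ^ (u32)p[3]
 */
static inline u32 getu32(const u8 *p)
{
	u32 v;

	memcpy(&v, p, 4);	/* gcc turns this into a single load on i386 */
	return be32_to_cpu(v);	/* keep the big-endian interpretation */
}
```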
-
Committed by Denys Vlasenko
Rename some macros to shorter names: CAMELLIA_RR8 -> ROR8, making it easier to see that it is just a right rotation with nothing camellia-specific in it, and CAMELLIA_SUBKEY_L() -> SUBKEY_L(), which is just shorter. Move the be32 <-> cpu conversions out of en/decrypt128/256 and into camellia_en/decrypt; there is no reason to have that code duplicated twice. Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com> Acked-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Denys Vlasenko
Move code blocks around so that related pieces are closer together: for example, the CAMELLIA_ROUNDSM macro does not need to be separated from the rest of the code by a huge array of constants. Remove unused macros (COPY4WORD, SWAP4WORD, XOR4WORD[2]). Drop the SUBL() and SUBR() macros, which only obscure things; the same goes for the CAMELLIA_SP1110() macro and the KEY_TABLE_TYPE typedef. Remove useless comments such as /* encryption */ in front of camellia_encrypt128(...), which is obvious enough already. Combine the swap with the copying at the beginning/end of encrypt/decrypt. Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com> Acked-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Denys Vlasenko
Currently the twofish cipher key setup code has unrolled loops: approximately 70-100 instructions are repeated 40 times. As a result, the twofish module is the biggest module in crypto/*. The unrolling produces x2.5 more code (+18k on i386) and speeds up key setup by 7%: 41128 twofish_setkey/sec unrolled versus 38148 twofish_setkey/sec with a loop (CALC_K256: ~100 insns each, CALC_K192: ~90 insns, CALC_K: ~70 insns). The attached patch removes this unrolling. Per "$ size */twofish_common.o": crypto.org/twofish_common.o has text=37920 data=0 bss=0 dec=37920 hex=9420, while crypto/twofish_common.o has text=13209 data=0 bss=0 dec=13209 hex=3399. Run tested (modprobe tcrypt reports ok). Please apply. Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-