- 21 Apr 2018, 1 commit
Committed by Salvatore Mesoraca

We avoid various VLAs[1] by using constant expressions for block size and alignment mask.

[1] http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
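A minimal sketch of the kind of change described, assuming a compile-time upper bound on the cipher block size (the constant name and value here are illustrative; the kernel introduces a bound like MAX_CIPHER_BLOCKSIZE for this purpose):

```c
#include <stdint.h>
#include <string.h>

/* Illustrative stand-in for the kernel's compile-time bound. */
#define MAX_CIPHER_BLOCKSIZE 16

/* Before (sketch): bsize is a runtime value, so tmp is a VLA. */
void xor_block_vla(uint8_t *dst, const uint8_t *src, unsigned int bsize)
{
    uint8_t tmp[bsize];             /* variable-length array */

    memcpy(tmp, src, bsize);
    for (unsigned int i = 0; i < bsize; i++)
        dst[i] ^= tmp[i];
}

/* After (sketch): the buffer is bounded by a constant expression, so
 * stack usage is fixed and -Wvla has nothing to warn about. */
void xor_block_fixed(uint8_t *dst, const uint8_t *src, unsigned int bsize)
{
    uint8_t tmp[MAX_CIPHER_BLOCKSIZE];  /* requires bsize <= MAX_CIPHER_BLOCKSIZE */

    memcpy(tmp, src, bsize);
    for (unsigned int i = 0; i < bsize; i++)
        dst[i] ^= tmp[i];
}
```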
- 04 Aug 2017, 1 commit
Committed by Ard Biesheuvel

There are quite a number of occurrences in the kernel of the pattern

    if (dst != src)
        memcpy(dst, src, walk.total % AES_BLOCK_SIZE);
    crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE);

or

    crypto_xor(keystream, src, nbytes);
    memcpy(dst, keystream, nbytes);

where crypto_xor() is preceded or followed by a memcpy() invocation that is only there because crypto_xor() uses its output parameter as one of the inputs. To avoid having to add new instances of this pattern in the arm64 code, which will be refactored to implement non-SIMD fallbacks, add an alternative implementation called crypto_xor_cpy(), taking separate input and output arguments. This removes the need for the separate memcpy().

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
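As a byte-wise model of the new helper's semantics (a sketch; the real crypto_xor_cpy() works in larger strides where alignment allows):

```c
#include <stddef.h>
#include <stdint.h>

/* Three-address xor: dst = src1 ^ src2, with no requirement that dst
 * alias either input, so no priming memcpy() is needed. */
static void xor_cpy(uint8_t *dst, const uint8_t *src1,
                    const uint8_t *src2, size_t len)
{
    for (size_t i = 0; i < len; i++)
        dst[i] = src1[i] ^ src2[i];
}

/*
 * Old pattern:  crypto_xor(keystream, src, nbytes);
 *               memcpy(dst, keystream, nbytes);
 * New pattern:  crypto_xor_cpy(dst, keystream, src, nbytes);
 */
```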
- 11 Feb 2017, 1 commit
Committed by Ard Biesheuvel

Instead of unconditionally forcing 4 byte alignment for all generic chaining modes that rely on crypto_xor() or crypto_inc() (which may result in unnecessary copying of data when the underlying hardware can perform unaligned accesses efficiently), make those functions deal with unaligned input explicitly, but only if the Kconfig symbol HAVE_EFFICIENT_UNALIGNED_ACCESS is set. This will allow us to drop the alignmasks from the CBC, CMAC, CTR, CTS, PCBC and SEQIV drivers.

For crypto_inc(), this simply involves making the 4-byte stride conditional on HAVE_EFFICIENT_UNALIGNED_ACCESS being set, given that it typically operates on 16 byte buffers.

For crypto_xor(), an algorithm is implemented that simply runs through the input using the largest strides possible if unaligned accesses are allowed. If they are not, an optimal sequence of memory accesses is emitted that takes the relative alignment of the input buffers into account, e.g., if the relative misalignment of dst and src is 4 bytes, the entire xor operation will be completed using 4 byte loads and stores (modulo unaligned bits at the start and end). Note that all expressions involving misalign are simply eliminated by the compiler when HAVE_EFFICIENT_UNALIGNED_ACCESS is defined.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
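A simplified sketch of the "largest strides possible" path, assuming unaligned accesses are efficient; the aligned-only access sequence keyed to the relative misalignment of dst and src is omitted here:

```c
#include <stdint.h>
#include <string.h>

static void xor_words(uint8_t *dst, const uint8_t *src, unsigned int len)
{
    /* Walk the buffers one unsigned long at a time; memcpy() compiles
     * down to plain (possibly unaligned) loads and stores on targets
     * with HAVE_EFFICIENT_UNALIGNED_ACCESS. */
    while (len >= sizeof(unsigned long)) {
        unsigned long d, s;

        memcpy(&d, dst, sizeof(d));
        memcpy(&s, src, sizeof(s));
        d ^= s;
        memcpy(dst, &d, sizeof(d));

        dst += sizeof(unsigned long);
        src += sizeof(unsigned long);
        len -= sizeof(unsigned long);
    }

    /* Tail: finish any remaining bytes one at a time. */
    while (len--)
        *dst++ ^= *src++;
}
```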
- 13 Jan 2017, 1 commit
Committed by Gideon Israel Dsouza

Continuing from commit 52f5684c ("kernel: use macros from compiler.h instead of __attribute__((...))"), I submitted 4 patches in total. They are part of a task I've taken up to increase compiler portability in the kernel. I've cleaned up the subsystems under /kernel, /mm, /block and /security; this patch targets /crypto.

<linux/compiler.h> provides macros for various gcc-specific constructs, e.g. __weak for __attribute__((weak)). I've replaced all instances of gcc-specific attributes with the right macros for the crypto subsystem.

I had to make one additional change to compiler-gcc.h for the case when one wants to use __attribute__((aligned)) without specifying an alignment factor. From the gcc docs, this will result in the largest alignment for that data type on the target machine, so I've named the macro __aligned_largest. Please advise if another name is more appropriate.

Signed-off-by: Gideon Israel Dsouza <gidisrael@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
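A sketch of the macro in question, with an illustrative use; the definition follows the commit's description of an aligned attribute carrying no factor:

```c
/* An aligned attribute with no factor asks gcc for the largest useful
 * alignment for the type on the target machine. */
#define __aligned_largest __attribute__((aligned))

/* Illustrative use, replacing a bare __attribute__((aligned)). */
struct hash_state {
    unsigned char buffer[64];
} __aligned_largest;
```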
- 28 Nov 2016, 1 commit
Committed by Herbert Xu

This patch converts lrw over to the skcipher interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
- 26 Nov 2014, 1 commit
Committed by Kees Cook

This adds the module loading prefix "crypto-" to the template lookup as well. For example, attempting to load 'vfat(blowfish)' via AF_ALG now correctly includes the "crypto-" prefix at every level, correctly rejecting "vfat":

    net-pf-38
    algif-hash
    crypto-vfat(blowfish)
    crypto-vfat(blowfish)-all
    crypto-vfat

Reported-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
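For context, the other side of this mechanism is the alias a legitimate template module declares. MODULE_ALIAS_CRYPTO() is the real kernel macro from this series; the pcbc template serves as an illustrative module here:

```c
#include <linux/module.h>
#include <linux/crypto.h>

/* A template module advertises itself under the "crypto-" prefix, so a
 * request for "pcbc(aes)" can autoload this module, while a bogus name
 * such as "vfat" matches nothing in the crypto- alias namespace. */
MODULE_ALIAS_CRYPTO("pcbc");

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("PCBC block cipher algorithm");
```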
- 08 Feb 2008, 1 commit
Committed by David Howells

Convert instances of ERR_PTR(PTR_ERR(p)) to ERR_CAST(p) using:

    perl -spi -e 's/ERR_PTR[(]PTR_ERR[(](.*)[)][)]/ERR_CAST(\1)/' `grep -rl 'ERR_PTR[(]*PTR_ERR' fs crypto net security`

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
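In isolation, the transformation reads like this (hypothetical function; IS_ERR() and ERR_CAST() are the real <linux/err.h> macros):

```c
#include <linux/err.h>
#include <crypto/algapi.h>

/* Hypothetical example: alg may carry an errno encoded as a pointer,
 * which must be returned through a differently typed error pointer. */
static struct crypto_instance *example_alloc(struct crypto_alg *alg)
{
    if (IS_ERR(alg))
        return ERR_CAST(alg);   /* was: ERR_PTR(PTR_ERR(alg)) */

    /* ... normal instance construction would follow ... */
    return NULL;
}
```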
- 11 Jan 2008, 1 commit
Committed by Herbert Xu

This patch replaces the custom xor in CBC with the generic crypto_xor. It changes the operations for in-place encryption slightly, to avoid calling crypto_xor with tmpbuf since it is not necessarily aligned.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
- 02 May 2007, 1 commit
Committed by Herbert Xu

This patch passes the type/mask along when constructing instances of templates. This is in preparation for templates that may support multiple types of instances depending on what is requested. For example, the planned software async crypto driver will use this construct.

For the moment this allows us to check whether the instance constructed is of the correct type and avoid returning success if the type does not match.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
- 07 Feb 2007, 2 commits
Committed by Herbert Xu

This patch allows spawns of specific types (e.g., cipher) to be allocated.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Committed by David Howells

Add PCBC crypto template support as used by RxRPC.

Signed-Off-By: David Howells <dhowells@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
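For reference, PCBC (propagating CBC) chains each block through both the previous plaintext and the previous ciphertext: C_i = E(P_i xor P_{i-1} xor C_{i-1}), with the IV standing in for P_0 xor C_0. A standalone sketch with a toy one-block cipher (not the kernel template itself):

```c
#include <stdint.h>

#define BSIZE 16    /* illustrative block size */

/* Toy stand-in for a real single-block cipher (a byte permutation,
 * not remotely secure). */
static void toy_encrypt_one(uint8_t *block)
{
    for (unsigned int i = 0; i < BSIZE; i++)
        block[i] = (uint8_t)(block[i] * 167 + 13);
}

static void pcbc_encrypt(uint8_t *dst, const uint8_t *src,
                         unsigned int nblocks, uint8_t *iv)
{
    for (unsigned int n = 0; n < nblocks; n++) {
        /* C_i = E(P_i ^ chain), where chain = P_{i-1} ^ C_{i-1}. */
        for (unsigned int i = 0; i < BSIZE; i++)
            dst[i] = src[i] ^ iv[i];
        toy_encrypt_one(dst);

        /* Propagate P_i ^ C_i into the chaining value. */
        for (unsigned int i = 0; i < BSIZE; i++)
            iv[i] = src[i] ^ dst[i];

        src += BSIZE;
        dst += BSIZE;
    }
}
```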
- 21 Sep 2006, 1 commit
Committed by Herbert Xu

This patch adds two block cipher algorithms, CBC and ECB. These are implemented as templates on top of existing single-block cipher algorithms. They invoke the single-block cipher through the new encrypt_one/decrypt_one interface.

This also optimises the in-place encryption and decryption to remove the cost of an IV copy each round.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
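A sketch of that in-place optimisation, again assuming a toy one-block cipher: track the previous ciphertext block with a pointer instead of copying it into the IV buffer every round:

```c
#include <stdint.h>
#include <string.h>

#define BSIZE 16    /* illustrative block size */

/* Toy stand-in for a real single-block cipher. */
static void toy_encrypt_one(uint8_t *block)
{
    for (unsigned int i = 0; i < BSIZE; i++)
        block[i] = (uint8_t)(block[i] * 167 + 13);
}

/* In-place CBC encryption: buf holds plaintext on entry, ciphertext on
 * exit.  prev tracks the previous ciphertext block, so chaining needs
 * no per-round memcpy of the IV. */
static void cbc_encrypt_inplace(uint8_t *buf, unsigned int nblocks,
                                uint8_t *iv)
{
    uint8_t *prev = iv;

    while (nblocks--) {
        for (unsigned int i = 0; i < BSIZE; i++)
            buf[i] ^= prev[i];  /* xor with previous ciphertext (or IV) */
        toy_encrypt_one(buf);

        prev = buf;             /* pointer update, not a copy */
        buf += BSIZE;
    }

    memcpy(iv, prev, BSIZE);    /* chain the final block out once */
}
```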