- 13 January 2017, 1 commit

Committed by Ard Biesheuvel

This is a straight port to arm64/NEON of the x86 SSE3 implementation of the ChaCha20 stream cipher. It uses the new skcipher walksize attribute to process the input in strides of 4x the block size.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
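
For context, the round structure that both the x86 SSE3 version and this NEON port vectorize is built from the ChaCha20 quarter round; a minimal scalar C sketch of it (a reference illustration, not the kernel code) is:

```c
#include <stdint.h>
#include <stdio.h>

/* Scalar ChaCha20 quarter round, as defined in RFC 7539. */
static inline uint32_t rotl32(uint32_t v, int n)
{
	return (v << n) | (v >> (32 - n));
}

static void chacha_quarter_round(uint32_t x[16], int a, int b, int c, int d)
{
	x[a] += x[b]; x[d] ^= x[a]; x[d] = rotl32(x[d], 16);
	x[c] += x[d]; x[b] ^= x[c]; x[b] = rotl32(x[b], 12);
	x[a] += x[b]; x[d] ^= x[a]; x[d] = rotl32(x[d], 8);
	x[c] += x[d]; x[b] ^= x[c]; x[b] = rotl32(x[b], 7);
}

/* One double round: four column rounds followed by four diagonal rounds.
 * ChaCha20 performs ten of these per 64-byte block. */
static void chacha_double_round(uint32_t x[16])
{
	chacha_quarter_round(x, 0, 4,  8, 12);
	chacha_quarter_round(x, 1, 5,  9, 13);
	chacha_quarter_round(x, 2, 6, 10, 14);
	chacha_quarter_round(x, 3, 7, 11, 15);
	chacha_quarter_round(x, 0, 5, 10, 15);
	chacha_quarter_round(x, 1, 6, 11, 12);
	chacha_quarter_round(x, 2, 7,  8, 13);
	chacha_quarter_round(x, 3, 4,  9, 14);
}

int main(void)
{
	uint32_t state[16];

	for (int i = 0; i < 16; i++)
		state[i] = i;	/* demo values; real use loads constants, key, counter, nonce */

	for (int i = 0; i < 10; i++)
		chacha_double_round(state);

	printf("state[0] after 20 rounds: 0x%08x\n", state[0]);
	return 0;
}
```

The NEON port runs several independent blocks through these same operations in parallel, which is what the 4x walksize stride supplies input for.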

- 28 December 2016, 1 commit

Committed by Herbert Xu

This patch reverts the following commits: 8621caa0, 80966672. I should not have applied them because they had already been obsoleted by a subsequent patch series. They also cause a build failure because of the subsequent commit 9ae433bc.

Fixes: 9ae433bc ("crypto: chacha20 - convert generic and...")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

- 27 December 2016, 1 commit

Committed by Ard Biesheuvel

This is a straight port to arm64/NEON of the x86 SSE3 implementation of the ChaCha20 stream cipher.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

- 07 December 2016, 2 commits

Committed by Ard Biesheuvel

This is a combination of the Intel algorithm implemented using SSE and PCLMULQDQ instructions from arch/x86/crypto/crc32-pclmul_asm.S, and the new CRC32 extensions introduced for both 32-bit and 64-bit ARM in version 8 of the architecture. Two versions of the above combo are provided, one for CRC32 and one for CRC32C. The PMULL/NEON algorithm is faster, but operates on blocks of at least 64 bytes, and on multiples of 16 bytes only. For the remaining input, or for all input on systems that lack the PMULL 64x64->128 instructions, the CRC32 instructions will be used.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
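
As a minimal, hypothetical sketch of the dispatch rule just described (the helper and its policy are illustrative, not the kernel driver's actual code): the PMULL/NEON path takes a leading chunk that is a multiple of 16 bytes and at least 64 bytes long, and everything else goes to the CRC32-instruction path.

```c
#include <stddef.h>
#include <stdio.h>

/* Illustrative only: partition a buffer length between a PMULL-eligible
 * chunk (multiple of 16 bytes, at least 64 bytes) and the remainder that
 * the CRC32 instructions handle. */
static void split_for_pmull(size_t len, int have_pmull,
			    size_t *pmull_len, size_t *fallback_len)
{
	size_t chunk = len & ~(size_t)15;	/* round down to a 16-byte multiple */

	if (!have_pmull || chunk < 64)
		chunk = 0;

	*pmull_len = chunk;
	*fallback_len = len - chunk;
}

int main(void)
{
	const size_t sizes[] = { 8, 63, 64, 100, 4096, 4100 };

	for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		size_t p, f;

		split_for_pmull(sizes[i], 1, &p, &f);
		printf("len=%zu -> PMULL=%zu, CRC32 insns=%zu\n", sizes[i], p, f);
	}
	return 0;
}
```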

Committed by Ard Biesheuvel

This is a transliteration of the Intel algorithm implemented using SSE and PCLMULQDQ instructions that resides in the file arch/x86/crypto/crct10dif-pcl-asm_64.S, but simplified to only operate on buffers that are 16-byte aligned (but of any size).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
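
For reference, CRC-T10DIF is the 16-bit CRC with polynomial 0x8bb7; a bit-at-a-time C model of the same function the PCLMULQDQ/PMULL code computes (a slow reference sketch, not the accelerated code) is:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bit-at-a-time CRC-T10DIF reference: 16-bit CRC, polynomial 0x8bb7,
 * initial value 0, no bit reflection.  The PMULL-based code computes the
 * same function, just folding 128 bits of input at a time. */
static uint16_t crc_t10dif_ref(uint16_t crc, const uint8_t *buf, size_t len)
{
	for (size_t i = 0; i < len; i++) {
		crc ^= (uint16_t)buf[i] << 8;
		for (int bit = 0; bit < 8; bit++)
			crc = (crc & 0x8000) ? (crc << 1) ^ 0x8bb7 : crc << 1;
	}
	return crc;
}

int main(void)
{
	const char *msg = "123456789";

	printf("crc_t10dif(\"%s\") = 0x%04x\n", msg,
	       crc_t10dif_ref(0, (const uint8_t *)msg, strlen(msg)));
	return 0;
}
```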

- 29 November 2016, 1 commit

Committed by Herbert Xu

The skcipher conversion for ARM missed the select on CRYPTO_SIMD, causing build failures if SIMD was not otherwise enabled.

Fixes: da40e7a4 ("crypto: aes-ce - Convert to skcipher")
Fixes: 211f41af ("crypto: aesbs - Convert to skcipher")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

- 28 November 2016, 1 commit

Committed by Ard Biesheuvel

This integrates both the accelerated scalar and the NEON implementations of SHA-224/256 as well as SHA-384/512 from the OpenSSL project.

Relative performance compared to the respective generic C versions:

              | SHA256-scalar | SHA256-NEON* |  SHA512  |
  ------------+---------------+--------------+----------+
  Cortex-A53  |     1.63x     |     1.63x    |   2.34x  |
  Cortex-A57  |     1.43x     |     1.59x    |   1.95x  |
  Cortex-A73  |     1.26x     |     1.56x    |     ?    |

The core crypto code was authored by Andy Polyakov of the OpenSSL project, in collaboration with whom the upstream code was adapted so that this module can be built from the same version of sha512-armv8.pl. The version in this patch was taken from OpenSSL commit 32bbb62ea634 ("sha/asm/sha512-armv8.pl: fix big-endian support in __KERNEL__ case.")

* The core SHA algorithm is fundamentally sequential, but there is a secondary transformation involved, called the schedule update, which can be performed independently. The NEON version of SHA-224/SHA-256 only implements this part of the algorithm using NEON instructions; the sequential part is always done using scalar instructions.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
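
The "schedule update" mentioned in the footnote is the expansion of the 16 message words of each block into the full 64-word schedule; a plain C reference of that step (not the OpenSSL assembly) looks like this:

```c
#include <stdint.h>
#include <stdio.h>

static inline uint32_t ror32(uint32_t v, int n)
{
	return (v >> n) | (v << (32 - n));
}

/* SHA-256 message schedule: expand the 16 big-endian input words w[0..15]
 * into w[16..63].  Each new word depends only on earlier schedule words,
 * which is why this part can be offloaded to NEON while the compression
 * rounds themselves stay scalar. */
static void sha256_schedule_update(uint32_t w[64])
{
	for (int i = 16; i < 64; i++) {
		uint32_t s0 = ror32(w[i - 15], 7) ^ ror32(w[i - 15], 18) ^ (w[i - 15] >> 3);
		uint32_t s1 = ror32(w[i - 2], 17) ^ ror32(w[i - 2], 19) ^ (w[i - 2] >> 10);

		w[i] = w[i - 16] + s0 + w[i - 7] + s1;
	}
}

int main(void)
{
	uint32_t w[64] = { 0 };

	w[0] = 0x61626380;	/* "abc" followed by the padding bit */
	w[15] = 24;		/* bit length of "abc" */
	sha256_schedule_update(w);
	printf("w[16] = 0x%08x\n", w[16]);
	return 0;
}
```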

- 20 November 2014, 1 commit

Committed by Yazen Ghannam

This module registers a crc32 algorithm and a crc32c algorithm that use the optional CRC32 and CRC32C instructions in ARMv8. Tested on AMD Seattle.

Improvement compared to the crc32c-generic algorithm:
- TCRYPT CRC32C speed test shows ~450% speedup.
- Simple dd write tests to a btrfs filesystem show ~30% speedup.

Signed-off-by: Yazen Ghannam <yazen.ghannam@linaro.org>
Acked-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
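
For illustration, the same CRC32C instructions are reachable from C through the ACLE intrinsics; a minimal userspace sketch (assuming a toolchain flag such as -march=armv8-a+crc, and not the module's actual code) is:

```c
#include <arm_acle.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* CRC32C (Castagnoli) over a buffer using the ARMv8 CRC32C instructions,
 * 8 bytes at a time with a byte-wise tail.  Standard convention: seed with
 * all-ones and invert the result. */
static uint32_t crc32c_armv8(const uint8_t *p, size_t len)
{
	uint32_t crc = 0xffffffffu;

	while (len >= 8) {
		uint64_t v;

		memcpy(&v, p, 8);	/* avoid unaligned access issues */
		crc = __crc32cd(crc, v);
		p += 8;
		len -= 8;
	}
	while (len--)
		crc = __crc32cb(crc, *p++);

	return ~crc;
}

int main(void)
{
	const char *msg = "123456789";

	printf("crc32c(\"%s\") = 0x%08x\n", msg,
	       crc32c_armv8((const uint8_t *)msg, strlen(msg)));
	return 0;
}
```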

- 07 November 2014, 1 commit

Committed by Ard Biesheuvel

This patch implements the AES key schedule generation using ARMv8 Crypto Instructions. It replaces the table-based C implementation in aes_generic.ko, which means we can drop the dependency on that module.

Tested-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 15 May 2014, 6 commits

Committed by Ard Biesheuvel

This adds ARMv8 implementations of AES in ECB, CBC, CTR and XTS modes, both for ARMv8 with Crypto Extensions and for plain ARMv8 NEON.

The Crypto Extensions version can only run on ARMv8 implementations that have support for these optional extensions. The plain NEON version is a table-based yet time-invariant implementation. All S-box substitutions are performed in parallel, leveraging the wide range of ARMv8's tbl/tbx instructions, and the huge NEON register file, which can comfortably hold the entire S-box and still have room to spare for doing the actual computations.

The key expansion routines were borrowed from aes_generic.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>

Committed by Ard Biesheuvel

This patch adds support for the AES-CCM encryption algorithm for CPUs that have support for the AES part of the ARM v8 Crypto Extensions.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>

Committed by Ard Biesheuvel

This patch adds support for the AES symmetric encryption algorithm for CPUs that have support for the AES part of the ARM v8 Crypto Extensions.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
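
As a rough sketch of how the AES part of the Crypto Extensions is used, the AESE/AESMC instructions map to NEON intrinsics as below; this assumes the caller supplies the 11 expanded AES-128 round keys and is an illustration rather than the module's code (compile with something like -march=armv8-a+crypto):

```c
#include <arm_neon.h>
#include <stdint.h>

/* AES-128 single-block encryption with the ARMv8 Crypto Extensions.
 * AESE performs AddRoundKey + SubBytes + ShiftRows and AESMC performs
 * MixColumns, so the nine full rounds are AESE+AESMC and the final round
 * is AESE followed by a plain XOR with the last round key.
 * rk[] must hold the 11 expanded round keys (key expansion not shown). */
static void aes128_encrypt_block(const uint8x16_t rk[11],
				 const uint8_t in[16], uint8_t out[16])
{
	uint8x16_t state = vld1q_u8(in);

	for (int i = 0; i < 9; i++)
		state = vaesmcq_u8(vaeseq_u8(state, rk[i]));

	state = vaeseq_u8(state, rk[9]);
	state = veorq_u8(state, rk[10]);

	vst1q_u8(out, state);
}
```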

Committed by Ard Biesheuvel

This is a port to ARMv8 (Crypto Extensions) of the Intel implementation of the GHASH Secure Hash (used in the Galois/Counter chaining mode). It relies on the optional PMULL/PMULL2 instruction (polynomial multiply long, what Intel call carry-less multiply).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
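
PMULL performs a 64x64->128-bit polynomial (carry-less) multiplication; a plain C model of that primitive, out of which GHASH builds its GF(2^128) multiply before reducing modulo the GCM polynomial, is sketched below (reference only):

```c
#include <stdint.h>
#include <stdio.h>

/* Carry-less 64x64 -> 128-bit multiply: the C model of what a single
 * PMULL/PCLMULQDQ instruction produces.  GHASH combines several of these
 * with a reduction step (the reduction is not shown here). */
static void clmul64(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
	uint64_t h = 0, l = 0;

	for (int i = 0; i < 64; i++) {
		if ((b >> i) & 1) {
			l ^= a << i;
			if (i)
				h ^= a >> (64 - i);
		}
	}
	*hi = h;
	*lo = l;
}

int main(void)
{
	uint64_t hi, lo;

	clmul64(0x87, 0xffffffffffffffffull, &hi, &lo);
	printf("0x%016llx%016llx\n", (unsigned long long)hi, (unsigned long long)lo);
	return 0;
}
```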

Committed by Ard Biesheuvel

This patch adds support for the SHA-224 and SHA-256 Secure Hash Algorithms for CPUs that have support for the SHA-2 part of the ARM v8 Crypto Extensions.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>

Committed by Ard Biesheuvel

This patch adds support for the SHA-1 Secure Hash Algorithm for CPUs that have support for the SHA-1 part of the ARM v8 Crypto Extensions.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>