- 31 March 2018, 9 commits
-
-
By Antoine Tenart
This patch adds hmac(sha224) support to the Inside Secure cryptographic engine driver. Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Antoine Tenart
This patch adds hmac(sha256) support to the Inside Secure cryptographic engine driver. Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Antoine Tenart
This patch uses the state size of the algorithms instead of their digest size when copying the ipad and opad into the context. This doesn't fix anything by itself, as the state and digest sizes are the same for many algorithms, including all the HMACs currently supported by this driver. However, hmac(sha224) uses the sha224 hash function, which has different digest and state sizes. This commit prepares for the addition of such algorithms. Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
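A minimal sketch of why the state size matters, assuming a hypothetical context layout (state_sz, ipad, opad are stand-ins, not the actual safexcel fields): SHA-224 produces a 28-byte digest but keeps the same 8 x 32-bit (32-byte) internal state as SHA-256, so copying only digest-size bytes would truncate the precomputed pads.

```c
#include <crypto/sha.h>		/* SHA224_DIGEST_SIZE, SHA256_DIGEST_SIZE */
#include <linux/string.h>
#include <linux/types.h>

struct example_hmac_ctx {
	u8 ipad[SHA256_DIGEST_SIZE];	/* full 32-byte hash state */
	u8 opad[SHA256_DIGEST_SIZE];
};

static void example_copy_pads(struct example_hmac_ctx *ctx,
			      const u8 *istate, const u8 *ostate,
			      unsigned int state_sz)
{
	/*
	 * For sha224 the digest is 28 bytes but the internal state is
	 * still 32 bytes; copying only SHA224_DIGEST_SIZE bytes would
	 * drop the last state word.
	 */
	memcpy(ctx->ipad, istate, state_sz);
	memcpy(ctx->opad, ostate, state_sz);
}
```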
-
By Antoine Tenart
The token used for encryption and decryption in the skcipher algorithms sets its stat field to "last packet". As these are cipher-only algorithms, there is no hash operation, and thus the "last hash" bit should also be set to tell the internal engine that no hash operation should be performed. This does not fix a bug, but improves the token definition to follow exactly what the datasheet advises. Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Antoine Tenart
This patch updates the way the digest is copied from the state buffer to the result buffer, so that the copy only happens after the state buffer has been DMA unmapped; otherwise the buffer would still be owned by the device. Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Antoine Tenart
This patch improves the send error path, which wasn't handling all error cases. A new label is added, and some of the gotos are updated to point to the right labels, so that the code is more robust against errors. Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Antoine Tenart
This patch fixes a typo in the EIP197_HIA_xDR_WR_CTRL_BUG register name, which should be EIP197_HIA_xDR_WR_CTRL_BUF. This is a cosmetic-only change. Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Antoine Tenart
Small cosmetic patch fixing a typo in the EIP197_HIA_DSE_CFG_ALLWAYS_BUFFERABLE macro: it should be _ALWAYS_. Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Antoine Tenart
This patch moves the digest information from the transformation context to the request context. This fixes cases where the HMAC init functions were called and overrode the digest value for a short period of time, since the HMAC init functions call the SHA init ones, which reset the value. This led to a small percentage of HMACs being computed incorrectly under heavy load. Fixes: 1b44c5a6 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver") Suggested-by: Ofer Heifetz <oferh@marvell.com> Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> [Ofer here did all the work, from seeing the issue to understanding the root cause. I only made the patch.] Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
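A minimal sketch of the idea, with hypothetical names rather than the actual safexcel structures: state that changes per request (here, the intermediate digest) belongs in the ahash request context, which is private to each request, instead of the transformation (tfm) context, which is shared by all concurrent requests on the same tfm.

```c
#include <crypto/hash.h>	/* ahash_request_ctx() */
#include <crypto/sha.h>		/* SHA256_DIGEST_SIZE, SHA256_H0 */
#include <linux/string.h>
#include <linux/types.h>

struct example_ahash_req_ctx {
	u32 digest[SHA256_DIGEST_SIZE / sizeof(u32)];	/* per-request copy */
	bool last_req;
};

static int example_sha256_init(struct ahash_request *areq)
{
	/* Per-request state: safe even if another request races with us. */
	struct example_ahash_req_ctx *req = ahash_request_ctx(areq);

	memset(req, 0, sizeof(*req));
	req->digest[0] = SHA256_H0;	/* ...and so on for H1..H7 */
	return 0;
}
```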
-
- 23 March 2018, 11 commits
-
-
By Jia-Ju Bai
cpt_device_init() is never called in atomic context. The call chain ending at cpt_device_init() is: [1] cpt_device_init() <- cpt_probe(). cpt_probe() is only set as ".probe" in the pci_driver structure "cpt_pci_driver". Despite never being called from atomic context, cpt_device_init() calls mdelay(100), i.e. it busy-waits for 100 ms. That is not necessary and can be replaced with msleep() to avoid busy waiting. This was found by a static analysis tool named DCNS written by myself. Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
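An illustrative sketch of the substitution (the function below is a placeholder, not the cavium/cpt code itself): in process context a sleeping wait yields the CPU, whereas mdelay() spins for the whole duration.

```c
#include <linux/delay.h>

static void example_wait_for_engine_reset(void)
{
	/* Before: burns the CPU for 100 ms even though sleeping is allowed. */
	/* mdelay(100); */

	/* After: schedules away; fine because we run from .probe(). */
	msleep(100);
}
```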
-
By Gary R Hook
Add missing comments for the union members ablkcipher, blkcipher, cipher, and compress. This silences complaints when building the htmldocs. Fixes: 0d7f488f ("crypto: doc - cipher data structures") Signed-off-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Arnd Bergmann
The blackfin architecture is being removed, so this driver won't be used any more. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Leonard Crestez
The decision to rebuild .S_shipped is made based on the relative timestamps of the .S_shipped and .pl files, but git makes this essentially random. This means that the perl script might run anyway (usually at most once per checkout), defeating the whole purpose of _shipped. This can produce nasty occasional build failures downstream, for example for toolchains with a broken perl. Fix this by skipping the rule unless explicit make variables are provided: REGENERATE_ARM_CRYPTO or REGENERATE_ARM64_CRYPTO. The solution is minimally intrusive to make it easier to push into stable. Another report on a similar issue here: https://lkml.org/lkml/2018/3/8/1379 Signed-off-by: Leonard Crestez <leonard.crestez@nxp.com> Cc: <stable@vger.kernel.org> Reviewed-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Vitaly Andrianov
The Keystone Security Accelerator module has a hardware random number generator sub-module. This commit adds the driver for this sub-module. Signed-off-by: Vitaly Andrianov <vitalya@ti.com> [t-kristo@ti.com: dropped one unnecessary dev_err message] Signed-off-by: Tero Kristo <t-kristo@ti.com> Signed-off-by: Murali Karicheri <m-karicheri2@ti.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Vitaly Andrianov
The Keystone SA module has a hardware random number generator sub-module. This commit adds the binding documentation for the KS2 SA HWRNG driver. Signed-off-by: Vitaly Andrianov <vitalya@ti.com> Signed-off-by: Murali Karicheri <m-karicheri2@ti.com> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Gregory CLEMENT
On Armada 7K/8K we need to explicitly enable the register clock. This clock is optional, because not all the SoCs using this IP need it, but for Armada 7K/8K at least it is mandatory. The binding documentation is updated accordingly. Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Gregory CLEMENT
The clock is optional, but if it is present we should manage it. If there is an error while trying to get it, we should exit and report this error. So instead of returning an error only in the -EPROBE_DEFER case, invert the logic and ignore the clock only if it is not present (the -ENOENT case). Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
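A sketch of the optional-clock logic described above (helper name and structure are placeholders, not the inside-secure code): only -ENOENT means "no clock described for this device"; any other error, including probe deferral, must be propagated to the caller.

```c
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/errno.h>

static int example_enable_optional_clk(struct device *dev, struct clk **clk)
{
	*clk = clk_get(dev, NULL);
	if (IS_ERR(*clk)) {
		int err = PTR_ERR(*clk);

		if (err != -ENOENT)	/* real error or -EPROBE_DEFER */
			return err;
		*clk = NULL;		/* clock simply not provided: fine */
		return 0;
	}
	return clk_prepare_enable(*clk);
}
```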
-
By Gregory CLEMENT
In this driver the clock is acquired but never released, either when the driver is removed or when there is an error in the probe function. Using the managed version of clk_get() lets the kernel take care of this. Fixes: 1b44c5a6 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver") Cc: stable@vger.kernel.org Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
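A sketch of the clk_get() to devm_clk_get() conversion in a hypothetical probe function (not the actual driver): the managed variant is released automatically on probe failure and on driver removal, so the explicit clk_put() calls on those paths can be dropped.

```c
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
	struct clk *clk;

	/* Before: clk = clk_get(&pdev->dev, NULL); plus clk_put() on errors. */
	clk = devm_clk_get(&pdev->dev, NULL);
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	/* No clk_put() needed here or in .remove(): devres handles it. */
	return clk_prepare_enable(clk);
}
```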
-
By weiyongjun (A)
Add the missing unlock before returning from the function safexcel_ahash_send_req() in the error handling case. Fixes: cff9a175 ("crypto: inside-secure - move cache result dma mapping to request") Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Acked-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
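A generic sketch of the bug class being fixed (placeholder function, not the safexcel code): every early return inside a locked section must go through an unlock, typically via a shared error label.

```c
#include <linux/errno.h>
#include <linux/spinlock.h>
#include <linux/types.h>

static int example_send_req(spinlock_t *lock, bool fail)
{
	int ret = 0;

	spin_lock_bh(lock);
	if (fail) {
		ret = -ENOMEM;
		goto unlock;	/* a bare "return ret;" here would leak the lock */
	}
	/* ... queue the request to the hardware ... */
unlock:
	spin_unlock_bh(lock);
	return ret;
}
```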
-
By Markus Elfring
Omit an extra message for a memory allocation failure in this function. This issue was detected by using the Coccinelle software. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 16 March 2018, 20 commits
-
-
By Ard Biesheuvel
Tweak the SHA256 update routines to invoke the SHA256 block transform block by block, to avoid excessive scheduling delays caused by the NEON algorithm running with preemption disabled. Also, remove a stale comment that no longer applies now that kernel mode NEON is actually disallowed in some contexts. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Ard Biesheuvel
CBC MAC is strictly sequential, so the current AES code simply processes the input one block at a time. However, we are about to add yield support, which adds a bit of overhead, and which we prefer to align with other modes in terms of granularity (i.e., it is better to have all routines yield every 64 bytes than to make an exception for CBC MAC, which would yield every 16 bytes). So unroll the loop by 4. We still cannot perform the AES algorithm in parallel, but we can at least merge the loads and stores. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Ard Biesheuvel
CBC encryption is strictly sequential, so the current AES code simply processes the input one block at a time. However, we are about to add yield support, which adds a bit of overhead, and which we prefer to align with other modes in terms of granularity (i.e., it is better to have all routines yield every 64 bytes than to make an exception for CBC encryption, which would yield every 16 bytes). So unroll the loop by 4. We still cannot perform the AES algorithm in parallel, but we can at least merge the loads and stores. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Ard Biesheuvel
The AES block mode implementation using Crypto Extensions or plain NEON was written before real hardware existed, and so its interleave factor was made configurable at build time (along with an option to instantiate all interleaved sequences inline rather than as subroutines). We ended up using INTERLEAVE=4 with inlining disabled for both flavors of the core AES routines, so let's stick with that and remove the option to configure this at build time. This makes the code easier to modify, which is nice now that we're adding yield support. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Ard Biesheuvel
When kernel mode NEON was first introduced on arm64, the preserve and restore of the userland NEON state was completely unoptimized, and involved saving all registers on each call to kernel_neon_begin() and restoring them on each call to kernel_neon_end(). For this reason, the NEON crypto code introduced at the time keeps NEON enabled throughout the execution of the crypto API methods, which may include calls back into the crypto API that could result in memory allocation or other actions we should avoid when running with preemption disabled. Since then, we have optimized the kernel mode NEON handling, which now restores lazily (upon return to userland), so the preserve action is only costly the first time it is called after entering the kernel. So let's put the kernel_neon_begin() and kernel_neon_end() calls around the actual invocations of the NEON crypto code, and run the remainder of the code with kernel mode NEON disabled (and preemption enabled). Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
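A sketch of the restructuring described in this series of commits, with hypothetical helper names (the real patches touch the arm64 SHA/AES glue code): the begin/end pair now brackets only the NEON routine itself, so everything else runs with preemption enabled.

```c
#include <asm/neon.h>
#include <linux/types.h>

void example_transform(u8 *state, const u8 *data, int blocks);	/* NEON asm */

static void example_update(u8 *state, const u8 *data, int blocks)
{
	/*
	 * Before: kernel_neon_begin() was called once around the whole
	 * crypto API method, keeping preemption off far longer than the
	 * actual NEON work requires.
	 */
	kernel_neon_begin();
	example_transform(state, data, blocks);
	kernel_neon_end();

	/* Bookkeeping, copies, etc. now run preemptible, NEON disabled. */
}
```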
-
By Ard Biesheuvel
When kernel mode NEON was first introduced on arm64, the preserve and restore of the userland NEON state was completely unoptimized, and involved saving all registers on each call to kernel_neon_begin() and restoring them on each call to kernel_neon_end(). For this reason, the NEON crypto code introduced at the time keeps NEON enabled throughout the execution of the crypto API methods, which may include calls back into the crypto API that could result in memory allocation or other actions we should avoid when running with preemption disabled. Since then, we have optimized the kernel mode NEON handling, which now restores lazily (upon return to userland), so the preserve action is only costly the first time it is called after entering the kernel. So let's put the kernel_neon_begin() and kernel_neon_end() calls around the actual invocations of the NEON crypto code, and run the remainder of the code with kernel mode NEON disabled (and preemption enabled). Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Ard Biesheuvel
When kernel mode NEON was first introduced on arm64, the preserve and restore of the userland NEON state was completely unoptimized, and involved saving all registers on each call to kernel_neon_begin() and restoring them on each call to kernel_neon_end(). For this reason, the NEON crypto code introduced at the time keeps NEON enabled throughout the execution of the crypto API methods, which may include calls back into the crypto API that could result in memory allocation or other actions we should avoid when running with preemption disabled. Since then, we have optimized the kernel mode NEON handling, which now restores lazily (upon return to userland), so the preserve action is only costly the first time it is called after entering the kernel. So let's put the kernel_neon_begin() and kernel_neon_end() calls around the actual invocations of the NEON crypto code, and run the remainder of the code with kernel mode NEON disabled (and preemption enabled). Note that this requires some reshuffling of the registers in the asm code, because the XTS routines can no longer rely on the registers retaining their contents between invocations. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Ard Biesheuvel
When kernel mode NEON was first introduced on arm64, the preserve and restore of the userland NEON state was completely unoptimized, and involved saving all registers on each call to kernel_neon_begin() and restoring them on each call to kernel_neon_end(). For this reason, the NEON crypto code introduced at the time keeps NEON enabled throughout the execution of the crypto API methods, which may include calls back into the crypto API that could result in memory allocation or other actions we should avoid when running with preemption disabled. Since then, we have optimized the kernel mode NEON handling, which now restores lazily (upon return to userland), so the preserve action is only costly the first time it is called after entering the kernel. So let's put the kernel_neon_begin() and kernel_neon_end() calls around the actual invocations of the NEON crypto code, and run the remainder of the code with kernel mode NEON disabled (and preemption enabled). Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Ard Biesheuvel
In order to be able to test yield support under preempt, add a test vector for CRC-T10DIF that is long enough to require multiple iterations (and thus possible preemption between them) of the primary loop of the accelerated x86 and arm64 implementations. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Kees Cook
On the quest to remove all VLAs from the kernel [1], this switches to a pair of kmalloc'd regions instead of using the stack. This also moves the get_random_bytes() call after all allocations (and drops the needless "nbytes" variable). [1] https://lkml.org/lkml/2018/3/7/621 Signed-off-by: Kees Cook <keescook@chromium.org> Reviewed-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
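A sketch of the VLA-to-kmalloc conversion with hypothetical buffer names (not the actual code being patched): heap allocations replace the variable-length stack arrays, and the random fill happens only after every allocation has succeeded.

```c
#include <linux/errno.h>
#include <linux/random.h>
#include <linux/slab.h>
#include <linux/types.h>

static int example_selftest(unsigned int klen, unsigned int blen)
{
	u8 *key, *block;
	int ret = -ENOMEM;

	/* Before: "u8 key[klen];" was a variable-length array on the stack. */
	key = kmalloc(klen, GFP_KERNEL);
	if (!key)
		return -ENOMEM;
	block = kmalloc(blen, GFP_KERNEL);
	if (!block)
		goto out_key;

	/* Fill with random data only once all allocations succeeded. */
	get_random_bytes(key, klen);
	get_random_bytes(block, blen);

	/* ... run the test ... */
	ret = 0;

	kfree(block);
out_key:
	kfree(key);
	return ret;
}
```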
-
By Gary R Hook
The CCP driver copies data between scatter/gather lists and DMA buffers. The length of the requested copy operation must be checked against the available destination buffer length. Reported-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name> Signed-off-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
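A generic sketch of the bounds check being added (linear buffers and a placeholder helper, not the CCP scatterlist code): a copy whose length exceeds what the destination can hold is rejected before any data moves.

```c
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>

static int example_copy_to_dma_buf(void *dst, size_t dst_len,
				   const void *src, size_t req_len)
{
	if (req_len > dst_len)		/* would overrun the DMA buffer */
		return -EINVAL;

	memcpy(dst, src, req_len);
	return 0;
}
```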
-
By Kamil Konieczny
Prevent improper use of the req->result field in the ahash update, init, export and import functions in driver code. A driver should use the ahash request context if it needs to save internal state. Signed-off-by: Kamil Konieczny <k.konieczny@partner.samsung.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Peter Wu
virtio_crypto does not use the crypto_authenc_extractkeys function, so remove this unnecessary dependency. Compiles fine and passes the cryptodev-linux cipher and speed tests from https://wiki.qemu.org/Features/VirtioCrypto Fixes: dbaf0624 ("crypto: add virtio-crypto driver") Signed-off-by: Peter Wu <peter@lekensteyn.nl> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Gilad Ben-Yossef
Add testmgr tests for the newly introduced SM4 ECB symmetric cipher. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Gilad Ben-Yossef
Introduce the SM4 cipher algorithms (OSCCA GB/T 32907-2016). SM4 (GBT.32907-2016) is a cryptographic standard issued by the Organization of State Commercial Administration of China (OSCCA) as an authorized cryptographic algorithm for use within China. SMS4 was originally created for use in protecting wireless networks, and is mandated in the Chinese National Standard for Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure) (GB.15629.11-2003). Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Harsh Jain
Send multiple work requests (WRs) to the hardware when the number of entries received in the scatter list cannot be sent in a single request. Signed-off-by: Harsh Jain <harsh@chelsio.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Harsh Jain
We use ctr(aes) as the fallback for rfc3686(ctr) requests. Send the updated IV to the fallback path. Signed-off-by: Harsh Jain <harsh@chelsio.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Harsh Jain
CBC decryption requires the last ciphertext block as the next IV. If the src and dst buffers are the same, that last block will be overwritten with plaintext. This patch copies the last block before sending the request to the hardware. Signed-off-by: Harsh Jain <harsh@chelsio.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
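A sketch of the idea using linear buffers for simplicity (placeholder names, not the chelsio scatterlist code): for chained CBC decryption the next IV is the last ciphertext block, so it must be saved before an in-place operation overwrites it with plaintext.

```c
#include <linux/string.h>
#include <linux/types.h>

#define EXAMPLE_AES_BLOCK_SIZE 16

static void example_cbc_dec_inplace(u8 *buf, unsigned int len, u8 *iv)
{
	u8 next_iv[EXAMPLE_AES_BLOCK_SIZE];

	/* Save the last ciphertext block before it turns into plaintext. */
	memcpy(next_iv, buf + len - EXAMPLE_AES_BLOCK_SIZE,
	       EXAMPLE_AES_BLOCK_SIZE);

	/* ... submit the in-place decryption to the hardware ... */

	/* Hand the saved block back as the IV for the next request. */
	memcpy(iv, next_iv, EXAMPLE_AES_BLOCK_SIZE);
}
```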
-
By Harsh Jain
A ulptx header cannot have a length greater than 64k. Adjust the length accordingly. Signed-off-by: Harsh Jain <harsh@chelsio.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Harsh Jain
Replace DIV_ROUND_UP with roundup or rounddown. Signed-off-by: Harsh Jain <harsh@chelsio.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
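A quick reminder of what the three kernel helpers return, with values chosen purely for illustration: DIV_ROUND_UP(n, d) gives the number of d-sized chunks needed for n, roundup(x, y) gives x rounded up to the next multiple of y, and rounddown(x, y) gives x rounded down to the previous multiple of y.

```c
#include <linux/kernel.h>

static void example_rounding(void)
{
	unsigned int n = 100, align = 16;

	unsigned int chunks = DIV_ROUND_UP(n, align);	/* 7   */
	unsigned int up     = roundup(n, align);	/* 112 */
	unsigned int down   = rounddown(n, align);	/* 96  */

	(void)chunks; (void)up; (void)down;
}
```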
-