- 05 Jul 2016, 2 commits
-
-
Committed by Tudor Ambarus
Add RSA support to the caam driver. Initial author is Yashpal Dutta <yashpal.dutta@freescale.com>.

Signed-off-by: Tudor Ambarus <tudor-dan.ambarus@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Salvatore Benedetto
Drop all asn1-related code and use the new rsa_helper functions rsa_parse_[pub|priv]_key for parsing the key.

Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 01 Jul 2016, 6 commits
-
-
Committed by Bin Liu
The arm-neon-sha implementations have a cra_priority of 150...300, so increase the omap-sham priority to 400 to ensure it sits above any software algorithm.

Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
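For reference, a minimal sketch of where cra_priority lives when registering a hash algorithm; apart from the 400 value (chosen to beat the 150...300 range above), every name and field below is a placeholder, not omap-sham's actual code.

/* Illustrative sketch; names prefixed with sketch_ are invented. */
#include <crypto/hash.h>
#include <linux/module.h>

static int sketch_sha1_init(struct ahash_request *req)   { return 0; }
static int sketch_sha1_update(struct ahash_request *req) { return 0; }
static int sketch_sha1_final(struct ahash_request *req)  { return 0; }

static struct ahash_alg sketch_sha1_alg = {
    .init   = sketch_sha1_init,
    .update = sketch_sha1_update,
    .final  = sketch_sha1_final,
    .halg = {
        .digestsize = 20,               /* SHA-1 digest size */
        .base = {
            .cra_name        = "sha1",
            .cra_driver_name = "sketch-sha1",
            .cra_priority    = 400,     /* above arm-neon-sha's 150...300 */
            .cra_flags       = CRYPTO_ALG_ASYNC,
            .cra_blocksize   = 64,      /* SHA-1 block size */
            .cra_module      = THIS_MODULE,
        },
    },
};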
-
Committed by Herbert Xu
This patch replaces use of the obsolete ablkcipher with skcipher. It also removes shash_fallback, which is totally unused.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
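A rough sketch of the direction of this conversion: a software fallback is now allocated as a crypto_skcipher rather than a crypto_ablkcipher. The "cbc(aes)" name and the function are examples only, not this driver's code.

/* Illustrative sketch; names prefixed with sketch_ are invented. */
#include <crypto/skcipher.h>
#include <linux/err.h>
#include <linux/printk.h>

static struct crypto_skcipher *sketch_alloc_fallback(void)
{
    struct crypto_skcipher *fallback;

    /* Ask for a synchronous implementation that needs no fallback itself. */
    fallback = crypto_alloc_skcipher("cbc(aes)", 0,
                                     CRYPTO_ALG_ASYNC |
                                     CRYPTO_ALG_NEED_FALLBACK);
    if (IS_ERR(fallback))
        pr_err("failed to allocate cbc(aes) fallback: %ld\n",
               PTR_ERR(fallback));

    return fallback;
}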
-
Committed by Herbert Xu
This patch replaces use of the obsolete ablkcipher with skcipher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Herbert Xu
This patch replaces use of the obsolete ablkcipher with skcipher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Herbert Xu
This patch replaces use of the obsolete ablkcipher with skcipher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Herbert Xu
This patch replaces use of the obsolete ablkcipher with skcipher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 28 Jun 2016, 1 commit
-
-
Committed by Arnd Bergmann
The ARM allmodconfig build currently warns because the ux500 crypto driver does not work well with the jump label implementation that we started using for dynamic debug, which breaks building with 'gcc -O0':

  In file included from /git/arm-soc/include/linux/jump_label.h:105:0,
                   from /git/arm-soc/include/linux/dynamic_debug.h:5,
                   from /git/arm-soc/include/linux/printk.h:289,
                   from /git/arm-soc/include/linux/kernel.h:13,
                   from /git/arm-soc/include/linux/clk.h:16,
                   from /git/arm-soc/drivers/crypto/ux500/hash/hash_core.c:16:
  /git/arm-soc/arch/arm/include/asm/jump_label.h: In function 'hash_set_dma_transfer':
  /git/arm-soc/arch/arm/include/asm/jump_label.h:13:7: error: asm operand 0 probably doesn't match constraints [-Werror]
     asm_volatile_goto("1:\n\t"

Turning off compiler optimizations has never really been supported here, and it is only used when debugging the driver. I have not found a good reason for doing this here, other than a misguided attempt to produce more readable assembly output. Also, the driver is only used in obsolete hardware that almost certainly nobody will spend time debugging any more. This just removes the -O0 flag from the compiler options.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 24 Jun 2016, 4 commits
-
-
Committed by Bin Liu
Add software fallback support for small crypto requests. In these cases it is undesirable to use DMA, as setting it up is itself a rather heavy operation. This gives about 40% extra performance in the IPsec use case.

Signed-off-by: Bin Liu <b-liu@ti.com>
[t-kristo@ti.com: dropped the extra traces, updated some comments on the code]
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
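An illustrative sketch of the idea (not the driver's actual code): requests below a size threshold are handed to a software skcipher fallback instead of paying the DMA setup cost. The threshold value, function name and fallback handling are assumptions.

/* Illustrative sketch; names prefixed with SKETCH_/sketch_ are invented. */
#include <crypto/skcipher.h>
#include <linux/errno.h>

#define SKETCH_DMA_THRESHOLD 200 /* bytes; hypothetical cut-off */

static int sketch_handle_request(struct skcipher_request *req,
                                 struct crypto_skcipher *fallback)
{
    if (req->cryptlen < SKETCH_DMA_THRESHOLD) {
        /* Too small to be worth a DMA setup: hand off to software. */
        SKCIPHER_REQUEST_ON_STACK(subreq, fallback);

        skcipher_request_set_tfm(subreq, fallback);
        skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL);
        skcipher_request_set_crypt(subreq, req->src, req->dst,
                                   req->cryptlen, req->iv);
        return crypto_skcipher_encrypt(subreq);
    }

    /* Large request: worth building SG lists and programming the DMA. */
    return -EINPROGRESS; /* placeholder for the asynchronous DMA path */
}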
-
Committed by Lokesh Vutla
The extra call to dmaengine_terminate_all() is not needed, as the DMA is not running at this point. This improves performance slightly.

Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com>
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Tero Kristo
Change the crypto queue size from 1 to 10 for the omap SHA driver. This should allow clients to enqueue requests more effectively, avoiding the serialization of whole crypto sequences and giving extra performance.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
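A minimal sketch of how a driver sizes its internal crypto queue; the macro name and surrounding structure are illustrative, only the queue length of 10 reflects the change above.

/* Illustrative sketch; names prefixed with SKETCH_/sketch_ are invented. */
#include <crypto/algapi.h>
#include <linux/spinlock.h>

#define SKETCH_QUEUE_LENGTH 10 /* was effectively 1 before the change */

struct sketch_dev {
    spinlock_t          lock;
    struct crypto_queue queue;
};

static void sketch_dev_init(struct sketch_dev *dd)
{
    spin_lock_init(&dd->lock);
    /* Up to SKETCH_QUEUE_LENGTH requests may now sit in the backlog. */
    crypto_init_queue(&dd->queue, SKETCH_QUEUE_LENGTH);
}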
-
Committed by Tero Kristo
Calling the runtime PM API for every block causes a serious performance hit for crypto operations done on a long buffer. As crypto is performed on page boundaries, encrypting a large buffer results in a series of per-page crypto operations, and the runtime PM API ends up being called that many times. Convert the driver to use runtime PM autosuspend instead, with a default timeout value of 1 second. This results in up to ~50% speedup.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
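A minimal sketch of the runtime-PM autosuspend pattern described above, assuming a 1000 ms delay; the function names are placeholders, not the driver's actual symbols.

/* Illustrative sketch; names prefixed with SKETCH_/sketch_ are invented. */
#include <linux/device.h>
#include <linux/pm_runtime.h>

#define SKETCH_AUTOSUSPEND_DELAY_MS 1000

static void sketch_pm_setup(struct device *dev)
{
    pm_runtime_set_autosuspend_delay(dev, SKETCH_AUTOSUSPEND_DELAY_MS);
    pm_runtime_use_autosuspend(dev);
    pm_runtime_enable(dev);
}

static void sketch_request_done(struct device *dev)
{
    /*
     * Instead of pm_runtime_put() after every block, mark the device busy
     * and let it suspend only once the autosuspend delay has expired.
     */
    pm_runtime_mark_last_busy(dev);
    pm_runtime_put_autosuspend(dev);
}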
-
- 23 Jun 2016, 10 commits
-
-
Committed by Romain Perier
Now that crypto requests are chained together at the DMA level, we increase the size of the crypto queue for each engine. The result is that, as the backlog list is reached later, it does not stop the crypto stack from sending asynchronous requests, so more cryptographic tasks are processed by the engines.

Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Romain Perier
The Cryptographic Engines and Security Accelerators (CESA) support Multi-Packet Chain Mode. With this mode enabled, multiple TDMA requests can be chained and processed by the hardware without software intervention. This mode was already activated, but the crypto requests were not chained together. Chaining them significantly reduces the number of IRQs: instead of being interrupted at the end of each crypto request, we are interrupted only at the end of the last cryptographic request processed by the engine. This commit refactors the code, changes the code architecture and adds the required data structures to chain cryptographic requests together before sending them to an engine (stopped or possibly already running).

Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Romain Perier
This commit adds support for fine-grained load balancing on multi-engine IPs. The engine is pre-selected based on its current load and on the weight of the crypto request that is about to be processed. The global crypto queue is also moved to each engine. These changes are required to allow chaining crypto requests at the DMA level. By using a crypto queue per engine, we make sure that the state of the TDMA chain stays synchronized with the crypto queue. We also reduce contention on 'cesa_dev->lock' and improve parallelism.

Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Romain Perier
Currently the crypto requests are sent to engines sequentially. This commit moves the SRAM I/O operations from the prepare functions to the step functions. It provides flexibility for future work and allows a request to be prepared while the engine is running.

Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Romain Perier
So far, the 'process' operation was used to check whether the current request had been correctly handled by the engine and, if so, to copy information from the SRAM to main memory. Now we split this operation: we keep 'process', which still checks whether the request was correctly handled by the engine, and we add a new 'complete' operation that copies the content of the SRAM to memory. This will soon become useful when we want to call the process and complete operations from different locations depending on the type of the request (different cleanup logic).

Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Romain Perier
Currently, the only way to access the TDMA chain is to use the 'req' union from a mv_cesa_{ablkcipher,ahash} request. This will soon become a problem if we want to handle TDMA chaining vs standard/non-DMA processing in a generic way (with generic functions at the cesa.c level detecting whether the request should be queued at the DMA level or not). Hence the decision to move the chain field to the mv_cesa_req level, at the expense of adding two void * fields to all request contexts (including non-DMA ones). To limit the overhead, we get rid of the type field, which can now be deduced from the req->chain.first value. Once these changes are done the union is no longer needed, so remove it and move mv_cesa_ablkcipher_std_req and mv_cesa_req into mv_cesa_ablkcipher_req directly. There is also no need to keep the 'base' field in the union of mv_cesa_ahash_req, so move it into the upper structure.

Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Romain Perier
Add a TDMA descriptor at the end of the request for copying the output IV vector via a DMA transfer. This is a good way to offload as much processing as possible onto the DMA and the crypto engine. It is also required for processing multiple cipher requests in chained mode, as otherwise the content of the IV vector would be overwritten by the last processed request.

Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Romain Perier
So far, the way the type of a TDMA operation was checked was wrong: the type mask has to be applied to the flags in order to extract the part that actually encodes the operation type.

Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
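An illustration of the masking fix: the flag layout and macro names below are hypothetical, not the driver's real definitions; the point is that the type bits must be masked out before comparing.

/* Illustrative sketch; names prefixed with SKETCH_/sketch_ are invented. */
#include <linux/bitops.h>
#include <linux/types.h>

#define SKETCH_TDMA_TYPE_MSK  GENMASK(2, 0) /* low bits encode the type */
#define SKETCH_TDMA_TYPE_DATA 0x1
#define SKETCH_TDMA_TYPE_IV   0x2

static inline bool sketch_tdma_is(u32 flags, u32 type)
{
    /* Wrong: (flags & type) also matches flags that merely share a bit. */
    /* Right: compare only the type field of the flags word. */
    return (flags & SKETCH_TDMA_TYPE_MSK) == type;
}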
-
Committed by Romain Perier
Add a BUG_ON() call when the driver tries to launch a crypto request while the engine is still processing the previous one. This replaces a silent system hang with a verbose kernel panic and the associated backtrace, letting the user know that something went wrong in the CESA driver.

Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Romain Perier
Add a macro constant for the size of the crypto queue instead of using a numeric value directly. It will be easier to maintain in case we add more than one crypto queue of the same size.

Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 20 Jun 2016, 2 commits
-
-
Committed by Tudor Ambarus
EXTRA_CFLAGS is still supported, but its usage is deprecated.

Signed-off-by: Tudor Ambarus <tudor-dan.ambarus@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
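For reference, the kbuild-preferred spelling is ccflags-y. A hypothetical Makefile fragment (the -DDEBUG flag and object name are examples only, not the caam Makefile's actual contents):

# Deprecated form:
#   EXTRA_CFLAGS := -DDEBUG
# Preferred kbuild form, scoped to this Makefile:
ccflags-y := -DDEBUG

obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM) += caam.o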
-
Committed by Arnd Bergmann
An endianness fix mistakenly used higher_32_bits() instead of upper_32_bits(), and that doesn't exist:

  drivers/crypto/caam/desc_constr.h: In function 'append_ptr':
  drivers/crypto/caam/desc_constr.h:84:75: error: implicit declaration of function 'higher_32_bits' [-Werror=implicit-function-declaration]
     *offset = cpu_to_caam_dma(ptr);

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 261ea058 ("crypto: caam - handle core endianness != caam endianness")
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
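A quick illustration of the helpers involved: upper_32_bits() and lower_32_bits() are the kernel-provided macros for splitting a 64-bit value such as a DMA address; higher_32_bits() was never one of them. The function name below is made up for the example.

/* Illustrative sketch; names prefixed with sketch_ are invented. */
#include <linux/kernel.h>
#include <linux/types.h>

static void sketch_split_dma_addr(dma_addr_t ptr, u32 *hi, u32 *lo)
{
    *hi = upper_32_bits(ptr); /* bits 63..32 (0 when DMA addresses are 32-bit) */
    *lo = lower_32_bits(ptr); /* bits 31..0 */
}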
-
- 13 Jun 2016, 1 commit
-
-
Committed by Bhaktipriya Shridhar
alloc_workqueue() replaces the deprecated create_workqueue(). The workqueue device_reset_wq has work item &reset_data->reset_work per adf_reset_dev_data. The workqueue pf2vf_resp_wq, a workqueue for PF2VF responses, has work item &pf2vf_resp->pf2vf_resp_work per pf2vf_resp. The workqueue adf_vf_stop_wq is used to call adf_dev_stop() asynchronously. Dedicated workqueues have been used in all cases since the work items are involved in the operation of crypto, which can be used in the I/O path that is depended upon during memory reclaim. Hence WQ_MEM_RECLAIM has been set to guarantee forward progress under memory pressure. Since there is only a fixed number of work items, an explicit concurrency limit is unnecessary.

Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
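A sketch of the create_workqueue() to alloc_workqueue() conversion; treat the queue name string and flags as illustrative rather than the driver's exact code.

/* Illustrative sketch; the queue name string is an assumption. */
#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *device_reset_wq;

static int sketch_wq_init(void)
{
    /*
     * Old: device_reset_wq = create_workqueue("qat_device_reset_wq");
     * New: WQ_MEM_RECLAIM guarantees forward progress under memory
     * pressure; max_active = 0 keeps the default concurrency.
     */
    device_reset_wq = alloc_workqueue("qat_device_reset_wq",
                                      WQ_MEM_RECLAIM, 0);
    return device_reset_wq ? 0 : -ENOMEM;
}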
-
- 08 Jun 2016, 7 commits
-
-
Committed by LEROY Christophe
This will allow IPsec on SEC1.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by LEROY Christophe
SEC1 doesn't have the IPSEC_ESP descriptor type, but it is able to perform IPsec using HMAC_SNOOP_NO_AFEU, which also exists on SEC2. In order to define descriptor templates for SEC1 without breaking SEC2+, we have to give lower priority to HMAC_SNOOP_NO_AFEU so that SEC2+ selects IPSEC_ESP and not the less performant HMAC_SNOOP_NO_AFEU. This is done by adding a priority field to the template: if the field is 0, the default priority is used, otherwise the value in the field is used.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by LEROY Christophe
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by LEROY Christophe
This patch enhances the IPSEC_ESP-related functions so that they also support the same operations with descriptor type HMAC_SNOOP_NO_AFEU. The differences between the two descriptor types are:
* pointers 2 and 3 are swapped (Confidentiality key and Primary EU Context IN)
* HMAC_SNOOP_NO_AFEU has CICV out in pointer 6
* HMAC_SNOOP_NO_AFEU has no primary EU context out, so we get it from the end of data out

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by LEROY Christophe
In preparation for IPsec on SEC1, the first step is to make the mapping helpers more generic so that they can also be used by the AEAD functions. First, the functions are moved before the IPsec functions in talitos.c. talitos_sg_unmap() and unmap_sg_talitos_ptr() are merged as they are quite similar, the second one handling the SEC1 case and calling the first one for SEC2. map_sg_in_talitos_ptr() and map_sg_out_talitos_ptr() are merged into talitos_sg_map() and enhanced to support zones with an offset, as used for AEAD. The actual mapping is now performed outside that helper. The DMA sync is also done outside so that it is not performed several times. talitos_edesc_alloc() size calculations are fixed to also take the AEAD-specific parts into account for SEC1.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by LEROY Christophe
In order to be able to use the mapping/unmapping helpers for IPsec, they need to be moved higher up in the file.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by LEROY Christophe
Use helpers for all modifications to talitos_ptr, in preparation for the implementation of AEAD for SEC1. to_talitos_ptr_extent_clear() has been removed in favor of to_talitos_ptr_ext_set(), which sets any value, and to_talitos_ptr_ext_or(), which ORs the extent field with a value. Names have been shortened to help keep lines within 80 characters.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 07 Jun 2016, 1 commit
-
-
Committed by Lokesh Vutla
Algorithms can be registered only once, so skip registration if the algorithms are already registered (i.e. in case we have two AES cores in the system).

Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com>
Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
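An illustrative guard for this pattern (names invented): only the first probed instance registers the algorithms; later instances, such as a second AES core, skip registration.

/* Illustrative sketch; names prefixed with sketch_ are invented. */
#include <linux/atomic.h>
#include <linux/crypto.h>

static atomic_t sketch_algs_registered = ATOMIC_INIT(0);

static int sketch_register_algs(struct crypto_alg *algs, int count)
{
    /* Register only on the first call; subsequent probes skip this. */
    if (atomic_inc_return(&sketch_algs_registered) > 1)
        return 0;

    return crypto_register_algs(algs, count);
}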
-
- 31 May 2016, 5 commits
-
-
Committed by Krzysztof Kozlowski
Bring some consistency by:
1. Replacing fixed-space indentation of structure members with just tabs.
2. Removing the indentation between type and name in declarations of local variables. The driver was mixing such indentation with the lack of it. When removing the indentation, reorder variables in reverse-Christmas-tree order, with initialized variables first.

Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Acked-by: Vladimir Zapolskiy <vz@mleia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Horia Geantă
This basically adds support for the ls1043a platform.

Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Horia Geantă
There are SoCs like LS1043A where CAAM endianness (BE) does not match the default endianness of the core (LE). Moreover, there are requirements for the driver to handle cases like CPU_BIG_ENDIAN=y on ARM-based SoCs. This requires a complete rewrite of the I/O accessors: the PPC-specific accessors - {in,out}_{le,be}XX - are replaced with generic ones - io{read,write}[be]XX. Endianness is detected dynamically (at runtime) to allow for multiplatform kernels, e.g. running the same kernel image on LS1043A (BE CAAM) and LS2080A (LE CAAM) armv8-based SoCs. While here: debugfs entries need to take into consideration the endianness of the core when displaying data. Add the necessary glue code so the entries remain the same, but are read properly regardless of the core and/or SEC endianness. Note: pdb.h fixes only what is currently being used (IPsec).

Reviewed-by: Tudor Ambarus <tudor-dan.ambarus@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Alex Porosanu <alexandru.porosanu@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
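A minimal sketch of runtime endianness handling with the generic accessors mentioned above; the flag name and wrapper functions are illustrative only, and the detection logic that sets the flag is omitted.

/* Illustrative sketch; names prefixed with sketch_ are invented. */
#include <linux/io.h>
#include <linux/types.h>

static bool sketch_little_end; /* assumed to be set once at probe time */

static inline u32 sketch_rd_reg32(void __iomem *reg)
{
    /* Pick the accessor matching the device endianness at runtime. */
    if (sketch_little_end)
        return ioread32(reg);

    return ioread32be(reg);
}

static inline void sketch_wr_reg32(void __iomem *reg, u32 data)
{
    if (sketch_little_end)
        iowrite32(data, reg);
    else
        iowrite32be(data, reg);
}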
-
Committed by Cristian Stoica
The offset field is 13 bits wide; make sure we don't overwrite more than that in the caam hardware scatter-gather structure.

Signed-off-by: Cristian Stoica <cristian.stoica@freescale.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
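A sketch of constraining a value to a 13-bit hardware field; the macro, struct layout and field names are invented, only the 13-bit width comes from the description above.

/* Illustrative sketch; names prefixed with SKETCH_/sketch_ are invented. */
#include <linux/bitops.h>
#include <linux/types.h>

#define SKETCH_SG_OFFSET_MASK GENMASK(12, 0) /* 13-bit offset field */

struct sketch_sg_entry {
    u64 ptr;
    u32 len;
    u32 bpid_offset; /* offset lives in the low 13 bits */
};

static void sketch_set_offset(struct sketch_sg_entry *sge, u32 offset)
{
    /* Preserve the bits above the offset field; never overwrite them. */
    sge->bpid_offset &= ~SKETCH_SG_OFFSET_MASK;
    sge->bpid_offset |= offset & SKETCH_SG_OFFSET_MASK;
}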
-
Committed by Tadeusz Struk
The sizeof(*ctx->dec_cd) and sizeof(*ctx->enc_cd) are equal, but we should use the correct one when freeing memory anyway.

Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 28 May 2016, 1 commit
-
-
Committed by Arnd Bergmann
Most users of IS_ERR_VALUE() in the kernel are wrong, as they pass an 'int' into a function that takes an 'unsigned long' argument. This happens to work because the type is sign-extended on 64-bit architectures before it gets converted into an unsigned type.

However, anything that passes an 'unsigned short' or 'unsigned int' argument into IS_ERR_VALUE() is guaranteed to be broken, as are 8-bit integers and types that are wider than 'unsigned long'. Andrzej Hajda has already fixed a lot of the worst abusers that were causing actual bugs, but it would be nice to prevent any users that are not passing 'unsigned long' arguments.

This patch changes all users of IS_ERR_VALUE() that I could find on 32-bit ARM randconfig builds and x86 allmodconfig. For the moment, this doesn't change the definition of IS_ERR_VALUE() because there are probably still architecture specific users elsewhere.

Almost all the warnings I got are for files that are better off using 'if (err)' or 'if (err < 0)'. The only legitimate user I could find that we get a warning for is the (32-bit only) freescale fman driver, so I did not remove the IS_ERR_VALUE() there but changed the type to 'unsigned long'. For 9pfs, I just worked around one user whose calling conventions are so obscure that I did not dare change the behavior.

I was using this definition for testing:

  #define IS_ERR_VALUE(x) ((unsigned long*)NULL == (typeof (x)*)NULL && \
          unlikely((unsigned long long)(x) >= (unsigned long long)(typeof(x))-MAX_ERRNO))

which ends up making all 16-bit or wider types work correctly with the most plausible interpretation of what IS_ERR_VALUE() was supposed to return according to its users, but also causes a compile-time warning for any users that do not pass an 'unsigned long' argument.

I suggested this approach earlier this year, but back then we ended up deciding to just fix the users that are obviously broken. After the initial warning that caused me to get involved in the discussion (fs/gfs2/dir.c) showed up again in the mainline kernel, Linus asked me to send the whole thing again.

[ Updated the 9p parts as per Al Viro - Linus ]

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Andrzej Hajda <a.hajda@samsung.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: https://lkml.org/lkml/2016/1/7/363
Link: https://lkml.org/lkml/2016/5/27/486
Acked-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org> # For nvmem part
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
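A small sketch of the pitfall described above; the helper names are made up. IS_ERR_VALUE() expects an 'unsigned long', so narrower or differently-typed error holders are usually better served by a plain comparison against 0.

/* Illustrative sketch; names prefixed with sketch_ are invented. */
#include <linux/err.h>
#include <linux/errno.h>

static long sketch_do_op(void)
{
    return -EINVAL; /* stand-in for a 0-or-negative-errno return */
}

static int sketch_caller(void)
{
    long ret = sketch_do_op();

    /* Fine: 'ret' is as wide as 'unsigned long'. */
    if (IS_ERR_VALUE((unsigned long)ret))
        return ret;

    /*
     * For an 'int' (or narrower) error code, prefer a plain test:
     *     if (err < 0)
     *         return err;
     */
    return 0;
}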
-