- 07 December 2016, 1 commit
-
-
Committed by Romain Perier
No need to copy the template of a hash operation into the SRAM twice from the step function.
Fixes: commit 85030c51 ("crypto: marvell - Add support for chai...")
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 09 August 2016, 5 commits
-
-
Committed by Romain Perier
Don't use 64 'as is' as the maximum block size in mv_cesa_ahash_cache_req(); use CESA_MAX_HASH_BLOCK_SIZE instead, which is better for readability.
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Romain Perier
Currently, in mv_cesa_{md5,sha1,sha256}_init, creq->state is initialized before the call to mv_cesa_ahash_init. This is wrong because that function fills creq with zeros using memset, so the 'state' field containing the default digest is overwritten. This commit fixes the issue by initializing creq->state just after the call to mv_cesa_ahash_init.
Fixes: commit b0ef5106 ("crypto: marvell/cesa - initialize hash...")
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
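A minimal kernel-style C sketch of the ordering fix described above; the example_* names and structure are illustrative only (not the driver's mv_cesa_ahash_req), and the SHA1_H* constants are assumed to come from <crypto/sha.h> as in kernels of that era:

#include <linux/string.h>
#include <linux/types.h>
#include <crypto/sha.h>

/* Illustrative request context -- not the driver's mv_cesa_ahash_req. */
struct example_ahash_req {
	u32 state[8];
	/* ... other fields that the common init is expected to clear ... */
};

static void example_ahash_init(struct example_ahash_req *creq)
{
	/* Clears everything, including state[]. */
	memset(creq, 0, sizeof(*creq));
}

static int example_sha1_init(struct example_ahash_req *creq)
{
	example_ahash_init(creq);	/* common init (memset) first... */
	creq->state[0] = SHA1_H0;	/* ...then the default digest values */
	creq->state[1] = SHA1_H1;
	creq->state[2] = SHA1_H2;
	creq->state[3] = SHA1_H3;
	creq->state[4] = SHA1_H4;
	return 0;
}

Doing the two steps in the opposite order silently discards the initial digest, which is exactly the bug the commit fixes.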
-
Committed by Thomas Petazzoni
The mv_cesa_ahash_cache_req() function always returns 0, which makes its return value pretty much useless. In addition to returning a useless value, it also returns a boolean in a variable passed by reference to indicate whether the request was already cached. This commit changes mv_cesa_ahash_cache_req() to return that boolean directly, which simplifies its only call site, where the "ret" variable is no longer needed.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
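A sketch of the resulting calling convention, under the assumption that caching simply means "the data fits in the request's cache buffer"; the example_* names and fields are illustrative, not the driver's:

#include <linux/string.h>
#include <linux/types.h>

/* Illustrative context -- not the driver's structure. */
struct example_req {
	u8 cache[64];
	unsigned int cache_ptr;
};

/* Returns true when the data fits in (and has been copied to) the cache,
 * false when it must be handed to the engine instead. */
static bool example_ahash_cache_req(struct example_req *creq,
				    const u8 *data, unsigned int len)
{
	if (creq->cache_ptr + len > sizeof(creq->cache))
		return false;

	memcpy(creq->cache + creq->cache_ptr, data, len);
	creq->cache_ptr += len;
	return true;
}

/* Call site: no 'ret' variable, no output parameter. */
static int example_ahash_req_init(struct example_req *creq,
				  const u8 *data, unsigned int len)
{
	if (example_ahash_cache_req(creq, data, len))
		return 0;	/* fully cached: nothing to submit */

	/* ...otherwise build and queue the hardware request... */
	return 0;
}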
-
Committed by Thomas Petazzoni
The mv_cesa_ahash_init() function always returns 0, and its return value is never checked anyway. Turn it into a function returning void.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Thomas Petazzoni
The dma_iter parameter of mv_cesa_ahash_dma_add_cache() is never used, so get rid of it.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 28 July 2016, 1 commit
-
-
Committed by Romain Perier
So far, the cache of an ahash request was updated from the 'complete' operation. That operation is called from mv_cesa_tdma_process before the cleanup operation, which means the content of req->src could be read and copied while it was still mapped. This commit fixes the issue by moving the cache update from mv_cesa_ahash_complete to mv_cesa_ahash_req_cleanup, so the copy is done once the sglist has been unmapped.
Fixes: 1bf6682c ("crypto: marvell - Add a complete operation for..")
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 23 June 2016, 6 commits
-
-
Committed by Romain Perier
The Cryptographic Engines and Security Accelerator (CESA) supports Multi-Packet Chain Mode. With this mode enabled, multiple TDMA requests can be chained and processed by the hardware without software intervention. The mode was already activated, but the crypto requests were not chained together. Chaining them significantly reduces the number of IRQs: instead of being interrupted at the end of each crypto request, we are interrupted only at the end of the last request processed by the engine. This commit refactors the code, changes the code architecture and adds the required data structures to chain cryptographic requests together before sending them to an engine (stopped or possibly already running).
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Romain Perier
This commit adds support for fine-grained load balancing on multi-engine IPs. The engine is pre-selected based on its current load and on the weight of the crypto request that is about to be processed. The global crypto queue is also moved to each engine. These changes are required to allow chaining crypto requests at the DMA level. By using a crypto queue per engine, we keep the state of the tdma chain synchronized with the crypto queue. We also reduce contention on 'cesa_dev->lock' and improve parallelism.
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Romain Perier
Currently, crypto requests are sent to engines sequentially. This commit moves the SRAM I/O operations from the prepare functions to the step functions. It provides flexibility for future work and allows a request to be prepared while the engine is running.
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Romain Perier
So far, the 'process' operation was used to check whether the current request had been correctly handled by the engine and, if so, to copy information from the SRAM to main memory. Now this operation is split: 'process' still checks whether the request was correctly handled by the engine, and a new 'complete' operation copies the content of the SRAM to memory. This will soon become useful for calling the process and complete operations from different locations depending on the type of the request (different cleanup logic).
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
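A minimal sketch of the process/complete split as a callback table; the structure and names below only mirror the idea and are not the driver's actual mv_cesa_req_ops definition:

#include <linux/types.h>

struct example_req;	/* opaque request, illustrative only */

/* Per-request-type operations, reflecting the split described above. */
struct example_req_ops {
	int (*process)(struct example_req *req, u32 status);	/* check engine status only */
	void (*complete)(struct example_req *req);		/* copy result out of SRAM */
	void (*cleanup)(struct example_req *req);		/* unmap/free resources */
};

/* Completion path: validate first, then copy, then clean up.  Keeping
 * ->complete() separate lets different request types invoke it from
 * different places later on. */
static int example_finish_req(const struct example_req_ops *ops,
			      struct example_req *req, u32 status)
{
	int ret = ops->process(req, status);

	if (!ret)
		ops->complete(req);
	ops->cleanup(req);
	return ret;
}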
-
Committed by Romain Perier
Currently, the only way to access the tdma chain is to use the 'req' union from a mv_cesa_{ablkcipher,ahash} request. This will soon become a problem if we want to handle TDMA chaining vs. standard/non-DMA processing in a generic way (with generic functions at the cesa.c level detecting whether the request should be queued at the DMA level or not). Hence the decision to move the chain field to the mv_cesa_req level, at the expense of adding two void * fields to all request contexts (including non-DMA ones). To limit the overhead, we get rid of the type field, which can now be deduced from the req->chain.first value. Once these changes are done the union is no longer needed, so remove it and move mv_cesa_ablkcipher_std_req and mv_cesa_req into mv_cesa_ablkcipher_req directly. There is also no need to keep the 'base' field in the union of mv_cesa_ahash_req, so move it into the upper structure.
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Romain Perier
Add a BUG_ON() call when the driver tries to launch a crypto request while the engine is still processing the previous one. This replaces a silent system hang with a verbose kernel panic and the associated backtrace, letting the user know that something went wrong in the CESA driver.
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 05 April 2016, 1 commit
-
-
Committed by Dan Carpenter
creq->cache[] is an array inside the struct, not a pointer, so it can never be NULL.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
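An illustrative sketch of the class of dead code such a cleanup removes; the structure and the quoted check are hypothetical, not the driver's exact lines:

#include <linux/string.h>
#include <linux/types.h>

/* cache[] is embedded in the structure, so 'creq->cache' decays to a
 * non-NULL pointer and any NULL test on it is always-true dead code. */
struct example_req {
	u8 cache[64];
	unsigned int cache_ptr;
};

static void example_cleanup(struct example_req *creq)
{
	/* A check such as 'if (creq->cache)' around the line below is
	 * always true for an array member and can simply be dropped. */
	memset(creq->cache, 0, sizeof(creq->cache));
	creq->cache_ptr = 0;
}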
-
- 17 March 2016, 2 commits
-
-
Committed by Boris BREZILLON
->export() might be called before any update operation has been done, in which case the ->state field is left uninitialized. Put the correct default value in place when initializing the request.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Boris BREZILLON
Crypto requests are not guaranteed to be finalized (->final() call) and can be freed at any moment, without any notification from the core. This can lead to memory leaks of the ->cache buffer. Make this buffer part of the request object, and allocate an extra buffer from the DMA cache pool when doing DMA operations. As a side effect, this patch also fixes another bug related to cache allocation and DMA operations. When the core allocates a new request and imports an existing state, a cache buffer can be allocated (depending on the state). The problem is that, at that very moment, we don't yet know whether the request will use DMA or not, and since everything is likely to be initialized to zero, mv_cesa_ahash_alloc_cache() thinks it should allocate a buffer for a standard operation. But by the time mv_cesa_ahash_free_cache() is called, req->type has been set to CESA_DMA_REQ, thus leading to an invalid dma_pool_free() call (the buffer passed as argument has not been allocated from the pool).
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Reported-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 17 November 2015, 1 commit
-
-
Committed by LABBE Corentin
The sg_nents_for_len() function can fail; this patch adds a check for its return value.
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
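A minimal sketch of the checking pattern; sg_nents_for_len() and dma_map_sg() are the standard kernel helpers, while the surrounding function and field names are illustrative:

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/scatterlist.h>

/* Propagate the error instead of assuming sg_nents_for_len() succeeded. */
static int example_map_src(struct device *dev, struct scatterlist *src,
			   unsigned int len, int *mapped_nents)
{
	int nents = sg_nents_for_len(src, len);

	if (nents < 0)			/* scatterlist shorter than 'len' */
		return nents;

	*mapped_nents = dma_map_sg(dev, src, nents, DMA_TO_DEVICE);
	if (!*mapped_nents)
		return -ENOMEM;

	return 0;
}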
-
- 20 October 2015, 19 commits
-
-
Committed by Russell King
Use the IO memcpy() functions when copying from/to MMIO memory. These locations were found via sparse.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
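A short sketch of the IO-aware copy helpers the commit refers to; memcpy_toio()/memcpy_fromio() are the standard kernel interfaces, and the wrapper names here are illustrative:

#include <linux/io.h>
#include <linux/types.h>

/* Copies between normal kernel memory and an __iomem mapping (such as the
 * engine's SRAM) should use the IO variants; a plain memcpy() on __iomem
 * pointers is what sparse flags. */
static void example_sram_store_op(void __iomem *sram, const void *op, size_t len)
{
	memcpy_toio(sram, op, len);		/* CPU memory -> MMIO SRAM */
}

static void example_sram_load_result(void *result, const void __iomem *sram,
				     size_t len)
{
	memcpy_fromio(result, sram, len);	/* MMIO SRAM -> CPU memory */
}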
-
Committed by Russell King
Use relaxed IO accessors where appropriate.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
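For context, a sketch of the relaxed-accessor pattern; readl_relaxed()/writel_relaxed() are the standard kernel accessors, while the register offset and function name below are made-up examples, not CESA definitions:

#include <linux/io.h>
#include <linux/types.h>

/* The relaxed accessors skip the barriers implied by readl()/writel(),
 * which is fine for register accesses that need no ordering against DMA
 * or normal memory, e.g. reading and acknowledging a status register. */
#define EXAMPLE_INT_STATUS	0x0dc

static u32 example_read_and_ack_status(void __iomem *regs)
{
	u32 status = readl_relaxed(regs + EXAMPLE_INT_STATUS);

	writel_relaxed(~status, regs + EXAMPLE_INT_STATUS);
	return status;
}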
-
Committed by Boris Brezillon
The local chain variable is not cleaned up if an error occurs in the middle of DMA chain creation. Fix that by dropping the local chain variable and using the dreq->chain field, which will be cleaned up by mv_cesa_dma_cleanup() in case of error.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Reported-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Russell King
When adding the software padding, this must be done using the first/mid fragment mode, and any subsequent operation needs to be a mid-fragment. Fix this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Russell King
Rearrange the last-request handling for hashes that require software padding. We prepare the padding to be appended, then append as much of it as possible to any existing data already queued up, adding an operation block and launching the operation. Any remainder is then appended as a separate operation. This ensures that the hardware only ever sees multiples of the hash block size for software-padded hashes, so the engine always indicates that it has finished the calculation.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Russell King
Rearrange the last-request handling for hardware-finished hashes by moving the generation of the fragment operation into this path. This results in a simplified sequence for this case and allows the software-padded case to be moved further down in the function. Add comments describing these parts.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Russell King
Move the test for the last request out of mv_cesa_ahash_dma_last_req() to its caller, and move mv_cesa_dma_add_frag() down into this function.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Russell King
Avoid adding the final operation within the loop; instead, add it outside. We combine this with the handling for the no-data case.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Russell King
When we process the last request of data and the request contains user data, the loop in mv_cesa_ahash_dma_req_init() marks the first data size as iter.base.op_len, which does not include the size of the cached data. This means we end up hashing an insufficient amount of data. Fix this by always including the cache size in the first operation length of any request. This has the effect that, for a request containing no user data,
iter.base.op_len === iter.src.op_offset === creq->cache_ptr
As a result, we include one further change to use iter.base.op_len in the cache-but-no-user-data case to make the next change clearer.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Russell King
Use the presence of the scatterlist to determine whether we should load any new user data into the engine. The following shall always be true at this point:
iter.base.op_len == 0 === iter.src.sg
In doing so, we can:
1. eliminate the test for iter.base.op_len inside the loop, which makes the loop operation more obvious and understandable;
2. move the operation generation for the cache-only case.
This prepares the code for the next step in its transformation, and also uncovers a bug that will be fixed in the next patch.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Russell King
Move the calls to mv_cesa_dma_add_frag() into the parent function, mv_cesa_ahash_dma_req_init(). This is in preparation for changing when we generate the operation blocks, as we need to avoid generating a block for a partial hash block at the end of the user data.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Russell King
If we add a template first-fragment operation, always update the template to be a mid-fragment. This ensures that mid-fragments always follow on from a first fragment in every case, which means we can move the first-to-mid-fragment update code out of mv_cesa_ahash_dma_add_data().
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Russell King
Add a helper to add the fragment operation block followed by the DMA entry to launch the operation. Although at the moment this pattern only strictly appears at one site, two other sites can be factored in as well by slightly changing the order in which the DMA operations are performed. This should be harmless, as the only thing that matters is to have all the data loaded into the SRAM prior to launching the operation.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Russell King
Multiple locations in the driver test the operation context fragment type, checking whether it is a first fragment or not. Introduce a mv_cesa_mac_op_is_first_frag() helper, which returns true if the fragment operation is for a first fragment.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
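A sketch of such a helper; the helper name follows the commit, but the bit layout, mask names and structure below are assumptions for illustration only, not the driver's cesa.h definitions:

#include <linux/bitops.h>
#include <linux/types.h>

/* Illustrative bit layout -- the real mask/values differ. */
#define EXAMPLE_CFG_FRAG_MSK	GENMASK(31, 30)
#define EXAMPLE_CFG_FIRST_FRAG	BIT(30)

struct example_op_ctx {
	u32 config;
};

/* One central "is this a first fragment?" test instead of open-coded
 * mask-and-compare expressions scattered across the driver. */
static inline bool example_mac_op_is_first_frag(const struct example_op_ctx *op)
{
	return (op->config & EXAMPLE_CFG_FRAG_MSK) == EXAMPLE_CFG_FIRST_FRAG;
}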
-
Committed by Russell King
Ensure that the template operation is fully initialised; otherwise we end up loading data from the kernel stack into the engines, which can upset the hash results.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Russell King
The endianness of the bit length used in the final stage depends on the endianness of the algorithm: MD5 hashes need it in little-endian format, whereas SHA hashes need it in big-endian format. Use the previously added algorithm endianness flag to control this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
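A minimal sketch of the padding-length rule; the function name, the 'le' parameter standing in for the per-algorithm flag, and the buffer handling are illustrative, not the driver's code:

#include <asm/byteorder.h>
#include <linux/string.h>
#include <linux/types.h>

/* Write the total message length in bits into the last 8 bytes of the
 * padding block: little-endian for MD5, big-endian for SHA-1/SHA-256. */
static void example_write_bitlen(u8 *len_field, u64 msg_len_bytes, bool le)
{
	u64 bits = msg_len_bytes << 3;

	if (le) {
		__le64 v = cpu_to_le64(bits);	/* MD5 */
		memcpy(len_field, &v, sizeof(v));
	} else {
		__be64 v = cpu_to_be64(bits);	/* SHA-1 / SHA-256 */
		memcpy(len_field, &v, sizeof(v));
	}
}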
-
Committed by Russell King
Rather than determining whether we're using an MD5 hash by looking at the digest size, switch to a cleaner solution: use a per-request flag initialised from the method type.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Russell King
Currently, we read/write the state in CPU endianness, but on the final request we convert its endianness according to the requested algorithm (MD5 is little endian, the SHAs are big endian). Always keep creq->state in CPU-native endianness and perform the necessary conversion when copying the hash to the result.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
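A sketch of the "convert only on output" idea; the digest-word loop and the 'le' flag are illustrative, not the driver's exact code:

#include <asm/byteorder.h>
#include <linux/types.h>

/* Keep the intermediate digest words in CPU endianness; convert only when
 * copying the final hash into the user-visible result buffer. */
static void example_copy_result(const u32 *state, void *result,
				unsigned int digsize, bool le)
{
	unsigned int i;

	for (i = 0; i < digsize / sizeof(u32); i++) {
		if (le)
			((__le32 *)result)[i] = cpu_to_le32(state[i]);	/* MD5 */
		else
			((__be32 *)result)[i] = cpu_to_be32(state[i]);	/* SHA */
	}
}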
-
Committed by Russell King
There's an easier way to get at the hash transform: rather than using crypto_ahash_tfm(ahash), we can get it directly from req->base.tfm.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 14 October 2015, 4 commits
-
-
Committed by Russell King
As all the import and export functions are virtually identical, factor out their common parts into generic mv_cesa_ahash_import() and mv_cesa_ahash_export() functions respectively. These perform the actual import or export, with the data pointers and length passed in. We have to switch a '% const' operation to do_div() in the common import function to avoid provoking gcc into using the expensive 64-bit by 64-bit modulus operation.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
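For reference, a sketch of the do_div() pattern mentioned above; do_div() is the standard kernel macro, while the function name and the block-size parameter are illustrative:

#include <asm/div64.h>
#include <linux/types.h>

/* do_div(n, base) divides the 64-bit 'n' in place and returns the 32-bit
 * remainder, avoiding the 64-by-64 modulus library call that a plain
 * 'total_len % blocksize' on a u64 would otherwise generate on 32-bit. */
static unsigned int example_cache_offset(u64 total_len, unsigned int blocksize)
{
	/* total_len becomes the quotient; the remainder is returned. */
	return do_div(total_len, blocksize);
}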
-
Committed by Russell King
Attempting to use the sha1 digest for openssh via openssl reveals that the result from the hash is wrong: this happens when we export the state from one socket and import it into another via accept(). The reason is that the operation is reset to the "initial block" state, whereas we may already be past the first fragment of data to be hashed. Arrange for the operation code to avoid the initialisation of the state, thereby preserving the imported state.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Russell King
When an AF_ALG fd is accepted a second time (hence hash_accept() is used), hash_accept_parent() allocates a new private context using sock_kmalloc(). This context is uninitialised. After use of the new fd, we eventually end up with the kernel complaining:
marvell-cesa f1090000.crypto: dma_pool_free cesa_padding, c0627770/0 (bad dma)
where c0627770 is a random address. Poisoning the memory allocated by the above sock_kmalloc() produces kernel oopses within the marvell hash code, particularly the interrupt handling. The following simplified call sequence occurs:
hash_accept()
  crypto_ahash_export()
    marvell hash export function
  af_alg_accept()
    hash_accept_parent() <== allocates uninitialised struct hash_ctx
  crypto_ahash_import()
    marvell hash import function
hash_ctx contains the struct mv_cesa_ahash_req in its req.__ctx member, and, as the marvell hash import function only partially initialises this structure, we end up with many members containing whatever data was in memory prior to sock_kmalloc(). Add zero-initialisation of this structure.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Boris Brezillon <boris.brezillon@free-electronc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
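A minimal sketch of the zero-initialisation fix; the structure, fields and import parameters are illustrative, not the driver's mv_cesa_ahash_req or its real export format:

#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>

struct example_ahash_req {
	u64 len;
	unsigned int cache_ptr;
	u32 state[8];
	u8 cache[64];
};

/* Clear the whole context before filling in the imported fields, so no
 * member is left holding whatever sock_kmalloc() happened to return. */
static int example_ahash_import(struct example_ahash_req *creq,
				u64 in_len, const u32 *in_state,
				const u8 *in_cache, unsigned int in_cache_ptr)
{
	memset(creq, 0, sizeof(*creq));	/* no partially initialised fields */

	if (in_cache_ptr > sizeof(creq->cache))
		return -EINVAL;

	creq->len = in_len;
	memcpy(creq->state, in_state, sizeof(creq->state));
	memcpy(creq->cache, in_cache, in_cache_ptr);
	creq->cache_ptr = in_cache_ptr;
	return 0;
}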
-
Committed by Russell King
Several of the algorithms in marvell/hash.c have a statesize of zero. When an AF_ALG accept() on an already-accepted file descriptor calls into hash_accept(), this causes:
char state[crypto_ahash_statesize(crypto_ahash_reqtfm(req))];
to be zero-sized, but we still pass this to:
err = crypto_ahash_export(req, state);
which proceeds to write to 'state' as if it were a "struct md5_state", "struct sha1_state", etc. Add the necessary initialisers for the .statesize member.
Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
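A minimal sketch of an ahash algorithm declaration carrying .statesize, assuming the standard struct ahash_alg/hash_alg_common layout and the MD5 definitions from <crypto/md5.h>; the driver name is made up and all callbacks are omitted:

#include <crypto/hash.h>
#include <crypto/md5.h>

/* The point: crypto_ahash_statesize() must report the real export buffer
 * size (sizeof(struct md5_state) here) rather than zero. */
static struct ahash_alg example_md5_alg = {
	/* .init, .update, .final, .digest, .export, .import not shown */
	.halg = {
		.digestsize = MD5_DIGEST_SIZE,
		.statesize = sizeof(struct md5_state),
		.base = {
			.cra_name = "md5",
			.cra_driver_name = "example-md5",
			.cra_blocksize = MD5_HMAC_BLOCK_SIZE,
		},
	},
};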
-