Commit 70477371 authored by Linus Torvalds

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto update from Herbert Xu:
 "Here is the crypto update for 4.6:

  API:
   - Convert remaining crypto_hash users to shash or ahash, also convert
     blkcipher/ablkcipher users to skcipher.
   - Remove crypto_hash interface.
   - Remove crypto_pcomp interface.
   - Add crypto engine for async cipher drivers.
   - Add akcipher documentation.
   - Add skcipher documentation.

  Algorithms:
   - Rename crypto/crc32 to avoid name clash with lib/crc32.
   - Fix bug in keywrap where we zero the wrong pointer.

  Drivers:
   - Support T5/M5, T7/M7 SPARC CPUs in n2 hwrng driver.
   - Add PIC32 hwrng driver.
   - Support BCM6368 in bcm63xx hwrng driver.
   - Pack structs for 32-bit compat users in qat.
   - Use crypto engine in omap-aes.
   - Add support for sama5d2x SoCs in atmel-sha.
   - Make atmel-sha available again.
   - Make sahara hashing available again.
   - Make ccp hashing available again.
   - Make sha1-mb available again.
   - Add support for multiple devices in ccp.
   - Improve DMA performance in caam.
   - Add hashing support to rockchip"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (116 commits)
  crypto: qat - remove redundant arbiter configuration
  crypto: ux500 - fix checks of error code returned by devm_ioremap_resource()
  crypto: atmel - fix checks of error code returned by devm_ioremap_resource()
  crypto: qat - Change the definition of icp_qat_uof_regtype
  hwrng: exynos - use __maybe_unused to hide pm functions
  crypto: ccp - Add abstraction for device-specific calls
  crypto: ccp - CCP versioning support
  crypto: ccp - Support for multiple CCPs
  crypto: ccp - Remove check for x86 family and model
  crypto: ccp - memset request context to zero during import
  lib/mpi: use "static inline" instead of "extern inline"
  lib/mpi: avoid assembler warning
  hwrng: bcm63xx - fix non device tree compatibility
  crypto: testmgr - allow rfc3686 aes-ctr variants in fips mode.
  crypto: qat - The AE id should be less than the maximal AE number
  lib/mpi: Endianness fix
  crypto: rockchip - add hash support for crypto engine in rk3288
  crypto: xts - fix compile errors
  crypto: doc - add skcipher API documentation
  crypto: doc - update AEAD AD handling
  ...
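
As a rough caller-side illustration of the blkcipher/ablkcipher-to-skcipher conversion listed above, the new interface is used along these lines (a minimal sketch assuming a synchronous "cbc(aes)" implementation is available; buffer sizes and the AES-128 key length are placeholders):

	#include <crypto/skcipher.h>
	#include <linux/scatterlist.h>
	#include <linux/err.h>

	/* Sketch only: encrypt one 16-byte block in place with the skcipher API,
	 * which replaces the deprecated blkcipher/ablkcipher interfaces.
	 */
	static int sketch_skcipher_once(u8 *buf, const u8 *key, u8 *iv)
	{
		struct crypto_skcipher *tfm;
		struct skcipher_request *req;
		struct scatterlist sg;
		int ret;

		/* mask CRYPTO_ALG_ASYNC asks for a synchronous implementation */
		tfm = crypto_alloc_skcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		req = skcipher_request_alloc(tfm, GFP_KERNEL);
		if (!req) {
			ret = -ENOMEM;
			goto out_free_tfm;
		}

		ret = crypto_skcipher_setkey(tfm, key, 16);	/* AES-128 */
		if (ret)
			goto out_free_req;

		sg_init_one(&sg, buf, 16);
		skcipher_request_set_crypt(req, &sg, &sg, 16, iv);
		ret = crypto_skcipher_encrypt(req);

	out_free_req:
		skcipher_request_free(req);
	out_free_tfm:
		crypto_free_skcipher(tfm);
		return ret;
	}
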
......@@ -348,10 +348,7 @@
<para>type:
<itemizedlist>
<listitem>
<para>blkcipher for synchronous block ciphers</para>
</listitem>
<listitem>
<para>ablkcipher for asynchronous block ciphers</para>
<para>skcipher for symmetric key ciphers</para>
</listitem>
<listitem>
<para>cipher for single block ciphers that may be used with
......@@ -484,6 +481,9 @@
<listitem>
<para>CRYPTO_ALG_TYPE_RNG Random Number Generation</para>
</listitem>
<listitem>
<para>CRYPTO_ALG_TYPE_AKCIPHER Asymmetric cipher</para>
</listitem>
<listitem>
<para>CRYPTO_ALG_TYPE_PCOMPRESS Enhanced version of
CRYPTO_ALG_TYPE_COMPRESS allowing for segmented compression /
......@@ -597,7 +597,7 @@ kernel crypto API | IPSEC Layer
v v
+-----------+ +-----------+
| | | |
| ablkcipher| | ahash |
| skcipher | | ahash |
| (ctr) | ---+ | (ghash) |
+-----------+ | +-----------+
|
......@@ -658,7 +658,7 @@ kernel crypto API | IPSEC Layer
<listitem>
<para>
The GCM AEAD cipher type implementation now invokes the ABLKCIPHER API
The GCM AEAD cipher type implementation now invokes the SKCIPHER API
with the instantiated CTR(AES) cipher handle.
</para>
......@@ -669,7 +669,7 @@ kernel crypto API | IPSEC Layer
</para>
<para>
That means that the ABLKCIPHER implementation of CTR(AES) only
That means that the SKCIPHER implementation of CTR(AES) only
implements the CTR block chaining mode. After performing the block
chaining operation, the CIPHER implementation of AES is invoked.
</para>
......@@ -677,7 +677,7 @@ kernel crypto API | IPSEC Layer
<listitem>
<para>
The ABLKCIPHER of CTR(AES) now invokes the CIPHER API with the AES
The SKCIPHER of CTR(AES) now invokes the CIPHER API with the AES
cipher handle to encrypt one block.
</para>
</listitem>
......@@ -706,7 +706,7 @@ kernel crypto API | IPSEC Layer
<para>
For example, CBC(AES) is implemented with cbc.c, and aes-generic.c. The
ASCII art picture above applies as well with the difference that only
step (4) is used and the ABLKCIPHER block chaining mode is CBC.
step (4) is used and the SKCIPHER block chaining mode is CBC.
</para>
</sect2>
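
To make the example concrete: a caller only ever requests the top-level name, and the resolution into the generic CBC template plus the AES single-block cipher (or an accelerated replacement) happens inside the API. A small sketch, assuming the skcipher interface:

	#include <crypto/skcipher.h>
	#include <linux/err.h>

	/* Sketch: ask for "cbc(aes)" and report which driver actually backs it,
	 * e.g. "cbc(aes-generic)" built from cbc.c + aes-generic.c, or an
	 * accelerated implementation registered with a higher priority.
	 */
	static void sketch_show_cbc_aes_driver(void)
	{
		struct crypto_skcipher *tfm = crypto_alloc_skcipher("cbc(aes)", 0, 0);

		if (IS_ERR(tfm))
			return;

		pr_info("cbc(aes) backed by %s\n",
			crypto_tfm_alg_driver_name(crypto_skcipher_tfm(tfm)));
		crypto_free_skcipher(tfm);
	}
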
......@@ -904,15 +904,14 @@ kernel crypto API | Caller
</sect2>
</sect1>
<sect1><title>Multi-Block Ciphers [BLKCIPHER] [ABLKCIPHER]</title>
<sect1><title>Multi-Block Ciphers</title>
<para>
Example of transformations: cbc(aes), ecb(arc4), ...
</para>
<para>
This section describes the multi-block cipher transformation
implementations for both synchronous [BLKCIPHER] and
asynchronous [ABLKCIPHER] case. The multi-block ciphers are
implementations. The multi-block ciphers are
used for transformations which operate on scatterlists of
data supplied to the transformation functions. They output
the result into a scatterlist of data as well.
......@@ -921,16 +920,15 @@ kernel crypto API | Caller
<sect2><title>Registration Specifics</title>
<para>
The registration of [BLKCIPHER] or [ABLKCIPHER] algorithms
The registration of multi-block cipher algorithms
is one of the most standard procedures throughout the crypto API.
</para>
<para>
Note, if a cipher implementation requires a proper alignment
of data, the caller should use the functions of
crypto_blkcipher_alignmask() or crypto_ablkcipher_alignmask()
respectively to identify a memory alignment mask. The kernel
crypto API is able to process requests that are unaligned.
crypto_skcipher_alignmask() to identify a memory alignment mask.
The kernel crypto API is able to process requests that are unaligned.
This implies, however, additional overhead as the kernel
crypto API needs to perform the realignment of the data which
may imply moving of data.
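
A short sketch of honoring the alignment mask on the caller side so that no internal realignment copy is needed (buffer management is simplified here; a real caller must also keep the raw pointer around for kfree()):

	#include <crypto/skcipher.h>
	#include <linux/slab.h>
	#include <linux/kernel.h>

	/* Sketch: allocate a data buffer that already satisfies the transform's
	 * alignment requirement as reported by crypto_skcipher_alignmask().
	 */
	static void *sketch_alloc_aligned(struct crypto_skcipher *tfm, size_t len)
	{
		unsigned int align = crypto_skcipher_alignmask(tfm) + 1;
		void *raw = kmalloc(len + align - 1, GFP_KERNEL);

		return raw ? PTR_ALIGN(raw, align) : NULL;
	}
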
......@@ -945,14 +943,13 @@ kernel crypto API | Caller
<para>
Please refer to the single block cipher description for schematics
of the block cipher usage. The usage patterns are exactly the same
for [ABLKCIPHER] and [BLKCIPHER] as they are for plain [CIPHER].
of the block cipher usage.
</para>
</sect2>
<sect2><title>Specifics Of Asynchronous Multi-Block Cipher</title>
<para>
There are a couple of specifics to the [ABLKCIPHER] interface.
There are a couple of specifics to the asynchronous interface.
</para>
<para>
......@@ -1692,7 +1689,28 @@ read(opfd, out, outlen);
!Finclude/linux/crypto.h cipher_alg
!Finclude/crypto/rng.h rng_alg
</sect1>
<sect1><title>Asynchronous Block Cipher API</title>
<sect1><title>Symmetric Key Cipher API</title>
!Pinclude/crypto/skcipher.h Symmetric Key Cipher API
!Finclude/crypto/skcipher.h crypto_alloc_skcipher
!Finclude/crypto/skcipher.h crypto_free_skcipher
!Finclude/crypto/skcipher.h crypto_has_skcipher
!Finclude/crypto/skcipher.h crypto_skcipher_ivsize
!Finclude/crypto/skcipher.h crypto_skcipher_blocksize
!Finclude/crypto/skcipher.h crypto_skcipher_setkey
!Finclude/crypto/skcipher.h crypto_skcipher_reqtfm
!Finclude/crypto/skcipher.h crypto_skcipher_encrypt
!Finclude/crypto/skcipher.h crypto_skcipher_decrypt
</sect1>
<sect1><title>Symmetric Key Cipher Request Handle</title>
!Pinclude/crypto/skcipher.h Symmetric Key Cipher Request Handle
!Finclude/crypto/skcipher.h crypto_skcipher_reqsize
!Finclude/crypto/skcipher.h skcipher_request_set_tfm
!Finclude/crypto/skcipher.h skcipher_request_alloc
!Finclude/crypto/skcipher.h skcipher_request_free
!Finclude/crypto/skcipher.h skcipher_request_set_callback
!Finclude/crypto/skcipher.h skcipher_request_set_crypt
</sect1>
<sect1><title>Asynchronous Block Cipher API - Deprecated</title>
!Pinclude/linux/crypto.h Asynchronous Block Cipher API
!Finclude/linux/crypto.h crypto_alloc_ablkcipher
!Finclude/linux/crypto.h crypto_free_ablkcipher
......@@ -1704,7 +1722,7 @@ read(opfd, out, outlen);
!Finclude/linux/crypto.h crypto_ablkcipher_encrypt
!Finclude/linux/crypto.h crypto_ablkcipher_decrypt
</sect1>
<sect1><title>Asynchronous Cipher Request Handle</title>
<sect1><title>Asynchronous Cipher Request Handle - Deprecated</title>
!Pinclude/linux/crypto.h Asynchronous Cipher Request Handle
!Finclude/linux/crypto.h crypto_ablkcipher_reqsize
!Finclude/linux/crypto.h ablkcipher_request_set_tfm
......@@ -1733,10 +1751,9 @@ read(opfd, out, outlen);
!Finclude/crypto/aead.h aead_request_free
!Finclude/crypto/aead.h aead_request_set_callback
!Finclude/crypto/aead.h aead_request_set_crypt
!Finclude/crypto/aead.h aead_request_set_assoc
!Finclude/crypto/aead.h aead_request_set_ad
</sect1>
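
The convention behind aead_request_set_ad(), per the updated AEAD AD handling in this series: the associated data is expected at the head of both the source and destination scatterlists, and only its length is communicated separately. A hedged fragment (tfm, req, buf, iv and the length variables are assumed to be set up already):

	/* In-place GCM-style encryption: the buffer holds AD || plaintext and has
	 * room for the authentication tag; cryptlen covers the plaintext only.
	 */
	sg_init_one(&sg, buf, assoclen + ptlen + crypto_aead_authsize(tfm));
	aead_request_set_crypt(req, &sg, &sg, ptlen, iv);
	aead_request_set_ad(req, assoclen);
	ret = crypto_aead_encrypt(req);
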
<sect1><title>Synchronous Block Cipher API</title>
<sect1><title>Synchronous Block Cipher API - Deprecated</title>
!Pinclude/linux/crypto.h Synchronous Block Cipher API
!Finclude/linux/crypto.h crypto_alloc_blkcipher
!Finclude/linux/crypto.h crypto_free_blkcipher
......@@ -1761,19 +1778,6 @@ read(opfd, out, outlen);
!Finclude/linux/crypto.h crypto_cipher_setkey
!Finclude/linux/crypto.h crypto_cipher_encrypt_one
!Finclude/linux/crypto.h crypto_cipher_decrypt_one
</sect1>
<sect1><title>Synchronous Message Digest API</title>
!Pinclude/linux/crypto.h Synchronous Message Digest API
!Finclude/linux/crypto.h crypto_alloc_hash
!Finclude/linux/crypto.h crypto_free_hash
!Finclude/linux/crypto.h crypto_has_hash
!Finclude/linux/crypto.h crypto_hash_blocksize
!Finclude/linux/crypto.h crypto_hash_digestsize
!Finclude/linux/crypto.h crypto_hash_init
!Finclude/linux/crypto.h crypto_hash_update
!Finclude/linux/crypto.h crypto_hash_final
!Finclude/linux/crypto.h crypto_hash_digest
!Finclude/linux/crypto.h crypto_hash_setkey
</sect1>
<sect1><title>Message Digest Algorithm Definitions</title>
!Pinclude/crypto/hash.h Message Digest Algorithm Definitions
......@@ -1825,15 +1829,36 @@ read(opfd, out, outlen);
!Finclude/crypto/rng.h crypto_alloc_rng
!Finclude/crypto/rng.h crypto_rng_alg
!Finclude/crypto/rng.h crypto_free_rng
!Finclude/crypto/rng.h crypto_rng_generate
!Finclude/crypto/rng.h crypto_rng_get_bytes
!Finclude/crypto/rng.h crypto_rng_reset
!Finclude/crypto/rng.h crypto_rng_seedsize
!Cinclude/crypto/rng.h
</sect1>
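
A minimal sketch of the RNG API listed above, using the default "stdrng" provider (the DRBG); the NULL-seed convention is an assumption worth checking against crypto_rng_reset():

	#include <crypto/rng.h>
	#include <linux/err.h>

	/* Sketch: (re)seed the default DRBG and pull a few random bytes. */
	static int sketch_rng_bytes(u8 *out, unsigned int len)
	{
		struct crypto_rng *rng = crypto_alloc_rng("stdrng", 0, 0);
		int ret;

		if (IS_ERR(rng))
			return PTR_ERR(rng);

		/* with a NULL seed the core pulls seed material internally */
		ret = crypto_rng_reset(rng, NULL, crypto_rng_seedsize(rng));
		if (!ret)
			ret = crypto_rng_get_bytes(rng, out, len);

		crypto_free_rng(rng);
		return ret;
	}
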
<sect1><title>Asymmetric Cipher API</title>
!Pinclude/crypto/akcipher.h Generic Public Key API
!Finclude/crypto/akcipher.h akcipher_alg
!Finclude/crypto/akcipher.h akcipher_request
!Finclude/crypto/akcipher.h crypto_alloc_akcipher
!Finclude/crypto/akcipher.h crypto_free_akcipher
!Finclude/crypto/akcipher.h crypto_akcipher_set_pub_key
!Finclude/crypto/akcipher.h crypto_akcipher_set_priv_key
</sect1>
<sect1><title>Asymmetric Cipher Request Handle</title>
!Finclude/crypto/akcipher.h akcipher_request_alloc
!Finclude/crypto/akcipher.h akcipher_request_free
!Finclude/crypto/akcipher.h akcipher_request_set_callback
!Finclude/crypto/akcipher.h akcipher_request_set_crypt
!Finclude/crypto/akcipher.h crypto_akcipher_maxsize
!Finclude/crypto/akcipher.h crypto_akcipher_encrypt
!Finclude/crypto/akcipher.h crypto_akcipher_decrypt
!Finclude/crypto/akcipher.h crypto_akcipher_sign
!Finclude/crypto/akcipher.h crypto_akcipher_verify
</sect1>
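
A rough sketch of the akcipher flow listed above; the software "rsa" implementation completes synchronously, and the public key is assumed to be in the BER/ASN.1 form that implementation parses:

	#include <crypto/akcipher.h>
	#include <linux/scatterlist.h>
	#include <linux/err.h>

	/* Sketch: RSA public-key encryption of src into dst (dst_len should be
	 * at least crypto_akcipher_maxsize(tfm)).
	 */
	static int sketch_rsa_encrypt(const void *pubkey, unsigned int keylen,
				      struct scatterlist *src, unsigned int src_len,
				      struct scatterlist *dst, unsigned int dst_len)
	{
		struct crypto_akcipher *tfm;
		struct akcipher_request *req;
		int ret;

		tfm = crypto_alloc_akcipher("rsa", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		ret = crypto_akcipher_set_pub_key(tfm, pubkey, keylen);
		if (ret)
			goto out_free_tfm;

		req = akcipher_request_alloc(tfm, GFP_KERNEL);
		if (!req) {
			ret = -ENOMEM;
			goto out_free_tfm;
		}

		akcipher_request_set_callback(req, 0, NULL, NULL);
		akcipher_request_set_crypt(req, src, dst, src_len, dst_len);

		ret = crypto_akcipher_encrypt(req);

		akcipher_request_free(req);
	out_free_tfm:
		crypto_free_akcipher(tfm);
		return ret;
	}
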
</chapter>
<chapter id="Code"><title>Code Examples</title>
<sect1><title>Code Example For Asynchronous Block Cipher Operation</title>
<sect1><title>Code Example For Symmetric Key Cipher Operation</title>
<programlisting>
struct tcrypt_result {
......@@ -1842,15 +1867,15 @@ struct tcrypt_result {
};
/* tie all data structures together */
struct ablkcipher_def {
struct skcipher_def {
struct scatterlist sg;
struct crypto_ablkcipher *tfm;
struct ablkcipher_request *req;
struct crypto_skcipher *tfm;
struct skcipher_request *req;
struct tcrypt_result result;
};
/* Callback function */
static void test_ablkcipher_cb(struct crypto_async_request *req, int error)
static void test_skcipher_cb(struct crypto_async_request *req, int error)
{
struct tcrypt_result *result = req-&gt;data;
......@@ -1862,15 +1887,15 @@ static void test_ablkcipher_cb(struct crypto_async_request *req, int error)
}
/* Perform cipher operation */
static unsigned int test_ablkcipher_encdec(struct ablkcipher_def *ablk,
int enc)
static unsigned int test_skcipher_encdec(struct skcipher_def *sk,
int enc)
{
int rc = 0;
if (enc)
rc = crypto_ablkcipher_encrypt(ablk-&gt;req);
rc = crypto_skcipher_encrypt(sk-&gt;req);
else
rc = crypto_ablkcipher_decrypt(ablk-&gt;req);
rc = crypto_skcipher_decrypt(sk-&gt;req);
switch (rc) {
case 0:
......@@ -1878,52 +1903,52 @@ static unsigned int test_ablkcipher_encdec(struct ablkcipher_def *ablk,
case -EINPROGRESS:
case -EBUSY:
rc = wait_for_completion_interruptible(
&amp;ablk-&gt;result.completion);
if (!rc &amp;&amp; !ablk-&gt;result.err) {
reinit_completion(&amp;ablk-&gt;result.completion);
&amp;sk-&gt;result.completion);
if (!rc &amp;&amp; !sk-&gt;result.err) {
reinit_completion(&amp;sk-&gt;result.completion);
break;
}
default:
pr_info("ablkcipher encrypt returned with %d result %d\n",
rc, ablk-&gt;result.err);
pr_info("skcipher encrypt returned with %d result %d\n",
rc, sk-&gt;result.err);
break;
}
init_completion(&amp;ablk-&gt;result.completion);
init_completion(&amp;sk-&gt;result.completion);
return rc;
}
/* Initialize and trigger cipher operation */
static int test_ablkcipher(void)
static int test_skcipher(void)
{
struct ablkcipher_def ablk;
struct crypto_ablkcipher *ablkcipher = NULL;
struct ablkcipher_request *req = NULL;
struct skcipher_def sk;
struct crypto_skcipher *skcipher = NULL;
struct skcipher_request *req = NULL;
char *scratchpad = NULL;
char *ivdata = NULL;
unsigned char key[32];
int ret = -EFAULT;
ablkcipher = crypto_alloc_ablkcipher("cbc-aes-aesni", 0, 0);
if (IS_ERR(ablkcipher)) {
pr_info("could not allocate ablkcipher handle\n");
return PTR_ERR(ablkcipher);
skcipher = crypto_alloc_skcipher("cbc-aes-aesni", 0, 0);
if (IS_ERR(skcipher)) {
pr_info("could not allocate skcipher handle\n");
return PTR_ERR(skcipher);
}
req = ablkcipher_request_alloc(ablkcipher, GFP_KERNEL);
req = skcipher_request_alloc(skcipher, GFP_KERNEL);
if (IS_ERR(req)) {
pr_info("could not allocate request queue\n");
ret = PTR_ERR(req);
goto out;
}
ablkcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
test_ablkcipher_cb,
&amp;ablk.result);
skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
test_skcipher_cb,
&amp;sk.result);
/* AES 256 with random key */
get_random_bytes(&amp;key, 32);
if (crypto_ablkcipher_setkey(ablkcipher, key, 32)) {
if (crypto_skcipher_setkey(skcipher, key, 32)) {
pr_info("key could not be set\n");
ret = -EAGAIN;
goto out;
......@@ -1945,26 +1970,26 @@ static int test_ablkcipher(void)
}
get_random_bytes(scratchpad, 16);
ablk.tfm = ablkcipher;
ablk.req = req;
sk.tfm = skcipher;
sk.req = req;
/* We encrypt one block */
sg_init_one(&amp;ablk.sg, scratchpad, 16);
ablkcipher_request_set_crypt(req, &amp;ablk.sg, &amp;ablk.sg, 16, ivdata);
init_completion(&amp;ablk.result.completion);
sg_init_one(&amp;sk.sg, scratchpad, 16);
skcipher_request_set_crypt(req, &amp;sk.sg, &amp;sk.sg, 16, ivdata);
init_completion(&amp;sk.result.completion);
/* encrypt data */
ret = test_ablkcipher_encdec(&amp;ablk, 1);
ret = test_skcipher_encdec(&amp;sk, 1);
if (ret)
goto out;
pr_info("Encryption triggered successfully\n");
out:
if (ablkcipher)
crypto_free_ablkcipher(ablkcipher);
if (skcipher)
crypto_free_skcipher(skcipher);
if (req)
ablkcipher_request_free(req);
skcipher_request_free(req);
if (ivdata)
kfree(ivdata);
if (scratchpad)
......@@ -1974,77 +1999,6 @@ out:
</programlisting>
</sect1>
<sect1><title>Code Example For Synchronous Block Cipher Operation</title>
<programlisting>
static int test_blkcipher(void)
{
struct crypto_blkcipher *blkcipher = NULL;
char *cipher = "cbc(aes)";
// AES 128
char *key =
"\x12\x34\x56\x78\x90\xab\xcd\xef\x12\x34\x56\x78\x90\xab\xcd\xef";
char *iv =
"\x12\x34\x56\x78\x90\xab\xcd\xef\x12\x34\x56\x78\x90\xab\xcd\xef";
unsigned int ivsize = 0;
char *scratchpad = NULL; // holds plaintext and ciphertext
struct scatterlist sg;
struct blkcipher_desc desc;
int ret = -EFAULT;
blkcipher = crypto_alloc_blkcipher(cipher, 0, 0);
if (IS_ERR(blkcipher)) {
printk("could not allocate blkcipher handle for %s\n", cipher);
return PTR_ERR(blkcipher);
}
if (crypto_blkcipher_setkey(blkcipher, key, strlen(key))) {
printk("key could not be set\n");
ret = -EAGAIN;
goto out;
}
ivsize = crypto_blkcipher_ivsize(blkcipher);
if (ivsize) {
if (ivsize != strlen(iv))
printk("IV length differs from expected length\n");
crypto_blkcipher_set_iv(blkcipher, iv, ivsize);
}
scratchpad = kmalloc(crypto_blkcipher_blocksize(blkcipher), GFP_KERNEL);
if (!scratchpad) {
printk("could not allocate scratchpad for %s\n", cipher);
goto out;
}
/* get some random data that we want to encrypt */
get_random_bytes(scratchpad, crypto_blkcipher_blocksize(blkcipher));
desc.flags = 0;
desc.tfm = blkcipher;
sg_init_one(&amp;sg, scratchpad, crypto_blkcipher_blocksize(blkcipher));
/* encrypt data in place */
crypto_blkcipher_encrypt(&amp;desc, &amp;sg, &amp;sg,
crypto_blkcipher_blocksize(blkcipher));
/* decrypt data in place
* crypto_blkcipher_decrypt(&amp;desc, &amp;sg, &amp;sg,
*                          crypto_blkcipher_blocksize(blkcipher));
*/
printk("Cipher operation completed\n");
return 0;
out:
if (blkcipher)
crypto_free_blkcipher(blkcipher);
if (scratchpad)
kzfree(scratchpad);
return ret;
}
</programlisting>
</sect1>
<sect1><title>Code Example For Use of Operational State Memory With SHASH</title>
<programlisting>
......
......@@ -49,28 +49,33 @@ under development.
Here's an example of how to use the API:
#include <linux/crypto.h>
#include <crypto/ahash.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
struct scatterlist sg[2];
char result[128];
struct crypto_hash *tfm;
struct hash_desc desc;
struct crypto_ahash *tfm;
struct ahash_request *req;
tfm = crypto_alloc_hash("md5", 0, CRYPTO_ALG_ASYNC);
tfm = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC);
if (IS_ERR(tfm))
fail();
/* ... set up the scatterlists ... */
desc.tfm = tfm;
desc.flags = 0;
if (crypto_hash_digest(&desc, sg, 2, result))
req = ahash_request_alloc(tfm, GFP_ATOMIC);
if (!req)
fail();
ahash_request_set_callback(req, 0, NULL, NULL);
ahash_request_set_crypt(req, sg, result, 2);
crypto_free_hash(tfm);
if (crypto_ahash_digest(req))
fail();
ahash_request_free(req);
crypto_free_ahash(tfm);
Many real examples are available in the regression test module (tcrypt.c).
......
BCM6368 Random number generator
Required properties:
- compatible : should be "brcm,bcm6368-rng"
- reg : Specifies base physical address and size of the registers
- clocks : phandle to clock-controller plus clock-specifier pair
- clock-names : "ipsec" as a clock name
Example:
random: rng@10004180 {
compatible = "brcm,bcm6368-rng";
reg = <0x10004180 0x14>;
clocks = <&periph_clk 18>;
clock-names = "ipsec";
};
* Microchip PIC32 Random Number Generator
The PIC32 RNG provides a pseudo random number generator which can be seeded by
another true random number generator.
Required properties:
- compatible : should be "microchip,pic32mzda-rng"
- reg : Specifies base physical address and size of the registers.
- clocks: clock phandle.
Example:
rng: rng@1f8e6000 {
compatible = "microchip,pic32mzda-rng";
reg = <0x1f8e6000 0x1000>;
clocks = <&PBCLK5>;
};
HWRNG support for the n2_rng driver
Required properties:
- reg : base address to sample from
- compatible : should contain one of the following
RNG versions:
- 'SUNW,n2-rng' for Niagara 2 Platform (SUN UltraSPARC T2 CPU)
- 'SUNW,vf-rng' for Victoria Falls Platform (SUN UltraSPARC T2 Plus CPU)
- 'SUNW,kt-rng' for Rainbow/Yosemite Falls Platform (SUN SPARC T3/T4), (UltraSPARC KT/Niagara 3 - development names)
more recent systems (after Oracle acquisition of SUN)
- 'ORCL,m4-rng' for SPARC T5/M5
- 'ORCL,m7-rng' for SPARC T7/M7
Examples:
/* linux LDOM on SPARC T5-2 */
Node 0xf029a4f4
.node: f029a4f4
rng-#units: 00000002
compatible: 'ORCL,m4-rng'
reg: 0000000e
name: 'random-number-generator'
/* solaris on SPARC M7-8 */
Node 0xf028c08c
rng-#units: 00000003
compatible: 'ORCL,m7-rng'
reg: 0000000e
name: 'random-number-generator'
PS: see as well prtconfs.git by DaveM
......@@ -171,6 +171,7 @@ opencores OpenCores.org
option Option NV
ortustech Ortus Technology Co., Ltd.
ovti OmniVision Technologies
ORCL Oracle Corporation
panasonic Panasonic Corporation
parade Parade Technologies Inc.
pericom Pericom Technology Inc.
......@@ -229,6 +230,7 @@ startek Startek
ste ST-Ericsson
stericsson ST-Ericsson
synology Synology, Inc.
SUNW Sun Microsystems, Inc
tbs TBS Technologies
tcl Toby Churchill Ltd.
technologic Technologic Systems
......
......@@ -15,6 +15,7 @@
#include <crypto/ablk_helper.h>
#include <crypto/algapi.h>
#include <linux/module.h>
#include <crypto/xts.h>
MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 Crypto Extensions");
MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
......@@ -152,6 +153,10 @@ static int xts_set_key(struct crypto_tfm *tfm, const u8 *in_key,
struct crypto_aes_xts_ctx *ctx = crypto_tfm_ctx(tfm);
int ret;
ret = xts_check_key(tfm, in_key, key_len);
if (ret)
return ret;
ret = ce_aes_expandkey(&ctx->key1, in_key, key_len / 2);
if (!ret)
ret = ce_aes_expandkey(&ctx->key2, &in_key[key_len / 2],
......
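For context, the xts_check_key() helper that these setkey paths now share is approximately the following (paraphrased; see include/crypto/xts.h in this release for the authoritative version, and note it relies on fips_enabled from <linux/fips.h>):

	static inline int xts_check_key(struct crypto_tfm *tfm,
					const u8 *key, unsigned int keylen)
	{
		u32 *flags = &tfm->crt_flags;

		/* key consists of two keys of equal size concatenated,
		 * therefore the length must be even
		 */
		if (keylen % 2) {
			*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
			return -EINVAL;
		}

		/* in FIPS mode the data and tweak keys must not be identical */
		if (fips_enabled && !memcmp(key, key + (keylen / 2), keylen / 2)) {
			*flags |= CRYPTO_TFM_RES_WEAK_KEY;
			return -EINVAL;
		}

		return 0;
	}
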
......@@ -13,6 +13,7 @@
#include <crypto/ablk_helper.h>
#include <crypto/algapi.h>
#include <linux/module.h>
#include <crypto/xts.h>
#include "aes_glue.h"
......@@ -89,6 +90,11 @@ static int aesbs_xts_set_key(struct crypto_tfm *tfm, const u8 *in_key,
{
struct aesbs_xts_ctx *ctx = crypto_tfm_ctx(tfm);
int bits = key_len * 4;
int err;
err = xts_check_key(tfm, in_key, key_len);
if (err)
return err;
if (private_AES_set_encrypt_key(in_key, bits, &ctx->enc.rk)) {
tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
......
......@@ -15,6 +15,7 @@
#include <crypto/algapi.h>
#include <linux/module.h>
#include <linux/cpufeature.h>
#include <crypto/xts.h>
#include "aes-ce-setkey.h"
......@@ -85,6 +86,10 @@ static int xts_set_key(struct crypto_tfm *tfm, const u8 *in_key,
struct crypto_aes_xts_ctx *ctx = crypto_tfm_ctx(tfm);
int ret;
ret = xts_check_key(tfm, in_key, key_len);
if (ret)
return ret;
ret = aes_expandkey(&ctx->key1, in_key, key_len / 2);
if (!ret)
ret = aes_expandkey(&ctx->key2, &in_key[key_len / 2],
......
......@@ -22,6 +22,7 @@
#include <asm/byteorder.h>
#include <asm/switch_to.h>
#include <crypto/algapi.h>
#include <crypto/xts.h>
/*
* MAX_BYTES defines the number of bytes that are allowed to be processed
......@@ -126,6 +127,11 @@ static int ppc_xts_setkey(struct crypto_tfm *tfm, const u8 *in_key,
unsigned int key_len)
{
struct ppc_xts_ctx *ctx = crypto_tfm_ctx(tfm);
int err;
err = xts_check_key(tfm, in_key, key_len);
if (err)
return err;
key_len >>= 1;
......
......@@ -27,6 +27,7 @@
#include <linux/cpufeature.h>
#include <linux/init.h>
#include <linux/spinlock.h>
#include <crypto/xts.h>
#include "crypt_s390.h"
#define AES_KEYLEN_128 1
......@@ -587,6 +588,11 @@ static int xts_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
{
struct s390_xts_ctx *xts_ctx = crypto_tfm_ctx(tfm);
u32 *flags = &tfm->crt_flags;
int err;
err = xts_check_key(tfm, in_key, key_len);
if (err)
return err;
switch (key_len) {
case 32:
......
......@@ -639,16 +639,11 @@ static int xts_aesni_setkey(struct crypto_tfm *tfm, const u8 *key,
unsigned int keylen)
{
struct aesni_xts_ctx *ctx = crypto_tfm_ctx(tfm);
u32 *flags = &tfm->crt_flags;
int err;
/* key consists of keys of equal size concatenated, therefore
* the length must be even
*/
if (keylen % 2) {
*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL;
}
err = xts_check_key(tfm, key, keylen);
if (err)
return err;
/* first half of xts-key is for crypt */
err = aes_set_key_common(tfm, ctx->raw_crypt_ctx, key, keylen / 2);
......
......@@ -1503,13 +1503,9 @@ int xts_camellia_setkey(struct crypto_tfm *tfm, const u8 *key,
u32 *flags = &tfm->crt_flags;
int err;
/* key consists of keys of equal size concatenated, therefore
* the length must be even
*/
if (keylen % 2) {
*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL;
}
err = xts_check_key(tfm, key, keylen);
if (err)
return err;
/* first half of xts-key is for crypt */
err = __camellia_setkey(&ctx->crypt_ctx, key, keylen / 2, flags);
......
......@@ -329,13 +329,9 @@ static int xts_cast6_setkey(struct crypto_tfm *tfm, const u8 *key,
u32 *flags = &tfm->crt_flags;
int err;
/* key consists of keys of equal size concatenated, therefore
* the length must be even
*/
if (keylen % 2) {
*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL;
}
err = xts_check_key(tfm, key, keylen);
if (err)
return err;
/* first half of xts-key is for crypt */
err = __cast6_setkey(&ctx->crypt_ctx, key, keylen / 2, flags);
......
......@@ -332,16 +332,11 @@ int xts_serpent_setkey(struct crypto_tfm *tfm, const u8 *key,
unsigned int keylen)
{
struct serpent_xts_ctx *ctx = crypto_tfm_ctx(tfm);
u32 *flags = &tfm->crt_flags;
int err;
/* key consists of keys of equal size concatenated, therefore
* the length must be even
*/
if (keylen % 2) {
*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL;
}
err = xts_check_key(tfm, key, keylen);
if (err)
return err;
/* first half of xts-key is for crypt */
err = __serpent_setkey(&ctx->crypt_ctx, key, keylen / 2);
......
......@@ -309,16 +309,11 @@ static int xts_serpent_setkey(struct crypto_tfm *tfm, const u8 *key,
unsigned int keylen)
{
struct serpent_xts_ctx *ctx = crypto_tfm_ctx(tfm);
u32 *flags = &tfm->crt_flags;
int err;
/* key consists of keys of equal size concatenated, therefore
* the length must be even
*/
if (keylen % 2) {
*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL;
}
err = xts_check_key(tfm, key, keylen);
if (err)
return err;
/* first half of xts-key is for crypt */
err = __serpent_setkey(&ctx->crypt_ctx, key, keylen / 2);
......
......@@ -762,6 +762,38 @@ static int sha1_mb_async_digest(struct ahash_request *req)
return crypto_ahash_digest(mcryptd_req);
}
static int sha1_mb_async_export(struct ahash_request *req, void *out)
{
struct ahash_request *mcryptd_req = ahash_request_ctx(req);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
struct sha1_mb_ctx *ctx = crypto_ahash_ctx(tfm);
struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
memcpy(mcryptd_req, req, sizeof(*req));
ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
return crypto_ahash_export(mcryptd_req, out);
}
static int sha1_mb_async_import(struct ahash_request *req, const void *in)
{
struct ahash_request *mcryptd_req = ahash_request_ctx(req);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
struct sha1_mb_ctx *ctx = crypto_ahash_ctx(tfm);
struct mcryptd_ahash *mcryptd_tfm = ctx->mcryptd_tfm;
struct crypto_shash *child = mcryptd_ahash_child(mcryptd_tfm);
struct mcryptd_hash_request_ctx *rctx;
struct shash_desc *desc;
memcpy(mcryptd_req, req, sizeof(*req));
ahash_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
rctx = ahash_request_ctx(mcryptd_req);
desc = &rctx->desc;
desc->tfm = child;
desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
return crypto_ahash_import(mcryptd_req, in);
}
static int sha1_mb_async_init_tfm(struct crypto_tfm *tfm)
{
struct mcryptd_ahash *mcryptd_tfm;
......@@ -796,8 +828,11 @@ static struct ahash_alg sha1_mb_async_alg = {
.final = sha1_mb_async_final,
.finup = sha1_mb_async_finup,
.digest = sha1_mb_async_digest,
.export = sha1_mb_async_export,
.import = sha1_mb_async_import,
.halg = {
.digestsize = SHA1_DIGEST_SIZE,
.statesize = sizeof(struct sha1_hash_ctx),
.base = {
.cra_name = "sha1",
.cra_driver_name = "sha1_mb",
......
......@@ -197,7 +197,7 @@ len_is_0:
vpinsrd $1, _args_digest+1*32(state , idx, 4), %xmm0, %xmm0
vpinsrd $2, _args_digest+2*32(state , idx, 4), %xmm0, %xmm0
vpinsrd $3, _args_digest+3*32(state , idx, 4), %xmm0, %xmm0
movl 4*32(state, idx, 4), DWORD_tmp
movl _args_digest+4*32(state, idx, 4), DWORD_tmp
vmovdqu %xmm0, _result_digest(job_rax)
movl DWORD_tmp, _result_digest+1*16(job_rax)
......
......@@ -277,13 +277,9 @@ int xts_twofish_setkey(struct crypto_tfm *tfm, const u8 *key,
u32 *flags = &tfm->crt_flags;
int err;
/* key consists of keys of equal size concatenated, therefore
* the length must be even
*/
if (keylen % 2) {
*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL;
}
err = xts_check_key(tfm, key, keylen);
if (err)
return err;
/* first half of xts-key is for crypt */
err = __twofish_setkey(&ctx->crypt_ctx, key, keylen / 2, flags);
......
......@@ -84,15 +84,6 @@ config CRYPTO_RNG_DEFAULT
tristate
select CRYPTO_DRBG_MENU
config CRYPTO_PCOMP
tristate
select CRYPTO_PCOMP2
select CRYPTO_ALGAPI
config CRYPTO_PCOMP2
tristate
select CRYPTO_ALGAPI2
config CRYPTO_AKCIPHER2
tristate
select CRYPTO_ALGAPI2
......@@ -122,7 +113,6 @@ config CRYPTO_MANAGER2
select CRYPTO_AEAD2
select CRYPTO_HASH2
select CRYPTO_BLKCIPHER2
select CRYPTO_PCOMP2
select CRYPTO_AKCIPHER2
config CRYPTO_USER
......@@ -227,6 +217,9 @@ config CRYPTO_GLUE_HELPER_X86
depends on X86
select CRYPTO_ALGAPI
config CRYPTO_ENGINE
tristate
comment "Authenticated Encryption with Associated Data"
config CRYPTO_CCM
......@@ -1506,15 +1499,6 @@ config CRYPTO_DEFLATE
You will most probably want this if using IPSec.
config CRYPTO_ZLIB
tristate "Zlib compression algorithm"
select CRYPTO_PCOMP
select ZLIB_INFLATE
select ZLIB_DEFLATE
select NLATTR
help
This is the zlib algorithm.
config CRYPTO_LZO
tristate "LZO compression algorithm"
select CRYPTO_ALGAPI
......@@ -1595,6 +1579,7 @@ endif # if CRYPTO_DRBG_MENU
config CRYPTO_JITTERENTROPY
tristate "Jitterentropy Non-Deterministic Random Number Generator"
select CRYPTO_RNG
help
The Jitterentropy RNG is a noise that is intended
to provide seed to another RNG. The RNG does not
......
......@@ -7,6 +7,7 @@ crypto-y := api.o cipher.o compress.o memneq.o
obj-$(CONFIG_CRYPTO_WORKQUEUE) += crypto_wq.o
obj-$(CONFIG_CRYPTO_ENGINE) += crypto_engine.o
obj-$(CONFIG_CRYPTO_FIPS) += fips.o
crypto_algapi-$(CONFIG_PROC_FS) += proc.o
......@@ -28,7 +29,6 @@ crypto_hash-y += ahash.o
crypto_hash-y += shash.o
obj-$(CONFIG_CRYPTO_HASH2) += crypto_hash.o
obj-$(CONFIG_CRYPTO_PCOMP2) += pcompress.o
obj-$(CONFIG_CRYPTO_AKCIPHER2) += akcipher.o
$(obj)/rsapubkey-asn1.o: $(obj)/rsapubkey-asn1.c $(obj)/rsapubkey-asn1.h
......@@ -99,10 +99,9 @@ obj-$(CONFIG_CRYPTO_SALSA20) += salsa20_generic.o
obj-$(CONFIG_CRYPTO_CHACHA20) += chacha20_generic.o
obj-$(CONFIG_CRYPTO_POLY1305) += poly1305_generic.o
obj-$(CONFIG_CRYPTO_DEFLATE) += deflate.o
obj-$(CONFIG_CRYPTO_ZLIB) += zlib.o
obj-$(CONFIG_CRYPTO_MICHAEL_MIC) += michael_mic.o
obj-$(CONFIG_CRYPTO_CRC32C) += crc32c_generic.o
obj-$(CONFIG_CRYPTO_CRC32) += crc32.o
obj-$(CONFIG_CRYPTO_CRC32) += crc32_generic.o
obj-$(CONFIG_CRYPTO_CRCT10DIF) += crct10dif_common.o crct10dif_generic.o
obj-$(CONFIG_CRYPTO_AUTHENC) += authenc.o authencesn.o
obj-$(CONFIG_CRYPTO_LZO) += lzo.o
......
......@@ -166,24 +166,6 @@ int crypto_ahash_walk_first(struct ahash_request *req,
}
EXPORT_SYMBOL_GPL(crypto_ahash_walk_first);
int crypto_hash_walk_first_compat(struct hash_desc *hdesc,
struct crypto_hash_walk *walk,
struct scatterlist *sg, unsigned int len)
{
walk->total = len;
if (!walk->total) {
walk->entrylen = 0;
return 0;
}
walk->alignmask = crypto_hash_alignmask(hdesc->tfm);
walk->sg = sg;
walk->flags = hdesc->flags & CRYPTO_TFM_REQ_MASK;
return hash_walk_new_entry(walk);
}
static int ahash_setkey_unaligned(struct crypto_ahash *tfm, const u8 *key,
unsigned int keylen)
{
......@@ -542,6 +524,12 @@ struct crypto_ahash *crypto_alloc_ahash(const char *alg_name, u32 type,
}
EXPORT_SYMBOL_GPL(crypto_alloc_ahash);
int crypto_has_ahash(const char *alg_name, u32 type, u32 mask)
{
return crypto_type_has_alg(alg_name, &crypto_ahash_type, type, mask);
}
EXPORT_SYMBOL_GPL(crypto_has_ahash);
static int ahash_prepare_alg(struct ahash_alg *alg)
{
struct crypto_alg *base = &alg->halg.base;
......
......@@ -987,6 +987,21 @@ unsigned int crypto_alg_extsize(struct crypto_alg *alg)
}
EXPORT_SYMBOL_GPL(crypto_alg_extsize);
int crypto_type_has_alg(const char *name, const struct crypto_type *frontend,
u32 type, u32 mask)
{
int ret = 0;
struct crypto_alg *alg = crypto_find_alg(name, frontend, type, mask);
if (!IS_ERR(alg)) {
crypto_mod_put(alg);
ret = 1;
}
return ret;
}
EXPORT_SYMBOL_GPL(crypto_type_has_alg);
static int __init crypto_algapi_init(void)
{
crypto_init_proc();
......
......@@ -131,7 +131,7 @@ static struct shash_alg alg = {
.digestsize = CHKSUM_DIGEST_SIZE,
.base = {
.cra_name = "crc32",
.cra_driver_name = "crc32-table",
.cra_driver_name = "crc32-generic",
.cra_priority = 100,
.cra_blocksize = CHKSUM_BLOCK_SIZE,
.cra_ctxsize = sizeof(u32),
......@@ -157,3 +157,4 @@ MODULE_AUTHOR("Alexander Boyko <alexander_boyko@xyratex.com>");
MODULE_DESCRIPTION("CRC32 calculations wrapper for lib/crc32");
MODULE_LICENSE("GPL");
MODULE_ALIAS_CRYPTO("crc32");
MODULE_ALIAS_CRYPTO("crc32-generic");
/*
* Handle async block request by crypto hardware engine.
*
* Copyright (C) 2016 Linaro, Inc.
*
* Author: Baolin Wang <baolin.wang@linaro.org>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
#include <linux/err.h>
#include <linux/delay.h>
#include "internal.h"
#define CRYPTO_ENGINE_MAX_QLEN 10
void crypto_finalize_request(struct crypto_engine *engine,
struct ablkcipher_request *req, int err);
/**
* crypto_pump_requests - dequeue one request from engine queue to process
* @engine: the hardware engine
* @in_kthread: true if we are in the context of the request pump thread
*
* This function checks if there is any request in the engine queue that
* needs processing and if so call out to the driver to initialize hardware
* and handle each request.
*/
static void crypto_pump_requests(struct crypto_engine *engine,
bool in_kthread)
{
struct crypto_async_request *async_req, *backlog;
struct ablkcipher_request *req;
unsigned long flags;
bool was_busy = false;
int ret;
spin_lock_irqsave(&engine->queue_lock, flags);
/* Make sure we are not already running a request */
if (engine->cur_req)
goto out;
/* If another context is idling then defer */
if (engine->idling) {
queue_kthread_work(&engine->kworker, &engine->pump_requests);
goto out;
}
/* Check if the engine queue is idle */
if (!crypto_queue_len(&engine->queue) || !engine->running) {
if (!engine->busy)
goto out;
/* Only do teardown in the thread */
if (!in_kthread) {
queue_kthread_work(&engine->kworker,
&engine->pump_requests);
goto out;
}
engine->busy = false;
engine->idling = true;
spin_unlock_irqrestore(&engine->queue_lock, flags);
if (engine->unprepare_crypt_hardware &&
engine->unprepare_crypt_hardware(engine))
pr_err("failed to unprepare crypt hardware\n");
spin_lock_irqsave(&engine->queue_lock, flags);
engine->idling = false;
goto out;
}
/* Get the first request from the engine queue to handle */
backlog = crypto_get_backlog(&engine->queue);
async_req = crypto_dequeue_request(&engine->queue);
if (!async_req)
goto out;
req = ablkcipher_request_cast(async_req);
engine->cur_req = req;
if (backlog)
backlog->complete(backlog, -EINPROGRESS);
if (engine->busy)
was_busy = true;
else
engine->busy = true;
spin_unlock_irqrestore(&engine->queue_lock, flags);
/* At this point we have successfully dequeued a request that needs processing */
if (!was_busy && engine->prepare_crypt_hardware) {
ret = engine->prepare_crypt_hardware(engine);
if (ret) {
pr_err("failed to prepare crypt hardware\n");
goto req_err;
}
}
if (engine->prepare_request) {
ret = engine->prepare_request(engine, engine->cur_req);
if (ret) {
pr_err("failed to prepare request: %d\n", ret);
goto req_err;
}
engine->cur_req_prepared = true;
}
ret = engine->crypt_one_request(engine, engine->cur_req);
if (ret) {
pr_err("failed to crypt one request from queue\n");
goto req_err;
}
return;
req_err:
crypto_finalize_request(engine, engine->cur_req, ret);
return;
out:
spin_unlock_irqrestore(&engine->queue_lock, flags);
}
static void crypto_pump_work(struct kthread_work *work)
{
struct crypto_engine *engine =
container_of(work, struct crypto_engine, pump_requests);
crypto_pump_requests(engine, true);
}
/**
* crypto_transfer_request - transfer the new request into the engine queue
* @engine: the hardware engine
* @req: the request need to be listed into the engine queue
*/
int crypto_transfer_request(struct crypto_engine *engine,
struct ablkcipher_request *req, bool need_pump)
{
unsigned long flags;
int ret;
spin_lock_irqsave(&engine->queue_lock, flags);
if (!engine->running) {
spin_unlock_irqrestore(&engine->queue_lock, flags);
return -ESHUTDOWN;
}
ret = ablkcipher_enqueue_request(&engine->queue, req);
if (!engine->busy && need_pump)
queue_kthread_work(&engine->kworker, &engine->pump_requests);
spin_unlock_irqrestore(&engine->queue_lock, flags);
return ret;
}
EXPORT_SYMBOL_GPL(crypto_transfer_request);
/**
* crypto_transfer_request_to_engine - transfer one request to list into the
* engine queue
* @engine: the hardware engine
* @req: the request need to be listed into the engine queue
*/
int crypto_transfer_request_to_engine(struct crypto_engine *engine,
struct ablkcipher_request *req)
{
return crypto_transfer_request(engine, req, true);
}
EXPORT_SYMBOL_GPL(crypto_transfer_request_to_engine);
/**
* crypto_finalize_request - finalize one request if the request is done
* @engine: the hardware engine
* @req: the request need to be finalized
* @err: error number
*/
void crypto_finalize_request(struct crypto_engine *engine,
struct ablkcipher_request *req, int err)
{
unsigned long flags;
bool finalize_cur_req = false;
int ret;
spin_lock_irqsave(&engine->queue_lock, flags);
if (engine->cur_req == req)
finalize_cur_req = true;
spin_unlock_irqrestore(&engine->queue_lock, flags);
if (finalize_cur_req) {
if (engine->cur_req_prepared && engine->unprepare_request) {
ret = engine->unprepare_request(engine, req);
if (ret)
pr_err("failed to unprepare request\n");
}
spin_lock_irqsave(&engine->queue_lock, flags);
engine->cur_req = NULL;
engine->cur_req_prepared = false;
spin_unlock_irqrestore(&engine->queue_lock, flags);
}
req->base.complete(&req->base, err);
queue_kthread_work(&engine->kworker, &engine->pump_requests);
}
EXPORT_SYMBOL_GPL(crypto_finalize_request);
/**
* crypto_engine_start - start the hardware engine
* @engine: the hardware engine need to be started
*
* Return 0 on success, else on fail.
*/
int crypto_engine_start(struct crypto_engine *engine)
{
unsigned long flags;
spin_lock_irqsave(&engine->queue_lock, flags);
if (engine->running || engine->busy) {
spin_unlock_irqrestore(&engine->queue_lock, flags);
return -EBUSY;
}
engine->running = true;
spin_unlock_irqrestore(&engine->queue_lock, flags);
queue_kthread_work(&engine->kworker, &engine->pump_requests);
return 0;
}
EXPORT_SYMBOL_GPL(crypto_engine_start);
/**
* crypto_engine_stop - stop the hardware engine
* @engine: the hardware engine need to be stopped
*
* Return 0 on success, else on fail.
*/
int crypto_engine_stop(struct crypto_engine *engine)
{
unsigned long flags;
unsigned limit = 500;
int ret = 0;
spin_lock_irqsave(&engine->queue_lock, flags);
/*
* If the engine queue is not empty or the engine is on busy state,
* we need to wait for a while to pump the requests of engine queue.
*/
while ((crypto_queue_len(&engine->queue) || engine->busy) && limit--) {
spin_unlock_irqrestore(&engine->queue_lock, flags);
msleep(20);
spin_lock_irqsave(&engine->queue_lock, flags);
}
if (crypto_queue_len(&engine->queue) || engine->busy)
ret = -EBUSY;
else
engine->running = false;
spin_unlock_irqrestore(&engine->queue_lock, flags);
if (ret)
pr_warn("could not stop engine\n");
return ret;
}
EXPORT_SYMBOL_GPL(crypto_engine_stop);
/**
* crypto_engine_alloc_init - allocate crypto hardware engine structure and
* initialize it.
* @dev: the device attached with one hardware engine
* @rt: whether this queue is set to run as a realtime task
*
* This must be called from context that can sleep.
* Return: the crypto engine structure on success, else NULL.
*/
struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt)
{
struct sched_param param = { .sched_priority = MAX_RT_PRIO - 1 };
struct crypto_engine *engine;
if (!dev)
return NULL;
engine = devm_kzalloc(dev, sizeof(*engine), GFP_KERNEL);
if (!engine)
return NULL;
engine->rt = rt;
engine->running = false;
engine->busy = false;
engine->idling = false;
engine->cur_req_prepared = false;
engine->priv_data = dev;
snprintf(engine->name, sizeof(engine->name),
"%s-engine", dev_name(dev));
crypto_init_queue(&engine->queue, CRYPTO_ENGINE_MAX_QLEN);
spin_lock_init(&engine->queue_lock);
init_kthread_worker(&engine->kworker);
engine->kworker_task = kthread_run(kthread_worker_fn,
&engine->kworker, "%s",
engine->name);
if (IS_ERR(engine->kworker_task)) {
dev_err(dev, "failed to create crypto request pump task\n");
return NULL;
}
init_kthread_work(&engine->pump_requests, crypto_pump_work);
if (engine->rt) {
dev_info(dev, "will run requests pump with realtime priority\n");
sched_setscheduler(engine->kworker_task, SCHED_FIFO, &param);
}
return engine;
}
EXPORT_SYMBOL_GPL(crypto_engine_alloc_init);
/**
* crypto_engine_exit - free the resources of hardware engine when exit
* @engine: the hardware engine need to be freed
*
* Return 0 for success.
*/
int crypto_engine_exit(struct crypto_engine *engine)
{
int ret;
ret = crypto_engine_stop(engine);
if (ret)
return ret;
flush_kthread_worker(&engine->kworker);
kthread_stop(engine->kworker_task);
return 0;
}
EXPORT_SYMBOL_GPL(crypto_engine_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Crypto hardware engine framework");
......@@ -219,48 +219,6 @@ static inline unsigned short drbg_sec_strength(drbg_flag_t flags)
}
}
/*
* FIPS 140-2 continuous self test
* The test is performed on the result of one round of the output
* function. Thus, the function implicitly knows the size of the
* buffer.
*
* @drbg DRBG handle
* @buf output buffer of random data to be checked
*
* return:
* true on success
* false on error
*/
static bool drbg_fips_continuous_test(struct drbg_state *drbg,
const unsigned char *buf)
{
#ifdef CONFIG_CRYPTO_FIPS
int ret = 0;
/* skip test if we test the overall system */
if (list_empty(&drbg->test_data.list))
return true;
/* only perform test in FIPS mode */
if (0 == fips_enabled)
return true;
if (!drbg->fips_primed) {
/* Priming of FIPS test */
memcpy(drbg->prev, buf, drbg_blocklen(drbg));
drbg->fips_primed = true;
/* return false due to priming, i.e. another round is needed */
return false;
}
ret = memcmp(drbg->prev, buf, drbg_blocklen(drbg));
if (!ret)
panic("DRBG continuous self test failed\n");
memcpy(drbg->prev, buf, drbg_blocklen(drbg));
/* the test shall pass when the two compared values are not equal */
return ret != 0;
#else
return true;
#endif /* CONFIG_CRYPTO_FIPS */
}
/*
* Convert an integer into a byte representation of this integer.
* The byte representation is big-endian
......@@ -603,11 +561,6 @@ static int drbg_ctr_generate(struct drbg_state *drbg,
}
outlen = (drbg_blocklen(drbg) < (buflen - len)) ?
drbg_blocklen(drbg) : (buflen - len);
if (!drbg_fips_continuous_test(drbg, drbg->scratchpad)) {
/* 10.2.1.5.2 step 6 */
crypto_inc(drbg->V, drbg_blocklen(drbg));
continue;
}
/* 10.2.1.5.2 step 4.3 */
memcpy(buf + len, drbg->scratchpad, outlen);
len += outlen;
......@@ -733,8 +686,6 @@ static int drbg_hmac_generate(struct drbg_state *drbg,
return ret;
outlen = (drbg_blocklen(drbg) < (buflen - len)) ?
drbg_blocklen(drbg) : (buflen - len);
if (!drbg_fips_continuous_test(drbg, drbg->V))
continue;
/* 10.1.2.5 step 4.2 */
memcpy(buf + len, drbg->V, outlen);
......@@ -963,10 +914,6 @@ static int drbg_hash_hashgen(struct drbg_state *drbg,
}
outlen = (drbg_blocklen(drbg) < (buflen - len)) ?
drbg_blocklen(drbg) : (buflen - len);
if (!drbg_fips_continuous_test(drbg, dst)) {
crypto_inc(src, drbg_statelen(drbg));
continue;
}
/* 10.1.1.4 step hashgen 4.2 */
memcpy(buf + len, dst, outlen);
len += outlen;
......@@ -1201,11 +1148,6 @@ static inline void drbg_dealloc_state(struct drbg_state *drbg)
drbg->reseed_ctr = 0;
drbg->d_ops = NULL;
drbg->core = NULL;
#ifdef CONFIG_CRYPTO_FIPS
kzfree(drbg->prev);
drbg->prev = NULL;
drbg->fips_primed = false;
#endif
}
/*
......@@ -1244,12 +1186,6 @@ static inline int drbg_alloc_state(struct drbg_state *drbg)
drbg->C = kmalloc(drbg_statelen(drbg), GFP_KERNEL);
if (!drbg->C)
goto err;
#ifdef CONFIG_CRYPTO_FIPS
drbg->prev = kmalloc(drbg_blocklen(drbg), GFP_KERNEL);
if (!drbg->prev)
goto err;
drbg->fips_primed = false;
#endif
/* scratchpad is only generated for CTR and Hash */
if (drbg->core->flags & DRBG_HMAC)
sb_size = 0;
......
......@@ -104,6 +104,9 @@ int crypto_probing_notify(unsigned long val, void *v);
unsigned int crypto_alg_extsize(struct crypto_alg *alg);
int crypto_type_has_alg(const char *name, const struct crypto_type *frontend,
u32 type, u32 mask);
static inline struct crypto_alg *crypto_alg_get(struct crypto_alg *alg)
{
atomic_inc(&alg->cra_refcnt);
......
......@@ -212,7 +212,7 @@ static int crypto_kw_decrypt(struct blkcipher_desc *desc,
SEMIBSIZE))
ret = -EBADMSG;
memzero_explicit(&block, sizeof(struct crypto_kw_block));
memzero_explicit(block, sizeof(struct crypto_kw_block));
return ret;
}
......@@ -297,7 +297,7 @@ static int crypto_kw_encrypt(struct blkcipher_desc *desc,
/* establish the IV for the caller to pick up */
memcpy(desc->info, block->A, SEMIBSIZE);
memzero_explicit(&block, sizeof(struct crypto_kw_block));
memzero_explicit(block, sizeof(struct crypto_kw_block));
return 0;
}
......
......@@ -522,6 +522,7 @@ static int mcryptd_create_hash(struct crypto_template *tmpl, struct rtattr **tb,
inst->alg.halg.base.cra_flags = type;
inst->alg.halg.digestsize = salg->digestsize;
inst->alg.halg.statesize = salg->statesize;
inst->alg.halg.base.cra_ctxsize = sizeof(struct mcryptd_hash_ctx);
inst->alg.halg.base.cra_init = mcryptd_hash_init_tfm;
......
/*
* Cryptographic API.
*
* Partial (de)compression operations.
*
* Copyright 2008 Sony Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; version 2 of the License.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program.
* If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/crypto.h>
#include <linux/errno.h>
#include <linux/module.h>
#include <linux/seq_file.h>
#include <linux/string.h>
#include <linux/cryptouser.h>
#include <net/netlink.h>
#include <crypto/compress.h>
#include <crypto/internal/compress.h>
#include "internal.h"
static int crypto_pcomp_init(struct crypto_tfm *tfm, u32 type, u32 mask)
{
return 0;
}
static int crypto_pcomp_init_tfm(struct crypto_tfm *tfm)
{
return 0;
}
#ifdef CONFIG_NET
static int crypto_pcomp_report(struct sk_buff *skb, struct crypto_alg *alg)
{
struct crypto_report_comp rpcomp;
strncpy(rpcomp.type, "pcomp", sizeof(rpcomp.type));
if (nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS,
sizeof(struct crypto_report_comp), &rpcomp))
goto nla_put_failure;
return 0;
nla_put_failure:
return -EMSGSIZE;
}
#else
static int crypto_pcomp_report(struct sk_buff *skb, struct crypto_alg *alg)
{
return -ENOSYS;
}
#endif
static void crypto_pcomp_show(struct seq_file *m, struct crypto_alg *alg)
__attribute__ ((unused));
static void crypto_pcomp_show(struct seq_file *m, struct crypto_alg *alg)
{
seq_printf(m, "type : pcomp\n");
}
static const struct crypto_type crypto_pcomp_type = {
.extsize = crypto_alg_extsize,
.init = crypto_pcomp_init,
.init_tfm = crypto_pcomp_init_tfm,
#ifdef CONFIG_PROC_FS
.show = crypto_pcomp_show,
#endif
.report = crypto_pcomp_report,
.maskclear = ~CRYPTO_ALG_TYPE_MASK,
.maskset = CRYPTO_ALG_TYPE_MASK,
.type = CRYPTO_ALG_TYPE_PCOMPRESS,
.tfmsize = offsetof(struct crypto_pcomp, base),
};
struct crypto_pcomp *crypto_alloc_pcomp(const char *alg_name, u32 type,
u32 mask)
{
return crypto_alloc_tfm(alg_name, &crypto_pcomp_type, type, mask);
}
EXPORT_SYMBOL_GPL(crypto_alloc_pcomp);
int crypto_register_pcomp(struct pcomp_alg *alg)
{
struct crypto_alg *base = &alg->base;
base->cra_type = &crypto_pcomp_type;
base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
base->cra_flags |= CRYPTO_ALG_TYPE_PCOMPRESS;
return crypto_register_alg(base);
}
EXPORT_SYMBOL_GPL(crypto_register_pcomp);
int crypto_unregister_pcomp(struct pcomp_alg *alg)
{
return crypto_unregister_alg(&alg->base);
}
EXPORT_SYMBOL_GPL(crypto_unregister_pcomp);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Partial (de)compression type");
MODULE_AUTHOR("Sony Corporation");
......@@ -368,151 +368,6 @@ int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
return 0;
}
static int shash_compat_setkey(struct crypto_hash *tfm, const u8 *key,
unsigned int keylen)
{
struct shash_desc **descp = crypto_hash_ctx(tfm);
struct shash_desc *desc = *descp;
return crypto_shash_setkey(desc->tfm, key, keylen);
}
static int shash_compat_init(struct hash_desc *hdesc)
{
struct shash_desc **descp = crypto_hash_ctx(hdesc->tfm);
struct shash_desc *desc = *descp;
desc->flags = hdesc->flags;
return crypto_shash_init(desc);
}
static int shash_compat_update(struct hash_desc *hdesc, struct scatterlist *sg,
unsigned int len)
{
struct shash_desc **descp = crypto_hash_ctx(hdesc->tfm);
struct shash_desc *desc = *descp;
struct crypto_hash_walk walk;
int nbytes;
for (nbytes = crypto_hash_walk_first_compat(hdesc, &walk, sg, len);
nbytes > 0; nbytes = crypto_hash_walk_done(&walk, nbytes))
nbytes = crypto_shash_update(desc, walk.data, nbytes);
return nbytes;
}
static int shash_compat_final(struct hash_desc *hdesc, u8 *out)
{
struct shash_desc **descp = crypto_hash_ctx(hdesc->tfm);
return crypto_shash_final(*descp, out);
}
static int shash_compat_digest(struct hash_desc *hdesc, struct scatterlist *sg,
unsigned int nbytes, u8 *out)
{
unsigned int offset = sg->offset;
int err;
if (nbytes < min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset)) {
struct shash_desc **descp = crypto_hash_ctx(hdesc->tfm);
struct shash_desc *desc = *descp;
void *data;
desc->flags = hdesc->flags;
data = kmap_atomic(sg_page(sg));
err = crypto_shash_digest(desc, data + offset, nbytes, out);
kunmap_atomic(data);
crypto_yield(desc->flags);
goto out;
}
err = shash_compat_init(hdesc);
if (err)
goto out;
err = shash_compat_update(hdesc, sg, nbytes);
if (err)
goto out;
err = shash_compat_final(hdesc, out);
out:
return err;
}
static void crypto_exit_shash_ops_compat(struct crypto_tfm *tfm)
{
struct shash_desc **descp = crypto_tfm_ctx(tfm);
struct shash_desc *desc = *descp;
crypto_free_shash(desc->tfm);
kzfree(desc);
}
static int crypto_init_shash_ops_compat(struct crypto_tfm *tfm)
{
struct hash_tfm *crt = &tfm->crt_hash;
struct crypto_alg *calg = tfm->__crt_alg;
struct shash_alg *alg = __crypto_shash_alg(calg);
struct shash_desc **descp = crypto_tfm_ctx(tfm);
struct crypto_shash *shash;
struct shash_desc *desc;
if (!crypto_mod_get(calg))
return -EAGAIN;
shash = crypto_create_tfm(calg, &crypto_shash_type);
if (IS_ERR(shash)) {
crypto_mod_put(calg);
return PTR_ERR(shash);
}
desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(shash),
GFP_KERNEL);
if (!desc) {
crypto_free_shash(shash);
return -ENOMEM;
}
*descp = desc;
desc->tfm = shash;
tfm->exit = crypto_exit_shash_ops_compat;
crt->init = shash_compat_init;
crt->update = shash_compat_update;
crt->final = shash_compat_final;
crt->digest = shash_compat_digest;
crt->setkey = shash_compat_setkey;
crt->digestsize = alg->digestsize;
return 0;
}
static int crypto_init_shash_ops(struct crypto_tfm *tfm, u32 type, u32 mask)
{
switch (mask & CRYPTO_ALG_TYPE_MASK) {
case CRYPTO_ALG_TYPE_HASH_MASK:
return crypto_init_shash_ops_compat(tfm);
}
return -EINVAL;
}
static unsigned int crypto_shash_ctxsize(struct crypto_alg *alg, u32 type,
u32 mask)
{
switch (mask & CRYPTO_ALG_TYPE_MASK) {
case CRYPTO_ALG_TYPE_HASH_MASK:
return sizeof(struct shash_desc *);
}
return 0;
}
static int crypto_shash_init_tfm(struct crypto_tfm *tfm)
{
struct crypto_shash *hash = __crypto_shash_cast(tfm);
......@@ -559,9 +414,7 @@ static void crypto_shash_show(struct seq_file *m, struct crypto_alg *alg)
}
static const struct crypto_type crypto_shash_type = {
.ctxsize = crypto_shash_ctxsize,
.extsize = crypto_alg_extsize,
.init = crypto_init_shash_ops,
.init_tfm = crypto_shash_init_tfm,
#ifdef CONFIG_PROC_FS
.show = crypto_shash_show,
......
......@@ -118,7 +118,7 @@ static int crypto_init_skcipher_ops_blkcipher(struct crypto_tfm *tfm)
skcipher->decrypt = skcipher_decrypt_blkcipher;
skcipher->ivsize = crypto_blkcipher_ivsize(blkcipher);
skcipher->has_setkey = calg->cra_blkcipher.max_keysize;
skcipher->keysize = calg->cra_blkcipher.max_keysize;
return 0;
}
......@@ -211,7 +211,7 @@ static int crypto_init_skcipher_ops_ablkcipher(struct crypto_tfm *tfm)
skcipher->ivsize = crypto_ablkcipher_ivsize(ablkcipher);
skcipher->reqsize = crypto_ablkcipher_reqsize(ablkcipher) +
sizeof(struct ablkcipher_request);
skcipher->has_setkey = calg->cra_ablkcipher.max_keysize;
skcipher->keysize = calg->cra_ablkcipher.max_keysize;
return 0;
}
......
......@@ -554,164 +554,6 @@ static void test_cipher_speed(const char *algo, int enc, unsigned int secs,
crypto_free_blkcipher(tfm);
}
static int test_hash_jiffies_digest(struct hash_desc *desc,
struct scatterlist *sg, int blen,
char *out, int secs)
{
unsigned long start, end;
int bcount;
int ret;
for (start = jiffies, end = start + secs * HZ, bcount = 0;
time_before(jiffies, end); bcount++) {
ret = crypto_hash_digest(desc, sg, blen, out);
if (ret)
return ret;
}
printk("%6u opers/sec, %9lu bytes/sec\n",
bcount / secs, ((long)bcount * blen) / secs);
return 0;
}
static int test_hash_jiffies(struct hash_desc *desc, struct scatterlist *sg,
int blen, int plen, char *out, int secs)
{
unsigned long start, end;
int bcount, pcount;
int ret;
if (plen == blen)
return test_hash_jiffies_digest(desc, sg, blen, out, secs);
for (start = jiffies, end = start + secs * HZ, bcount = 0;
time_before(jiffies, end); bcount++) {
ret = crypto_hash_init(desc);
if (ret)
return ret;
for (pcount = 0; pcount < blen; pcount += plen) {
ret = crypto_hash_update(desc, sg, plen);
if (ret)
return ret;
}
/* we assume there is enough space in 'out' for the result */
ret = crypto_hash_final(desc, out);
if (ret)
return ret;
}
printk("%6u opers/sec, %9lu bytes/sec\n",
bcount / secs, ((long)bcount * blen) / secs);
return 0;
}
static int test_hash_cycles_digest(struct hash_desc *desc,
struct scatterlist *sg, int blen, char *out)
{
unsigned long cycles = 0;
int i;
int ret;
local_irq_disable();
/* Warm-up run. */
for (i = 0; i < 4; i++) {
ret = crypto_hash_digest(desc, sg, blen, out);
if (ret)
goto out;
}
/* The real thing. */
for (i = 0; i < 8; i++) {
cycles_t start, end;
start = get_cycles();
ret = crypto_hash_digest(desc, sg, blen, out);
if (ret)
goto out;
end = get_cycles();
cycles += end - start;
}
out:
local_irq_enable();
if (ret)
return ret;
printk("%6lu cycles/operation, %4lu cycles/byte\n",
cycles / 8, cycles / (8 * blen));
return 0;
}
static int test_hash_cycles(struct hash_desc *desc, struct scatterlist *sg,
int blen, int plen, char *out)
{
unsigned long cycles = 0;
int i, pcount;
int ret;
if (plen == blen)
return test_hash_cycles_digest(desc, sg, blen, out);
local_irq_disable();
/* Warm-up run. */
for (i = 0; i < 4; i++) {
ret = crypto_hash_init(desc);
if (ret)
goto out;
for (pcount = 0; pcount < blen; pcount += plen) {
ret = crypto_hash_update(desc, sg, plen);
if (ret)
goto out;
}
ret = crypto_hash_final(desc, out);
if (ret)
goto out;
}
/* The real thing. */
for (i = 0; i < 8; i++) {
cycles_t start, end;
start = get_cycles();
ret = crypto_hash_init(desc);
if (ret)
goto out;
for (pcount = 0; pcount < blen; pcount += plen) {
ret = crypto_hash_update(desc, sg, plen);
if (ret)
goto out;
}
ret = crypto_hash_final(desc, out);
if (ret)
goto out;
end = get_cycles();
cycles += end - start;
}
out:
local_irq_enable();
if (ret)
return ret;
printk("%6lu cycles/operation, %4lu cycles/byte\n",
cycles / 8, cycles / (8 * blen));
return 0;
}
static void test_hash_sg_init(struct scatterlist *sg)
{
int i;
......@@ -723,69 +565,6 @@ static void test_hash_sg_init(struct scatterlist *sg)
}
}
static void test_hash_speed(const char *algo, unsigned int secs,
struct hash_speed *speed)
{
struct scatterlist sg[TVMEMSIZE];
struct crypto_hash *tfm;
struct hash_desc desc;
static char output[1024];
int i;
int ret;
tfm = crypto_alloc_hash(algo, 0, CRYPTO_ALG_ASYNC);
if (IS_ERR(tfm)) {
printk(KERN_ERR "failed to load transform for %s: %ld\n", algo,
PTR_ERR(tfm));
return;
}
printk(KERN_INFO "\ntesting speed of %s (%s)\n", algo,
get_driver_name(crypto_hash, tfm));
desc.tfm = tfm;
desc.flags = 0;
if (crypto_hash_digestsize(tfm) > sizeof(output)) {
printk(KERN_ERR "digestsize(%u) > outputbuffer(%zu)\n",
crypto_hash_digestsize(tfm), sizeof(output));
goto out;
}
test_hash_sg_init(sg);
for (i = 0; speed[i].blen != 0; i++) {
if (speed[i].blen > TVMEMSIZE * PAGE_SIZE) {
printk(KERN_ERR
"template (%u) too big for tvmem (%lu)\n",
speed[i].blen, TVMEMSIZE * PAGE_SIZE);
goto out;
}
if (speed[i].klen)
crypto_hash_setkey(tfm, tvmem[0], speed[i].klen);
printk(KERN_INFO "test%3u "
"(%5u byte blocks,%5u bytes per update,%4u updates): ",
i, speed[i].blen, speed[i].plen, speed[i].blen / speed[i].plen);
if (secs)
ret = test_hash_jiffies(&desc, sg, speed[i].blen,
speed[i].plen, output, secs);
else
ret = test_hash_cycles(&desc, sg, speed[i].blen,
speed[i].plen, output);
if (ret) {
printk(KERN_ERR "hashing failed ret=%d\n", ret);
break;
}
}
out:
crypto_free_hash(tfm);
}
static inline int do_one_ahash_op(struct ahash_request *req, int ret)
{
if (ret == -EINPROGRESS || ret == -EBUSY) {
......@@ -945,8 +724,8 @@ static int test_ahash_cycles(struct ahash_request *req, int blen,
return 0;
}
static void test_ahash_speed(const char *algo, unsigned int secs,
struct hash_speed *speed)
static void test_ahash_speed_common(const char *algo, unsigned int secs,
struct hash_speed *speed, unsigned mask)
{
struct scatterlist sg[TVMEMSIZE];
struct tcrypt_result tresult;
......@@ -955,7 +734,7 @@ static void test_ahash_speed(const char *algo, unsigned int secs,
char *output;
int i, ret;
tfm = crypto_alloc_ahash(algo, 0, 0);
tfm = crypto_alloc_ahash(algo, 0, mask);
if (IS_ERR(tfm)) {
pr_err("failed to load transform for %s: %ld\n",
algo, PTR_ERR(tfm));
......@@ -1021,6 +800,18 @@ static void test_ahash_speed(const char *algo, unsigned int secs,
crypto_free_ahash(tfm);
}
static void test_ahash_speed(const char *algo, unsigned int secs,
struct hash_speed *speed)
{
return test_ahash_speed_common(algo, secs, speed, 0);
}
static void test_hash_speed(const char *algo, unsigned int secs,
struct hash_speed *speed)
{
return test_ahash_speed_common(algo, secs, speed, CRYPTO_ALG_ASYNC);
}
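With crypto_hash removed, the synchronous speed test above is just the ahash path with CRYPTO_ALG_ASYNC set in the allocation mask, which limits the lookup to implementations that complete synchronously. As a hedged, self-contained sketch (not part of this patch; the helper names are invented) of a one-shot digest over the same ahash API:

/* sketch only: mirrors the tcrypt wait pattern, helper names are made up */
#include <crypto/hash.h>
#include <linux/completion.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

struct one_shot_result {
	struct completion completion;
	int err;
};

static void one_shot_done(struct crypto_async_request *req, int err)
{
	struct one_shot_result *res = req->data;

	if (err == -EINPROGRESS)
		return;
	res->err = err;
	complete(&res->completion);
}

static int one_shot_digest(const char *algo, bool sync_only,
			   const void *data, unsigned int len, u8 *out)
{
	struct crypto_ahash *tfm;
	struct ahash_request *req;
	struct scatterlist sg;
	struct one_shot_result res;
	int ret;

	/* mask = CRYPTO_ALG_ASYNC restricts the lookup to sync implementations */
	tfm = crypto_alloc_ahash(algo, 0, sync_only ? CRYPTO_ALG_ASYNC : 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		ret = -ENOMEM;
		goto out_free_tfm;
	}

	init_completion(&res.completion);
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   one_shot_done, &res);

	sg_init_one(&sg, data, len);
	ahash_request_set_crypt(req, &sg, out, len);

	ret = crypto_ahash_digest(req);
	if (ret == -EINPROGRESS || ret == -EBUSY) {
		/* result is delivered through the callback */
		wait_for_completion(&res.completion);
		ret = res.err;
	}

	ahash_request_free(req);
out_free_tfm:
	crypto_free_ahash(tfm);
	return ret;
}

The completion-based wait mirrors wait_async_op() in the patch: a return of -EINPROGRESS or -EBUSY means the final status arrives through the request callback.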
static inline int do_one_acipher_op(struct ablkcipher_request *req, int ret)
{
if (ret == -EINPROGRESS || ret == -EBUSY) {
......
......@@ -96,13 +96,6 @@ struct comp_test_suite {
} comp, decomp;
};
struct pcomp_test_suite {
struct {
struct pcomp_testvec *vecs;
unsigned int count;
} comp, decomp;
};
struct hash_test_suite {
struct hash_testvec *vecs;
unsigned int count;
......@@ -133,7 +126,6 @@ struct alg_test_desc {
struct aead_test_suite aead;
struct cipher_test_suite cipher;
struct comp_test_suite comp;
struct pcomp_test_suite pcomp;
struct hash_test_suite hash;
struct cprng_test_suite cprng;
struct drbg_test_suite drbg;
......@@ -198,6 +190,61 @@ static int wait_async_op(struct tcrypt_result *tr, int ret)
return ret;
}
static int ahash_partial_update(struct ahash_request **preq,
struct crypto_ahash *tfm, struct hash_testvec *template,
void *hash_buff, int k, int temp, struct scatterlist *sg,
const char *algo, char *result, struct tcrypt_result *tresult)
{
char *state;
struct ahash_request *req;
int statesize, ret = -EINVAL;
req = *preq;
statesize = crypto_ahash_statesize(
crypto_ahash_reqtfm(req));
state = kmalloc(statesize, GFP_KERNEL);
if (!state) {
pr_err("alt: hash: Failed to alloc state for %s\n", algo);
goto out_nostate;
}
ret = crypto_ahash_export(req, state);
if (ret) {
pr_err("alt: hash: Failed to export() for %s\n", algo);
goto out;
}
ahash_request_free(req);
req = ahash_request_alloc(tfm, GFP_KERNEL);
if (!req) {
pr_err("alg: hash: Failed to alloc request for %s\n", algo);
goto out_noreq;
}
ahash_request_set_callback(req,
CRYPTO_TFM_REQ_MAY_BACKLOG,
tcrypt_complete, tresult);
memcpy(hash_buff, template->plaintext + temp,
template->tap[k]);
sg_init_one(&sg[0], hash_buff, template->tap[k]);
ahash_request_set_crypt(req, sg, result, template->tap[k]);
ret = crypto_ahash_import(req, state);
if (ret) {
pr_err("alg: hash: Failed to import() for %s\n", algo);
goto out;
}
ret = wait_async_op(tresult, crypto_ahash_update(req));
if (ret)
goto out;
*preq = req;
ret = 0;
goto out_noreq;
out:
ahash_request_free(req);
out_noreq:
kfree(state);
out_nostate:
return ret;
}
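The helper above hinges on crypto_ahash_export()/crypto_ahash_import(): the opaque state of a partially hashed request, crypto_ahash_statesize() bytes long, can be saved and later restored into another request. A toy sketch of just that round trip (not from the patch):

/* hedged sketch: assumes req has already been through init()/update() */
#include <crypto/hash.h>
#include <linux/slab.h>

static int save_and_restore_state(struct crypto_ahash *tfm,
				  struct ahash_request *req)
{
	char *state;
	int err;

	state = kmalloc(crypto_ahash_statesize(tfm), GFP_KERNEL);
	if (!state)
		return -ENOMEM;

	err = crypto_ahash_export(req, state);		/* snapshot partial state */
	if (!err)
		err = crypto_ahash_import(req, state);	/* resume from it */

	kfree(state);
	return err;
}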
static int __test_hash(struct crypto_ahash *tfm, struct hash_testvec *template,
unsigned int tcount, bool use_digest,
const int align_offset)
......@@ -385,6 +432,84 @@ static int __test_hash(struct crypto_ahash *tfm, struct hash_testvec *template,
}
}
/* partial update exercise */
j = 0;
for (i = 0; i < tcount; i++) {
/* alignment tests are only done with continuous buffers */
if (align_offset != 0)
break;
if (template[i].np < 2)
continue;
j++;
memset(result, 0, MAX_DIGEST_SIZE);
ret = -EINVAL;
hash_buff = xbuf[0];
memcpy(hash_buff, template[i].plaintext,
template[i].tap[0]);
sg_init_one(&sg[0], hash_buff, template[i].tap[0]);
if (template[i].ksize) {
crypto_ahash_clear_flags(tfm, ~0);
if (template[i].ksize > MAX_KEYLEN) {
pr_err("alg: hash: setkey failed on test %d for %s: key size %d > %d\n",
j, algo, template[i].ksize, MAX_KEYLEN);
ret = -EINVAL;
goto out;
}
memcpy(key, template[i].key, template[i].ksize);
ret = crypto_ahash_setkey(tfm, key, template[i].ksize);
if (ret) {
pr_err("alg: hash: setkey failed on test %d for %s: ret=%d\n",
j, algo, -ret);
goto out;
}
}
ahash_request_set_crypt(req, sg, result, template[i].tap[0]);
ret = wait_async_op(&tresult, crypto_ahash_init(req));
if (ret) {
pr_err("alt: hash: init failed on test %d for %s: ret=%d\n",
j, algo, -ret);
goto out;
}
ret = wait_async_op(&tresult, crypto_ahash_update(req));
if (ret) {
pr_err("alt: hash: update failed on test %d for %s: ret=%d\n",
j, algo, -ret);
goto out;
}
temp = template[i].tap[0];
for (k = 1; k < template[i].np; k++) {
ret = ahash_partial_update(&req, tfm, &template[i],
hash_buff, k, temp, &sg[0], algo, result,
&tresult);
if (ret) {
pr_err("hash: partial update failed on test %d for %s: ret=%d\n",
j, algo, -ret);
goto out_noreq;
}
temp += template[i].tap[k];
}
ret = wait_async_op(&tresult, crypto_ahash_final(req));
if (ret) {
pr_err("alt: hash: final failed on test %d for %s: ret=%d\n",
j, algo, -ret);
goto out;
}
if (memcmp(result, template[i].digest,
crypto_ahash_digestsize(tfm))) {
pr_err("alg: hash: Partial Test %d failed for %s\n",
j, algo);
hexdump(result, crypto_ahash_digestsize(tfm));
ret = -EINVAL;
goto out;
}
}
ret = 0;
out:
......@@ -488,6 +613,8 @@ static int __test_aead(struct crypto_aead *tfm, int enc,
aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
tcrypt_complete, &result);
iv_len = crypto_aead_ivsize(tfm);
for (i = 0, j = 0; i < tcount; i++) {
if (template[i].np)
continue;
......@@ -508,7 +635,6 @@ static int __test_aead(struct crypto_aead *tfm, int enc,
memcpy(input, template[i].input, template[i].ilen);
memcpy(assoc, template[i].assoc, template[i].alen);
iv_len = crypto_aead_ivsize(tfm);
if (template[i].iv)
memcpy(iv, template[i].iv, iv_len);
else
......@@ -617,7 +743,7 @@ static int __test_aead(struct crypto_aead *tfm, int enc,
j++;
if (template[i].iv)
memcpy(iv, template[i].iv, MAX_IVLEN);
memcpy(iv, template[i].iv, iv_len);
else
memset(iv, 0, MAX_IVLEN);
......@@ -1293,183 +1419,6 @@ static int test_comp(struct crypto_comp *tfm, struct comp_testvec *ctemplate,
return ret;
}
static int test_pcomp(struct crypto_pcomp *tfm,
struct pcomp_testvec *ctemplate,
struct pcomp_testvec *dtemplate, int ctcount,
int dtcount)
{
const char *algo = crypto_tfm_alg_driver_name(crypto_pcomp_tfm(tfm));
unsigned int i;
char result[COMP_BUF_SIZE];
int res;
for (i = 0; i < ctcount; i++) {
struct comp_request req;
unsigned int produced = 0;
res = crypto_compress_setup(tfm, ctemplate[i].params,
ctemplate[i].paramsize);
if (res) {
pr_err("alg: pcomp: compression setup failed on test "
"%d for %s: error=%d\n", i + 1, algo, res);
return res;
}
res = crypto_compress_init(tfm);
if (res) {
pr_err("alg: pcomp: compression init failed on test "
"%d for %s: error=%d\n", i + 1, algo, res);
return res;
}
memset(result, 0, sizeof(result));
req.next_in = ctemplate[i].input;
req.avail_in = ctemplate[i].inlen / 2;
req.next_out = result;
req.avail_out = ctemplate[i].outlen / 2;
res = crypto_compress_update(tfm, &req);
if (res < 0 && (res != -EAGAIN || req.avail_in)) {
pr_err("alg: pcomp: compression update failed on test "
"%d for %s: error=%d\n", i + 1, algo, res);
return res;
}
if (res > 0)
produced += res;
/* Add remaining input data */
req.avail_in += (ctemplate[i].inlen + 1) / 2;
res = crypto_compress_update(tfm, &req);
if (res < 0 && (res != -EAGAIN || req.avail_in)) {
pr_err("alg: pcomp: compression update failed on test "
"%d for %s: error=%d\n", i + 1, algo, res);
return res;
}
if (res > 0)
produced += res;
/* Provide remaining output space */
req.avail_out += COMP_BUF_SIZE - ctemplate[i].outlen / 2;
res = crypto_compress_final(tfm, &req);
if (res < 0) {
pr_err("alg: pcomp: compression final failed on test "
"%d for %s: error=%d\n", i + 1, algo, res);
return res;
}
produced += res;
if (COMP_BUF_SIZE - req.avail_out != ctemplate[i].outlen) {
pr_err("alg: comp: Compression test %d failed for %s: "
"output len = %d (expected %d)\n", i + 1, algo,
COMP_BUF_SIZE - req.avail_out,
ctemplate[i].outlen);
return -EINVAL;
}
if (produced != ctemplate[i].outlen) {
pr_err("alg: comp: Compression test %d failed for %s: "
"returned len = %u (expected %d)\n", i + 1,
algo, produced, ctemplate[i].outlen);
return -EINVAL;
}
if (memcmp(result, ctemplate[i].output, ctemplate[i].outlen)) {
pr_err("alg: pcomp: Compression test %d failed for "
"%s\n", i + 1, algo);
hexdump(result, ctemplate[i].outlen);
return -EINVAL;
}
}
for (i = 0; i < dtcount; i++) {
struct comp_request req;
unsigned int produced = 0;
res = crypto_decompress_setup(tfm, dtemplate[i].params,
dtemplate[i].paramsize);
if (res) {
pr_err("alg: pcomp: decompression setup failed on "
"test %d for %s: error=%d\n", i + 1, algo, res);
return res;
}
res = crypto_decompress_init(tfm);
if (res) {
pr_err("alg: pcomp: decompression init failed on test "
"%d for %s: error=%d\n", i + 1, algo, res);
return res;
}
memset(result, 0, sizeof(result));
req.next_in = dtemplate[i].input;
req.avail_in = dtemplate[i].inlen / 2;
req.next_out = result;
req.avail_out = dtemplate[i].outlen / 2;
res = crypto_decompress_update(tfm, &req);
if (res < 0 && (res != -EAGAIN || req.avail_in)) {
pr_err("alg: pcomp: decompression update failed on "
"test %d for %s: error=%d\n", i + 1, algo, res);
return res;
}
if (res > 0)
produced += res;
/* Add remaining input data */
req.avail_in += (dtemplate[i].inlen + 1) / 2;
res = crypto_decompress_update(tfm, &req);
if (res < 0 && (res != -EAGAIN || req.avail_in)) {
pr_err("alg: pcomp: decompression update failed on "
"test %d for %s: error=%d\n", i + 1, algo, res);
return res;
}
if (res > 0)
produced += res;
/* Provide remaining output space */
req.avail_out += COMP_BUF_SIZE - dtemplate[i].outlen / 2;
res = crypto_decompress_final(tfm, &req);
if (res < 0 && (res != -EAGAIN || req.avail_in)) {
pr_err("alg: pcomp: decompression final failed on "
"test %d for %s: error=%d\n", i + 1, algo, res);
return res;
}
if (res > 0)
produced += res;
if (COMP_BUF_SIZE - req.avail_out != dtemplate[i].outlen) {
pr_err("alg: comp: Decompression test %d failed for "
"%s: output len = %d (expected %d)\n", i + 1,
algo, COMP_BUF_SIZE - req.avail_out,
dtemplate[i].outlen);
return -EINVAL;
}
if (produced != dtemplate[i].outlen) {
pr_err("alg: comp: Decompression test %d failed for "
"%s: returned len = %u (expected %d)\n", i + 1,
algo, produced, dtemplate[i].outlen);
return -EINVAL;
}
if (memcmp(result, dtemplate[i].output, dtemplate[i].outlen)) {
pr_err("alg: pcomp: Decompression test %d failed for "
"%s\n", i + 1, algo);
hexdump(result, dtemplate[i].outlen);
return -EINVAL;
}
}
return 0;
}
static int test_cprng(struct crypto_rng *tfm, struct cprng_testvec *template,
unsigned int tcount)
{
......@@ -1640,28 +1589,6 @@ static int alg_test_comp(const struct alg_test_desc *desc, const char *driver,
return err;
}
static int alg_test_pcomp(const struct alg_test_desc *desc, const char *driver,
u32 type, u32 mask)
{
struct crypto_pcomp *tfm;
int err;
tfm = crypto_alloc_pcomp(driver, type, mask);
if (IS_ERR(tfm)) {
pr_err("alg: pcomp: Failed to load transform for %s: %ld\n",
driver, PTR_ERR(tfm));
return PTR_ERR(tfm);
}
err = test_pcomp(tfm, desc->suite.pcomp.comp.vecs,
desc->suite.pcomp.decomp.vecs,
desc->suite.pcomp.comp.count,
desc->suite.pcomp.decomp.count);
crypto_free_pcomp(tfm);
return err;
}
static int alg_test_hash(const struct alg_test_desc *desc, const char *driver,
u32 type, u32 mask)
{
......@@ -2081,7 +2008,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, {
.alg = "ansi_cprng",
.test = alg_test_cprng,
.fips_allowed = 1,
.suite = {
.cprng = {
.vecs = ansi_cprng_aes_tv_template,
......@@ -2132,6 +2058,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}, {
.alg = "authenc(hmac(sha1),cbc(des3_ede))",
.test = alg_test_aead,
.fips_allowed = 1,
.suite = {
.aead = {
.enc = {
......@@ -2142,6 +2069,10 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}
}, {
.alg = "authenc(hmac(sha1),ctr(aes))",
.test = alg_test_null,
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha1),ecb(cipher_null))",
.test = alg_test_aead,
......@@ -2161,6 +2092,10 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}
}, {
.alg = "authenc(hmac(sha1),rfc3686(ctr(aes)))",
.test = alg_test_null,
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha224),cbc(des))",
.test = alg_test_aead,
......@@ -2177,6 +2112,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}, {
.alg = "authenc(hmac(sha224),cbc(des3_ede))",
.test = alg_test_aead,
.fips_allowed = 1,
.suite = {
.aead = {
.enc = {
......@@ -2190,6 +2126,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}, {
.alg = "authenc(hmac(sha256),cbc(aes))",
.test = alg_test_aead,
.fips_allowed = 1,
.suite = {
.aead = {
.enc = {
......@@ -2216,6 +2153,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}, {
.alg = "authenc(hmac(sha256),cbc(des3_ede))",
.test = alg_test_aead,
.fips_allowed = 1,
.suite = {
.aead = {
.enc = {
......@@ -2226,6 +2164,14 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}
}, {
.alg = "authenc(hmac(sha256),ctr(aes))",
.test = alg_test_null,
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha256),rfc3686(ctr(aes)))",
.test = alg_test_null,
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha384),cbc(des))",
.test = alg_test_aead,
......@@ -2242,6 +2188,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}, {
.alg = "authenc(hmac(sha384),cbc(des3_ede))",
.test = alg_test_aead,
.fips_allowed = 1,
.suite = {
.aead = {
.enc = {
......@@ -2252,8 +2199,17 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}
}, {
.alg = "authenc(hmac(sha384),ctr(aes))",
.test = alg_test_null,
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha384),rfc3686(ctr(aes)))",
.test = alg_test_null,
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha512),cbc(aes))",
.fips_allowed = 1,
.test = alg_test_aead,
.suite = {
.aead = {
......@@ -2281,6 +2237,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}, {
.alg = "authenc(hmac(sha512),cbc(des3_ede))",
.test = alg_test_aead,
.fips_allowed = 1,
.suite = {
.aead = {
.enc = {
......@@ -2291,6 +2248,14 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}
}, {
.alg = "authenc(hmac(sha512),ctr(aes))",
.test = alg_test_null,
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha512),rfc3686(ctr(aes)))",
.test = alg_test_null,
.fips_allowed = 1,
}, {
.alg = "cbc(aes)",
.test = alg_test_skcipher,
......@@ -3840,22 +3805,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}
}, {
.alg = "zlib",
.test = alg_test_pcomp,
.fips_allowed = 1,
.suite = {
.pcomp = {
.comp = {
.vecs = zlib_comp_tv_template,
.count = ZLIB_COMP_TEST_VECTORS
},
.decomp = {
.vecs = zlib_decomp_tv_template,
.count = ZLIB_DECOMP_TEST_VECTORS
}
}
}
}
};
......
......@@ -25,9 +25,6 @@
#define _CRYPTO_TESTMGR_H
#include <linux/netlink.h>
#include <linux/zlib.h>
#include <crypto/compress.h>
#define MAX_DIGEST_SIZE 64
#define MAX_TAP 8
......@@ -32268,14 +32265,6 @@ struct comp_testvec {
char output[COMP_BUF_SIZE];
};
struct pcomp_testvec {
const void *params;
unsigned int paramsize;
int inlen, outlen;
char input[COMP_BUF_SIZE];
char output[COMP_BUF_SIZE];
};
/*
* Deflate test vectors (null-terminated strings).
* Params: winbits=-11, Z_DEFAULT_COMPRESSION, MAX_MEM_LEVEL.
......@@ -32356,139 +32345,6 @@ static struct comp_testvec deflate_decomp_tv_template[] = {
},
};
#define ZLIB_COMP_TEST_VECTORS 2
#define ZLIB_DECOMP_TEST_VECTORS 2
static const struct {
struct nlattr nla;
int val;
} deflate_comp_params[] = {
{
.nla = {
.nla_len = NLA_HDRLEN + sizeof(int),
.nla_type = ZLIB_COMP_LEVEL,
},
.val = Z_DEFAULT_COMPRESSION,
}, {
.nla = {
.nla_len = NLA_HDRLEN + sizeof(int),
.nla_type = ZLIB_COMP_METHOD,
},
.val = Z_DEFLATED,
}, {
.nla = {
.nla_len = NLA_HDRLEN + sizeof(int),
.nla_type = ZLIB_COMP_WINDOWBITS,
},
.val = -11,
}, {
.nla = {
.nla_len = NLA_HDRLEN + sizeof(int),
.nla_type = ZLIB_COMP_MEMLEVEL,
},
.val = MAX_MEM_LEVEL,
}, {
.nla = {
.nla_len = NLA_HDRLEN + sizeof(int),
.nla_type = ZLIB_COMP_STRATEGY,
},
.val = Z_DEFAULT_STRATEGY,
}
};
static const struct {
struct nlattr nla;
int val;
} deflate_decomp_params[] = {
{
.nla = {
.nla_len = NLA_HDRLEN + sizeof(int),
.nla_type = ZLIB_DECOMP_WINDOWBITS,
},
.val = -11,
}
};
static struct pcomp_testvec zlib_comp_tv_template[] = {
{
.params = &deflate_comp_params,
.paramsize = sizeof(deflate_comp_params),
.inlen = 70,
.outlen = 38,
.input = "Join us now and share the software "
"Join us now and share the software ",
.output = "\xf3\xca\xcf\xcc\x53\x28\x2d\x56"
"\xc8\xcb\x2f\x57\x48\xcc\x4b\x51"
"\x28\xce\x48\x2c\x4a\x55\x28\xc9"
"\x48\x55\x28\xce\x4f\x2b\x29\x07"
"\x71\xbc\x08\x2b\x01\x00",
}, {
.params = &deflate_comp_params,
.paramsize = sizeof(deflate_comp_params),
.inlen = 191,
.outlen = 122,
.input = "This document describes a compression method based on the DEFLATE"
"compression algorithm. This document defines the application of "
"the DEFLATE algorithm to the IP Payload Compression Protocol.",
.output = "\x5d\x8d\x31\x0e\xc2\x30\x10\x04"
"\xbf\xb2\x2f\xc8\x1f\x10\x04\x09"
"\x89\xc2\x85\x3f\x70\xb1\x2f\xf8"
"\x24\xdb\x67\xd9\x47\xc1\xef\x49"
"\x68\x12\x51\xae\x76\x67\xd6\x27"
"\x19\x88\x1a\xde\x85\xab\x21\xf2"
"\x08\x5d\x16\x1e\x20\x04\x2d\xad"
"\xf3\x18\xa2\x15\x85\x2d\x69\xc4"
"\x42\x83\x23\xb6\x6c\x89\x71\x9b"
"\xef\xcf\x8b\x9f\xcf\x33\xca\x2f"
"\xed\x62\xa9\x4c\x80\xff\x13\xaf"
"\x52\x37\xed\x0e\x52\x6b\x59\x02"
"\xd9\x4e\xe8\x7a\x76\x1d\x02\x98"
"\xfe\x8a\x87\x83\xa3\x4f\x56\x8a"
"\xb8\x9e\x8e\x5c\x57\xd3\xa0\x79"
"\xfa\x02",
},
};
static struct pcomp_testvec zlib_decomp_tv_template[] = {
{
.params = &deflate_decomp_params,
.paramsize = sizeof(deflate_decomp_params),
.inlen = 122,
.outlen = 191,
.input = "\x5d\x8d\x31\x0e\xc2\x30\x10\x04"
"\xbf\xb2\x2f\xc8\x1f\x10\x04\x09"
"\x89\xc2\x85\x3f\x70\xb1\x2f\xf8"
"\x24\xdb\x67\xd9\x47\xc1\xef\x49"
"\x68\x12\x51\xae\x76\x67\xd6\x27"
"\x19\x88\x1a\xde\x85\xab\x21\xf2"
"\x08\x5d\x16\x1e\x20\x04\x2d\xad"
"\xf3\x18\xa2\x15\x85\x2d\x69\xc4"
"\x42\x83\x23\xb6\x6c\x89\x71\x9b"
"\xef\xcf\x8b\x9f\xcf\x33\xca\x2f"
"\xed\x62\xa9\x4c\x80\xff\x13\xaf"
"\x52\x37\xed\x0e\x52\x6b\x59\x02"
"\xd9\x4e\xe8\x7a\x76\x1d\x02\x98"
"\xfe\x8a\x87\x83\xa3\x4f\x56\x8a"
"\xb8\x9e\x8e\x5c\x57\xd3\xa0\x79"
"\xfa\x02",
.output = "This document describes a compression method based on the DEFLATE"
"compression algorithm. This document defines the application of "
"the DEFLATE algorithm to the IP Payload Compression Protocol.",
}, {
.params = &deflate_decomp_params,
.paramsize = sizeof(deflate_decomp_params),
.inlen = 38,
.outlen = 70,
.input = "\xf3\xca\xcf\xcc\x53\x28\x2d\x56"
"\xc8\xcb\x2f\x57\x48\xcc\x4b\x51"
"\x28\xce\x48\x2c\x4a\x55\x28\xc9"
"\x48\x55\x28\xce\x4f\x2b\x29\x07"
"\x71\xbc\x08\x2b\x01\x00",
.output = "Join us now and share the software "
"Join us now and share the software ",
},
};
/*
* LZO test vectors (null-terminated strings).
*/
......@@ -35,16 +35,11 @@ static int setkey(struct crypto_tfm *parent, const u8 *key,
{
struct priv *ctx = crypto_tfm_ctx(parent);
struct crypto_cipher *child = ctx->tweak;
u32 *flags = &parent->crt_flags;
int err;
/* key consists of keys of equal size concatenated, therefore
* the length must be even */
if (keylen % 2) {
/* tell the user why there was an error */
*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL;
}
err = xts_check_key(parent, key, keylen);
if (err)
return err;
/* we need two cipher instances: one to compute the initial 'tweak'
* by encrypting the IV (usually the 'plain' iv) and the other
......
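The open-coded length check in the xts setkey path is replaced by the shared xts_check_key() helper from include/crypto/xts.h. A rough sketch of the checks such a helper is expected to perform (hedged; the in-tree implementation may differ in detail):

/* illustrative reimplementation, not the in-tree helper */
#include <linux/crypto.h>
#include <linux/fips.h>
#include <linux/string.h>

static int xts_check_key_sketch(struct crypto_tfm *tfm,
				const u8 *key, unsigned int keylen)
{
	u32 *flags = &tfm->crt_flags;

	/* key = key1 || key2, so the overall length must be even */
	if (keylen % 2) {
		*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
		return -EINVAL;
	}

	/* in FIPS mode, reject keys whose two halves are identical */
	if (fips_enabled && !memcmp(key, key + keylen / 2, keylen / 2)) {
		*flags |= CRYPTO_TFM_RES_WEAK_KEY;
		return -EINVAL;
	}

	return 0;
}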
/*
* Cryptographic API.
*
* Zlib algorithm
*
* Copyright 2008 Sony Corporation
*
* Based on deflate.c, which is
* Copyright (c) 2003 James Morris <jmorris@intercode.com.au>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
* FIXME: deflate transforms will require up to a total of about 436k of kernel
* memory on i386 (390k for compression, the rest for decompression), as the
* current zlib kernel code uses a worst case pre-allocation system by default.
* This needs to be fixed so that the amount of memory required is properly
* related to the winbits and memlevel parameters.
*/
#define pr_fmt(fmt) "%s: " fmt, __func__
#include <linux/init.h>
#include <linux/module.h>
#include <linux/zlib.h>
#include <linux/vmalloc.h>
#include <linux/interrupt.h>
#include <linux/mm.h>
#include <linux/net.h>
#include <crypto/internal/compress.h>
#include <net/netlink.h>
struct zlib_ctx {
struct z_stream_s comp_stream;
struct z_stream_s decomp_stream;
int decomp_windowBits;
};
static void zlib_comp_exit(struct zlib_ctx *ctx)
{
struct z_stream_s *stream = &ctx->comp_stream;
if (stream->workspace) {
zlib_deflateEnd(stream);
vfree(stream->workspace);
stream->workspace = NULL;
}
}
static void zlib_decomp_exit(struct zlib_ctx *ctx)
{
struct z_stream_s *stream = &ctx->decomp_stream;
if (stream->workspace) {
zlib_inflateEnd(stream);
vfree(stream->workspace);
stream->workspace = NULL;
}
}
static int zlib_init(struct crypto_tfm *tfm)
{
return 0;
}
static void zlib_exit(struct crypto_tfm *tfm)
{
struct zlib_ctx *ctx = crypto_tfm_ctx(tfm);
zlib_comp_exit(ctx);
zlib_decomp_exit(ctx);
}
static int zlib_compress_setup(struct crypto_pcomp *tfm, const void *params,
unsigned int len)
{
struct zlib_ctx *ctx = crypto_tfm_ctx(crypto_pcomp_tfm(tfm));
struct z_stream_s *stream = &ctx->comp_stream;
struct nlattr *tb[ZLIB_COMP_MAX + 1];
int window_bits, mem_level;
size_t workspacesize;
int ret;
ret = nla_parse(tb, ZLIB_COMP_MAX, params, len, NULL);
if (ret)
return ret;
zlib_comp_exit(ctx);
window_bits = tb[ZLIB_COMP_WINDOWBITS]
? nla_get_u32(tb[ZLIB_COMP_WINDOWBITS])
: MAX_WBITS;
mem_level = tb[ZLIB_COMP_MEMLEVEL]
? nla_get_u32(tb[ZLIB_COMP_MEMLEVEL])
: DEF_MEM_LEVEL;
workspacesize = zlib_deflate_workspacesize(window_bits, mem_level);
stream->workspace = vzalloc(workspacesize);
if (!stream->workspace)
return -ENOMEM;
ret = zlib_deflateInit2(stream,
tb[ZLIB_COMP_LEVEL]
? nla_get_u32(tb[ZLIB_COMP_LEVEL])
: Z_DEFAULT_COMPRESSION,
tb[ZLIB_COMP_METHOD]
? nla_get_u32(tb[ZLIB_COMP_METHOD])
: Z_DEFLATED,
window_bits,
mem_level,
tb[ZLIB_COMP_STRATEGY]
? nla_get_u32(tb[ZLIB_COMP_STRATEGY])
: Z_DEFAULT_STRATEGY);
if (ret != Z_OK) {
vfree(stream->workspace);
stream->workspace = NULL;
return -EINVAL;
}
return 0;
}
static int zlib_compress_init(struct crypto_pcomp *tfm)
{
int ret;
struct zlib_ctx *dctx = crypto_tfm_ctx(crypto_pcomp_tfm(tfm));
struct z_stream_s *stream = &dctx->comp_stream;
ret = zlib_deflateReset(stream);
if (ret != Z_OK)
return -EINVAL;
return 0;
}
static int zlib_compress_update(struct crypto_pcomp *tfm,
struct comp_request *req)
{
int ret;
struct zlib_ctx *dctx = crypto_tfm_ctx(crypto_pcomp_tfm(tfm));
struct z_stream_s *stream = &dctx->comp_stream;
pr_debug("avail_in %u, avail_out %u\n", req->avail_in, req->avail_out);
stream->next_in = req->next_in;
stream->avail_in = req->avail_in;
stream->next_out = req->next_out;
stream->avail_out = req->avail_out;
ret = zlib_deflate(stream, Z_NO_FLUSH);
switch (ret) {
case Z_OK:
break;
case Z_BUF_ERROR:
pr_debug("zlib_deflate could not make progress\n");
return -EAGAIN;
default:
pr_debug("zlib_deflate failed %d\n", ret);
return -EINVAL;
}
ret = req->avail_out - stream->avail_out;
pr_debug("avail_in %lu, avail_out %lu (consumed %lu, produced %u)\n",
stream->avail_in, stream->avail_out,
req->avail_in - stream->avail_in, ret);
req->next_in = stream->next_in;
req->avail_in = stream->avail_in;
req->next_out = stream->next_out;
req->avail_out = stream->avail_out;
return ret;
}
static int zlib_compress_final(struct crypto_pcomp *tfm,
struct comp_request *req)
{
int ret;
struct zlib_ctx *dctx = crypto_tfm_ctx(crypto_pcomp_tfm(tfm));
struct z_stream_s *stream = &dctx->comp_stream;
pr_debug("avail_in %u, avail_out %u\n", req->avail_in, req->avail_out);
stream->next_in = req->next_in;
stream->avail_in = req->avail_in;
stream->next_out = req->next_out;
stream->avail_out = req->avail_out;
ret = zlib_deflate(stream, Z_FINISH);
if (ret != Z_STREAM_END) {
pr_debug("zlib_deflate failed %d\n", ret);
return -EINVAL;
}
ret = req->avail_out - stream->avail_out;
pr_debug("avail_in %lu, avail_out %lu (consumed %lu, produced %u)\n",
stream->avail_in, stream->avail_out,
req->avail_in - stream->avail_in, ret);
req->next_in = stream->next_in;
req->avail_in = stream->avail_in;
req->next_out = stream->next_out;
req->avail_out = stream->avail_out;
return ret;
}
static int zlib_decompress_setup(struct crypto_pcomp *tfm, const void *params,
unsigned int len)
{
struct zlib_ctx *ctx = crypto_tfm_ctx(crypto_pcomp_tfm(tfm));
struct z_stream_s *stream = &ctx->decomp_stream;
struct nlattr *tb[ZLIB_DECOMP_MAX + 1];
int ret = 0;
ret = nla_parse(tb, ZLIB_DECOMP_MAX, params, len, NULL);
if (ret)
return ret;
zlib_decomp_exit(ctx);
ctx->decomp_windowBits = tb[ZLIB_DECOMP_WINDOWBITS]
? nla_get_u32(tb[ZLIB_DECOMP_WINDOWBITS])
: DEF_WBITS;
stream->workspace = vzalloc(zlib_inflate_workspacesize());
if (!stream->workspace)
return -ENOMEM;
ret = zlib_inflateInit2(stream, ctx->decomp_windowBits);
if (ret != Z_OK) {
vfree(stream->workspace);
stream->workspace = NULL;
return -EINVAL;
}
return 0;
}
static int zlib_decompress_init(struct crypto_pcomp *tfm)
{
int ret;
struct zlib_ctx *dctx = crypto_tfm_ctx(crypto_pcomp_tfm(tfm));
struct z_stream_s *stream = &dctx->decomp_stream;
ret = zlib_inflateReset(stream);
if (ret != Z_OK)
return -EINVAL;
return 0;
}
static int zlib_decompress_update(struct crypto_pcomp *tfm,
struct comp_request *req)
{
int ret;
struct zlib_ctx *dctx = crypto_tfm_ctx(crypto_pcomp_tfm(tfm));
struct z_stream_s *stream = &dctx->decomp_stream;
pr_debug("avail_in %u, avail_out %u\n", req->avail_in, req->avail_out);
stream->next_in = req->next_in;
stream->avail_in = req->avail_in;
stream->next_out = req->next_out;
stream->avail_out = req->avail_out;
ret = zlib_inflate(stream, Z_SYNC_FLUSH);
switch (ret) {
case Z_OK:
case Z_STREAM_END:
break;
case Z_BUF_ERROR:
pr_debug("zlib_inflate could not make progress\n");
return -EAGAIN;
default:
pr_debug("zlib_inflate failed %d\n", ret);
return -EINVAL;
}
ret = req->avail_out - stream->avail_out;
pr_debug("avail_in %lu, avail_out %lu (consumed %lu, produced %u)\n",
stream->avail_in, stream->avail_out,
req->avail_in - stream->avail_in, ret);
req->next_in = stream->next_in;
req->avail_in = stream->avail_in;
req->next_out = stream->next_out;
req->avail_out = stream->avail_out;
return ret;
}
static int zlib_decompress_final(struct crypto_pcomp *tfm,
struct comp_request *req)
{
int ret;
struct zlib_ctx *dctx = crypto_tfm_ctx(crypto_pcomp_tfm(tfm));
struct z_stream_s *stream = &dctx->decomp_stream;
pr_debug("avail_in %u, avail_out %u\n", req->avail_in, req->avail_out);
stream->next_in = req->next_in;
stream->avail_in = req->avail_in;
stream->next_out = req->next_out;
stream->avail_out = req->avail_out;
if (dctx->decomp_windowBits < 0) {
ret = zlib_inflate(stream, Z_SYNC_FLUSH);
/*
* Work around a bug in zlib, which sometimes wants to taste an
* extra byte when being used in the (undocumented) raw deflate
* mode. (From USAGI).
*/
if (ret == Z_OK && !stream->avail_in && stream->avail_out) {
const void *saved_next_in = stream->next_in;
u8 zerostuff = 0;
stream->next_in = &zerostuff;
stream->avail_in = 1;
ret = zlib_inflate(stream, Z_FINISH);
stream->next_in = saved_next_in;
stream->avail_in = 0;
}
} else
ret = zlib_inflate(stream, Z_FINISH);
if (ret != Z_STREAM_END) {
pr_debug("zlib_inflate failed %d\n", ret);
return -EINVAL;
}
ret = req->avail_out - stream->avail_out;
pr_debug("avail_in %lu, avail_out %lu (consumed %lu, produced %u)\n",
stream->avail_in, stream->avail_out,
req->avail_in - stream->avail_in, ret);
req->next_in = stream->next_in;
req->avail_in = stream->avail_in;
req->next_out = stream->next_out;
req->avail_out = stream->avail_out;
return ret;
}
static struct pcomp_alg zlib_alg = {
.compress_setup = zlib_compress_setup,
.compress_init = zlib_compress_init,
.compress_update = zlib_compress_update,
.compress_final = zlib_compress_final,
.decompress_setup = zlib_decompress_setup,
.decompress_init = zlib_decompress_init,
.decompress_update = zlib_decompress_update,
.decompress_final = zlib_decompress_final,
.base = {
.cra_name = "zlib",
.cra_flags = CRYPTO_ALG_TYPE_PCOMPRESS,
.cra_ctxsize = sizeof(struct zlib_ctx),
.cra_module = THIS_MODULE,
.cra_init = zlib_init,
.cra_exit = zlib_exit,
}
};
static int __init zlib_mod_init(void)
{
return crypto_register_pcomp(&zlib_alg);
}
static void __exit zlib_mod_fini(void)
{
crypto_unregister_pcomp(&zlib_alg);
}
module_init(zlib_mod_init);
module_exit(zlib_mod_fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Zlib Compression Algorithm");
MODULE_AUTHOR("Sony Corporation");
MODULE_ALIAS_CRYPTO("zlib");
......@@ -21,9 +21,9 @@
#include <linux/module.h>
#include <crypto/skcipher.h>
#include <linux/init.h>
#include <linux/string.h>
#include <linux/crypto.h>
#include <linux/blkdev.h>
#include <linux/scatterlist.h>
#include <asm/uaccess.h>
......@@ -46,7 +46,7 @@ cryptoloop_init(struct loop_device *lo, const struct loop_info64 *info)
char *cipher;
char *mode;
char *cmsp = cms; /* c-m string pointer */
struct crypto_blkcipher *tfm;
struct crypto_skcipher *tfm;
/* encryption breaks for non sector aligned offsets */
......@@ -82,12 +82,12 @@ cryptoloop_init(struct loop_device *lo, const struct loop_info64 *info)
*cmsp++ = ')';
*cmsp = 0;
tfm = crypto_alloc_blkcipher(cms, 0, CRYPTO_ALG_ASYNC);
tfm = crypto_alloc_skcipher(cms, 0, CRYPTO_ALG_ASYNC);
if (IS_ERR(tfm))
return PTR_ERR(tfm);
err = crypto_blkcipher_setkey(tfm, info->lo_encrypt_key,
info->lo_encrypt_key_size);
err = crypto_skcipher_setkey(tfm, info->lo_encrypt_key,
info->lo_encrypt_key_size);
if (err != 0)
goto out_free_tfm;
......@@ -96,17 +96,14 @@ cryptoloop_init(struct loop_device *lo, const struct loop_info64 *info)
return 0;
out_free_tfm:
crypto_free_blkcipher(tfm);
crypto_free_skcipher(tfm);
out:
return err;
}
typedef int (*encdec_cbc_t)(struct blkcipher_desc *desc,
struct scatterlist *sg_out,
struct scatterlist *sg_in,
unsigned int nsg);
typedef int (*encdec_cbc_t)(struct skcipher_request *req);
static int
cryptoloop_transfer(struct loop_device *lo, int cmd,
......@@ -114,11 +111,8 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
struct page *loop_page, unsigned loop_off,
int size, sector_t IV)
{
struct crypto_blkcipher *tfm = lo->key_data;
struct blkcipher_desc desc = {
.tfm = tfm,
.flags = CRYPTO_TFM_REQ_MAY_SLEEP,
};
struct crypto_skcipher *tfm = lo->key_data;
SKCIPHER_REQUEST_ON_STACK(req, tfm);
struct scatterlist sg_out;
struct scatterlist sg_in;
......@@ -127,6 +121,10 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
unsigned in_offs, out_offs;
int err;
skcipher_request_set_tfm(req, tfm);
skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
NULL, NULL);
sg_init_table(&sg_out, 1);
sg_init_table(&sg_in, 1);
......@@ -135,13 +133,13 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
in_offs = raw_off;
out_page = loop_page;
out_offs = loop_off;
encdecfunc = crypto_blkcipher_crt(tfm)->decrypt;
encdecfunc = crypto_skcipher_decrypt;
} else {
in_page = loop_page;
in_offs = loop_off;
out_page = raw_page;
out_offs = raw_off;
encdecfunc = crypto_blkcipher_crt(tfm)->encrypt;
encdecfunc = crypto_skcipher_encrypt;
}
while (size > 0) {
......@@ -152,10 +150,10 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
sg_set_page(&sg_in, in_page, sz, in_offs);
sg_set_page(&sg_out, out_page, sz, out_offs);
desc.info = iv;
err = encdecfunc(&desc, &sg_out, &sg_in, sz);
skcipher_request_set_crypt(req, &sg_in, &sg_out, sz, iv);
err = encdecfunc(req);
if (err)
return err;
goto out;
IV++;
size -= sz;
......@@ -163,7 +161,11 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
out_offs += sz;
}
return 0;
err = 0;
out:
skcipher_request_zero(req);
return err;
}
static int
......@@ -175,9 +177,9 @@ cryptoloop_ioctl(struct loop_device *lo, int cmd, unsigned long arg)
static int
cryptoloop_release(struct loop_device *lo)
{
struct crypto_blkcipher *tfm = lo->key_data;
struct crypto_skcipher *tfm = lo->key_data;
if (tfm != NULL) {
crypto_free_blkcipher(tfm);
crypto_free_skcipher(tfm);
lo->key_data = NULL;
return 0;
}
......
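For reference, the cryptoloop conversion above boils down to the on-stack skcipher request pattern. A minimal sketch (not from the patch); it is only safe because the tfm is allocated with CRYPTO_ALG_ASYNC in the mask and is therefore synchronous:

/* sketch: one contiguous chunk, caller supplies pages and a block-sized IV */
#include <crypto/skcipher.h>
#include <linux/scatterlist.h>

static int encrypt_one_chunk(struct crypto_skcipher *tfm,
			     struct page *in, struct page *out,
			     unsigned int len, u8 *iv)
{
	SKCIPHER_REQUEST_ON_STACK(req, tfm);
	struct scatterlist sg_in, sg_out;
	int err;

	sg_init_table(&sg_in, 1);
	sg_init_table(&sg_out, 1);
	sg_set_page(&sg_in, in, len, 0);
	sg_set_page(&sg_out, out, len, 0);

	skcipher_request_set_tfm(req, tfm);
	/* synchronous use: no completion callback is needed */
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
	skcipher_request_set_crypt(req, &sg_in, &sg_out, len, iv);

	err = crypto_skcipher_encrypt(req);
	skcipher_request_zero(req);
	return err;
}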
......@@ -26,13 +26,13 @@
#ifndef _DRBD_INT_H
#define _DRBD_INT_H
#include <crypto/hash.h>
#include <linux/compiler.h>
#include <linux/types.h>
#include <linux/list.h>
#include <linux/sched.h>
#include <linux/bitops.h>
#include <linux/slab.h>
#include <linux/crypto.h>
#include <linux/ratelimit.h>
#include <linux/tcp.h>
#include <linux/mutex.h>
......@@ -724,11 +724,11 @@ struct drbd_connection {
struct list_head transfer_log; /* all requests not yet fully processed */
struct crypto_hash *cram_hmac_tfm;
struct crypto_hash *integrity_tfm; /* checksums we compute, updates protected by connection->data->mutex */
struct crypto_hash *peer_integrity_tfm; /* checksums we verify, only accessed from receiver thread */
struct crypto_hash *csums_tfm;
struct crypto_hash *verify_tfm;
struct crypto_shash *cram_hmac_tfm;
struct crypto_ahash *integrity_tfm; /* checksums we compute, updates protected by connection->data->mutex */
struct crypto_ahash *peer_integrity_tfm; /* checksums we verify, only accessed from receiver thread */
struct crypto_ahash *csums_tfm;
struct crypto_ahash *verify_tfm;
void *int_dig_in;
void *int_dig_vv;
......@@ -1524,8 +1524,8 @@ static inline void ov_out_of_sync_print(struct drbd_device *device)
}
extern void drbd_csum_bio(struct crypto_hash *, struct bio *, void *);
extern void drbd_csum_ee(struct crypto_hash *, struct drbd_peer_request *, void *);
extern void drbd_csum_bio(struct crypto_ahash *, struct bio *, void *);
extern void drbd_csum_ee(struct crypto_ahash *, struct drbd_peer_request *, void *);
/* worker callbacks */
extern int w_e_end_data_req(struct drbd_work *, int);
extern int w_e_end_rsdata_req(struct drbd_work *, int);
......
......@@ -1340,7 +1340,7 @@ void drbd_send_ack_dp(struct drbd_peer_device *peer_device, enum drbd_packet cmd
struct p_data *dp, int data_size)
{
if (peer_device->connection->peer_integrity_tfm)
data_size -= crypto_hash_digestsize(peer_device->connection->peer_integrity_tfm);
data_size -= crypto_ahash_digestsize(peer_device->connection->peer_integrity_tfm);
_drbd_send_ack(peer_device, cmd, dp->sector, cpu_to_be32(data_size),
dp->block_id);
}
......@@ -1629,7 +1629,7 @@ int drbd_send_dblock(struct drbd_peer_device *peer_device, struct drbd_request *
sock = &peer_device->connection->data;
p = drbd_prepare_command(peer_device, sock);
digest_size = peer_device->connection->integrity_tfm ?
crypto_hash_digestsize(peer_device->connection->integrity_tfm) : 0;
crypto_ahash_digestsize(peer_device->connection->integrity_tfm) : 0;
if (!p)
return -EIO;
......@@ -1718,7 +1718,7 @@ int drbd_send_block(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
p = drbd_prepare_command(peer_device, sock);
digest_size = peer_device->connection->integrity_tfm ?
crypto_hash_digestsize(peer_device->connection->integrity_tfm) : 0;
crypto_ahash_digestsize(peer_device->connection->integrity_tfm) : 0;
if (!p)
return -EIO;
......@@ -2498,11 +2498,11 @@ void conn_free_crypto(struct drbd_connection *connection)
{
drbd_free_sock(connection);
crypto_free_hash(connection->csums_tfm);
crypto_free_hash(connection->verify_tfm);
crypto_free_hash(connection->cram_hmac_tfm);
crypto_free_hash(connection->integrity_tfm);
crypto_free_hash(connection->peer_integrity_tfm);
crypto_free_ahash(connection->csums_tfm);
crypto_free_ahash(connection->verify_tfm);
crypto_free_shash(connection->cram_hmac_tfm);
crypto_free_ahash(connection->integrity_tfm);
crypto_free_ahash(connection->peer_integrity_tfm);
kfree(connection->int_dig_in);
kfree(connection->int_dig_vv);
......
......@@ -2160,19 +2160,34 @@ check_net_options(struct drbd_connection *connection, struct net_conf *new_net_c
}
struct crypto {
struct crypto_hash *verify_tfm;
struct crypto_hash *csums_tfm;
struct crypto_hash *cram_hmac_tfm;
struct crypto_hash *integrity_tfm;
struct crypto_ahash *verify_tfm;
struct crypto_ahash *csums_tfm;
struct crypto_shash *cram_hmac_tfm;
struct crypto_ahash *integrity_tfm;
};
static int
alloc_hash(struct crypto_hash **tfm, char *tfm_name, int err_alg)
alloc_shash(struct crypto_shash **tfm, char *tfm_name, int err_alg)
{
if (!tfm_name[0])
return NO_ERROR;
*tfm = crypto_alloc_hash(tfm_name, 0, CRYPTO_ALG_ASYNC);
*tfm = crypto_alloc_shash(tfm_name, 0, 0);
if (IS_ERR(*tfm)) {
*tfm = NULL;
return err_alg;
}
return NO_ERROR;
}
static int
alloc_ahash(struct crypto_ahash **tfm, char *tfm_name, int err_alg)
{
if (!tfm_name[0])
return NO_ERROR;
*tfm = crypto_alloc_ahash(tfm_name, 0, CRYPTO_ALG_ASYNC);
if (IS_ERR(*tfm)) {
*tfm = NULL;
return err_alg;
......@@ -2187,24 +2202,24 @@ alloc_crypto(struct crypto *crypto, struct net_conf *new_net_conf)
char hmac_name[CRYPTO_MAX_ALG_NAME];
enum drbd_ret_code rv;
rv = alloc_hash(&crypto->csums_tfm, new_net_conf->csums_alg,
ERR_CSUMS_ALG);
rv = alloc_ahash(&crypto->csums_tfm, new_net_conf->csums_alg,
ERR_CSUMS_ALG);
if (rv != NO_ERROR)
return rv;
rv = alloc_hash(&crypto->verify_tfm, new_net_conf->verify_alg,
ERR_VERIFY_ALG);
rv = alloc_ahash(&crypto->verify_tfm, new_net_conf->verify_alg,
ERR_VERIFY_ALG);
if (rv != NO_ERROR)
return rv;
rv = alloc_hash(&crypto->integrity_tfm, new_net_conf->integrity_alg,
ERR_INTEGRITY_ALG);
rv = alloc_ahash(&crypto->integrity_tfm, new_net_conf->integrity_alg,
ERR_INTEGRITY_ALG);
if (rv != NO_ERROR)
return rv;
if (new_net_conf->cram_hmac_alg[0] != 0) {
snprintf(hmac_name, CRYPTO_MAX_ALG_NAME, "hmac(%s)",
new_net_conf->cram_hmac_alg);
rv = alloc_hash(&crypto->cram_hmac_tfm, hmac_name,
ERR_AUTH_ALG);
rv = alloc_shash(&crypto->cram_hmac_tfm, hmac_name,
ERR_AUTH_ALG);
}
return rv;
......@@ -2212,10 +2227,10 @@ alloc_crypto(struct crypto *crypto, struct net_conf *new_net_conf)
static void free_crypto(struct crypto *crypto)
{
crypto_free_hash(crypto->cram_hmac_tfm);
crypto_free_hash(crypto->integrity_tfm);
crypto_free_hash(crypto->csums_tfm);
crypto_free_hash(crypto->verify_tfm);
crypto_free_shash(crypto->cram_hmac_tfm);
crypto_free_ahash(crypto->integrity_tfm);
crypto_free_ahash(crypto->csums_tfm);
crypto_free_ahash(crypto->verify_tfm);
}
int drbd_adm_net_opts(struct sk_buff *skb, struct genl_info *info)
......@@ -2292,23 +2307,23 @@ int drbd_adm_net_opts(struct sk_buff *skb, struct genl_info *info)
rcu_assign_pointer(connection->net_conf, new_net_conf);
if (!rsr) {
crypto_free_hash(connection->csums_tfm);
crypto_free_ahash(connection->csums_tfm);
connection->csums_tfm = crypto.csums_tfm;
crypto.csums_tfm = NULL;
}
if (!ovr) {
crypto_free_hash(connection->verify_tfm);
crypto_free_ahash(connection->verify_tfm);
connection->verify_tfm = crypto.verify_tfm;
crypto.verify_tfm = NULL;
}
crypto_free_hash(connection->integrity_tfm);
crypto_free_ahash(connection->integrity_tfm);
connection->integrity_tfm = crypto.integrity_tfm;
if (connection->cstate >= C_WF_REPORT_PARAMS && connection->agreed_pro_version >= 100)
/* Do this without trying to take connection->data.mutex again. */
__drbd_send_protocol(connection, P_PROTOCOL_UPDATE);
crypto_free_hash(connection->cram_hmac_tfm);
crypto_free_shash(connection->cram_hmac_tfm);
connection->cram_hmac_tfm = crypto.cram_hmac_tfm;
mutex_unlock(&connection->resource->conf_update);
......
......@@ -1627,7 +1627,7 @@ read_in_block(struct drbd_peer_device *peer_device, u64 id, sector_t sector,
digest_size = 0;
if (!trim && peer_device->connection->peer_integrity_tfm) {
digest_size = crypto_hash_digestsize(peer_device->connection->peer_integrity_tfm);
digest_size = crypto_ahash_digestsize(peer_device->connection->peer_integrity_tfm);
/*
* FIXME: Receive the incoming digest into the receive buffer
* here, together with its struct p_data?
......@@ -1741,7 +1741,7 @@ static int recv_dless_read(struct drbd_peer_device *peer_device, struct drbd_req
digest_size = 0;
if (peer_device->connection->peer_integrity_tfm) {
digest_size = crypto_hash_digestsize(peer_device->connection->peer_integrity_tfm);
digest_size = crypto_ahash_digestsize(peer_device->connection->peer_integrity_tfm);
err = drbd_recv_all_warn(peer_device->connection, dig_in, digest_size);
if (err)
return err;
......@@ -3321,7 +3321,7 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
int p_proto, p_discard_my_data, p_two_primaries, cf;
struct net_conf *nc, *old_net_conf, *new_net_conf = NULL;
char integrity_alg[SHARED_SECRET_MAX] = "";
struct crypto_hash *peer_integrity_tfm = NULL;
struct crypto_ahash *peer_integrity_tfm = NULL;
void *int_dig_in = NULL, *int_dig_vv = NULL;
p_proto = be32_to_cpu(p->protocol);
......@@ -3402,14 +3402,14 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
* change.
*/
peer_integrity_tfm = crypto_alloc_hash(integrity_alg, 0, CRYPTO_ALG_ASYNC);
peer_integrity_tfm = crypto_alloc_ahash(integrity_alg, 0, CRYPTO_ALG_ASYNC);
if (IS_ERR(peer_integrity_tfm)) {
peer_integrity_tfm = NULL;
drbd_err(connection, "peer data-integrity-alg %s not supported\n",
integrity_alg);
goto disconnect;
}
hash_size = crypto_hash_digestsize(peer_integrity_tfm);
hash_size = crypto_ahash_digestsize(peer_integrity_tfm);
int_dig_in = kmalloc(hash_size, GFP_KERNEL);
int_dig_vv = kmalloc(hash_size, GFP_KERNEL);
if (!(int_dig_in && int_dig_vv)) {
......@@ -3439,7 +3439,7 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
mutex_unlock(&connection->resource->conf_update);
mutex_unlock(&connection->data.mutex);
crypto_free_hash(connection->peer_integrity_tfm);
crypto_free_ahash(connection->peer_integrity_tfm);
kfree(connection->int_dig_in);
kfree(connection->int_dig_vv);
connection->peer_integrity_tfm = peer_integrity_tfm;
......@@ -3457,7 +3457,7 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
disconnect_rcu_unlock:
rcu_read_unlock();
disconnect:
crypto_free_hash(peer_integrity_tfm);
crypto_free_ahash(peer_integrity_tfm);
kfree(int_dig_in);
kfree(int_dig_vv);
conn_request_state(connection, NS(conn, C_DISCONNECTING), CS_HARD);
......@@ -3469,15 +3469,15 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
* return: NULL (alg name was "")
* ERR_PTR(error) if something goes wrong
* or the crypto hash ptr, if it worked out ok. */
static struct crypto_hash *drbd_crypto_alloc_digest_safe(const struct drbd_device *device,
static struct crypto_ahash *drbd_crypto_alloc_digest_safe(const struct drbd_device *device,
const char *alg, const char *name)
{
struct crypto_hash *tfm;
struct crypto_ahash *tfm;
if (!alg[0])
return NULL;
tfm = crypto_alloc_hash(alg, 0, CRYPTO_ALG_ASYNC);
tfm = crypto_alloc_ahash(alg, 0, CRYPTO_ALG_ASYNC);
if (IS_ERR(tfm)) {
drbd_err(device, "Can not allocate \"%s\" as %s (reason: %ld)\n",
alg, name, PTR_ERR(tfm));
......@@ -3530,8 +3530,8 @@ static int receive_SyncParam(struct drbd_connection *connection, struct packet_i
struct drbd_device *device;
struct p_rs_param_95 *p;
unsigned int header_size, data_size, exp_max_sz;
struct crypto_hash *verify_tfm = NULL;
struct crypto_hash *csums_tfm = NULL;
struct crypto_ahash *verify_tfm = NULL;
struct crypto_ahash *csums_tfm = NULL;
struct net_conf *old_net_conf, *new_net_conf = NULL;
struct disk_conf *old_disk_conf = NULL, *new_disk_conf = NULL;
const int apv = connection->agreed_pro_version;
......@@ -3678,14 +3678,14 @@ static int receive_SyncParam(struct drbd_connection *connection, struct packet_i
if (verify_tfm) {
strcpy(new_net_conf->verify_alg, p->verify_alg);
new_net_conf->verify_alg_len = strlen(p->verify_alg) + 1;
crypto_free_hash(peer_device->connection->verify_tfm);
crypto_free_ahash(peer_device->connection->verify_tfm);
peer_device->connection->verify_tfm = verify_tfm;
drbd_info(device, "using verify-alg: \"%s\"\n", p->verify_alg);
}
if (csums_tfm) {
strcpy(new_net_conf->csums_alg, p->csums_alg);
new_net_conf->csums_alg_len = strlen(p->csums_alg) + 1;
crypto_free_hash(peer_device->connection->csums_tfm);
crypto_free_ahash(peer_device->connection->csums_tfm);
peer_device->connection->csums_tfm = csums_tfm;
drbd_info(device, "using csums-alg: \"%s\"\n", p->csums_alg);
}
......@@ -3729,9 +3729,9 @@ static int receive_SyncParam(struct drbd_connection *connection, struct packet_i
mutex_unlock(&connection->resource->conf_update);
/* just for completeness: actually not needed,
* as this is not reached if csums_tfm was ok. */
crypto_free_hash(csums_tfm);
crypto_free_ahash(csums_tfm);
/* but free the verify_tfm again, if csums_tfm did not work out */
crypto_free_hash(verify_tfm);
crypto_free_ahash(verify_tfm);
conn_request_state(peer_device->connection, NS(conn, C_DISCONNECTING), CS_HARD);
return -EIO;
}
......@@ -4925,14 +4925,13 @@ static int drbd_do_auth(struct drbd_connection *connection)
{
struct drbd_socket *sock;
char my_challenge[CHALLENGE_LEN]; /* 64 Bytes... */
struct scatterlist sg;
char *response = NULL;
char *right_response = NULL;
char *peers_ch = NULL;
unsigned int key_len;
char secret[SHARED_SECRET_MAX]; /* 64 byte */
unsigned int resp_size;
struct hash_desc desc;
SHASH_DESC_ON_STACK(desc, connection->cram_hmac_tfm);
struct packet_info pi;
struct net_conf *nc;
int err, rv;
......@@ -4945,12 +4944,12 @@ static int drbd_do_auth(struct drbd_connection *connection)
memcpy(secret, nc->shared_secret, key_len);
rcu_read_unlock();
desc.tfm = connection->cram_hmac_tfm;
desc.flags = 0;
desc->tfm = connection->cram_hmac_tfm;
desc->flags = 0;
rv = crypto_hash_setkey(connection->cram_hmac_tfm, (u8 *)secret, key_len);
rv = crypto_shash_setkey(connection->cram_hmac_tfm, (u8 *)secret, key_len);
if (rv) {
drbd_err(connection, "crypto_hash_setkey() failed with %d\n", rv);
drbd_err(connection, "crypto_shash_setkey() failed with %d\n", rv);
rv = -1;
goto fail;
}
......@@ -5011,7 +5010,7 @@ static int drbd_do_auth(struct drbd_connection *connection)
goto fail;
}
resp_size = crypto_hash_digestsize(connection->cram_hmac_tfm);
resp_size = crypto_shash_digestsize(connection->cram_hmac_tfm);
response = kmalloc(resp_size, GFP_NOIO);
if (response == NULL) {
drbd_err(connection, "kmalloc of response failed\n");
......@@ -5019,10 +5018,7 @@ static int drbd_do_auth(struct drbd_connection *connection)
goto fail;
}
sg_init_table(&sg, 1);
sg_set_buf(&sg, peers_ch, pi.size);
rv = crypto_hash_digest(&desc, &sg, sg.length, response);
rv = crypto_shash_digest(desc, peers_ch, pi.size, response);
if (rv) {
drbd_err(connection, "crypto_hash_digest() failed with %d\n", rv);
rv = -1;
......@@ -5070,9 +5066,8 @@ static int drbd_do_auth(struct drbd_connection *connection)
goto fail;
}
sg_set_buf(&sg, my_challenge, CHALLENGE_LEN);
rv = crypto_hash_digest(&desc, &sg, sg.length, right_response);
rv = crypto_shash_digest(desc, my_challenge, CHALLENGE_LEN,
right_response);
if (rv) {
drbd_err(connection, "crypto_hash_digest() failed with %d\n", rv);
rv = -1;
......@@ -5091,6 +5086,7 @@ static int drbd_do_auth(struct drbd_connection *connection)
kfree(peers_ch);
kfree(response);
kfree(right_response);
shash_desc_zero(desc);
return rv;
}
......
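The CRAM-HMAC exchange now uses the synchronous shash API with an on-stack descriptor. A minimal sketch of that pattern as a stand-alone one-shot HMAC (assuming "hmac(sha1)" is available; names are illustrative):

/* sketch only, not drbd code */
#include <crypto/hash.h>
#include <linux/err.h>

static int hmac_one_shot(const u8 *key, unsigned int keylen,
			 const u8 *msg, unsigned int msglen, u8 *out)
{
	struct crypto_shash *tfm;
	int err;

	tfm = crypto_alloc_shash("hmac(sha1)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_shash_setkey(tfm, key, keylen);
	if (!err) {
		SHASH_DESC_ON_STACK(desc, tfm);

		desc->tfm = tfm;
		desc->flags = 0;
		/* init + update + final in a single call */
		err = crypto_shash_digest(desc, msg, msglen, out);
		shash_desc_zero(desc);
	}

	crypto_free_shash(tfm);
	return err;
}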
......@@ -274,51 +274,56 @@ void drbd_request_endio(struct bio *bio)
complete_master_bio(device, &m);
}
void drbd_csum_ee(struct crypto_hash *tfm, struct drbd_peer_request *peer_req, void *digest)
void drbd_csum_ee(struct crypto_ahash *tfm, struct drbd_peer_request *peer_req, void *digest)
{
struct hash_desc desc;
AHASH_REQUEST_ON_STACK(req, tfm);
struct scatterlist sg;
struct page *page = peer_req->pages;
struct page *tmp;
unsigned len;
desc.tfm = tfm;
desc.flags = 0;
ahash_request_set_tfm(req, tfm);
ahash_request_set_callback(req, 0, NULL, NULL);
sg_init_table(&sg, 1);
crypto_hash_init(&desc);
crypto_ahash_init(req);
while ((tmp = page_chain_next(page))) {
/* all but the last page will be fully used */
sg_set_page(&sg, page, PAGE_SIZE, 0);
crypto_hash_update(&desc, &sg, sg.length);
ahash_request_set_crypt(req, &sg, NULL, sg.length);
crypto_ahash_update(req);
page = tmp;
}
/* and now the last, possibly only partially used page */
len = peer_req->i.size & (PAGE_SIZE - 1);
sg_set_page(&sg, page, len ?: PAGE_SIZE, 0);
crypto_hash_update(&desc, &sg, sg.length);
crypto_hash_final(&desc, digest);
ahash_request_set_crypt(req, &sg, digest, sg.length);
crypto_ahash_finup(req);
ahash_request_zero(req);
}
void drbd_csum_bio(struct crypto_hash *tfm, struct bio *bio, void *digest)
void drbd_csum_bio(struct crypto_ahash *tfm, struct bio *bio, void *digest)
{
struct hash_desc desc;
AHASH_REQUEST_ON_STACK(req, tfm);
struct scatterlist sg;
struct bio_vec bvec;
struct bvec_iter iter;
desc.tfm = tfm;
desc.flags = 0;
ahash_request_set_tfm(req, tfm);
ahash_request_set_callback(req, 0, NULL, NULL);
sg_init_table(&sg, 1);
crypto_hash_init(&desc);
crypto_ahash_init(req);
bio_for_each_segment(bvec, bio, iter) {
sg_set_page(&sg, bvec.bv_page, bvec.bv_len, bvec.bv_offset);
crypto_hash_update(&desc, &sg, sg.length);
ahash_request_set_crypt(req, &sg, NULL, sg.length);
crypto_ahash_update(req);
}
crypto_hash_final(&desc, digest);
ahash_request_set_crypt(req, NULL, digest, 0);
crypto_ahash_final(req);
ahash_request_zero(req);
}
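drbd_csum_ee()/drbd_csum_bio() above use an on-stack ahash request instead of the old hash_desc. A hedged sketch of the same pattern for a single linear buffer; as in drbd, it presumes the tfm was allocated with CRYPTO_ALG_ASYNC in the mask and so completes synchronously:

/* sketch only, not drbd code */
#include <crypto/hash.h>
#include <linux/scatterlist.h>

static int digest_linear_buffer(struct crypto_ahash *tfm, const void *buf,
				unsigned int len, void *digest)
{
	AHASH_REQUEST_ON_STACK(req, tfm);
	struct scatterlist sg;
	int err;

	ahash_request_set_tfm(req, tfm);
	ahash_request_set_callback(req, 0, NULL, NULL);

	sg_init_one(&sg, buf, len);
	ahash_request_set_crypt(req, &sg, digest, len);
	err = crypto_ahash_digest(req);
	ahash_request_zero(req);
	return err;
}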
/* MAYBE merge common code with w_e_end_ov_req */
......@@ -337,7 +342,7 @@ static int w_e_send_csum(struct drbd_work *w, int cancel)
if (unlikely((peer_req->flags & EE_WAS_ERROR) != 0))
goto out;
digest_size = crypto_hash_digestsize(peer_device->connection->csums_tfm);
digest_size = crypto_ahash_digestsize(peer_device->connection->csums_tfm);
digest = kmalloc(digest_size, GFP_NOIO);
if (digest) {
sector_t sector = peer_req->i.sector;
......@@ -1113,7 +1118,7 @@ int w_e_end_csum_rs_req(struct drbd_work *w, int cancel)
* a real fix would be much more involved,
* introducing more locking mechanisms */
if (peer_device->connection->csums_tfm) {
digest_size = crypto_hash_digestsize(peer_device->connection->csums_tfm);
digest_size = crypto_ahash_digestsize(peer_device->connection->csums_tfm);
D_ASSERT(device, digest_size == di->digest_size);
digest = kmalloc(digest_size, GFP_NOIO);
}
......@@ -1163,7 +1168,7 @@ int w_e_end_ov_req(struct drbd_work *w, int cancel)
if (unlikely(cancel))
goto out;
digest_size = crypto_hash_digestsize(peer_device->connection->verify_tfm);
digest_size = crypto_ahash_digestsize(peer_device->connection->verify_tfm);
digest = kmalloc(digest_size, GFP_NOIO);
if (!digest) {
err = 1; /* terminate the connection in case the allocation failed */
......@@ -1235,7 +1240,7 @@ int w_e_end_ov_reply(struct drbd_work *w, int cancel)
di = peer_req->digest;
if (likely((peer_req->flags & EE_WAS_ERROR) == 0)) {
digest_size = crypto_hash_digestsize(peer_device->connection->verify_tfm);
digest_size = crypto_ahash_digestsize(peer_device->connection->verify_tfm);
digest = kmalloc(digest_size, GFP_NOIO);
if (digest) {
drbd_csum_ee(peer_device->connection->verify_tfm, peer_req, digest);
......
......@@ -77,7 +77,7 @@ config HW_RANDOM_ATMEL
config HW_RANDOM_BCM63XX
tristate "Broadcom BCM63xx Random Number Generator support"
depends on BCM63XX
depends on BCM63XX || BMIPS_GENERIC
default HW_RANDOM
---help---
This driver provides kernel-side support for the Random Number
......@@ -382,6 +382,19 @@ config HW_RANDOM_STM32
If unsure, say N.
config HW_RANDOM_PIC32
tristate "Microchip PIC32 Random Number Generator support"
depends on HW_RANDOM && MACH_PIC32
default y
---help---
This driver provides kernel-side support for the Random Number
Generator hardware found on a PIC32.
To compile this driver as a module, choose M here: the
module will be called pic32-rng.
If unsure, say Y.
endif # HW_RANDOM
config UML_RANDOM
......
......@@ -33,3 +33,4 @@ obj-$(CONFIG_HW_RANDOM_MSM) += msm-rng.o
obj-$(CONFIG_HW_RANDOM_ST) += st-rng.o
obj-$(CONFIG_HW_RANDOM_XGENE) += xgene-rng.o
obj-$(CONFIG_HW_RANDOM_STM32) += stm32-rng.o
obj-$(CONFIG_HW_RANDOM_PIC32) += pic32-rng.o
......@@ -79,10 +79,8 @@ static int bcm63xx_rng_data_read(struct hwrng *rng, u32 *data)
static int bcm63xx_rng_probe(struct platform_device *pdev)
{
struct resource *r;
struct clk *clk;
int ret;
struct bcm63xx_rng_priv *priv;
struct hwrng *rng;
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!r) {
......@@ -132,10 +130,19 @@ static int bcm63xx_rng_probe(struct platform_device *pdev)
return 0;
}
#ifdef CONFIG_OF
static const struct of_device_id bcm63xx_rng_of_match[] = {
{ .compatible = "brcm,bcm6368-rng", },
{},
};
MODULE_DEVICE_TABLE(of, bcm63xx_rng_of_match);
#endif
static struct platform_driver bcm63xx_rng_driver = {
.probe = bcm63xx_rng_probe,
.driver = {
.name = "bcm63xx-rng",
.of_match_table = of_match_ptr(bcm63xx_rng_of_match),
},
};
......
......@@ -144,8 +144,7 @@ static int exynos_rng_probe(struct platform_device *pdev)
return devm_hwrng_register(&pdev->dev, &exynos_rng->rng);
}
#ifdef CONFIG_PM
static int exynos_rng_runtime_suspend(struct device *dev)
static int __maybe_unused exynos_rng_runtime_suspend(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct exynos_rng *exynos_rng = platform_get_drvdata(pdev);
......@@ -155,7 +154,7 @@ static int exynos_rng_runtime_suspend(struct device *dev)
return 0;
}
static int exynos_rng_runtime_resume(struct device *dev)
static int __maybe_unused exynos_rng_runtime_resume(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct exynos_rng *exynos_rng = platform_get_drvdata(pdev);
......@@ -163,12 +162,12 @@ static int exynos_rng_runtime_resume(struct device *dev)
return clk_prepare_enable(exynos_rng->clk);
}
static int exynos_rng_suspend(struct device *dev)
static int __maybe_unused exynos_rng_suspend(struct device *dev)
{
return pm_runtime_force_suspend(dev);
}
static int exynos_rng_resume(struct device *dev)
static int __maybe_unused exynos_rng_resume(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct exynos_rng *exynos_rng = platform_get_drvdata(pdev);
......@@ -180,7 +179,6 @@ static int exynos_rng_resume(struct device *dev)
return exynos_rng_configure(exynos_rng);
}
#endif
static const struct dev_pm_ops exynos_rng_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(exynos_rng_suspend, exynos_rng_resume)
......
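The exynos change trades #ifdef CONFIG_PM guards for __maybe_unused, relying on SET_SYSTEM_SLEEP_PM_OPS() expanding to nothing when sleep support is disabled. A generic sketch of the idiom (driver names are placeholders):

/* sketch: "foo" is a placeholder driver */
#include <linux/device.h>
#include <linux/pm.h>

static int __maybe_unused foo_suspend(struct device *dev)
{
	/* quiesce the hardware; only referenced when CONFIG_PM_SLEEP=y */
	return 0;
}

static int __maybe_unused foo_resume(struct device *dev)
{
	return 0;
}

/* wired up via .driver.pm = &foo_pm_ops in the platform driver */
static const struct dev_pm_ops foo_pm_ops = {
	SET_SYSTEM_SLEEP_PM_OPS(foo_suspend, foo_resume)
};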
......@@ -743,6 +743,16 @@ static const struct of_device_id n2rng_match[] = {
.compatible = "SUNW,kt-rng",
.data = (void *) 1,
},
{
.name = "random-number-generator",
.compatible = "ORCL,m4-rng",
.data = (void *) 1,
},
{
.name = "random-number-generator",
.compatible = "ORCL,m7-rng",
.data = (void *) 1,
},
{},
};
MODULE_DEVICE_TABLE(of, n2rng_match);
......
/*
* PIC32 RNG driver
*
* Joshua Henderson <joshua.henderson@microchip.com>
* Copyright (C) 2016 Microchip Technology Inc. All rights reserved.
*
* This program is free software; you can distribute it and/or modify it
* under the terms of the GNU General Public License (Version 2) as
* published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*/
#include <linux/clk.h>
#include <linux/clkdev.h>
#include <linux/err.h>
#include <linux/hw_random.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#define RNGCON 0x04
#define TRNGEN BIT(8)
#define PRNGEN BIT(9)
#define PRNGCONT BIT(10)
#define TRNGMOD BIT(11)
#define SEEDLOAD BIT(12)
#define RNGPOLY1 0x08
#define RNGPOLY2 0x0C
#define RNGNUMGEN1 0x10
#define RNGNUMGEN2 0x14
#define RNGSEED1 0x18
#define RNGSEED2 0x1C
#define RNGRCNT 0x20
#define RCNT_MASK 0x7F
struct pic32_rng {
void __iomem *base;
struct hwrng rng;
struct clk *clk;
};
/*
* The TRNG can generate up to 24Mbps. This is a timeout that should be safe
* enough given the instructions in the loop and that the TRNG may not always
* be at maximum rate.
*/
#define RNG_TIMEOUT 500
static int pic32_rng_read(struct hwrng *rng, void *buf, size_t max,
bool wait)
{
struct pic32_rng *priv = container_of(rng, struct pic32_rng, rng);
u64 *data = buf;
u32 t;
unsigned int timeout = RNG_TIMEOUT;
if (max < 8)
return 0;
do {
t = readl(priv->base + RNGRCNT) & RCNT_MASK;
if (t == 64) {
/* TRNG value comes through the seed registers */
*data = ((u64)readl(priv->base + RNGSEED2) << 32) +
readl(priv->base + RNGSEED1);
return 8;
}
} while (wait && --timeout);
return -EIO;
}
static int pic32_rng_probe(struct platform_device *pdev)
{
struct pic32_rng *priv;
struct resource *res;
u32 v;
int ret;
priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
priv->base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(priv->base))
return PTR_ERR(priv->base);
priv->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(priv->clk))
return PTR_ERR(priv->clk);
ret = clk_prepare_enable(priv->clk);
if (ret)
return ret;
/* enable TRNG in enhanced mode */
v = TRNGEN | TRNGMOD;
writel(v, priv->base + RNGCON);
priv->rng.name = pdev->name;
priv->rng.read = pic32_rng_read;
ret = hwrng_register(&priv->rng);
if (ret)
goto err_register;
platform_set_drvdata(pdev, priv);
return 0;
err_register:
clk_disable_unprepare(priv->clk);
return ret;
}
static int pic32_rng_remove(struct platform_device *pdev)
{
struct pic32_rng *rng = platform_get_drvdata(pdev);
hwrng_unregister(&rng->rng);
writel(0, rng->base + RNGCON);
clk_disable_unprepare(rng->clk);
return 0;
}
static const struct of_device_id pic32_rng_of_match[] = {
{ .compatible = "microchip,pic32mzda-rng", },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, pic32_rng_of_match);
static struct platform_driver pic32_rng_driver = {
.probe = pic32_rng_probe,
.remove = pic32_rng_remove,
.driver = {
.name = "pic32-rng",
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(pic32_rng_of_match),
},
};
module_platform_driver(pic32_rng_driver);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Joshua Henderson <joshua.henderson@microchip.com>");
MODULE_DESCRIPTION("Microchip PIC32 RNG Driver");
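For context, once pic32_rng_probe() succeeds and hwrng_register() has run, the hwrng core exposes the device through /dev/hwrng. A small userspace sketch (assuming the standard hwrng character device is present on the target) that pulls eight raw bytes, matching the 8-byte granularity of pic32_rng_read() above:

/* Sketch: read 8 bytes of raw entropy from the active hardware RNG.
 * Assumes CONFIG_HW_RANDOM and a registered backend such as pic32-rng. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        uint8_t buf[8];
        int fd = open("/dev/hwrng", O_RDONLY);

        if (fd < 0) {
                perror("open /dev/hwrng");
                return 1;
        }
        if (read(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
                perror("read");
                close(fd);
                return 1;
        }
        for (size_t i = 0; i < sizeof(buf); i++)
                printf("%02x", buf[i]);
        printf("\n");
        close(fd);
        return 0;
}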
......@@ -296,6 +296,7 @@ config CRYPTO_DEV_OMAP_AES
depends on ARCH_OMAP2 || ARCH_OMAP3 || ARCH_OMAP2PLUS
select CRYPTO_AES
select CRYPTO_BLKCIPHER
select CRYPTO_ENGINE
help
OMAP processors have AES module accelerator. Select this if you
want to use the OMAP module for AES algorithms.
......@@ -487,7 +488,7 @@ config CRYPTO_DEV_IMGTEC_HASH
config CRYPTO_DEV_SUN4I_SS
tristate "Support for Allwinner Security System cryptographic accelerator"
depends on ARCH_SUNXI
depends on ARCH_SUNXI && !64BIT
select CRYPTO_MD5
select CRYPTO_SHA1
select CRYPTO_AES
......@@ -507,6 +508,10 @@ config CRYPTO_DEV_ROCKCHIP
depends on OF && ARCH_ROCKCHIP
select CRYPTO_AES
select CRYPTO_DES
select CRYPTO_MD5
select CRYPTO_SHA1
select CRYPTO_SHA256
select CRYPTO_HASH
select CRYPTO_BLKCIPHER
help
......
......@@ -369,12 +369,6 @@ static inline size_t atmel_aes_padlen(size_t len, size_t block_size)
return len ? block_size - len : 0;
}
static inline struct aead_request *
aead_request_cast(struct crypto_async_request *req)
{
return container_of(req, struct aead_request, base);
}
static struct atmel_aes_dev *atmel_aes_find_dev(struct atmel_aes_base_ctx *ctx)
{
struct atmel_aes_dev *aes_dd = NULL;
......@@ -2085,9 +2079,9 @@ static int atmel_aes_probe(struct platform_device *pdev)
}
aes_dd->io_base = devm_ioremap_resource(&pdev->dev, aes_res);
if (!aes_dd->io_base) {
if (IS_ERR(aes_dd->io_base)) {
dev_err(dev, "can't ioremap\n");
err = -ENOMEM;
err = PTR_ERR(aes_dd->io_base);
goto res_err;
}
......
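The probe hunk above (and the matching atmel-sha and atmel-tdes hunks further down) fix the error check: devm_ioremap_resource() never returns NULL on failure, it returns an ERR_PTR(), so the result has to be tested with IS_ERR() and propagated with PTR_ERR(). A minimal sketch of the idiom in a hypothetical probe():

/* Sketch: correct handling of devm_ioremap_resource(), which returns an
 * ERR_PTR() (never NULL) on failure. Driver and variable names are hypothetical. */
#include <linux/err.h>
#include <linux/io.h>
#include <linux/platform_device.h>

static int sketch_probe(struct platform_device *pdev)
{
        struct resource *res;
        void __iomem *base;

        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        base = devm_ioremap_resource(&pdev->dev, res);
        if (IS_ERR(base))
                return PTR_ERR(base);   /* propagate the real error, not a blanket -ENOMEM */

        /* use 'base' for readl()/writel() accesses from here on */
        return 0;
}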
......@@ -8,6 +8,8 @@
#define SHA_CR_START (1 << 0)
#define SHA_CR_FIRST (1 << 4)
#define SHA_CR_SWRST (1 << 8)
#define SHA_CR_WUIHV (1 << 12)
#define SHA_CR_WUIEHV (1 << 13)
#define SHA_MR 0x04
#define SHA_MR_MODE_MASK (0x3 << 0)
......@@ -15,6 +17,8 @@
#define SHA_MR_MODE_AUTO 0x1
#define SHA_MR_MODE_PDC 0x2
#define SHA_MR_PROCDLY (1 << 4)
#define SHA_MR_UIHV (1 << 5)
#define SHA_MR_UIEHV (1 << 6)
#define SHA_MR_ALGO_SHA1 (0 << 8)
#define SHA_MR_ALGO_SHA256 (1 << 8)
#define SHA_MR_ALGO_SHA384 (2 << 8)
......
......@@ -53,6 +53,7 @@
#define SHA_FLAGS_FINUP BIT(16)
#define SHA_FLAGS_SG BIT(17)
#define SHA_FLAGS_ALGO_MASK GENMASK(22, 18)
#define SHA_FLAGS_SHA1 BIT(18)
#define SHA_FLAGS_SHA224 BIT(19)
#define SHA_FLAGS_SHA256 BIT(20)
......@@ -60,11 +61,12 @@
#define SHA_FLAGS_SHA512 BIT(22)
#define SHA_FLAGS_ERROR BIT(23)
#define SHA_FLAGS_PAD BIT(24)
#define SHA_FLAGS_RESTORE BIT(25)
#define SHA_OP_UPDATE 1
#define SHA_OP_FINAL 2
#define SHA_BUFFER_LEN PAGE_SIZE
#define SHA_BUFFER_LEN (PAGE_SIZE / 16)
#define ATMEL_SHA_DMA_THRESHOLD 56
......@@ -73,10 +75,15 @@ struct atmel_sha_caps {
bool has_dualbuff;
bool has_sha224;
bool has_sha_384_512;
bool has_uihv;
};
struct atmel_sha_dev;
/*
* .statesize = sizeof(struct atmel_sha_reqctx) must be <= PAGE_SIZE / 8 as
* tested by the ahash_prepare_alg() function.
*/
struct atmel_sha_reqctx {
struct atmel_sha_dev *dd;
unsigned long flags;
......@@ -95,7 +102,7 @@ struct atmel_sha_reqctx {
size_t block_size;
u8 buffer[0] __aligned(sizeof(u32));
u8 buffer[SHA_BUFFER_LEN + SHA512_BLOCK_SIZE] __aligned(sizeof(u32));
};
struct atmel_sha_ctx {
......@@ -122,6 +129,7 @@ struct atmel_sha_dev {
spinlock_t lock;
int err;
struct tasklet_struct done_task;
struct tasklet_struct queue_task;
unsigned long flags;
struct crypto_queue queue;
......@@ -317,7 +325,8 @@ static int atmel_sha_init(struct ahash_request *req)
static void atmel_sha_write_ctrl(struct atmel_sha_dev *dd, int dma)
{
struct atmel_sha_reqctx *ctx = ahash_request_ctx(dd->req);
u32 valcr = 0, valmr = SHA_MR_MODE_AUTO;
u32 valmr = SHA_MR_MODE_AUTO;
unsigned int i, hashsize = 0;
if (likely(dma)) {
if (!dd->caps.has_dma)
......@@ -329,22 +338,62 @@ static void atmel_sha_write_ctrl(struct atmel_sha_dev *dd, int dma)
atmel_sha_write(dd, SHA_IER, SHA_INT_DATARDY);
}
if (ctx->flags & SHA_FLAGS_SHA1)
switch (ctx->flags & SHA_FLAGS_ALGO_MASK) {
case SHA_FLAGS_SHA1:
valmr |= SHA_MR_ALGO_SHA1;
else if (ctx->flags & SHA_FLAGS_SHA224)
hashsize = SHA1_DIGEST_SIZE;
break;
case SHA_FLAGS_SHA224:
valmr |= SHA_MR_ALGO_SHA224;
else if (ctx->flags & SHA_FLAGS_SHA256)
hashsize = SHA256_DIGEST_SIZE;
break;
case SHA_FLAGS_SHA256:
valmr |= SHA_MR_ALGO_SHA256;
else if (ctx->flags & SHA_FLAGS_SHA384)
hashsize = SHA256_DIGEST_SIZE;
break;
case SHA_FLAGS_SHA384:
valmr |= SHA_MR_ALGO_SHA384;
else if (ctx->flags & SHA_FLAGS_SHA512)
hashsize = SHA512_DIGEST_SIZE;
break;
case SHA_FLAGS_SHA512:
valmr |= SHA_MR_ALGO_SHA512;
hashsize = SHA512_DIGEST_SIZE;
break;
default:
break;
}
/* Setting CR_FIRST only for the first iteration */
if (!(ctx->digcnt[0] || ctx->digcnt[1]))
valcr = SHA_CR_FIRST;
if (!(ctx->digcnt[0] || ctx->digcnt[1])) {
atmel_sha_write(dd, SHA_CR, SHA_CR_FIRST);
} else if (dd->caps.has_uihv && (ctx->flags & SHA_FLAGS_RESTORE)) {
const u32 *hash = (const u32 *)ctx->digest;
/*
* Restore the hardware context: update the User Initialize
* Hash Value (UIHV) with the value saved when the latest
* 'update' operation completed on this very same crypto
* request.
*/
ctx->flags &= ~SHA_FLAGS_RESTORE;
atmel_sha_write(dd, SHA_CR, SHA_CR_WUIHV);
for (i = 0; i < hashsize / sizeof(u32); ++i)
atmel_sha_write(dd, SHA_REG_DIN(i), hash[i]);
atmel_sha_write(dd, SHA_CR, SHA_CR_FIRST);
valmr |= SHA_MR_UIHV;
}
/*
* WARNING: If the UIHV feature is not available, the hardware CANNOT
* process concurrent requests: the internal registers used to store
* the hash/digest are still set to the partial digest output values
* computed during the latest round.
*/
atmel_sha_write(dd, SHA_CR, valcr);
atmel_sha_write(dd, SHA_MR, valmr);
}
......@@ -713,23 +762,31 @@ static void atmel_sha_copy_hash(struct ahash_request *req)
{
struct atmel_sha_reqctx *ctx = ahash_request_ctx(req);
u32 *hash = (u32 *)ctx->digest;
int i;
unsigned int i, hashsize;
if (ctx->flags & SHA_FLAGS_SHA1)
for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(u32); i++)
hash[i] = atmel_sha_read(ctx->dd, SHA_REG_DIGEST(i));
else if (ctx->flags & SHA_FLAGS_SHA224)
for (i = 0; i < SHA224_DIGEST_SIZE / sizeof(u32); i++)
hash[i] = atmel_sha_read(ctx->dd, SHA_REG_DIGEST(i));
else if (ctx->flags & SHA_FLAGS_SHA256)
for (i = 0; i < SHA256_DIGEST_SIZE / sizeof(u32); i++)
hash[i] = atmel_sha_read(ctx->dd, SHA_REG_DIGEST(i));
else if (ctx->flags & SHA_FLAGS_SHA384)
for (i = 0; i < SHA384_DIGEST_SIZE / sizeof(u32); i++)
hash[i] = atmel_sha_read(ctx->dd, SHA_REG_DIGEST(i));
else
for (i = 0; i < SHA512_DIGEST_SIZE / sizeof(u32); i++)
hash[i] = atmel_sha_read(ctx->dd, SHA_REG_DIGEST(i));
switch (ctx->flags & SHA_FLAGS_ALGO_MASK) {
case SHA_FLAGS_SHA1:
hashsize = SHA1_DIGEST_SIZE;
break;
case SHA_FLAGS_SHA224:
case SHA_FLAGS_SHA256:
hashsize = SHA256_DIGEST_SIZE;
break;
case SHA_FLAGS_SHA384:
case SHA_FLAGS_SHA512:
hashsize = SHA512_DIGEST_SIZE;
break;
default:
/* Should not happen... */
return;
}
for (i = 0; i < hashsize / sizeof(u32); ++i)
hash[i] = atmel_sha_read(ctx->dd, SHA_REG_DIGEST(i));
ctx->flags |= SHA_FLAGS_RESTORE;
}
static void atmel_sha_copy_ready_hash(struct ahash_request *req)
......@@ -788,7 +845,7 @@ static void atmel_sha_finish_req(struct ahash_request *req, int err)
req->base.complete(&req->base, err);
/* handle new request */
tasklet_schedule(&dd->done_task);
tasklet_schedule(&dd->queue_task);
}
static int atmel_sha_hw_init(struct atmel_sha_dev *dd)
......@@ -922,36 +979,17 @@ static int atmel_sha_update(struct ahash_request *req)
static int atmel_sha_final(struct ahash_request *req)
{
struct atmel_sha_reqctx *ctx = ahash_request_ctx(req);
struct atmel_sha_ctx *tctx = crypto_tfm_ctx(req->base.tfm);
struct atmel_sha_dev *dd = tctx->dd;
int err = 0;
ctx->flags |= SHA_FLAGS_FINUP;
if (ctx->flags & SHA_FLAGS_ERROR)
return 0; /* uncompleted hash is not needed */
if (ctx->bufcnt) {
return atmel_sha_enqueue(req, SHA_OP_FINAL);
} else if (!(ctx->flags & SHA_FLAGS_PAD)) { /* add padding */
err = atmel_sha_hw_init(dd);
if (err)
goto err1;
dd->flags |= SHA_FLAGS_BUSY;
err = atmel_sha_final_req(dd);
} else {
if (ctx->flags & SHA_FLAGS_PAD)
/* copy ready hash (+ finalize hmac) */
return atmel_sha_finish(req);
}
err1:
if (err != -EINPROGRESS)
/* done_task will not finish it, so do it here */
atmel_sha_finish_req(req, err);
return err;
return atmel_sha_enqueue(req, SHA_OP_FINAL);
}
static int atmel_sha_finup(struct ahash_request *req)
......@@ -979,11 +1017,27 @@ static int atmel_sha_digest(struct ahash_request *req)
return atmel_sha_init(req) ?: atmel_sha_finup(req);
}
static int atmel_sha_export(struct ahash_request *req, void *out)
{
const struct atmel_sha_reqctx *ctx = ahash_request_ctx(req);
memcpy(out, ctx, sizeof(*ctx));
return 0;
}
static int atmel_sha_import(struct ahash_request *req, const void *in)
{
struct atmel_sha_reqctx *ctx = ahash_request_ctx(req);
memcpy(ctx, in, sizeof(*ctx));
return 0;
}
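The export()/import() callbacks added above, together with the .statesize fields below, let the crypto core snapshot a half-finished hash request and rebuild it later (algif_hash uses exactly this for accept()ed sockets). A hedged kernel-side sketch of the generic ahash calls involved, using a hypothetical helper name:

#include <crypto/hash.h>
#include <linux/slab.h>

/* Sketch: checkpoint a partially computed ahash and resume it.
 * Assumes 'req' has already been through crypto_ahash_init() and one or more
 * crypto_ahash_update() calls, and that the caller handles -EINPROGRESS from
 * asynchronous drivers through its usual completion mechanism. */
static int sketch_checkpoint_and_finish(struct ahash_request *req)
{
        struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
        void *state;
        int err;

        state = kmalloc(crypto_ahash_statesize(tfm), GFP_KERNEL);
        if (!state)
                return -ENOMEM;

        err = crypto_ahash_export(req, state);          /* snapshot partial digest + buffered bytes */
        if (!err)
                err = crypto_ahash_import(req, state);  /* restore the snapshot into the request */
        if (!err)
                err = crypto_ahash_final(req);          /* finish hashing from the restored state */

        kfree(state);
        return err;
}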
static int atmel_sha_cra_init(struct crypto_tfm *tfm)
{
crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
sizeof(struct atmel_sha_reqctx) +
SHA_BUFFER_LEN + SHA512_BLOCK_SIZE);
sizeof(struct atmel_sha_reqctx));
return 0;
}
......@@ -995,8 +1049,11 @@ static struct ahash_alg sha_1_256_algs[] = {
.final = atmel_sha_final,
.finup = atmel_sha_finup,
.digest = atmel_sha_digest,
.export = atmel_sha_export,
.import = atmel_sha_import,
.halg = {
.digestsize = SHA1_DIGEST_SIZE,
.statesize = sizeof(struct atmel_sha_reqctx),
.base = {
.cra_name = "sha1",
.cra_driver_name = "atmel-sha1",
......@@ -1016,8 +1073,11 @@ static struct ahash_alg sha_1_256_algs[] = {
.final = atmel_sha_final,
.finup = atmel_sha_finup,
.digest = atmel_sha_digest,
.export = atmel_sha_export,
.import = atmel_sha_import,
.halg = {
.digestsize = SHA256_DIGEST_SIZE,
.statesize = sizeof(struct atmel_sha_reqctx),
.base = {
.cra_name = "sha256",
.cra_driver_name = "atmel-sha256",
......@@ -1039,8 +1099,11 @@ static struct ahash_alg sha_224_alg = {
.final = atmel_sha_final,
.finup = atmel_sha_finup,
.digest = atmel_sha_digest,
.export = atmel_sha_export,
.import = atmel_sha_import,
.halg = {
.digestsize = SHA224_DIGEST_SIZE,
.statesize = sizeof(struct atmel_sha_reqctx),
.base = {
.cra_name = "sha224",
.cra_driver_name = "atmel-sha224",
......@@ -1062,8 +1125,11 @@ static struct ahash_alg sha_384_512_algs[] = {
.final = atmel_sha_final,
.finup = atmel_sha_finup,
.digest = atmel_sha_digest,
.export = atmel_sha_export,
.import = atmel_sha_import,
.halg = {
.digestsize = SHA384_DIGEST_SIZE,
.statesize = sizeof(struct atmel_sha_reqctx),
.base = {
.cra_name = "sha384",
.cra_driver_name = "atmel-sha384",
......@@ -1083,8 +1149,11 @@ static struct ahash_alg sha_384_512_algs[] = {
.final = atmel_sha_final,
.finup = atmel_sha_finup,
.digest = atmel_sha_digest,
.export = atmel_sha_export,
.import = atmel_sha_import,
.halg = {
.digestsize = SHA512_DIGEST_SIZE,
.statesize = sizeof(struct atmel_sha_reqctx),
.base = {
.cra_name = "sha512",
.cra_driver_name = "atmel-sha512",
......@@ -1100,16 +1169,18 @@ static struct ahash_alg sha_384_512_algs[] = {
},
};
static void atmel_sha_queue_task(unsigned long data)
{
struct atmel_sha_dev *dd = (struct atmel_sha_dev *)data;
atmel_sha_handle_queue(dd, NULL);
}
static void atmel_sha_done_task(unsigned long data)
{
struct atmel_sha_dev *dd = (struct atmel_sha_dev *)data;
int err = 0;
if (!(SHA_FLAGS_BUSY & dd->flags)) {
atmel_sha_handle_queue(dd, NULL);
return;
}
if (SHA_FLAGS_CPU & dd->flags) {
if (SHA_FLAGS_OUTPUT_READY & dd->flags) {
dd->flags &= ~SHA_FLAGS_OUTPUT_READY;
......@@ -1272,14 +1343,23 @@ static void atmel_sha_get_cap(struct atmel_sha_dev *dd)
dd->caps.has_dualbuff = 0;
dd->caps.has_sha224 = 0;
dd->caps.has_sha_384_512 = 0;
dd->caps.has_uihv = 0;
/* keep only major version number */
switch (dd->hw_version & 0xff0) {
case 0x510:
dd->caps.has_dma = 1;
dd->caps.has_dualbuff = 1;
dd->caps.has_sha224 = 1;
dd->caps.has_sha_384_512 = 1;
dd->caps.has_uihv = 1;
break;
case 0x420:
dd->caps.has_dma = 1;
dd->caps.has_dualbuff = 1;
dd->caps.has_sha224 = 1;
dd->caps.has_sha_384_512 = 1;
dd->caps.has_uihv = 1;
break;
case 0x410:
dd->caps.has_dma = 1;
......@@ -1366,6 +1446,8 @@ static int atmel_sha_probe(struct platform_device *pdev)
tasklet_init(&sha_dd->done_task, atmel_sha_done_task,
(unsigned long)sha_dd);
tasklet_init(&sha_dd->queue_task, atmel_sha_queue_task,
(unsigned long)sha_dd);
crypto_init_queue(&sha_dd->queue, ATMEL_SHA_QUEUE_LENGTH);
......@@ -1404,9 +1486,9 @@ static int atmel_sha_probe(struct platform_device *pdev)
}
sha_dd->io_base = devm_ioremap_resource(&pdev->dev, sha_res);
if (!sha_dd->io_base) {
if (IS_ERR(sha_dd->io_base)) {
dev_err(dev, "can't ioremap\n");
err = -ENOMEM;
err = PTR_ERR(sha_dd->io_base);
goto res_err;
}
......@@ -1464,6 +1546,7 @@ static int atmel_sha_probe(struct platform_device *pdev)
iclk_unprepare:
clk_unprepare(sha_dd->iclk);
res_err:
tasklet_kill(&sha_dd->queue_task);
tasklet_kill(&sha_dd->done_task);
sha_dd_err:
dev_err(dev, "initialization failed.\n");
......@@ -1484,6 +1567,7 @@ static int atmel_sha_remove(struct platform_device *pdev)
atmel_sha_unregister_algs(sha_dd);
tasklet_kill(&sha_dd->queue_task);
tasklet_kill(&sha_dd->done_task);
if (sha_dd->caps.has_dma)
......
......@@ -1417,9 +1417,9 @@ static int atmel_tdes_probe(struct platform_device *pdev)
}
tdes_dd->io_base = devm_ioremap_resource(&pdev->dev, tdes_res);
if (!tdes_dd->io_base) {
if (IS_ERR(tdes_dd->io_base)) {
dev_err(dev, "can't ioremap\n");
err = -ENOMEM;
err = PTR_ERR(tdes_dd->io_base);
goto res_err;
}
......
......@@ -534,7 +534,7 @@ static int caam_probe(struct platform_device *pdev)
* long pointers in master configuration register
*/
clrsetbits_32(&ctrl->mcr, MCFGR_AWCACHE_MASK, MCFGR_AWCACHE_CACH |
MCFGR_AWCACHE_BUFF | MCFGR_WDENABLE |
MCFGR_AWCACHE_BUFF | MCFGR_WDENABLE | MCFGR_LARGE_BURST |
(sizeof(dma_addr_t) == sizeof(u64) ? MCFGR_LONG_PTR : 0));
/*
......
......@@ -65,7 +65,7 @@ static int caam_reset_hw_jr(struct device *dev)
/*
* Shutdown JobR independent of platform property code
*/
int caam_jr_shutdown(struct device *dev)
static int caam_jr_shutdown(struct device *dev)
{
struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
dma_addr_t inpbusaddr, outbusaddr;
......
......@@ -455,7 +455,8 @@ struct caam_ctrl {
#define MCFGR_AXIPIPE_MASK (0xf << MCFGR_AXIPIPE_SHIFT)
#define MCFGR_AXIPRI 0x00000008 /* Assert AXI priority sideband */
#define MCFGR_BURST_64 0x00000001 /* Max burst size */
#define MCFGR_LARGE_BURST 0x00000004 /* 128/256-byte burst size */
#define MCFGR_BURST_64 0x00000001 /* 64-byte burst size */
/* JRSTART register offsets */
#define JRSTART_JR0_START 0x00000001 /* Start Job ring 0 */
......
obj-$(CONFIG_CRYPTO_DEV_CCP_DD) += ccp.o
ccp-objs := ccp-dev.o ccp-ops.o ccp-platform.o
ccp-objs := ccp-dev.o ccp-ops.o ccp-dev-v3.o ccp-platform.o
ccp-$(CONFIG_PCI) += ccp-pci.o
obj-$(CONFIG_CRYPTO_DEV_CCP_CRYPTO) += ccp-crypto.o
......
......@@ -220,6 +220,39 @@ static int ccp_aes_cmac_digest(struct ahash_request *req)
return ccp_aes_cmac_finup(req);
}
static int ccp_aes_cmac_export(struct ahash_request *req, void *out)
{
struct ccp_aes_cmac_req_ctx *rctx = ahash_request_ctx(req);
struct ccp_aes_cmac_exp_ctx state;
state.null_msg = rctx->null_msg;
memcpy(state.iv, rctx->iv, sizeof(state.iv));
state.buf_count = rctx->buf_count;
memcpy(state.buf, rctx->buf, sizeof(state.buf));
/* 'out' may not be aligned so memcpy from local variable */
memcpy(out, &state, sizeof(state));
return 0;
}
static int ccp_aes_cmac_import(struct ahash_request *req, const void *in)
{
struct ccp_aes_cmac_req_ctx *rctx = ahash_request_ctx(req);
struct ccp_aes_cmac_exp_ctx state;
/* 'in' may not be aligned so memcpy to local variable */
memcpy(&state, in, sizeof(state));
memset(rctx, 0, sizeof(*rctx));
rctx->null_msg = state.null_msg;
memcpy(rctx->iv, state.iv, sizeof(rctx->iv));
rctx->buf_count = state.buf_count;
memcpy(rctx->buf, state.buf, sizeof(rctx->buf));
return 0;
}
static int ccp_aes_cmac_setkey(struct crypto_ahash *tfm, const u8 *key,
unsigned int key_len)
{
......@@ -352,10 +385,13 @@ int ccp_register_aes_cmac_algs(struct list_head *head)
alg->final = ccp_aes_cmac_final;
alg->finup = ccp_aes_cmac_finup;
alg->digest = ccp_aes_cmac_digest;
alg->export = ccp_aes_cmac_export;
alg->import = ccp_aes_cmac_import;
alg->setkey = ccp_aes_cmac_setkey;
halg = &alg->halg;
halg->digestsize = AES_BLOCK_SIZE;
halg->statesize = sizeof(struct ccp_aes_cmac_exp_ctx);
base = &halg->base;
snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME, "cmac(aes)");
......
/*
* AMD Cryptographic Coprocessor (CCP) AES crypto API support
*
* Copyright (C) 2013 Advanced Micro Devices, Inc.
* Copyright (C) 2013,2016 Advanced Micro Devices, Inc.
*
* Author: Tom Lendacky <thomas.lendacky@amd.com>
*
......@@ -259,6 +259,7 @@ static struct crypto_alg ccp_aes_rfc3686_defaults = {
struct ccp_aes_def {
enum ccp_aes_mode mode;
unsigned int version;
const char *name;
const char *driver_name;
unsigned int blocksize;
......@@ -269,6 +270,7 @@ struct ccp_aes_def {
static struct ccp_aes_def aes_algs[] = {
{
.mode = CCP_AES_MODE_ECB,
.version = CCP_VERSION(3, 0),
.name = "ecb(aes)",
.driver_name = "ecb-aes-ccp",
.blocksize = AES_BLOCK_SIZE,
......@@ -277,6 +279,7 @@ static struct ccp_aes_def aes_algs[] = {
},
{
.mode = CCP_AES_MODE_CBC,
.version = CCP_VERSION(3, 0),
.name = "cbc(aes)",
.driver_name = "cbc-aes-ccp",
.blocksize = AES_BLOCK_SIZE,
......@@ -285,6 +288,7 @@ static struct ccp_aes_def aes_algs[] = {
},
{
.mode = CCP_AES_MODE_CFB,
.version = CCP_VERSION(3, 0),
.name = "cfb(aes)",
.driver_name = "cfb-aes-ccp",
.blocksize = AES_BLOCK_SIZE,
......@@ -293,6 +297,7 @@ static struct ccp_aes_def aes_algs[] = {
},
{
.mode = CCP_AES_MODE_OFB,
.version = CCP_VERSION(3, 0),
.name = "ofb(aes)",
.driver_name = "ofb-aes-ccp",
.blocksize = 1,
......@@ -301,6 +306,7 @@ static struct ccp_aes_def aes_algs[] = {
},
{
.mode = CCP_AES_MODE_CTR,
.version = CCP_VERSION(3, 0),
.name = "ctr(aes)",
.driver_name = "ctr-aes-ccp",
.blocksize = 1,
......@@ -309,6 +315,7 @@ static struct ccp_aes_def aes_algs[] = {
},
{
.mode = CCP_AES_MODE_CTR,
.version = CCP_VERSION(3, 0),
.name = "rfc3686(ctr(aes))",
.driver_name = "rfc3686-ctr-aes-ccp",
.blocksize = 1,
......@@ -357,8 +364,11 @@ static int ccp_register_aes_alg(struct list_head *head,
int ccp_register_aes_algs(struct list_head *head)
{
int i, ret;
unsigned int ccpversion = ccp_version();
for (i = 0; i < ARRAY_SIZE(aes_algs); i++) {
if (aes_algs[i].version > ccpversion)
continue;
ret = ccp_register_aes_alg(head, &aes_algs[i]);
if (ret)
return ret;
......
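The new .version field on each ccp_aes_def entry, checked against ccp_version() in the registration loop above, keeps algorithms off the list when the detected CCP generation cannot run them. A toy sketch of that gating with made-up names and a hypothetical newer-generation entry:

#include <linux/kernel.h>

/* Sketch of version gating: each definition carries the minimum device
 * generation it needs; registration skips anything the hardware is too old
 * for. SKETCH_VERSION() mimics a (major << 8 | minor) packing and is purely
 * illustrative. */
#define SKETCH_VERSION(maj, min)        (((maj) << 8) | (min))

struct sketch_alg_def {
        unsigned int version;
        const char *name;
};

static const struct sketch_alg_def sketch_algs[] = {
        { SKETCH_VERSION(3, 0), "ecb(aes)" },
        { SKETCH_VERSION(5, 0), "only-on-newer-hardware" },
};

static void sketch_register_supported(unsigned int hw_version)
{
        unsigned int i;

        for (i = 0; i < ARRAY_SIZE(sketch_algs); i++) {
                if (sketch_algs[i].version > hw_version)
                        continue;       /* device too old for this entry */
                pr_info("registering %s\n", sketch_algs[i].name);
        }
}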
/*
* AMD Cryptographic Coprocessor (CCP) SHA crypto API support
*
* Copyright (C) 2013 Advanced Micro Devices, Inc.
* Copyright (C) 2013,2016 Advanced Micro Devices, Inc.
*
* Author: Tom Lendacky <thomas.lendacky@amd.com>
*
......@@ -207,6 +207,43 @@ static int ccp_sha_digest(struct ahash_request *req)
return ccp_sha_finup(req);
}
static int ccp_sha_export(struct ahash_request *req, void *out)
{
struct ccp_sha_req_ctx *rctx = ahash_request_ctx(req);
struct ccp_sha_exp_ctx state;
state.type = rctx->type;
state.msg_bits = rctx->msg_bits;
state.first = rctx->first;
memcpy(state.ctx, rctx->ctx, sizeof(state.ctx));
state.buf_count = rctx->buf_count;
memcpy(state.buf, rctx->buf, sizeof(state.buf));
/* 'out' may not be aligned so memcpy from local variable */
memcpy(out, &state, sizeof(state));
return 0;
}
static int ccp_sha_import(struct ahash_request *req, const void *in)
{
struct ccp_sha_req_ctx *rctx = ahash_request_ctx(req);
struct ccp_sha_exp_ctx state;
/* 'in' may not be aligned so memcpy to local variable */
memcpy(&state, in, sizeof(state));
memset(rctx, 0, sizeof(*rctx));
rctx->type = state.type;
rctx->msg_bits = state.msg_bits;
rctx->first = state.first;
memcpy(rctx->ctx, state.ctx, sizeof(rctx->ctx));
rctx->buf_count = state.buf_count;
memcpy(rctx->buf, state.buf, sizeof(rctx->buf));
return 0;
}
static int ccp_sha_setkey(struct crypto_ahash *tfm, const u8 *key,
unsigned int key_len)
{
......@@ -304,6 +341,7 @@ static void ccp_hmac_sha_cra_exit(struct crypto_tfm *tfm)
}
struct ccp_sha_def {
unsigned int version;
const char *name;
const char *drv_name;
enum ccp_sha_type type;
......@@ -313,6 +351,7 @@ struct ccp_sha_def {
static struct ccp_sha_def sha_algs[] = {
{
.version = CCP_VERSION(3, 0),
.name = "sha1",
.drv_name = "sha1-ccp",
.type = CCP_SHA_TYPE_1,
......@@ -320,6 +359,7 @@ static struct ccp_sha_def sha_algs[] = {
.block_size = SHA1_BLOCK_SIZE,
},
{
.version = CCP_VERSION(3, 0),
.name = "sha224",
.drv_name = "sha224-ccp",
.type = CCP_SHA_TYPE_224,
......@@ -327,6 +367,7 @@ static struct ccp_sha_def sha_algs[] = {
.block_size = SHA224_BLOCK_SIZE,
},
{
.version = CCP_VERSION(3, 0),
.name = "sha256",
.drv_name = "sha256-ccp",
.type = CCP_SHA_TYPE_256,
......@@ -403,9 +444,12 @@ static int ccp_register_sha_alg(struct list_head *head,
alg->final = ccp_sha_final;
alg->finup = ccp_sha_finup;
alg->digest = ccp_sha_digest;
alg->export = ccp_sha_export;
alg->import = ccp_sha_import;
halg = &alg->halg;
halg->digestsize = def->digest_size;
halg->statesize = sizeof(struct ccp_sha_exp_ctx);
base = &halg->base;
snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME, "%s", def->name);
......@@ -440,8 +484,11 @@ static int ccp_register_sha_alg(struct list_head *head,
int ccp_register_sha_algs(struct list_head *head)
{
int i, ret;
unsigned int ccpversion = ccp_version();
for (i = 0; i < ARRAY_SIZE(sha_algs); i++) {
if (sha_algs[i].version > ccpversion)
continue;
ret = ccp_register_sha_alg(head, &sha_algs[i]);
if (ret)
return ret;
......
......@@ -129,6 +129,15 @@ struct ccp_aes_cmac_req_ctx {
struct ccp_cmd cmd;
};
struct ccp_aes_cmac_exp_ctx {
unsigned int null_msg;
u8 iv[AES_BLOCK_SIZE];
unsigned int buf_count;
u8 buf[AES_BLOCK_SIZE];
};
/***** SHA related defines *****/
#define MAX_SHA_CONTEXT_SIZE SHA256_DIGEST_SIZE
#define MAX_SHA_BLOCK_SIZE SHA256_BLOCK_SIZE
......@@ -171,6 +180,19 @@ struct ccp_sha_req_ctx {
struct ccp_cmd cmd;
};
struct ccp_sha_exp_ctx {
enum ccp_sha_type type;
u64 msg_bits;
unsigned int first;
u8 ctx[MAX_SHA_CONTEXT_SIZE];
unsigned int buf_count;
u8 buf[MAX_SHA_BLOCK_SIZE];
};
/***** Common Context Structure *****/
struct ccp_ctx {
int (*complete)(struct crypto_async_request *req, int ret);
......
[2 collapsed file diffs not shown]
/*
* AMD Cryptographic Coprocessor (CCP) driver
*
* Copyright (C) 2013 Advanced Micro Devices, Inc.
* Copyright (C) 2013,2016 Advanced Micro Devices, Inc.
*
* Author: Tom Lendacky <thomas.lendacky@amd.com>
*
......@@ -23,6 +23,7 @@
#include <linux/hw_random.h>
#include <linux/bitops.h>
#define MAX_CCP_NAME_LEN 16
#define MAX_DMAPOOL_NAME_LEN 32
#define MAX_HW_QUEUES 5
......@@ -140,6 +141,29 @@
#define CCP_ECC_RESULT_OFFSET 60
#define CCP_ECC_RESULT_SUCCESS 0x0001
struct ccp_op;
/* Structure for computation functions that are device-specific */
struct ccp_actions {
int (*perform_aes)(struct ccp_op *);
int (*perform_xts_aes)(struct ccp_op *);
int (*perform_sha)(struct ccp_op *);
int (*perform_rsa)(struct ccp_op *);
int (*perform_passthru)(struct ccp_op *);
int (*perform_ecc)(struct ccp_op *);
int (*init)(struct ccp_device *);
void (*destroy)(struct ccp_device *);
irqreturn_t (*irqhandler)(int, void *);
};
/* Structure to hold CCP version-specific values */
struct ccp_vdata {
unsigned int version;
struct ccp_actions *perform;
};
extern struct ccp_vdata ccpv3;
struct ccp_device;
struct ccp_cmd;
......@@ -184,6 +208,13 @@ struct ccp_cmd_queue {
} ____cacheline_aligned;
struct ccp_device {
struct list_head entry;
struct ccp_vdata *vdata;
unsigned int ord;
char name[MAX_CCP_NAME_LEN];
char rngname[MAX_CCP_NAME_LEN];
struct device *dev;
/*
......@@ -258,18 +289,132 @@ struct ccp_device {
unsigned int axcache;
};
enum ccp_memtype {
CCP_MEMTYPE_SYSTEM = 0,
CCP_MEMTYPE_KSB,
CCP_MEMTYPE_LOCAL,
CCP_MEMTYPE__LAST,
};
struct ccp_dma_info {
dma_addr_t address;
unsigned int offset;
unsigned int length;
enum dma_data_direction dir;
};
struct ccp_dm_workarea {
struct device *dev;
struct dma_pool *dma_pool;
unsigned int length;
u8 *address;
struct ccp_dma_info dma;
};
struct ccp_sg_workarea {
struct scatterlist *sg;
int nents;
struct scatterlist *dma_sg;
struct device *dma_dev;
unsigned int dma_count;
enum dma_data_direction dma_dir;
unsigned int sg_used;
u64 bytes_left;
};
struct ccp_data {
struct ccp_sg_workarea sg_wa;
struct ccp_dm_workarea dm_wa;
};
struct ccp_mem {
enum ccp_memtype type;
union {
struct ccp_dma_info dma;
u32 ksb;
} u;
};
struct ccp_aes_op {
enum ccp_aes_type type;
enum ccp_aes_mode mode;
enum ccp_aes_action action;
};
struct ccp_xts_aes_op {
enum ccp_aes_action action;
enum ccp_xts_aes_unit_size unit_size;
};
struct ccp_sha_op {
enum ccp_sha_type type;
u64 msg_bits;
};
struct ccp_rsa_op {
u32 mod_size;
u32 input_len;
};
struct ccp_passthru_op {
enum ccp_passthru_bitwise bit_mod;
enum ccp_passthru_byteswap byte_swap;
};
struct ccp_ecc_op {
enum ccp_ecc_function function;
};
struct ccp_op {
struct ccp_cmd_queue *cmd_q;
u32 jobid;
u32 ioc;
u32 soc;
u32 ksb_key;
u32 ksb_ctx;
u32 init;
u32 eom;
struct ccp_mem src;
struct ccp_mem dst;
union {
struct ccp_aes_op aes;
struct ccp_xts_aes_op xts;
struct ccp_sha_op sha;
struct ccp_rsa_op rsa;
struct ccp_passthru_op passthru;
struct ccp_ecc_op ecc;
} u;
};
static inline u32 ccp_addr_lo(struct ccp_dma_info *info)
{
return lower_32_bits(info->address + info->offset);
}
static inline u32 ccp_addr_hi(struct ccp_dma_info *info)
{
return upper_32_bits(info->address + info->offset) & 0x0000ffff;
}
int ccp_pci_init(void);
void ccp_pci_exit(void);
int ccp_platform_init(void);
void ccp_platform_exit(void);
void ccp_add_device(struct ccp_device *ccp);
void ccp_del_device(struct ccp_device *ccp);
struct ccp_device *ccp_alloc_struct(struct device *dev);
int ccp_init(struct ccp_device *ccp);
void ccp_destroy(struct ccp_device *ccp);
bool ccp_queues_suspended(struct ccp_device *ccp);
irqreturn_t ccp_irq_handler(int irq, void *data);
int ccp_cmd_queue_thread(void *data);
int ccp_run_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd);
......
[collapsed file diff not shown]
/*
* AMD Cryptographic Coprocessor (CCP) driver
*
* Copyright (C) 2013 Advanced Micro Devices, Inc.
* Copyright (C) 2013,2016 Advanced Micro Devices, Inc.
*
* Author: Tom Lendacky <thomas.lendacky@amd.com>
*
......@@ -59,9 +59,11 @@ static int ccp_get_msix_irqs(struct ccp_device *ccp)
ccp_pci->msix_count = ret;
for (v = 0; v < ccp_pci->msix_count; v++) {
/* Set the interrupt names and request the irqs */
snprintf(ccp_pci->msix[v].name, name_len, "ccp-%u", v);
snprintf(ccp_pci->msix[v].name, name_len, "%s-%u",
ccp->name, v);
ccp_pci->msix[v].vector = msix_entry[v].vector;
ret = request_irq(ccp_pci->msix[v].vector, ccp_irq_handler,
ret = request_irq(ccp_pci->msix[v].vector,
ccp->vdata->perform->irqhandler,
0, ccp_pci->msix[v].name, dev);
if (ret) {
dev_notice(dev, "unable to allocate MSI-X IRQ (%d)\n",
......@@ -94,7 +96,8 @@ static int ccp_get_msi_irq(struct ccp_device *ccp)
return ret;
ccp->irq = pdev->irq;
ret = request_irq(ccp->irq, ccp_irq_handler, 0, "ccp", dev);
ret = request_irq(ccp->irq, ccp->vdata->perform->irqhandler, 0,
ccp->name, dev);
if (ret) {
dev_notice(dev, "unable to allocate MSI IRQ (%d)\n", ret);
goto e_msi;
......@@ -179,6 +182,12 @@ static int ccp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
goto e_err;
ccp->dev_specific = ccp_pci;
ccp->vdata = (struct ccp_vdata *)id->driver_data;
if (!ccp->vdata || !ccp->vdata->version) {
ret = -ENODEV;
dev_err(dev, "missing driver data\n");
goto e_err;
}
ccp->get_irq = ccp_get_irqs;
ccp->free_irq = ccp_free_irqs;
......@@ -221,7 +230,7 @@ static int ccp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
dev_set_drvdata(dev, ccp);
ret = ccp_init(ccp);
ret = ccp->vdata->perform->init(ccp);
if (ret)
goto e_iomap;
......@@ -251,7 +260,7 @@ static void ccp_pci_remove(struct pci_dev *pdev)
if (!ccp)
return;
ccp_destroy(ccp);
ccp->vdata->perform->destroy(ccp);
pci_iounmap(pdev, ccp->io_map);
......@@ -312,7 +321,7 @@ static int ccp_pci_resume(struct pci_dev *pdev)
#endif
static const struct pci_device_id ccp_pci_table[] = {
{ PCI_VDEVICE(AMD, 0x1537), },
{ PCI_VDEVICE(AMD, 0x1537), (kernel_ulong_t)&ccpv3 },
/* Last entry must be zero */
{ 0, }
};
......
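The ccp-pci.c hunks above show the new dispatch scheme: the PCI id table's driver_data points at per-generation data (ccp_vdata), and probe, remove and interrupt paths call through its ccp_actions ops table instead of hard-coded ccp_init()/ccp_destroy()/ccp_irq_handler(). A self-contained sketch of the same pattern, with all names made up rather than taken from the driver:

#include <linux/interrupt.h>
#include <linux/pci.h>

struct sketch_dev;

struct sketch_actions {
        int (*init)(struct sketch_dev *dev);
        void (*destroy)(struct sketch_dev *dev);
        irqreturn_t (*irqhandler)(int irq, void *data);
};

struct sketch_vdata {
        unsigned int version;
        const struct sketch_actions *perform;
};

struct sketch_dev {
        const struct sketch_vdata *vdata;
};

static int sketch_v3_init(struct sketch_dev *dev)
{
        return 0;                       /* generation-specific bring-up would live here */
}

static void sketch_v3_destroy(struct sketch_dev *dev)
{
}

static irqreturn_t sketch_v3_irq(int irq, void *data)
{
        return IRQ_HANDLED;
}

static const struct sketch_actions sketch_v3_actions = {
        .init       = sketch_v3_init,
        .destroy    = sketch_v3_destroy,
        .irqhandler = sketch_v3_irq,
};

static const struct sketch_vdata sketch_v3 = {
        .version = 3,
        .perform = &sketch_v3_actions,
};

static struct sketch_dev sketch_instance;

static int sketch_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        struct sketch_dev *dev = &sketch_instance;

        dev->vdata = (const struct sketch_vdata *)id->driver_data;
        if (!dev->vdata || !dev->vdata->version)
                return -ENODEV;         /* id table entry carried no driver data */

        /* everything device-specific now goes through the ops table */
        return dev->vdata->perform->init(dev);
}

static const struct pci_device_id sketch_ids[] = {
        { PCI_VDEVICE(AMD, 0x1537), (kernel_ulong_t)&sketch_v3 },
        { 0, }
};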
[3 collapsed file diffs not shown]
......@@ -55,8 +55,8 @@
#define ADF_DH895XCC_DEVICE_NAME "dh895xcc"
#define ADF_DH895XCCVF_DEVICE_NAME "dh895xccvf"
#define ADF_C62X_DEVICE_NAME "c62x"
#define ADF_C62XVF_DEVICE_NAME "c62xvf"
#define ADF_C62X_DEVICE_NAME "c6xx"
#define ADF_C62XVF_DEVICE_NAME "c6xxvf"
#define ADF_C3XXX_DEVICE_NAME "c3xxx"
#define ADF_C3XXXVF_DEVICE_NAME "c3xxxvf"
#define ADF_DH895XCC_PCI_DEVICE_ID 0x435
......
......@@ -121,7 +121,6 @@ static void adf_device_reset_worker(struct work_struct *work)
adf_dev_restarting_notify(accel_dev);
adf_dev_stop(accel_dev);
adf_dev_shutdown(accel_dev);
adf_dev_restore(accel_dev);
if (adf_dev_init(accel_dev) || adf_dev_start(accel_dev)) {
/* The device hanged and we can't restart it so stop here */
dev_err(&GET_DEV(accel_dev), "Restart device failed\n");
......
......@@ -688,7 +688,7 @@ static int qat_uclo_map_ae(struct icp_qat_fw_loader_handle *handle, int max_ae)
int mflag = 0;
struct icp_qat_uclo_objhandle *obj_handle = handle->obj_handle;
for (ae = 0; ae <= max_ae; ae++) {
for (ae = 0; ae < max_ae; ae++) {
if (!test_bit(ae,
(unsigned long *)&handle->hal_handle->ae_mask))
continue;
......
[73 collapsed file diffs not shown]