- 13 June 2019, 3 commits
-
-
Committed by Eric Biggers

Call cond_resched() after each fuzz test iteration. This avoids stall warnings if fuzz_iterations is set very high for testing purposes. While we're at it, also call cond_resched() after finishing testing each test vector.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
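A minimal sketch of the pattern described above; the function and helper names here are illustrative, not the actual testmgr code:

    /* Yield between long-running fuzz iterations so other tasks can run
     * and soft-lockup/stall warnings are avoided. */
    static int run_fuzz_tests(unsigned int fuzz_iterations)
    {
            unsigned int i;
            int err;

            for (i = 0; i < fuzz_iterations; i++) {
                    err = do_one_fuzz_iteration(i); /* hypothetical helper */
                    if (err)
                            return err;
                    cond_resched(); /* safe: testmgr runs in process context */
            }
            return 0;
    }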
-
Committed by Eric Biggers

Now that all algorithms explicitly set cra_driver_name, make it required for algorithm registration and remove the code that generated a default cra_driver_name. Also add an explicit check that cra_name is set, since that is obviously required too, yet it didn't seem to be checked anywhere.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
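A rough sketch of the kind of registration-time check this describes (simplified, not the exact crypto_check_alg() code):

    static int check_alg_names(const struct crypto_alg *alg)
    {
            /* Both the algorithm name and the implementation name must
             * now be provided by the registering driver. */
            if (!alg->cra_name[0] || !alg->cra_driver_name[0])
                    return -EINVAL;
            return 0;
    }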
-
Committed by Eric Biggers

Most generic crypto algorithms declare a driver name ending in "-generic". The rest don't declare a driver name and instead rely on the crypto API automagically appending "-generic" upon registration. Having multiple conventions is unnecessarily confusing and makes it harder to grep for all generic algorithms in the kernel source tree. But also, allowing NULL driver names is problematic because sometimes people fail to set it, e.g. the case fixed by commit 41798036 ("crypto: cavium/zip - fix collision with generic cra_driver_name").

Of course, people can also incorrectly name their drivers "-generic". But that's much easier to notice / grep for.

Therefore, let's make cra_driver_name mandatory. In preparation for this, this patch makes all generic algorithms set cra_driver_name.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
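For illustration, this is the naming convention a generic algorithm follows when it registers; the algorithm and field values below are made up:

    static struct shash_alg example_alg = {
            .digestsize = 32,
            /* .init, .update, .final callbacks omitted */
            .base = {
                    .cra_name        = "example",         /* what users request */
                    .cra_driver_name = "example-generic", /* now set explicitly, never auto-appended */
                    .cra_priority    = 100,
                    .cra_blocksize   = 64,
                    .cra_module      = THIS_MODULE,
            },
    };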
-
- 06 June 2019, 6 commits
-
-
Committed by Eric Biggers

Clear the CRYPTO_TFM_REQ_MAY_SLEEP flag when the chacha20poly1305 operation is being continued from an async completion callback, since sleeping may not be allowed in that context.

This is basically the same bug that was recently fixed in the xts and lrw templates. But it's always been broken in chacha20poly1305 too. This was found using syzkaller in combination with the updated crypto self-tests which actually test the MAY_SLEEP flag now.

Reproducer:

    python -c 'import socket; socket.socket(socket.AF_ALG, 5, 0).bind(
        ("aead", "rfc7539(cryptd(chacha20-generic),poly1305-generic)"))'

Kernel output:

    BUG: sleeping function called from invalid context at include/crypto/algapi.h:426
    in_atomic(): 1, irqs_disabled(): 0, pid: 1001, name: kworker/2:2
    [...]
    CPU: 2 PID: 1001 Comm: kworker/2:2 Not tainted 5.2.0-rc2 #5
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-20181126_142135-anatol 04/01/2014
    Workqueue: crypto cryptd_queue_worker
    Call Trace:
     __dump_stack lib/dump_stack.c:77 [inline]
     dump_stack+0x4d/0x6a lib/dump_stack.c:113
     ___might_sleep kernel/sched/core.c:6138 [inline]
     ___might_sleep.cold.19+0x8e/0x9f kernel/sched/core.c:6095
     crypto_yield include/crypto/algapi.h:426 [inline]
     crypto_hash_walk_done+0xd6/0x100 crypto/ahash.c:113
     shash_ahash_update+0x41/0x60 crypto/shash.c:251
     shash_async_update+0xd/0x10 crypto/shash.c:260
     crypto_ahash_update include/crypto/hash.h:539 [inline]
     poly_setkey+0xf6/0x130 crypto/chacha20poly1305.c:337
     poly_init+0x51/0x60 crypto/chacha20poly1305.c:364
     async_done_continue crypto/chacha20poly1305.c:78 [inline]
     poly_genkey_done+0x15/0x30 crypto/chacha20poly1305.c:369
     cryptd_skcipher_complete+0x29/0x70 crypto/cryptd.c:279
     cryptd_skcipher_decrypt+0xcd/0x110 crypto/cryptd.c:339
     cryptd_queue_worker+0x70/0xa0 crypto/cryptd.c:184
     process_one_work+0x1ed/0x420 kernel/workqueue.c:2269
     worker_thread+0x3e/0x3a0 kernel/workqueue.c:2415
     kthread+0x11f/0x140 kernel/kthread.c:255
     ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:352

Fixes: 71ebc4d1 ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539")
Cc: <stable@vger.kernel.org> # v4.2+
Cc: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
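The fix amounts to dropping the MAY_SLEEP flag whenever the request is continued from a completion callback. A simplified sketch, assuming the local chachapoly_req_ctx type from crypto/chacha20poly1305.c:

    static void async_done_continue(struct aead_request *req, int err,
                                    int (*cont)(struct aead_request *))
    {
            if (!err) {
                    struct chachapoly_req_ctx *rctx = aead_request_ctx(req);

                    /* We may be running from an async completion in atomic
                     * context, so sleeping is no longer allowed from here on. */
                    rctx->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
                    err = cont(req);
            }

            if (err != -EINPROGRESS && err != -EBUSY)
                    aead_request_complete(req, err);
    }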
-
Committed by Eric Biggers

Commit c778f96b ("crypto: lrw - Optimize tweak computation") incorrectly reduced the alignmask of LRW instances from '__alignof__(u64) - 1' to '__alignof__(__be32) - 1'. However, xor_tweak() and setkey() assume that the data and key, respectively, are aligned to 'be128', which has u64 alignment. Fix the alignmask to be at least '__alignof__(be128) - 1'.

Fixes: c778f96b ("crypto: lrw - Optimize tweak computation")
Cc: <stable@vger.kernel.org> # v4.20+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

Changing ghash_mod_init() to be a subsys_initcall made it start running before the alignment fault handler has been installed on ARM. In kernel builds where the keys in the ghash test vectors happened to be misaligned in the kernel image, this exposed the longstanding bug that ghash_setkey() incorrectly casts the key buffer (which can have any alignment) to be128 for passing to gf128mul_init_4k_lle().

Fix this by memcpy()ing the key to a temporary buffer. Don't fix it by setting an alignmask on the algorithm instead, because that would unnecessarily force alignment of the data too.

Fixes: 2cdc6899 ("crypto: ghash - Add GHASH digest algorithm for GCM")
Reported-by: Peter Robinson <pbrobinson@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Tested-by: Peter Robinson <pbrobinson@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
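A sketch of the described fix, close to the ghash-generic.c change but lightly simplified (error-flag handling omitted):

    static int ghash_setkey(struct crypto_shash *tfm,
                            const u8 *key, unsigned int keylen)
    {
            struct ghash_ctx *ctx = crypto_shash_ctx(tfm);
            be128 k;

            if (keylen != GHASH_BLOCK_SIZE)
                    return -EINVAL;

            if (ctx->gf128)
                    gf128mul_free_4k(ctx->gf128);

            BUILD_BUG_ON(sizeof(k) != GHASH_BLOCK_SIZE);
            memcpy(&k, key, GHASH_BLOCK_SIZE); /* key may be unaligned */
            ctx->gf128 = gf128mul_init_4k_lle(&k);
            memzero_explicit(&k, GHASH_BLOCK_SIZE);

            if (!ctx->gf128)
                    return -ENOMEM;

            return 0;
    }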
-
Committed by Nikolay Borisov

xxhash is currently implemented as a self-contained module in /lib. This patch enables that module to be used as part of the generic kernel crypto framework. It adds a simple wrapper to the 64bit version.

I've also added test vectors (with help from Nick Terrell). The upstream xxhash code is tested by running a hashing operation on random 222 byte data with seed values of 0 and a prime number. The upstream test suite can be found at https://github.com/Cyan4973/xxHash/blob/cf46e0c/xxhsum.c#L664

Essentially hashing is run on data of length 0, 1, 14 and 222 with the aforementioned seed values 0 and prime 2654435761. The particular random 222 byte string was provided to me by Nick Terrell by reading /dev/random, and the checksums were calculated by the upstream xxhsum utility with the following bash script:

    dd if=/dev/random of=TEST_VECTOR bs=1 count=222
    for a in 0 1; do
        for l in 0 1 14 222; do
            for s in 0 2654435761; do
                echo algo $a length $l seed $s;
                head -c $l TEST_VECTOR | ~/projects/kernel/xxHash/xxhsum -H$a -s$s
            done
        done
    done

This produces output as follows:

    algo 0 length 0 seed 0
    02cc5d05 stdin
    algo 0 length 0 seed 2654435761
    02cc5d05 stdin
    algo 0 length 1 seed 0
    25201171 stdin
    algo 0 length 1 seed 2654435761
    25201171 stdin
    algo 0 length 14 seed 0
    c1d95975 stdin
    algo 0 length 14 seed 2654435761
    c1d95975 stdin
    algo 0 length 222 seed 0
    b38662a6 stdin
    algo 0 length 222 seed 2654435761
    b38662a6 stdin
    algo 1 length 0 seed 0
    ef46db3751d8e999 stdin
    algo 1 length 0 seed 2654435761
    ac75fda2929b17ef stdin
    algo 1 length 1 seed 0
    27c3f04c2881203a stdin
    algo 1 length 1 seed 2654435761
    4a15ed26415dfe4d stdin
    algo 1 length 14 seed 0
    3d33dc700231dfad stdin
    algo 1 length 14 seed 2654435761
    ea5f7ddef9a64f80 stdin
    algo 1 length 222 seed 0
    5f3d3c08ec2bef34 stdin
    algo 1 length 222 seed 2654435761
    6a9df59664c7ed62 stdin

algo 1 is the xx64 variant, algo 0 is the 32 bit variant which is currently not hooked up.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Stephan Müller

The Jitter RNG implementation is updated to comply with upstream version 2.1.2. The change covers the following aspects:

* Time variation measurement is conducted over the LFSR operation instead of the XOR folding
* Invocation of the stuck test during initialization
* Removal of the stirring functionality and the Von-Neumann unbiaser, as the LFSR using a primitive and irreducible polynomial generates an identical distribution of random bits

This implementation was successfully used in FIPS 140-2 validations as well as in German BSI evaluations.

This kernel implementation was tested as follows:

* The unchanged kernel code file jitterentropy.c is compiled as part of a user space application to generate raw unconditioned noise data. That data is processed with the NIST SP800-90B non-IID test tool to verify that the kernel code exhibits an equal amount of noise as the upstream Jitter RNG version 2.1.2.
* Using AF_ALG via the libkcapi tool kcapi-rng, the Jitter RNG output was tested with dieharder to verify that it does not exhibit statistical weaknesses. The following command was used:

    kcapi-rng -n "jitterentropy_rng" -b 100000000000 | dieharder -a -g 200

* The unchanged kernel code file jitterentropy.c is compiled as part of a user space application to test the LFSR implementation. The LFSR is injected a monotonically increasing counter as input, and the output is fed into dieharder to verify that the LFSR operation does not exhibit statistical weaknesses.
* The patch was tested on the Muen separation kernel, which returns a more coarse time stamp, to verify that the Jitter RNG does not cause regressions with its initialization test, considering that the Jitter RNG depends on a high-resolution timer.

Tested-by: Reto Buerki <reet@codelabs.ch>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

For hash algorithms implemented using the "shash" algorithm type, test both the ahash and shash APIs, not just the ahash API.

Testing the ahash API already tests the shash API indirectly, which is normally good enough. However, there have been corner cases where there have been shash bugs that don't get exposed through the ahash API. So, update testmgr to test the shash API too.

This would have detected the arm64 SHA-1 and SHA-2 bugs for which fixes were just sent out (https://patchwork.kernel.org/patch/10964843/ and https://patchwork.kernel.org/patch/10965089/):

    alg: shash: sha1-ce test failed (wrong result) on test vector 0, cfg="init+finup aligned buffer"
    alg: shash: sha224-ce test failed (wrong result) on test vector 0, cfg="init+finup aligned buffer"
    alg: shash: sha256-ce test failed (wrong result) on test vector 0, cfg="init+finup aligned buffer"

This also would have detected the bugs fixed by commit 307508d1 ("crypto: crct10dif-generic - fix use via crypto_shash_digest()") and commit dec3d0b1 ("crypto: x86/crct10dif-pcl - fix use via crypto_shash_digest()").

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 30 May 2019, 8 commits
-
-
Committed by Eric Biggers

Remove the crypto_tfm_in_queue() function, which is unused.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

Remove the unnecessary constant CRYPTO_ALG_TYPE_DIGEST, which has the same value as CRYPTO_ALG_TYPE_HASH.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

kcrypto_wq is only used by cryptd, so move it into cryptd.c and change the workqueue name from "crypto" to "cryptd".

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

There's no reason for users to select CONFIG_CRYPTO_GF128MUL, since it's just some helper functions, and the algorithms that need it select it. Remove the prompt string so that it's not shown to users.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

echainiv is the only algorithm or template in the crypto API that is enabled by default. But there doesn't seem to be a good reason for it. And it pulls in a lot of stuff as dependencies, like AEAD support and a "NIST SP800-90A DRBG" including HMAC-SHA256.

The commit which made it default 'm', commit 3491244c ("crypto: echainiv - Set Kconfig default to m"), mentioned that it's needed for IPsec. However, later commit 32b6170c ("ipv4+ipv6: Make INET*_ESP select CRYPTO_ECHAINIV") made the IPsec kconfig options select it.

So, remove the 'default m'.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

The "cryptomgr" module is required for templates to be used. Many templates select it, but others don't. Make all templates select it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

The crypto self-tests are part of the "cryptomgr" module, which can technically be disabled (though it rarely is). If you do so, currently you can still enable CRYPTO_MANAGER_EXTRA_TESTS, which doesn't make sense since in that case testmgr.c isn't compiled at all. Fix it by making CRYPTO_MANAGER_EXTRA_TESTS depend on CRYPTO_MANAGER2, as CRYPTO_MANAGER_DISABLE_TESTS already does.

Fixes: 5b2706a4 ("crypto: testmgr - introduce CONFIG_CRYPTO_MANAGER_EXTRA_TESTS")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

On PowerPC with CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y, there is sometimes a crash in generate_random_aead_testvec(). The problem is that the generated test vectors use data lengths of up to about 2 * PAGE_SIZE, which is 128 KiB on PowerPC; however, the data length fields in the test vectors are 'unsigned short', so the lengths get truncated. Fix this by changing the relevant fields to 'unsigned int'.

Fixes: 40153b10 ("crypto: testmgr - fuzz AEADs against their generic implementation")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 23 May 2019, 1 commit
-
-
Committed by Stephan Mueller

FIPS 140-2 section 4.9.2 requires a continuous self test of the noise source. Up to kernel 4.8, drivers/char/random.c provided this continuous self test. Afterwards it was moved to a location that is inconsistent with the FIPS 140-2 requirements. The relevant patch was e192be9d. Thus, the FIPS 140-2 CTRNG is added to the DRBG when it obtains the seed.

This patch resurrects the function drbg_fips_continuous_test that existed some time ago and applies it to the noise sources. The patch that removed the drbg_fips_continuous_test was b3614763.

The Jitter RNG implements its own FIPS 140-2 self test and thus does not need to be subjected to the test in the DRBG.

The patch contains a tiny fix to ensure proper zeroization in case of an error during the Jitter RNG data gathering.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Reviewed-by: Yann Droneaud <ydroneaud@opteya.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
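The idea of the continuous test, as a hedged sketch rather than the actual drbg.c code (struct and function names here are illustrative): each freshly gathered noise block is compared with the previous one and rejected if they are identical.

    struct noise_ctrng {
            bool have_prev;
            u8 prev[16];    /* one noise block; size is illustrative */
    };

    /* Returns 0 if the fresh block may be used for seeding, -EAGAIN if it
     * is identical to the previous block (stuck noise source). */
    static int fips_ctrng_check(struct noise_ctrng *c, const u8 block[16])
    {
            if (fips_enabled && c->have_prev && !memcmp(c->prev, block, 16))
                    return -EAGAIN;
            memcpy(c->prev, block, 16);
            c->have_prev = true;
            return 0;
    }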
-
- 28 April 2019, 1 commit
-
-
Committed by Johannes Berg

We currently have two levels of strict validation:

 1) liberal (default)
    - undefined (type >= max) & NLA_UNSPEC attributes accepted
    - attribute length >= expected accepted
    - garbage at end of message accepted
 2) strict (opt-in)
    - NLA_UNSPEC attributes accepted
    - attribute length >= expected accepted

Split out parsing strictness into four different options:

 * TRAILING     - check that there's no trailing data after parsing attributes (in message or nested)
 * MAXTYPE      - reject attrs > max known type
 * UNSPEC       - reject attributes with NLA_UNSPEC policy entries
 * STRICT_ATTRS - strictly validate attribute size

The default for future things should be *everything*. The current *_strict() is a combination of TRAILING and MAXTYPE, and is renamed to _deprecated_strict(). The current regular parsing has none of this, and is renamed to *_parse_deprecated().

Additionally it allows us to selectively set one of the new flags even on old policies. Notably, the UNSPEC flag could be useful in this case, since it can be arranged (by filling in the policy) to not be an incompatible userspace ABI change, but would then going forward prevent forgetting attribute entries. Similar can apply to the POLICY flag.

We end up with the following renames:

 * nla_parse           -> nla_parse_deprecated
 * nla_parse_strict    -> nla_parse_deprecated_strict
 * nlmsg_parse         -> nlmsg_parse_deprecated
 * nlmsg_parse_strict  -> nlmsg_parse_deprecated_strict
 * nla_parse_nested    -> nla_parse_nested_deprecated
 * nla_validate_nested -> nla_validate_nested_deprecated

Using spatch, of course:

    @@
    expression TB, MAX, HEAD, LEN, POL, EXT;
    @@
    -nla_parse(TB, MAX, HEAD, LEN, POL, EXT)
    +nla_parse_deprecated(TB, MAX, HEAD, LEN, POL, EXT)

    @@
    expression NLH, HDRLEN, TB, MAX, POL, EXT;
    @@
    -nlmsg_parse(NLH, HDRLEN, TB, MAX, POL, EXT)
    +nlmsg_parse_deprecated(NLH, HDRLEN, TB, MAX, POL, EXT)

    @@
    expression NLH, HDRLEN, TB, MAX, POL, EXT;
    @@
    -nlmsg_parse_strict(NLH, HDRLEN, TB, MAX, POL, EXT)
    +nlmsg_parse_deprecated_strict(NLH, HDRLEN, TB, MAX, POL, EXT)

    @@
    expression TB, MAX, NLA, POL, EXT;
    @@
    -nla_parse_nested(TB, MAX, NLA, POL, EXT)
    +nla_parse_nested_deprecated(TB, MAX, NLA, POL, EXT)

    @@
    expression START, MAX, POL, EXT;
    @@
    -nla_validate_nested(START, MAX, POL, EXT)
    +nla_validate_nested_deprecated(START, MAX, POL, EXT)

    @@
    expression NLH, HDRLEN, MAX, POL, EXT;
    @@
    -nlmsg_validate(NLH, HDRLEN, MAX, POL, EXT)
    +nlmsg_validate_deprecated(NLH, HDRLEN, MAX, POL, EXT)

For this patch, don't actually add the strict, non-renamed versions yet so that it breaks compile if I get it wrong.

Also, while at it, make nla_validate and nla_parse go down to a common __nla_validate_parse() function to avoid code duplication.

Ultimately, this allows us to have very strict validation for every new caller of nla_parse()/nlmsg_parse() etc as re-introduced in the next patch, while existing things will continue to work as is. In effect then, this adds fully strict validation for any new command.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 25 April 2019, 4 commits
-
-
Committed by Vitaly Chikunov

Fix undefined symbol issues in the ecrdsa_generic module when ASN1 or OID_REGISTRY aren't enabled in the config, by selecting these options for CRYPTO_ECRDSA:

    ERROR: "asn1_ber_decoder" [crypto/ecrdsa_generic.ko] undefined!
    ERROR: "look_up_OID" [crypto/ecrdsa_generic.ko] undefined!

Reported-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Acked-by: Randy Dunlap <rdunlap@infradead.org> # build-tested
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Gilad Ben-Yossef

Mark the sm4 (and the missing aes) algorithms that use protected keys, which are identical to the same algorithms without HW-protected keys, as tested.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

The flags field in 'struct shash_desc' never actually does anything. The only ostensibly supported flag is CRYPTO_TFM_REQ_MAY_SLEEP. However, no shash algorithm ever sleeps, making this flag a no-op.

With this being the case, inevitably some users who can't sleep wrongly pass MAY_SLEEP. These would all need to be fixed if any shash algorithm actually started sleeping. For example, the shash_ahash_*() functions, which wrap a shash algorithm with the ahash API, pass through MAY_SLEEP from the ahash API to the shash API. However, the shash functions are called under kmap_atomic(), so actually they're assumed to never sleep.

Even if it turns out that some users do need preemption points while hashing large buffers, we could easily provide a helper function crypto_shash_update_large() which divides the data into smaller chunks and calls crypto_shash_update() and cond_resched() for each chunk. It's not necessary to have a flag in 'struct shash_desc', nor is it necessary to make individual shash algorithms aware of this at all.

Therefore, remove shash_desc::flags, and document that the crypto_shash_*() functions can be called from any context.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
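The crypto_shash_update_large() helper mentioned above does not exist in the kernel; a sketch of what it could look like if explicit preemption points ever became necessary:

    static int crypto_shash_update_large(struct shash_desc *desc,
                                         const u8 *data, unsigned int len)
    {
            while (len) {
                    unsigned int chunk = min(len, 4096U);
                    int err = crypto_shash_update(desc, data, chunk);

                    if (err)
                            return err;
                    data += chunk;
                    len -= chunk;
                    cond_resched(); /* explicit preemption point per chunk */
            }
            return 0;
    }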
-
Committed by Eric Biggers

The crypto_yield() in shash_ahash_digest() occurs after the entire digest operation has already happened, so there's no real point. Remove it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 19 April 2019, 2 commits
-
-
Committed by Eric Biggers

CCM instances can be created by either the "ccm" template, which only allows choosing the block cipher, e.g. "ccm(aes)"; or by "ccm_base", which allows choosing the ctr and cbcmac implementations, e.g. "ccm_base(ctr(aes-generic),cbcmac(aes-generic))".

However, a "ccm_base" instance prevents a "ccm" instance from being registered using the same implementations. Nor will the instance be found by lookups of "ccm". This can be used as a denial of service. Moreover, "ccm_base" instances are never tested by the crypto self-tests, even if there are compatible "ccm" tests.

The root cause of these problems is that instances of the two templates use different cra_names. Therefore, fix these problems by making "ccm_base" instances set the same cra_name as "ccm" instances, e.g. "ccm(aes)" instead of "ccm_base(ctr(aes-generic),cbcmac(aes-generic))".

This requires extracting the block cipher name from the name of the ctr and cbcmac algorithms. It also requires starting to verify that the algorithms are really ctr and cbcmac using the same block cipher, not something else entirely. But it would be bizarre if anyone were actually using non-ccm-compatible algorithms with ccm_base, so this shouldn't break anyone in practice.

Fixes: 4a49b499 ("[CRYPTO] ccm: Added CCM mode")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

GCM instances can be created by either the "gcm" template, which only allows choosing the block cipher, e.g. "gcm(aes)"; or by "gcm_base", which allows choosing the ctr and ghash implementations, e.g. "gcm_base(ctr(aes-generic),ghash-generic)".

However, a "gcm_base" instance prevents a "gcm" instance from being registered using the same implementations. Nor will the instance be found by lookups of "gcm". This can be used as a denial of service. Moreover, "gcm_base" instances are never tested by the crypto self-tests, even if there are compatible "gcm" tests.

The root cause of these problems is that instances of the two templates use different cra_names. Therefore, fix these problems by making "gcm_base" instances set the same cra_name as "gcm" instances, e.g. "gcm(aes)" instead of "gcm_base(ctr(aes-generic),ghash-generic)".

This requires extracting the block cipher name from the name of the ctr algorithm. It also requires starting to verify that the algorithms are really ctr and ghash, not something else entirely. But it would be bizarre if anyone were actually using non-gcm-compatible algorithms with gcm_base, so this shouldn't break anyone in practice.

Fixes: d00aa19b ("[CRYPTO] gcm: Allow block cipher parameter")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
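A simplified sketch of the renaming this describes: derive the instance's cra_name from the ctr algorithm's cra_name (e.g. "ctr(aes)" yields "gcm(aes)"). The real template code performs additional checks on the ctr and ghash algorithms; this helper name is illustrative.

    static int gcm_base_cra_name(char *out, size_t outlen, const char *ctr_name)
    {
            int n;

            if (strncmp(ctr_name, "ctr(", 4) != 0)
                    return -EINVAL;         /* not a ctr mode at all */

            /* Reuse "<cipher>)" from the ctr name, including the ')'. */
            n = snprintf(out, outlen, "gcm(%s", ctr_name + 4);
            if (n < 0 || (size_t)n >= outlen)
                    return -ENAMETOOLONG;
            return 0;
    }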
-
- 18 April 2019, 15 commits
-
-
Committed by Eric Biggers

shash_ahash_digest(), which is the ->digest() method for ahash tfms that use an shash algorithm, has an optimization where crypto_shash_digest() is called if the data is in a single page. But an off-by-one error prevented this path from being taken unless the user happened to provide extra data in the scatterlist. Fix it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
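The fast path in question only applies when the whole input sits inside one page of the first scatterlist element. A hedged sketch of the corrected boundary condition, not the literal ahash.c expression:

    static bool digest_fits_in_one_page(const struct scatterlist *sg,
                                        unsigned int nbytes)
    {
            /* <= rather than < : data ending exactly at the page or element
             * boundary is still eligible for the fast path. */
            return nbytes != 0 &&
                   nbytes <= sg->length &&
                   sg->offset + nbytes <= PAGE_SIZE;
    }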
-
Committed by Eric Biggers

Remove cryptd_alloc_ablkcipher() and the ability of cryptd to create algorithms with the deprecated "ablkcipher" type. This has been unused since commit 0e145b47 ("crypto: ablk_helper - remove ablk_helper"). Instead, cryptd_alloc_skcipher() is used.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Sebastian Andrzej Siewior

In commit 71052dcf ("crypto: scompress - Use per-CPU struct instead multiple variables") I accidentally initialized the memory multiple times on a random CPU. I should have initialized the memory on every CPU, as had been done earlier. I didn't notice this because the scheduler didn't move the task to another CPU. Guenter managed to do that, and the code crashed as expected.

Allocate / free the per-CPU memory on each CPU.

Fixes: 71052dcf ("crypto: scompress - Use per-CPU struct instead multiple variables")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
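The per-CPU initialization pattern this fixes, in an illustrative form (the variable and function names are made up, not the scompress code):

    static DEFINE_PER_CPU(void *, scratch_buf);

    static int scratch_alloc_all(size_t size)
    {
            int cpu;

            /* Set up scratch memory for every possible CPU, not just the
             * CPU the init code happens to run on. */
            for_each_possible_cpu(cpu) {
                    void *buf = vmalloc_node(size, cpu_to_node(cpu));

                    if (!buf)
                            return -ENOMEM; /* caller unwinds earlier CPUs */
                    per_cpu(scratch_buf, cpu) = buf;
            }
            return 0;
    }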
-
Committed by Eric Biggers

Use subsys_initcall for registration of all templates and generic algorithm implementations, rather than module_init. Then change cryptomgr to use arch_initcall, to place it before the subsys_initcalls.

This is needed so that when both a generic and an optimized implementation of an algorithm are built into the kernel (not loadable modules), the generic implementation is registered before the optimized one. Otherwise, the self-tests for the optimized implementation are unable to allocate the generic implementation for the new comparison fuzz tests.

Note that on arm, a side effect of this change is that self-tests for generic implementations may run before the unaligned access handler has been installed. So, unaligned accesses will crash the kernel. This is arguably a good thing as it makes it easier to detect that type of bug.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
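In practice the change looks roughly like this for a generic implementation; the module and algorithm names here are illustrative:

    static int __init example_generic_mod_init(void)
    {
            return crypto_register_shash(&example_alg);
    }

    /* Was module_init(); subsys_initcall runs after cryptomgr's
     * arch_initcall but before optimized implementations registered
     * from later (device/module-level) initcalls. */
    subsys_initcall(example_generic_mod_init);

    static void __exit example_generic_mod_exit(void)
    {
            crypto_unregister_shash(&example_alg);
    }
    module_exit(example_generic_mod_exit);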
-
Committed by Eric Biggers

When the extra crypto self-tests are enabled, test each AEAD algorithm against its generic implementation when one is available. This involves: checking the algorithm properties for consistency, then randomly generating test vectors using the generic implementation and running them against the implementation under test. Both good and bad inputs are tested.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

When the extra crypto self-tests are enabled, test each skcipher algorithm against its generic implementation when one is available. This involves: checking the algorithm properties for consistency, then randomly generating test vectors using the generic implementation and running them against the implementation under test. Both good and bad inputs are tested.

This has already detected a bug in the skcipher_walk API, a bug in the LRW template, and an inconsistency in the cts implementations.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

When the extra crypto self-tests are enabled, test each hash algorithm against its generic implementation when one is available. This involves: checking the algorithm properties for consistency, then randomly generating test vectors using the generic implementation and running them against the implementation under test. Both good and bad inputs are tested.

This has already detected a bug in the x86 implementation of poly1305, bugs in crct10dif, and an inconsistency in cbcmac.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

Add some helper functions in preparation for fuzz testing algorithms against their generic implementation.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

In preparation for fuzz testing algorithms against their generic implementation, make error messages in testmgr identify test vectors by name rather than by index. Built-in test vectors are simply "named" by their index in testmgr.h, as before. But (in later patches) generated test vectors will be given more descriptive names to help developers debug problems detected with them.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

Update testmgr to support testing for specific errors from setkey() and digest() for hashes; setkey() and encrypt()/decrypt() for skciphers and ciphers; and setkey(), setauthsize(), and encrypt()/decrypt() for AEADs. This is useful because algorithms usually restrict the lengths or format of the message, key, and/or authentication tag in some way. And bad inputs should be tested too, not just good inputs.

As part of this change, remove the ambiguously-named 'fail' flag and replace it with 'setkey_error = -EINVAL' for the only test vector that used it, the DES weak key test vector. Note that this tightens the test to require -EINVAL rather than any error code, but AFAICS this won't cause any test failure.

Other than that, these new fields aren't set on any test vectors yet. Later patches will do so.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Vitaly Chikunov

Add testmgr test vectors for the EC-RDSA algorithm for each of the five supported parameter sets (curves). Because there are no officially published test vectors for the curves, the vectors are generated by gost-engine.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Vitaly Chikunov

Add the Elliptic Curve Russian Digital Signature Algorithm (GOST R 34.10-2012, RFC 7091, ISO/IEC 14888-3), one of the Russian (and since 2018 the CIS countries') cryptographic standard algorithms (called GOST algorithms). Only signature verification is supported, with the intent that it be used in the IMA.

Summary of the changes:

* crypto/Kconfig:
  - EC-RDSA is added into the Public-key cryptography section.

* crypto/Makefile:
  - ecrdsa objects are added.

* crypto/asymmetric_keys/x509_cert_parser.c:
  - Recognize EC-RDSA and Streebog OIDs.

* include/linux/oid_registry.h:
  - EC-RDSA OIDs are added to the enum. Also, two currently not implemented curve OIDs are added for possible extension later (to not change numbering and grouping).

* crypto/ecc.c:
  - Kenneth MacKay copyright date is updated to 2014, because vli_mmod_slow, ecc_point_add, ecc_point_mult_shamir are based on his code from micro-ecc.
  - Functions needed for ecrdsa are EXPORT_SYMBOL'ed.
  - New functions:
    vli_is_negative - helper to determine sign of vli;
    vli_from_be64 - unpack big-endian array into vli (used for a signature);
    vli_from_le64 - unpack little-endian array into vli (used for a public key);
    vli_uadd, vli_usub - add/sub u64 value to/from vli (used for increment/decrement);
    mul_64_64 - optimized to use __int128 where appropriate, this speeds up point multiplication (and as a consequence signature verification) by a factor of 1.5-2;
    vli_umult - multiply vli by a small value (speeds up point multiplication by another factor of 1.5-2, depending on vli sizes);
    vli_mmod_special - module reduction for some form of Pseudo-Mersenne primes (used for the curves A);
    vli_mmod_special2 - module reduction for another form of Pseudo-Mersenne primes (used for the curves B);
    vli_mmod_barrett - module reduction using a pre-computed value (used for the curve C);
    vli_mmod_slow - more general module reduction which is much slower (used when the modulus is the subgroup order);
    vli_mod_mult_slow - modular multiplication;
    ecc_point_add - add two points;
    ecc_point_mult_shamir - add two points multiplied by scalars in one combined multiplication (this gives a speed-up by another factor of 2 compared to two separate multiplications);
    ecc_is_pubkey_valid_partial - an additional sanity check is added.
  - Updated vli_mmod_fast with a non-strict heuristic to call the optimal module reduction function depending on the prime value.
  - All computations for the previously defined (two NIST) curves should be unaffected.

* crypto/ecc.h:
  - Newly exported functions are documented.

* crypto/ecrdsa_defs.h:
  - Five curves are defined.

* crypto/ecrdsa.c:
  - Signature verification is implemented.

* crypto/ecrdsa_params.asn1, crypto/ecrdsa_pub_key.asn1:
  - Templates for the BER decoder for EC-RDSA parameters and public key.

Cc: linux-integrity@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Vitaly Chikunov

ecc.c has algorithms that could be used together by ecdh and ecrdsa. Make it a separate module. Add CRYPTO_ECC to Kconfig. EXPORT_SYMBOL and document what seems appropriate. Move the structs ecc_point and ecc_curve from ecc_curve_defs.h into ecc.h. No code changes.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Vitaly Chikunov

Group RSA, DH, and ECDH into the Public-key cryptography config section.

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Vitaly Chikunov

Some public key algorithms (like EC-DSA) keep important data, such as digest and curve OIDs, in the parameters field (possibly more for different EC-DSA variants). Thus, just setting a public key (as for RSA) is not enough. Append the parameters to the key stream for akcipher_set_{pub,priv}_key. The appended data is: (u32) algo OID, (u32) parameters length, parameters data.

This does not affect the current akcipher API nor RSA ciphers (they can ignore it). The idea of appending parameters to the key stream is by Herbert Xu.

Cc: David Howells <dhowells@redhat.com>
Cc: Denis Kenzior <denkenz@gmail.com>
Cc: keyrings@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
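A hedged sketch of the key-stream layout described above, packing side only; the struct and helper names are illustrative, not the kernel's parser:

    struct akcipher_key_params_hdr {
            u32 algo_oid;       /* (u32) algo OID */
            u32 params_len;     /* (u32) parameters length */
    } __packed;                 /* followed by the parameters data */

    static size_t pack_key_with_params(u8 *out, const u8 *key, u32 keylen,
                                       u32 algo_oid,
                                       const u8 *params, u32 params_len)
    {
            struct akcipher_key_params_hdr hdr = {
                    .algo_oid   = algo_oid,
                    .params_len = params_len,
            };

            /* Key bytes first, then the header, then the parameters data. */
            memcpy(out, key, keylen);
            memcpy(out + keylen, &hdr, sizeof(hdr));
            memcpy(out + keylen + sizeof(hdr), params, params_len);
            return keylen + sizeof(hdr) + params_len;
    }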
-