1. 27 July 2019, 1 commit
    • crypto: aegis - fix badly optimized clang output · 97ac82d9
      Arnd Bergmann authored
      Clang sometimes makes very different inlining decisions from gcc.
      In the case of the aegis crypto algorithms, it decides to turn the innermost
      primitives (and, xor, ...) into separate functions but inline most of
      the rest.
      
      This results in a huge amount of variables spilled on the stack, leading
      to rather slow execution as well as kernel stack usage beyond the 32-bit
      warning limit when CONFIG_KASAN is enabled:
      
      crypto/aegis256.c:123:13: warning: stack frame size of 648 bytes in function 'crypto_aegis256_encrypt_chunk' [-Wframe-larger-than=]
      crypto/aegis256.c:366:13: warning: stack frame size of 1264 bytes in function 'crypto_aegis256_crypt' [-Wframe-larger-than=]
      crypto/aegis256.c:187:13: warning: stack frame size of 656 bytes in function 'crypto_aegis256_decrypt_chunk' [-Wframe-larger-than=]
      crypto/aegis128l.c:135:13: warning: stack frame size of 832 bytes in function 'crypto_aegis128l_encrypt_chunk' [-Wframe-larger-than=]
      crypto/aegis128l.c:415:13: warning: stack frame size of 1480 bytes in function 'crypto_aegis128l_crypt' [-Wframe-larger-than=]
      crypto/aegis128l.c:218:13: warning: stack frame size of 848 bytes in function 'crypto_aegis128l_decrypt_chunk' [-Wframe-larger-than=]
      crypto/aegis128.c:116:13: warning: stack frame size of 584 bytes in function 'crypto_aegis128_encrypt_chunk' [-Wframe-larger-than=]
      crypto/aegis128.c:351:13: warning: stack frame size of 1064 bytes in function 'crypto_aegis128_crypt' [-Wframe-larger-than=]
      crypto/aegis128.c:177:13: warning: stack frame size of 592 bytes in function 'crypto_aegis128_decrypt_chunk' [-Wframe-larger-than=]
      
      Forcing all of the primitives to be inlined avoids the issue, and the
      resulting code is similar to what gcc produces.
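
      A sketch of the approach (simplified; the actual patch touches all of
      the small helpers in crypto/aegis.h, and the union layout shown here
      is an assumption based on that file):

          /*
           * __always_inline keeps clang from outlining the primitive and
           * spilling its 16-byte block arguments to the stack.
           */
          static __always_inline void
          crypto_aegis_block_xor(union aegis_block *dst,
                                 const union aegis_block *src)
          {
                  dst->words64[0] ^= src->words64[0];
                  dst->words64[1] ^= src->words64[1];
          }
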
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2. 26 July 2019, 17 commits
3. 03 July 2019, 2 commits
4. 27 June 2019, 5 commits
    • crypto: asymmetric_keys - select CRYPTO_HASH where needed · 90acc065
      Arnd Bergmann authored
      Build testing with some core crypto options disabled revealed
      a few modules that are missing CRYPTO_HASH:
      
      crypto/asymmetric_keys/x509_public_key.o: In function `x509_get_sig_params':
      x509_public_key.c:(.text+0x4c7): undefined reference to `crypto_alloc_shash'
      x509_public_key.c:(.text+0x5e5): undefined reference to `crypto_shash_digest'
      crypto/asymmetric_keys/pkcs7_verify.o: In function `pkcs7_digest.isra.0':
      pkcs7_verify.c:(.text+0xab): undefined reference to `crypto_alloc_shash'
      pkcs7_verify.c:(.text+0x1b2): undefined reference to `crypto_shash_digest'
      pkcs7_verify.c:(.text+0x3c1): undefined reference to `crypto_shash_update'
      pkcs7_verify.c:(.text+0x411): undefined reference to `crypto_shash_finup'
      
      This normally doesn't show up in randconfig tests because a large
      number of other options select CRYPTO_HASH.
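
      A hedged sketch of the shape of the fix (the option name is inferred
      from the failing object files above; the actual hunks may differ):

          # crypto/asymmetric_keys/Kconfig
          config PKCS7_MESSAGE_PARSER
                  ...
                  select CRYPTO_HASH   # pkcs7_verify.c calls crypto_shash_*
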
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: serpent - mark __serpent_setkey_sbox noinline · 47397118
      Arnd Bergmann authored
      The same bug that gcc hit in the past is apparently now showing
      up with clang, which decides to inline __serpent_setkey_sbox:
      
      crypto/serpent_generic.c:268:5: error: stack frame size of 2112 bytes in function '__serpent_setkey' [-Werror,-Wframe-larger-than=]
      
      Marking it 'noinline' reduces the stack usage from 2112 bytes to
      192 bytes for __serpent_setkey and 96 bytes for __serpent_setkey_sbox,
      and seems to generate more useful object code.
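
      The substance of the change is a single attribute (sketch; the S-box
      key-mixing body is unchanged and elided here):

          static noinline void __serpent_setkey_sbox(u32 r0, u32 r1, u32 r2,
                                                     u32 r3, u32 r4, u32 *k)
          {
                  /* ... S-box key mixing exactly as before ... */
          }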
      
      Fixes: c871c10e ("crypto: serpent - improve __serpent_setkey with UBSAN")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Reviewed-by: Eric Biggers <ebiggers@kernel.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - dynamically allocate crypto_shash · 149c4e6e
      Arnd Bergmann authored
      The largest stack object in this file is now the shash descriptor.
      Since there are many other stack variables, this can push it
      over the 1024 byte warning limit, in particular with clang and
      KASAN:
      
      crypto/testmgr.c:1693:12: error: stack frame size of 1312 bytes in function '__alg_test_hash' [-Werror,-Wframe-larger-than=]
      
      Make test_hash_vs_generic_impl() do the same thing as the
      corresponding aead and skcipher functions by allocating the
      descriptor dynamically. We can still do better than this,
      but it brings us well below the 1024-byte limit.
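
      A minimal sketch of the pattern (variable names here are assumptions,
      not copied from the patch):

          struct shash_desc *desc;
          int err;

          /* heap allocation instead of SHASH_DESC_ON_STACK(desc, tfm) */
          desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(tfm),
                         GFP_KERNEL);
          if (!desc)
                  return -ENOMEM;
          desc->tfm = tfm;

          err = crypto_shash_digest(desc, data, len, out);
          kfree(desc);
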
      Suggested-by: Eric Biggers <ebiggers@kernel.org>
      Fixes: 9a8a6b3f ("crypto: testmgr - fuzz hashes against their generic implementation")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Reviewed-by: Eric Biggers <ebiggers@kernel.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - dynamically allocate testvec_config · 6b5ca646
      Arnd Bergmann authored
      On arm32, we get warnings about high stack usage in some of the functions:
      
      crypto/testmgr.c:2269:12: error: stack frame size of 1032 bytes in function 'alg_test_aead' [-Werror,-Wframe-larger-than=]
      static int alg_test_aead(const struct alg_test_desc *desc, const char *driver,
                 ^
      crypto/testmgr.c:1693:12: error: stack frame size of 1312 bytes in function '__alg_test_hash' [-Werror,-Wframe-larger-than=]
      static int __alg_test_hash(const struct hash_testvec *vecs,
                 ^
      
      One of the larger objects on the stack here is struct testvec_config,
      so change that to dynamic allocation.
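
      A sketch of the pattern (simplified; the real functions thread the
      pointer through their test loops):

          struct testvec_config *cfg;

          cfg = kzalloc(sizeof(*cfg), GFP_KERNEL);
          if (!cfg)
                  return -ENOMEM;
          /* ... fill in and use cfg where the stack object was ... */
          kfree(cfg);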
      
      Fixes: 40153b10 ("crypto: testmgr - fuzz AEADs against their generic implementation")
      Fixes: d435e10e ("crypto: testmgr - fuzz skciphers against their generic implementation")
      Fixes: 9a8a6b3f ("crypto: testmgr - fuzz hashes against their generic implementation")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Reviewed-by: Eric Biggers <ebiggers@kernel.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • keys: Add a 'recurse' flag for keyring searches · dcf49dbc
      David Howells authored
      Add a 'recurse' flag for keyring searches so that the flag can be omitted
      and recursion disabled, thereby allowing just the nominated keyring to be
      searched and none of the children.
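
      A hedged sketch of the resulting call sites (the boolean parameter is
      as described above; the exact names are assumptions):

          key_ref_t ref;

          /* search only the nominated keyring, no recursion: */
          ref = keyring_search(make_key_ref(keyring, true),
                               &key_type_user, "user:desc", false);

          /* recurse into child keyrings as before: */
          ref = keyring_search(make_key_ref(keyring, true),
                               &key_type_user, "user:desc", true);
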
      Signed-off-by: David Howells <dhowells@redhat.com>
5. 20 June 2019, 2 commits
6. 19 June 2019, 2 commits
7. 18 June 2019, 1 commit
8. 13 June 2019, 7 commits
9. 06 June 2019, 3 commits
    • crypto: chacha20poly1305 - fix atomic sleep when using async algorithm · 7545b6c2
      Eric Biggers authored
      Clear the CRYPTO_TFM_REQ_MAY_SLEEP flag when the chacha20poly1305
      operation is being continued from an async completion callback, since
      sleeping may not be allowed in that context.
      
      This is basically the same bug that was recently fixed in the xts and
      lrw templates.  But, it's always been broken in chacha20poly1305 too.
      This was found using syzkaller in combination with the updated crypto
      self-tests which actually test the MAY_SLEEP flag now.
      
      Reproducer:
      
          python -c 'import socket; socket.socket(socket.AF_ALG, 5, 0).bind(
          	       ("aead", "rfc7539(cryptd(chacha20-generic),poly1305-generic)"))'
      
      Kernel output:
      
          BUG: sleeping function called from invalid context at include/crypto/algapi.h:426
          in_atomic(): 1, irqs_disabled(): 0, pid: 1001, name: kworker/2:2
          [...]
          CPU: 2 PID: 1001 Comm: kworker/2:2 Not tainted 5.2.0-rc2 #5
          Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-20181126_142135-anatol 04/01/2014
          Workqueue: crypto cryptd_queue_worker
          Call Trace:
           __dump_stack lib/dump_stack.c:77 [inline]
           dump_stack+0x4d/0x6a lib/dump_stack.c:113
           ___might_sleep kernel/sched/core.c:6138 [inline]
           ___might_sleep.cold.19+0x8e/0x9f kernel/sched/core.c:6095
           crypto_yield include/crypto/algapi.h:426 [inline]
           crypto_hash_walk_done+0xd6/0x100 crypto/ahash.c:113
           shash_ahash_update+0x41/0x60 crypto/shash.c:251
           shash_async_update+0xd/0x10 crypto/shash.c:260
           crypto_ahash_update include/crypto/hash.h:539 [inline]
           poly_setkey+0xf6/0x130 crypto/chacha20poly1305.c:337
           poly_init+0x51/0x60 crypto/chacha20poly1305.c:364
           async_done_continue crypto/chacha20poly1305.c:78 [inline]
           poly_genkey_done+0x15/0x30 crypto/chacha20poly1305.c:369
           cryptd_skcipher_complete+0x29/0x70 crypto/cryptd.c:279
           cryptd_skcipher_decrypt+0xcd/0x110 crypto/cryptd.c:339
           cryptd_queue_worker+0x70/0xa0 crypto/cryptd.c:184
           process_one_work+0x1ed/0x420 kernel/workqueue.c:2269
           worker_thread+0x3e/0x3a0 kernel/workqueue.c:2415
           kthread+0x11f/0x140 kernel/kthread.c:255
           ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:352
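
      A sketch of the fix as described above (shaped after the driver's
      async continuation helper; treat the details as assumptions):

          static inline int async_done_continue(struct aead_request *req, int err,
                                                int (*cont)(struct aead_request *))
          {
                  if (!err) {
                          struct chachapoly_req_ctx *rctx = aead_request_ctx(req);

                          /* continuing from an async callback: no sleeping */
                          rctx->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
                          err = cont(req);
                  }

                  if (err != -EINPROGRESS && err != -EBUSY)
                          aead_request_complete(req, err);
                  return err;
          }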
      
      Fixes: 71ebc4d1 ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539")
      Cc: <stable@vger.kernel.org> # v4.2+
      Cc: Martin Willi <martin@strongswan.org>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: lrw - use correct alignmask · 20a0f976
      Eric Biggers authored
      Commit c778f96b ("crypto: lrw - Optimize tweak computation")
      incorrectly reduced the alignmask of LRW instances from
      '__alignof__(u64) - 1' to '__alignof__(__be32) - 1'.
      
      However, xor_tweak() and setkey() assume that the data and key,
      respectively, are aligned to 'be128', which has u64 alignment.
      
      Fix the alignmask to be at least '__alignof__(be128) - 1'.
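
      A sketch of the one-line fix (field names follow the skcipher
      template API; the hunk sits in the instance-creation path):

          /* data and keys are accessed as be128, which needs u64 alignment */
          inst->alg.base.cra_alignmask = alg->base.cra_alignmask |
                                         (__alignof__(be128) - 1);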
      
      Fixes: c778f96b ("crypto: lrw - Optimize tweak computation")
      Cc: <stable@vger.kernel.org> # v4.20+
      Cc: Ondrej Mosnacek <omosnace@redhat.com>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ghash - fix unaligned memory access in ghash_setkey() · 5c6bc4df
      Eric Biggers authored
      Changing ghash_mod_init() to be subsys_initcall made it start running
      before the alignment fault handler is installed on ARM.  In kernel
      builds where the keys in the ghash test vectors happened to be
      misaligned in the kernel image, this exposed the longstanding bug that
      ghash_setkey() is incorrectly casting the key buffer (which can have any
      alignment) to be128 for passing to gf128mul_init_4k_lle().
      
      Fix this by memcpy()ing the key to a temporary buffer.
      
      Don't fix it by setting an alignmask on the algorithm instead because
      that would unnecessarily force alignment of the data too.
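
      A sketch of the fixed setkey (close to the shape described above, but
      treat the helper names as assumptions):

          static int ghash_setkey(struct crypto_shash *tfm,
                                  const u8 *key, unsigned int keylen)
          {
                  struct ghash_ctx *ctx = crypto_shash_ctx(tfm);
                  be128 k;

                  if (keylen != GHASH_BLOCK_SIZE)
                          return -EINVAL;

                  if (ctx->gf128)
                          gf128mul_free_4k(ctx->gf128);

                  /* the key buffer may be unaligned; the local be128 is not */
                  memcpy(&k, key, GHASH_BLOCK_SIZE);
                  ctx->gf128 = gf128mul_init_4k_lle(&k);
                  memzero_explicit(&k, GHASH_BLOCK_SIZE);

                  return ctx->gf128 ? 0 : -ENOMEM;
          }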
      
      Fixes: 2cdc6899 ("crypto: ghash - Add GHASH digest algorithm for GCM")
      Reported-by: Peter Robinson <pbrobinson@gmail.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Tested-by: Peter Robinson <pbrobinson@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>