1. 24 Mar 2017 (1 commit)
  2. 10 Mar 2017 (1 commit)
    • net: Work around lockdep limitation in sockets that use sockets · cdfbabfb
      David Howells authored
      Lockdep issues a circular dependency warning when AFS issues an operation
      through AF_RXRPC from a context in which the VFS/VM holds the mmap_sem.
      
      The theory lockdep comes up with is as follows:
      
       (1) If the pagefault handler decides it needs to read pages from AFS, it
           calls AFS with mmap_sem held and AFS begins an AF_RXRPC call, but
           creating a call requires the socket lock:
      
      	mmap_sem must be taken before sk_lock-AF_RXRPC
      
       (2) afs_open_socket() opens an AF_RXRPC socket and binds it.  rxrpc_bind()
           binds the underlying UDP socket whilst holding its socket lock.
           inet_bind() takes its own socket lock:
      
      	sk_lock-AF_RXRPC must be taken before sk_lock-AF_INET
      
       (3) Reading from a TCP socket into a userspace buffer might cause a fault
           and thus cause the kernel to take the mmap_sem, but the TCP socket is
           locked whilst doing this:
      
      	sk_lock-AF_INET must be taken before mmap_sem
      
      However, lockdep's theory is wrong in this instance because it deals only
      with lock classes and not individual locks.  The AF_INET lock in (2) isn't
      really equivalent to the AF_INET lock in (3) as the former deals with a
      socket entirely internal to the kernel that never sees userspace.  This is
      a limitation in the design of lockdep.
      
      Fix the general case by:
      
       (1) Double up all the locking keys used in sockets so that one set are
           used if the socket is created by userspace and the other set is used
           if the socket is created by the kernel.
      
       (2) Store the kern parameter passed to sk_alloc() in a variable in the
           sock struct (sk_kern_sock).  This informs sock_lock_init(),
           sock_init_data() and sk_clone_lock() as to the lock keys to be used.
      
           Note that the child created by sk_clone_lock() inherits the parent's
           kern setting.
      
       (3) Add a 'kern' parameter to ->accept() that is analogous to the one
           passed in to ->create() that distinguishes whether kernel_accept() or
           sys_accept4() was the caller and can be passed to sk_alloc().
      
           Note that a lot of accept functions merely dequeue an already
           allocated socket.  I haven't touched these as the new socket already
           exists before we get the parameter.
      
           Note also that there are a couple of places where I've made the accepted
           socket unconditionally kernel-based:
      
      	irda_accept()
	rds_tcp_accept_one()
      	tcp_accept_from_sock()
      
           because they follow a sock_create_kern() and accept off of that.
      
      Whilst creating this, I noticed that lustre and ocfs don't create sockets
      through sock_create_kern() and thus they aren't marked as for-kernel,
      though they appear to be internal.  I wonder if these should do that so
      that they use the new set of lock keys.
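
      As a hedged sketch of the shape this takes (the helper name is hypothetical;
      the real patch doubles the key tables kept in net/core/sock.c):

        /* Two lockdep classes per address family: one for sockets created
         * by userspace, one for kernel-internal sockets.  The choice hinges
         * on the sk_kern_sock flag recorded from sk_alloc()'s kern argument. */
        static struct lock_class_key af_family_keys[AF_MAX];
        static struct lock_class_key af_family_kern_keys[AF_MAX];

        static struct lock_class_key *example_sock_lock_key(struct sock *sk)
        {
                return sk->sk_kern_sock ?
                        &af_family_kern_keys[sk->sk_family] :
                        &af_family_keys[sk->sk_family];
        }
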
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 09 Mar 2017 (9 commits)
  4. 02 Mar 2017 (3 commits)
  5. 01 Mar 2017 (1 commit)
    • crypto: testmgr - Pad aes_ccm_enc_tv_template vector · 1c68bb0f
      Laura Abbott authored
      Running the crypto tests with KASAN enabled currently gives
      
       BUG: KASAN: global-out-of-bounds in __test_aead+0x9d9/0x2200 at addr ffffffff8212fca0
       Read of size 16 by task cryptomgr_test/1107
       Address belongs to variable 0xffffffff8212fca0
       CPU: 0 PID: 1107 Comm: cryptomgr_test Not tainted 4.10.0+ #45
       Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.9.1-1.fc24 04/01/2014
       Call Trace:
        dump_stack+0x63/0x8a
        kasan_report.part.1+0x4a7/0x4e0
        ? __test_aead+0x9d9/0x2200
        ? crypto_ccm_init_crypt+0x218/0x3c0 [ccm]
        kasan_report+0x20/0x30
        check_memory_region+0x13c/0x1a0
        memcpy+0x23/0x50
        __test_aead+0x9d9/0x2200
        ? kasan_unpoison_shadow+0x35/0x50
        ? alg_test_akcipher+0xf0/0xf0
        ? crypto_skcipher_init_tfm+0x2e3/0x310
        ? crypto_spawn_tfm2+0x37/0x60
        ? crypto_ccm_init_tfm+0xa9/0xd0 [ccm]
        ? crypto_aead_init_tfm+0x7b/0x90
        ? crypto_alloc_tfm+0xc4/0x190
        test_aead+0x28/0xc0
        alg_test_aead+0x54/0xd0
        alg_test+0x1eb/0x3d0
        ? alg_find_test+0x90/0x90
        ? __sched_text_start+0x8/0x8
        ? __wake_up_common+0x70/0xb0
        cryptomgr_test+0x4d/0x60
        kthread+0x173/0x1c0
        ? crypto_acomp_scomp_free_ctx+0x60/0x60
        ? kthread_create_on_node+0xa0/0xa0
        ret_from_fork+0x2c/0x40
       Memory state around the buggy address:
        ffffffff8212fb80: 00 00 00 00 01 fa fa fa fa fa fa fa 00 00 00 00
        ffffffff8212fc00: 00 01 fa fa fa fa fa fa 00 00 00 00 01 fa fa fa
       >ffffffff8212fc80: fa fa fa fa 00 05 fa fa fa fa fa fa 00 00 00 00
                                         ^
        ffffffff8212fd00: 01 fa fa fa fa fa fa fa 00 00 00 00 01 fa fa fa
        ffffffff8212fd80: fa fa fa fa 00 00 00 00 00 05 fa fa fa fa fa fa
      
      This always happens on the same IV, which is shorter than 16 bytes.
      
      Per Ard,
      
      "CCM IVs are 16 bytes, but due to the way they are constructed
      internally, the final couple of bytes of the input IV are don't-cares.
      
      Apparently, we do read all 16 bytes, which triggers the KASAN errors."
      
      Fix this by padding the IV with null bytes to be at least 16 bytes.
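
      A minimal sketch of the idea (pad_ccm_iv is a hypothetical helper; the
      actual patch simply extends the vector's .iv literal with NUL bytes):

        /* Pad a short CCM IV out to the full 16 bytes with NULs so that
         * code which reads all 16 bytes never touches uninitialised or
         * out-of-bounds data. */
        static void pad_ccm_iv(u8 out[16], const u8 *iv, unsigned int ivlen)
        {
                memset(out, 0, 16);
                memcpy(out, iv, ivlen < 16 ? ivlen : 16);
        }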
      
      Cc: stable@vger.kernel.org
      Fixes: 0bc5a6c5 ("crypto: testmgr - Disable rfc4309 test and convert test vectors")
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Laura Abbott <labbott@redhat.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  6. 28 Feb 2017 (1 commit)
  7. 27 Feb 2017 (1 commit)
  8. 25 Feb 2017 (1 commit)
  9. 23 Feb 2017 (1 commit)
  10. 15 Feb 2017 (2 commits)
  11. 11 Feb 2017 (6 commits)
    • crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic · db91af0f
      Ard Biesheuvel authored
      Instead of unconditionally forcing 4 byte alignment for all generic
      chaining modes that rely on crypto_xor() or crypto_inc() (which may
      result in unnecessary copying of data when the underlying hardware
      can perform unaligned accesses efficiently), make those functions
      deal with unaligned input explicitly, but only if the Kconfig symbol
      HAVE_EFFICIENT_UNALIGNED_ACCESS is set. This will allow us to drop
      the alignmasks from the CBC, CMAC, CTR, CTS, PCBC and SEQIV drivers.
      
      For crypto_inc(), this simply involves making the 4-byte stride
      conditional on HAVE_EFFICIENT_UNALIGNED_ACCESS being set, given that
      it typically operates on 16 byte buffers.
      
      For crypto_xor(), an algorithm is implemented that simply runs through
      the input using the largest strides possible if unaligned accesses are
      allowed. If they are not, an optimal sequence of memory accesses is
      emitted that takes the relative alignment of the input buffers into
      account, e.g., if the relative misalignment of dst and src is 4 bytes,
      the entire xor operation will be completed using 4 byte loads and stores
      (modulo unaligned bits at the start and end). Note that all expressions
      involving misalign are simply eliminated by the compiler when
      HAVE_EFFICIENT_UNALIGNED_ACCESS is defined.
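
      A simplified, hedged sketch of the strategy (the real crypto_xor() also
      implements the relative-misalignment handling described above):

        /* XOR src into dst using unsigned-long strides where the
         * architecture handles unaligned accesses efficiently, falling
         * back to byte accesses otherwise (and for the tail). */
        static void xor_sketch(u8 *dst, const u8 *src, unsigned int len)
        {
        #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
                while (len >= sizeof(unsigned long)) {
                        *(unsigned long *)dst ^= *(unsigned long *)src;
                        dst += sizeof(unsigned long);
                        src += sizeof(unsigned long);
                        len -= sizeof(unsigned long);
                }
        #endif
                while (len--)
                        *dst++ ^= *src++;
        }
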
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: improve gcc optimization flags for serpent and wp512 · 7d6e9105
      Arnd Bergmann authored
      An ancient gcc bug (first reported in 2003) has apparently resurfaced
      on MIPS, where kernelci.org reports an overly large stack frame in the
      whirlpool hash algorithm:
      
      crypto/wp512.c:987:1: warning: the frame size of 1112 bytes is larger than 1024 bytes [-Wframe-larger-than=]
      
      With some testing in different configurations, I'm seeing large
      variations in stack frame size, up to 1500 bytes for what should need
      around 300 bytes at most. I also checked the reference implementation,
      which is essentially the same code but also comes with some test and
      benchmarking infrastructure.
      
      It seems that recent compiler versions on at least arm, arm64 and powerpc
      have a partial fix for this problem by enabling "-fsched-pressure", but
      even with that fix they still suffer from the issue to some degree. Some
      testing on arm64 shows that the time needed to hash a given amount of
      data is roughly proportional to the stack frame size here, which makes
      sense given that the wp512 implementation is doing lots of loads for
      table lookups, and the problem with the overly large stack is a result
      of doing a lot more loads and stores for spilled registers (as seen from
      inspecting the object code).
      
      Disabling -fschedule-insns consistently fixes the problem for wp512.
      Across my collection of cross-compilers, the results are consistently
      better than or identical to the default when comparing the stack sizes
      in this function, though some architectures (notably x86) have
      -fschedule-insns disabled by default.
      
      The four columns are:
      default: -O2
      press:	 -O2 -fsched-pressure
      nopress: -O2 -fschedule-insns -fno-sched-pressure
      nosched: -O2 -fno-schedule-insns (also disables sched-pressure)
      
      				default	press	nopress	nosched
      alpha-linux-gcc-4.9.3		1136	848	1136	176
      am33_2.0-linux-gcc-4.9.3	2100	2076	2100	2104
      arm-linux-gnueabi-gcc-4.9.3	848	848	1048	352
      cris-linux-gcc-4.9.3		272	272	272	272
      frv-linux-gcc-4.9.3		1128	1000	1128	280
      hppa64-linux-gcc-4.9.3		1128	336	1128	184
      hppa-linux-gcc-4.9.3		644	308	644	276
      i386-linux-gcc-4.9.3		352	352	352	352
      m32r-linux-gcc-4.9.3		720	656	720	268
      microblaze-linux-gcc-4.9.3	1108	604	1108	256
      mips64-linux-gcc-4.9.3		1328	592	1328	208
      mips-linux-gcc-4.9.3		1096	624	1096	240
      powerpc64-linux-gcc-4.9.3	1088	432	1088	160
      powerpc-linux-gcc-4.9.3		1080	584	1080	224
      s390-linux-gcc-4.9.3		456	456	624	360
      sh3-linux-gcc-4.9.3		292	292	292	292
      sparc64-linux-gcc-4.9.3		992	240	992	208
      sparc-linux-gcc-4.9.3		680	592	680	312
      x86_64-linux-gcc-4.9.3		224	240	272	224
      xtensa-linux-gcc-4.9.3		1152	704	1152	304
      
      aarch64-linux-gcc-7.0.0		224	224	1104	208
      arm-linux-gnueabi-gcc-7.0.1	824	824	1048	352
      mips-linux-gcc-7.0.0		1120	648	1120	272
      x86_64-linux-gcc-7.0.1		240	240	304	240
      
      arm-linux-gnueabi-gcc-4.4.7	840			392
      arm-linux-gnueabi-gcc-4.5.4	784	728	784	320
      arm-linux-gnueabi-gcc-4.6.4	736	728	736	304
      arm-linux-gnueabi-gcc-4.7.4	944	784	944	352
      arm-linux-gnueabi-gcc-4.8.5	464	464	760	352
      arm-linux-gnueabi-gcc-4.9.3	848	848	1048	352
      arm-linux-gnueabi-gcc-5.3.1	824	824	1064	336
      arm-linux-gnueabi-gcc-6.1.1	808	808	1056	344
      arm-linux-gnueabi-gcc-7.0.1	824	824	1048	352
      
      Trying the same test for serpent-generic, the picture is a bit different,
      and while -fno-schedule-insns is generally better here than the default,
      -fsched-pressure wins overall, so I picked that instead.
      
      				default	press	nopress	nosched
      alpha-linux-gcc-4.9.3		1392	864	1392	960
      am33_2.0-linux-gcc-4.9.3	536	524	536	528
      arm-linux-gnueabi-gcc-4.9.3	552	552	776	536
      cris-linux-gcc-4.9.3		528	528	528	528
      frv-linux-gcc-4.9.3		536	400	536	504
      hppa64-linux-gcc-4.9.3		524	208	524	480
      hppa-linux-gcc-4.9.3		768	472	768	508
      i386-linux-gcc-4.9.3		564	564	564	564
      m32r-linux-gcc-4.9.3		712	576	712	532
      microblaze-linux-gcc-4.9.3	724	392	724	512
      mips64-linux-gcc-4.9.3		720	384	720	496
      mips-linux-gcc-4.9.3		728	384	728	496
      powerpc64-linux-gcc-4.9.3	704	304	704	480
      powerpc-linux-gcc-4.9.3		704	296	704	480
      s390-linux-gcc-4.9.3		560	560	592	536
      sh3-linux-gcc-4.9.3		540	540	540	540
      sparc64-linux-gcc-4.9.3		544	352	544	496
      sparc-linux-gcc-4.9.3		544	344	544	496
      x86_64-linux-gcc-4.9.3		528	536	576	528
      xtensa-linux-gcc-4.9.3		752	544	752	544
      
      aarch64-linux-gcc-7.0.0		432	432	656	480
      arm-linux-gnueabi-gcc-7.0.1	616	616	808	536
      mips-linux-gcc-7.0.0		720	464	720	488
      x86_64-linux-gcc-7.0.1		536	528	600	536
      
      arm-linux-gnueabi-gcc-4.4.7	592			440
      arm-linux-gnueabi-gcc-4.5.4	776	448	776	544
      arm-linux-gnueabi-gcc-4.6.4	776	448	776	544
      arm-linux-gnueabi-gcc-4.7.4	768	448	768	544
      arm-linux-gnueabi-gcc-4.8.5	488	488	776	544
      arm-linux-gnueabi-gcc-4.9.3	552	552	776	536
      arm-linux-gnueabi-gcc-5.3.1	552	552	776	536
      arm-linux-gnueabi-gcc-6.1.1	560	560	776	536
      arm-linux-gnueabi-gcc-7.0.1	616	616	808	536
      
      I did not do any runtime tests with serpent, so it is possible that stack
      frame size does not directly correlate with runtime performance here and
      it actually makes things worse, but it's more likely to help here, and
      the reduced stack frame size is probably enough reason to apply the patch,
      especially given that the crypto code is often used in deep call chains.
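
      The mechanics of the change are just per-file flag overrides; a sketch
      of the crypto/Makefile additions (hedged: wrapped in cc-option so a
      compiler that lacks either flag is unaffected):

        # Work around https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149
        CFLAGS_wp512.o := $(call cc-option,-fno-schedule-insns)
        CFLAGS_serpent_generic.o := $(call cc-option,-fsched-pressure)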
      
      Link: https://kernelci.org/build/id/58797d7559b5149efdf6c3a9/logs/
      Link: http://www.larc.usp.br/~pbarreto/WhirlpoolPage.html
      Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=11488
      Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ccm - switch to separate cbcmac driver · f15f05b0
      Ard Biesheuvel authored
      Update the generic CCM driver to defer CBC-MAC processing to a
      dedicated CBC-MAC ahash transform rather than open coding this
      transform (and much of the associated scatterwalk plumbing) in
      the CCM driver itself.
      
      This cleans up the code considerably, but more importantly, it allows
      the use of alternative CBC-MAC implementations that don't suffer from
      performance degradation due to significant setup time (e.g., the NEON
      based AES code needs to enable/disable the NEON, and load the S-box
      into 16 SIMD registers, which cannot be amortized over the entire input
      when using the cipher interface).
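
      With CBC-MAC available as a standalone ahash, a hedged sketch of how it
      can now be obtained by name (error handling abbreviated):

        /* Allocate the separate CBC-MAC transform; an accelerated
         * implementation can be substituted without touching the CCM
         * template itself. */
        struct crypto_ahash *mac = crypto_alloc_ahash("cbcmac(aes)", 0, 0);
        if (IS_ERR(mac))
                return PTR_ERR(mac);
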
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - add test cases for cbcmac(aes) · 092acf06
      Ard Biesheuvel authored
      In preparation of splitting off the CBC-MAC transform in the CCM
      driver into a separate algorithm, define some test cases for the
      AES incarnation of cbcmac.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: aes - add generic time invariant AES cipher · b5e0b032
      Ard Biesheuvel authored
      Lookup table based AES is sensitive to timing attacks, which is due to
      the fact that such table lookups are data dependent, and the fact that
      8 KB worth of tables covers a significant number of cachelines on any
      architecture, resulting in an exploitable correlation between the key
      and the processing time for known plaintexts.
      
      For network facing algorithms such as CTR, CCM or GCM, this presents a
      security risk, which is why arch specific AES ports are typically time
      invariant, either through the use of special instructions, or by using
      SIMD algorithms that don't rely on table lookups.
      
      For generic code, this is difficult to achieve without losing too much
      performance, but we can improve the situation significantly by switching
      to an implementation that only needs 256 bytes of table data (the actual
      S-box itself), which can be prefetched at the start of each block to
      eliminate data dependent latencies.
      
      This code encrypts at ~25 cycles per byte on ARM Cortex-A57 (while the
      ordinary generic AES driver manages 18 cycles per byte on this
      hardware). Decryption is substantially slower.
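
      A hedged sketch of the prefetch described above (prefetch_sbox is a
      hypothetical helper and 64-byte cachelines are assumed):

        /* Touch each assumed cacheline of the 256-byte S-box before a
         * block is processed, so later lookups hit the cache regardless
         * of the key-dependent indices.  The return value is folded into
         * the state by the caller so the loads cannot be optimised away. */
        static u32 prefetch_sbox(const u8 sbox[256])
        {
                u32 sum = 0;
                int i;

                for (i = 0; i < 256; i += 64)
                        sum ^= sbox[i];
                return sum;
        }
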
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: aes-generic - drop alignment requirement · ec38a937
      Ard Biesheuvel authored
      The generic AES code exposes a 32-bit align mask, which forces all
      users of the code to use temporary buffers or take other measures to
      ensure the alignment requirement is adhered to, even on architectures
      that don't care about alignment for software algorithms such as this
      one.
      
      So drop the align mask, and fix the code to use get_unaligned_le32()
      where appropriate, which will resolve to whatever is optimal for the
      architecture.
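
      For illustration, a sketch of what get_unaligned_le32() amounts to on
      an architecture without cheap unaligned loads (the kernel helper
      already picks the optimal form per architecture; this open-coded
      version is not the patch itself):

        /* Byte loads never fault on alignment, so this is safe for any
         * pointer; the cast keeps the high byte's shift well-defined. */
        static inline u32 sketch_get_unaligned_le32(const u8 *p)
        {
                return p[0] | (p[1] << 8) | (p[2] << 16) | ((u32)p[3] << 24);
        }
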
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  12. 03 Feb 2017 (1 commit)
  13. 23 Jan 2017 (2 commits)
  14. 13 Jan 2017 (4 commits)
    • crypto: testmgr - use calculated count for number of test vectors · 21c8e720
      Ard Biesheuvel authored
      When working on AES in CCM mode for ARM, my code passed the internal
      tcrypt test before I had even bothered to implement the AES-192 and
      AES-256 code paths, which is strange because the tcrypt does contain
      AES-192 and AES-256 test vectors for CCM.
      
      As it turned out, the define AES_CCM_ENC_TEST_VECTORS was out of sync
      with the actual number of test vectors, causing only the AES-128 ones
      to be executed.
      
      So get rid of the defines, and wrap the test vector references in a
      macro that calculates the number of vectors automatically.
      
      The following test vector counts were out of sync with the respective
      defines:
      
          BF_CTR_ENC_TEST_VECTORS          2 ->  3
          BF_CTR_DEC_TEST_VECTORS          2 ->  3
          TF_CTR_ENC_TEST_VECTORS          2 ->  3
          TF_CTR_DEC_TEST_VECTORS          2 ->  3
          SERPENT_CTR_ENC_TEST_VECTORS     2 ->  3
          SERPENT_CTR_DEC_TEST_VECTORS     2 ->  3
          AES_CCM_ENC_TEST_VECTORS         8 -> 14
          AES_CCM_DEC_TEST_VECTORS         7 -> 17
          AES_CCM_4309_ENC_TEST_VECTORS    7 -> 23
          AES_CCM_4309_DEC_TEST_VECTORS   10 -> 23
          CAMELLIA_CTR_ENC_TEST_VECTORS    2 ->  3
          CAMELLIA_CTR_DEC_TEST_VECTORS    2 ->  3
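
      The macro pattern, sketched (the name is illustrative; the point is
      that ARRAY_SIZE() makes the compiler compute the count):

        /* Bind a test vector array to its element count so the two can
         * never drift out of sync the way the hand-kept defines did. */
        #define VECS(tv) { .vecs = tv, .count = ARRAY_SIZE(tv) }

        /* e.g.:  .enc = VECS(aes_ccm_enc_tv_template)  */
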
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - Allocate only the required output size for hash tests · e93acd6f
      Andrew Lutomirski authored
      There are some hashes (e.g. sha224) that have some internal trickery
      to make sure that only the correct number of output bytes are
      generated.  If something goes wrong, they could potentially overrun
      the output buffer.
      
      Make the test more robust by allocating only enough space for the
      correct output size so that memory debugging will catch the error if
      the output is overrun.
      
      Tested by intentionally breaking sha224 to output all 256
      internally-generated bits while running on KASAN.
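
      A sketch of the allocation change (variable names illustrative):

        /* Allocate exactly the digest size the transform reports, so any
         * overrun lands in slab redzone/KASAN shadow and gets reported. */
        result = kmalloc(crypto_ahash_digestsize(tfm), GFP_KERNEL);
        if (!result)
                return -ENOMEM;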
      
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: Replaced gcc specific attributes with macros from compiler.h · d8c34b94
      Gideon Israel Dsouza authored
      Continuing from this commit: 52f5684c
      ("kernel: use macros from compiler.h instead of __attribute__((...))")
      
      I submitted 4 total patches. They are part of a task I've taken up to
      increase compiler portability in the kernel. I've cleaned up the
      subsystems under /kernel, /mm, /block and /security; this patch targets
      /crypto.
      
      <linux/compiler.h> provides macros for various gcc-specific constructs,
      e.g. __weak for __attribute__((weak)). I've replaced all instances of
      gcc-specific attributes with the right macros for the crypto subsystem.
      
      I had to make one additional change in compiler-gcc.h for the case where
      one wants to use __attribute__((aligned)) without specifying an alignment
      factor. Per the gcc docs, this results in the largest alignment for that
      data type on the target machine, so I've named the macro
      __aligned_largest. Please advise if another name is more appropriate.
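
      A sketch of the substitution and of the new macro in use (the
      declaration shown is illustrative):

        /* Before: __attribute__((weak))     ->  after: __weak
         * Before: __attribute__((aligned))  ->  after: __aligned_largest */
        static u8 example_ctx[64] __aligned_largest;
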
      Signed-off-by: Gideon Israel Dsouza <gidisrael@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - use kmemdup instead of kmalloc+memcpy · d2110224
      Eric Biggers authored
      It's recommended to use kmemdup instead of kmalloc followed by memcpy.
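
      The pattern, sketched:

        /* Before: two steps, with the copy skipped on allocation failure. */
        p = kmalloc(len, GFP_KERNEL);
        if (!p)
                return -ENOMEM;
        memcpy(p, src, len);

        /* After: one call with the same semantics. */
        p = kmemdup(src, len, GFP_KERNEL);
        if (!p)
                return -ENOMEM;
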
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  15. 30 Dec 2016 (1 commit)
    • crypto: skcipher - introduce walksize attribute for SIMD algos · c821f6ab
      Ard Biesheuvel authored
      In some cases, SIMD algorithms can only perform optimally when
      allowed to operate on multiple input blocks in parallel. This is
      especially true for bit slicing algorithms, which typically take
      the same amount of time processing a single block or 8 blocks in
      parallel. However, other SIMD algorithms may benefit as well from
      bigger strides.
      
      So add a walksize attribute to the skcipher algorithm definition, and
      wire it up to the skcipher walk API. To avoid confusion between the
      skcipher and AEAD attributes, rename the skcipher_walk chunksize
      attribute to 'stride', and set it from the walksize (in the skcipher
      case) or from the chunksize (in the AEAD case).
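
      A hedged sketch of how a bit-sliced implementation might advertise the
      new attribute (values illustrative):

        /* The cipher still operates on single blocks (.chunksize), but
         * the walk is asked to present 8 blocks at a time when it can. */
        .chunksize = AES_BLOCK_SIZE,
        .walksize  = 8 * AES_BLOCK_SIZE,
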
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  16. 27 Dec 2016 (3 commits)
    • crypto: algif_hash - avoid zero-sized array · 62071194
      Jiri Slaby authored
      With this reproducer:

        #include <sys/socket.h>
        #include <linux/if_alg.h>

        #ifndef SOL_ALG
        #define SOL_ALG 279
        #endif

        int main(void)
        {
                struct sockaddr_alg alg = {
                        .salg_family = 0x26,    /* AF_ALG */
                        .salg_type = "hash",
                        .salg_feat = 0xf,
                        .salg_mask = 0x5,
                        .salg_name = "digest_null",
                };
                int sock, sock2;

                sock = socket(AF_ALG, SOCK_SEQPACKET, 0);
                bind(sock, (struct sockaddr *)&alg, sizeof(alg));
                sock2 = accept(sock, NULL, NULL);
                setsockopt(sock, SOL_ALG, ALG_SET_KEY, "\x9b\xca", 2);
                accept(sock2, NULL, NULL);
                return 0;
        }
      
      ==== 8< ======== 8< ======== 8< ======== 8< ====
      
      one can immediately see a UBSAN warning:
      UBSAN: Undefined behaviour in crypto/algif_hash.c:187:7
      variable length array bound value 0 <= 0
      CPU: 0 PID: 15949 Comm: syz-executor Tainted: G            E      4.4.30-0-default #1
      ...
      Call Trace:
      ...
       [<ffffffff81d598fd>] ? __ubsan_handle_vla_bound_not_positive+0x13d/0x188
       [<ffffffff81d597c0>] ? __ubsan_handle_out_of_bounds+0x1bc/0x1bc
       [<ffffffffa0e2204d>] ? hash_accept+0x5bd/0x7d0 [algif_hash]
       [<ffffffffa0e2293f>] ? hash_accept_nokey+0x3f/0x51 [algif_hash]
       [<ffffffffa0e206b0>] ? hash_accept_parent_nokey+0x4a0/0x4a0 [algif_hash]
       [<ffffffff8235c42b>] ? SyS_accept+0x2b/0x40
      
      It is a correct warning: the hash state size is propagated to accept()
      as zero, but creating a zero-length variable-length array is not
      allowed in C.
      
      Fix this as proposed by Herbert -- do "?: 1" on that site. No sizeof or
      similar happens in the code there, so we just allocate one byte even
      though we do not use the array.
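
      The fixed declaration, sketched (hedged; the statesize expression
      stands in for whatever hash_accept() actually queries):

        /* The gcc "?:" extension keeps the VLA bound at least 1 even
         * when the hash reports a zero state size. */
        char state[crypto_ahash_statesize(crypto_ahash_reqtfm(req)) ?: 1];
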
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: "David S. Miller" <davem@davemloft.net> (maintainer:CRYPTO API)
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: chacha20 - convert generic and x86 versions to skcipher · 9ae433bc
      Ard Biesheuvel authored
      This converts the ChaCha20 code from a blkcipher to a skcipher, which
      is now the preferred way to implement symmetric block and stream ciphers.
      
      This ports the generic and x86 versions at the same time because the
      latter reuses routines of the former.
      
      Note that the skcipher_walk() API guarantees that all presented blocks
      except the final one are a multiple of the chunk size, so we can simplify
      the encrypt() routine somewhat.
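
      A hedged sketch of the simplified loop this guarantee allows
      (chacha20_docrypt stands for the generic per-walk routine):

        /* Every walk step but the last is a whole number of chunks, so
         * the loop can process walk.nbytes directly and report nothing
         * left over. */
        err = skcipher_walk_virt(&walk, req, true);
        while (walk.nbytes > 0) {
                chacha20_docrypt(state, walk.dst.virt.addr,
                                 walk.src.virt.addr, walk.nbytes);
                err = skcipher_walk_done(&walk, 0);
        }
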
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - Use heap buffer for acomp test input · 02608e02
      Laura Abbott authored
      Christopher Covington reported a crash on aarch64 on recent Fedora
      kernels:
      
      kernel BUG at ./include/linux/scatterlist.h:140!
      Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
      Modules linked in:
      CPU: 2 PID: 752 Comm: cryptomgr_test Not tainted 4.9.0-11815-ge93b1cc8 #162
      Hardware name: linux,dummy-virt (DT)
      task: ffff80007c650080 task.stack: ffff800008910000
      PC is at sg_init_one+0xa0/0xb8
      LR is at sg_init_one+0x24/0xb8
      ...
      [<ffff000008398db8>] sg_init_one+0xa0/0xb8
      [<ffff000008350a44>] test_acomp+0x10c/0x438
      [<ffff000008350e20>] alg_test_comp+0xb0/0x118
      [<ffff00000834f28c>] alg_test+0x17c/0x2f0
      [<ffff00000834c6a4>] cryptomgr_test+0x44/0x50
      [<ffff0000080dac70>] kthread+0xf8/0x128
      [<ffff000008082ec0>] ret_from_fork+0x10/0x50
      
      The test vectors used for input are part of the kernel image. These
      inputs are passed as a buffer to sg_init_one which eventually blows up
      with BUG_ON(!virt_addr_valid(buf)). On arm64, virt_addr_valid returns
      false for the kernel image since virt_to_page will not return the
      correct page. Fix this by copying the input vectors to a heap buffer
      before setting up the scatterlist.
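
      A sketch of the fix (vector field names are illustrative):

        /* Copy the rodata test vector to the heap first; sg_init_one()
         * requires an address for which virt_addr_valid() holds. */
        input_vec = kmemdup(ctemplate[i].input, ilen, GFP_KERNEL);
        if (!input_vec)
                return -ENOMEM;
        sg_init_one(&src, input_vec, ilen);
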
      Reported-by: Christopher Covington <cov@codeaurora.org>
      Fixes: d7db7a88 ("crypto: acomp - update testmgr with support for acomp")
      Signed-off-by: Laura Abbott <labbott@redhat.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  17. 14 Dec 2016 (2 commits)
    • crypto: skcipher - fix crash in virtual walk · 18e615ad
      Ard Biesheuvel authored
      The new skcipher walk API may crash in the following way. (Interestingly,
      the tcrypt boot time tests seem unaffected, while an explicit test using
      the module triggers it.)
      
        Unable to handle kernel NULL pointer dereference at virtual address 00000000
        ...
        [<ffff000008431d84>] __memcpy+0x84/0x180
        [<ffff0000083ec0d0>] skcipher_walk_done+0x328/0x340
        [<ffff0000080c5c04>] ctr_encrypt+0x84/0x100
        [<ffff000008406d60>] simd_skcipher_encrypt+0x88/0x98
        [<ffff0000083fa05c>] crypto_rfc3686_crypt+0x8c/0x98
        [<ffff0000009b0900>] test_skcipher_speed+0x518/0x820 [tcrypt]
        [<ffff0000009b31c0>] do_test+0x1408/0x3b70 [tcrypt]
        [<ffff0000009bd050>] tcrypt_mod_init+0x50/0x1000 [tcrypt]
        [<ffff0000080838f4>] do_one_initcall+0x44/0x138
        [<ffff0000081aee60>] do_init_module+0x68/0x1e0
        [<ffff0000081524d0>] load_module+0x1fd0/0x2458
        [<ffff000008152c38>] SyS_finit_module+0xe0/0xf0
        [<ffff0000080836f0>] el0_svc_naked+0x24/0x28
      
      This is due to the fact that skcipher_done_slow() may be entered with
      walk->buffer unset. Since skcipher_walk_done() already deals with the
      case where walk->buffer == walk->page, it appears to be the intention
      that walk->buffer point to walk->page after skcipher_next_slow(), so
      ensure that is the case.
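
      The fix, sketched (hedged; the exact placement in skcipher_next_slow()
      may differ from this):

        /* Ensure the invariant skcipher_done_slow() relies on: when the
         * bounce page is in use, walk->buffer points at it. */
        if (!walk->buffer)
                walk->buffer = walk->page;
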
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: asymmetric_keys - set error code on failure · fbb72630
      Pan Bian authored
      In public_key_verify_signature(), the variable ret is returned on error
      paths. When the call to kmalloc() fails, ret is still 0, and it is not
      set to an errno before returning. This patch fixes the bug.
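
      The bug class and its fix, sketched:

        /* Preset the error before the allocation; previously a failed
         * kmalloc() fell through to "return ret" with ret still 0. */
        ret = -ENOMEM;
        digest = kmalloc(digest_size, GFP_KERNEL);
        if (!digest)
                goto error;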
      
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=188891
      Signed-off-by: Pan Bian <bianpan2016@163.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>