1. Aug 17, 2015 (1 commit)
  2. Jul 14, 2015 (1 commit)
  3. Jun 17, 2015 (1 commit)
  4. May 22, 2015 (1 commit)
  5. May 13, 2015 (1 commit)
  6. Nov 26, 2014 (1 commit)
  7. Nov 24, 2014 (1 commit)
  8. Aug 01, 2014 (1 commit)
  9. Oct 07, 2013 (1 commit)
      crypto: crypto_memneq - add equality testing of memory regions w/o timing leaks · 6bf37e5a
      By James Yonan
      When comparing MAC hashes, AEAD authentication tags, or other hash
      values in the context of authentication or integrity checking, it
      is important not to leak timing information to a potential attacker,
      i.e. when communication happens over a network.
      
      Bytewise memory comparisons (such as memcmp) are usually optimized so
      that they return a nonzero value as soon as a mismatch is found. E.g.,
      on x86_64/i5 for 512 bytes this can be ~50 cycles for a full mismatch
      and up to ~850 cycles for a full match (cold). This early-return
      behavior can leak timing information as a side channel, allowing an
      attacker to iteratively guess the correct result.
      
      This patch adds a new method crypto_memneq ("memory not equal to each
      other") to the crypto API that compares memory areas of the same length
      in roughly "constant time" (cache misses could change the timing, but
      since they don't reveal information about the content of the strings
      being compared, they are effectively benign). In other words, best and
      worst case behaviour take the same amount of time to complete (in
      contrast to memcmp).
      
      Note that crypto_memneq (unlike memcmp) can only be used to test for
      equality or inequality, NOT for lexicographical order. This, however,
      is not an issue for its use-cases within the crypto API.
      
      We tried to locate all of the places in the crypto API where memcmp was
      being used for authentication or integrity checking, and convert them
      over to crypto_memneq.
      
      crypto_memneq is declared noinline and placed in its own source file,
      which is compiled with -Os (disabling optimizations that might
      increase code size), because a smart compiler (or LTO) might notice
      that the return value is always compared against zero/nonzero, and
      might then reintroduce the same early-return optimization that we are
      trying to avoid.
      
      Using #pragma or __attribute__ optimization annotations to disable
      optimization was avoided, as that mechanism seems to have been
      considered broken or unmaintained in GCC for a long time [1].
      Therefore, we work around it by specifying the compile flag for
      memneq.o directly in the Makefile, which we found to be the most
      appropriate approach.
      
      Since -Os is used, this patch also provides a loop-free "fast path"
      for the frequently used 16-byte digest size. As with the kernel
      library string functions, an option is left open for even further
      optimized, architecture-specific assembler implementations in the
      future.
      
      This was joint work of James Yonan and Daniel Borkmann. Thanks also
      to Florian Weimer for feedback on this and earlier proposals [2].
      
        [1] http://gcc.gnu.org/ml/gcc/2012-07/msg00211.html
        [2] https://lkml.org/lkml/2013/2/10/131

      Signed-off-by: James Yonan <james@openvpn.net>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Florian Weimer <fw@deneb.enyo.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  10. Apr 25, 2013 (2 commits)
  11. Apr 02, 2013 (1 commit)
  12. Feb 04, 2013 (1 commit)
  13. Dec 02, 2010 (1 commit)
  14. Jan 17, 2010 (1 commit)
  15. Nov 16, 2009 (1 commit)
      crypto: gcm - fix another complete call in complete function · 62c5593a
      By Huang Ying
      The flow of the complete function (xxx_done) in gcm.c is as follows:
      
      void complete(struct crypto_async_request *areq, int err)
      {
      	struct aead_request *req = areq->data;
      
      	if (!err) {
      		err = async_next_step();
      		if (err == -EINPROGRESS || err == -EBUSY)
      			return;
      	}
      
      	complete_for_next_step(areq, err);
      }
      
      But *areq may be destroyed in async_next_step(), which prevents
      complete_for_next_step() from working properly. To fix this, one of
      the following methods is used for each complete function.
      
      - Add a __complete() for each complete(), which accepts struct
        aead_request *req instead of areq, thus avoiding the use of areq
        after it is destroyed.
      
      - Expand complete_for_next_step().
      
      The fix is based on an idea from Herbert Xu.
      Signed-off-by: Huang Ying <ying.huang@intel.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  16. Aug 06, 2009 (1 commit)
  17. Jan 11, 2008 (13 commits)