1. 30 Sep 2016, 1 commit
  2. 29 Sep 2016, 1 commit
  3. 22 Sep 2016, 1 commit
  4. 21 Sep 2016, 1 commit
  5. 20 Sep 2016, 1 commit
  6. 13 Sep 2016, 1 commit
  7. 19 Aug 2016, 1 commit
      Fix DTLS replay protection · 1fb9fdc3
      Committed by Matt Caswell
      The DTLS implementation provides some protection against replay attacks
      in accordance with RFC6347 section 4.1.2.6.
      
      A sliding "window" of valid record sequence numbers is maintained with
      the "right" hand edge of the window set to the highest sequence number we
      have received so far. Records that arrive that are off the "left" hand
      edge of the window are rejected. Records within the window are checked
      against a list of records received so far. If we have already received it
      then we also reject the new record.
      
      If we have not already received the record, or the sequence number is off
      the right hand edge of the window then we verify the MAC of the record.
      If MAC verification fails then we discard the record. Otherwise we mark
      the record as received. If the sequence number was off the right hand edge
      of the window, then we slide the window along so that the right hand edge
      is in line with the newly received sequence number.
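
      A window of this kind is typically kept as a bitmap anchored at the
      highest sequence number accepted so far. The following is only an
      illustrative sketch (a 64-entry bitmap in plain C, not OpenSSL's DTLS
      code, and ignoring the epoch/48-bit sequence number encoding):

          #include <stdint.h>

          #define WINDOW_SIZE 64

          typedef struct {
              uint64_t top;     /* highest sequence number accepted so far */
              uint64_t bitmap;  /* bit i set => (top - i) already received */
          } replay_window;

          /* Return 1 if seq should be accepted, 0 if it is a replay or is
           * off the left hand edge of the window. */
          static int replay_check(const replay_window *w, uint64_t seq)
          {
              if (seq > w->top)
                  return 1;                 /* beyond the right hand edge */
              if (w->top - seq >= WINDOW_SIZE)
                  return 0;                 /* off the left hand edge */
              return (w->bitmap & ((uint64_t)1 << (w->top - seq))) == 0;
          }

          /* Mark seq as received. Call this only after the record's MAC has
           * been verified, and only if replay_check() returned 1. */
          static void replay_update(replay_window *w, uint64_t seq)
          {
              if (seq > w->top) {
                  uint64_t shift = seq - w->top;

                  w->bitmap = shift >= WINDOW_SIZE ? 0 : w->bitmap << shift;
                  w->bitmap |= 1;           /* bit 0 now represents seq */
                  w->top = seq;
              } else {
                  w->bitmap |= (uint64_t)1 << (w->top - seq);
              }
          }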
      
      Records may arrive for future epochs, i.e. a record sent after a CCS can
      arrive before the CCS does if the packets get re-ordered. As we have not
      yet received the CCS we are not yet in a position to decrypt or validate
      the MAC of those records. OpenSSL places those records on an unprocessed
      records queue. It additionally updates the window immediately, even
      though we have not yet verified the MAC. This will only occur if we are
      currently in a handshake/renegotiation.
      
      This could be exploited by an attacker by sending a record for the next
      epoch (which does not have to decrypt or have a valid MAC), with a very
      large sequence number. This means the right hand edge of the window is
      moved very far to the right, and all subsequent legitimate packets are
      dropped causing a denial of service.
      
      A similar effect can be achieved during the initial handshake. In this
      case there is no MAC key negotiated yet. Therefore an attacker can send a
      message for the current epoch with a very large sequence number. The code
      will process the record as normal. If the handshake message sequence number
      (as opposed to the record sequence number that we have been talking about
      so far) is in the future then the injected message is buffered to be
      handled later, but the window is still updated. Therefore all subsequent
      legitimate handshake records are dropped. This aspect is not considered a
      security issue because there are many ways for an attacker to disrupt the
      initial handshake and prevent it from completing successfully (e.g.
      injection of a handshake message will cause the Finished MAC to fail and
      the handshake to be aborted). This issue comes about as a result of trying
      to do replay protection, but having no integrity mechanism in place yet.
      Does it even make sense to have replay protection in epoch 0? That
      issue isn't addressed here though.
      
      This addresses an OCAP Audit issue.
      
      CVE-2016-2181
      Reviewed-by: Richard Levitte <levitte@openssl.org>
  8. 17 Aug 2016, 1 commit
  9. 21 Jul 2016, 1 commit
  10. 19 Jul 2016, 4 commits
  11. 09 Jul 2016, 1 commit
  12. 22 Jun 2016, 1 commit
  13. 04 Jun 2016, 2 commits
  14. 24 May 2016, 1 commit
  15. 29 Apr 2016, 2 commits
  16. 22 Apr 2016, 1 commit
  17. 08 Apr 2016, 1 commit
  18. 05 Apr 2016, 4 commits
  19. 28 Mar 2016, 1 commit
  20. 08 Mar 2016, 1 commit
      Implement write pipeline support in libssl · d102d9df
      Committed by Matt Caswell
      Use the new pipeline cipher capability to encrypt multiple records being
      written out all in one go. Two new SSL/SSL_CTX parameters can be used to
      control how this works: max_pipelines and split_send_fragment.
      
      max_pipelines defines the maximum number of pipelines that can ever be used
      in one go for a single connection. It must always be less than or equal to
      SSL_MAX_PIPELINES (currently defined to be 32). By default only one
      pipeline will be used (i.e. normal non-parallel operation).
      
      split_send_fragment defines how data is split up into pipelines. The number
      of pipelines used will be determined by the amount of data provided to the
      SSL_write call divided by split_send_fragment. For example if
      split_send_fragment is set to 2000 and max_pipelines is 4 then:
      SSL_write called with 0-2000 bytes == 1 pipeline used
      SSL_write called with 2001-4000 bytes == 2 pipelines used
      SSL_write called with 4001-6000 bytes == 3 pipelines used
      SSL_write called with 6001+ bytes == 4 pipelines used
      
      split_send_fragment must always be less than or equal to max_send_fragment.
      By default it is set to be equal to max_send_fragment. This will mean that
      the same number of records will always be created as would have been
      created in the non-parallel case, although the data will be apportioned
      differently. In the parallel case data will be spread equally between the
      pipelines.
      Reviewed-by: Tim Hudson <tjh@openssl.org>
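
      In other words, the number of pipelines used is the write size divided
      by split_send_fragment (rounded up), capped at max_pipelines. Below is a
      rough sketch of configuring these controls via the setters documented
      for them in OpenSSL 1.1.0 (SSL_CTX_set_max_pipelines() and
      SSL_CTX_set_split_send_fragment()); pipelining only actually happens if
      the negotiated cipher and engine support it:

          #include <openssl/ssl.h>

          /* Illustrative only: allow 16 KB records, split writes into 4 KB
           * chunks, and use at most 4 pipelines per SSL_write() call. */
          static int configure_pipelines(SSL_CTX *ctx)
          {
              if (SSL_CTX_set_max_send_fragment(ctx, 16384) == 0)
                  return 0;
              /* split_send_fragment must be <= max_send_fragment */
              if (SSL_CTX_set_split_send_fragment(ctx, 4096) == 0)
                  return 0;
              /* max_pipelines must be <= SSL_MAX_PIPELINES (32) */
              if (SSL_CTX_set_max_pipelines(ctx, 4) == 0)
                  return 0;
              return 1;
          }

      With these values an SSL_write() of 10000 bytes would use three
      pipelines, with the data spread equally across them (roughly 3334 bytes
      per record).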
  21. 04 Mar 2016, 1 commit
  22. 25 Feb 2016, 1 commit
  23. 20 Feb 2016, 1 commit
      TLS: reject duplicate extensions · aa474d1f
      Committed by Emilia Kasper
      Adapted from BoringSSL. Added a test.
      
      The extension parsing code is already attempting to handle this for some
      individual extensions, but it is doing so inconsistently. The duplicated
      effort in individual extension parsing will be cleaned up in a follow-up.
      Reviewed-by: Stephen Henson <steve@openssl.org>
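
      A generic way to enforce this (not the OpenSSL parsing code itself, just
      a sketch of the check) is to collect the 2-byte extension type codes as
      they are parsed and reject the hello if any type appears twice:

          #include <stddef.h>
          #include <stdint.h>

          /* Return 1 if all extension type codes in types[0..n-1] are
           * distinct, 0 if any type occurs more than once. O(n^2) is fine
           * for the small number of extensions in a hello message. */
          static int extensions_are_unique(const uint16_t *types, size_t n)
          {
              size_t i, j;

              for (i = 0; i < n; i++)
                  for (j = i + 1; j < n; j++)
                      if (types[i] == types[j])
                          return 0;   /* duplicate => reject the hello */
              return 1;
          }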
  24. 12 Feb 2016, 1 commit
  25. 11 Feb 2016, 2 commits
  26. 08 Feb 2016, 1 commit
      Handle SSL_shutdown while in init more appropriately #2 · 64f9f406
      Committed by Matt Caswell
      Previous commit 7bb196a7 attempted to "fix" a problem with the way
      SSL_shutdown() behaved while mid-handshake. The original behaviour had
      SSL_shutdown() return immediately, having taken no action, if called
      mid-handshake, with a return value of 1 (meaning everything was shut
      down successfully). In fact the shutdown had not been successful.
      
      Commit 7bb196a7 changed that to send a close_notify anyway and then
      return. This seems to be causing problems for some applications, so
      perhaps a better (much simpler) approach is to revert to the previous
      behaviour (no attempt at a shutdown) but return -1 (meaning the shutdown
      was not successful).
      
      This also fixes a bug where SSL_shutdown() always returns 0 when called
      *very* early in the handshake (i.e. we are still using SSLv23_method).
      Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
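
      For callers this means the return value of SSL_shutdown() has to be
      checked rather than assumed: 1 means the bidirectional shutdown is
      complete, 0 means our close_notify was sent but the peer's has not
      arrived yet, and a negative value (as now returned when called
      mid-handshake) should be examined with SSL_get_error(). A rough usage
      sketch, assuming a blocking transport BIO:

          #include <openssl/ssl.h>

          /* Illustrative two-phase shutdown: returns 1 on a clean
           * bidirectional shutdown, 0 otherwise. */
          static int shutdown_connection(SSL *ssl)
          {
              int ret = SSL_shutdown(ssl);

              if (ret == 0)               /* ours sent; wait for the peer's */
                  ret = SSL_shutdown(ssl);
              if (ret == 1)
                  return 1;               /* close_notify sent and received */

              /* ret < 0: e.g. still in the handshake, or an I/O problem */
              switch (SSL_get_error(ssl, ret)) {
              case SSL_ERROR_WANT_READ:
              case SSL_ERROR_WANT_WRITE:
                  return 0;               /* retry later on non-blocking I/O */
              default:
                  return 0;
              }
          }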
  27. 06 Feb 2016, 1 commit
  28. 27 Jan 2016, 1 commit
      Remove /* foo.c */ comments · 34980760
      Committed by Rich Salz
      This was done by the following
              find . -name '*.[ch]' | /tmp/pl
      where /tmp/pl is the following three-line script:
              print unless $. == 1 && m@/\* .*\.[ch] \*/@;
              close ARGV if eof; # Close file to reset $.
      
      And then some hand-editing of other files.
      Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
  29. 20 Jan 2016, 1 commit
      Handle SSL_shutdown while in init more appropriately · 7bb196a7
      Committed by Matt Caswell
      Calling SSL_shutdown while in init previously gave a "1" response, meaning
      everything was successfully closed down (even though it wasn't). Better is
      to send our close_notify, but fail when trying to receive one.
      
      The problem with doing a shutdown while in the middle of a handshake is
      that once our close_notify is sent we shouldn't really do anything else
      (including processing handshake/CCS messages) until we've received a
      close_notify back from the peer. However the peer might send a CCS before
      acting on our close_notify - so we won't be able to read it because we're
      not acting on CCS messages!
      Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
  30. 11 Jan 2016, 1 commit
  31. 08 Jan 2016, 1 commit
      mem functions cleanup · bbd86bf5
      Committed by Rich Salz
      Only two macros, CRYPTO_MDEBUG and CRYPTO_MDEBUG_ABORT, control this.
      If CRYPTO_MDEBUG is not set, #ifdef out the whole debug machinery.
              (Thanks to Jakob Bohm for the suggestion!)
      Make the "change wrapper functions" be the only paradigm.
      Wrote documentation!
      Format the 'set func' functions so their paramlists are legible.
      Format some multi-line comments.
      Remove ability to get/set the "memory debug" functions at runtime.
      Remove MemCheck_* and CRYPTO_malloc_debug_init macros.
      Add CRYPTO_mem_debug(int flag) function.
      Add test/memleaktest.
      Rename CRYPTO_malloc_init to OPENSSL_malloc_init; remove needless calls.
      Reviewed-by: Richard Levitte <levitte@openssl.org>
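
      With the change-wrapper-functions paradigm as the only one, an
      application hooks allocation exactly once, before any other OpenSSL
      call. A minimal sketch, assuming the three-argument wrapper signature
      (size/pointer, file, line) that CRYPTO_set_mem_functions() takes in
      OpenSSL 1.1.0:

          #include <stdlib.h>
          #include <openssl/crypto.h>

          /* Illustrative wrappers that just forward to the system allocator;
           * a real application might add accounting or a custom pool here. */
          static void *my_malloc(size_t num, const char *file, int line)
          {
              (void)file; (void)line;
              return malloc(num);
          }

          static void *my_realloc(void *p, size_t num, const char *file, int line)
          {
              (void)file; (void)line;
              return realloc(p, num);
          }

          static void my_free(void *p, const char *file, int line)
          {
              (void)file; (void)line;
              free(p);
          }

          /* Must be called before the library has made any allocations. */
          static int install_allocator(void)
          {
              return CRYPTO_set_mem_functions(my_malloc, my_realloc, my_free);
          }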