1. 04 Nov 2016, 1 commit
  2. 03 Nov 2016, 2 commits
    • M
      Fail if an unrecognised record type is received · 436a2a01
      Matt Caswell committed
      TLS1.0 and TLS1.1 say you SHOULD ignore unrecognised record types, but
      TLS 1.2 says you MUST send an unexpected message alert. We swap to the
      TLS 1.2 behaviour for all protocol versions to prevent issues where no
      progress is being made and the peer continually sends unrecognised record
      types, using up resources processing them.
      
      Issue reported by 郭志攀
      Reviewed-by: Tim Hudson <tjh@openssl.org>
      436a2a01
    • M
      Fix read_ahead · a7faa6da
      Matt Caswell committed
      The function ssl3_read_n() takes a parameter |clearold| which, if set,
      causes any old data in the read buffer to be forgotten, and any unread data
      to be moved to the start of the buffer. This is supposed to happen when we
      first read the record header.
      
      However, the data move was only taking place if there was not already
      sufficient data in the buffer to satisfy the request. If read_ahead is set
      then the record header could be in the buffer already from when we read the
      preceding record. So with read_ahead we can get into a situation where even
      though |clearold| is set, the data does not get moved to the start of the
      read buffer when we read the record header. This means there is insufficient
      room in the read buffer to consume the rest of the record body, resulting in
      an internal error.
      
      This commit moves the |clearold| processing to earlier in ssl3_read_n()
      to ensure that it always takes place.
      Reviewed-by: Richard Levitte <levitte@openssl.org>
      a7faa6da
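The fix is easiest to see as a buffer-compaction step. Below is a simplified sketch (the struct and function names are illustrative, not ssl3_read_n() itself) of what the commit makes unconditional: unread data is moved to the start of the read buffer before any "already enough data buffered" short-circuit can skip it.

```c
#include <assert.h>
#include <string.h>

/*
 * Simplified sketch of the |clearold| step: unread bytes are compacted to
 * the start of the buffer so the rest of the record body always has room
 * to land behind them. In the buggy version this step was skipped when
 * enough data was already buffered (as happens with read_ahead).
 */
struct read_buf {
    unsigned char buf[64];
    size_t offset;  /* where unread data currently starts */
    size_t left;    /* number of unread bytes */
};

static void clear_old(struct read_buf *rb)
{
    if (rb->offset > 0 && rb->left > 0)
        memmove(rb->buf, rb->buf + rb->offset, rb->left);
    rb->offset = 0;  /* done unconditionally, mirroring the fix */
}
```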
  3. 28 Oct 2016, 1 commit
    • M
      A zero return from BIO_read()/BIO_write() could be retryable · 4880672a
      Matt Caswell committed
      A zero return from BIO_read()/BIO_write() could mean that an IO operation
      is retryable. A zero return from SSL_read()/SSL_write() means that the
      connection has been closed down (either cleanly or not). Therefore we
      should not propagate a zero return value from BIO_read()/BIO_write() back
      up the stack to SSL_read()/SSL_write(). This could result in a retryable
      failure being treated as fatal.
      Reviewed-by: Richard Levitte <levitte@openssl.org>
      4880672a
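The distinction the commit relies on can be sketched in plain C (the enum and helper below are illustrative, not OpenSSL's API): a zero from the transport layer with the retry flag set must be reported upward as "try again", never as the SSL-level zero that means "connection closed".

```c
#include <assert.h>

/*
 * Illustrative sketch of the rule described above: keep the transport
 * layer's "no data right now, retry later" separate from the SSL level's
 * "connection closed", which is what a zero return means to callers of
 * SSL_read()/SSL_write().
 */
enum io_result {
    IO_CLOSED = 0,      /* what the SSL level reports on close */
    IO_WANT_RETRY = -1  /* retryable condition, caller should try again */
};

static int map_transport_result(int nread, int should_retry)
{
    if (nread > 0)
        return nread;          /* bytes transferred */
    if (nread == 0 && should_retry)
        return IO_WANT_RETRY;  /* do NOT propagate as a clean close */
    return IO_CLOSED;          /* genuine end of stream */
}
```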
  4. 22 Sep 2016, 2 commits
    • M
      Fix a hang with SSL_peek() · b8d24395
      Matt Caswell committed
      If while calling SSL_peek() we read an empty record then we go into an
      infinite loop, continually trying to read data from the empty record and
      never making any progress. This could be exploited by a malicious peer in
      a Denial Of Service attack.
      
      CVE-2016-6305
      
      GitHub Issue #1563
      Reviewed-by: Rich Salz <rsalz@openssl.org>
      b8d24395
    • M
      Don't allow too many consecutive warning alerts · af58be76
      Matt Caswell committed
      Certain warning alerts are ignored if they are received. This can mean that
      no progress will be made if one peer continually sends those warning alerts.
      Implement a count so that we abort the connection if we receive too many.
      
      Issue reported by Shi Lei.
      Reviewed-by: Rich Salz <rsalz@openssl.org>
      af58be76
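The countermeasure can be sketched as a small counter (the cap and all names here are illustrative, not OpenSSL's actual implementation): consecutive ignored warning alerts are counted, progress resets the count, and exceeding the cap aborts the connection.

```c
#include <assert.h>

/*
 * Minimal sketch of the scheme described above. The cap value and the
 * names are hypothetical, chosen only to show the shape of the fix.
 */
#define MAX_WARN_ALERTS 5

struct conn_state {
    int rcvd_warn_alerts;  /* consecutive ignored warning alerts so far */
};

/* Call on each ignored warning alert; returns 0 when we should abort. */
static int saw_warning_alert(struct conn_state *c)
{
    return ++c->rcvd_warn_alerts <= MAX_WARN_ALERTS;
}

/* Call whenever real progress is made, resetting the counter. */
static void saw_progress(struct conn_state *c)
{
    c->rcvd_warn_alerts = 0;
}
```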
  5. 16 Sep 2016, 1 commit
    • M
      Revert "Abort on unrecognised warning alerts" · 3c0c68ae
      Matt Caswell committed
      This reverts commit 77a6be4d.
      
      There were some unexpected side effects to this commit, e.g. in SSLv3 a
      "no_certificate" warning alert gets sent if a client does not send a
      Certificate during Client Auth. With the above commit this causes the
      connection to abort, which is incorrect. There may be some other edge cases
      like this so we need to have a rethink on this.
      Reviewed-by: Tim Hudson <tjh@openssl.org>
      3c0c68ae
  6. 13 Sep 2016, 1 commit
    • M
      Abort on unrecognised warning alerts · 77a6be4d
      Matt Caswell committed
      A peer continually sending unrecognised warning alerts could mean that we
      make no progress on a connection. We should abort rather than continuing if
      we receive an unrecognised warning alert.
      
      Thanks to Shi Lei for reporting this issue.
      Reviewed-by: Rich Salz <rsalz@openssl.org>
      77a6be4d
  7. 24 Aug 2016, 1 commit
  8. 18 Aug 2016, 1 commit
  9. 16 Aug 2016, 2 commits
  10. 30 Jul 2016, 1 commit
    • M
      Fix crash as a result of MULTIBLOCK · 58c27c20
      Matt Caswell committed
      The MULTIBLOCK code uses a "jumbo" sized write buffer which it allocates
      and then frees later. Pipelining however introduced multiple pipelines. It
      keeps track of how many pipelines are initialised using numwpipes.
      Unfortunately the MULTIBLOCK code was not updating this when it deallocated
      its buffers, leading to a buffer being marked as initialised but set to
      NULL.
      
      RT#4618
      Reviewed-by: Rich Salz <rsalz@openssl.org>
      58c27c20
  11. 20 Jul 2016, 1 commit
  12. 29 Jun 2016, 1 commit
  13. 27 Jun 2016, 1 commit
  14. 08 Jun 2016, 3 commits
  15. 27 May 2016, 1 commit
  16. 18 May 2016, 1 commit
  17. 02 May 2016, 2 commits
  18. 29 Apr 2016, 1 commit
  19. 05 Apr 2016, 2 commits
  20. 01 Apr 2016, 1 commit
  21. 08 Mar 2016, 7 commits
    • M
      Fix building without multiblock support · fa22f98f
      Matt Caswell committed
      Not all platforms support multiblock. Building without it fails prior to
      this fix.
      
      RT#4396
      Reviewed-by: Richard Levitte <levitte@openssl.org>
      fa22f98f
    • M
      Remove the wrec record layer field · f482740f
      Matt Caswell committed
      We used to use the wrec field in the record layer for keeping track of the
      current record that we are writing out. As part of the pipelining changes
      this has been moved to stack allocated variables to do the same thing,
      therefore the field is no longer needed.
      Reviewed-by: Tim Hudson <tjh@openssl.org>
      f482740f
    • M
      Add an SSL_has_pending() function · 49580f25
      Matt Caswell committed
      This is similar to SSL_pending() but just returns a 1 if there is data
      pending in the internal OpenSSL buffers or 0 otherwise (as opposed to
      SSL_pending() which returns the number of bytes available). Unlike
      SSL_pending() this will work even if "read_ahead" is set (which is the
      case if you are using read pipelining, or if you are doing DTLS). A 1
      return value means that we have unprocessed data. It does *not* necessarily
      indicate that there will be application data returned from a call to
      SSL_read(). The unprocessed data may not be application data or there
      could be errors when we attempt to parse the records.
      Reviewed-by: Tim Hudson <tjh@openssl.org>
      49580f25
    • M
      Add an ability to set the SSL read buffer size · dad78fb1
      Matt Caswell committed
      This capability is required for read pipelining. We will only read in as
      many records as will fit in the read buffer (and the network can provide
      in one go). The bigger the buffer the more records we can process in
      parallel.
      Reviewed-by: Tim Hudson <tjh@openssl.org>
      dad78fb1
    • M
      Lazily initialise the compression buffer · 0220fee4
      Matt Caswell committed
      With read pipelining we use multiple SSL3_RECORD structures for reading.
      There are SSL_MAX_PIPELINES (32) of them defined (typically not all of these
      would be used). Each one has a 16k compression buffer allocated! This
      results in a significant amount of memory being consumed which, most of the
      time, is not needed.  This change swaps the allocation of the compression
      buffer to be lazy so that it is only done immediately before it is actually
      used.
      Reviewed-by: Tim Hudson <tjh@openssl.org>
      0220fee4
    • M
      Implement read pipeline support in libssl · 94777c9c
      Matt Caswell committed
      Read pipelining is controlled in a slightly different way than with write
      pipelining. While reading we are constrained by the number of records that
      the peer (and the network) can provide to us in one go. The more records
      we can get in one go the more opportunity we have to parallelise the
      processing.
      
      There are two parameters that affect this:
      * The number of pipelines that we are willing to process in one go. This is
      controlled by max_pipelines (as for write pipelining)
      * The size of our read buffer. A subsequent commit will provide an API for
      adjusting the size of the buffer.
      
      Another requirement for this to work is that "read_ahead" must be set. The
      read_ahead parameter will attempt to read as much data into our read buffer
      as the network can provide. Without this set, data is read into the read
      buffer on demand. Setting the max_pipelines parameter to a value greater
      than 1 will automatically also turn read_ahead on.
      
      Finally, the read pipelining as currently implemented will only parallelise
      the processing of application data records. This would only make a
      difference for renegotiation so is unlikely to have a significant impact.
      Reviewed-by: Tim Hudson <tjh@openssl.org>
      94777c9c
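The "max_pipelines > 1 implies read_ahead" rule above can be sketched as follows (the struct and setter are illustrative, not OpenSSL's API): pipelined reads only help when we pull as much data as the network offers into the read buffer, so requesting more than one pipeline switches read-ahead on.

```c
#include <assert.h>

/*
 * Hypothetical configuration sketch of the rule described in the commit:
 * asking for more than one read pipeline implicitly enables read_ahead.
 */
struct pipeline_cfg {
    unsigned int max_pipelines;
    int read_ahead;
};

static void cfg_set_max_pipelines(struct pipeline_cfg *cfg, unsigned int n)
{
    cfg->max_pipelines = n;
    if (n > 1)
        cfg->read_ahead = 1;  /* implied by pipelining, per the commit */
}
```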
    • M
      Implement write pipeline support in libssl · d102d9df
      Matt Caswell committed
      Use the new pipeline cipher capability to encrypt multiple records being
      written out all in one go. Two new SSL/SSL_CTX parameters can be used to
      control how this works: max_pipelines and split_send_fragment.
      
      max_pipelines defines the maximum number of pipelines that can ever be used
      in one go for a single connection. It must always be less than or equal to
      SSL_MAX_PIPELINES (currently defined to be 32). By default only one
      pipeline will be used (i.e. normal non-parallel operation).
      
      split_send_fragment defines how data is split up into pipelines. The number
      of pipelines used will be determined by the amount of data provided to the
      SSL_write call divided by split_send_fragment. For example if
      split_send_fragment is set to 2000 and max_pipelines is 4 then:
      SSL_write called with 0-2000 bytes == 1 pipeline used
      SSL_write called with 2001-4000 bytes == 2 pipelines used
      SSL_write called with 4001-6000 bytes == 3 pipelines used
      SSL_write called with 6001+ bytes == 4 pipelines used
      
      split_send_fragment must always be less than or equal to max_send_fragment.
      By default it is set to be equal to max_send_fragment. This will mean that
      the same number of records will always be created as would have been
      created in the non-parallel case, although the data will be apportioned
      differently. In the parallel case data will be spread equally between the
      pipelines.
      Reviewed-by: Tim Hudson <tjh@openssl.org>
      d102d9df
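The mapping in the commit message reduces to simple arithmetic, sketched below with a hypothetical helper (not OpenSSL's API): the pipeline count is the write size divided by split_send_fragment, rounded up and clamped to [1, max_pipelines].

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of the split_send_fragment arithmetic described above. With
 * split_send_fragment = 2000 and max_pipelines = 4 this reproduces the
 * commit message's table.
 */
static unsigned int pipelines_used(size_t nbytes, size_t split_send_fragment,
                                   unsigned int max_pipelines)
{
    size_t n = (nbytes + split_send_fragment - 1) / split_send_fragment;

    if (n < 1)
        n = 1;              /* even an empty write uses one pipeline */
    if (n > max_pipelines)
        n = max_pipelines;  /* excess data is spread over the cap */
    return (unsigned int)n;
}
```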
  22. 12 Feb 2016, 1 commit
  23. 27 Jan 2016, 1 commit
    • R
      Remove /* foo.c */ comments · 34980760
      Rich Salz committed
      This was done by the following
              find . -name '*.[ch]' | /tmp/pl
      where /tmp/pl is the following three-line script:
              print unless $. == 1 && m@/\* .*\.[ch] \*/@;
              close ARGV if eof; # Close file to reset $.
      
      And then some hand-editing of other files.
      Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
      34980760
  24. 12 Jan 2016, 1 commit
  25. 06 Jan 2016, 1 commit
  26. 10 Nov 2015, 1 commit
  27. 02 Nov 2015, 1 commit
    • M
      Remove a reachable assert from ssl3_write_bytes · 1c2e5d56
      Matt Caswell committed
      A buggy application that calls SSL_write with a different length after an
      NBIO event could cause an OPENSSL_assert to be reached. The assert is not
      actually necessary because there was an explicit check a little further
      down that would catch this scenario. Therefore remove the assert and move
      the check a little higher up.
      Reviewed-by: Rich Salz <rsalz@openssl.org>
      1c2e5d56