1. 06 Dec 2016, 4 commits
  2. 17 Nov 2016, 1 commit
  3. 07 Nov 2016, 1 commit
  4. 04 Nov 2016, 11 commits
  5. 18 Oct 2016, 1 commit
    • Fix encrypt-then-mac implementation for DTLS · e23d5071
      David Woodhouse committed
      OpenSSL 1.1.0 will negotiate EtM on DTLS but will then not actually *do* it.
      
      If we use DTLSv1.2 that will hopefully be harmless since we'll tend to use
      an AEAD ciphersuite anyway. But if we're using DTLSv1, then we certainly
      will end up using CBC, so EtM is relevant — and we fail to interoperate with
      anything that implements EtM correctly.
      
      Fixing it in HEAD and 1.1.0c will mean that 1.1.0[ab] are incompatible with
      1.1.0c+... for the limited case of non-AEAD ciphers, where they're *already*
      incompatible with other implementations due to this bug anyway. That seems
      reasonable enough, so let's do it. The only alternative is just to turn it
      off for ever... which *still* leaves 1.1.0[ab] failing to communicate with
      non-OpenSSL implementations anyway.
      
      Tested against itself as well as against GnuTLS both with and without EtM.
      Reviewed-by: Tim Hudson <tjh@openssl.org>
      Reviewed-by: Matt Caswell <matt@openssl.org>
  6. 19 Aug 2016, 2 commits
    • Fix DTLS replay protection · 1fb9fdc3
      Matt Caswell committed
      The DTLS implementation provides some protection against replay attacks
      in accordance with RFC6347 section 4.1.2.6.
      
      A sliding "window" of valid record sequence numbers is maintained with
      the "right" hand edge of the window set to the highest sequence number we
      have received so far. Records that arrive off the "left" hand edge of
      the window are rejected. Records within the window are checked
      against a list of records received so far. If we have already
      received it then we reject the new record.
      
      If we have not already received the record, or the sequence number is off
      the right hand edge of the window then we verify the MAC of the record.
      If MAC verification fails then we discard the record. Otherwise we mark
      the record as received. If the sequence number was off the right hand edge
      of the window, then we slide the window along so that the right hand edge
      is in line with the newly received sequence number.
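      The window mechanics described above can be sketched with a 64-bit
      bitmap, in the style of the RFC 6347 anti-replay algorithm. This is an
      illustrative sketch only: the struct, function names and 64-entry window
      width are assumptions, not OpenSSL's actual implementation.

```c
/* Sliding-window replay check sketch (RFC 6347 s4.1.2.6 style).
 * Illustrative names and window size; not OpenSSL's actual code. */
#include <stdint.h>

#define WINDOW_SIZE 64

typedef struct {
    uint64_t top;    /* highest sequence number seen: the "right" edge */
    uint64_t bitmap; /* bit i set => sequence (top - i) already received */
} replay_window;

/* Return 1 if seq is acceptable (new, in-window and unseen, or past the
 * right edge); 0 if it must be rejected (replay or off the left edge). */
static int replay_check(const replay_window *w, uint64_t seq)
{
    if (seq > w->top)
        return 1;                        /* off the right edge: new */
    if (w->top - seq >= WINDOW_SIZE)
        return 0;                        /* off the left edge: too old */
    return ((w->bitmap >> (w->top - seq)) & 1) ? 0 : 1;
}

/* Mark seq as received; slide the window if seq is past the right edge.
 * Per the commit above, this must only happen after MAC verification. */
static void replay_update(replay_window *w, uint64_t seq)
{
    if (seq > w->top) {
        uint64_t shift = seq - w->top;
        w->bitmap = (shift >= WINDOW_SIZE) ? 0 : (w->bitmap << shift);
        w->bitmap |= 1;                  /* bit 0 is the new right edge */
        w->top = seq;
    } else if (w->top - seq < WINDOW_SIZE) {
        w->bitmap |= (uint64_t)1 << (w->top - seq);
    }
}
```

      The attack described below exploits exactly the case where
      replay_update() is called for a record whose MAC was never verified.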
      
      Records may arrive for future epochs, i.e. a record sent after a CCS
      can arrive before the CCS does if the packets get re-ordered. As we
      have not yet received the CCS we are not yet in a position to decrypt or
      validate the MAC of those records. OpenSSL places those records on an
      unprocessed records queue. It additionally updates the window immediately,
      even though we have not yet verified the MAC. This only occurs while a
      handshake/renegotiation is in progress.
      
      This could be exploited by an attacker by sending a record for the next
      epoch (which does not have to decrypt or have a valid MAC), with a very
      large sequence number. This means the right hand edge of the window is
      moved very far to the right, and all subsequent legitimate packets are
      dropped causing a denial of service.
      
      A similar effect can be achieved during the initial handshake. In this
      case there is no MAC key negotiated yet. Therefore an attacker can send a
      message for the current epoch with a very large sequence number. The code
      will process the record as normal. If the handshake message sequence number
      (as opposed to the record sequence number that we have been talking about
      so far) is in the future then the injected message is buffered to be
      handled later, but the window is still updated. Therefore all subsequent
      legitimate handshake records are dropped. This aspect is not considered a
      security issue because there are many ways for an attacker to disrupt the
      initial handshake and prevent it from completing successfully (e.g.
      injection of a handshake message will cause the Finished MAC to fail and
      the handshake to be aborted). This issue comes about as a result of trying
      to do replay protection, but having no integrity mechanism in place yet.
      Does it even make sense to have replay protection in epoch 0? That
      issue isn't addressed here though.
      
      This addresses an OCAP Audit issue.
      
      CVE-2016-2181
      Reviewed-by: Richard Levitte <levitte@openssl.org>
    • Fix DTLS unprocessed records bug · 738ad946
      Matt Caswell committed
      During a DTLS handshake we may get records destined for the next epoch
      arrive before we have processed the CCS. In that case we can't decrypt or
      verify the record yet, so we buffer it for later use. When we do receive
      the CCS we work through the queue of unprocessed records and process them.
      
      Unfortunately the act of processing wipes out any existing packet data
      that we were still working through. This includes any records from the new
      epoch that were in the same packet as the CCS. We should only process the
      buffered records if we've not got any data left.
      Reviewed-by: Richard Levitte <levitte@openssl.org>
  7. 18 Aug 2016, 1 commit
  8. 16 Aug 2016, 4 commits
  9. 09 Aug 2016, 1 commit
  10. 29 Jul 2016, 1 commit
    • Make the checks for an SSLv2 style record stricter · 0647719d
      Matt Caswell committed
      SSLv2 is no longer supported in 1.1.0, however we *do* still accept an SSLv2
      style ClientHello, as long as we then subsequently negotiate a protocol
      version >= SSLv3. The record format for SSLv2 style ClientHellos is quite
      different to SSLv3+. We only accept this format in the first record of an
      initial ClientHello. Previously we checked this by confirming
      s->first_packet is set and s->server is true. However, this really only
      tells us that we are dealing with an initial ClientHello, not that it is
      the first record (s->first_packet is badly named...it really means this is
      the first message). To check this is the first record of the initial
      ClientHello we should also check that we've not received any data yet
      (s->init_num == 0), and that we've not had any empty records.
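      The stricter condition might be sketched as follows. The struct and
      helper here are hypothetical, with field names borrowed from the
      description above (first_packet, server, init_num); this is not
      OpenSSL's actual code.

```c
/* Sketch of the stricter "is this the first record of an initial
 * ClientHello?" test. Illustrative names only. */
#include <stddef.h>

struct conn_state {
    int first_packet;           /* processing an initial ClientHello */
    int server;                 /* we are the server side */
    size_t init_num;            /* handshake bytes received so far */
    unsigned int empty_records; /* empty records seen so far */
};

/* Accept the SSLv2-style record format only for the very first record:
 * server side, initial ClientHello, no data and no empty records yet. */
static int sslv2_format_allowed(const struct conn_state *s)
{
    return s->server
        && s->first_packet
        && s->init_num == 0
        && s->empty_records == 0;
}
```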
      
      GitHub Issue #1298
      Reviewed-by: Emilia Käsper <emilia@openssl.org>
  11. 20 Jul 2016, 1 commit
  12. 15 Jul 2016, 1 commit
  13. 08 Jun 2016, 1 commit
    • Reject out of context empty records · 255cfeac
      Matt Caswell committed
      Previously if we received an empty record we just threw it away and
      ignored it. Really though if we get an empty record of a different content
      type to what we are expecting then that should be an error, i.e. we should
      reject out of context empty records. This commit makes the necessary changes
      to achieve that.
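      The new rule amounts to a simple check on each received record: an
      empty record is tolerated only when its content type matches what we
      are currently expecting. This is an illustrative sketch with made-up
      names, not the code from the commit.

```c
/* Sketch of "reject out-of-context empty records". Illustrative only. */
#include <stddef.h>

enum { RECORD_OK, RECORD_REJECT };

/* TLS content types, for the usage example: 22 = handshake, 23 = app data. */
static int check_empty_record(int rec_type, int expected_type, size_t rec_len)
{
    if (rec_len == 0 && rec_type != expected_type)
        return RECORD_REJECT;   /* empty record of an unexpected type */
    return RECORD_OK;           /* non-empty, or empty but in context */
}
```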
      
      RT#4395
      Reviewed-by: Andy Polyakov <appro@openssl.org>
  14. 18 May 2016, 1 commit
  15. 17 May 2016, 3 commits
    • Add a comment to explain the use of |num_recs| · be9c8deb
      Matt Caswell committed
      In the SSLV2ClientHello processing code in ssl3_get_record, the value of
      |num_recs| will always be 0. This isn't obvious from the code so a comment
      is added to explain it.
      Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
    • Use the current record offset in ssl3_get_record · de0717eb
      Matt Caswell committed
      The function ssl3_get_record() can obtain multiple records in one go
      as long as we are set up for pipelining and all the records are app
      data records. The logic in the while loop which reads in each record is
      supposed to only continue looping if the last record we read was app data
      and we have an app data record waiting in the buffer to be processed. It
      was actually checking that the first record had app data and that an
      app data record was waiting. This amounts to the same thing, so it
      wasn't wrong - but it looks a bit odd because it uses the |rr| array
      without an offset.
      Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
    • There is only one read buffer · 6da57392
      Matt Caswell committed
      Pipelining introduced the concept of multiple records being read in one
      go. Therefore we work with an array of SSL3_RECORD objects. The pipelining
      change erroneously made a change in ssl3_get_record() to apply the current
      record offset to the SSL3_BUFFER we are using for reading. This is wrong -
      there is only ever one read buffer. This reverts that change. In practice
      this should make little difference because the code block in question is
      only ever used when we are processing a single record.
      Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
  16. 08 Mar 2016, 6 commits
    • Rename the numpipes argument to ssl3_enc/tls1_enc · 37205971
      Matt Caswell committed
      The numpipes argument to ssl3_enc/tls1_enc is actually the number of
      records passed in the array. To make this clearer rename the argument to
      |n_recs|.
      Reviewed-by: Tim Hudson <tjh@openssl.org>
    • Rename a function · ea71906e
      Matt Caswell committed
      Rename the have_whole_app_data_record_waiting() function to include the
      ssl3_record prefix...and make it a bit shorter.
      Reviewed-by: Tim Hudson <tjh@openssl.org>
    • Update a comment · d3b324a1
      Matt Caswell committed
      Update a comment that was out of date due to the pipelining changes
      Reviewed-by: Tim Hudson <tjh@openssl.org>
    • Lazily initialise the compression buffer · 0220fee4
      Matt Caswell committed
      With read pipelining we use multiple SSL3_RECORD structures for reading.
      There are SSL_MAX_PIPELINES (32) of them defined (typically not all of these
      would be used). Each one has a 16k compression buffer allocated! This
      results in a significant amount of memory being consumed which, most of the
      time, is not needed.  This change swaps the allocation of the compression
      buffer to be lazy so that it is only done immediately before it is actually
      used.
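      The lazy-allocation pattern described above can be sketched as below;
      the struct, helper name and 16k size are illustrative assumptions, not
      the actual SSL3_RECORD code.

```c
/* Lazy allocation sketch: allocate the per-record compression buffer
 * only on first use, instead of up front for all 32 pipeline slots.
 * Illustrative names and size; not OpenSSL's actual code. */
#include <stdlib.h>

#define COMP_BUF_SIZE (16 * 1024)

typedef struct {
    unsigned char *comp; /* NULL until compression is first needed */
} record_slot;

/* Return the record's compression buffer, allocating it on first use.
 * Returns NULL only if the allocation fails. */
static unsigned char *get_comp_buffer(record_slot *r)
{
    if (r->comp == NULL)
        r->comp = malloc(COMP_BUF_SIZE);
    return r->comp;
}
```

      The saving is simply 32 x 16k of memory per connection in the common
      case where compression is never used.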
      Reviewed-by: Tim Hudson <tjh@openssl.org>
    • Implement read pipeline support in libssl · 94777c9c
      Matt Caswell committed
      Read pipelining is controlled slightly differently from write
      pipelining. While reading we are constrained by the number of records that
      the peer (and the network) can provide to us in one go. The more records
      we can get in one go the more opportunity we have to parallelise the
      processing.
      
      There are two parameters that affect this:
      * The number of pipelines that we are willing to process in one go. This is
      controlled by max_pipelines (as for write pipelining)
      * The size of our read buffer. A subsequent commit will provide an API for
      adjusting the size of the buffer.
      
      Another requirement for this to work is that "read_ahead" must be set. The
      read_ahead parameter will attempt to read as much data into our read buffer
      as the network can provide. Without this set, data is read into the read
      buffer on demand. Setting the max_pipelines parameter to a value greater
      than 1 will automatically also turn read_ahead on.
      
      Finally, the read pipelining as currently implemented will only parallelise
      the processing of application data records. This would only make a
      difference for renegotiation so is unlikely to have a significant impact.
      Reviewed-by: Tim Hudson <tjh@openssl.org>
    • Implement write pipeline support in libssl · d102d9df
      Matt Caswell committed
      Use the new pipeline cipher capability to encrypt multiple records being
      written out all in one go. Two new SSL/SSL_CTX parameters can be used to
      control how this works: max_pipelines and split_send_fragment.
      
      max_pipelines defines the maximum number of pipelines that can ever be used
      in one go for a single connection. It must always be less than or equal to
      SSL_MAX_PIPELINES (currently defined to be 32). By default only one
      pipeline will be used (i.e. normal non-parallel operation).
      
      split_send_fragment defines how data is split up into pipelines. The number
      of pipelines used will be determined by the amount of data provided to the
      SSL_write call divided by split_send_fragment. For example if
      split_send_fragment is set to 2000 and max_pipelines is 4 then:
      SSL_write called with 0-2000 bytes == 1 pipeline used
      SSL_write called with 2001-4000 bytes == 2 pipelines used
      SSL_write called with 4001-6000 bytes == 3 pipelines used
      SSL_write called with 6001+ bytes == 4 pipelines used
      
      split_send_fragment must always be less than or equal to max_send_fragment.
      By default it is set to be equal to max_send_fragment. This will mean that
      the same number of records will always be created as would have been
      created in the non-parallel case, although the data will be apportioned
      differently. In the parallel case data will be spread equally between the
      pipelines.
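      The apportioning rule above is a ceiling division capped at
      max_pipelines. This helper is illustrative arithmetic only, not an
      OpenSSL API:

```c
/* Number of pipelines used for a write of nbytes, per the rule above:
 * ceil(nbytes / split_send_fragment), capped at max_pipelines.
 * Illustrative helper, not an OpenSSL function. */
#include <stddef.h>

static size_t pipelines_used(size_t nbytes, size_t split_send_fragment,
                             size_t max_pipelines)
{
    size_t n = (nbytes + split_send_fragment - 1) / split_send_fragment;
    if (n == 0)
        n = 1;                 /* even an empty write produces one record */
    return n > max_pipelines ? max_pipelines : n;
}
```

      With split_send_fragment = 2000 and max_pipelines = 4 this reproduces
      the table above: 2000 bytes use 1 pipeline, 2001 use 2, and anything
      from 6001 up is capped at 4.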
      Reviewed-by: Tim Hudson <tjh@openssl.org>