1. 28 May 2014 (3 commits)
    • http: default text charset to iso-8859-1 · c553fd1c
      Committed by Jeff King
      This is specified by RFC 2616 as the default if no "charset"
      parameter is given.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
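
      A minimal sketch of that fallback (the helper name is hypothetical,
      not git's):

          /* Use RFC 2616's default when no charset parameter is present. */
          const char *effective_charset(const char *charset)
          {
                  if (!charset || !*charset)
                          return "ISO-8859-1";
                  return charset;
          }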
    • http: optionally extract charset parameter from content-type · e3131626
      Committed by Jeff King
      Since the previous commit, we now give a sanitized,
      shortened version of the content-type header to any callers
      who ask for it.
      
      This patch adds back a way for them to cleanly access
      specific parameters to the type. We could easily extract all
      parameters and make them available via a string_list, but:
      
        1. That complicates the interface and memory management.
      
        2. In practice, no planned callers care about anything
           except the charset.
      
      This patch therefore goes with the simplest thing, and we
      can expand or change the interface later if it becomes
      necessary.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
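
      A sketch of such an extractor, assuming a simplified parser that
      ignores quoted-string parameter values (function name is
      illustrative):

          #include <ctype.h>
          #include <stdio.h>
          #include <string.h>
          #include <strings.h>

          /* Find "name=value" among the parameters and copy the value out. */
          static int get_content_type_param(const char *header, const char *name,
                                            char *out, size_t outlen)
          {
                  size_t namelen = strlen(name);
                  const char *p = strchr(header, ';');

                  while (p) {
                          p++;
                          while (isspace((unsigned char)*p))
                                  p++;
                          if (!strncasecmp(p, name, namelen) && p[namelen] == '=') {
                                  p += namelen + 1;
                                  snprintf(out, outlen, "%.*s",
                                           (int)strcspn(p, "; \t"), p);
                                  return 1;
                          }
                          p = strchr(p, ';');
                  }
                  return 0;
          }

      Called as get_content_type_param("text/plain; charset=utf-8",
      "charset", buf, sizeof(buf)), this leaves "utf-8" in buf.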
    • http: extract type/subtype portion of content-type · bf197fd7
      Committed by Jeff King
      When we get a content-type from curl, we get the whole
      header line, including any parameters, and without any
      normalization (like downcasing or whitespace) applied.
      If we later try to match it with strcmp() or even
      strcasecmp(), we may get false negatives.
      
      This could cause two visible behaviors:
      
        1. We might fail to recognize a smart-http server by its
           content-type.
      
        2. We might fail to relay text/plain error messages to
           users (especially if they contain a charset parameter).
      
      This patch teaches the http code to extract and normalize
      just the type/subtype portion of the string. This is
      technically passing out less information to the callers, who
      can no longer see the parameters. But none of the current
      callers cares, and a future patch will add back an
      easier-to-use method for accessing those parameters.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
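
      A stand-alone sketch of the normalization (git's real helper writes
      into a strbuf instead of a fixed buffer):

          #include <ctype.h>
          #include <stddef.h>

          /* Keep only the downcased type/subtype; stop at parameters. */
          static void extract_content_type(const char *raw, char *out, size_t len)
          {
                  size_t i = 0;

                  while (isspace((unsigned char)*raw))
                          raw++;
                  while (*raw && *raw != ';' && !isspace((unsigned char)*raw) &&
                         i + 1 < len)
                          out[i++] = tolower((unsigned char)*raw++);
                  out[i] = '\0';
          }

      This turns "TEXT/PLAIN; charset=UTF-8" into "text/plain", so a later
      strcmp() matches reliably.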
  2. 25 Feb 2014 (1 commit)
  3. 19 Feb 2014 (1 commit)
    • http: never use curl_easy_perform · beed336c
      Committed by Jeff King
      We currently don't reuse http connections when fetching via
      the smart-http protocol. This is bad because the TCP
      handshake introduces latency, and especially because SSL
      connection setup may be non-trivial.
      
      We can fix it by consistently using curl's "multi"
      interface.  The reason is rather complicated:
      
      Our http code has two ways of being used: queuing many
      "slots" to be fetched in parallel, or fetching a single
      request in a blocking manner. The parallel code is built on
      curl's "multi" interface. Most of the single-request code
      uses http_request, which is built on top of the parallel
      code (we just feed it one slot, and wait until it finishes).
      
      However, one could also accomplish the single-request scheme
      by avoiding curl's multi interface entirely and just using
      curl_easy_perform. This is simpler, and is used by post_rpc
      in the smart-http protocol.
      
      It does work to use the same curl handle in both contexts,
      as long as it is not at the same time.  However, internally
      curl may not share all of the cached resources between both
      contexts. In particular, a connection formed using the
      "multi" code will go into a reuse pool connected to the
      "multi" object. Further requests using the "easy" interface
      will not be able to reuse that connection.
      
      The smart http protocol does ref discovery via http_request,
      which uses the "multi" interface, and then follows up with
      the "easy" interface for its rpc calls. As a result, we make
      two HTTP connections rather than reusing a single one.
      
      We could teach the ref discovery to use the "easy"
      interface. But it is only once we have done this discovery
      that we know whether the protocol will be smart or dumb. If
      it is dumb, then our further requests, which want to fetch
      objects in parallel, will not be able to reuse the same
      connection.
      
      Instead, this patch switches post_rpc to build on the
      parallel interface, which means that we use it consistently
      everywhere. It's a little more complicated to use, but since
      we have the infrastructure already, it doesn't add any code;
      we can just factor out the relevant bits from http_request.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
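
      A sketch of driving one blocking request through the multi
      interface; curl_multi_wait (curl 7.28.0 and newer) stands in here
      for git's own select() loop:

          #include <curl/curl.h>

          static void perform_via_multi(CURLM *multi, CURL *easy)
          {
                  int running;

                  curl_multi_add_handle(multi, easy);
                  do {
                          if (curl_multi_perform(multi, &running) != CURLM_OK)
                                  break;
                          if (running)
                                  curl_multi_wait(multi, NULL, 0, 1000, NULL);
                  } while (running);
                  curl_multi_remove_handle(multi, easy);
          }

      Any connection opened this way lands in the multi handle's reuse
      pool, where later parallel requests can find it.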
  4. 06 Dec 2013 (1 commit)
    • replace {pre,suf}fixcmp() with {starts,ends}_with() · 59556548
      Committed by Christian Couder
      Leaving only the function definitions and declarations so that any
      new topic in flight can still make use of the old functions, replace
      existing uses of the prefixcmp() and suffixcmp() with new API
      functions.
      
      The change can be recreated by mechanically applying this:
      
          $ git grep -l -e prefixcmp -e suffixcmp -- \*.c |
            grep -v strbuf\\.c |
            xargs perl -pi -e '
              s|!prefixcmp\(|starts_with\(|g;
              s|prefixcmp\(|!starts_with\(|g;
              s|!suffixcmp\(|ends_with\(|g;
              s|suffixcmp\(|!ends_with\(|g;
            '
      
      on the result of preparatory changes in this series.
      Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
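
      For reference, the replacement helpers behave like this minimal
      sketch (git's own definitions may differ in detail):

          #include <string.h>

          int starts_with(const char *str, const char *prefix)
          {
                  return !strncmp(str, prefix, strlen(prefix));
          }

          int ends_with(const char *str, const char *suffix)
          {
                  size_t len = strlen(str), suflen = strlen(suffix);

                  return len >= suflen && !strcmp(str + len - suflen, suffix);
          }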
  5. 01 Nov 2013 (1 commit)
    • http: return curl's AUTHAVAIL via slot_results · 0972ccd9
      Committed by Jeff King
      Callers of the http code may want to know which auth types
      were available for the previous request. But after finishing
      with the curl slot, they are not supposed to look at the
      curl handle again. We already handle returning other
      information via the slot_results struct; let's add a flag to
      check the available auth.
      
      Note that older versions of curl did not support this, so we
      simply return 0 (something like "-1" would be worse, as the
      value is a bitflag and we might accidentally set a flag).
      This is sufficient for the callers planned in this series,
      who only trigger some optional behavior if particular bits
      are set, and can live with a fake "no bits" answer.
      Signed-off-by: Jeff King <peff@peff.net>
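
      A sketch of the fallback, guarded for curl 7.10.8 where
      CURLINFO_HTTPAUTH_AVAIL first appeared (helper name is
      illustrative):

          #include <curl/curl.h>

          static long auth_avail(CURL *curl)
          {
          #if LIBCURL_VERSION_NUM >= 0x070a08 /* 7.10.8 */
                  long avail;

                  if (curl_easy_getinfo(curl, CURLINFO_HTTPAUTH_AVAIL, &avail) == CURLE_OK)
                          return avail;
          #endif
                  return 0; /* fake "no bits"; -1 would look like set flags */
          }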
  6. 25 Oct 2013 (1 commit)
  7. 17 Oct 2013 (1 commit)
    • http: use curl's tcp keepalive if available · 47ce1153
      Committed by Jeff King
      Commit a15d069a taught git to use curl's SOCKOPTFUNCTION hook
      to turn on TCP keepalives. However, modern versions of curl
      have a TCP_KEEPALIVE option, which can do this for us. As an
      added bonus, the curl code knows how to turn on keepalive
      for a much wider variety of platforms. The only downside to
      using this option is that not everybody has a new enough curl.
      Let's split our keepalive options into three conditionals:
      
        1. With curl 7.25.0 and newer, we rely on curl to do it
           right.
      
        2. With older curl that still knows SOCKOPTFUNCTION, we
           use the code from a15d069a.
      
        3. Otherwise, we are out of luck, and the call is a no-op.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
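
      A condensed sketch of the three-way split, assuming a handle named
      curl and the sockopt_callback from a15d069a (0x071900 is curl
      7.25.0, 0x071000 is 7.16.0):

          #if LIBCURL_VERSION_NUM >= 0x071900 /* 7.25.0: curl does it for us */
                  curl_easy_setopt(curl, CURLOPT_TCP_KEEPALIVE, 1L);
          #elif LIBCURL_VERSION_NUM >= 0x071000 /* 7.16.0: set it ourselves */
                  curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, sockopt_callback);
          #else
                  /* no-op: this curl offers no way to request keepalives */
          #endif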
  8. 15 Oct 2013 (3 commits)
    • http: update base URLs when we see redirects · c93c92f3
      Committed by Jeff King
      If a caller asks the http_get_* functions to go to a
      particular URL and we end up elsewhere due to a redirect,
      the effective_url field can tell us where we went.
      
      It would be nice to remember this redirect and short-cut
      further requests for two reasons:
      
        1. It's more efficient. Otherwise we spend an extra http
           round-trip to the server for each subsequent request,
           just to get redirected.
      
        2. If we end up with an http 401 and are going to ask for
           credentials, it is to feed them to the redirect target.
           If the redirect is an http->https upgrade, this means
           our credentials may be provided on the http leg, just
           to end up redirected to https. And if the redirect
           crosses server boundaries, then curl will drop the
           credentials entirely as it follows the redirect.
      
However, it is not enough to simply record the effective
      URL we saw and use that for subsequent requests. We were
      originally fed a "base" url like:
      
         http://example.com/foo.git
      
      and we want to figure out what the new base is, even though
      the URLs we see may be:
      
           original: http://example.com/foo.git/info/refs
          effective: http://example.com/bar.git/info/refs
      
      Subsequent requests will not be for "info/refs", but for
      other paths relative to the base. We must ask the caller to
      pass in the original base, and we must pass the redirected
      base back to the caller (so that it can generate more URLs
      from it). Furthermore, we need to feed the new base to the
      credential code, so that requests to credential helpers (or
      to the user) match the URL we will be requesting.
      
      This patch teaches http_request_reauth to do this munging.
      Since it is the caller who cares about making more URLs, it
      seems at first glance that callers could simply check
      effective_url themselves and handle it. However, since we
      need to update the credential struct before the second
      re-auth request, we have to do it inside http_request_reauth.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
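
      A simplified sketch of the munging: strip the known request tail
      from the effective URL to recover the new base (names here are
      illustrative, not git's):

          #include <stdlib.h>
          #include <string.h>

          /* asked = base + tail; if effective ends with the same tail,
           * the new base is effective minus that tail. */
          static char *updated_base(const char *asked, const char *base,
                                    const char *effective)
          {
                  size_t taillen = strlen(asked) - strlen(base);
                  size_t efflen = strlen(effective);

                  if (efflen < taillen ||
                      strcmp(asked + strlen(base), effective + efflen - taillen))
                          return NULL; /* the tail changed too; give up */
                  return strndup(effective, efflen - taillen);
          }

      With the example above, updated_base() maps the base
      http://example.com/foo.git to http://example.com/bar.git.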
    • http: provide effective url to callers · 78868962
      Committed by Jeff King
      When we ask curl to access a URL, it may follow one or more
      redirects to reach the final location. We have no idea
      this has happened, as curl takes care of the details and
      simply returns the final content to us.
      
      The final URL that we ended up with can be accessed via
      CURLINFO_EFFECTIVE_URL. Let's make that optionally available
      to callers of http_get_*, so that they can make further
      decisions based on the redirection.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
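
      The underlying query is a one-liner; the returned pointer belongs
      to curl, so it must be copied before the handle is reused
      (effective_url_buf stands in for the caller's strbuf):

          char *effective = NULL;

          if (curl_easy_getinfo(curl, CURLINFO_EFFECTIVE_URL, &effective) == CURLE_OK
              && effective)
                  strbuf_addstr(effective_url_buf, effective); /* copy it */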
    • http: hoist credential request out of handle_curl_result · 2501aff8
      Committed by Jeff King
      When we are handling a curl response code in http_request or
      in the remote-curl RPC code, we use the handle_curl_result
      helper to translate curl's response into an easy-to-use
      code. When we see an HTTP 401, we do one of two things:
      
        1. If we already had a filled-in credential, we mark it as
           rejected, and then return HTTP_NOAUTH to indicate to
           the caller that we failed.
      
        2. If we didn't, then we ask for a new credential and tell
           the caller HTTP_REAUTH to indicate that they may want
           to try again.
      
      Rejecting in the first case makes sense; it is the natural
      result of the request we just made. However, prompting for
      more credentials in the second step does not always make
      sense. We do not know for sure that the caller is going to
      make a second request, and nor are we sure that it will be
      to the same URL. Logically, the prompt belongs not to the
      request we just finished, but to the request we are (maybe)
      about to make.
      
      In practice, it is very hard to trigger any bad behavior.
      Currently, if we make a second request, it will always be to
      the same URL (even in the face of redirects, because curl
      handles the redirects internally). And we almost always
      retry on HTTP_REAUTH these days. The one exception is if we
      are streaming a large RPC request to the server (e.g., a
      pushed packfile), in which case we cannot restart. It's
      extremely unlikely to see a 401 response at this stage,
      though, as we would typically have seen it when we sent a
      probe request, before streaming the data.
      
      This patch drops the automatic prompt out of case 2, and
      instead requires the caller to do it. This is a few extra
      lines of code, and the bug it fixes is unlikely to come up
      in practice. But it is conceptually cleaner, and paves the
      way for better handling of credentials across redirects.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
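
      After this change a retrying caller owns the prompt. Roughly the
      shape such a caller takes (heavily simplified; the real
      http_request takes more arguments and later also handles the
      redirect munging):

          static int http_request_reauth(const char *url, struct strbuf *result)
          {
                  int ret = http_request(url, result);

                  if (ret != HTTP_REAUTH)
                          return ret;
                  credential_fill(&http_auth); /* prompt just before the retry */
                  return http_request(url, result);
          }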
  9. 14 Oct 2013 (1 commit)
    • http: enable keepalive on TCP sockets · a15d069a
      Committed by Eric Wong
      This is a follow up to commit e47a8583 (enable SO_KEEPALIVE for
      connected TCP sockets, 2011-12-06).
      
      Sockets may never receive notification of some link errors,
      causing "git fetch" or similar processes to hang forever.
      Enabling keepalive messages allows hung processes to error out
      after a few minutes/hours depending on the keepalive settings of
      the system.
      
      I noticed this problem with some non-interactive cronjobs getting
      hung when talking to HTTP servers.
      Signed-off-by: Eric Wong <normalperson@yhbt.net>
      Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
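
      A sketch along the lines of what this commit adds, using curl's
      SOCKOPTFUNCTION hook:

          #include <curl/curl.h>
          #include <sys/socket.h>

          static int sockopt_callback(void *clientp, curl_socket_t fd,
                                      curlsocktype purpose)
          {
                  int ka = 1;

                  if (purpose == CURLSOCKTYPE_IPCXN)
                          setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE,
                                     (void *)&ka, sizeof(ka));
                  return 0; /* tell curl the socket is fine to use */
          }

          /* installed once per handle:
           * curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, sockopt_callback); */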
  10. 01 Oct 2013 (3 commits)
    • http: refactor options to http_get_* · 1bbcc224
      Committed by Jeff King
      Over time, the http_get_strbuf function has grown several
      optional parameters. We now have a bitfield with multiple
      boolean options, as well as an optional strbuf for returning
      the content-type of the response. And a future patch in this
      series is going to add another strbuf option.
      
      Treating these as separate arguments has a few downsides:
      
        1. Most call sites need to add extra NULLs and 0s for the
           options they aren't interested in.
      
        2. The http_get_* functions are actually wrappers around
           2 layers of low-level implementation functions. We have
           to pass these options through individually.
      
        3. The http_get_strbuf wrapper learned these options, but
           nobody bothered to do so for http_get_file, even though
           it is backed by the same function that does understand
           the options.
      
      Let's consolidate the options into a single struct. For the
      common case of the default options, we'll allow callers to
      simply pass a NULL for the options struct.
      
      The resulting code is often a few lines longer, but it ends
      up being easier to read (and to change as we add new
      options, since we do not need to update each call site).
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
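
      A sketch of the consolidated interface (field list abridged and
      approximate; git's struct has grown since):

          struct http_get_options {
                  unsigned no_cache:1;
                  struct strbuf *content_type;  /* optional, may be NULL */
                  struct strbuf *effective_url; /* optional, may be NULL */
          };

          /* options == NULL means "all defaults" */
          int http_get_strbuf(const char *url, struct strbuf *result,
                              struct http_get_options *options);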
    • http_request: factor out curlinfo_strbuf · 132b70a2
      Committed by Jeff King
      When we retrieve the content-type of an http response, curl
      gives us a pointer to internal storage, which we then copy
      into a strbuf. Let's factor out the get-and-copy routine,
      which can be used for getting other curl info.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
    • http_get_file: style fixes · 3d1fb769
      Committed by Jeff King
      Besides being ugly, the extra parentheses are idiomatic for
      suppressing compiler warnings when we are assigning within a
      conditional. We aren't doing that here, and they just
      confuse the reader.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
  11. 06 Aug 2013 (1 commit)
  12. 01 Aug 2013 (1 commit)
  13. 31 Jul 2013 (1 commit)
  14. 20 Jun 2013 (1 commit)
    • http.c: don't rewrite the user:passwd string multiple times · a94cf2cb
      Committed by Brandon Casey
      Curl older than 7.17 (RHEL 4.X provides 7.12 and RHEL 5.X provides
      7.15) requires that we manage any strings that we pass to it as
      pointers.  So, we really shouldn't be modifying this strbuf after we
      have passed it to curl.
      
      Our interaction with curl is currently safe (before or after this
      patch) since the pointer that is passed to curl is never invalidated;
      it is repeatedly rewritten with the same sequence of characters but
      the strbuf functions never need to allocate a larger string, so the
      same memory buffer is reused.
      
      This "guarantee" of safety is somewhat subtle and could be overlooked
      by someone who may want to add a more complex handling of the username
      and password.  So, let's stop modifying this strbuf after we have
      passed it to curl, but also leave a note to describe the assumptions
      that have been made about username/password lifetime and to draw
      attention to the code.
      Signed-off-by: Brandon Casey <drafnel@gmail.com>
      Acked-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  15. 17 Apr 2013 (1 commit)
    • http: set curl FAILONERROR each time we select a handle · b793acf1
      Committed by Jeff King
      Because we reuse curl handles for multiple requests, the
      setup of a handle happens in two stages: stable, global
      setup and per-request setup. The lifecycle of a handle is
      something like:
      
        1. get_curl_handle; do basic global setup that will last
           through the whole program (e.g., setting the user
           agent, ssl options, etc)
      
        2. get_active_slot; set up a per-request baseline (e.g.,
           clearing the read/write functions, making it a GET
           request, etc)
      
        3. perform the request with curl_*_perform functions
      
        4. goto step 2 to perform another request
      
      Breaking it down this way means we can avoid doing global
      setup from step (1) repeatedly, but we still finish step (2)
      with a predictable baseline setup that callers can rely on.
      
      Until commit 6d052d78 (http: add HTTP_KEEP_ERROR option,
      2013-04-05), setting curl's FAILONERROR option was a global
      setup; we never changed it. However, 6d052d78 introduced an
      option where some requests might turn off FAILONERROR. Later
      requests using the same handle would have the option
      unexpectedly turned off, which meant they would not notice
      http failures at all.
      
      This could easily be seen in the test-suite for the
      "half-auth" cases of t5541 and t5551. The initial requests
      turned off FAILONERROR, which meant it was erroneously off
      for the rpc POST. That worked fine for a successful request,
      but meant that we failed to react properly to the HTTP 401
      (instead, we treated whatever the server handed us as a
      successful message body).
      
      The solution is simple: now that FAILONERROR is a
      per-request setting, we move it to get_active_slot to make
      sure it is reset for each request.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
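
      A sketch of the fix, reduced to the essential line in the
      per-request baseline:

          /* in get_active_slot(), step 2 of the lifecycle above */
          curl_easy_setopt(slot->curl, CURLOPT_FAILONERROR, 1);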
  16. 12 Apr 2013 (1 commit)
  17. 07 Apr 2013 (4 commits)
    • http: drop http_error function · 4df13f69
      Committed by Jeff King
      This function is a single-liner and is only called from one
      place. Just inline it, which makes the code more obvious.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • http: re-word http error message · 39a570f2
      Committed by Jeff King
      When we report an http error code, we say something like:
      
        error: The requested URL reported failure: 403 Forbidden while accessing http://example.com/repo.git
      
      Everything between "error:" and "while" is written by curl,
      and the resulting sentence is hard to read (especially
      because there is no punctuation between curl's sentence and
      the remainder of ours). Instead, let's re-order this to give
      better flow:
      
  error: unable to access 'http://example.com/repo.git': The requested URL reported failure: 403 Forbidden
      
      This is still annoyingly long, but at least reads more
      clearly left to right.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • http: simplify http_error helper function · 67d2a7b5
      Committed by Jeff King
      This helper function should really be a one-liner that
      prints an error message, but it has ended up unnecessarily
      complicated:
      
        1. We call error() directly when we fail to start the curl
           request, so we must later avoid printing a duplicate
           error in http_error().
      
           It would be much simpler in this case to just stuff the
           error message into our usual curl_errorstr buffer
           rather than printing it ourselves. This means that
           http_error does not even have to care about curl's exit
           value (the interesting part is in the errorstr buffer
           already).
      
        2. We return the "ret" value passed in to us, but none of
           the callers actually cares about our return value. We
           can just drop this entirely.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • http: add HTTP_KEEP_ERROR option · 6d052d78
      Committed by Jeff King
      We currently set curl's FAILONERROR option, which means that
      any http failures are reported as curl errors, and the
      http body content from the server is thrown away.
      
      This patch introduces a new option to http_get_strbuf which
      specifies that the body content from a failed http response
      should be placed in the destination strbuf, where it can be
      accessed by the caller.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
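
      A sketch of the effect inside the request setup: when the new
      option is set, leave FAILONERROR off so the error body reaches the
      caller's strbuf:

          if (options & HTTP_KEEP_ERROR)
                  curl_easy_setopt(slot->curl, CURLOPT_FAILONERROR, 0);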
  18. 21 Feb 2013 (1 commit)
  19. 06 Feb 2013 (1 commit)
  20. 05 Feb 2013 (1 commit)
    • Verify Content-Type from smart HTTP servers · 4656bf47
      Committed by Shawn Pearce
      Before parsing a suspected smart-HTTP response verify the returned
      Content-Type matches the standard. This protects a client from
      attempting to process a payload that smells like a smart-HTTP
      server response.
      
      JGit has been doing this check on all responses since the dawn of
      time. I mistakenly failed to include it in git-core when smart HTTP
      was introduced. At the time I didn't know how to get the Content-Type
      from libcurl. I punted, meant to circle back and fix this, and just
      plain forgot about it.
      Signed-off-by: Shawn Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
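
      A sketch of the check, shown for the ref advertisement of the
      git-upload-pack service, whose standard type is spelled out
      literally (git's real code builds the string per service):

          if (strcmp(content_type,
                     "application/x-git-upload-pack-advertisement"))
                  die("invalid Content-Type '%s'", content_type);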
  21. 22 Dec 2012 (1 commit)
  22. 20 Oct 2012 (1 commit)
    • Fix potential hang in https handshake · 7202b81f
      Committed by Stefan Zager
      It has been observed that curl_multi_timeout may return a very long
      timeout value (e.g., 294 seconds and some usec) just before
      curl_multi_fdset returns no file descriptors for reading.  The
      upshot is that select() will hang for a long time -- long enough for
      an https handshake to be dropped.  The observed behavior is that
      the git command will hang at the terminal and never transfer any
      data.
      
      This patch is a workaround for a probable bug in libcurl.  The bug
      only seems to manifest around a very specific set of circumstances:
      
      - curl version (from curl/curlver.h):
      
       #define LIBCURL_VERSION_NUM 0x071307
      
      - git-remote-https running on an ubuntu-lucid VM.
      - Connecting through squid proxy running on another VM.
      
      Interestingly, the problem doesn't manifest if a host connects
      through squid proxy running on localhost; only if the proxy is on
      a separate VM (not sure if the squid host needs to be on a separate
      physical machine).  That would seem to suggest that this issue
      is timing-sensitive.
      
      This patch is more or less in line with a recommendation in the
      curl docs about how to behave when curl_multi_fdset doesn't return
any file descriptors:
      
      http://curl.haxx.se/libcurl/c/curl_multi_fdset.html
      Signed-off-by: Stefan Zager <szager@google.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
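
      A sketch of the workaround: when curl_multi_fdset() yields no
      descriptors, cap the timeout that curl_multi_timeout() suggested
      instead of trusting it:

          curl_multi_fdset(curlm, &readfds, &writefds, &excfds, &maxfd);
          if (maxfd < 0 &&
              (select_timeout.tv_sec > 0 || select_timeout.tv_usec > 50000)) {
                  select_timeout.tv_sec = 0;
                  select_timeout.tv_usec = 50000; /* poll again in 50ms */
          }
          select(maxfd + 1, &readfds, &writefds, &excfds, &select_timeout);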
  23. 13 Oct 2012 (2 commits)
    • http: do not set up curl auth after a 401 · 1960897e
      Committed by Jeff King
      When we get an http 401, we prompt for credentials and put
      them in our global credential struct. We also feed them to
      the curl handle that produced the 401, with the intent that
      they will be used on a retry.
      
      When the code was originally introduced in commit 42653c09,
      this was a necessary step. However, since dfa1725a, we always
      feed our global credential into every curl handle when we
      initialize the slot with get_active_slot. So every further
      request already feeds the credential to curl.
      
      Moreover, accessing the slot here is somewhat dubious. After
      the slot has produced a response, we don't actually control
      it any more.  If we are using curl_multi, it may even have
      been re-initialized to handle a different request.
      
      It just so happens that we will reuse the curl handle within
      the slot in such a case, and that because we only keep one
      global credential, it will be the one we want.  So the
      current code is not buggy, but it is misleading.
      
      By cleaning it up, we can remove the slot argument entirely
      from handle_curl_result, making it much more obvious that
      slots should not be accessed after they are marked as
      finished.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • http: fix segfault in handle_curl_result · 188923f0
      Committed by Jeff King
      When we create an http active_request_slot, we can set its
      "results" pointer back to local storage. The http code will
      fill in the details of how the request went, and we can
      access those details even after the slot has been cleaned
      up.
      
      Commit 88097030 (http: factor out http error code handling)
      switched us from accessing our local results struct directly
      to accessing it via the "results" pointer of the slot. That
      means we're accessing the slot after it has been marked as
      finished, defeating the whole purpose of keeping the results
      storage separate.
      
      Most of the time this doesn't matter, as finishing the slot
      does not actually clean up the pointer. However, when using
      curl's multi interface with the dumb-http revision walker,
      we might actually start a new request before handing control
      back to the original caller. In that case, we may reuse the
      slot, zeroing its results pointer, and leading the original
      caller to segfault while looking for its results inside the
      slot.
      
      Instead, we need to pass a pointer to our local results
      storage to the handle_curl_result function, rather than
      relying on the pointer in the slot struct. This matches what
      the original code did before the refactoring (which did not
      use a separate function, and therefore just accessed the
      results struct directly).
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  24. 21 Sep 2012 (1 commit)
    • Enable info/refs gzip decompression in HTTP client · aa90b969
      Committed by Shawn O. Pearce
      Some HTTP servers try to use gzip compression on the /info/refs
      request to save transfer bandwidth. Repositories with many tags
      may find the /info/refs request can be gzipped to be 50% of the
      original size due to the few but often repeated bytes used (hex
      SHA-1 and commonly digits in tag names).
      
For most HTTP requests, enable "Accept-Encoding: gzip", ensuring
      the /info/refs payload can use this encoding format.
      
      Only request gzip encoding from servers. Although deflate is
      supported by libcurl, most servers have standardized on gzip
      encoding for compression as that is what most browsers support.
      Asking for deflate increases request sizes by a few bytes, but is
      unlikely to ever be used by a server.
      
      Disable the Accept-Encoding header on probe RPCs as response bodies
      are supposed to be exactly 4 bytes long, "0000". The HTTP headers
      requesting and indicating compression use more space than the data
      transferred in the body.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
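
      The request side is a single option; curl then inflates the
      response transparently before git sees it. A sketch, using the
      option name of that curl era:

          /* ask for gzip only; deflate is deliberately not offered */
          curl_easy_setopt(curl, CURLOPT_ENCODING, "gzip");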
  25. 28 Aug 2012 (1 commit)
    • http: factor out http error code handling · 88097030
      Committed by Jeff King
      Most of our http requests go through the http_request()
      interface, which does some nice post-processing on the
      results. In particular, it handles prompting for missing
      credentials as well as approving and rejecting valid or
      invalid credentials. Unfortunately, it only handles GET
      requests. Making it handle POSTs would be quite complex, so
      let's pull result handling code into its own function so
      that it can be reused from the POST code paths.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  26. 24 Aug 2012 (1 commit)
  27. 04 Jun 2012 (1 commit)
  28. 01 May 2012 (1 commit)
  29. 15 Apr 2012 (2 commits)
    • http: use newer curl options for setting credentials · 6f4c347c
      Committed by Jeff King
      We give the username and password to curl by sticking them
      in a buffer of the form "user:pass" and handing the result
      to CURLOPT_USERPWD. Since curl 7.19.1, there is a split
      mechanism, where you can specify each element individually.
      
      This has the advantage that a username can contain a ":"
      character. It also is less code for us, since we can hand
      our strings over to curl directly. And since curl 7.17.0 and
higher promise to copy the strings for us, we don't even
      have to worry about memory ownership issues.
      
      Unfortunately, we have to keep the ugly code for old curl
      around, but as it is now nicely #if'd out, we can easily get
      rid of it when we decide that 7.19.1 is "old enough".
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
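
      A sketch of the split (0x071301 is curl 7.19.1; cred and userpass
      are illustrative names, not git's):

          #if LIBCURL_VERSION_NUM >= 0x071301
                  curl_easy_setopt(curl, CURLOPT_USERNAME, cred->username);
                  curl_easy_setopt(curl, CURLOPT_PASSWORD, cred->password);
          #else
                  /* older curl: hand over a "user:pass" buffer and keep it alive */
                  curl_easy_setopt(curl, CURLOPT_USERPWD, userpass);
          #endif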
    • http: clean up leak in init_curl_http_auth · aa0834a0
      Committed by Jeff King
      When we have a credential to give to curl, we must copy it
      into a "user:pass" buffer and then hand the buffer to curl.
      Old versions of curl did not copy the buffer, and we were
      expected to keep it valid. Newer versions of curl will copy
      the buffer.
      
      Our solution was to use a strbuf and detach it, giving
      ownership of the resulting buffer to curl. However, this
      meant that we were leaking the buffer on newer versions of
      curl, since curl was just copying it and throwing away the
      string we passed. Furthermore, when we replaced a
      credential (e.g., because our original one was rejected), we
      were also leaking on both old and new versions of curl.
      
      This got even worse in the last patch, which started
      replacing the credential (and thus leaking) on every http
      request.
      
      Instead, let's use a static buffer to make the ownership
      more clear and less leaky.  We already keep a static "struct
      credential", so we are only handling a single credential at
      a time, anyway.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>