1. 01 Mar 2016 (1 commit)
    • http: honor no_http env variable to bypass proxy · d445fda4
      Jiang Xin authored
      Curl and its families honor several proxy related environment variables:
      
      * http_proxy and https_proxy define the proxy for http/https connections.
      * no_proxy (a comma-separated list of hosts) defines hosts that bypass the proxy.
      
      This command will bypass the bad-proxy and connect to the host directly:
      
          no_proxy=* https_proxy=http://bad-proxy/ \
          curl -sk https://google.com/
      
      Before commit 372370f1 (http: use credential API to handle proxy auth...),
      the environment variable "no_proxy" would take effect if the config
      variable "http.proxy" was not set.  So the following command won't fail
      if not behind a firewall.
      
          no_proxy=* https_proxy=http://bad-proxy/ \
          git ls-remote https://github.com/git/git
      
      But commit 372370f1 not only reads the git config variable "http.proxy",
      but also reads the "http_proxy" and "https_proxy" environment variables,
      and sets the curl option using:
      
          curl_easy_setopt(result, CURLOPT_PROXY, proxy_auth.host);
      
      This caused the "no_proxy" environment variable to stop working.
      
      Setting the extra curl option "CURLOPT_NOPROXY" fixes this issue.
      Signed-off-by: Jiang Xin <xin.jiang@huawei.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  2. 16 Feb 2016 (2 commits)
  3. 13 Feb 2016 (1 commit)
  4. 27 Jan 2016 (2 commits)
  5. 25 Nov 2015 (1 commit)
  6. 20 Nov 2015 (2 commits)
  7. 12 Nov 2015 (1 commit)
    • http: fix some printf format warnings · 838ecf0b
      Ramsay Jones authored
      Commit f8117f55 ("http: use off_t to store partial file size",
      02-11-2015) changed the type of some variables from long to off_t.
      Unfortunately, the off_t type is not portable and can be represented
      by several different actual types (even multiple types on the same
      platform). This makes it difficult to print an off_t variable in
      a platform independent way. As a result, this commit causes gcc to
      issue some printf format warnings on a couple of different platforms.
      
      In order to suppress the warnings, change the format specifier to use
      the PRIuMAX macro and cast the off_t argument to uintmax_t. (See also
      the http_opt_request_remainder() function, which uses the same
      solution).
      Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
      Signed-off-by: Jeff King <peff@peff.net>
  8. 03 Nov 2015 (2 commits)
    • http: use off_t to store partial file size · f8117f55
      Jeff King authored
      When we try to resume transfer of a partially-downloaded
      object or pack, we fopen() the existing file for append,
      then use ftell() to get the current position. We use a
      "long", which can hold only 2GB on a 32-bit system, even
      though packfiles may be larger than that.
      
      Let's switch to using off_t, which should hold any file size
      our system is capable of storing. We need to use ftello() to
      get the off_t. This is in POSIX and hopefully available
      everywhere; if not, we should be able to wrap it by falling
      back to ftell(), which would presumably return "-1" on such
      a large file (and we would simply skip resuming in that case).
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • http.c: use CURLOPT_RANGE for range requests · 835c4d36
      David Turner authored
      An HTTP server is permitted to return a non-range response to an HTTP
      range request (and Apache httpd in fact does this in some cases).
      While libcurl knows how to correctly handle this (by skipping bytes
      before and after the requested range), it only turns on this handling
      if it is aware that a range request is being made.  By manually
      setting the range header instead of using CURLOPT_RANGE, we were
      hiding the fact that this was a range request from libcurl.  This
      could cause corruption.
      Signed-off-by: David Turner <dturner@twopensource.com>
      Reviewed-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  9. 26 Sep 2015 (4 commits)
    • http: limit redirection depth · b2581164
      Blake Burkhart authored
      By default, libcurl will follow circular http redirects
      forever. Let's put a cap on this so that somebody who can
      trigger an automated fetch of an arbitrary repository (e.g.,
      for CI) cannot convince git to loop infinitely.
      
      The value chosen is 20, which is the same default that
      Firefox uses.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • http: limit redirection to protocol-whitelist · f4113cac
      Blake Burkhart authored
      Previously, libcurl would follow redirection to any protocol
      it was compiled with support for. This is desirable to allow
      redirection from HTTP to HTTPS. However, it would even
      successfully allow redirection from HTTP to SFTP, a protocol
      that git does not otherwise support at all. Furthermore
      git's new protocol-whitelisting could be bypassed by
      following a redirect within the remote helper, as it was
      only enforced at transport selection time.
      
      This patch limits redirects within libcurl to HTTP, HTTPS,
      FTP and FTPS. If there is a protocol-whitelist present, this
      list is limited to those also allowed by the whitelist. As
      redirection happens from within libcurl, it is impossible
      for an HTTP redirect to reach a protocol implemented within
      another remote helper.
      
      When the curl version git was compiled with is too old to
      support restrictions on protocol redirection, we warn the
      user if GIT_ALLOW_PROTOCOL restrictions were requested. This
      is a little inaccurate, as even without that variable in the
      environment, we would still restrict SFTP, etc, and we do
      not warn in that case. But anything else means we would
      literally warn every time git accesses an http remote.
      
      This commit includes a test, but it is not as robust as we
      would hope. It redirects an http request to ftp, and checks
      that curl complained about the protocol, which means that we
      are relying on curl's specific error message to know what
      happened. Ideally we would redirect to a working ftp server
      and confirm that we can clone without protocol restrictions,
      and not with them. But we do not have a portable way of
      providing an ftp server, nor any other protocol that curl
      supports (https is the closest, but we would have to deal
      with certificates).
      
      [jk: added test and version warning]
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • use strip_suffix and xstrfmt to replace suffix · 9ae97018
      Jeff King authored
      When we want to convert "foo.pack" to "foo.idx", we do it by
      duplicating the original string and then munging the bytes
      in place. Let's use strip_suffix and xstrfmt instead, which
      has several advantages:
      
        1. It's more clear what the intent is.
      
        2. It does not implicitly rely on the fact that
           strlen(".idx") <= strlen(".pack") to avoid an overflow.
      
        3. We communicate the assumption that the input file ends
           with ".pack" (and get a run-time check that this is so).
      
        4. We drop calls to strcpy, which makes auditing the code
           base easier.
      
      Likewise, we can do this to convert ".pack" to ".bitmap",
      avoiding some manual memory computation.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • convert trivial sprintf / strcpy calls to xsnprintf · 5096d490
      Jeff King authored
      We sometimes sprintf into fixed-size buffers when we know
      that the buffer is large enough to fit the input (either
      because it's a constant, or because it's numeric input that
      is bounded in size). Likewise with strcpy of constant
      strings.
      
      However, these sites make it hard to audit sprintf and
      strcpy calls for buffer overflows, as a reader has to
      cross-reference the size of the array with the input. Let's
      use xsnprintf instead, which communicates to a reader that
      we don't expect this to overflow (and catches the mistake in
      case we do).
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  10. 18 Aug 2015 (1 commit)
  11. 11 Aug 2015 (1 commit)
    • sha1_file.c: rename move_temp_to_file() to finalize_object_file() · cb5add58
      Junio C Hamano authored
      Since 5a688fe4 ("core.sharedrepository = 0mode" should set, not
      loosen, 2009-03-25), we kept reminding ourselves:
      
          NEEDSWORK: this should be renamed to finalize_temp_file() as
          "moving" is only a part of what it does, when no patch between
          master to pu changes the call sites of this function.
      
      without doing anything about it.  Let's do so.
      
      The purpose of this function was not to move but to finalize.  The
      detail of the primary implementation of finalizing was to link the
      temporary file to its final name and then to unlink, which wasn't
      even "moving".  The alternative implementation did "move" by calling
      rename(2), which is a fun tangent.
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  12. 30 Jun 2015 (1 commit)
    • http: always use any proxy auth method available · 5841520b
      Enrique Tobis authored
      We set CURLOPT_PROXYAUTH to use the most secure authentication
      method available only when the user has set configuration variables
      to specify a proxy.  However, libcurl also supports specifying a
      proxy through environment variables.  In that case libcurl defaults
      to only using the Basic proxy authentication method, because we do
      not use CURLOPT_PROXYAUTH.
      
      Set CURLOPT_PROXYAUTH to always use the most secure authentication
      method available, even when there is no git configuration telling us
      to use a proxy. This allows the user to use environment variables to
      configure a proxy that requires an authentication method different
      from Basic.
      Signed-off-by: Enrique A. Tobis <etobis@twosigma.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  13. 09 May 2015 (1 commit)
  14. 25 Mar 2015 (1 commit)
  15. 27 Feb 2015 (1 commit)
  16. 04 Feb 2015 (1 commit)
  17. 29 Jan 2015 (1 commit)
    • http: add Accept-Language header if possible · f18604bb
      Yi EungJun authored
      Add an Accept-Language header which indicates the user's preferred
      languages defined by $LANGUAGE, $LC_ALL, $LC_MESSAGES and $LANG.
      
      Examples:
        LANGUAGE= -> ""
        LANGUAGE=ko:en -> "Accept-Language: ko, en;q=0.9, *;q=0.1"
        LANGUAGE=ko LANG=en_US.UTF-8 -> "Accept-Language: ko, *;q=0.1"
        LANGUAGE= LANG=en_US.UTF-8 -> "Accept-Language: en-US, *;q=0.1"
      
      This gives git servers a chance to display remote error messages in
      the user's preferred language.
      
      Limit the number of languages to 1,000 because q-value must not be
      smaller than 0.001, and limit the length of Accept-Language header to
      4,000 bytes for some HTTP servers which cannot accept such long header.
      Signed-off-by: Yi EungJun <eungjun.yi@navercorp.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  18. 28 Jan 2015 (1 commit)
    • dumb-http: do not pass NULL path to parse_pack_index · 8b9c2dd4
      Jeff King authored
      Once upon a time, dumb http always fetched .idx files
      directly into their final location, and then checked their
      validity with parse_pack_index. This was refactored in
      commit 750ef425 (http-fetch: Use temporary files for
      pack-*.idx until verified, 2010-04-19), which uses the
      following logic:
      
        1. If we have the idx already in place, see if it's
           valid (using parse_pack_index). If so, use it.
      
        2. Otherwise, fetch the .idx to a tempfile, check
           that, and if so move it into place.
      
        3. Either way, fetch the pack itself if necessary.
      
      However, it got step 1 wrong. We pass a NULL path parameter
      to parse_pack_index, so an existing .idx file always looks
      broken. Worse, we do not treat this broken .idx as an
      opportunity to re-fetch, but instead return an error,
      ignoring the pack entirely. This can lead to a dumb-http
      fetch failing to retrieve the necessary objects.
      
      This doesn't come up much in practice, because it must be a
      packfile that we found out about (and whose .idx we stored)
      during an earlier dumb-http fetch, but whose packfile we
      _didn't_ fetch. I.e., we did a partial clone of a
      repository, didn't need some packfiles, and now a followup
      fetch needs them.
      
      Discovery and tests by Charles Bailey <charles@hashpling.org>.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  19. 16 Jan 2015 (1 commit)
  20. 08 Jan 2015 (1 commit)
    • remote-curl: fall back to Basic auth if Negotiate fails · 4dbe6646
      brian m. carlson authored
      Apache servers using mod_auth_kerb can be configured to allow the user
      to authenticate either using Negotiate (using the Kerberos ticket) or
      Basic authentication (using the Kerberos password).  Often, one will
      want to use Negotiate authentication if it is available, but fall back
      to Basic authentication if the ticket is missing or expired.
      
      However, libcurl will try very hard to use something other than Basic
      auth, even over HTTPS.  If Basic and something else are offered, libcurl
      will never attempt to use Basic, even if the other option fails.
      Teach the HTTP client code to stop trying authentication mechanisms that
      don't use a password (currently Negotiate) after the first failure,
      since if they failed the first time, they will never succeed.
      Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  21. 16 Sep 2014 (1 commit)
  22. 22 Aug 2014 (1 commit)
  23. 21 Aug 2014 (1 commit)
  24. 19 Aug 2014 (1 commit)
  25. 21 Jun 2014 (1 commit)
  26. 18 Jun 2014 (1 commit)
    • http: fix charset detection of extract_content_type() · f34a655d
      Yi EungJun authored
      extract_content_type() could not extract a charset parameter if the
      parameter was not the first one and there was whitespace followed by a
      semicolon just before it. For example:
      
          text/plain; format=fixed ;charset=utf-8
      
      And it also could not handle correctly some other cases, such as:
      
          text/plain; charset=utf-8; format=fixed
          text/plain; some-param="a long value with ;semicolons;"; charset=utf-8
      
      Thanks-to: Jeff King <peff@peff.net>
      Signed-off-by: Yi EungJun <eungjun.yi@navercorp.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  27. 28 May 2014 (3 commits)
    • http: default text charset to iso-8859-1 · c553fd1c
      Jeff King authored
      This is specified by RFC 2616 as the default if no "charset"
      parameter is given.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • http: optionally extract charset parameter from content-type · e3131626
      Jeff King authored
      Since the previous commit, we now give a sanitized,
      shortened version of the content-type header to any callers
      who ask for it.
      
      This patch adds back a way for them to cleanly access
      specific parameters to the type. We could easily extract all
      parameters and make them available via a string_list, but:
      
        1. That complicates the interface and memory management.
      
        2. In practice, no planned callers care about anything
           except the charset.
      
      This patch therefore goes with the simplest thing, and we
      can expand or change the interface later if it becomes
      necessary.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • http: extract type/subtype portion of content-type · bf197fd7
      Jeff King authored
      When we get a content-type from curl, we get the whole
      header line, including any parameters, and without any
      normalization (like downcasing or whitespace) applied.
      If we later try to match it with strcmp() or even
      strcasecmp(), we may get false negatives.
      
      This could cause two visible behaviors:
      
        1. We might fail to recognize a smart-http server by its
           content-type.
      
        2. We might fail to relay text/plain error messages to
           users (especially if they contain a charset parameter).
      
      This patch teaches the http code to extract and normalize
      just the type/subtype portion of the string. This is
      technically passing out less information to the callers, who
      can no longer see the parameters. But none of the current
      callers cares, and a future patch will add back an
      easier-to-use method for accessing those parameters.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  28. 25 Feb 2014 (1 commit)
  29. 19 Feb 2014 (1 commit)
    • http: never use curl_easy_perform · beed336c
      Jeff King authored
      We currently don't reuse http connections when fetching via
      the smart-http protocol. This is bad because the TCP
      handshake introduces latency, and especially because SSL
      connection setup may be non-trivial.
      
      We can fix it by consistently using curl's "multi"
      interface.  The reason is rather complicated:
      
      Our http code has two ways of being used: queuing many
      "slots" to be fetched in parallel, or fetching a single
      request in a blocking manner. The parallel code is built on
      curl's "multi" interface. Most of the single-request code
      uses http_request, which is built on top of the parallel
      code (we just feed it one slot, and wait until it finishes).
      
      However, one could also accomplish the single-request scheme
      by avoiding curl's multi interface entirely and just using
      curl_easy_perform. This is simpler, and is used by post_rpc
      in the smart-http protocol.
      
      It does work to use the same curl handle in both contexts,
      as long as it is not at the same time.  However, internally
      curl may not share all of the cached resources between both
      contexts. In particular, a connection formed using the
      "multi" code will go into a reuse pool connected to the
      "multi" object. Further requests using the "easy" interface
      will not be able to reuse that connection.
      
      The smart http protocol does ref discovery via http_request,
      which uses the "multi" interface, and then follows up with
      the "easy" interface for its rpc calls. As a result, we make
      two HTTP connections rather than reusing a single one.
      
      We could teach the ref discovery to use the "easy"
      interface. But it is only once we have done this discovery
      that we know whether the protocol will be smart or dumb. If
      it is dumb, then our further requests, which want to fetch
      objects in parallel, will not be able to reuse the same
      connection.
      
      Instead, this patch switches post_rpc to build on the
      parallel interface, which means that we use it consistently
      everywhere. It's a little more complicated to use, but since
      we have the infrastructure already, it doesn't add any code;
      we can just factor out the relevant bits from http_request.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  30. 06 Dec 2013 (1 commit)
    • replace {pre,suf}fixcmp() with {starts,ends}_with() · 59556548
      Christian Couder authored
      Leaving only the function definitions and declarations so that any
      new topic in flight can still make use of the old functions, replace
      existing uses of prefixcmp() and suffixcmp() with the new API
      functions.
      
      The change can be recreated by mechanically applying this:
      
          $ git grep -l -e prefixcmp -e suffixcmp -- \*.c |
            grep -v strbuf\\.c |
            xargs perl -pi -e '
              s|!prefixcmp\(|starts_with\(|g;
              s|prefixcmp\(|!starts_with\(|g;
              s|!suffixcmp\(|ends_with\(|g;
              s|suffixcmp\(|!ends_with\(|g;
            '
      
      on the result of preparatory changes in this series.
      Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  31. 01 Nov 2013 (1 commit)
    • http: return curl's AUTHAVAIL via slot_results · 0972ccd9
      Jeff King authored
      Callers of the http code may want to know which auth types
      were available for the previous request. But after finishing
      with the curl slot, they are not supposed to look at the
      curl handle again. We already handle returning other
      information via the slot_results struct; let's add a flag to
      check the available auth.
      
      Note that older versions of curl did not support this, so we
      simply return 0 (something like "-1" would be worse, as the
      value is a bitflag and we might accidentally set a flag).
      This is sufficient for the callers planned in this series,
      who only trigger some optional behavior if particular bits
      are set, and can live with a fake "no bits" answer.
      Signed-off-by: Jeff King <peff@peff.net>