1. 30 Sep 2016, 1 commit
    • http: control GSSAPI credential delegation · 26a7b234
      Petr Stodulka authored
      Delegation of credentials has been disabled by default in libcurl
      since version 7.21.7 because of the security vulnerability
      CVE-2011-2192, which causes trouble with GSS/Kerberos
      authentication when delegation of credentials is required. Since
      libcurl 7.22.0 this can be changed by setting the
      CURLOPT_GSSAPI_DELEGATION option to the expected value.
      
      This patch provides a new configuration variable, http.delegation,
      which corresponds to the curl parameter "--delegation" (see man 1 curl).
      
      The following values are supported:
      
      * none (default)
      * policy
      * always
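
      For illustration only, a minimal C sketch of how such a value could be
      mapped onto libcurl's CURLOPT_GSSAPI_DELEGATION; the function name is
      made up and this is not the actual http.c code:

          #include <curl/curl.h>
          #include <string.h>

          /* Sketch: translate an http.delegation value into the matching
           * CURLOPT_GSSAPI_DELEGATION flag (libcurl >= 7.22.0). */
          static void set_delegation(CURL *curl, const char *value)
          {
              long flag = CURLGSSAPI_DELEGATION_NONE;        /* "none" (default) */

              if (!strcmp(value, "policy"))
                  flag = CURLGSSAPI_DELEGATION_POLICY_FLAG;  /* delegate if KDC policy allows */
              else if (!strcmp(value, "always"))
                  flag = CURLGSSAPI_DELEGATION_FLAG;         /* delegate unconditionally */
              curl_easy_setopt(curl, CURLOPT_GSSAPI_DELEGATION, flag);
          }
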
      Signed-off-by: Petr Stodulka <pstodulk@redhat.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  2. 06 Aug 2016, 1 commit
  3. 13 Jul 2016, 1 commit
  4. 25 May 2016, 1 commit
  5. 10 May 2016, 1 commit
  6. 05 May 2016, 1 commit
  7. 28 Apr 2016, 1 commit
    • http: support sending custom HTTP headers · 8cb01e2f
      Johannes Schindelin authored
      We introduce a way to send custom HTTP headers with all requests.
      
      This allows us, for example, to send an extra token from build agents
      for temporary access to private repositories. (This is the use case that
      triggered this patch.)
      
      This feature can be used like this:
      
      	git -c http.extraheader='Secret: sssh!' fetch $URL $REF
      
      Note that `curl_easy_setopt(..., CURLOPT_HTTPHEADER, ...)` takes only
      a single list, overriding any previous call. This means we have to
      collect _all_ of the headers we want to use into a single list, and
      feed it to cURL in one shot. Since we already unconditionally set a
      "pragma" header when initializing the curl handles, we can add our new
      headers to that list.
      
      For callers which override the default header list (like probe_rpc),
      we provide `http_copy_default_headers()` so they can do the same
      trick.
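
      For illustration, a minimal C sketch (not git's actual code; the
      "Pragma" value and the function name are assumptions) of collecting
      every header into one curl_slist and handing it to libcurl in a
      single call:

          #include <curl/curl.h>

          /* Sketch: build ONE list with the default header plus any extras;
           * a later CURLOPT_HTTPHEADER call would replace, not extend, it. */
          static struct curl_slist *set_request_headers(CURL *curl, const char *extra)
          {
              struct curl_slist *headers = NULL;

              headers = curl_slist_append(headers, "Pragma: no-cache"); /* assumed default */
              if (extra)
                  headers = curl_slist_append(headers, extra);          /* e.g. "Secret: sssh!" */
              curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
              /* The caller must keep the list alive for the request and free
               * it afterwards with curl_slist_free_all(). */
              return headers;
          }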
      
      Big thanks to Jeff King and Junio Hamano for their outstanding help and
      patient reviews.
      Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
      Reviewed-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  8. 11 Apr 2016, 1 commit
    • http: differentiate socks5:// and socks5h:// · 87f8a0b2
      Junio C Hamano authored
      Felix Ruess <felix.ruess@gmail.com> noticed that with configuration
      
          $ git config --global 'http.proxy=socks5h://127.0.0.1:1080'
      
      connections to remote sites time out, waiting for DNS resolution.
      
      The logic that detects various flavours of SOCKS proxy and asks the
      libcurl layer to use the appropriate one understands proxy strings
      that begin with socks5, socks4a, etc., but does not know socks5h,
      so we end up using CURLPROXY_SOCKS5.  The correct one to use is
      CURLPROXY_SOCKS5_HOSTNAME.
      
      https://curl.haxx.se/libcurl/c/CURLOPT_PROXY.html says
      
        ..., socks5h:// (the last one to enable socks5 and asking the
        proxy to do the resolving, also known as CURLPROXY_SOCKS5_HOSTNAME
        type).
      
      which is consistent with the way the breakage was reported.
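
      For illustration, a minimal C sketch of the scheme detection this fix
      amounts to (hypothetical function name, not the actual http.c logic):

          #include <curl/curl.h>
          #include <string.h>

          /* Sketch: pick the libcurl proxy type from the proxy URL's scheme.
           * socks5h:// must map to CURLPROXY_SOCKS5_HOSTNAME so the proxy,
           * not the client, resolves host names. */
          static void set_proxy_type(CURL *curl, const char *proxy)
          {
              if (!strncmp(proxy, "socks5h://", 10))
                  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS5_HOSTNAME);
              else if (!strncmp(proxy, "socks5://", 9))
                  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS5);
              else if (!strncmp(proxy, "socks4a://", 10))
                  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS4A);
              else if (!strncmp(proxy, "socks4://", 9))
                  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS4);
              curl_easy_setopt(curl, CURLOPT_PROXY, proxy);
          }
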
      Tested-by: Felix Ruess <felix.ruess@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  9. 01 Mar 2016, 1 commit
    • http: honor no_http env variable to bypass proxy · d445fda4
      Jiang Xin authored
      Curl and its family of tools honor several proxy-related environment variables:
      
      * http_proxy and https_proxy define the proxy for http/https connections.
      * no_proxy (a comma-separated list of hosts) defines hosts that bypass the proxy.
      
      This command will bypass the bad-proxy and connect to the host directly:
      
          no_proxy=* https_proxy=http://bad-proxy/ \
          curl -sk https://google.com/
      
      Before commit 372370f1 (http: use credential API to handle proxy auth...),
      the environment variable "no_proxy" took effect as long as the config
      variable "http.proxy" was not set, so the following command would not
      fail even when not behind a firewall.
      
          no_proxy=* https_proxy=http://bad-proxy/ \
          git ls-remote https://github.com/git/git
      
      But since commit 372370f1, we not only read the git config variable
      "http.proxy" but also the "http_proxy" and "https_proxy" environment
      variables, and set the curl option using:
      
          curl_easy_setopt(result, CURLOPT_PROXY, proxy_auth.host);
      
      This caused the "no_proxy" environment variable to stop working.
      
      Setting the extra curl option "CURLOPT_NOPROXY" fixes this issue.
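
      For illustration, a minimal C sketch (hypothetical function name) of
      passing the environment's no_proxy value on to libcurl alongside the
      proxy itself:

          #include <curl/curl.h>
          #include <stdlib.h>

          /* Sketch: when the proxy comes from the environment, also hand
           * no_proxy to libcurl so listed hosts still bypass the proxy. */
          static void set_proxy_from_env(CURL *curl)
          {
              const char *proxy = getenv("https_proxy");
              const char *no_proxy = getenv("no_proxy");

              if (proxy)
                  curl_easy_setopt(curl, CURLOPT_PROXY, proxy);
              if (no_proxy)
                  curl_easy_setopt(curl, CURLOPT_NOPROXY, no_proxy); /* e.g. "*" */
          }
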
      Signed-off-by: Jiang Xin <xin.jiang@huawei.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  10. 16 Feb 2016, 2 commits
  11. 13 Feb 2016, 1 commit
  12. 27 Jan 2016, 2 commits
  13. 25 Nov 2015, 1 commit
  14. 20 Nov 2015, 2 commits
  15. 12 Nov 2015, 1 commit
    • http: fix some printf format warnings · 838ecf0b
      Ramsay Jones authored
      Commit f8117f55 ("http: use off_t to store partial file size",
      02-11-2015) changed the type of some variables from long to off_t.
      Unfortunately, the off_t type is not portable and can be represented
      by several different actual types (even multiple types on the same
      platform). This makes it difficult to print an off_t variable in
      a platform independent way. As a result, this commit causes gcc to
      issue some printf format warnings on a couple of different platforms.
      
      In order to suppress the warnings, change the format specifier to use
      the PRIuMAX macro and cast the off_t argument to uintmax_t. (See also
      the http_opt_request_remainder() function, which uses the same
      solution).
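
      A minimal sketch of the pattern (hypothetical function name):

          #include <inttypes.h>     /* PRIuMAX */
          #include <stdint.h>       /* uintmax_t */
          #include <stdio.h>
          #include <sys/types.h>    /* off_t */

          /* Sketch: print an off_t portably by widening it to uintmax_t. */
          static void report_resume_offset(off_t prev_posn)
          {
              fprintf(stderr, "Resuming fetch at %" PRIuMAX " bytes\n",
                      (uintmax_t)prev_posn);
          }
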
      Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
      Signed-off-by: Jeff King <peff@peff.net>
  16. 03 Nov 2015, 2 commits
    • http: use off_t to store partial file size · f8117f55
      Jeff King authored
      When we try to resume transfer of a partially-downloaded
      object or pack, we fopen() the existing file for append,
      then use ftell() to get the current position. We use a
      "long", which can hold only 2GB on a 32-bit system, even
      though packfiles may be larger than that.
      
      Let's switch to using off_t, which should hold any file size
      our system is capable of storing. We need to use ftello() to
      get the off_t. This is in POSIX and hopefully available
      everywhere; if not, we should be able to wrap it by falling
      back to ftell(), which would presumably return "-1" on such
      a large file (and we would simply skip resuming in that case).
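
      A minimal sketch of the idea (hypothetical function name, error
      handling trimmed):

          #include <stdio.h>
          #include <sys/types.h>

          /* Sketch: open a partial download for append and ask how much is
           * already there; off_t and ftello() keep working past 2GB where
           * long and ftell() would overflow on 32-bit systems. */
          static off_t open_partial_file(const char *path, FILE **fp_out)
          {
              FILE *fp = fopen(path, "a");
              off_t prev_posn = fp ? ftello(fp) : 0;

              *fp_out = fp;
              return prev_posn;   /* 0 means no resume, start from scratch */
          }
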
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • http.c: use CURLOPT_RANGE for range requests · 835c4d36
      David Turner authored
      An HTTP server is permitted to return a non-range response to an HTTP
      range request (and Apache httpd in fact does this in some cases).
      While libcurl knows how to correctly handle this (by skipping bytes
      before and after the requested range), it only turns on this handling
      if it is aware that a range request is being made.  By manually
      setting the range header instead of using CURLOPT_RANGE, we were
      hiding the fact that this was a range request from libcurl.  This
      could cause corruption.
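
      A minimal sketch of the pattern (hypothetical function name):

          #include <curl/curl.h>
          #include <inttypes.h>
          #include <stdint.h>
          #include <stdio.h>
          #include <sys/types.h>

          /* Sketch: ask for everything from prev_posn onwards via
           * CURLOPT_RANGE, so libcurl knows a range request is in flight and
           * can cope with servers that reply with the full content anyway. */
          static void request_remainder(CURL *curl, off_t prev_posn)
          {
              char range[64];

              snprintf(range, sizeof(range), "%" PRIuMAX "-", (uintmax_t)prev_posn);
              curl_easy_setopt(curl, CURLOPT_RANGE, range); /* libcurl copies the string */
          }
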
      Signed-off-by: David Turner <dturner@twopensource.com>
      Reviewed-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  17. 26 Sep 2015, 4 commits
    • http: limit redirection depth · b2581164
      Blake Burkhart authored
      By default, libcurl will follow circular http redirects
      forever. Let's put a cap on this so that somebody who can
      trigger an automated fetch of an arbitrary repository (e.g.,
      for CI) cannot convince git to loop infinitely.
      
      The value chosen is 20, which is the same default that
      Firefox uses.
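
      A minimal sketch of the corresponding libcurl settings (hypothetical
      function name):

          #include <curl/curl.h>

          /* Sketch: follow redirects, but never more than 20 in a row, so a
           * circular redirect cannot keep us busy forever. */
          static void limit_redirects(CURL *curl)
          {
              curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
              curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 20L);
          }
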
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • http: limit redirection to protocol-whitelist · f4113cac
      Blake Burkhart authored
      Previously, libcurl would follow redirection to any protocol
      it was compiled to support. This is desirable to allow
      redirection from HTTP to HTTPS. However, it would even
      successfully allow redirection from HTTP to SFTP, a protocol
      that git does not otherwise support at all. Furthermore
      git's new protocol-whitelisting could be bypassed by
      following a redirect within the remote helper, as it was
      only enforced at transport selection time.
      
      This patch limits redirects within libcurl to HTTP, HTTPS,
      FTP and FTPS. If there is a protocol-whitelist present, this
      list is limited to those also allowed by the whitelist. As
      redirection happens from within libcurl, it is impossible
      for an HTTP redirect to reach a protocol implemented within
      another remote helper.
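
      A minimal sketch of the idea (hypothetical function name; the real
      change additionally intersects this set with the protocol-whitelist
      when one is present):

          #include <curl/curl.h>

          /* Sketch: only HTTP, HTTPS, FTP and FTPS may ever be the target
           * of a redirect followed by libcurl. */
          static void restrict_redirect_protocols(CURL *curl)
          {
              long allowed = CURLPROTO_HTTP | CURLPROTO_HTTPS |
                             CURLPROTO_FTP | CURLPROTO_FTPS;

              curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS, allowed);
          }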
      
      When the curl version git was compiled with is too old to
      support restrictions on protocol redirection, we warn the
      user if GIT_ALLOW_PROTOCOL restrictions were requested. This
      is a little inaccurate, as even without that variable in the
      environment, we would still restrict SFTP, etc, and we do
      not warn in that case. But anything else means we would
      literally warn every time git accesses an http remote.
      
      This commit includes a test, but it is not as robust as we
      would hope. It redirects an http request to ftp, and checks
      that curl complained about the protocol, which means that we
      are relying on curl's specific error message to know what
      happened. Ideally we would redirect to a working ftp server
      and confirm that we can clone without protocol restrictions,
      and not with them. But we do not have a portable way of
      providing an ftp server, nor any other protocol that curl
      supports (https is the closest, but we would have to deal
      with certificates).
      
      [jk: added test and version warning]
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • convert trivial sprintf / strcpy calls to xsnprintf · 5096d490
      Jeff King authored
      We sometimes sprintf into fixed-size buffers when we know
      that the buffer is large enough to fit the input (either
      because it's a constant, or because it's numeric input that
      is bounded in size). Likewise with strcpy of constant
      strings.
      
      However, these sites make it hard to audit sprintf and
      strcpy calls for buffer overflows, as a reader has to
      cross-reference the size of the array with the input. Let's
      use xsnprintf instead, which communicates to a reader that
      we don't expect this to overflow (and catches the mistake in
      case we do).
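
      A minimal sketch of the idea behind such a checked wrapper (this is
      not git's actual wrapper.c implementation, and the name is made up):

          #include <stdarg.h>
          #include <stdio.h>
          #include <stdlib.h>

          /* Sketch: format into a fixed-size buffer and die loudly if the
           * result would not fit, instead of overflowing or truncating. */
          static int checked_snprintf(char *dst, size_t max, const char *fmt, ...)
          {
              va_list ap;
              int len;

              va_start(ap, fmt);
              len = vsnprintf(dst, max, fmt, ap);
              va_end(ap);

              if (len < 0 || (size_t)len >= max) {
                  fprintf(stderr, "BUG: attempt to snprintf into too-small buffer\n");
                  exit(128);
              }
              return len;
          }
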
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • use strip_suffix and xstrfmt to replace suffix · 9ae97018
      Jeff King authored
      When we want to convert "foo.pack" to "foo.idx", we do it by
      duplicating the original string and then munging the bytes
      in place. Let's use strip_suffix and xstrfmt instead, which
      have several advantages:
      
        1. It's more clear what the intent is.
      
        2. It does not implicitly rely on the fact that
           strlen(".idx") <= strlen(".pack") to avoid an overflow.
      
        3. We communicate the assumption that the input file ends
           with ".pack" (and get a run-time check that this is so).
      
        4. We drop calls to strcpy, which makes auditing the code
           base easier.
      
      Likewise, we can do this to convert ".pack" to ".bitmap",
      avoiding some manual memory computation.
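
      A minimal sketch of the pattern (strip_suffix, xstrfmt and die are
      git's own helpers; the variable names are made up):

          size_t len;
          char *idx_name;

          /* Sketch: derive "foo.idx" from "foo.pack" with no manual size
           * arithmetic; strip_suffix() also verifies the ".pack" suffix. */
          if (!strip_suffix(pack_name, ".pack", &len))
              die("BUG: pack file name does not end in .pack: %s", pack_name);
          idx_name = xstrfmt("%.*s.idx", (int)len, pack_name);
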
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  18. 18 Aug 2015, 1 commit
  19. 11 Aug 2015, 1 commit
    • sha1_file.c: rename move_temp_to_file() to finalize_object_file() · cb5add58
      Junio C Hamano authored
      Since 5a688fe4 ("core.sharedrepository = 0mode" should set, not
      loosen, 2009-03-25), we kept reminding ourselves:
      
          NEEDSWORK: this should be renamed to finalize_temp_file() as
          "moving" is only a part of what it does, when no patch between
          master to pu changes the call sites of this function.
      
      without doing anything about it.  Let's do so.
      
      The purpose of this function was not to move but to finalize.  In
      the primary implementation, finalizing meant linking the temporary
      file to its final name and then unlinking the temporary file, which
      wasn't even "moving".  The alternative implementation did "move" by
      calling rename(2), which is a fun tangent.
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  20. 30 Jun 2015, 1 commit
    • http: always use any proxy auth method available · 5841520b
      Enrique Tobis authored
      We set CURLOPT_PROXYAUTH to use the most secure authentication
      method available only when the user has set configuration variables
      to specify a proxy.  However, libcurl also supports specifying a
      proxy through environment variables.  In that case libcurl defaults
      to only using the Basic proxy authentication method, because we do
      not use CURLOPT_PROXYAUTH.
      
      Set CURLOPT_PROXYAUTH to always use the most secure authentication
      method available, even when there is no git configuration telling us
      to use a proxy. This allows the user to use environment variables to
      configure a proxy that requires an authentication method different
      from Basic.
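
      A minimal sketch (hypothetical function name):

          #include <curl/curl.h>

          /* Sketch: let libcurl negotiate the most secure proxy auth method
           * on offer, whether the proxy came from git configuration or only
           * from the environment. */
          static void allow_any_proxy_auth(CURL *curl)
          {
              curl_easy_setopt(curl, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
          }
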
      Signed-off-by: Enrique A. Tobis <etobis@twosigma.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  21. 09 May 2015, 1 commit
  22. 25 Mar 2015, 1 commit
  23. 27 Feb 2015, 1 commit
  24. 04 Feb 2015, 1 commit
  25. 29 Jan 2015, 1 commit
    • http: add Accept-Language header if possible · f18604bb
      Yi EungJun authored
      Add an Accept-Language header which indicates the user's preferred
      languages defined by $LANGUAGE, $LC_ALL, $LC_MESSAGES and $LANG.
      
      Examples:
        LANGUAGE= -> ""
        LANGUAGE=ko:en -> "Accept-Language: ko, en;q=0.9, *;q=0.1"
        LANGUAGE=ko LANG=en_US.UTF-8 -> "Accept-Language: ko, *;q=0.1"
        LANGUAGE= LANG=en_US.UTF-8 -> "Accept-Language: en-US, *;q=0.1"
      
      This gives git servers a chance to display remote error messages in
      the user's preferred language.
      
      Limit the number of languages to 1,000 because the q-value must not be
      smaller than 0.001, and limit the length of the Accept-Language header
      to 4,000 bytes for HTTP servers that cannot accept a longer header.
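
      For illustration, a simplified C sketch of how such a header could be
      assembled (this is not the actual implementation: the count and length
      limits, three-digit q-values and underscore-to-hyphen mapping are all
      omitted, and the function name is made up):

          #include <stdio.h>
          #include <string.h>

          /* Sketch: turn a colon-separated list such as "ko:en" into
           * "Accept-Language: ko, en;q=0.9, *;q=0.1". */
          static void make_accept_language(char *out, size_t outsz, const char *langs)
          {
              char copy[256];
              char *tok;
              int q = 10;                       /* q in tenths: 1.0, 0.9, ... */
              size_t len;

              out[0] = '\0';
              if (!langs || !*langs)
                  return;                       /* LANGUAGE= -> no header */

              snprintf(copy, sizeof(copy), "%s", langs);
              snprintf(out, outsz, "Accept-Language:");
              for (tok = strtok(copy, ":"); tok && q > 1; tok = strtok(NULL, ":"), q--) {
                  len = strlen(out);
                  if (q == 10)
                      snprintf(out + len, outsz - len, " %s", tok);
                  else
                      snprintf(out + len, outsz - len, ", %s;q=0.%d", tok, q);
              }
              len = strlen(out);
              snprintf(out + len, outsz - len, ", *;q=0.1");
          }
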
      Signed-off-by: Yi EungJun <eungjun.yi@navercorp.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  26. 28 Jan 2015, 1 commit
    • dumb-http: do not pass NULL path to parse_pack_index · 8b9c2dd4
      Jeff King authored
      Once upon a time, dumb http always fetched .idx files
      directly into their final location, and then checked their
      validity with parse_pack_index. This was refactored in
      commit 750ef425 (http-fetch: Use temporary files for
      pack-*.idx until verified, 2010-04-19), which uses the
      following logic:
      
        1. If we have the idx already in place, see if it's
           valid (using parse_pack_index). If so, use it.
      
        2. Otherwise, fetch the .idx to a tempfile, check
           that, and if so move it into place.
      
        3. Either way, fetch the pack itself if necessary.
      
      However, it got step 1 wrong. We pass a NULL path parameter
      to parse_pack_index, so an existing .idx file always looks
      broken. Worse, we do not treat this broken .idx as an
      opportunity to re-fetch, but instead return an error,
      ignoring the pack entirely. This can lead to a dumb-http
      fetch failing to retrieve the necessary objects.
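
      For illustration, a sketch of what step 1 was meant to do, using the
      git internals of that era (the surrounding control flow is omitted):

          /* Sketch: validate an existing .idx by handing parse_pack_index()
           * its real path -- passing NULL here is what made every existing
           * .idx look broken. */
          struct packed_git *new_pack =
              parse_pack_index(sha1, sha1_pack_index_name(sha1));
          /* A NULL result now genuinely means "broken .idx": fall back to
           * re-fetching it (step 2) instead of giving up on the pack. */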
      
      This doesn't come up much in practice, because it must be a
      packfile that we found out about (and whose .idx we stored)
      during an earlier dumb-http fetch, but whose packfile we
      _didn't_ fetch. I.e., we did a partial clone of a
      repository, didn't need some packfiles, and now a followup
      fetch needs them.
      
      Discovery and tests by Charles Bailey <charles@hashpling.org>.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  27. 16 Jan 2015, 1 commit
  28. 08 Jan 2015, 1 commit
    • remote-curl: fall back to Basic auth if Negotiate fails · 4dbe6646
      brian m. carlson authored
      Apache servers using mod_auth_kerb can be configured to allow the user
      to authenticate either using Negotiate (using the Kerberos ticket) or
      Basic authentication (using the Kerberos password).  Often, one will
      want to use Negotiate authentication if it is available, but fall back
      to Basic authentication if the ticket is missing or expired.
      
      However, libcurl will try very hard to use something other than Basic
      auth, even over HTTPS.  If Basic and something else are offered, libcurl
      will never attempt to use Basic, even if the other option fails.
      Teach the HTTP client code to stop trying authentication mechanisms that
      don't use a password (currently Negotiate) after the first failure,
      since if they failed the first time, they will never succeed.
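
      For illustration, a sketch of the fallback in libcurl terms (the
      variable and function names are made up, though the constants are
      libcurl's):

          #include <curl/curl.h>

          /* Start by offering every mechanism libcurl knows. */
          static long http_auth_methods = CURLAUTH_ANY;

          /* Sketch: after a failed password-less attempt, stop offering
           * Negotiate so the next round can fall back to Basic (or another
           * password-based method). */
          static void drop_negotiate(CURL *curl)
          {
              http_auth_methods &= ~CURLAUTH_GSSNEGOTIATE;
              curl_easy_setopt(curl, CURLOPT_HTTPAUTH, http_auth_methods);
          }
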
      Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  29. 16 Sep 2014, 1 commit
  30. 22 Aug 2014, 1 commit
  31. 21 Aug 2014, 1 commit
  32. 19 Aug 2014, 1 commit
  33. 21 Jun 2014, 1 commit