1. 26 Sep 2015, 2 commits
    • http: limit redirection depth · b2581164
      Committed by Blake Burkhart
      By default, libcurl will follow circular http redirects
      forever. Let's put a cap on this so that somebody who can
      trigger an automated fetch of an arbitrary repository (e.g.,
      for CI) cannot convince git to loop infinitely.
      
      The value chosen is 20, which is the same default that
      Firefox uses.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • http: limit redirection to protocol-whitelist · f4113cac
      Committed by Blake Burkhart
      Previously, libcurl would follow redirection to any protocol
      it was compiled with support for. This is desirable to allow
      redirection from HTTP to HTTPS. However, it would even
      successfully allow redirection from HTTP to SFTP, a protocol
      that git does not otherwise support at all. Furthermore
      git's new protocol-whitelisting could be bypassed by
      following a redirect within the remote helper, as it was
      only enforced at transport selection time.
      
      This patch limits redirects within libcurl to HTTP, HTTPS,
      FTP and FTPS. If there is a protocol-whitelist present, this
      list is limited to those also allowed by the whitelist. As
      redirection happens from within libcurl, it is impossible
      for an HTTP redirect to reach a protocol implemented within
      another remote helper.
      
      When the curl version git was compiled with is too old to
      support restrictions on protocol redirection, we warn the
      user if GIT_ALLOW_PROTOCOL restrictions were requested. This
      is a little inaccurate, as even without that variable in the
      environment, we would still restrict SFTP, etc, and we do
      not warn in that case. But anything else means we would
      literally warn every time git accesses an http remote.
      
      This commit includes a test, but it is not as robust as we
      would hope. It redirects an http request to ftp, and checks
      that curl complained about the protocol, which means that we
      are relying on curl's specific error message to know what
      happened. Ideally we would redirect to a working ftp server
      and confirm that we can clone without protocol restrictions,
      and not with them. But we do not have a portable way of
      providing an ftp server, nor any other protocol that curl
      supports (https is the closest, but we would have to deal
      with certificates).
      
      [jk: added test and version warning]
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
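      The curl-side restriction amounts to a fixed set test; a minimal shell sketch of the idea (not git's actual C implementation):

```shell
# Sketch: redirects may only land on the four protocols git's http
# code handles itself; anything else (e.g. sftp) is refused even if
# libcurl was compiled with support for it.
redirect_protocol_ok() {
    case $1 in
    http|https|ftp|ftps) return 0 ;;
    *) return 1 ;;
    esac
}

redirect_protocol_ok https && echo "follow redirect"
redirect_protocol_ok sftp || echo "refuse: protocol sftp not allowed"
```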
  2. 24 Sep 2015, 2 commits
    • submodule: allow only certain protocols for submodule fetches · 33cfccbb
      Committed by Jeff King
      Some protocols (like git-remote-ext) can execute arbitrary
      code found in the URL. The URLs that submodules use may come
      from arbitrary sources (e.g., .gitmodules files in a remote
      repository). Let's restrict submodules to fetching from a
      known-good subset of protocols.
      
      Note that we apply this restriction to all submodule
      commands, whether the URL comes from .gitmodules or not.
      This is more restrictive than we need to be; for example, in
      the tests we run:
      
        git submodule add ext::...
      
      which should be trusted, as the URL comes directly from the
      command line provided by the user. But doing it this way is
      simpler, and makes it much less likely that we would miss a
      case. And since such protocols should be an exception
      (especially because nobody who clones from them will be able
      to update the submodules!), it's not likely to inconvenience
      anyone in practice.
      Reported-by: Blake Burkhart <bburky@bburky.com>
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • transport: add a protocol-whitelist environment variable · a5adaced
      Committed by Jeff King
      If we are cloning an untrusted remote repository into a
      sandbox, we may also want to fetch remote submodules in
      order to get the complete view as intended by the other
      side. However, that opens us up to attacks where a malicious
      user gets us to clone something they would not otherwise
      have access to (this is not necessarily a problem by itself,
      but we may then act on the cloned contents in a way that
      exposes them to the attacker).
      
      Ideally such a setup would sandbox git entirely away from
      high-value items, but this is not always practical or easy
      to set up (e.g., OS network controls may block multiple
      protocols, and we would want to enable some but not others).
      
      We can help this case by providing a way to restrict
      particular protocols. We use a whitelist in the environment.
      This is more annoying to set up than a blacklist, but
      defaults to safety if the set of protocols git supports
      grows. If no whitelist is specified, we continue to default
      to allowing all protocols (this is an "unsafe" default, but
      since the minority of users will want this sandboxing
      effect, it is the only sensible one).
      
      A note on the tests: ideally these would all be in a single
      test file, but the git-daemon and httpd test infrastructure
      is an all-or-nothing proposition rather than a test-by-test
      prerequisite. By putting them all together, we would be
      unable to test the file-local code on machines without
      apache.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
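      A sketch of the whitelist semantics (GIT_ALLOW_PROTOCOL is the variable this series adds; the colon-list matching below is an illustration, not git's C code):

```shell
# With no whitelist every protocol is allowed (the historical,
# "unsafe" default); with one, only colon-separated entries pass.
transport_allowed() {
    [ -z "$GIT_ALLOW_PROTOCOL" ] && return 0
    case ":$GIT_ALLOW_PROTOCOL:" in
    *":$1:"*) return 0 ;;
    *) return 1 ;;
    esac
}

unset GIT_ALLOW_PROTOCOL
transport_allowed ext && echo "default: everything allowed"

GIT_ALLOW_PROTOCOL=http:https:git:ssh
transport_allowed ssh && echo "ssh allowed"
transport_allowed ext || echo "ext blocked"
```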
  3. 02 Jul 2015, 1 commit
    • rev-list: disable --use-bitmap-index when pruning commits · c8a70d35
      Committed by Jeff King
      The reachability bitmaps do not have enough information to
      tell us which commits might have changed path "foo", so the
      current code produces wrong answers for:
      
        git rev-list --use-bitmap-index --count HEAD -- foo
      
      (it silently ignores the "foo" limiter). Instead, we should
      fall back to doing a normal traversal (it is OK to fall
      back rather than complain, because --use-bitmap-index is a
      pure optimization, and might not kick in for other reasons,
      such as there being no bitmaps in the repository).
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  4. 30 Jun 2015, 2 commits
  5. 18 Jun 2015, 2 commits
    • test-lib.sh: fix color support when tput needs ~/.terminfo · d5c1b7c2
      Committed by Richard Hansen
      If tput needs ~/.terminfo for the current $TERM, then tput will
      succeed before HOME is changed to $TRASH_DIRECTORY (causing color to
      be set to 't') but fail afterward.
      
      One possible way to fix this is to treat HOME like TERM: back up the
      original value and temporarily restore it before say_color() runs
      tput.
      
      Instead, pre-compute and save the color control sequences before
      changing either TERM or HOME.  Use the saved control sequences in
      say_color() rather than call tput each time.  This avoids the need to
      back up and restore the TERM and HOME variables, and it avoids the
      overhead of a subshell and two invocations of tput per call to
      say_color().
      Signed-off-by: Richard Hansen <rhansen@bbn.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
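      The pre-compute approach can be sketched like this (variable and function names are illustrative, not test-lib.sh's actual ones):

```shell
# Capture control sequences once, while TERM and HOME still point at
# the user's real environment; later output emits the saved strings,
# so no tput call happens after HOME moves to the trash directory.
if test -t 1 && command -v tput >/dev/null 2>&1; then
    color_error=$(tput setaf 1 2>/dev/null) || color_error=
    color_reset=$(tput sgr0 2>/dev/null) || color_reset=
else
    color_error= color_reset=
fi

# ... after this point TERM and HOME may change freely ...
say_color_error() {
    printf '%s%s%s\n' "$color_error" "$1" "$color_reset"
}

say_color_error "not ok 1 - example"
```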
    • Revert "test-lib.sh: do tests for color support after changing HOME" · ca92a660
      Committed by Richard Hansen
      This reverts commit 102fc80d.
      
      There are two issues with that commit:
      
        * It is buggy.  In pseudocode, it is doing:
      
             color is set || TERM != dumb && color works && color=t
      
          when it should be doing:
      
             color is set || { TERM != dumb && color works && color=t }
      
        * It unnecessarily disables color when tput needs to read
          ~/.terminfo to get the control sequences.
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
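      The precedence bug is easy to reproduce: `&&` and `||` bind equally and associate left to right, so the unbraced pseudocode above parses as `(color is set || TERM != dumb) && ...`. A sketch:

```shell
# A || B && C parses as (A || B) && C: C still runs when A succeeds.
A() { return 0; }                       # "color is already set"
B() { return 0; }                       # "TERM != dumb"
C() { echo "probing terminal..."; }     # the "color works" check

A || B && C          # prints "probing terminal..." despite A succeeding
A || { B && C; }     # braces restore the intent: prints nothing
```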
  6. 16 Jun 2015, 1 commit
    • Revert "stash: require a clean index to apply" · 19376104
      Committed by Jeff King
      This reverts commit ed178ef1.
      
      That commit was an attempt to improve the safety of applying
      a stash, because the application process may create
      conflicted index entries, after which it is hard to restore
      the original index state.
      
      Unfortunately, this hurts some common workflows around "git
      stash -k", like:
      
          git add -p       ;# (1) stage set of proposed changes
          git stash -k     ;# (2) get rid of everything else
          make test        ;# (3) make sure proposal is reasonable
          git stash apply  ;# (4) restore original working tree
      
      If you "git commit" between steps (3) and (4), then this
      just works. However, if these steps are part of a pre-commit
      hook, you don't have that opportunity (you have to restore
      the original state regardless of whether the tests passed or
      failed).
      
      It's possible that we could provide better tools for this
      sort of workflow. In particular, even before ed178ef1, it
      could fail with a conflict if there were conflicting hunks
      in the working tree and index (since the "stash -k" puts the
      index version into the working tree, and we then attempt to
      apply the differences between HEAD and the old working tree
      on top of that). But the fact remains that people have been
      using it happily for a while, and the safety provided by
      ed178ef1 is simply not that great. Let's revert it for now.
      In the long run, people can work on improving stash for this
      sort of workflow, but the safety tradeoff is not worth it in
      the meantime.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  7. 13 Jun 2015, 1 commit
  8. 10 Jun 2015, 1 commit
    • commit: cope with scissors lines in commit message · fbfa0973
      Committed by SZEDER Gábor
      The diff and submodule shortlog appended to the commit message template
      by 'git commit --verbose' are not stripped when the commit message
      contains an indented scissors line.
      
      When cleaning up a commit message with 'git commit --verbose' or
      '--cleanup=scissors' the code is careful and triggers only on a pure
      scissors line, i.e. a line containing nothing but a comment character, a
      space, and the scissors cut.  This is good, because people can embed
      scissors lines in the commit message while using 'git commit --verbose',
      and the text they write after their indented scissors line doesn't get
      deleted.
      
      While doing so, however, the cleanup function only looks at the first
      line matching the scissors pattern and if it doesn't start at the
      beginning of the line, then the function just returns without performing
      any cleanup.  This is wrong, because a "real" scissors line added by
      'git commit --verbose' might follow, and in that case the diff and
      submodule shortlog get included in the commit message.
      
      Fix this by changing the scissors pattern to match only at the beginning
      of the line, yet be careful to catch scissors on the first line as well.
      Helped-by: Junio C Hamano <gitster@pobox.com>
      Signed-off-by: SZEDER Gábor <szeder@ira.uka.de>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
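      The anchoring can be illustrated with grep (the cut line is the one git itself emits; the regex is a simplified stand-in for the code's pattern):

```shell
# Only a scissors line starting in column one marks the cut; an
# indented one is ordinary commit-message text the user wrote.
msg='subject

  # ------------------------ >8 ------------------------
indented scissors: text above and below it must survive

# ------------------------ >8 ------------------------
diff --git a/file b/file'

printf '%s\n' "$msg" | grep -c '>8'              # prints 2: both lines
printf '%s\n' "$msg" | grep -c '^# -* >8 -*$'    # prints 1: real cut only
```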
  9. 09 Jun 2015, 7 commits
    • am --abort: keep unrelated commits on unborn branch · 6ea3b67b
      Committed by Paul Tan
      Since 7b3b7e37 (am --abort: keep unrelated commits since the last failure
      and warn, 2010-12-21), git-am would refuse to rewind HEAD if commits
      were made since the last git-am failure. This check was implemented in
      safe_to_abort(), which checked to see if HEAD's hash matched the
      abort-safety file.
      
      However, this check was skipped if the abort-safety file was empty,
      which can happen if git-am failed while on an unborn branch. As such, if
      any commits were made since then, they would be discarded. Fix this by
      carrying on the abort safety check even if the abort-safety file is
      empty.
      Signed-off-by: Paul Tan <pyokagan@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • am --abort: support aborting to unborn branch · e06764c8
      Committed by Paul Tan
      When git-am is first run on an unborn branch, no ORIG_HEAD is created.
      As such, any applied commits will remain even after a git am --abort.
      
      To be consistent with the behavior of git am --abort when it is not run
      from an unborn branch, we empty the index, and then destroy the branch
      pointed to by HEAD if there is no ORIG_HEAD.
      Signed-off-by: Paul Tan <pyokagan@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • am --abort: revert changes introduced by failed 3way merge · 20c3fe76
      Committed by Paul Tan
      Even when a merge conflict occurs with am --3way, the index will be
      modified with the results of any successfully merged files. These
      changes to the index will not be reverted with a
      "git read-tree --reset -u HEAD ORIG_HEAD", as git read-tree will not be
      aware of how the current index differs from HEAD or ORIG_HEAD.
      
      To fix this, we first reset any conflicting entries in the index. The
      resulting index will contain the results of successfully merged files
      introduced by the failed merge. We write this index to a tree, and then
      use git read-tree to fast-forward this "index tree" back to ORIG_HEAD,
      thus undoing all the changes from the failed merge.
      
      When we are on an unborn branch, HEAD and ORIG_HEAD will not point to
      valid trees. In this case, use an empty tree.
      Signed-off-by: Paul Tan <pyokagan@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
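      The "empty tree" stand-in used on an unborn branch is a fixed, well-known object; its id can even be derived without a repository (assuming sha1sum from coreutils is available):

```shell
# A loose git object is "<type> <size>\0<payload>"; the empty tree
# has no payload, so its id is the SHA-1 of the 7-byte header alone.
empty_tree=$(printf 'tree 0\0' | sha1sum | cut -d' ' -f1)
echo "$empty_tree"    # 4b825dc642cb6eb9a060e54bf8d69288fbee4904
```

      git itself produces the same id with `git hash-object -t tree /dev/null`; it is this tree that takes the place of HEAD or ORIG_HEAD when they do not point at valid trees.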
    • am --skip: support skipping while on unborn branch · f8da6801
      Committed by Paul Tan
      When git am --skip is run, git am will copy HEAD's tree entries to the
      index with "git reset HEAD". However, on an unborn branch, HEAD does not
      point to a tree, so "git reset HEAD" will fail.
      
      Fix this by treating HEAD as an empty tree when we are on an unborn
      branch.
      Signed-off-by: Paul Tan <pyokagan@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • am -3: support 3way merge on unborn branch · 2c970c9e
      Committed by Paul Tan
      While on an unborn branch, git am -3 will fail to do a threeway merge as
      it references HEAD as "our tree", but HEAD does not point to a valid
      tree.
      
      Fix this by using an empty tree as "our tree" when we are on an unborn
      branch.
      Signed-off-by: Paul Tan <pyokagan@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • am --skip: revert changes introduced by failed 3way merge · 88d50724
      Committed by Paul Tan
      Even when a merge conflict occurs with am --3way, the index will be
      modified with the results of any successfully merged files (such as a new
      file). These changes to the index will not be reverted with a
      "git read-tree --reset -u HEAD HEAD", as git read-tree will not be aware
      of how the current index differs from HEAD.
      
      To fix this, we first reset any conflicting entries from the index. The
      resulting index will contain the results of successfully merged files.
      We write the index to a tree, then use git read-tree -m to fast-forward
      the "index tree" back to HEAD, thus undoing all the changes from the
      failed merge.
      Signed-off-by: Paul Tan <pyokagan@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • read_loose_refs(): treat NULL_SHA1 loose references as broken · 501cf47c
      Committed by Michael Haggerty
      NULL_SHA1 is used to indicate an "invalid object name" throughout our
      code (and the code of other git implementations), so it is vastly more
      likely that an on-disk reference was set to this value due to a
      software bug than that NULL_SHA1 is the legitimate SHA-1 of an actual
      object.  Therefore, if a loose reference has the value NULL_SHA1,
      consider it to be broken.
      Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
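      The brokenness test reduces to "is the stored value 40 zero hex digits"; in shell terms (illustrative; the real code compares against a C constant):

```shell
# A loose ref whose stored value is NULL_SHA1 (40 zeros) points at
# no real object, so it is treated as broken rather than valid.
is_null_sha1() {
    printf '%s' "$1" | grep -qE '^0{40}$'
}

is_null_sha1 "0000000000000000000000000000000000000000" && echo "broken"
is_null_sha1 "4b825dc642cb6eb9a060e54bf8d69288fbee4904" || echo "plausible"
```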
  10. 03 Jun 2015, 2 commits
    • for-each-ref: report broken references correctly · 8afc493d
      Committed by Michael Haggerty
      If there is a loose reference file with invalid contents, "git
      for-each-ref" incorrectly reports the problem as being a missing
      object with name NULL_SHA1:
      
          $ echo '12345678' >.git/refs/heads/nonsense
          $ git for-each-ref
          fatal: missing object 0000000000000000000000000000000000000000 for refs/heads/nonsense
      
      With an explicit "--format" string, it can even report that the
      reference validly points at NULL_SHA1:
      
          $ git for-each-ref --format='%(objectname) %(refname)'
          0000000000000000000000000000000000000000 refs/heads/nonsense
          $ echo $?
          0
      
      This has been broken since b7dd2d20 (for-each-ref: Do not
      lookup objects when they will not be used, 2009-05-27), which
      changed for-each-ref from using for_each_ref() to using
      for_each_rawref() in order to avoid looking up the referred-to
      objects unnecessarily. (When "git for-each-ref" is given a "--format"
      string that doesn't include information about the pointed-to object,
      it does not look up the object at all, which makes it considerably
      faster. Iterating with DO_FOR_EACH_INCLUDE_BROKEN is essential to this
      optimization because otherwise for_each_ref() would itself need to
      check whether the object exists as part of its brokenness test.)
      
      But for_each_rawref() includes broken references in the iteration, and
      "git for-each-ref" doesn't itself reject references with REF_ISBROKEN.
      The result is that broken references are processed *as if* they had
      the value NULL_SHA1, which is the value stored in entries for broken
      references.
      
      Change "git for-each-ref" to emit warnings for references that are
      REF_ISBROKEN but to otherwise skip them.
      Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • t6301: new tests of for-each-ref error handling · c3e23dc1
      Committed by Michael Haggerty
      Add tests that for-each-ref correctly reports broken loose reference
      files and references that point at missing objects. In fact, two of
      these tests fail, because (1) NULL_SHA1 is not recognized as an
      invalid reference value, and (2) for-each-ref doesn't respect
      REF_ISBROKEN. Fixes to come.
      
      Note that when for-each-ref is run with a --format option that doesn't
      require the object to be looked up, then we should still notice if a
      loose reference file is corrupt or contains NULL_SHA1, but we don't
      notice if it points at a missing object because we don't do an object
      lookup. This is OK.
      Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  11. 02 Jun 2015, 2 commits
    • format-patch: do not feed tags to clear_commit_marks() · 9b7a61d7
      Committed by Junio C Hamano
      "git format-patch --ignore-if-in-upstream A..B", when either A or B
      is a tag, failed miserably.
      
      This is because the code passes the tips it used for traversal to
      clear_commit_marks(), after running a temporary revision traversal
      to enumerate the commits on both branches to find if they have
      commits that make equivalent changes.  The revision traversal
      machinery knows how to enumerate commits reachable starting from a
      tag, but clear_commit_marks() wants to take nothing but a commit.
      
      In the longer term, it might be a more correct fix to teach
      clear_commit_marks() to do the same "committish to commit"
      dereferencing that is done in the revision traversal machinery,
      but for now this fix should suffice.
      Reported-by: Bruce Korb <bruce.korb@gmail.com>
      Helped-by: Christian Couder <christian.couder@gmail.com>
      Helped-by: brian m. carlson <sandals@crustytoothpaste.net>
      Helped-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • silence broken link warnings with revs->ignore_missing_links · daf7d867
      Committed by Jeff King
      We set revs->ignore_missing_links to instruct the
      revision-walking machinery that we know the history graph
      may be incomplete. For example, we use it when walking
      unreachable but recent objects; we want to add what we can,
      but it's OK if the history is incomplete.
      
      However, we still print error messages for the missing
      objects, which can be confusing. This is not an error, but
      just a normal situation when transitioning from a repository
      last pruned by an older git (which can leave broken segments
      of history) to a more recent one (where we try to preserve
      whole reachable segments).
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  12. 30 May 2015, 1 commit
    • setup_git_directory: delay core.bare/core.worktree errors · fada7674
      Committed by Jeff King
      If both core.bare and core.worktree are set, we complain
      about the bogus config and die. Dying is good, because it
      avoids commands running and doing damage in a potentially
      incorrect setup. But dying _there_ is bad, because it means
      that commands which do not even care about the work tree
      cannot run. This can make repairing the situation harder:
      
        [setup]
        $ git config core.bare true
        $ git config core.worktree /some/path
      
        [OK, expected.]
        $ git status
        fatal: core.bare and core.worktree do not make sense
      
        [Hrm...]
        $ git config --unset core.worktree
        fatal: core.bare and core.worktree do not make sense
      
        [Nope...]
        $ git config --edit
        fatal: core.bare and core.worktree do not make sense
      
        [Gaaah.]
        $ git help config
        fatal: core.bare and core.worktree do not make sense
      
      Instead, let's issue a warning about the bogus config when
      we notice it (i.e., for all commands), but only die when the
      command tries to use the work tree (by calling setup_work_tree).
      So we now get:
      
        $ git status
        warning: core.bare and core.worktree do not make sense
        fatal: unable to set up work tree using invalid config
      
        $ git config --unset core.worktree
        warning: core.bare and core.worktree do not make sense
      
      We have to update t1510 to accommodate this; it uses
      symbolic-ref to check whether the configuration works or
      not, but of course that command does not use the working
      tree. Instead, we switch it to use `git status`, as it
      requires a work-tree, does not need any special setup, and
      is read-only (so a failure will not adversely affect further
      tests).
      
      In addition, we add a new test that checks the desired
      behavior (i.e., that running "git config" with the bogus
      config does in fact work).
      Reported-by: SZEDER Gábor <szeder@ira.uka.de>
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  13. 26 May 2015, 1 commit
    • http-backend: spool ref negotiation requests to buffer · 6bc0cb51
      Committed by Jeff King
      When http-backend spawns "upload-pack" to do ref
      negotiation, it streams the http request body to
      upload-pack, who then streams the http response back to the
      client as it reads. In theory, git can go full-duplex; the
      client can consume our response while it is still sending
      the request.  In practice, however, HTTP is a half-duplex
      protocol. Even if our client is ready to read and write
      simultaneously, we may have other HTTP infrastructure in the
      way, including the webserver that spawns our CGI, or any
      intermediate proxies.
      
      In at least one documented case[1], this leads to deadlock
      when trying a fetch over http. What happens is basically:
      
        1. Apache proxies the request to the CGI, http-backend.
      
        2. http-backend gzip-inflates the data and sends
           the result to upload-pack.
      
        3. upload-pack acts on the data and generates output over
           the pipe back to Apache. Apache isn't reading because
           it's busy writing (step 1).
      
      This works fine most of the time, because the upload-pack
      output ends up in a system pipe buffer, and Apache reads
      it as soon as it finishes writing. But if both the request
      and the response exceed the system pipe buffer size, then we
      deadlock (Apache blocks writing to http-backend,
      http-backend blocks writing to upload-pack, and upload-pack
      blocks writing to Apache).
      
      We need to break the deadlock by spooling either the input
      or the output. In this case, it's ideal to spool the input,
      because Apache does not start reading either stdout _or_
      stderr until we have consumed all of the input. So until we
      do so, we cannot even get an error message out to the
      client.
      
      The solution is fairly straightforward: we read the request
      body into an in-memory buffer in http-backend, freeing up
      Apache, and then feed the data ourselves to upload-pack. But
      there are a few important things to note:
      
        1. We limit the in-memory buffer to prevent an obvious
           denial-of-service attack. This is a new hard limit on
           requests, but it's unlikely to come into play. The
           default value is 10MB, which covers even the ridiculous
           100,000-ref negotiation in the included test (that
           actually caps out just over 5MB). But it's configurable
           on the off chance that you don't mind spending some
           extra memory to make even ridiculous requests work.
      
        2. We must take care only to buffer when we have to. For
           pushes, the incoming packfile may be of arbitrary
           size, and we should connect the input directly to
           receive-pack. There's no deadlock problem here, though,
           because we do not produce any output until the whole
           packfile has been read.
      
           For upload-pack's initial ref advertisement, we
           similarly do not need to buffer. Even though we may
           generate a lot of output, there is no request body at
           all (i.e., it is a GET, not a POST).
      
      [1] http://article.gmane.org/gmane.comp.version-control.git/269020
      
      Test-adapted-from: Dennis Kaarsemaker <dennis@kaarsemaker.net>
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
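      Point 1 above (bounded spooling) can be sketched with a temp file standing in for the in-memory buffer; the 10MB figure is the commit's default, the helper name is invented:

```shell
# Read a request body up to a hard cap, failing loudly instead of
# buffering unbounded data. 10485760 mirrors the 10MB default.
spool_body() {
    limit=${1:-10485760}
    spool=$(mktemp)
    head -c $((limit + 1)) >"$spool"   # one extra byte detects overflow
    if [ "$(wc -c <"$spool")" -gt "$limit" ]; then
        rm -f "$spool"
        echo "fatal: request body larger than $limit bytes" >&2
        return 1
    fi
    echo "$spool"
}

f=$(printf 'a sample request body' | spool_body 1024) &&
    echo "spooled $(wc -c <"$f") bytes"
```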
  14. 22 May 2015, 3 commits
    • t5407: use <<- to align the expected output · 141ff8f9
      Committed by Junio C Hamano
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • rebase -i: fix post-rewrite hook with failed exec command · b12d3e90
      Committed by Matthieu Moy
      Usually, when 'git rebase' stops before completing the rebase, it is to
      give the user an opportunity to edit a commit (e.g. with the 'edit'
      command). In such cases, 'git rebase' leaves the sha1 of the commit being
      rewritten in "$state_dir"/stopped-sha, and subsequent 'git rebase
      --continue' will call the post-rewrite hook with this sha1 as <old-sha1>
      argument to the post-rewrite hook.
      
      The case of 'git rebase' stopping because of a failed 'exec' command is
      different: it gives the opportunity to the user to examine or fix the
      failure, but does not stop saying "here's a commit to edit, use
      --continue when you're done". So, there's no reason to call the
      post-rewrite hook for 'exec' commands. If the user did rewrite the
      commit, it would be with 'git commit --amend' which already called the
      post-rewrite hook.
      
      Fix the behavior to leave no stopped-sha file in case of failed exec
      command, and teach 'git rebase --continue' to skip record_in_rewritten if
      no stopped-sha file is found.
      Signed-off-by: Matthieu Moy <Matthieu.Moy@imag.fr>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
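      The fixed control flow, sketched in shell (the state_dir layout follows the commit's description; function names and shas are invented):

```shell
# Only a stop "for editing" records stopped-sha; a failed exec does
# not, and --continue records a rewrite only when the file exists.
state_dir=$(mktemp -d)

stop_rebase() {                 # $1 = command kind, $2 = commit sha
    case $1 in
    edit) echo "$2" >"$state_dir/stopped-sha" ;;   # user will amend this
    exec) : ;;                                     # nothing was rewritten
    esac
}

continue_rebase() {
    if [ -f "$state_dir/stopped-sha" ]; then
        echo "record_in_rewritten $(cat "$state_dir/stopped-sha")"
        rm -f "$state_dir/stopped-sha"
    fi
}

stop_rebase exec 1111111
continue_rebase      # prints nothing: failed exec left no stopped-sha
stop_rebase edit 2222222
continue_rebase      # prints: record_in_rewritten 2222222
```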
    • rebase -i: demonstrate incorrect behavior of post-rewrite · 1d968ca6
      Committed by Matthieu Moy
      The 'exec' command is sending the current commit to stopped-sha, which is
      supposed to contain the original commit (before rebase). As a result, if
      an 'exec' command fails, the next 'git rebase --continue' will send the
      current commit as <old-sha1> to the post-rewrite hook.
      
      The test currently fails with:
      
        --- expected.data       2015-05-21 17:55:29.000000000 +0000
        +++ [...]post-rewrite.data      2015-05-21 17:55:29.000000000 +0000
        @@ -1,2 +1,3 @@
         2362ae8e1b1b865e6161e6f0e165ffb974abf018 488028e9fac0b598b70cbeb594258a917e3f6fab
        +488028e9fac0b598b70cbeb594258a917e3f6fab 488028e9fac0b598b70cbeb594258a917e3f6fab
         babc8a4c7470895886fc129f1a015c486d05a351 8edffcc4e69a4e696a1d4bab047df450caf99507
      Signed-off-by: Matthieu Moy <Matthieu.Moy@imag.fr>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  15. 21 May 2015, 2 commits
    • stash: complain about unknown flags · d6cc2df5
      Committed by Jeff King
      The option parser for git-stash stuffs unknown flags into
      the $FLAGS variable, where they can be accessed by the
      individual commands. However, most commands do not even look
      at these extra flags, leading to unexpected results like
      this:
      
        $ git stash drop --help
        Dropped refs/stash@{0} (e6cf6d80faf92bb7828f7b60c47fc61c03bd30a1)
      
      We should notice the extra flags and bail. Rather than
      annotate each command to reject a non-empty $FLAGS variable,
      we can notice that "stash show" is the only command that
      actually _wants_ arbitrary flags. So we switch the default
      mode to reject unknown flags, and let stash_show() opt into
      the feature.
      Reported-by: Vincent Legoll <vincent.legoll@gmail.com>
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
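      The tightened option loop can be sketched like this (option names abbreviated; only the "show" path opts in to pass-through):

```shell
# Unknown flags now abort instead of accumulating silently; only
# stash_show() asks for pass-through (its flags go on to git-diff).
parse_flags() {
    allow_unknown=$1; shift
    FLAGS=
    for opt; do
        case $opt in
        -q|--quiet) : ;;                 # a known, handled flag
        -*)
            if [ "$allow_unknown" = yes ]; then
                FLAGS="$FLAGS $opt"
            else
                echo "error: unknown option: $opt" >&2
                return 129
            fi ;;
        esac
    done
}

parse_flags no --help 2>/dev/null || echo "drop: refused --help"
parse_flags yes --stat && echo "show: forwards$FLAGS"
```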
    • t5551: factor out tag creation · cc969c8d
      Committed by Jeff King
      One of our tests in t5551 creates a large number of tags,
      and jumps through some hoops to do it efficiently. Let's
      factor that out into a function so we can make other similar
      tests.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  16. 19 May 2015, 4 commits
    • S
      subdirectory tests: code cleanup, uncomment test · 66d2e04e
      Authored by Stefan Beller
      Back when these tests were written, we wanted to make sure that Git
      notices it is in a bare repository and "git show -s HEAD" would
      refrain from complaining that HEAD might mean a file it sees in its
      current working directory (because it does not).  But the version of
      Git back then didn't behave well without (doubly) being told that
      it is inside a bare repository by exporting "GIT_DIR=.".  The form
      of the test we originally wanted to have was left commented out as
      a reminder.
      
      Nowadays the test as originally intended works, so add it to the
      test suite.  We'll keep the old test that explicitly sets GIT_DIR=.
      to make sure that use case will not regress.
      Signed-off-by: Stefan Beller <sbeller@google.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      66d2e04e
    • P
      pull: make pull.ff=true override merge.ff · eb8dc05c
      Authored by Paul Tan
      Since b814da89 (pull: add pull.ff configuration, 2014-01-15), running
      git-pull with the configuration pull.ff=false or pull.ff=only is
      equivalent to passing --no-ff and --ff-only to git-merge. However, if
      pull.ff=true, no switch is passed to git-merge. This leads to the
      confusing behavior where pull.ff=false or pull.ff=only is able to
      override merge.ff, while pull.ff=true is unable to.
      
      Fix this by adding the --ff switch if pull.ff=true, and add a test to
      catch future regressions.
      
      Furthermore, clarify in the documentation that pull.ff overrides
      merge.ff.
      Signed-off-by: Paul Tan <pyokagan@gmail.com>
      Reviewed-by: Johannes Schindelin <johannes.schindelin@gmx.de>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      eb8dc05c
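      The resulting mapping from pull.ff values to git-merge switches can be sketched as follows (an illustrative function, not git-pull's actual code):

```shell
#!/bin/sh
# Illustrative mapping: after the fix, pull.ff=true also emits an
# explicit switch, so all three values can override merge.ff.
ff_switch_for () {
	case "$1" in
	true)
		echo "--ff" ;;       # previously nothing was emitted here
	false)
		echo "--no-ff" ;;
	only)
		echo "--ff-only" ;;
	*)
		echo "" ;;           # unset/unknown: leave merge.ff in charge
	esac
}
```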
    • P
      pull: handle --log=<n> · 5061a44b
      Authored by Paul Tan
      Since efb779f8 (merge, pull: add '--(no-)log' command line option,
      2008-04-06), git-pull has supported the --(no-)log switch and passed
      it through to git-merge.
      
      96e9420c (merge: Make '--log' an integer option for number of shortlog
      entries, 2010-09-08) implemented support for the --log=<n> switch, which
      would explicitly set the number of shortlog entries. However, git-pull
      does not recognize this option, and will instead pass it to git-fetch,
      leading to "unknown option" errors.
      
      Fix this by matching --log=* in addition to --log and --no-log.
      
      Implement a test for this use case.
      Signed-off-by: Paul Tan <pyokagan@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      5061a44b
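      The fix boils down to one more pattern in the argument-matching case statement; a minimal sketch (the function name is invented for illustration):

```shell
#!/bin/sh
# Illustrative classifier: --log=<n> now joins --log/--no-log in being
# routed to git-merge rather than falling through to git-fetch.
route_pull_arg () {
	case "$1" in
	--log|--no-log|--log=*)
		echo merge ;;
	*)
		echo fetch ;;
	esac
}
```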
    • J
      sha1_file: pass empty buffer to index empty file · f6a1e1e2
      Authored by Jim Hill
      `git add` of an empty file with a filter pops complaints from
      `copy_fd` about a bad file descriptor.
      
      This traces back to these lines in sha1_file.c:index_core:
      
      	if (!size) {
      		ret = index_mem(sha1, NULL, size, type, path, flags);
      
      The problem here is that content to be added to the index can be
      supplied from an fd, or from a memory buffer, or from a pathname. This
      call is supplying a NULL buffer pointer and a zero size.
      
      Downstream logic takes the complete absence of a buffer to mean the
      data is to be found elsewhere -- for instance, these, from convert.c:
      
      	if (params->src) {
      		write_err = (write_in_full(child_process.in, params->src, params->size) < 0);
      	} else {
      		write_err = copy_fd(params->fd, child_process.in);
      	}
      
      If there's a buffer, write from that; otherwise the data must be coming
      from an open fd.
      
      Perfectly reasonable logic in a routine that's going to write from
      either a buffer or an fd.
      
      So change `index_core` to supply an empty buffer when indexing an empty
      file.
      
      There's a patch out there that instead changes the logic quoted above to
      take a `-1` fd to mean "use the buffer", but it seems to me that the
      distinction between a missing buffer and an empty one carries intrinsic
      semantics, where the logic change is adapting the code to handle
      incorrect arguments.
      Signed-off-by: Jim Hill <gjthill@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      f6a1e1e2
  17. 14 May, 2015 — 1 commit
    • J
      log: decorate HEAD with branch name under --decorate=full, too · 76c61fbd
      Authored by Junio C Hamano
      The previous step to teach "log --decorate" to show "HEAD -> master"
      instead of "HEAD, master" when showing the commit at the tip of the
      'master' branch, when the 'master' branch is checked out, did not
      work for "log --decorate=full".
      
      The commands in the "log" family prepare commit decorations for all
      refs upfront, and the actual string used in a decoration depends on
      how load_ref_decorations() is called very early in the process.  By
      default, "git log --decorate" stores names with common prefixes such
      as "refs/heads" stripped; "git log --decorate=full" stores the full
      refnames.
      
      When the current_pointed_by_HEAD() function has to decide if "HEAD"
      points at the branch a decoration describes, however, what was
      passed to load_ref_decorations() to decide to strip (or keep) such a
      common prefix is long lost.  This makes it impossible to reliably
      tell if a decoration that stores "refs/heads/master", for example,
      is the 'master' branch (under "--decorate" with prefix omitted) or
      'refs/heads/master' branch (under "--decorate=full").
      
      Keep what was passed to load_ref_decorations() in a global next to
      the global variable name_decoration, and use that to decide how to
      match what was read from "HEAD" and what is in a decoration.
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      76c61fbd
  18. 13 May, 2015 — 5 commits
    • M
      ref_transaction_commit(): fix atomicity and avoid fd exhaustion · cf018ee0
      Authored by Michael Haggerty
      The old code was roughly
      
          for update in updates:
              acquire locks and check old_sha
          for update in updates:
              if changing value:
                  write_ref_to_lockfile()
                  commit_ref_update()
          for update in updates:
              if deleting value:
                  unlink()
          rewrite packed-refs file
          for update in updates:
              if reference still locked:
                  unlock_ref()
      
      This has two problems.
      
      Non-atomic updates
      ==================
      
      The atomicity of the reference transaction depends on all pre-checks
      being done in the first loop, before any changes have started being
      committed in the second loop. The problem is that
      write_ref_to_lockfile() (previously part of write_ref_sha1()), which
      is called from the second loop, contains two more checks:
      
      * It verifies that new_sha1 is a valid object
      
      * If the reference being updated is a branch, it verifies that
        new_sha1 points at a commit object (as opposed to a tag, tree, or
        blob).
      
      If either of these checks fails, the "transaction" is aborted during
      the second loop. But this might happen after some reference updates
      have already been permanently committed. In other words, the
      all-or-nothing promise of "git update-ref --stdin" could be violated.
      
      So these checks have to be moved to the first loop.
      
      File descriptor exhaustion
      ==========================
      
      The old code locked all of the references in the first loop, leaving
      all of the lockfiles open until later loops. Since we might be
      updating a lot of references, this could result in file descriptor
      exhaustion.
      
      The solution
      ============
      
      After this patch, the code looks like
      
          for update in updates:
              acquire locks and check old_sha
              if changing value:
                  write_ref_to_lockfile()
              else:
                  close_ref()
          for update in updates:
              if changing value:
                  commit_ref_update()
          for update in updates:
              if deleting value:
                  unlink()
          rewrite packed-refs file
          for update in updates:
              if reference still locked:
                  unlock_ref()
      
      This fixes both problems:
      
      1. The pre-checks in write_ref_to_lockfile() are now done in the first
         loop, before any changes have been committed. If any of the checks
         fails, the whole transaction can now be rolled back correctly.
      
      2. All lockfiles are closed in the first loop immediately after they
         are created (either by write_ref_to_lockfile() or by close_ref()).
         This means that there is never more than one open lockfile at a
         time, preventing file descriptor exhaustion.
      
      To simplify the bookkeeping across loops, add a new REF_NEEDS_COMMIT
      bit to update->flags, which keeps track of whether the corresponding
      lockfile needs to be committed, as opposed to just unlocked. (Since
      "struct ref_update" is internal to the refs module, this change is not
      visible to external callers.)
      
      This change fixes two tests in t1400.
      Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      cf018ee0
    • S
      t7004: rename ULIMIT test prerequisite to ULIMIT_STACK_SIZE · fc38a9bb
      Authored by Stefan Beller
      During creation of the patch series, our discussion revealed that
      we could have a more descriptive name for the prerequisite for the
      test so it stays unique when other limits of ulimit are introduced.
      Signed-off-by: Stefan Beller <sbeller@google.com>
      Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      fc38a9bb
    • M
      ref_transaction_commit(): fix atomicity and avoid fd exhaustion · 6c34492a
      Authored by Michael Haggerty
      The old code was roughly
      
          for update in updates:
              acquire locks and check old_sha
          for update in updates:
              if changing value:
                  write_ref_to_lockfile()
                  commit_ref_update()
          for update in updates:
              if deleting value:
                  unlink()
          rewrite packed-refs file
          for update in updates:
              if reference still locked:
                  unlock_ref()
      
      This has two problems.
      
      Non-atomic updates
      ==================
      
      The atomicity of the reference transaction depends on all pre-checks
      being done in the first loop, before any changes have started being
      committed in the second loop. The problem is that
      write_ref_to_lockfile() (previously part of write_ref_sha1()), which
      is called from the second loop, contains two more checks:
      
      * It verifies that new_sha1 is a valid object
      
      * If the reference being updated is a branch, it verifies that
        new_sha1 points at a commit object (as opposed to a tag, tree, or
        blob).
      
      If either of these checks fails, the "transaction" is aborted during
      the second loop. But this might happen after some reference updates
      have already been permanently committed. In other words, the
      all-or-nothing promise of "git update-ref --stdin" could be violated.
      
      So these checks have to be moved to the first loop.
      
      File descriptor exhaustion
      ==========================
      
      The old code locked all of the references in the first loop, leaving
      all of the lockfiles open until later loops. Since we might be
      updating a lot of references, this could result in file descriptor
      exhaustion.
      
      The solution
      ============
      
      After this patch, the code looks like
      
          for update in updates:
              acquire locks and check old_sha
              if changing value:
                  write_ref_to_lockfile()
              else:
                  close_ref()
          for update in updates:
              if changing value:
                  commit_ref_update()
          for update in updates:
              if deleting value:
                  unlink()
          rewrite packed-refs file
          for update in updates:
              if reference still locked:
                  unlock_ref()
      
      This fixes both problems:
      
      1. The pre-checks in write_ref_to_lockfile() are now done in the first
         loop, before any changes have been committed. If any of the checks
         fails, the whole transaction can now be rolled back correctly.
      
      2. All lockfiles are closed in the first loop immediately after they
         are created (either by write_ref_to_lockfile() or by close_ref()).
         This means that there is never more than one open lockfile at a
         time, preventing file descriptor exhaustion.
      
      To simplify the bookkeeping across loops, add a new REF_NEEDS_COMMIT
      bit to update->flags, which keeps track of whether the corresponding
      lockfile needs to be committed, as opposed to just unlocked. (Since
      "struct ref_update" is internal to the refs module, this change is not
      visible to external callers.)
      
      This change fixes two tests in t1400.
      Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      6c34492a
    • S
      t7004: rename ULIMIT test prerequisite to ULIMIT_STACK_SIZE · 71ad0505
      Authored by Stefan Beller
      During creation of the patch series, our discussion revealed that
      we could have a more descriptive name for the prerequisite for the
      test so it stays unique when other limits of ulimit are introduced.
      
      Let's rename the existing ulimit about setting the stack size to
      a more explicit ULIMIT_STACK_SIZE.
      Signed-off-by: Stefan Beller <sbeller@google.com>
      Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      71ad0505