1. 29 June 2018 · 2 commits
    • upload-pack: test negotiation with changing repository · 3374292e
      Committed by Brandon Williams
      Add tests to check the behavior of fetching from a repository which
      changes between rounds of negotiation (for example, when different
      servers in a load-balancing arrangement participate in the same stateless
      RPC negotiation). This forms a baseline of comparison to the ref-in-want
      functionality (which will be introduced to the client in subsequent
      commits), and ensures that subsequent commits do not change existing
      behavior.
      
      As part of this effort, a mechanism to substitute strings in a single
      HTTP response is added.
      Signed-off-by: Brandon Williams <bmwill@google.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • upload-pack: implement ref-in-want · 516e2b76
      Committed by Brandon Williams
      Currently, while performing packfile negotiation, clients are only
      allowed to specify their desired objects using object ids.  This leaves
      the exchange vulnerable to failure when an object becomes non-existent
      during negotiation, which may happen if, for example, the desired
      repository is served by multiple Git servers in a load-balancing
      arrangement and there is replication delay.
      
      In order to eliminate this vulnerability, implement the ref-in-want
      feature for the 'fetch' command in protocol version 2.  This feature
      enables the 'fetch' command to support requests in the form of ref names
      through a new "want-ref <ref>" parameter.  At the conclusion of
      negotiation, the server will send a list of all of the wanted references
      (as provided by "want-ref" lines) in addition to the generated packfile.
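
      As a rough illustration (the ref name and object id below are
      placeholders, and pkt-line framing is omitted), a protocol v2 fetch
      exchange using this feature might look like:

        # client request
        want-ref refs/heads/master
        done
        # server response
        wanted-refs
        <oid> refs/heads/master
        packfile
        ...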
      Signed-off-by: Brandon Williams <bmwill@google.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  2. 22 June 2018 · 1 commit
  3. 20 June 2018 · 2 commits
  4. 19 June 2018 · 4 commits
  5. 18 June 2018 · 1 commit
    • t3200: clarify description of --set-upstream test · cf317877
      Committed by Kaartic Sivaraam
      Support for the --set-upstream option was removed in 52668846
      (builtin/branch: stop supporting the "--set-upstream" option,
      2017-08-17). The change did not completely remove the option
      due to an issue noted in that commit's log message.
      
      So, a test was added to ensure that a command which uses the
      '--set-upstream' option fails instead of silently being treated as an
      abbreviation of the '--set-upstream-to' option by the option parser.
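
      For instance (the branch names below are only illustrative), the
      supported and the intentionally failing invocations look like:

        $ git branch --set-upstream-to=origin/master topic   # supported
        $ git branch --set-upstream origin/master topic      # intentionally fails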
      
      To avoid confusion, clarify that the option is disabled intentionally
      in the corresponding test description.
      
      The test is expected to stay around for as long as we intentionally
      fail on seeing the '--set-upstream' option, which in turn we expect to
      do for a period of time, after which we can be sure that existing
      users of '--set-upstream' are aware that the option is no
      longer supported.
      Signed-off-by: Kaartic Sivaraam <kaartic.sivaraam@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  6. 13 June 2018 · 3 commits
    • git-p4: auto-size the block · 3deed5e0
      Committed by Luke Diamand
      git-p4 originally would fetch changes in one query. On large repos this
      could fail because of the limits that Perforce imposes on the number of
      items returned and the number of queries in the database.
      
      To fix this, git-p4 learned to query changes in blocks of 512 changes.
      However, this can be very slow - if you have a few million changes,
      with each chunk taking about a second, it can take an hour or so.
      
      Although it's possible to tune this value manually with the
      "--changes-block-size" option, it's far from obvious to ordinary users
      that this is what needs doing.
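
      For reference, the manual workaround looks something like this (the
      depot path and block size are made up for illustration):

        $ git p4 clone --changes-block-size 1000 //depot/project@all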
      
      This change alters the block size dynamically by looking for the
      specific error message returned from the Perforce server and reducing
      the block size when that error is seen, either to the limit reported
      by the server or to half the current block size.
      
      That means we can start out with a very large block size, and then let
      it automatically drop down to a value that works without error, while
      still failing correctly if some other error occurs.
      Signed-off-by: Luke Diamand <luke@diamand.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • git-p4: better error reporting when p4 fails · 0ef67acd
      Committed by Luke Diamand
      Currently when p4 fails to run, git-p4 just crashes with an obscure
      error message.
      
      For example, if the P4 ticket has expired, you get:
      
        Error: Cannot locate perforce checkout of <path> in client view
      
      This change checks whether git-p4 can talk to the Perforce server when
      the first P4 operation is attempted, and tries to print a meaningful
      error message if it fails.
      Signed-off-by: Luke Diamand <luke@diamand.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • git-p4: add options --commit and --disable-rebase · f55b87c1
      Committed by Romain Merland
      In daily work with multiple local git branches, the usual way to
      submit only a specified commit was to cherry-pick the commit onto
      master and then run 'git p4 submit'.  It can be very annoying to
      switch between local branches and master, only to submit one commit.
      The proposed new way is to select directly the commit you want to
      submit.
      
      Add the --commit option to 'git-p4 submit' in order to submit
      only the specified commit(s) to p4.
      
      In daily work developing software with long compilation times, one
      may not want to rebase their local git tree, in order to avoid a long
      recompilation.
      
      Add the --disable-rebase option to 'git-p4 submit' in order to
      disable the rebase after submission.
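
      For example, to submit a single commit (the object name below is a
      placeholder) without rebasing afterwards:

        $ git p4 submit --commit 1a2b3c4d --disable-rebase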
      
      Thanks-to: Cedric Borgese <cedric.borgese@gmail.com>
      Reviewed-by: Luke Diamand <luke@diamand.org>
      Signed-off-by: Romain Merland <merlorom@yahoo.fr>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  7. 12 June 2018 · 2 commits
    • fsck: avoid looking at NULL blob->object · 47cc9131
      Committed by Jeff King
      Commit 159e7b08 (fsck: detect gitmodules files,
      2018-05-02) taught fsck to look at the content of
      .gitmodules files. If the object turns out not to be a blob
      at all, we just complain and punt on checking the content.
      And since this was such an obvious and trivial code path, I
      didn't even bother to add a test.
      
      Except it _does_ do one non-trivial thing, which is call the
      report() function, which wants us to pass a pointer to a
      "struct object". Which we don't have (we have only a "struct
      object_id"). So we erroneously pass a NULL object to
      report(), which gets dereferenced and causes a segfault.
      
      It seems like we could refactor report() to just take the
      object_id itself. But we pass the object pointer along to
      a callback function, and indeed this ends up in
      builtin/fsck.c's objreport() which does want to look at
      other parts of the object (like the type).
      
      So instead, let's just use lookup_unknown_object() to get
      the real "struct object", and pass that.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • t7415: don't bother creating commit for symlink test · 431acd2d
      Committed by Jeff King
      Early versions of the fsck .gitmodules detection code
      actually required a tree to be at the root of a commit for
      it to be checked for .gitmodules. What we ended up with in
      159e7b08 (fsck: detect gitmodules files, 2018-05-02),
      though, finds a .gitmodules file in _any_ tree (see that
      commit for more discussion).
      
      As a result, there's no need to create a commit in our
      tests. Let's drop it in the name of simplicity. And since
      that was the only thing referencing $tree, we can pull our
      tree creation out of a command substitution.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  8. 04 June 2018 · 1 commit
  9. 01 June 2018 · 3 commits
    • fetch: do not pass ref-prefixes for fetch by exact SHA1 · 6c301adb
      Committed by Jonathan Nieder
      When v2.18.0-rc0~10^2~1 (refspec: consolidate ref-prefix generation
      logic, 2018-05-16) factored out the ref-prefix generation code for
      reuse, it left out the 'if (!item->exact_sha1)' test in the original
      ref-prefix generation code. As a result, fetches by SHA-1 generate
      ref-prefixes as though the SHA-1 being fetched were an abbreviated ref
      name:
      
       $ GIT_TRACE_PACKET=1 bin-wrappers/git -c protocol.version=2 \
      	fetch origin 12039e00
      [...]
       packet:        fetch> ref-prefix 12039e00
       packet:        fetch> ref-prefix refs/12039e00
       packet:        fetch> ref-prefix refs/tags/12039e00
       packet:        fetch> ref-prefix refs/heads/12039e00
       packet:        fetch> ref-prefix refs/remotes/12039e00
       packet:        fetch> ref-prefix refs/remotes/12039e00/HEAD
       packet:        fetch> 0000
      
      If there is another ref name on the command line or the object being
      fetched is already available locally, then that's mostly harmless.
      But otherwise, we error out with
      
       fatal: no matching remote head
      
      since the server did not send any refs we are interested in.  Filter
      out the exact_sha1 refspecs to avoid this.
      
      This patch adds a test to check this behavior; that test happens to
      notice another behavior difference between protocol v0 and v2 in the
      process, so add a NEEDSWORK comment to clear it up.
      Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • index-pack: handle --strict checks of non-repo packs · 368b4e59
      Committed by Jeff King
      Commit 73c3f0f7 (index-pack: check .gitmodules files with
      --strict, 2018-05-04) added a call to add_packed_git(), with
      the intent that the newly-indexed objects would be available
      to the process when we run fsck_finish().  But that's not
      what add_packed_git() does. It only allocates the struct,
      and you must install_packed_git() on the result. So that
      call was effectively doing nothing (except leaking a
      struct).
      
      But wait, we passed all of the tests! Does that mean we
      don't need the call at all?
      
      For normal cases, no. When we run "index-pack --stdin"
      inside a repository, we write the new pack into the object
      directory. If fsck_finish() needs to access one of the new
      objects, then our initial lookup will fail to find it, but
      we'll follow up by running reprepare_packed_git() and
      looking again. That logic was meant to handle somebody else
      repacking simultaneously, but it ends up working for us
      here.
      
      But there is a case that does need this, that we were not
      testing. You can run "git index-pack foo.pack" on any file,
      even when it is not inside the object directory. Or you may
      not even be in a repository at all! This case fails without
      doing the proper install_packed_git() call.
      
      We can make this work by adding the install call.
      
      Note that we should be prepared to handle add_packed_git()
      failing. We can just silently ignore this case, though. If
      fsck_finish() later needs the objects and they're not
      available, it will complain itself. And if it doesn't
      (because we were able to resolve the whole fsck in the first
      pass), then it actually isn't an interesting error at all.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • prepare_commit_graft: treat non-repository as a noop · 14a9bd28
      Committed by Jeff King
      The parse_commit_buffer() function consults lookup_commit_graft()
      to see if we need to rewrite parents. The latter will look
      at $GIT_DIR/info/grafts. If you're outside of a repository,
      then this will trigger a BUG() as of b1ef400e (setup_git_env:
      avoid blind fall-back to ".git", 2016-10-20).
      
      It's probably uncommon to actually parse a commit outside of
      a repository, but you can see it in action with:
      
        cd /not/a/git/repo
        git index-pack --strict /some/file.pack
      
      This works fine without --strict, but the fsck checks will
      try to parse any commits, triggering the BUG(). We can fix
      that by teaching the graft code to behave as if there are no
      grafts when we aren't in a repository.
      
      Arguably index-pack (and fsck) are wrong to consider grafts
      at all. So another solution is to disable grafts entirely
      for those commands. But given that the graft feature is
      deprecated anyway, it's not worth even thinking through the
      ramifications that might have.
      
      There is one other corner case I considered here. What
      should:
      
        cd /not/a/git/repo
        export GIT_GRAFT_FILE=/file/with/grafts
        git index-pack --strict /some/file.pack
      
      do? We don't have a repository, but the user has pointed us
      directly at a graft file, which we could respect. I believe
      this case did work that way prior to b1ef400e. However,
      fixing it now would be pretty invasive. Back then we would
      just call into setup_git_env() even without a repository.
      But these days it actually takes a git_dir argument. So
      there would be a fair bit of refactoring of the setup code
      involved.
      
      Given the obscurity of this case, plus the fact that grafts
      are deprecated and probably shouldn't work under index-pack
      anyway, it's not worth pursuing further. This patch at least
      un-breaks the common case where you're _not_ using grafts,
      but we BUG() anyway trying to even find that out.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  10. 30 May 2018 · 2 commits
  11. 25 May 2018 · 2 commits
  12. 24 May 2018 · 1 commit
    • git-p4: add unshelve command · 123f6317
      Committed by Luke Diamand
      This can be used to "unshelve" a shelved P4 commit into
      a git commit.
      
      For example:
      
        $ git p4 unshelve 12345
      
      The resulting commit ends up in the branch:
         refs/remotes/p4/unshelved/12345
      
      If that branch already exists, it is renamed - for example,
      the above branch would be saved as p4/unshelved/12345.1.
      
      git-p4 checks that the shelved changelist is based on files
      which are at the same Perforce revision as the origin branch
      being used for the unshelve (HEAD by default). If they are not,
      it will refuse to unshelve. This is to ensure that the unshelved
      change does not contain other changes mixed-in.
      
      The reference branch can be changed manually with the "--origin"
      option.
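
      For example (the origin branch name below is illustrative):

        $ git p4 unshelve --origin p4/master 12345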
      
      The change adds a new Unshelve command class. This just runs the
      existing P4Sync code tweaked to handle a shelved changelist.
      Signed-off-by: Luke Diamand <luke@diamand.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  13. 23 May 2018 · 2 commits
  14. 22 May 2018 · 8 commits
    • fsck: complain when .gitmodules is a symlink · b7b1fca1
      Committed by Jeff King
      We've recently forbidden .gitmodules from being a symlink in
      verify_path(). Such a symlink is also an easy way to circumvent our
      fsck checks for .gitmodules content. So let's complain when we
      see it.
      Signed-off-by: Jeff King <peff@peff.net>
    • index-pack: check .gitmodules files with --strict · 73c3f0f7
      Committed by Jeff King
      Now that the internal fsck code has all of the plumbing we
      need, we can start checking incoming .gitmodules files.
      Naively, it seems like we would just need to add a call to
      fsck_finish() after we've processed all of the objects. And
      that would be enough to cover the initial test included
      here. But there are two extra bits:
      
        1. We currently don't bother calling fsck_object() at all
           for blobs, since it has traditionally been a noop. We'd
           actually catch these blobs in fsck_finish() at the end,
           but it's more efficient to check them when we already
           have the object loaded in memory.
      
        2. The second pass done by fsck_finish() needs to access
           the objects, but we're actually indexing the pack in
           this process. In theory we could give the fsck code a
           special callback for accessing the in-pack data, but
           it's actually quite tricky:
      
             a. We don't have an internal efficient index mapping
                oids to packfile offsets. We only generate it on
                the fly as part of writing out the .idx file.

             b. We'd still have to reconstruct deltas, which means
                we'd basically have to replicate all of the
                reading logic in packfile.c.
      
           Instead, let's avoid running fsck_finish() until after
           we've written out the .idx file, and then just add it
           to our internal packed_git list.
      
           This does mean that the objects are "in the repository"
           before we finish our fsck checks. But unpack-objects
           already exhibits this same behavior, and it's an
           acceptable tradeoff here for the same reason: the
           quarantine mechanism means that pushes will be
           fully protected.
      
      In addition to a basic push test in t7415, we add a sneaky
      pack that reverses the usual object order in the pack,
      requiring that index-pack access the tree and blob during
      the "finish" step.
      
      This already works for unpack-objects (since it will have
      written out loose objects), but we'll check it with this
      sneaky pack for good measure.
      Signed-off-by: Jeff King <peff@peff.net>
    • unpack-objects: call fsck_finish() after fscking objects · 6e328d6c
      Committed by Jeff King
      As with the previous commit, we must call fsck's "finish"
      function in order to catch any queued objects for
      .gitmodules checks.
      
      This second pass will be able to access any incoming
      objects, because we will have exploded them to loose objects
      by now.
      
      This isn't quite ideal, because it means that bad objects
      may have been written to the object database (and a
      subsequent operation could then reference them, even if the
      other side doesn't send the objects again). However, this is
      sufficient when used with receive.fsckObjects, since those
      loose objects will all be placed in a temporary quarantine
      area that will get wiped if we find any problems.
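
      As a reminder (this is existing configuration, not part of this
      change), that server-side protection is enabled with something like:

        $ git config receive.fsckObjects true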
      Signed-off-by: Jeff King <peff@peff.net>
    • fsck: call fsck_finish() after fscking objects · 1995b5e0
      Committed by Jeff King
      Now that the internal fsck code is capable of checking
      .gitmodules files, we just need to teach its callers to use
      the "finish" function to check any queued objects.
      
      With this, we can now catch the malicious case in t7415 with
      git-fsck.
      Signed-off-by: Jeff King <peff@peff.net>
    • is_{hfs,ntfs}_dotgitmodules: add tests · dc2d9ba3
      Committed by Johannes Schindelin
      This tests primarily for NTFS issues, but also adds one example of an
      HFS+ issue.
      
      Thanks go to Congyi Wu for coming up with the list of examples where
      NTFS would possibly equate the filename with `.gitmodules`.
      Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
      Signed-off-by: Jeff King <peff@peff.net>
    • submodule-config: verify submodule names as paths · 0383bbb9
      Committed by Jeff King
      Submodule "names" come from the untrusted .gitmodules file,
      but we blindly append them to $GIT_DIR/modules to create our
      on-disk repo paths. This means you can do bad things by
      putting "../" into the name (among other things).
      
      Let's sanity-check these names to avoid building a path that
      can be exploited. There are two main decisions:
      
        1. What should the allowed syntax be?
      
           It's tempting to reuse verify_path(), since submodule
           names typically come from in-repo paths. But there are
           two reasons not to:
      
             a. It's technically more strict than what we need, as
                we really care only about breaking out of the
                $GIT_DIR/modules/ hierarchy.  E.g., having a
                submodule named "foo/.git" isn't actually
                dangerous, and it's possible that somebody has
                manually given such a funny name.
      
             b. Since we'll eventually use this checking logic in
                fsck to prevent downstream repositories, it should
                be consistent across platforms. Because
                verify_path() relies on is_dir_sep(), it wouldn't
                block "foo\..\bar" on a non-Windows machine.
      
        2. Where should we enforce it? These days most of the
           .gitmodules reads go through submodule-config.c, so
           I've put it there in the reading step. That should
           cover all of the C code.
      
           We also construct the name for "git submodule add"
           inside the git-submodule.sh script. This is probably
           not a big deal for security since the name is coming
           from the user anyway, but it would be polite to remind
           them if the name they pick is invalid (and we need to
           expose the name-checker to the shell anyway for our
           test scripts).
      
           This patch issues a warning when reading .gitmodules
           and just ignores the related config entry completely.
           This will generally end up producing a sensible error,
           as it works the same as a .gitmodules file which is
           missing a submodule entry (so "submodule update" will
           barf, but "git clone --recurse-submodules" will print
           an error but not abort the clone).
      
           There is one minor oddity, which is that we print the
           warning once per malformed config key (since that's how
           the config subsystem gives us the entries). So in the
           new test, for example, the user would see three
           warnings. That's OK, since the intent is that this case
           should never come up outside of malicious repositories
           (and then it might even benefit the user to see the
           message multiple times).
      
      Credit for finding this vulnerability and the proof of
      concept from which the test script was adapted goes to
      Etienne Stalmans.
      Signed-off-by: Jeff King <peff@peff.net>
    • submodule: add --dissociate option to add/update commands · a0ef2934
      Committed by Casey Fitzpatrick
      Add the --dissociate option to the add and update commands; both are
      clone helper commands that already have the --reference option that
      --dissociate pairs with.
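
      For example (the paths and URL are illustrative), objects can be
      borrowed from a local mirror during the clone and the dependency on
      that mirror dropped afterwards:

        $ git submodule add --reference /srv/mirrors/sub.git --dissociate \
              https://example.com/sub.git sub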
      Signed-off-by: Casey Fitzpatrick <kcghost@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • submodule: add --progress option to add command · 6d33e1c2
      Committed by Casey Fitzpatrick
      The '--progress' option was introduced in 72c5f883 (clone: pass --progress
      decision to recursive submodules, 2016-09-22) to fix the progress reporting
      of the clone command. Also add the progress option to the 'submodule add'
      command. The update command already supports the progress flag, but it
      is not documented.
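
      For example (the URL and path are illustrative), progress output from
      the underlying clone can now be requested with:

        $ git submodule add --progress https://example.com/sub.git sub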
      Signed-off-by: Casey Fitzpatrick <kcghost@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  15. 21 May 2018 · 6 commits