1. 31 Oct 2013, 1 commit
  2. 25 Oct 2013, 1 commit
  3. 31 Aug 2013, 1 commit
  4. 29 Aug 2013, 1 commit
  5. 08 Aug 2013, 4 commits
    • fetch: work around "transport-take-over" hack · b26ed430
      Committed by Junio C Hamano
      A Git-aware "connect" transport allows the "transport_take_over" to
      redirect generic transport requests like fetch(), push_refs() and
      get_refs_list() to the native Git transport handling methods.  The
      take-over process replaces transport->data with fake data that
      these method implementations understand.
      
      While this hack works OK for a single request, it breaks when the
      transport needs to make more than one request.  The transport->data
      that used to hold the information necessary for the specific helper
      to work correctly is destroyed during the take-over process.
      
      One codepath where this matters is "git fetch" in auto-follow mode;
      when it does not get all the tags that ought to point at the history
      it got (which can be determined by looking at the peeled tags in the
      initial advertisement) from the primary transfer, it internally
      makes a second request to complete the fetch.  Because the
      "take-over" hack has already destroyed the data necessary to talk
      to the transport helper by the time this happens, this second
      request cannot ask the helper to open another connection to fetch
      these additional tags.
      
      Mark such a transport as "cannot_reuse", and use a separate
      transport to perform the backfill fetch in order to work around
      this breakage.
      
      Note that this problem does not manifest itself when running t5802,
      because our upload-pack gives you all the necessary auto-followed
      tags during the primary transfer.  You would need to step through
      "git fetch" in a debugger, stop immediately after the primary
      transfer finishes and writes these auto-followed tags, remove the
      tag references and repack/prune the repository to convince the
      "find-non-local-tags" procedure that the primary transfer failed to
      give us all the necessary tags, and then let it continue, in order
      to trigger the bug in the secondary transfer this patch fixes.
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
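
      The fix can be boiled down to a small, self-contained sketch; the
      struct and helpers below are simplified stand-ins, not git's real
      transport API, with only the "cannot_reuse" flag carrying the idea:

        #include <stdio.h>
        #include <stdlib.h>

        struct transport {
                const char *url;
                int cannot_reuse;   /* set once take-over destroyed helper state */
        };

        static struct transport *prepare_transport(const char *url)
        {
                struct transport *t = calloc(1, sizeof(*t));
                t->url = url;
                return t;
        }

        /* Reuse the primary transport for the backfill fetch only if it is
         * still usable; otherwise open a fresh connection to the remote. */
        static struct transport *transport_for_backfill(struct transport *primary)
        {
                if (!primary->cannot_reuse)
                        return primary;
                return prepare_transport(primary->url);
        }

        int main(void)
        {
                struct transport *primary = prepare_transport("https://example.com/repo.git");
                struct transport *backfill;

                primary->cannot_reuse = 1;      /* pretend the take-over happened */
                backfill = transport_for_backfill(primary);
                printf("backfill reuses primary transport? %s\n",
                       backfill == primary ? "yes" : "no");
                if (backfill != primary)
                        free(backfill);
                free(primary);
                return 0;
        }
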
    • fetch: refactor code that fetches leftover tags · 069d5032
      Committed by Junio C Hamano
      Usually the upload-pack process running on the other side will give
      us all the reachable tags we need during the primary object transfer
      in do_fetch().  If that does not happen (e.g. the other side may be
      running a third-party implementation of upload-pack), we will run
      another fetch to pick up leftover tags that we know point at the
      commits reachable from our updated tips.
      
      Separate out the code to run this second fetch into a helper
      function.
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • fetch: refactor code that prepares a transport · db5723c6
      Committed by Junio C Hamano
      Make a helper function prepare_transport() that returns a transport
      to talk to a given remote.
      
      The set_option() helper that used to always affect the file-scope
      global "gtransport" now takes a transport as its parameter.
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
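
      A hedged sketch of the shape of this refactoring: set_option() acts on
      the transport it is given instead of a file-scope global, and
      prepare_transport() hands back a configured transport.  The types and
      the "depth" option below are illustrative simplifications, not git's
      actual code:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        struct transport {
                const char *url;
                char depth[16];         /* simplified option storage */
        };

        /* Takes the transport as a parameter rather than touching a global. */
        static void set_option(struct transport *t, const char *name,
                               const char *value)
        {
                if (!strcmp(name, "depth"))
                        snprintf(t->depth, sizeof(t->depth), "%s", value);
        }

        /* Returns a transport ready to talk to the given remote. */
        static struct transport *prepare_transport(const char *url,
                                                   const char *depth)
        {
                struct transport *t = calloc(1, sizeof(*t));
                t->url = url;
                if (depth)
                        set_option(t, "depth", depth);
                return t;
        }

        int main(void)
        {
                struct transport *t = prepare_transport("https://example.com/repo.git", "1");
                printf("url=%s depth=%s\n", t->url, t->depth);
                free(t);
                return 0;
        }
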
    • fetch: rename file-scope global "transport" to "gtransport" · af234459
      Committed by Junio C Hamano
      Although many functions in this file take a "struct transport" as a
      parameter, "fetch_one()" assigns to the global singleton instance,
      which is a file-scope static, so that the parameterless signal
      handler unlock_pack() can access it.
      
      Rename the variable to gtransport to make sure these uses stand out.
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
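
      Why the file-scope global is kept at all, in a hedged sketch; the
      handler below only stands in for git's real cleanup, and none of this
      is git's actual code:

        #include <signal.h>
        #include <stdio.h>
        #include <unistd.h>

        struct transport { const char *url; };

        /* File-scope singleton; naming it gtransport makes every use of the
         * global stand out from functions taking a transport parameter. */
        static struct transport *gtransport;

        /* A signal handler receives no context pointer, so cleanup that
         * needs the transport (releasing the pack lock in real git) has to
         * reach for the file-scope gtransport. */
        static void unlock_pack_on_signal(int signo)
        {
                if (gtransport)
                        gtransport = NULL;  /* stand-in for "release resources" */
                _exit(128 + signo);
        }

        int main(void)
        {
                static struct transport t = { "https://example.com/repo.git" };

                gtransport = &t;
                signal(SIGINT, unlock_pack_on_signal);
                printf("fetching from %s\n", gtransport->url);
                return 0;
        }
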
  6. 06 Aug 2013, 1 commit
  7. 19 Jul 2013, 1 commit
    • fetch: make --prune configurable · 737c5a9c
      Committed by Michael Schubert
      Without "git fetch --prune", remote-tracking branches for a branch
      the other side already has removed will stay forever.  Some people
      want to always run "git fetch --prune".
      
      To accommodate users who want to prune either always or only when
      fetching from a particular remote, add two new configuration variables
      "fetch.prune" and "remote.<name>.prune":
      
       - "fetch.prune" allows to enable prune for all fetch operations.
      
       - "remote.<name>.prune" allows to change the behaviour per remote.
      
      The latter will naturally override the former, and the --[no-]prune
      option from the command line will override the configured default.
      
      Since --prune is a potentially destructive operation (Git doesn't
      keep reflogs for deleted references yet), we don't want to prune
      without the user's consent, so this configuration will not be on by
      default.
      Helped-by: Junio C Hamano <gitster@pobox.com>
      Signed-off-by: Michael Schubert <mschub@elegosoft.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
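
      The precedence described above, as a small illustrative sketch (the
      tri-valued integers, with -1 meaning "unset", are a simplification of
      however the configuration is actually represented):

        #include <stdio.h>

        /* Command line beats remote.<name>.prune, which beats fetch.prune;
         * with nothing set, pruning stays off. */
        static int prune_enabled(int cmdline, int remote_prune, int fetch_prune)
        {
                if (cmdline >= 0)
                        return cmdline;         /* --prune / --no-prune */
                if (remote_prune >= 0)
                        return remote_prune;    /* remote.<name>.prune */
                if (fetch_prune >= 0)
                        return fetch_prune;     /* fetch.prune */
                return 0;                       /* default: do not prune */
        }

        int main(void)
        {
                printf("%d\n", prune_enabled(-1,  1,  0));  /* 1: remote config wins */
                printf("%d\n", prune_enabled( 0,  1,  1));  /* 0: --no-prune wins */
                printf("%d\n", prune_enabled(-1, -1, -1));  /* 0: default */
                return 0;
        }
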
  8. 03 Jun 2013, 3 commits
  9. 29 May 2013, 1 commit
  10. 28 May 2013, 1 commit
  11. 13 May 2013, 2 commits
    • fetch: opportunistically update tracking refs · f2690487
      Committed by Jeff King
      When we run a regular "git fetch" without arguments, we
      update the tracking refs according to the configured
      refspec. However, when we run "git fetch origin master" (or
      "git pull origin master"), we do not look at the configured
      refspecs at all, and just update FETCH_HEAD.
      
      We miss an opportunity to update "refs/remotes/origin/master"
      (or whatever the user has configured). Some users find this
      confusing, because they would want to do further comparisons
      against the old state of the remote master, like:
      
        $ git pull origin master
        $ git log HEAD...origin/master
      
      In the current code, they are comparing against whatever
      commit happened to be in origin/master from the last time
      they did a complete "git fetch".  This patch will update a
      ref from the RHS of a configured refspec whenever we happen
      to be fetching its LHS. That makes the case above work.
      
      The downside is that any users who really care about whether
      and when their tracking branches are updated may be
      surprised.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
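
      A hedged sketch of the mechanism: when a fetched source ref matches
      the LHS of a configured "src/*:dst/*" refspec, derive the RHS
      tracking ref to update as well.  Real git handles multiple refspecs
      and more refspec forms than this single-wildcard illustration:

        #include <stdio.h>
        #include <string.h>

        /* Returns 1 and fills "out" with the tracking ref to update when
         * the fetched ref matches the refspec's LHS pattern. */
        static int map_refspec(const char *src_pat, const char *dst_pat,
                               const char *fetched, char *out, size_t outlen)
        {
                size_t plen = strlen(src_pat) - 1;      /* drop trailing '*' */

                if (src_pat[plen] != '*' || strncmp(fetched, src_pat, plen))
                        return 0;                       /* LHS does not match */
                snprintf(out, outlen, "%.*s%s",
                         (int)(strlen(dst_pat) - 1), dst_pat, fetched + plen);
                return 1;
        }

        int main(void)
        {
                char dst[256];

                if (map_refspec("refs/heads/*", "refs/remotes/origin/*",
                                "refs/heads/master", dst, sizeof(dst)))
                        printf("also update %s\n", dst);
                return 0;
        }
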
    • refactor "ref->merge" flag · 900f2814
      Committed by Jeff King
      Each "struct ref" has a boolean flag that is set by the
      fetch code to determine whether the ref should be marked as
      "not-for-merge" or not when we write it out to FETCH_HEAD.
      
      It would be useful to turn this boolean into a tri-state,
      with the third state meaning "do not bother writing it out
      to FETCH_HEAD at all". That would let us add extra refs to
      the set of refs to be stored (e.g., to store copies of
      things we fetched) without impacting FETCH_HEAD.
      
      This patch turns it into an enum that covers the tri-state
      case, and hopefully makes the code more explicit and easier
      to read.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
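
      The tri-state in sketch form; the enum values follow the wording of
      the message above and may not match git's exact identifiers:

        #include <stdio.h>

        enum fetch_head_status {
                FETCH_HEAD_MERGE,               /* write to FETCH_HEAD, for merge */
                FETCH_HEAD_NOT_FOR_MERGE,       /* write, marked not-for-merge */
                FETCH_HEAD_IGNORE               /* do not write to FETCH_HEAD at all */
        };

        struct ref {
                const char *name;
                enum fetch_head_status fetch_head_status;
        };

        int main(void)
        {
                struct ref r = { "refs/tags/v1.0", FETCH_HEAD_NOT_FOR_MERGE };

                if (r.fetch_head_status != FETCH_HEAD_IGNORE)
                        printf("%s goes to FETCH_HEAD (%s)\n", r.name,
                               r.fetch_head_status == FETCH_HEAD_MERGE
                               ? "for merge" : "not-for-merge");
                return 0;
        }
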
  12. 27 Jan 2013, 1 commit
    • fetch: run gc --auto after fetching · 131b8fcb
      Committed by Jeff King
      We generally try to run "gc --auto" after any commands that
      might introduce a large number of new objects. An obvious
      place to do so is after running "fetch", which may introduce
      new loose objects or packs (depending on the size of the
      fetch).
      
      While an active developer repository will probably
      eventually trigger a "gc --auto" on another action (e.g.,
      git-rebase), there are two good reasons why it is nicer to
      do it at fetch time:
      
        1. Read-only repositories which track an upstream (e.g., a
           continuous integration server which fetches and builds,
           but never makes new commits) will accrue loose objects
           and small packs, but never coalesce them into a more
           efficient larger pack.
      
        2. Fetching is often already perceived by the user to be
           slow, since they have to wait on the network. It's much
           more pleasant to include a potentially slow auto-gc as
           part of the already-long network fetch than in the
           middle of productive work with git-rebase or similar.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
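
      The ordering in a nutshell, as a hedged sketch; real git spawns the
      child through its run_command() machinery rather than system():

        #include <stdio.h>
        #include <stdlib.h>

        static int do_fetch(void)
        {
                printf("... fetching objects over the network ...\n");
                return 0;
        }

        int main(void)
        {
                int err = do_fetch();

                /* Kick off housekeeping only after the fetch succeeded;
                 * "gc --auto" is a no-op unless thresholds are exceeded. */
                if (!err)
                        err = system("git gc --auto --quiet");
                return err ? 1 : 0;
        }
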
  13. 12 Jan 2013, 1 commit
    • fetch: add --unshallow for turning shallow repo into complete one · 4dcb167f
      Committed by Nguyễn Thái Ngọc Duy
      The user can already do --depth=2147483647 (*) to restore a complete
      repo, but that is hard to remember.  Any other number larger than the
      longest commit chain in the repository would also do, but some
      guessing may be involved.  Make the easy-to-remember --unshallow an
      alias for --depth=2147483647.
      
      Make upload-pack recognize this special number as infinite depth. The
      effect is essentially the same as before, except that upload-pack is
      more efficient because it does not have to traverse to the bottom
      anymore.
      
      The chance of a user actually wanting a depth of exactly 2147483647
      commits, not infinite, on a repository with a history that long is
      probably too small to consider.  The client can learn to add or
      subtract one commit to avoid the special treatment when that
      actually happens.
      
      (*) This is the largest positive number a 32-bit signed integer can
          contain. JGit and older C Git store depth as "int" so both are OK
          with this number. Dulwich does not support shallow clone.
      Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
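
      The aliasing itself fits in a few lines; a hedged sketch with
      simplified option parsing (real git goes through parse-options):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define INFINITE_DEPTH 0x7fffffff   /* 2147483647, the 32-bit maximum */

        static int parse_depth_option(const char *arg)
        {
                if (!strcmp(arg, "--unshallow"))
                        return INFINITE_DEPTH;  /* memorable alias for the magic depth */
                if (!strncmp(arg, "--depth=", 8))
                        return atoi(arg + 8);
                return 0;                       /* no depth limit requested */
        }

        int main(void)
        {
                printf("%d\n", parse_depth_option("--unshallow"));  /* 2147483647 */
                printf("%d\n", parse_depth_option("--depth=2"));    /* 2 */
                return 0;
        }
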
  14. 15 Sep 2012, 1 commit
  15. 08 Sep 2012, 1 commit
  16. 03 Sep 2012, 2 commits
    • submodule: use argv_array instead of hand-building arrays · 50d89ad6
      Committed by Jens Lehmann
      fetch_populated_submodules() allocates the full argv array it uses to
      recurse into the submodules from the number of given options plus the
      six argv values it is going to add.  It then initializes it with the
      values that won't change during the iteration and copies the given
      options into it.  Inside the loop, the two argv values that differ for
      each submodule are replaced with the ones currently valid.
      
      However, this technique is brittle and error-prone (as the comment to
      explain the magic number 6 indicates), so let's replace it with an
      argv_array. Instead of replacing the argv values, push them to the
      argv_array just before the run_command() call (including the option
      separating them) and pop them from the argv_array right after that.
      Signed-off-by: Jens Lehmann <Jens.Lehmann@web.de>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • fetch: use argv_array instead of hand-building arrays · 85556d4e
      Committed by Jeff King
      Fetch invokes itself recursively when recursing into
      submodules or handling "fetch --multiple". In both cases, it
      builds the child's command line by pushing options onto a
      statically-sized array. In both cases, the array is
      currently just big enough to handle the largest possible
      case. However, this technique is brittle and error-prone, so
      let's replace it with a dynamic argv_array.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
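
      A minimal stand-in for the pattern both commits above adopt: a
      dynamically grown, NULL-terminated argument list with push and pop,
      so nobody has to size a static array by hand.  This is an
      illustration, not git's argv_array implementation:

        #include <stdio.h>
        #include <stdlib.h>

        struct argv_array {
                const char **argv;
                int argc, alloc;
        };

        static void argv_array_push(struct argv_array *a, const char *arg)
        {
                if (a->argc + 2 > a->alloc) {
                        a->alloc = a->alloc ? 2 * a->alloc : 8;
                        a->argv = realloc(a->argv, a->alloc * sizeof(*a->argv));
                }
                a->argv[a->argc++] = arg;
                a->argv[a->argc] = NULL;    /* keep the list NULL-terminated */
        }

        static void argv_array_pop(struct argv_array *a)
        {
                if (a->argc)
                        a->argv[--a->argc] = NULL;
        }

        int main(void)
        {
                struct argv_array args = { NULL, 0, 0 };
                const char *submodules[] = { "sub/a", "sub/b" };
                int i, j;

                argv_array_push(&args, "fetch");
                for (i = 0; i < 2; i++) {
                        /* per-submodule arguments: push, use, pop */
                        argv_array_push(&args, submodules[i]);
                        printf("would run: git");
                        for (j = 0; j < args.argc; j++)
                                printf(" %s", args.argv[j]);
                        printf("\n");
                        argv_array_pop(&args);
                }
                free(args.argv);
                return 0;
        }
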
  17. 28 Aug 2012, 1 commit
  18. 21 Aug 2012, 1 commit
  19. 25 Apr 2012, 1 commit
  20. 17 Apr 2012, 2 commits
  21. 15 Apr 2012, 1 commit
    • submodules: recursive fetch also checks new tags for submodule commits · a6801adc
      Committed by Jens Lehmann
      Since 88a21979 (fetch/pull: recurse into submodules when necessary) all
      fetched commits are examined to see whether they contain submodule
      changes (unless configuration or command line options inhibit that).
      If a newly recorded submodule commit is not present in the submodule,
      a fetch is run inside it to download that commit.
      
      Checking new refs was done in an else branch where it wasn't executed
      for tags.  This normally isn't a problem because tags are only fetched
      along with the branches they live on, so checking the new commits on
      the fetched branches for submodule commits also processes all tags.
      But when a specific tag is fetched (or the refspec contains
      refs/tags/), commits only reachable from tags won't be searched for
      submodule commits, which is a bug.
      
      Fix that by moving the code outside the if/else construct to handle new
      tags just like any other ref. The performance impact of adding tags that
      most of the time lie on a branch which is checked anyway for new submodule
      commits should be minimal, as since 6859de45 (fetch: avoid quadratic loop
      checking for updated submodules) all ref-tips are collected first and then
      fed to a single rev-list.
      Spotted-by: Jeff King <peff@peff.net>
      Signed-off-by: Jens Lehmann <Jens.Lehmann@web.de>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  22. 14 Feb 2012, 1 commit
  23. 12 Jan 2012, 1 commit
  24. 04 Jan 2012, 1 commit
    • write first for-merge ref to FETCH_HEAD first · 96890f4c
      Committed by Joey Hess
      The FETCH_HEAD refname is supposed to refer to the ref that was fetched
      and should be merged.  However, all fetched refs are written to
      .git/FETCH_HEAD in an arbitrary order, and resolve_ref_unsafe simply
      takes the first ref as the FETCH_HEAD, which is often the wrong one
      when other branches were also fetched.
      
      The solution is to write the for-merge ref(s) to FETCH_HEAD first.
      Then, unless --append is used, the FETCH_HEAD refname behaves as intended.
      If the user uses --append, they presumably are doing so in order to
      preserve the old FETCH_HEAD.
      
      While we are at it, update an old example in the read-tree
      documentation that implied that each entry in FETCH_HEAD has only the
      object name, which has not been true for quite a while.
      
      [jc: adjusted tests]
      Signed-off-by: Joey Hess <joey@kitenet.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
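
      One way to get the described ordering, sketched as a two-pass write
      so the for-merge entries land in FETCH_HEAD first; the format is
      abbreviated and the approach is only illustrative:

        #include <stdio.h>

        struct fetched_ref {
                const char *sha1;       /* abbreviated for the example */
                int for_merge;
                const char *what;
        };

        /* First pass writes the for-merge entries, second pass the rest,
         * so resolving FETCH_HEAD picks the ref that should be merged. */
        static void write_fetch_head(const struct fetched_ref *refs, int n,
                                     FILE *out)
        {
                int pass, i;

                for (pass = 0; pass < 2; pass++)
                        for (i = 0; i < n; i++) {
                                if (refs[i].for_merge != (pass == 0))
                                        continue;
                                fprintf(out, "%s\t%s\t%s\n", refs[i].sha1,
                                        refs[i].for_merge ? "" : "not-for-merge",
                                        refs[i].what);
                        }
        }

        int main(void)
        {
                const struct fetched_ref refs[] = {
                        { "1111111", 0, "branch 'topic' of example.com/repo" },
                        { "2222222", 1, "branch 'master' of example.com/repo" },
                };

                write_fetch_head(refs, 2, stdout);
                return 0;
        }
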
  25. 10 Dec 2011, 1 commit
  26. 05 Nov 2011, 1 commit
    • fetch: do not store peeled tag object names in FETCH_HEAD · 7a2b128d
      Committed by Linus Torvalds
      We do not want to record tags as parents of a merge when the user does
      "git pull $there tag v1.0" to merge a tagged commit, but that is not a
      good enough excuse to peel the tag down to a commit when storing it in
      FETCH_HEAD.  The caller of the underlying "git fetch $there tag v1.0"
      may have other uses of the information contained in the v1.0 tag in
      mind.
      
      [jc: the test adjustment is to update for the new expectation]
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  27. 16 Oct 2011, 2 commits
  28. 08 Oct 2011, 2 commits
  29. 10 Sep 2011, 2 commits