1. 13 May 2015 (5 commits)
  2. 21 March 2015 (2 commits)
      refs.c: drop curate_packed_refs · ea56c4e0
      Committed by Jeff King
      When we delete a ref, we have to rewrite the entire
      packed-refs file. We take this opportunity to "curate" the
      packed-refs file and drop any entries that are crufty or
      broken.
      
      Dropping broken entries (e.g., with bogus names, or ones
      that point to missing objects) is actively a bad idea, as it
      means that we lose any notion that the data was there in the
      first place. Aside from the general hackiness that we might
      lose any information about ref "foo" while deleting an
      unrelated ref "bar", this may seriously hamper any attempts
      by the user at recovering from the corruption in "foo".
      
      They will lose the sha1 and name of "foo"; the exact pointer
      may still be useful even if they recover missing objects
      from a different copy of the repository. But worse, once the
      ref is gone, there is no trace of the corruption. A
      follow-up "git prune" may delete objects, even though it
      would otherwise bail when seeing corruption.
      
      We could just drop the "broken" bits from
      curate_packed_refs, and continue to drop the "crufty" bits:
      refs whose loose counterpart exists in the filesystem. This
      is not wrong to do, and it does have the advantage that we
      may write out a slightly smaller packed-refs file. But it
      has two disadvantages:
      
        1. It is a potential source of races or mistakes with
           respect to these refs that are otherwise unrelated to
           the operation. To my knowledge, there aren't any active
           problems in this area, but it seems like an unnecessary
           risk.
      
        2. We have to spend time looking up the matching loose
           refs for every item in the packed-refs file. If you
           have a large number of packed refs that do not change,
           that outweighs the benefit from writing out a smaller
           packed-refs file (it doesn't get smaller, and you do a
           bunch of directory traversal to find that out).
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      refs: introduce a "ref paranoia" flag · 49672f26
      Committed by Jeff King
      Most operations that iterate over refs are happy to ignore
      broken cruft. However, some operations should be performed
      with knowledge of these broken refs, because it is better
      for the operation to choke on a missing object than it is to
      silently pretend that the ref did not exist (e.g., if we are
      computing the set of reachable tips in order to prune
      objects).
      
      These processes could just call for_each_rawref, except that
      ref iteration is often hidden behind other interfaces. For
      instance, for a destructive "repack -ad", we would have to
      inform "pack-objects" that we are destructive, and then it
      would in turn have to tell the revision code that our
      "--all" should include broken refs.
      
      It's much simpler to just set a global for "dangerous"
      operations that includes broken refs in all iterations.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
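A minimal sketch of the mechanism (a hypothetical, simplified model: only the names `ref_paranoia` and `GIT_REF_PARANOIA` come from the commit; the iteration machinery below is a stand-in for git's real one):

```c
#include <stdlib.h>

/* stand-in ref: a name plus a "broken" bit (bogus name, or
 * pointing at a missing object) */
struct ref_entry {
	const char *name;
	int broken;
};

typedef int (*each_ref_fn)(const struct ref_entry *ref, void *cb_data);

static int ref_paranoia = -1;	/* -1 = unset; 0 = hide broken; 1 = raw */

static int for_each_ref(const struct ref_entry *refs, int nr,
			each_ref_fn fn, void *cb_data)
{
	int i, ret;

	if (ref_paranoia < 0) {
		/* in git: ref_paranoia = git_env_bool("GIT_REF_PARANOIA", 0) */
		const char *v = getenv("GIT_REF_PARANOIA");
		ref_paranoia = v ? atoi(v) : 0;
	}
	for (i = 0; i < nr; i++) {
		if (refs[i].broken && !ref_paranoia)
			continue;	/* ordinary callers never see broken cruft */
		if ((ret = fn(&refs[i], cb_data)))
			return ret;
	}
	return 0;
}

/* demo callback: count the refs the iteration shows us */
static int count_ref(const struct ref_entry *ref, void *cb_data)
{
	(void)ref;
	(*(int *)cb_data)++;
	return 0;
}
```

A destructive caller then only needs to flip the one global before iterating, instead of threading a "be raw" bit through pack-objects and the revision machinery.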
  3. 06 March 2015 (6 commits)
  4. 18 February 2015 (8 commits)
  5. 13 February 2015 (2 commits)
  6. 14 January 2015 (1 commit)
  7. 30 December 2014 (1 commit)
  8. 23 December 2014 (5 commits)
  9. 13 December 2014 (1 commit)
  10. 11 December 2014 (3 commits)
      read_packed_refs: use skip_prefix instead of static array · ea417833
      Committed by Jeff King
      We want to recognize the packed-refs header and skip to the
      "traits" part of the line. We currently do it with strncmp
      against a static const array, using sizeof() to get the
      prefix length. However, it's a bit simpler to just use
      skip_prefix, which expresses the intention more directly,
      and without having to remember to subtract the
      NUL-terminator from each sizeof() result.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
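The refactor can be sketched like this. `skip_prefix()` below is a simplified version of git's helper, and the header string is the one packed-refs files actually carry:

```c
#include <string.h>

/* simplified version of git's skip_prefix(): if str starts with
 * prefix, point *out just past it and return 1; else return 0 */
static int skip_prefix(const char *str, const char *prefix, const char **out)
{
	size_t len = strlen(prefix);

	if (strncmp(str, prefix, len))
		return 0;
	*out = str + len;
	return 1;
}

/* before: a static const header[] plus
 *           strncmp(line, header, sizeof(header) - 1)
 *         and a manual pointer bump past the prefix;
 * after:  the intent reads directly off the call site */
static int parse_header(const char *line, const char **traits)
{
	return skip_prefix(line, "# pack-refs with:", traits);
}
```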
      read_packed_refs: pass strbuf to parse_ref_line · 6a49870a
      Committed by Jeff King
      Now that we have a strbuf in read_packed_refs, we can pass
      it straight to the line parser, which saves us an extra
      strlen.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
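A minimal sketch of the interface change (hypothetical types; `struct line` stands in for git's `struct strbuf`, which tracks its own length):

```c
#include <stddef.h>

/* stand-in for git's struct strbuf: buffer plus cached length */
struct line {
	const char *buf;
	size_t len;
};

/* before: the parser took a bare char * and had to strlen() it;
 * now the length travels with the buffer, saving one scan per line */
static int looks_like_packed_ref(const struct line *line)
{
	/* minimal shape check: "<40-hex sha1> <refname>" */
	return line->len > 41 && line->buf[40] == ' ';
}
```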
      read_packed_refs: use a strbuf for reading lines · 10c497aa
      Committed by Jeff King
      Current code uses a fixed PATH_MAX-sized buffer for reading
      packed-refs lines. This is a reasonable guess, in the sense
      that git generally cannot work with refs larger than
      PATH_MAX.  However, there are a few cases where it is not
      great:
      
        1. Some systems may have a low value of PATH_MAX, but can
           actually handle larger paths in practice. Fixing this
           code path probably isn't enough to make them work
           completely with long refs, but it is a step in the
           right direction.
      
        2. We use fgets, which will happily give us half a line on
           the first read, and then the rest of the line on the
           second. This is probably OK in practice, because our
           refline parser is careful enough to look for the
           trailing newline on the first line. The second line may
           look like a peeled line to us, but since "^" is illegal
           in refnames, it is not likely to come up.
      
           Still, it does not hurt to be more careful.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
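git's fix moves this loop to its strbuf API (strbuf_getwholeline). As a self-contained analogue, the sketch below uses POSIX getline(), which gives the same two properties: no fixed PATH_MAX ceiling, and never half a line:

```c
#define _POSIX_C_SOURCE 200809L	/* for getline() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

/*
 * Analogue of the fix (not git's code): the buffer grows to fit,
 * so a long refline is never silently split in two the way a
 * fgets() into char buf[PATH_MAX] can split it.
 */
static char *read_whole_line(FILE *fp)
{
	char *line = NULL;
	size_t alloc = 0;
	ssize_t len = getline(&line, &alloc, fp);

	if (len < 0) {		/* EOF or read error */
		free(line);
		return NULL;
	}
	if (len && line[len - 1] == '\n')
		line[len - 1] = '\0';	/* hand back the line minus newline */
	return line;			/* caller frees */
}
```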
  11. 06 December 2014 (2 commits)
      for_each_reflog_ent_reverse: turn leftover check into assertion · 69216bf7
      Committed by Jeff King
      Our loop should always process all lines, even if we hit the
      beginning of the file. We have a conditional after the loop
      ends to double-check that there is nothing left and to
      process it. But this should never happen, and is a sign of a
      logic bug in the loop. Let's turn it into a BUG assertion.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
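The pattern looks roughly like this (a hypothetical simplification; at the time git spelled the assertion `die("BUG: ...")`, the dedicated BUG() macro arrived later):

```c
#include <stdio.h>
#include <stdlib.h>

/* simplified stand-in for git's die() */
static void die(const char *msg)
{
	fprintf(stderr, "fatal: %s\n", msg);
	exit(128);
}

/*
 * After the loop has walked every chunk of the file, its own logic
 * guarantees nothing is left unparsed.  So instead of quietly
 * "handling" leftover bytes, assert that there are none: any
 * remainder means the loop itself is buggy, and processing it
 * would only hide that.
 */
static void check_nothing_left(size_t leftover_len)
{
	if (leftover_len)
		die("BUG: reverse reflog parser left bytes unprocessed");
}
```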
      for_each_reflog_ent_reverse: fix newlines on block boundaries · e5e73ff2
      Committed by Jeff King
      When we read a reflog file in reverse, we read whole chunks
      of BUFSIZ bytes, then loop over the buffer, parsing any
      lines we find. We find the beginning of each line by looking
      for the newline from the previous line. If we don't find
      one, we know that we are either at the beginning of
      the file, or that we have to read another block.
      
      In the latter case, we stuff away what we have into a
      strbuf, read another block, and continue our parse. But we
      missed one case here. If we did find a newline, and it is at
      the beginning of the block, we must also stuff that newline
      into the strbuf, as it belongs to the block we are about to
      read.
      
      The minimal fix here would be to add this special case to
      the conditional that checks whether we found a newline.
      But we can make the flow a little clearer by rearranging a
      bit: we first handle lines that we are going to show, and
      then at the end of each loop, stuff away any leftovers if
      necessary. That lets us fold this special-case in with the
      more common "we ended in the middle of a line" case.
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
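The rearranged flow can be modeled in miniature. The function below is a hypothetical, self-contained sketch (not git's code) that walks a buffer's lines in reverse while simulating block-sized reads from the end; any bytes belonging to a line that starts in an earlier block, including the newline-at-block-boundary case, are carried in a `leftover` buffer:

```c
#include <stdlib.h>
#include <string.h>

typedef void (*line_fn)(const char *line, size_t len, void *cb_data);

static void for_each_line_reverse(const char *data, size_t len, size_t block,
				  line_fn fn, void *cb_data)
{
	char *leftover = NULL;	/* partial line carried across blocks */
	size_t leftover_len = 0;
	size_t pos = len;

	if (pos && data[pos - 1] == '\n')
		pos--;		/* the final newline ends the last line */

	while (pos > 0) {
		size_t start = pos > block ? pos - block : 0;
		const char *buf = data + start;
		size_t end = pos - start;	/* one past unconsumed bytes */
		size_t i;

		for (i = end; i > 0; i--) {
			if (buf[i - 1] != '\n')
				continue;
			/* buf[i..end-1] plus leftover is one whole line */
			size_t here = end - i;
			char *line = malloc(here + leftover_len + 1);
			memcpy(line, buf + i, here);
			if (leftover_len)
				memcpy(line + here, leftover, leftover_len);
			line[here + leftover_len] = '\0';
			fn(line, here + leftover_len, cb_data);
			free(line);
			free(leftover);
			leftover = NULL;
			leftover_len = 0;
			end = i - 1;	/* the newline is consumed too */
		}

		if (end) {	/* stuff away the partial line at block start */
			char *grown = malloc(end + leftover_len);
			memcpy(grown, buf, end);
			if (leftover_len)
				memcpy(grown + end, leftover, leftover_len);
			free(leftover);
			leftover = grown;
			leftover_len += end;
		}
		pos = start;
	}

	if (leftover_len) {	/* first line of the file */
		leftover = realloc(leftover, leftover_len + 1);
		leftover[leftover_len] = '\0';
		fn(leftover, leftover_len, cb_data);
	}
	free(leftover);
}

/* demo callback: join the emitted lines with '|' */
static void join_lines(const char *line, size_t len, void *cb_data)
{
	(void)len;
	strcat((char *)cb_data, line);
	strcat((char *)cb_data, "|");
}
```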
  12. 05 December 2014 (2 commits)
  13. 26 November 2014 (1 commit)
  14. 21 November 2014 (1 commit)
      lock_ref_sha1_basic: do not die on locking errors · 06839515
      Committed by Ronnie Sahlberg
      lock_ref_sha1_basic is inconsistent about when it calls
      die() and when it returns NULL to signal an error. This is
      annoying to any callers that want to recover from a locking
      error.
      
      This seems to be mostly historical accident. It was added in
      4bd18c43 (Improve abstraction of ref lock/write.,
      2006-05-17), which returned an error in all cases except
      calling safe_create_leading_directories, in which case it
      died.  Later, 40aaae88 (Better error message when we are
      unable to lock the index file, 2006-08-12) asked
      hold_lock_file_for_update to die for us, leaving the
      resolve_ref code-path the only one which returned NULL.
      
      We tried to correct that in 5cc3cef9 (lock_ref_sha1(): do not
      sometimes error() and sometimes die()., 2006-09-30),
      by converting all of the die() calls into returns. But we
      missed the "die" flag passed to the lock code, leaving us
      inconsistent. This state persisted until e5c223e9
      (lock_ref_sha1_basic(): if locking fails with ENOENT, retry,
      2014-01-18). Because of its retry scheme, it does not ask
      the lock code to die, but instead manually dies with
      unable_to_lock_die().
      
      We can make this consistent with the other return paths by
      converting this to use unable_to_lock_message(), and
      returning NULL. This is safe to do because all callers
      already needed to check the return value of the function,
      since it could fail (and return NULL) for other reasons.
      
      [jk: Added excessive history explanation]
      Signed-off-by: Ronnie Sahlberg <sahlberg@google.com>
      Signed-off-by: Jeff King <peff@peff.net>
      Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
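The resulting error-handling shape can be sketched as follows (hypothetical simplified signatures; git's real code formats the message via unable_to_lock_message() and its lock struct is richer):

```c
#include <stdio.h>
#include <string.h>

struct ref_lock {
	const char *ref_name;
};

/*
 * The consistent error path: on failure, format a message for the
 * caller and return NULL -- never die().  Callers already had to
 * check for NULL, so recovery (retry, cleanup, report) now works
 * for every failure mode, not just some of them.
 */
static struct ref_lock *lock_ref(const char *refname, int simulate_failure,
				 char *err, size_t errlen)
{
	static struct ref_lock lock;

	if (simulate_failure) {
		/* before: unable_to_lock_die() killed the whole process */
		snprintf(err, errlen, "unable to create '%s.lock': %s",
			 refname, "simulated failure");
		return NULL;
	}
	lock.ref_name = refname;
	return &lock;
}
```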