1. 12 April 2007 (1 commit)
    • simple random data generator for tests · 2dca1af4
      Committed by Nicolas Pitre
      Reliance on /dev/urandom produces test vectors that are, well, random.
      This can cause problems that are impossible to track down when the data
      differs from one test invocation to another.
      
      The goal is not to have random test data, but rather a convenient way
      to create sets of large files with non-compressible and non-deltifiable
      data in a reproducible way (see the sketch after this entry).
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      2dca1af4
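      A minimal sketch of the idea in Python (this is not git's actual
      generator; the LCG constants and the seed-folding scheme here are
      illustrative assumptions): a seeded generator whose output is fully
      determined by its inputs, so the test data is identical on every run.

          def genrandom(seed: str, count: int) -> bytes:
              """Return `count` reproducible pseudo-random bytes for `seed`."""
              state = 0
              for ch in seed.encode():              # fold the seed into the state
                  state = (state * 11 + ch) & 0xFFFFFFFF
              out = bytearray()
              for _ in range(count):
                  # classic 32-bit LCG step; any fixed constants give reproducibility
                  state = (state * 1103515245 + 12345) & 0xFFFFFFFF
                  out.append((state >> 16) & 0xFF)  # take middle bits of the word
              return bytes(out)

          # Same seed and length always produce identical "random" data:
          assert genrandom("seed0", 16) == genrandom("seed0", 16)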
  2. 11 April 2007 (12 commits)
    • validate reused pack data with CRC when possible · 91ecbeca
      Committed by Nicolas Pitre
      This replaces the inflate validation with a CRC validation when reusing
      data from a pack that uses index version 2.  That makes repacking much
      safer against corruption, and it should be a bit faster too (see the
      sketch after this entry).
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      91ecbeca
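      A minimal sketch of the check, assuming `raw_entry` spans an object's
      full on-disk bytes (header plus deflated payload), which is what the
      version-2 index's CRC covers; the function name is hypothetical:

          import zlib

          def crc_matches(raw_entry: bytes, recorded_crc: int) -> bool:
              """Validate a still-deflated pack entry against the CRC32 stored
              in a version-2 pack index before copying it into a new pack.
              Unlike test-inflating, this also covers the object header and
              the delta base reference, and needs no decompression at all."""
              return (zlib.crc32(raw_entry) & 0xFFFFFFFF) == recorded_crc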
    • allow forcing index v2 and 64-bit offset threshold · 4ba7d711
      Committed by Nicolas Pitre
      This is necessary for testing the new capabilities in some automated
      way without having an actual 4GB+ pack.
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      4ba7d711
    • pack-redundant.c: learn about index v2 · 8c681e07
      Committed by Nicolas Pitre
      Initially the conversion was made using nth_packed_object_sha1(), which
      made this file completely index version agnostic.  Unfortunately the
      overhead was quite significant, so I went back to raw index walking,
      but with selectable base and step values, which brought performance
      back to roughly the original level (see the sketch after this entry).
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      8c681e07
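      A sketch of what "selectable base and step" buys, assuming the v1
      layout (fan-out followed by 24-byte offset+SHA-1 records) and the v2
      layout described in commit c553ca25 below; the function name is
      hypothetical:

          V2_SIGNATURE = b"\xfftOc"  # '\377tOc', the index v2 magic

          def sha1_walk_params(index_data: bytes):
              """Return (base, step) such that the i-th object name occupies
              index_data[base + i*step : base + i*step + 20]."""
              if index_data[:4] == V2_SIGNATURE:
                  # v2: 8-byte header, 256*4 fan-out, then packed 20-byte SHA-1s
                  return 8 + 256 * 4, 20
              # v1: 256*4 fan-out, then 24-byte records (4-byte offset + SHA-1)
              return 256 * 4 + 4, 24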
    • show-index.c: learn about index v2 · 32637cdf
      Committed by Nicolas Pitre
      When index v2 is encountered, the CRC32 of each object is also displayed
      in parentheses at the end of the line.
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      32637cdf
    • sha1_file.c: learn about index version 2 · 74e34e1f
      Committed by Nicolas Pitre
      With this patch, packs larger than 4GB are usable, even on a 32-bit machine
      (at least on Linux).  If off_t is not large enough to deal with a large
      pack, then die() is called instead of attempting to use the pack and
      producing garbage (see the lookup sketch after this entry).
      
      This was tested with an 8GB pack specially created for the occasion on
      a 32-bit machine.
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      74e34e1f
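      A sketch of the offset lookup this requires, assuming the index v2
      layout described in commit c553ca25 below (the function name is
      hypothetical, and the final range check stands in for the off_t die()):

          import struct

          def object_offset_v2(idx: bytes, nr_objects: int, n: int) -> int:
              """Pack offset of the n-th object in a version-2 pack index."""
              # 8-byte header, fan-out, SHA-1 table and CRC table precede
              # the table of 4-byte offsets.
              off32 = 8 + 256*4 + nr_objects*20 + nr_objects*4
              (off,) = struct.unpack_from(">I", idx, off32 + n*4)
              if not (off & 0x80000000):
                  return off              # fits in 31 bits: use it directly
              # top bit set: low 31 bits index the table of 8-byte offsets
              off64 = off32 + nr_objects*4
              (off,) = struct.unpack_from(">Q", idx, off64 + (off & 0x7FFFFFFF)*8)
              if off >= 2**63:            # analogue of "off_t too small: die()"
                  raise OverflowError("pack too large for this platform")
              return off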
    • index-pack: learn about pack index version 2 · d1a46a9e
      Committed by Nicolas Pitre
      Like the previous patch, but for index-pack.
      
      [ There is quite a bit of code duplication between pack-objects and
        index-pack for generating a pack index (and fast-import as well, I
        suppose).  This should be reworked into a common function eventually,
        but not now. ]
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      d1a46a9e
    • pack-objects: learn about pack index version 2 · c553ca25
      Committed by Nicolas Pitre
      Pack index version 2 goes as follows:
      
       - 8 bytes of header with signature and version.
      
       - 256 entries of 4-byte first-level fan-out table.
      
       - Table of sorted 20-byte SHA1 records for each object in pack.
      
       - Table of 4-byte CRC32 entries for raw pack object data.
      
       - Table of 4-byte offset entries for objects in the pack.  If the
         offset is representable in 31 bits or less it is stored directly;
         otherwise the entry has its top bit set and indexes into the next
         table.
      
       - Table of 8-byte offset entries, indexed from the previous table,
         for offsets of 32 bits or more (optional).
      
       - 20-byte SHA1 checksum of sorted object names.
      
       - 20-byte SHA1 checksum of the above.
      
      The object SHA1 table is all contiguous, so a future pack format that
      contains this table directly won't require big changes to the code.  It
      is also tighter, for slightly better cache locality when looking up
      entries.
      
      Support for large packs exceeding 31 bits in size doesn't bloat the
      index for packs within that range that need no 64-bit offsets.  And
      because newer objects, which are likely to be the most frequently used,
      are located at the beginning of the pack, they won't pay for the 64-bit
      offset lookup at run time either, even if the pack is large.
      
      Right now a version 2 index is created only when the biggest offset in
      a pack reaches 31 bits.  It might be a good idea to eventually always
      use index version 2, to benefit from the CRC32 it contains when reusing
      pack data while repacking.  (A parser sketch follows this entry.)
      
      [jc: with the "oops" fix to keep track of the last offset correctly]
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      c553ca25
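      A minimal parser sketch for the layout above (the function name is
      hypothetical; only the fixed-size tables are split out, and the
      optional 8-byte offset table is omitted):

          import hashlib
          import struct

          def parse_pack_index_v2(idx: bytes):
              """Split a version-2 pack index into its tables and verify the
              trailing SHA-1 checksum over everything that precedes it."""
              assert idx[:4] == b"\xfftOc"                 # v2 signature
              assert struct.unpack_from(">I", idx, 4)[0] == 2
              fanout = struct.unpack_from(">256I", idx, 8)
              nr = fanout[255]             # last fan-out entry = object count
              pos = 8 + 256 * 4
              sha1s = [idx[pos + i*20 : pos + (i+1)*20] for i in range(nr)]
              pos += nr * 20
              crcs = struct.unpack_from(">%dI" % nr, idx, pos)
              pos += nr * 4
              offsets32 = struct.unpack_from(">%dI" % nr, idx, pos)
              # ... the optional 8-byte offset table would follow here ...
              assert hashlib.sha1(idx[:-20]).digest() == idx[-20:]
              return fanout, sha1s, crcs, offsets32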
    • compute object CRC32 with index-pack · ee5743ce
      Committed by Nicolas Pitre
      Same as the previous patch, but for index-pack.
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      ee5743ce
    • compute a CRC32 for each object as stored in a pack · 78d1e84f
      Committed by Nicolas Pitre
      The most important optimization for performance when repacking is the
      ability to reuse data from a previous pack as is and bypass any delta
      or even SHA1 computation by simply copying the raw data from one pack
      to another directly.
      
      The problem with this is that any data corruption within a copied object
      would go unnoticed and the new (repacked) pack would be self-consistent
      with its own checksum despite containing a corrupted object.  This is a
      real issue that has already happened at least once in the past.
      
      In an attempt to prevent this, we validate the copied data by inflating
      it and making sure no error is signaled by zlib.  But this is still not
      perfect: a significant portion of a pack's content consists of object
      headers and references to delta base objects, which are not deflated
      and therefore not validated when repacking, so pack data reuse is still
      not as safe as it could be.
      
      Of course a full SHA1 validation could be performed, but that implies
      full data inflating and delta replaying, which is extremely costly; that
      cost is exactly what the data reuse optimization was designed to avoid
      in the first place.
      
      So the best solution is simply to store a CRC32 of the raw pack data
      for each object in the pack index.  This way any object in a pack can
      be validated before being copied as is into another pack, including its
      header and any other non-deflated data.
      
      Why CRC32 instead of a faster checksum like Adler32?  Quoting Wikipedia:
      
         Jonathan Stone discovered in 2001 that Adler-32 has a weakness for very
         short messages. He wrote "Briefly, the problem is that, for very short
         packets, Adler32 is guaranteed to give poor coverage of the available
         bits. Don't take my word for it, ask Mark Adler. :-)" The problem is
         that sum A does not wrap for short messages. The maximum value of A for
         a 128-byte message is 32640, which is below the value 65521 used by the
         modulo operation. An extended explanation can be found in RFC 3309,
         which mandates the use of CRC32 instead of Adler-32 for SCTP, the
         Stream Control Transmission Protocol.
      
      In the context of a git pack, we have lots of small objects, especially
      deltas, which are likely to be quite small and in a size range for which
      Adler32 is deemed not to be sufficient (see the demonstration after
      this entry).  Another advantage of CRC32 is the possibility of
      recovering from certain types of small corruption, such as single-bit
      errors, which are the most probable kind.
      
      What this patch does is compute the CRC32 of each object written to a
      pack within pack-objects.  It is not written to the index yet, and it is
      obviously not validated when reusing pack data yet either.
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      78d1e84f
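      The quoted weakness is easy to demonstrate with Python's zlib; the
      numbers follow directly from Adler-32's definition:

          import zlib

          # For any 128-byte message, Adler-32's sum "A" (the low 16 bits of
          # the checksum) is at most 1 + 128*255 = 32641, far below the 65521
          # modulus, so short inputs never exercise that half's full range.
          worst = zlib.adler32(b"\xff" * 128) & 0xFFFF
          assert worst == 32641 and worst < 65521

          # CRC-32 has no such size-dependent blind spot, which is why the
          # pack index records a CRC32 of each object's raw bytes instead:
          crc = zlib.crc32(b"example raw pack entry") & 0xFFFFFFFF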
    • add overflow tests on pack offset variables · d7dd0223
      Committed by Nicolas Pitre
      Change a few size and offset variables to more appropriate types, then
      add overflow tests on those offsets.  This prevents bad data from being
      generated or processed if off_t happens not to be large enough to
      handle some big packs.
      
      Better safe than sorry.
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      d7dd0223
    • make overflow test on delta base offset work regardless of variable size · 8723f216
      Committed by Nicolas Pitre
      This patch introduces the MSB() macro to obtain the desired number of
      most significant bits from a given variable independently of the
      variable's type.
      
      It is then used to better implement the overflow test on the
      OBJ_OFS_DELTA base offset variable, with the property of always working
      correctly regardless of the type/size of that variable (see the sketch
      after this entry).
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      8723f216
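      A Python analogue of the guarded decoding (width_bits stands in for the
      C variable's size; the loop mirrors git's OBJ_OFS_DELTA varint format,
      but the function itself is illustrative):

          def decode_ofs_delta_base(data: bytes, width_bits: int = 32) -> int:
              """Decode an OBJ_OFS_DELTA base offset, guarding each 7-bit
              shift against overflowing a width_bits-wide variable; the
              check plays the role of C's MSB(offset, 7)."""
              c = data[0]
              offset = c & 0x7F
              i = 1
              while c & 0x80:
                  offset += 1
                  # If any of the top 7 bits are already set, "<< 7" would
                  # overflow.  (The == 0 case guards the C increment wrap.)
                  if offset == 0 or (offset >> (width_bits - 7)):
                      raise OverflowError("delta base offset overflow")
                  c = data[i]; i += 1
                  offset = (offset << 7) | (c & 0x7F)
              return offset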
    • get rid of num_packed_objects() · 57059091
      Committed by Nicolas Pitre
      The coming index format change doesn't allow the number of objects to
      be determined directly from the size of the index file.  Instead, let's
      initialize a field in the packed_git structure with the object count
      when the index is validated, since the count is always known at that
      point (see the sketch after this entry).
      
      While at it, let's reorder some struct packed_git fields to avoid
      padding caused by the 64-bit alignment that some of them require.
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      57059091
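      The count can instead be read from the index itself: the last fan-out
      entry counts the objects whose first SHA-1 byte is <= 0xff, i.e. all of
      them.  A sketch (the function name is hypothetical):

          import struct

          def object_count(idx: bytes) -> int:
              """Object count from a pack index's fan-out table, valid for
              both v1 and v2 layouts, independent of the file's size."""
              fanout_pos = 8 if idx[:4] == b"\xfftOc" else 0  # skip v2 header
              (count,) = struct.unpack_from(">I", idx, fanout_pos + 255 * 4)
              return count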
  3. 10 April 2007 (1 commit)
  4. 09 April 2007 (8 commits)
  5. 08 April 2007 (5 commits)
  6. 07 April 2007 (13 commits)
    • A new merge strategy 'subtree'. · 68faf689
      Committed by Junio C Hamano
      This merge strategy largely piggy-backs on git-merge-recursive.
      When merging trees A and B, if B corresponds to a subtree of A,
      B is first adjusted to match the tree structure of A, instead of
      reading the trees at the same level.  This adjustment is also
      done to the common ancestor tree.
      
      If you are pulling updates from the git-gui repository into the git.git
      repository, the root level of the former corresponds to the git-gui/
      subdirectory of the latter.  The tree object of git-gui's toplevel is
      wrapped in a fake tree object whose sole entry has the name 'git-gui'
      and records the object name of the true tree, before being used by the
      3-way merge code (see the sketch after this entry).
      
      If you are merging the other way, only the git-gui/ subtree of
      git.git is extracted and merged into git-gui's toplevel.
      
      The detection of the corresponding subtree is done by comparing the
      pathnames and types at the toplevel of the tree.
      
      Heuristics galore!  That's the git way ;-).
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      68faf689
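      A sketch of the wrapper tree's shape, using git's tree-object encoding
      ("<mode> <name>\0" followed by 20 raw SHA-1 bytes); the actual strategy
      builds this in-core, so the function below is purely illustrative:

          import hashlib

          def wrap_in_fake_tree(subtree_sha1: bytes, name: str = "git-gui") -> bytes:
              """Loose-object bytes of a one-entry tree wrapping another tree
              under `name`, as the subtree strategy shifts git-gui's toplevel."""
              entry = b"40000 " + name.encode() + b"\x00" + subtree_sha1
              obj = b"tree %d\x00" % len(entry) + entry
              print("fake tree id:", hashlib.sha1(obj).hexdigest())
              return obj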
    • git-push to multiple locations does not stop at the first failure · fd1d1b05
      Committed by Junio C Hamano
      When pushing into multiple repositories with git push, via multiple URL
      lines in .git/remotes/$shorthand or multiple url variables in a
      [remote "$shorthand"] section, we used to stop upon the first failure.
      Continue the operation and report the failures at the end.
      Signed-off-by: NJunio C Hamano <junkio@cox.net>
      fd1d1b05
    • git-push reports the URL after failing. · 39878b0c
      Committed by Junio C Hamano
      This came up on #git when somebody was getting 'unable to create
      ./objects/tmp_oXXXX' but swore he had write permission to that
      directory.  It turned out that the repository URL had been changed and
      he was accessing a repository to which he no longer had write
      permission.
      
      I am not sure how much this would have helped somebody who believed he
      was accessing a location whose permissions were changed while he was
      looking the other way, though.  But giving more information on the
      error path should be better, and the next change is helped by this as
      well.
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      39878b0c
    • Merge branch 'jc/index-output' · ee9693e2
      Committed by Junio C Hamano
      * jc/index-output:
        git-read-tree --index-output=<file>
        _GIT_INDEX_OUTPUT: allow plumbing to output to an alternative index file.
      
      Conflicts:
      
      	builtin-apply.c
      ee9693e2
    • Merge branch 'fp/make-j' · 39415449
      Committed by Junio C Hamano
      * fp/make-j:
        Makefile: Add '+' to QUIET_SUBDIR0 to fix parallel make.
      39415449
    • Merge branch 'cc/bisect' · 5bba1b35
      Committed by Junio C Hamano
      * cc/bisect:
        git-bisect: allow bisecting with only one bad commit.
        t6030: add a bit more tests to git-bisect
        git-bisect: modernization
        Documentation: bisect: "start" accepts one bad and many good commits
        Bisect: teach "bisect start" to optionally use one bad and many good revs.
      5bba1b35
    • Merge branch 'jc/checkout' (early part) · b7108a16
      Committed by Junio C Hamano
      * 'jc/checkout' (early part):
        checkout: allow detaching to HEAD even when switching to the tip of a branch
      b7108a16
    • Merge branch 'maint' · ced38ea2
      Committed by Junio C Hamano
      * maint:
        Documentation: tighten dependency for git.{html,txt}
        Makefile: iconv() on Darwin has the old interface
        t5300-pack-object.sh: portability issue using /usr/bin/stat
        t3200-branch.sh: small language nit
        usermanual.txt: some capitalization nits
        Make builtin-branch.c handle the git config file
        rename_ref(): only print a warning when config-file update fails
        Distinguish branches by more than case in tests.
        Avoid composing too long "References" header.
        cvsimport: Improve formatting consistency
        cvsimport: Reorder options in documentation for better understanding
        cvsimport: Improve usage error reporting
        cvsimport: Improve documentation of CVSROOT and CVS module determination
        cvsimport: sync usage lines with existing options
      
      Conflicts:
      
      	Documentation/Makefile
      ced38ea2
    • Documentation: tighten dependency for git.{html,txt} · d7907392
      Committed by Junio C Hamano
      Every time _any_ documentation page changed, the cmds-*.txt files were
      regenerated, which caused git.{html,txt} to be remade.  Try not to
      update the cmds-*.txt files if their new contents match the old ones
      (see the sketch after this entry).
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      d7907392
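      The general write-if-changed trick, sketched in Python (the actual
      change does this with a shell comparison in the Makefile; the helper
      below is hypothetical):

          import os, tempfile

          def update_if_changed(path: str, new_contents: str) -> bool:
              """Rewrite `path` only when its contents would differ, so that
              timestamp-based tools like make skip unchanged prerequisites."""
              try:
                  with open(path) as f:
                      if f.read() == new_contents:
                          return False        # identical: keep old timestamp
              except FileNotFoundError:
                  pass
              fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
              with os.fdopen(fd, "w") as f:
                  f.write(new_contents)
              os.replace(tmp, path)           # atomic rename over the target
              return True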
    • Makefile: iconv() on Darwin has the old interface · 63b4b7a7
      Committed by Arjen Laarhoven
      The libiconv on Darwin uses the old iconv() interface (the 2nd argument
      is a const char **, instead of a char **).  Add OLD_ICONV to the Darwin
      variable definitions to handle this.
      Signed-off-by: Arjen Laarhoven <arjen@yaph.org>
      Acked-by: Brian Gernhardt <benji@silverinsanity.com>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      63b4b7a7
    • t5300-pack-object.sh: portability issue using /usr/bin/stat · d93f7c18
      Committed by Arjen Laarhoven
      In the 'compare delta flavors' test, /usr/bin/stat is used to get the
      file size.  This isn't portable.  There already is a dependency on
      Perl; use its '-s' operator to get the file size instead.
      Signed-off-by: Arjen Laarhoven <arjen@yaph.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      d93f7c18
    • git-bisect: allow bisecting with only one bad commit. · 0a5280a9
      Committed by Junio C Hamano
      This allows you to say:
      
      	git bisect start
      	git bisect bad $bad
      	git bisect next
      
      to start bisection without knowing a good commit.  This has you try a
      commit half-way from the beginning of the history, which is rather
      wasteful if you already know a good commit, but if you don't (or your
      history is short enough that you do not care), there is no reason not
      to allow it.
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      0a5280a9
    • t6030: add a bit more tests to git-bisect · 4f506716
      Committed by Junio C Hamano
      Verify that git-bisect does not start before getting one bad and one
      good commit.
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      4f506716