1. 27 Aug 2006: 3 commits
  2. 24 Aug 2006: 1 commit
    • Convert memcpy(a,b,20) to hashcpy(a,b). · e702496e
      Committed by Shawn Pearce
      This abstracts away the size of the hash values when copying them
      from memory location to memory location, much as the introduction
      of hashcmp abstracted away hash value comparison.
      
      A few call sites were using char* rather than unsigned char*, so
      I added the cast rather than opening hashcpy up to be void*.  This
      is a reasonable tradeoff, as most call sites already use unsigned
      char* and the existing hashcmp is also declared to take unsigned char*.
      
      [jc: Split the patch into the "master" part, to be followed by a
       patch for merge-recursive.c, which is not in "master" yet.
      
       Fixed the cast in the latter hunk to combine-diff.c which was
       wrong in the original.
      
       Also converted ones left-over in combine-diff.c, diff-lib.c and
       upload-pack.c ]
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
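
      As a rough, standalone sketch of the idea (the real definitions live in
      git's headers; the 20-byte constant here is the SHA-1 size that the
      commit abstracts over):

          #include <string.h>

          #define HASH_SZ 20  /* raw SHA-1 is 20 bytes */

          /* Copy one raw hash value; the size lives in one place only. */
          static inline void hashcpy(unsigned char *dst, const unsigned char *src)
          {
                  memcpy(dst, src, HASH_SZ);
          }

          /* Compare two raw hash values; returns 0 when equal, like memcmp(). */
          static inline int hashcmp(const unsigned char *a, const unsigned char *b)
          {
                  return memcmp(a, b, HASH_SZ);
          }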
  3. 22 Aug 2006: 2 commits
  4. 18 Aug 2006: 1 commit
  5. 16 Aug 2006: 1 commit
  6. 12 Aug 2006: 1 commit
    • drop length argument of has_extension · 5bb1cda5
      Committed by Rene Scharfe
      As Fredrik points out the current interface of has_extension() is
      potentially confusing.  Its parameters include both a nul-terminated
      string and a length-limited string.
      
      This patch drops the length argument, requiring two nul-terminated
      strings; all callsites are updated.  I checked that all of them indeed
      provide nul-terminated strings.  Filenames need to be nul-terminated
      anyway if they are to be passed to open() etc.  The performance penalty
      of the additional strlen() is negligible compared to the system calls
      which inevitably surround has_extension() calls.
      
      Additionally, change has_extension() to use size_t inside instead of
      int, as that is the exact type strlen() returns and memcmp() expects.
      Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
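
      A standalone sketch of the simplified interface, assuming the
      strlen()/memcmp() implementation the message describes:

          #include <string.h>

          /* Return 1 if filename ends with ext (both NUL-terminated), else 0. */
          static int has_extension(const char *filename, const char *ext)
          {
                  size_t namelen = strlen(filename);
                  size_t extlen = strlen(ext);
                  return extlen < namelen &&
                         !memcmp(filename + namelen - extlen, ext, extlen);
          }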
  7. 11 Aug 2006: 1 commit
  8. 26 Jul 2006: 1 commit
  9. 14 Jul 2006: 1 commit
    • sha1_file: add the ability to parse objects in "pack file format" · 93821bd9
      Committed by Linus Torvalds
      The pack-file format is slightly different from the traditional git
      object format, in that it has a much denser binary header encoding.
      The traditional format uses an ASCII string with type and length
      information, which is somewhat wasteful.
      
      The new object format starts with an uncompressed binary header
      followed by the compressed payload -- this will allow us later to
      copy the payload straight to packfiles.
      
      Obviously older versions of git cannot read the new files, so for
      now new object files are still created in the traditional format.
      The core.legacyheaders configuration item, when set to false, makes
      the code write the new format for people to experiment with.
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
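
      For context, the traditional loose-object header mentioned here is an
      ASCII "type size" string terminated by a NUL, which is deflated together
      with the payload; a rough sketch of building it (illustrative only):

          #include <stdio.h>

          /*
           * Build the traditional loose-object header, e.g. "blob 1234\0".
           * In the legacy format this header and the payload are deflated
           * together as one zlib stream; the new format described above keeps
           * a binary header uncompressed and only deflates the payload.
           * Returns the header length including the terminating NUL.
           */
          static int format_legacy_header(char *hdr, size_t hdrsz,
                                          const char *type, unsigned long len)
          {
                  return snprintf(hdr, hdrsz, "%s %lu", type, len) + 1;
          }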
  10. 13 Jul 2006: 1 commit
    • Make lazy mkdir more robust. · 756aaf4a
      Committed by Shawn Pearce
      Linus Torvalds <torvalds@osdl.org> wrote:
      
        It's entirely possible that we should just make that whole
      
      	  if (ret == ENOENT)
      
        go away. Yes, it's the right error code if a subdirectory is missing, and
        yes, POSIX requires it, and yes, WXP is probably just a horrible piece of
        sh*t, but on the other hand, I don't think git really has any serious
        reason to even care.
  11. 12 Jul 2006: 1 commit
  12. 10 Jul 2006: 2 commits
  13. 04 Jul 2006: 1 commit
    • Make zlib compression level configurable, and change default. · 12f6c308
      Committed by Joachim B Haga
      With the change in default, "git add ." on kernel dir is about
      twice as fast as before, with only minimal (0.5%) change in
      object size. The speed difference is even more noticeable
      when committing large files, which is now up to 8 times faster.
      
      The configurability is through setting core.compression = [-1..9]
      which maps to the zlib constants; -1 is the default, 0 is no
      compression, and 1..9 are various speed/size tradeoffs, 9
      being slowest.
      
      Signed-off-by: Joachim B Haga (cjhaga@fys.uio.no)
      Acked-by: Linus Torvalds <torvalds@osdl.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
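
      A minimal sketch of how such a setting maps onto zlib; the function name
      and the range check are illustrative, but git ultimately hands the
      configured level to deflateInit() in much this way (zlib's
      Z_DEFAULT_COMPRESSION is -1):

          #include <string.h>
          #include <zlib.h>

          /*
           * Initialize a deflate stream with the configured level.
           * level is expected to be -1 (zlib default) or 0..9, mirroring
           * the core.compression range described above.
           */
          static int init_deflate(z_stream *stream, int level)
          {
                  memset(stream, 0, sizeof(*stream));
                  if (level < Z_DEFAULT_COMPRESSION || level > Z_BEST_COMPRESSION)
                          level = Z_DEFAULT_COMPRESSION;
                  return deflateInit(stream, level);
          }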
  14. 28 Jun 2006: 1 commit
  15. 20 Jun 2006: 1 commit
  16. 10 Jun 2006: 1 commit
    • shared repository - add a few missing calls to adjust_shared_perm(). · 138086a7
      Committed by Junio C Hamano
      There were a few calls to adjust_shared_perm() that were
      missing:
      
       - init-db creates refs, refs/heads, and refs/tags before
         reading from templates that could specify sharedrepository in
         the config file;
      
       - updating config file created it under user's umask without
         adjusting;
      
       - updating refs created it under user's umask without
         adjusting;
      
       - switching branches created .git/HEAD under user's umask
         without adjusting.
      
      This moves adjust_shared_perm() from sha1_file.c to path.c,
      since a few SIMPLE_PROGRAM need to call repository configuration
      functions, which in turn need to call adjust_shared_perm();
      sha1_file.c needs to link with the SHA1 computation library,
      which is usually not linked into a SIMPLE_PROGRAM.
      Signed-off-by: Junio C Hamano <junkio@cox.net>
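
      A simplified sketch of what a helper like adjust_shared_perm() does:
      widen group permissions on a freshly created path when the repository is
      marked shared.  The single on/off flag here is an assumption made for
      brevity; the real function understands more permission variants.

          #include <sys/stat.h>

          /* 1 when core.sharedrepository is enabled; set by config code elsewhere. */
          static int shared_repository;

          /* Make a freshly created path group-accessible in a shared repository. */
          static int adjust_shared_perm(const char *path)
          {
                  struct stat st;
                  mode_t mode;

                  if (!shared_repository)
                          return 0;
                  if (stat(path, &st) < 0)
                          return -1;
                  mode = st.st_mode;
                  mode |= (mode & S_IRUSR) ? S_IRGRP : 0;
                  mode |= (mode & S_IWUSR) ? S_IWGRP : 0;
                  mode |= (mode & S_IXUSR) ? S_IXGRP : 0;
                  return chmod(path, mode & 07777);
          }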
  17. 03 Jun 2006: 2 commits
  18. 25 May 2006: 1 commit
    • Clean up sha1 file writing · 4d548150
      Committed by Linus Torvalds
      This cleans up and future-proofs the sha1 file writing in sha1_file.c.
      
      In particular, instead of doing a simple "write()" call and just verifying
      that it succeeds (or - as in one place - just assuming it does), it uses
      "write_buffer()" to write data to the file descriptor while correctly
      checking for partial writes, EINTR etc.
      
      It also splits up write_sha1_to_fd() to be a lot more readable: if we need
      to re-create the compressed object, we do so in a separate helper
      function, making the logic a whole lot more modular and obvious.
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
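
      A self-contained sketch of a full-write helper of this shape, looping
      over short writes and retrying on interruption; the real write_buffer()
      in sha1_file.c differs in details such as error reporting:

          #include <errno.h>
          #include <unistd.h>

          /* Write all of buf to fd, coping with short writes and interruption. */
          static int write_buffer(int fd, const void *buf, size_t len)
          {
                  const char *p = buf;

                  while (len) {
                          ssize_t ret = write(fd, p, len);
                          if (ret < 0) {
                                  if (errno == EINTR || errno == EAGAIN)
                                          continue;
                                  return -1;  /* hard error */
                          }
                          if (!ret)
                                  return -1;  /* no progress; treat as failure */
                          p += ret;
                          len -= ret;
                  }
                  return 0;
          }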
  19. 24 May 2006: 1 commit
  20. 16 May 2006: 1 commit
    • Fix pack-index issue on 64-bit platforms a bit more portably. · 1b9bc5a7
      Committed by Junio C Hamano
      Apparently <stdint.h> is not enough for uint32_t on OpenBSD; use
      "unsigned int" -- hopefully that would stay 32-bit on every
      platform we care about, at least until we update the pack-index
      file format.
      
      Our architecture-optimized sha1 routines use uint32_t and expect
      '#include <stdint.h>' to be enough, so OpenBSD on arm or ppc might
      have similar issues down the road, I dunno.
      Signed-off-by: Junio C Hamano <junkio@cox.net>
  21. 15 May 2006: 1 commit
  22. 14 May 2006: 1 commit
    • Fix git-pack-objects for 64-bit platforms · 66561f5a
      Committed by Dennis Stosberg
      The offset of an object in the pack is recorded as a 4-byte integer
      in the index file.  When reading the offset from the mmap'ed index
      in prepare_pack_revindex(), the address is dereferenced as a long*.
      This works fine as long as the long type is four bytes wide.  On
      NetBSD/sparc64, however, a long is 8 bytes wide and so dereferencing
      the offset produces garbage.
      
      [jc: taking suggestion by Linus to use uint32_t]
      Signed-off-by: Dennis Stosberg <dennis@stosberg.net>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
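
      A small sketch of the portable pattern: read the 4-byte, network-order
      offset through a uint32_t instead of dereferencing it as a long (the
      surrounding index layout is omitted here):

          #include <stdint.h>
          #include <string.h>
          #include <arpa/inet.h>

          /* Read the 4-byte, network-order pack offset stored at 'p'. */
          static uint32_t read_index_offset(const unsigned char *p)
          {
                  uint32_t raw;
                  memcpy(&raw, p, sizeof(raw));   /* also avoids unaligned access */
                  return ntohl(raw);
          }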
  23. 08 May 2006: 1 commit
    • Transitively read alternatives · c2f493a4
      Committed by Martin Waitz
      When adding an alternate object store, also add the entries from its
      info/alternates file.
      Relative entries are only allowed in the current repository.
      Loops and duplicate alternates through multiple repositories are ignored.
      Just to be sure that nothing breaks, it is not allowed to build deep
      nesting levels using info/alternates.
      Signed-off-by: Martin Waitz <tali@admingilde.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
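
      An illustrative outline of the recursion this implies, with a hard depth
      cap so deeply nested info/alternates files are refused; the names, the
      depth limit value, and the parsing are simplifications, and loop and
      duplicate detection are left out:

          #include <stdio.h>
          #include <string.h>

          #define MAX_ALT_DEPTH 5   /* cap on nested info/alternates files */

          /* Stand-in for registering one alternate object directory. */
          static void add_alternate(const char *objdir)
          {
                  printf("alternate: %s\n", objdir);
          }

          /*
           * Register objdir, then read <objdir>/info/alternates and recurse
           * into each entry, refusing to nest deeper than MAX_ALT_DEPTH.
           */
          static void link_alternates(const char *objdir, int depth)
          {
                  char path[4096], line[4096];
                  FILE *fp;

                  if (depth > MAX_ALT_DEPTH)
                          return;
                  add_alternate(objdir);
                  snprintf(path, sizeof(path), "%s/info/alternates", objdir);
                  fp = fopen(path, "r");
                  if (!fp)
                          return;
                  while (fgets(line, sizeof(line), fp)) {
                          line[strcspn(line, "\n")] = '\0';
                          if (*line && *line != '#')
                                  link_alternates(line, depth + 1);
                  }
                  fclose(fp);
          }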
  24. 04 May 2006: 1 commit
    • sha1_to_hex() usage cleanup · dcb3450f
      Committed by Linus Torvalds
      Somebody on the #git channel complained that the sha1_to_hex() thing uses
      a static buffer which caused an error message to show the same hex output
      twice instead of showing two different ones.
      
      That's pretty easily rectified by making it use a simple LRU of a few
      buffers, which also allows some other users (that were aware of the
      buffer re-use) to be written in a more straightforward manner.
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
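
      A sketch of the rotating-buffer approach, so that a few consecutive
      calls each get their own result instead of overwriting a single static
      buffer; the buffer count here is arbitrary:

          #include <stdio.h>

          /* 20-byte SHA-1 -> 40-char hex string, using a small ring of buffers. */
          static const char *sha1_to_hex(const unsigned char *sha1)
          {
                  static char bufs[4][41];   /* up to 4 results stay valid at once */
                  static int idx;
                  char *buf = bufs[idx = (idx + 1) % 4];
                  int i;

                  for (i = 0; i < 20; i++)
                          sprintf(buf + i * 2, "%02x", sha1[i]);
                  return buf;
          }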
  25. 18 Apr 2006: 1 commit
  26. 08 Apr 2006: 1 commit
  27. 04 Apr 2006: 1 commit
  28. 20 Mar 2006: 1 commit
    • unpack_delta_entry(): reduce memory footprint. · 67686d95
      Committed by Junio C Hamano
      Currently we unpack the delta data from the pack and then unpack
      the base object to apply that delta data to it.  When getting an
      object that is deeply deltified, we can reduce memory footprint
      by unpacking the base object first and then unpacking the delta
      data, because we will need to keep at most one delta data in
      memory that way.
      Signed-off-by: Junio C Hamano <junkio@cox.net>
  29. 23 Feb 2006: 3 commits
    • Give no terminating LF to error() function. · bd2afde8
      Committed by Junio C Hamano
      Signed-off-by: Junio C Hamano <junkio@cox.net>
    • pack-objects: reuse data from existing packs. · 3f9ac8d2
      Committed by Junio C Hamano
      When generating a new pack, notice if the objects we need are already
      in existing packs.  If an object is stored deltified, and its base
      object is also one we are going to pack, then reuse the existing
      deltified representation unconditionally, bypassing all the expensive
      find_deltas() and try_deltas() calls.
      
      Also, notice if what we are going to write out exactly matches
      what is already in an existing pack (either deltified or just
      compressed).  In such a case, we can just copy it instead of
      going through the usual uncompressing & recompressing cycle.
      
      Without this patch, in linux-2.6 repository with about 1500
      loose objects and a single mega pack:
      
          $ git-rev-list --objects v2.6.16-rc3 >RL
          $ wc -l RL
          184141 RL
          $ time git-pack-objects p <RL
          Generating pack...
          Done counting 184141 objects.
          Packing 184141 objects....................
          a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
      
          real    12m4.323s
          user    11m2.560s
          sys     0m55.950s
      
      With this patch, the same input:
      
          $ time ../git.junio/git-pack-objects q <RL
          Generating pack...
          Done counting 184141 objects.
          Packing 184141 objects.....................
          a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
          Total 184141, written 184141, reused 182441
      
          real    1m2.608s
          user    0m55.090s
          sys     0m1.830s
      Signed-off-by: Junio C Hamano <junkio@cox.net>
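
      The reuse test boils down to roughly this shape: an object already
      stored in a pack can be copied verbatim if it is not a delta, or if it
      is a delta whose base is also going into the new pack.  The struct
      fields below are simplified stand-ins for pack-objects' internal
      bookkeeping:

          struct object_entry {
                  int in_pack;                      /* already stored in some pack? */
                  int is_delta;                     /* stored as a delta there? */
                  struct object_entry *delta_base;  /* base object if is_delta */
                  int to_be_packed;                 /* selected for the new pack? */
          };

          /* Can we copy the existing packed representation instead of re-deltifying? */
          static int can_reuse(const struct object_entry *e)
          {
                  if (!e->in_pack)
                          return 0;
                  if (!e->is_delta)
                          return 1;  /* plain compressed data: copy as-is */
                  /* a delta is only reusable if its base goes into the new pack too */
                  return e->delta_base && e->delta_base->to_be_packed;
          }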
    • detect broken alternates. · 26125f6b
      Committed by Junio C Hamano
      The real problem that triggered an earlier fix was that an alternate
      entry was pointing at a removed directory.  Complaining about an
      object/pack directory that cannot be opendir-ed produces noise in an
      ancient repository that does not have an object/pack directory and
      has never been packed.
      
      Detect the real user error and report it.  Also if opendir
      failed for other reasons (e.g. no read permissions), report that
      as well.
      
      Spotted by Andrew Vasquez <andrew.vasquez@qlogic.com>.
      Signed-off-by: Junio C Hamano <junkio@cox.net>
  30. 18 Feb 2006: 1 commit
  31. 17 Feb 2006: 1 commit
    • pack-objects: reuse data from existing packs. · a49dd05f
      Committed by Junio C Hamano
      When generating a new pack, notice if the objects we need are already
      in existing packs.  If an object is stored deltified, and its base
      object is also one we are going to pack, then reuse the existing
      deltified representation unconditionally, bypassing all the expensive
      find_deltas() and try_deltas() calls.
      
      Also, notice if what we are going to write out exactly matches
      what is already in an existing pack (either deltified or just
      compressed).  In such a case, we can just copy it instead of
      going through the usual uncompressing & recompressing cycle.
      
      Without this patch, in linux-2.6 repository with about 1500
      loose objects and a single mega pack:
      
          $ git-rev-list --objects v2.6.16-rc3 >RL
          $ wc -l RL
          184141 RL
          $ time git-pack-objects p <RL
          Generating pack...
          Done counting 184141 objects.
          Packing 184141 objects....................
          a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
      
          real    12m4.323s
          user    11m2.560s
          sys     0m55.950s
      
      With this patch, the same input:
      
          $ time ../git.junio/git-pack-objects q <RL
          Generating pack...
          Done counting 184141 objects.
          Packing 184141 objects.....................
          a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
          Total 184141, written 184141, reused 182441
      
          real    1m2.608s
          user    0m55.090s
          sys     0m1.830s
      Signed-off-by: Junio C Hamano <junkio@cox.net>
  32. 16 Feb 2006: 1 commit
  33. 10 Feb 2006: 1 commit
    • stat() for existence in safe_create_leading_directories() · 67d42212
      Committed by Jason Riedy
      Use stat() to explicitly check for existence rather than
      relying on the non-portable EEXIST error in sha1_file.c's
      safe_create_leading_directories().  There certainly are
      optimizations possible, but then the code becomes almost
      the same as that in coreutils' lib/mkdir-p.c.
      
      Other uses of EEXIST seem ok.  Tested on Solaris 8, AIX 5.2L,
      and a few Linux versions.  AIX has some unrelated (I think)
      failures right now; I haven't tried many recent gits there.
      Anyone have an old Ultrix box to break everything?  ;)
      
      Also remove extraneous #includes.  Everything's already in
      git-compat-util.h, included through cache.h.
      Signed-off-by: Jason Riedy <ejr@cs.berkeley.edu>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
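
      A sketch of the stat()-before-mkdir pattern for each leading directory
      component.  This is simplified; the real function handles more edge
      cases, but the explicit existence check instead of trusting a portable
      EEXIST is the point:

          #include <string.h>
          #include <sys/stat.h>

          /* Create every leading directory of 'path' (but not 'path' itself). */
          static int safe_create_leading_directories(char *path)
          {
                  char *pos = path;
                  struct stat st;

                  while ((pos = strchr(pos + 1, '/')) != NULL) {
                          *pos = '\0';
                          /* check existence explicitly instead of trusting EEXIST */
                          if (stat(path, &st) == 0) {
                                  if (!S_ISDIR(st.st_mode)) {
                                          *pos = '/';
                                          return -1;   /* component exists, not a dir */
                                  }
                          } else if (mkdir(path, 0777) < 0) {
                                  *pos = '/';
                                  return -1;
                          }
                          *pos = '/';
                  }
                  return 0;
          }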