1. 19 Jul 2007 (1 commit)
    • Rename read_pipe() to read_fd() and make its buffer NUL-terminated · c4fba0a3
      Authored by Carlos Rica
      The new name is closer to the purpose of the function.
      
      A NUL-terminated buffer makes things easier when callers need that.
      Since the function returns only the size of the data actually read,
      and almost always allocates more space than needed (the final size
      is unknown up front), an extra NUL terminating the buffer is
      harmless.  It is not included in the returned size, so callers
      keep working as before.
      
      Also, the buffer passed in may now start out NULL, and alloc_nr()
      is used for growing it instead of doubling (size *= 2).
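      In C, the reworked helper might look roughly like this (a simplified
      sketch: the alloc_nr() growth factor and the error handling are
      illustrative, not the exact git code):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Growth macro in the style of git's alloc_nr() -- illustrative values. */
#define alloc_nr(x) (((x) + 16) * 3 / 2)

/*
 * Read everything from fd into a malloc'd buffer and report the size
 * read via *sizep.  The buffer may start out NULL; one extra NUL is
 * always written past the data but never counted in *sizep.
 */
static void *read_fd(int fd, unsigned long *sizep)
{
	size_t size = 0, alloc = 0;
	char *buf = NULL;

	for (;;) {
		ssize_t n;

		if (size + 1 >= alloc) {        /* +1 keeps room for the NUL */
			alloc = alloc_nr(alloc);
			buf = realloc(buf, alloc);
			if (!buf)
				return NULL;
		}
		n = read(fd, buf + size, alloc - size - 1);
		if (n < 0) {
			free(buf);
			return NULL;
		}
		if (!n)
			break;
		size += n;
	}
	buf[size] = '\0';               /* the harmless extra NUL */
	*sizep = size;
	return buf;
}
```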
      Signed-off-by: Carlos Rica <jasampler@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  2. 04 Jul 2007 (1 commit)
    • Don't smash stack when $GIT_ALTERNATE_OBJECT_DIRECTORIES is too long · 9cb18f56
      Authored by Jim Meyering
      There is no restriction on the length of the name returned by
      get_object_directory, other than the fact that it must be a stat'able
      git object directory.  That means its name may have length up to
      PATH_MAX-1 (i.e., often 4095) not counting the trailing NUL.
      
      Combine that with the assumption that the concatenation of that name and
      suffixes like "/info/alternates" and "/pack/---long-name---.idx" will fit
      in a buffer of length PATH_MAX, and you see the problem.  Here's a fix:
      
          sha1_file.c (prepare_packed_git_one): Lengthen "path" buffer
          so we are guaranteed to be able to append "/pack/" without checking.
          Skip any directory entry that is too long to be appended.
          (read_info_alternates): Protect against a similar buffer overrun.
      
      Before this change, using the following admittedly contrived environment
      setting would cause many git commands to clobber their stack and segfault
      on a system with PATH_MAX == 4096:
      
        t=$(perl -e '$s=".git/objects";$n=(4096-6-length($s))/2;print "./"x$n . $s')
        export GIT_ALTERNATE_OBJECT_DIRECTORIES=$t
        touch g
        ./git-update-index --add g
      
      If you run the above commands, you'll soon notice that many
      git commands now segfault, so you'll want to do this:
      
        unset GIT_ALTERNATE_OBJECT_DIRECTORIES
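      A minimal sketch of the bounds checking the fix adds
      (append_pack_name() is a hypothetical helper invented for
      illustration; the real change edits prepare_packed_git_one() and
      read_info_alternates() in place):

```c
#include <assert.h>
#include <limits.h>
#include <string.h>

#ifndef PATH_MAX
#define PATH_MAX 4096
#endif

/*
 * Sketch of the idea: the buffer is PATH_MAX plus slack, so appending
 * the constant "/pack/" can never overflow, and any directory entry
 * that would not fit is skipped instead of copied.
 */
static int append_pack_name(char path[PATH_MAX + 64],
			    const char *objdir, const char *entry)
{
	size_t len = strlen(objdir);

	if (len >= PATH_MAX)
		return -1;               /* object directory itself too long */
	memcpy(path, objdir, len);
	memcpy(path + len, "/pack/", 7); /* always fits thanks to the slack */
	len += 6;
	if (len + strlen(entry) + 1 > PATH_MAX + 64)
		return -1;               /* skip an over-long directory entry */
	strcpy(path + len, entry);
	return 0;
}
```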
      Signed-off-by: Jim Meyering <jim@meyering.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  3. 27 Jun 2007 (1 commit)
  4. 13 Jun 2007 (2 commits)
  5. 07 Jun 2007 (1 commit)
    • War on whitespace · a6080a0a
      Authored by Junio C Hamano
      This uses "git-apply --whitespace=strip" to fix whitespace errors that
      have crept into our source files over time.  A few files need to keep
      their trailing whitespace (most notably, test vectors).  The result
      still passes the tests, and the build output in the Documentation/
      area is unchanged.
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  6. 31 May 2007 (2 commits)
    • always start looking up objects in the last used pack first · f7c22cc6
      Authored by Nicolas Pitre
      Jon Smirl said:
      
      | Once an object reference hits a pack file it is very likely that
      | following references will hit the same pack file. So first place to
      | look for an object is the same place the previous object was found.
      
      This is indeed a good heuristic, so here it is.  The search always
      starts with the pack where the last object lookup succeeded.  If the
      wanted object is not available there, the search continues with the
      normal pack ordering.
      
      To test this I split the Linux repository into 66 packs and performed a
      "time git-rev-list --objects --all > /dev/null".  Best results are as
      follows:
      
      	Pack Sort			w/o this patch	w/ this patch
      	-------------------------------------------------------------
      	recent objects last		26.4s		20.9s
      	recent objects first		24.9s		18.4s
      
      This shows that the pack order based on object age has some influence,
      but that the last-used-pack heuristic is even more significant in
      reducing object lookup cost.
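      The heuristic can be sketched as follows (the struct layout and
      pack_has_object() are stand-ins for the real pack index search):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct packed_git {
	struct packed_git *next;
	const char *pack_name;
};

/* Remember which pack satisfied the previous lookup. */
static struct packed_git *last_found_pack;

/* Stand-in for the real .idx search. */
static int pack_has_object(struct packed_git *p, const char *name)
{
	return strstr(p->pack_name, name) != NULL;
}

static struct packed_git *find_pack(struct packed_git *packs, const char *name)
{
	struct packed_git *p;

	/* Try the last-used pack first... */
	if (last_found_pack && pack_has_object(last_found_pack, name))
		return last_found_pack;

	/* ...then fall back to the normal pack ordering. */
	for (p = packs; p; p = p->next) {
		if (p == last_found_pack)
			continue;       /* already checked above */
		if (pack_has_object(p, name)) {
			last_found_pack = p;
			return p;
		}
	}
	return NULL;
}
```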
      
      Signed-off-by: Nicolas Pitre <nico@cam.org>

      Note: --max-pack-size in git-repack currently produces packs with old
      objects after those containing recent objects, so the pack sort based
      on filesystem timestamp is backwards for them.  That needs to be
      fixed of course, but at least it made me think about this variable
      for the test.
      Signed-off-by: Junio C Hamano <junkio@cox.net>
    • fix signed range problems with hex conversions · 192a6be2
      Authored by Linus Torvalds
      Make hexval_table[] "const".  Also make sure that the accessor
      function hexval() does not access the table with out-of-range
      values by declaring its parameter "unsigned char", instead of
      "unsigned int".
      
      With this, gcc can just generate:
      
      	movzbl  (%rdi), %eax
      	movsbl  hexval_table(%rax),%edx
      	movzbl  1(%rdi), %eax
      	movsbl  hexval_table(%rax),%eax
      	sall    $4, %edx
      	orl     %eax, %edx
      
      for the code to generate a byte from two hex characters.
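      A sketch of the resulting code (hexval() here returns int so that -1
      can signal bad input; the GNU range initializer stands in for git's
      fully written-out 256-entry table):

```c
#include <assert.h>

/*
 * The table is const, and hexval() takes "unsigned char", so a
 * (possibly negative) plain char can never index outside the table.
 * The [first ... last] range initializer is a GNU C extension used
 * here only to keep the sketch short.
 */
static const signed char hexval_table[256] = {
	[0 ... 255] = -1,
	['0'] = 0, ['1'] = 1, ['2'] = 2, ['3'] = 3, ['4'] = 4,
	['5'] = 5, ['6'] = 6, ['7'] = 7, ['8'] = 8, ['9'] = 9,
	['a'] = 10, ['b'] = 11, ['c'] = 12,
	['d'] = 13, ['e'] = 14, ['f'] = 15,
};

static int hexval(unsigned char c)
{
	return hexval_table[c];
}

/* Build one byte from two hex characters; negative on bad input. */
static int hex2byte(const char *hex)
{
	int hi = hexval(hex[0]);
	int lo = hexval(hex[1]);

	return (hi < 0 || lo < 0) ? -1 : (hi << 4) | lo;
}
```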
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
  7. 30 May 2007 (1 commit)
  8. 27 May 2007 (2 commits)
    • Micro-optimize prepare_alt_odb · 7dc24aa5
      Authored by Shawn O. Pearce
      Calling getenv() is not that expensive, but it's also not free, and
      it's certainly not cheaper than testing whether alt_odb_tail is
      non-null.

      Because we call prepare_alt_odb() from within find_sha1_file() every
      time we cannot find an object file locally, we want to bail out of
      prepare_alt_odb() as early as possible once our alternate list has
      been initialized.
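      The early-out itself is a single test; a sketch with a call counter
      added purely for illustration:

```c
#include <assert.h>
#include <stdlib.h>

struct alternate_object_database { struct alternate_object_database *next; };

static struct alternate_object_database *alt_odb_list;
static struct alternate_object_database **alt_odb_tail;
static int getenv_calls;        /* instrumentation for this sketch */

static void prepare_alt_odb(void)
{
	/* The micro-optimization: bail out before touching getenv(). */
	if (alt_odb_tail)
		return;

	getenv_calls++;
	(void)getenv("GIT_ALTERNATE_OBJECT_DIRECTORIES");
	/* ...the real code parses the list and info/alternates here... */
	alt_odb_tail = &alt_odb_list;
}
```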
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
    • Lazily open pack index files on demand · d079837e
      Authored by Shawn O. Pearce
      In some repository configurations the user may have many packfiles,
      but all of the recent commits/trees/tags/blobs are likely to
      be in the most recent packfile (the one with the newest mtime).
      It is therefore common to be able to complete an entire operation
      by accessing only one packfile, even if there are 25 packfiles
      available to the repository.
      
      Rather than opening and mmaping the corresponding .idx file for
      every pack found, we now only open and map the .idx when we suspect
      there might be an object of interest in there.
      
      Of course we cannot know in advance which packfile contains an
      object, so we still need to scan the entire packed_git list to
      locate anything.  But odds are that users want to access objects in
      the most recently created packfiles first, and those may be all they
      ever need for the current operation.
      
      Junio observed in b867092f that placing recent packfiles before
      older ones can slightly improve access times for recent objects,
      without degrading it for historical object access.
      
      This change improves upon Junio's observations by trying even harder
      to avoid the .idx files that we won't need.
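      A sketch of the lazy-open logic (open_pack_index() here is a stand-in
      that merely records being called; the real one opens and mmaps the
      .idx file):

```c
#include <assert.h>
#include <stddef.h>

struct packed_git {
	const void *index_data;     /* NULL until the .idx is needed */
	int index_opens;            /* instrumentation for this sketch */
};

/* Stand-in for opening and mmap'ing pack-*.idx. */
static int open_pack_index(struct packed_git *p)
{
	static const char fake_index[] = "idx";

	p->index_data = fake_index;
	p->index_opens++;
	return 0;
}

/* The index is mapped only when a lookup actually reaches this pack. */
static int find_in_pack(struct packed_git *p, const char *name)
{
	if (!p->index_data && open_pack_index(p) < 0)
		return 0;
	(void)name;                 /* real code binary-searches the index */
	return 1;
}
```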
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
  9. 16 May 2007 (1 commit)
  10. 11 May 2007 (1 commit)
    • deprecate the new loose object header format · 726f852b
      Authored by Nicolas Pitre
      Now that we encourage and actively preserve objects in packed form
      more aggressively than we did when the new loose object format and
      core.legacyheaders were introduced, that extra loose object format
      no longer appears to be worth it.

      Because the packing of loose objects has to go through the delta
      match loop anyway, and since most of them should end up being
      deltified in most cases, there is really little advantage to this
      parallel loose object format: any CPU savings it might provide are
      lost in the noise in the end.
      
      This patch gets rid of core.legacyheaders, preserves the legacy
      format as the only writable loose object format, and deprecates the
      other one, to keep things simpler.
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
  11. 26 Apr 2007 (1 commit)
    • Actually handle some low-memory conditions · d1efefa4
      Authored by Shawn O. Pearce
      Tim Ansell discovered his Debian server didn't permit git-daemon to
      use as much memory as it needed to handle cloning a project with
      a 128 MiB packfile.  Filtering the strace provided by Tim of the
      rev-list child showed this gem of a sequence:
      
        open("./objects/pack/pack-*.pack", O_RDONLY|O_LARGEFILE <unfinished ...>
        <... open resumed> )              = 5
      
      OK, so the packfile is fd 5...
      
        mmap2(NULL, 33554432, PROT_READ, MAP_PRIVATE, 5, 0 <unfinished ...>
         <... mmap2 resumed> )             = 0xb5e2d000
      
      and we mapped one 32 MiB window from it at position 0...
      
         mmap2(NULL, 31020635, PROT_READ, MAP_PRIVATE, 5, 0x6000 <unfinished ...>
         <... mmap2 resumed> )             = -1 ENOMEM (Cannot allocate memory)
      
      And we asked for another window further into the file.  But got
      denied.  In Tim's case this was due to a resource limit on the
      git-daemon process, and its children.
      
      Now where are we in the code?  We're down inside use_pack(),
      after we have called unuse_one_window() enough times to make sure
      we stay within our allowed maximum window size.  However since we
      didn't unmap the prior window at 0xb5e2d000 we aren't exceeding
      the current limit (which probably was just the defaults).
      
      But we're actually down inside xmmap()...
      
      So we release the window we do have (by calling release_pack_memory),
      assuming there is some memory pressure...
      
         munmap(0xb5e2d000, 33554432 <unfinished ...>
         <... munmap resumed> )            = 0
         close(5 <unfinished ...>
         <... close resumed> )             = 0
      
      And that was the last window in this packfile.  So we closed it.
      Way to go, us.  Our xmmap did not expect release_pack_memory to
      close the fd it's about to map...
      
         mmap2(NULL, 31020635, PROT_READ, MAP_PRIVATE, 5, 0x6000 <unfinished ...>
         <... mmap2 resumed> )             = -1 EBADF (Bad file descriptor)
      
      And so the Linux kernel happily tells us f' off.
      
         write(2, "fatal: ", 7 <unfinished ...>
         <... write resumed> )             = 7
         write(2, "Out of memory? mmap failed: Bad "..., 47 <unfinished ...>
         <... write resumed> )             = 47
      
      And we report the bad file descriptor error, and not the ENOMEM,
      and die, claiming we are out of memory.  But actually that mmap
      should have succeeded, as we had enough memory for that window,
      seeing as how we released the prior one.
      
      Originally when I developed the sliding window mmap feature I had
      this exact same bug in fast-import, and I dealt with it by handing
      in the struct packed_git* we want to open the new window for, as the
      caller wasn't prepared to reopen the packfile if unuse_one_window
      closed it.  The same is true here from xmmap, but the caller doesn't
      have the struct packed_git* handy.  So I'm using the file descriptor
      instead to perform the same test.
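      The control flow of the fix, with stand-ins replacing the real
      mmap()/close() so it can be exercised directly (everything here is
      illustrative scaffolding, not git's actual functions):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

static int window_fd;               /* fd owning the only mapped window */
static int fd_closed;               /* did releasing close that fd? */
static int simulate_enomem_once;

/* Stand-in for mmap(): fails with EBADF if our fd was closed. */
static void *try_map(int fd)
{
	if (fd_closed && fd == window_fd) {
		errno = EBADF;          /* the failure mode before the fix */
		return (void *)-1;
	}
	if (simulate_enomem_once) {
		simulate_enomem_once = 0;
		errno = ENOMEM;
		return (void *)-1;
	}
	return (void *)1;               /* "mapped" */
}

/* Releasing the last window also closes its fd, unless it is keep_fd. */
static void release_pack_memory(int keep_fd)
{
	if (window_fd != keep_fd)
		fd_closed = 1;
}

/* Fixed xmmap(): hand the fd we are about to map to the releaser. */
static void *xmmap(int fd)
{
	void *ret = try_map(fd);

	if (ret == (void *)-1 && errno == ENOMEM) {
		release_pack_memory(fd);
		ret = try_map(fd);      /* retry now that memory is freed */
	}
	return ret;
}
```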
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
  12. 21 Apr 2007 (1 commit)
  13. 17 Apr 2007 (1 commit)
  14. 11 Apr 2007 (4 commits)
  15. 06 Apr 2007 (1 commit)
  16. 28 Mar 2007 (2 commits)
  17. 25 Mar 2007 (2 commits)
  18. 21 Mar 2007 (2 commits)
    • Be more careful about zlib return values · ac54c277
      Authored by Linus Torvalds
      When creating a new object, we use "deflate(stream, Z_FINISH)" in a loop
      until it no longer returns Z_OK, and then we do "deflateEnd()" to finish
      up business.
      
      That should all work, but the fact is, it's not how you're _supposed_
      to use the zlib return values:
      
       - deflate() should never return Z_OK in the first place, except if we
         need to increase the output buffer size (which we're not doing, and
         should never need to do, since we pre-allocated a buffer that is
         supposed to be able to hold the output in full). So the "while()" loop
         was incorrect: Z_OK doesn't actually mean "ok, continue", it means "ok,
         allocate more memory for me and continue"!
      
       - if we got an error return, we would consider it to be end-of-stream,
         but it could be some internal zlib error.  In short, we should check
         for Z_STREAM_END explicitly, since that's the only valid return value
         anyway for the Z_FINISH case.
      
       - we never checked deflateEnd() return codes at all.
      
      Now, admittedly, none of these issues should ever happen, unless there is
      some internal bug in zlib. So this patch should make zero difference, but
      it seems to be the right thing to do.
      
      We should probably be anal and check the return value of
      "deflateInit()" too!
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
    • index-pack: use hash_sha1_file() · ce9fbf16
      Authored by Nicolas Pitre
      Use hash_sha1_file() instead of duplicating code to compute an
      object's SHA1.  While at it, make it accept a const pointer.
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
  19. 20 Mar 2007 (3 commits)
  20. 19 Mar 2007 (4 commits)
    • Limit the size of the new delta_base_cache · 18bdec11
      Authored by Shawn O. Pearce
      The new configuration variable core.deltaBaseCacheLimit allows the
      user to control how much memory they are willing to give to Git for
      caching base objects of deltas.  This is not normally meant to be
      a user tweakable knob; the "out of the box" settings are meant to
      be suitable for almost all workloads.
      
      We default to 16 MiB under the assumption that the cache is not
      meant to consume all of the user's available memory, and that the
      cache's main purpose is to cache trees, for faster path limiting
      during revision traversal.  Since trees tend to be relatively small
      objects, this relatively small limit should still allow a large
      number of objects.
      
      On the other hand we don't want the cache to start storing 200
      different versions of a 200 MiB blob, as this could easily blow
      the entire address space of a 32 bit process.
      
      We evict OBJ_BLOB from the cache first (credit goes to Junio), as
      we want to favor OBJ_TREE within the cache.  Trees are the objects
      with the highest inflate() startup penalty, as they tend to be
      small and thus don't get much of a chance to amortize that penalty
      over the entire data.
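      The eviction policy can be sketched like this (the cache layout is
      simplified to a flat array, and trim_cache() is an illustrative
      name, not git's):

```c
#include <assert.h>
#include <stddef.h>

enum object_type { OBJ_TREE = 2, OBJ_BLOB = 3 };

struct cache_entry { enum object_type type; size_t size; int in_use; };

#define MAX_DELTA_CACHE 256
static struct cache_entry cache[MAX_DELTA_CACHE];
static size_t cached_bytes;
static size_t delta_base_cache_limit = 16 * 1024 * 1024;

/*
 * Evict blobs first so trees -- the expensive-to-reinflate,
 * frequently revisited objects -- stay cached the longest.
 */
static void trim_cache(size_t incoming)
{
	int pass, i;

	for (pass = 0; pass < 2; pass++) {
		if (cached_bytes + incoming <= delta_base_cache_limit)
			return;
		for (i = 0; i < MAX_DELTA_CACHE; i++) {
			struct cache_entry *e = &cache[i];

			if (!e->in_use)
				continue;
			if (pass == 0 && e->type != OBJ_BLOB)
				continue;       /* first pass: blobs only */
			e->in_use = 0;
			cached_bytes -= e->size;
			if (cached_bytes + incoming <= delta_base_cache_limit)
				return;
		}
	}
}
```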
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
    • Reuse cached data out of delta base cache. · a0cba108
      Authored by Nicolas Pitre
      A malloc() + memcpy() will always be faster than mmap() +
      malloc() + inflate().  If the data is already there it is
      certainly better to copy it straight away.
      
      With this patch I can run 'git log drivers/scsi/ > /dev/null'
      about 7% faster.  I bet it might be even more on platforms with
      bad mmap() support.
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
    • Implement a simple delta_base cache · e5e01619
      Authored by Linus Torvalds
      This trivial 256-entry delta_base cache improves performance for some
      loads by a factor of 2.5 or so.
      
      Instead of always re-generating the delta bases (possibly over and over
      and over again), just cache the last few ones. They often can get re-used.
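      A sketch of such a direct-mapped cache, which also shows the
      malloc()+memcpy() reuse path from the previous commit (slot hashing
      and ownership rules are simplified relative to the real code):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MAX_DELTA_CACHE 256

struct delta_base_cache_entry {
	void *p;                    /* which pack */
	unsigned long base_offset;  /* where in it */
	void *data;
	unsigned long size;
};
static struct delta_base_cache_entry delta_base_cache[MAX_DELTA_CACHE];

/* Direct-mapped: one slot per (pack, offset) hash. */
static unsigned long slot(void *p, unsigned long off)
{
	unsigned long hash = (unsigned long)(uintptr_t)p + off;

	hash += (hash >> 8) + (hash >> 16);
	return hash % MAX_DELTA_CACHE;
}

/* Return a private copy of the cached base, or NULL on a miss. */
static void *cache_lookup(void *p, unsigned long off, unsigned long *size)
{
	struct delta_base_cache_entry *e = &delta_base_cache[slot(p, off)];
	void *copy;

	if (!e->data || e->p != p || e->base_offset != off)
		return NULL;
	/* malloc+memcpy beats re-running mmap+inflate on the pack */
	copy = malloc(e->size);
	memcpy(copy, e->data, e->size);
	*size = e->size;
	return copy;
}

/* Store a base, taking ownership of data; evicts any slot collision. */
static void cache_store(void *p, unsigned long off,
			void *data, unsigned long size)
{
	struct delta_base_cache_entry *e = &delta_base_cache[slot(p, off)];

	free(e->data);
	e->p = p;
	e->base_offset = off;
	e->data = data;
	e->size = size;
}
```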
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
    • Make trivial wrapper functions around delta base generation and freeing · 62f255ad
      Authored by Linus Torvalds
      This doesn't change any code, it just creates a point for where we'd
      actually do the caching of delta bases that have been generated.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
  21. 17 Mar 2007 (1 commit)
    • [PATCH] clean up pack index handling a bit · 42873078
      Authored by Nicolas Pitre
      Especially with the new index format to come, it is more appropriate
      to encapsulate more into check_packed_git_idx() and assume less of the
      index format in struct packed_git.
      
      To that effect, index_base is renamed to index_data, with type
      void * so it is not used directly but only through other pointers
      initialized from it.  This allows a couple of pointer casts to be
      removed, and provides a better generic name to grep for when adding
      support for new index versions or formats.

      And index_data is declared const too, while at it.
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
  22. 11 Mar 2007 (1 commit)
    • prepare_packed_git(): sort packs by age and localness. · b867092f
      Authored by Junio C Hamano
      When accessing objects, we first look for them in packs that
      are linked together in the reverse order of discovery.
      
      Since younger packs tend to contain more recent objects, which
      are more likely to be accessed often, and local packs tend to
      contain objects more relevant to our specific projects, sort the
      list of packs before starting to access them.  In addition,
      favoring local packs over the ones borrowed from alternates can
      be a win when alternates are mounted on network file systems.
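      The ordering can be sketched as a qsort() comparator (field names
      follow the commit's description; the real code reorders a linked
      list of packs):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct packed_git {
	long mtime;        /* pack file timestamp: younger = more recent */
	int pack_local;    /* 1 if in our own objects/pack directory */
	const char *name;
};

/* Local packs first; within the same locality, younger packs first. */
static int sort_pack(const void *a_, const void *b_)
{
	const struct packed_git *a = a_, *b = b_;

	if (a->pack_local != b->pack_local)
		return b->pack_local - a->pack_local;
	if (a->mtime != b->mtime)
		return (a->mtime < b->mtime) ? 1 : -1;
	return 0;
}
```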
      Signed-off-by: Junio C Hamano <junkio@cox.net>
  23. 08 Mar 2007 (4 commits)
    • Cast 64 bit off_t to 32 bit size_t · dc49cd76
      Authored by Shawn O. Pearce
      Some systems have sizeof(off_t) == 8 while sizeof(size_t) == 4.
      This implies that we are able to access and work on files whose
      maximum length is around 2^63-1 bytes, but we can only malloc or
      mmap somewhat less than 2^32-1 bytes of memory.
      
      On such a system an implicit conversion of off_t to size_t can cause
      the size_t to wrap, resulting in unexpected and exciting behavior.
      Right now we are working around all gcc warnings generated by the
      -Wshorten-64-to-32 option by passing the off_t through xsize_t().
      
      In the future we should make xsize_t on such problematic platforms
      detect the wrapping and die if such a file is accessed.
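      A sketch of such a checking conversion (returning an error instead of
      die()ing so it can be tested; the real xsize_t() is a tiny inline
      helper):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>      /* off_t */

/*
 * Refuse, rather than silently truncate, an off_t that does not fit
 * in size_t -- the wrap that bites 32-bit size_t platforms.
 */
static int checked_xsize_t(off_t len, size_t *out)
{
	if (len < 0 || (uintmax_t)len > (uintmax_t)SIZE_MAX)
		return -1;      /* the real helper would die() here */
	*out = (size_t)len;
	return 0;
}
```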
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
    • Use off_t when we really mean a file offset. · c4001d92
      Authored by Shawn O. Pearce
      Not all platforms have declared 'unsigned long' to be a 64 bit value,
      but we want to support a 64 bit packfile (or close enough anyway)
      in the near future as some projects are getting large enough that
      their packed size exceeds 4 GiB.
      
      By using off_t, the POSIX type that is declared to mean an offset
      within a file, we support whatever maximum file size the underlying
      operating system will handle.  For most modern systems this is up
      around 2^60 or higher.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
    • Use uint32_t for all packed object counts. · 326bf396
      Authored by Shawn O. Pearce
      As we permit up to 2^32-1 objects in a single packfile, we cannot
      use a signed int to represent the object offset within a packfile;
      past 2^31-1 objects we would start seeing negative indexes and
      error out, or compute bad addresses within the mmap'd index.
      
      This is a minor cleanup that does not introduce any significant
      logic changes.  It is roach free.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
    • General const correctness fixes · 3a55602e
      Authored by Shawn O. Pearce
      We shouldn't attempt to assign constant strings into char*, as the
      string is not writable at runtime.  Likewise we should always be
      treating unsigned values as unsigned values, not as signed values.
      
      Most of these are very straightforward.  The only exception is the
      (unnecessary) xstrdup/free in builtin-branch.c for the detached
      HEAD case.  Since this is a user-level interactive program and
      that particular code path is executed no more than once, I feel
      that the extra xstrdup call is well worth the easy elimination of
      this warning.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>