1. 09 Apr 2008 (1 commit)
    • sha1-lookup: more memory efficient search in sorted list of SHA-1 · 628522ec
      Junio C Hamano authored
      Currently, when looking for a packed object from the pack idx, a
      simple binary search is used.
      
      A conventional binary search loop looks like this:
      
              unsigned lo, hi;
              do {
                      unsigned mi = (lo + hi) / 2;
                      int cmp = compare(entry(mi), target);  /* entry minus target */
                      if (!cmp)
                              return mi;      /* mi is the wanted one */
                      if (cmp > 0)
                              hi = mi;        /* entry at mi is larger than target */
                      else
                              lo = mi + 1;    /* entry at mi is smaller than target */
              } while (lo < hi);
              /* did not find what we wanted */
      
      The invariants are:
      
        - When entering the loop, 'lo' points at a slot that is never
          above the target (it could be at the target), 'hi' points at
          a slot that is guaranteed to be above the target (it can
          never be at the target).
      
        - We find a point 'mi' between 'lo' and 'hi' ('mi' could be
          the same as 'lo', but never can be as high as 'hi'), and
          check if 'mi' hits the target.  There are three cases:
      
           - if it is a hit, we have found what we are looking for;
      
           - if it is strictly higher than the target, we set 'hi' to
             it, and repeat the search.
      
           - if it is strictly lower than the target, we update 'lo'
             to one slot after it, because we allow 'lo' to be at the
             target and 'mi' is known to be below the target.
      
          If the loop exits, there is no matching entry.
      
       When choosing 'mi', we do not have to take the exact middle;
       any point between 'lo' and 'hi' will do, as long as
       lo <= mi < hi is satisfied.  When we somehow know that the
       distance between the target and 'lo' is much shorter than the
       distance between the target and 'hi', we could pick an 'mi'
       much closer to 'lo' than the (hi+lo)/2 a conventional binary
       search would pick.
      
       This patch takes advantage of the fact that SHA-1 is a good
       hash function, and as long as there are enough entries in the
       table, we can expect them to be uniformly distributed.  An
       entry that begins with, for example, "deadbeef..." is likely
       to appear well past the midway point of a reasonably populated
       table.  In fact, it can be expected to sit about 87% (222/256)
       of the way from the top of the table.
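       As an illustration only (not the sha1-lookup code itself), here
       is a minimal sketch of the first-guess idea; it assumes a sorted
       array of 20-byte SHA-1 entries and falls back to plain bisection
       after the initial probe.  The names and layout are assumptions:

           #include <stdint.h>
           #include <string.h>

           /* 'table' holds 'nr' sorted 20-byte SHA-1 entries. */
           static int sha1_guess_lookup(const unsigned char *table,
                                        unsigned nr,
                                        const unsigned char *target)
           {
                   unsigned lo = 0, hi = nr;
                   /* first byte gives the initial guess, e.g. 0xde -> ~87% */
                   unsigned mi = (unsigned)(((uint64_t)target[0] * nr) >> 8);

                   while (lo < hi) {
                           int cmp = memcmp(table + mi * 20, target, 20);
                           if (!cmp)
                                   return mi;
                           if (cmp > 0)
                                   hi = mi;
                           else
                                   lo = mi + 1;
                           mi = lo + (hi - lo) / 2;  /* plain bisection from here on */
                   }
                   return -1;      /* not found */
           }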
      
      This is a work-in-progress and has switches to allow easier
      experiments and debugging.  Exporting GIT_USE_LOOKUP environment
      variable enables this code.
      
       On my admittedly memory-starved machine, with a partial KDE
       repository (3.0G pack with 95M idx):
      
          $ GIT_USE_LOOKUP=t git log -800 --stat HEAD >/dev/null
          3.93user 0.16system 0:04.09elapsed 100%CPU (0avgtext+0avgdata 0maxresident)k
          0inputs+0outputs (0major+55588minor)pagefaults 0swaps
      
      Without the patch, the numbers are:
      
          $ git log -800 --stat HEAD >/dev/null
          4.00user 0.15system 0:04.17elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
          0inputs+0outputs (0major+60258minor)pagefaults 0swaps
      
      In the same repository:
      
          $ GIT_USE_LOOKUP=t git log -2000 HEAD >/dev/null
          0.12user 0.00system 0:00.12elapsed 97%CPU (0avgtext+0avgdata 0maxresident)k
          0inputs+0outputs (0major+4241minor)pagefaults 0swaps
      
      Without the patch, the numbers are:
      
          $ git log -2000 HEAD >/dev/null
          0.05user 0.01system 0:00.07elapsed 100%CPU (0avgtext+0avgdata 0maxresident)k
          0inputs+0outputs (0major+8506minor)pagefaults 0swaps
      
       There isn't much time difference, but the number of minor faults
       seems to show that we are touching a much smaller number of pages,
       which is expected.
       Signed-off-by: Junio C Hamano <gitster@pobox.com>
  2. 01 Mar 2008 (1 commit)
  3. 19 Feb 2008 (1 commit)
  4. 14 Feb 2008 (1 commit)
  5. 07 Feb 2008 (1 commit)
    • safecrlf: Add mechanism to warn about irreversible crlf conversions · 21e5ad50
      Steffen Prohaska authored
      CRLF conversion bears a slight chance of corrupting data.
      autocrlf=true will convert CRLF to LF during commit and LF to
      CRLF during checkout.  A file that contains a mixture of LF and
      CRLF before the commit cannot be recreated by git.  For text
      files this is the right thing to do: it corrects line endings
      such that we have only LF line endings in the repository.
      But for binary files that are accidentally classified as text the
      conversion can corrupt data.
      
      If you recognize such corruption early you can easily fix it by
      setting the conversion type explicitly in .gitattributes.  Right
      after committing you still have the original file in your work
      tree and this file is not yet corrupted.  You can explicitly tell
      git that this file is binary and git will handle the file
      appropriately.
      
      Unfortunately, the desired effect of cleaning up text files with
      mixed line endings and the undesired effect of corrupting binary
      files cannot be distinguished.  In both cases CRLFs are removed
      in an irreversible way.  For text files this is the right thing
      to do because CRLFs are line endings, while for binary files
      converting CRLFs corrupts data.
      
      This patch adds a mechanism that can either warn the user about
      an irreversible conversion or can even refuse to convert.  The
      mechanism is controlled by the variable core.safecrlf, with the
      following values:
      
       - false: disable safecrlf mechanism
       - warn: warn about irreversible conversions
       - true: refuse irreversible conversions
      
      The default is to warn.  Users are only affected by this default
      if core.autocrlf is set.  But the current default of git is to
      leave core.autocrlf unset, so users will not see warnings unless
      they deliberately chose to activate the autocrlf mechanism.
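       As an illustration, here is a minimal, self-contained sketch of
       the kind of check involved (not git's actual implementation; the
       enum and helper names are assumptions): stripping CRLFs is
       irreversible when a file mixes CRLF and lone LF endings, and the
       policy decides whether to warn or to refuse.

           #include <stdio.h>
           #include <stddef.h>

           enum safe_crlf { SAFE_CRLF_FALSE, SAFE_CRLF_WARN, SAFE_CRLF_FAIL };

           static int has_mixed_line_endings(const char *buf, size_t len)
           {
                   size_t i, crlf = 0, lonelf = 0;
                   for (i = 0; i < len; i++) {
                           if (buf[i] != '\n')
                                   continue;
                           if (i && buf[i - 1] == '\r')
                                   crlf++;
                           else
                                   lonelf++;
                   }
                   return crlf && lonelf;
           }

           /* returns -1 when the conversion must be refused */
           static int check_safe_crlf(const char *path, const char *buf,
                                      size_t len, enum safe_crlf policy)
           {
                   if (policy == SAFE_CRLF_FALSE ||
                       !has_mixed_line_endings(buf, len))
                           return 0;
                   fprintf(stderr, "%s: CRLF would be replaced by LF in %s\n",
                           policy == SAFE_CRLF_WARN ? "warning" : "fatal", path);
                   return policy == SAFE_CRLF_FAIL ? -1 : 0;
           }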
      
      The safecrlf mechanism's details depend on the git command.  The
      general principles when safecrlf is active (not false) are:
      
        - we warn/error out if files in the work tree can be modified in
          an irreversible way without giving the user a chance to back up
          the original file.
       
        - for read-only operations that do not modify files in the work tree
          we do not print annoying warnings.
      
      There are exceptions.  Even though...
      
       - "git add" itself does not touch the files in the work tree, the
         next checkout would, so the safety triggers;
      
       - "git apply" to update a text file with a patch does touch the files
         in the work tree, but the operation is about text files and CRLF
         conversion is about fixing the line ending inconsistencies, so the
         safety does not trigger;
      
       - "git diff" itself does not touch the files in the work tree, it is
         often run to inspect the changes you intend to next "git add".  To
         catch potential problems early, safety triggers.
      
      The concept of a safety check was originally proposed in a similar
      way by Linus Torvalds.  Thanks to Dimitry Potapov for insisting
      on getting the naked LF/autocrlf=true case right.
       Signed-off-by: Steffen Prohaska <prohaska@zib.de>
  6. 18 Jan 2008 (1 commit)
    • Fix random fast-import errors when compiled with NO_MMAP · c9ced051
      Shawn O. Pearce authored
      fast-import was relying on the fact that on most systems mmap() and
      write() are synchronized by the filesystem's buffer cache.  We were
      relying on the ability to mmap() 20 bytes beyond the current end
      of the file, then later fill in those bytes with a future write()
      call, then read them through the previously obtained mmap() address.
      
      This isn't always true with some implementations of NFS, but it is
      especially not true with our NO_MMAP=YesPlease build time option used
      on some platforms.  If fast-import was built with NO_MMAP=YesPlease
      we used the malloc()+pread() emulation and the subsequent write()
      call does not update the trailing 20 bytes of a previously obtained
      "mmap()" (aka malloc'd) address.
      
      Under NO_MMAP that behavior causes unpack_entry() in sha1_file.c to
      be unable to read an object header (or data) that has been unlucky
      enough to be written to the packfile at a location such that it
      is in the trailing 20 bytes of a window previously opened on that
      same packfile.
      
      This bug has gone unnoticed for a very long time as it is highly data
      dependent.  Not only does the object have to be placed at the right
      position, but it also needs to be positioned behind some other object
      that has been accessed due to a branch cache invalidation.  In other
      words the stars had to align just right, and if you did run into
      this bug you probably should also have purchased a lottery ticket.
      
      Fortunately the workaround is a lot easier than the bug explanation.
      
      Before we allow unpack_entry() to read data from a pack window
      that has also (possibly) been modified through write() we force
      all existing windows on that packfile to be closed.  By closing
      the windows we ensure that any new access via the emulated mmap()
      will reread the packfile, updating to the current file content.
      
       This comes at a slight performance degradation as we cannot reuse
       previously cached windows when we update the packfile.  But it
       is a fairly minor difference, as the windows are closed at only
       two points:
      
       - When the packfile is finalized and its .idx is generated:
      
         At this stage we are getting ready to update the refs and any
         data access into the packfile is going to be random, and is
         going after only the branch tips (to ensure they are valid).
         Our existing windows (if any) are not likely to be positioned
         at useful locations to access those final tip commits so we
         probably were closing them before anyway.
      
       - When the branch cache missed and we need to reload:
      
         At this point fast-import is getting change commands for the next
         commit and it needs to go re-read a tree object it previously
         had written out to the packfile.  What windows we had (if any)
         are not likely to cover the tree in question so we probably were
         closing them before anyway.
      
      We do try to avoid unnecessarily closing windows in the second case
      by checking to see if the packfile size has increased since the
      last time we called unpack_entry() on that packfile.  If the size
      has not changed then we have not written additional data, and any
         existing window is still valid.  This nicely handles the cases where
      fast-import is going through a branch cache reload and needs to read
      many trees at once.  During such an event we are not likely to be
      updating the packfile so we do not cycle the windows between reads.
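       A minimal sketch of that guard (the names and types here are
       assumptions for illustration, not the exact fast-import code):

           #include <sys/types.h>

           struct pack_state {
                   off_t last_seen_size;   /* pack size when we last read from it */
                   int have_windows;       /* do we hold any mmap'd/pread windows? */
           };

           /* stand-in for closing every window open on this packfile */
           static void drop_all_windows(struct pack_state *ps)
           {
                   ps->have_windows = 0;
           }

           static void prepare_to_read(struct pack_state *ps, off_t current_size)
           {
                   if (current_size == ps->last_seen_size)
                           return;  /* nothing new written; windows still valid */
                   /*
                    * The pack grew via write().  With the NO_MMAP
                    * (malloc+pread) emulation the old windows may hold
                    * stale trailing bytes, so drop them and let the next
                    * access re-read the file.
                    */
                   if (ps->have_windows)
                           drop_all_windows(ps);
                   ps->last_seen_size = current_size;
           }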
      
      With this change in place t9301-fast-export.sh (which was broken
      by c3b0dec5) finally works again.
       Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
       Signed-off-by: Junio C Hamano <gitster@pobox.com>
  7. 04 Jan 2008 (1 commit)
  8. 29 Nov 2007 (1 commit)
  9. 15 Nov 2007 (1 commit)
  10. 30 Oct 2007 (1 commit)
    • sha1_file.c: avoid gcc signed overflow warnings · 7109c889
      Junio C Hamano authored
      With the recent gcc, we get:
      
      sha1_file.c: In check_packed_git_:
      sha1_file.c:527: warning: assuming signed overflow does not
      occur when assuming that (X + c) < X is always false
      sha1_file.c:527: warning: assuming signed overflow does not
      occur when assuming that (X + c) < X is always false
      
       for a piece of code that tries to make sure that off_t is large
       enough to hold an offset beyond 2^32.  The test tried to make
       sure these do not wrap around:
      
          /* make sure we can deal with large pack offsets */
          off_t x = 0x7fffffffUL, y = 0xffffffffUL;
          if (x > (x + 1) || y > (y + 1)) {
      
      but gcc assumes it can do whatever optimization it wants for a
      signed overflow (undefined behaviour) and warns about this
      construct.
      
      Follow Linus's suggestion to check sizeof(off_t) instead to work
      around the problem.
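       A minimal sketch of that idea (an assumed shape, not the exact
       patch):

           #include <sys/types.h>
           #include <stdio.h>

           int main(void)
           {
                   /*
                    * Instead of testing for wrap-around with signed
                    * arithmetic (undefined behaviour the compiler may
                    * optimize away), look at the size of off_t directly.
                    */
                   if (sizeof(off_t) <= 4)
                           fprintf(stderr, "off_t cannot represent pack "
                                   "offsets beyond 2^31-1\n");
                   return 0;
           }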
       Signed-off-by: Junio C Hamano <gitster@pobox.com>
  11. 29 Sep 2007 (1 commit)
    • strbuf change: be sure ->buf is never ever NULL. · b315c5c0
      Pierre Habouzit authored
      For that purpose, the ->buf is always initialized with a char * buf living
      in the strbuf module. It is made a char * so that we can sloppily accept
      things that perform: sb->buf[0] = '\0', and because you can't pass "" as an
      initializer for ->buf without making gcc unhappy for very good reasons.
      
      strbuf_init/_detach/_grow have been fixed to trust ->alloc and not ->buf
      anymore.
      
       As a consequence, strbuf_detach is _mandatory_ to detach a buffer;
       copying ->buf is no longer an option if ->buf is going to escape
       from the scope and eventually be free'd.
      
      API changes:
        * strbuf_setlen now always works, so just make strbuf_reset a convenience
          macro.
         * strbuf_detach takes an optional size_t* argument (meaning it can
           be NULL) that receives the buffer's len; this was needed for the
           refactor to keep the code readable and the callers working as
           before (see the sketch below).
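       A minimal usage sketch of the post-change API (assuming git's
       in-tree strbuf.h; illustrative only):

           #include <stdlib.h>
           #include "strbuf.h"

           static char *render_greeting(size_t *len)
           {
                   struct strbuf sb;

                   strbuf_init(&sb, 0);        /* ->buf is valid from the start */
                   strbuf_addstr(&sb, "hello, ");
                   strbuf_addstr(&sb, "world");
                   /* the only supported way to take ownership of the buffer;
                    * 'len' may be NULL when the length is not needed */
                   return strbuf_detach(&sb, len);   /* caller free()s the result */
           }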
       Signed-off-by: Pierre Habouzit <madcoder@debian.org>
       Signed-off-by: Junio C Hamano <gitster@pobox.com>
  12. 19 Sep 2007 (1 commit)
  13. 18 Sep 2007 (1 commit)
  14. 17 Sep 2007 (2 commits)
  15. 11 Sep 2007 (1 commit)
  16. 25 Aug 2007 (1 commit)
    • Don't segfault if we failed to inflate a packed delta · 9064d87b
      Shawn O. Pearce authored
       Under some types of packfile corruption the zlib stream holding the
       data for a delta within a packfile may fail to inflate, due to, say,
       a CRC failure within the compressed data itself.  When this occurs
      the unpack_compressed_entry function will return NULL as a signal to
      the caller that the data is not available.  Unfortunately we then
      tried to use that NULL as though it referenced a memory location
      where a delta was stored and tried to apply it to the delta base.
      Loading a byte from the NULL address typically causes a SIGSEGV.
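       A minimal sketch of the fix's shape (illustrative names, not the
       actual sha1_file.c code): treat a NULL buffer from the inflate step
       as a hard error instead of passing it on to delta application.

           #include <stdio.h>
           #include <stddef.h>

           /* stand-in for a zlib failure on corrupt pack data */
           static void *inflate_delta_or_null(void)
           {
                   return NULL;
           }

           static void *unpack_delta(void)
           {
                   void *delta = inflate_delta_or_null();

                   if (!delta) {
                           fprintf(stderr,
                                   "error: failed to unpack compressed delta\n");
                           return NULL;    /* previously: dereferenced NULL here */
                   }
                   /* ... apply the delta to its base ... */
                   return delta;
           }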
      
      cate on #git noticed this failure in `git fsck --full` where the
      call to verify_pack() first noticed that the packfile was corrupt
      by finding that the packfile's SHA-1 did not match the raw data of
       the file.  After finding this, fsck went ahead and tried to verify
       every object within the packfile, even though the packfile was
       already known to be bad.  If we are going to shovel bad data at
       the delta unpacking code, we had better handle it correctly.
       Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
       Signed-off-by: Junio C Hamano <gitster@pobox.com>
  17. 15 Aug 2007 (1 commit)
  18. 19 Jul 2007 (1 commit)
    • Rename read_pipe() with read_fd() and make its buffer nul-terminated. · c4fba0a3
      Carlos Rica authored
      The new name is closer to the purpose of the function.
      
      A NUL-terminated buffer makes things easier when callers need that.
       Since the function reports only the size of the data actually read,
       and almost always allocates more space than needed because the final
       size is unknown, an extra NUL terminating the buffer is harmless.
       It is not included in the returned size, so the function
       keeps working as before.
      
       Also, the function now allows the buffer passed in to be NULL at
       first, and alloc_nr is now used for growing the buffer, instead
       of size *= 2.
       Signed-off-by: Carlos Rica <jasampler@gmail.com>
       Signed-off-by: Junio C Hamano <gitster@pobox.com>
  19. 04 Jul 2007 (1 commit)
    • Don't smash stack when $GIT_ALTERNATE_OBJECT_DIRECTORIES is too long · 9cb18f56
      Jim Meyering authored
      There is no restriction on the length of the name returned by
      get_object_directory, other than the fact that it must be a stat'able
      git object directory.  That means its name may have length up to
      PATH_MAX-1 (i.e., often 4095) not counting the trailing NUL.
      
      Combine that with the assumption that the concatenation of that name and
      suffixes like "/info/alternates" and "/pack/---long-name---.idx" will fit
      in a buffer of length PATH_MAX, and you see the problem.  Here's a fix:
      
          sha1_file.c (prepare_packed_git_one): Lengthen "path" buffer
          so we are guaranteed to be able to append "/pack/" without checking.
          Skip any directory entry that is too long to be appended.
          (read_info_alternates): Protect against a similar buffer overrun.
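       A minimal sketch of the bounds check being described (illustrative,
       not the exact prepare_packed_git_one code): reserve room for the
       "/pack/" suffix up front and skip directory entries whose names
       would not fit.

           #include <limits.h>
           #include <string.h>
           #include <stdio.h>

           #ifndef PATH_MAX
           #define PATH_MAX 4096
           #endif

           static void add_pack_entry(const char *objdir, const char *ent)
           {
                   char path[PATH_MAX + 64];       /* head-room beyond PATH_MAX */
                   size_t dirlen = strlen(objdir);
                   size_t entlen = strlen(ent);

                   /* 6 bytes for "/pack/", 1 for the trailing NUL */
                   if (dirlen + 6 + entlen + 1 > sizeof(path)) {
                           fprintf(stderr, "skipping over-long entry %s\n", ent);
                           return;
                   }
                   memcpy(path, objdir, dirlen);
                   memcpy(path + dirlen, "/pack/", 6);
                   memcpy(path + dirlen + 6, ent, entlen + 1);
                   /* ... open/stat 'path' ... */
           }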
      
      Before this change, using the following admittedly contrived environment
      setting would cause many git commands to clobber their stack and segfault
      on a system with PATH_MAX == 4096:
      
        t=$(perl -e '$s=".git/objects";$n=(4096-6-length($s))/2;print "./"x$n . $s')
        export GIT_ALTERNATE_OBJECT_DIRECTORIES=$t
        touch g
        ./git-update-index --add g
      
      If you run the above commands, you'll soon notice that many
      git commands now segfault, so you'll want to do this:
      
        unset GIT_ALTERNATE_OBJECT_DIRECTORIES
       Signed-off-by: Jim Meyering <jim@meyering.net>
       Signed-off-by: Junio C Hamano <gitster@pobox.com>
  20. 27 Jun 2007 (1 commit)
  21. 13 Jun 2007 (2 commits)
  22. 07 Jun 2007 (1 commit)
    • War on whitespace · a6080a0a
      Junio C Hamano authored
      This uses "git-apply --whitespace=strip" to fix whitespace errors that have
      crept in to our source files over time.  There are a few files that need
      to have trailing whitespaces (most notably, test vectors).  The results
      still passes the test, and build result in Documentation/ area is unchanged.
       Signed-off-by: Junio C Hamano <gitster@pobox.com>
  23. 31 May 2007 (2 commits)
    • always start looking up objects in the last used pack first · f7c22cc6
      Nicolas Pitre authored
      Jon Smirl said:
      
      | Once an object reference hits a pack file it is very likely that
      | following references will hit the same pack file. So first place to
      | look for an object is the same place the previous object was found.
      
       This is indeed a good heuristic, so here it is.  The search always
       starts with the pack where the last object lookup succeeded.  If
       the wanted object is not available there, then the search continues
       with the normal pack ordering.
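       A minimal sketch of the heuristic (assumed names and types, not the
       actual git lookup code): remember which pack satisfied the previous
       lookup and probe it first, falling back to the normal ordering.

           #include <stddef.h>

           struct pack {
                   struct pack *next;
                   /* ... idx data ... */
           };

           static struct pack *pack_list;          /* normal search order */
           static struct pack *last_found_pack;    /* previous successful lookup */

           /* stand-in for "does this pack's index contain this SHA-1?" */
           static int pack_contains(struct pack *p, const unsigned char *sha1)
           {
                   (void)p; (void)sha1;
                   return 0;
           }

           static struct pack *find_pack_for(const unsigned char *sha1)
           {
                   struct pack *p;

                   if (last_found_pack && pack_contains(last_found_pack, sha1))
                           return last_found_pack;
                   for (p = pack_list; p; p = p->next) {
                           if (p == last_found_pack)
                                   continue;       /* already probed above */
                           if (pack_contains(p, sha1)) {
                                   last_found_pack = p;
                                   return p;
                           }
                   }
                   return NULL;
           }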
      
      To test this I split the Linux repository into 66 packs and performed a
      "time git-rev-list --objects --all > /dev/null".  Best results are as
      follows:
      
      	Pack Sort			w/o this patch	w/ this patch
      	-------------------------------------------------------------
      	recent objects last		26.4s		20.9s
      	recent objects first		24.9s		18.4s
      
       This shows that the pack order based on object age has some
       influence, but that the last-used-pack heuristic is even more
       significant in reducing object lookup cost.
      
      Signed-off-by: Nicolas Pitre <nico@cam.org> --- Note: the
      --max-pack-size to git-repack currently produces packs with old objects
      after those containing recent objects.  The pack sort based on
      filesystem timestamp is therefore backward for those.  This needs to be
      fixed of course, but at least it made me think about this variable for
      the test.
       Signed-off-by: Junio C Hamano <junkio@cox.net>
    • fix signed range problems with hex conversions · 192a6be2
      Linus Torvalds authored
      Make hexval_table[] "const".  Also make sure that the accessor
      function hexval() does not access the table with out-of-range
      values by declaring its parameter "unsigned char", instead of
      "unsigned int".
      
      With this, gcc can just generate:
      
      	movzbl  (%rdi), %eax
      	movsbl  hexval_table(%rax),%edx
      	movzbl  1(%rdi), %eax
      	movsbl  hexval_table(%rax),%eax
      	sall    $4, %edx
      	orl     %eax, %edx
      
      for the code to generate a byte from two hex characters.
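       As an illustration, a minimal sketch of the const table and the
       unsigned-char accessor (the table contents are abbreviated here;
       the real table covers all 256 bytes and flags invalid ones):

           #include <stdio.h>

           static const signed char hexval_table[256] = {
                   ['0'] = 0,  ['1'] = 1,  ['2'] = 2,  ['3'] = 3,  ['4'] = 4,
                   ['5'] = 5,  ['6'] = 6,  ['7'] = 7,  ['8'] = 8,  ['9'] = 9,
                   ['a'] = 10, ['b'] = 11, ['c'] = 12,
                   ['d'] = 13, ['e'] = 14, ['f'] = 15,
           };

           /* unsigned char parameter: the index can never be out of range */
           static unsigned int hexval(unsigned char c)
           {
                   return hexval_table[c];
           }

           int main(void)
           {
                   const char *hex = "a7";
                   unsigned int byte = (hexval(hex[0]) << 4) | hexval(hex[1]);

                   printf("0x%02x\n", byte);       /* prints 0xa7 */
                   return 0;
           }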
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
       Signed-off-by: Junio C Hamano <junkio@cox.net>
  24. 30 May 2007 (1 commit)
  25. 27 May 2007 (2 commits)
    • Micro-optimize prepare_alt_odb · 7dc24aa5
      Shawn O. Pearce authored
       Calling getenv() is not that expensive, but it's also not free,
       and it's certainly not cheaper than testing whether alt_odb_tail
       is non-NULL.
      
       Because we call prepare_alt_odb() from within find_sha1_file
       every time we cannot find an object file locally, we want to skip
       out of prepare_alt_odb() as early as possible once we have
       initialized our alternate list.
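       A minimal sketch of that early-out (assumed names and types,
       illustrative only): the cheap "already initialized?" test comes
       before the getenv() call.

           #include <stdlib.h>

           struct alt_odb { struct alt_odb *next; };
           static struct alt_odb *alt_odb_tail;

           static void prepare_alt_odb(void)
           {
                   const char *alt;

                   if (alt_odb_tail)
                           return;         /* already initialized; skip getenv() */
                   alt = getenv("GIT_ALTERNATE_OBJECT_DIRECTORIES");
                   if (!alt)
                           alt = "";
                   /* ... parse 'alt' and read objects/info/alternates ... */
                   (void)alt;
           }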
       Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
       Signed-off-by: Junio C Hamano <junkio@cox.net>
    • Lazily open pack index files on demand · d079837e
      Shawn O. Pearce authored
      In some repository configurations the user may have many packfiles,
      but all of the recent commits/trees/tags/blobs are likely to
      be in the most recent packfile (the one with the newest mtime).
      It is therefore common to be able to complete an entire operation
      by accessing only one packfile, even if there are 25 packfiles
      available to the repository.
      
      Rather than opening and mmaping the corresponding .idx file for
      every pack found, we now only open and map the .idx when we suspect
      there might be an object of interest in there.
      
       Of course we cannot know in advance which packfile contains an
       object, so we still need to scan the entire packed_git list to
       locate anything.  But odds are users want to access objects in the
       most recently created packfiles first, and that may be all they
       ever need for the current operation.
      
       Junio observed in b867092f that placing recent packfiles before
       older ones can slightly improve access times for recent objects,
       without degrading access to historical objects.
      
      This change improves upon Junio's observations by trying even harder
      to avoid the .idx files that we won't need.
       Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
       Signed-off-by: Junio C Hamano <junkio@cox.net>
  26. 16 May 2007 (1 commit)
  27. 11 May 2007 (1 commit)
    • deprecate the new loose object header format · 726f852b
      Nicolas Pitre authored
       Now that we encourage and actively preserve objects in a packed form
       more aggressively than we did at the time the new loose object
       format and core.legacyheaders were introduced, that extra loose
       object format doesn't appear to be worth it anymore.
      
      Because the packing of loose objects has to go through the delta match
      loop anyway, and since most of them should end up being deltified in
      most cases, there is really little advantage to have this parallel loose
      object format as the CPU savings it might provide is rather lost in the
      noise in the end.
      
       This patch gets rid of core.legacyheaders, preserves the legacy
       format as the only writable loose object format, and deprecates the
       other one to keep things simpler.
       Signed-off-by: Nicolas Pitre <nico@cam.org>
       Signed-off-by: Junio C Hamano <junkio@cox.net>
  28. 26 Apr 2007 (1 commit)
    • Actually handle some-low memory conditions · d1efefa4
      Shawn O. Pearce authored
      Tim Ansell discovered his Debian server didn't permit git-daemon to
      use as much memory as it needed to handle cloning a project with
      a 128 MiB packfile.  Filtering the strace provided by Tim of the
      rev-list child showed this gem of a sequence:
      
        open("./objects/pack/pack-*.pack", O_RDONLY|O_LARGEFILE <unfinished ...>
        <... open resumed> )              = 5
      
      OK, so the packfile is fd 5...
      
        mmap2(NULL, 33554432, PROT_READ, MAP_PRIVATE, 5, 0 <unfinished ...>
         <... mmap2 resumed> )             = 0xb5e2d000
      
      and we mapped one 32 MiB window from it at position 0...
      
         mmap2(NULL, 31020635, PROT_READ, MAP_PRIVATE, 5, 0x6000 <unfinished ...>
         <... mmap2 resumed> )             = -1 ENOMEM (Cannot allocate memory)
      
      And we asked for another window further into the file.  But got
      denied.  In Tim's case this was due to a resource limit on the
      git-daemon process, and its children.
      
      Now where are we in the code?  We're down inside use_pack(),
      after we have called unuse_one_window() enough times to make sure
      we stay within our allowed maximum window size.  However since we
      didn't unmap the prior window at 0xb5e2d000 we aren't exceeding
      the current limit (which probably was just the defaults).
      
      But we're actually down inside xmmap()...
      
      So we release the window we do have (by calling release_pack_memory),
      assuming there is some memory pressure...
      
         munmap(0xb5e2d000, 33554432 <unfinished ...>
         <... munmap resumed> )            = 0
         close(5 <unfinished ...>
         <... close resumed> )             = 0
      
       And that was the last window in this packfile.  So we closed it.
       Way to go us.  Our xmmap did not expect release_pack_memory to
       close the fd it's about to map...
      
         mmap2(NULL, 31020635, PROT_READ, MAP_PRIVATE, 5, 0x6000 <unfinished ...>
         <... mmap2 resumed> )             = -1 EBADF (Bad file descriptor)
      
      And so the Linux kernel happily tells us f' off.
      
         write(2, "fatal: ", 7 <unfinished ...>
         <... write resumed> )             = 7
         write(2, "Out of memory? mmap failed: Bad "..., 47 <unfinished ...>
         <... write resumed> )             = 47
      
      And we report the bad file descriptor error, and not the ENOMEM,
      and die, claiming we are out of memory.  But actually that mmap
      should have succeeded, as we had enough memory for that window,
      seeing as how we released the prior one.
      
      Originally when I developed the sliding window mmap feature I had
      this exact same bug in fast-import, and I dealt with it by handing
      in the struct packed_git* we want to open the new window for, as the
      caller wasn't prepared to reopen the packfile if unuse_one_window
      closed it.  The same is true here from xmmap, but the caller doesn't
      have the struct packed_git* handy.  So I'm using the file descriptor
      instead to perform the same test.
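       A minimal sketch of the fix's shape (assumed names and signatures,
       not the exact patch): xmmap() tells the window-freeing code which
       file descriptor it is about to map, so that descriptor is never
       closed while releasing memory.

           #include <sys/types.h>
           #include <sys/mman.h>
           #include <string.h>
           #include <errno.h>

           /* assumed prototypes from elsewhere in the tree */
           extern void release_pack_memory(size_t need, int fd_to_keep);
           extern void die(const char *fmt, ...);

           static void *xmmap(void *start, size_t length, int prot, int flags,
                              int fd, off_t offset)
           {
                   void *ret = mmap(start, length, prot, flags, fd, offset);

                   if (ret == MAP_FAILED) {
                           if (!length)
                                   return NULL;
                           /* free pack windows, but never close 'fd' itself */
                           release_pack_memory(length, fd);
                           ret = mmap(start, length, prot, flags, fd, offset);
                           if (ret == MAP_FAILED)
                                   die("Out of memory? mmap failed: %s",
                                       strerror(errno));
                   }
                   return ret;
           }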
       Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
       Signed-off-by: Junio C Hamano <junkio@cox.net>
  29. 21 Apr 2007 (1 commit)
  30. 17 Apr 2007 (1 commit)
  31. 11 Apr 2007 (4 commits)
  32. 06 Apr 2007 (1 commit)
  33. 28 Mar 2007 (1 commit)