1. 25 Mar 2009, 1 commit
    • avoid possible overflow in delta size filtering computation · 720fe22d
      Nicolas Pitre authored
      On a 32-bit system, the maximum possible size for an object is less than
      4GB, while 64-bit systems may cope with larger objects.  Due to this
      limitation, variables holding object sizes are using an unsigned long
      type (32 bits on 32-bit systems, or 64 bits on 64-bit systems).
      
      When large objects are encountered, and/or people play with large delta
      depth values, it is possible for the maximum allowed delta size
      computation to overflow, especially on a 32-bit system.  When this
      occurs, surviving result bits may represent a value much smaller than
      what it is supposed to be, or even zero.  This prevents some objects
      from being deltified although they do get deltified when a smaller depth
      limit is used.  Fix this by always performing a 64-bit multiplication.
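      
      A minimal sketch of the overflow and of the 64-bit fix (variable names
      such as trg_size and src_depth only approximate the ones in
      builtin/pack-objects.c; treat this as an illustration, not the exact
      patch):
      
        #include <stdint.h>
        
        static unsigned long allowed_delta_size(unsigned long trg_size,
                                                unsigned int max_depth,
                                                unsigned int src_depth)
        {
                unsigned long max_size = trg_size / 2 - 20;
        
                /* Overflow-prone on a 32-bit system: the product is computed
                 * in unsigned long and can wrap before the division, leaving
                 * a tiny or even zero limit:
                 *
                 *     max_size = max_size * (max_depth - src_depth) / max_depth;
                 *
                 * Forcing a 64-bit intermediate keeps the multiplication from
                 * wrapping: */
                max_size = (uint64_t)max_size * (max_depth - src_depth) / max_depth;
                return max_size;
        }
      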
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  2. 11 Dec 2008, 1 commit
  3. 13 Nov 2008, 3 commits
  4. 02 Nov 2008, 1 commit
    • pack-objects: avoid reading uninitialized data · 421b488a
      Jeff King authored
      In the main loop of find_deltas, we do:
      
        struct object_entry *entry = *list++;
        ...
        if (!*list_size)
      	  ...
      	  break
      
      Because we look at and increment *list _before_ the check of
      list_size, in the very last iteration of the loop we will
      look at uninitialized data, and increment the pointer beyond
      one past the end of the allocated space. Since we don't
      actually do anything with the data until after the check,
      this is not a problem in practice.
      
      But since it technically violates the C standard, and
      because it provokes a spurious valgrind warning, let's just
      move the initialization of entry to a safe place.
      
      This fixes valgrind errors in t5300, t5301, t5302, t5303, and
      t9400.
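      
      A condensed, hypothetical sketch of the reordering; drain_list() is a
      stand-in name for the relevant part of find_deltas(), and struct
      object_entry is left opaque here:
      
        struct object_entry;    /* defined elsewhere in pack-objects.c */
        
        static void drain_list(struct object_entry **list, unsigned *list_size)
        {
                for (;;) {
                        struct object_entry *entry;
        
                        /* Check the remaining count *before* touching the
                         * list, so we never read one past its end the way
                         * "entry = *list++" at the top of the loop did. */
                        if (!*list_size)
                                break;
                        entry = *list++;
                        (*list_size)--;
        
                        (void)entry;    /* ... deltify entry here ... */
                }
        }
      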
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  5. 23 Sep 2008, 1 commit
  6. 16 Sep 2008, 1 commit
  7. 30 Aug 2008, 3 commits
  8. 28 Aug 2008, 1 commit
  9. 13 Aug 2008, 1 commit
  10. 06 Jul 2008, 1 commit
  11. 25 Jun 2008, 1 commit
  12. 24 Jun 2008, 1 commit
  13. 01 Jun 2008, 2 commits
  14. 15 May 2008, 1 commit
  15. 14 May 2008, 1 commit
    • let pack-objects do the writing of unreachable objects as loose objects · ca11b212
      Nicolas Pitre authored
      Commit ccc12972 changed the behavior
      of 'git repack -A' so unreachable objects are stored as loose objects.
      However it did so in a naive and inefficient way by making packs
      about to be deleted inaccessible and feeding their content through
      'git unpack-objects'.  While this works, there are major flaws with
      this approach:
      
      - It is unacceptably sloooooooooooooow.
      
        In the Linux kernel repository with no actual unreachable objects,
        doing 'git repack -A -d' before:
      
      	real    2m33.220s
      	user    2m21.675s
      	sys     0m3.510s
      
        And with this change:
      
      	real    0m36.849s
      	user    0m24.365s
      	sys     0m1.950s
      
        For reference, here's the timing for 'git repack -a -d':
      
      	real    0m35.816s
      	user    0m22.571s
      	sys     0m2.011s
      
        This is explained by the fact that 'git unpack-objects' was used to
        unpack _every_ object even if (almost) 100% of them were thrown away.
      
      - There is a blackout period.
      
        Between the removal of the .idx file for the redundant pack and the
        completion of its unpacking, the unreachable objects become completely
        inaccessible.  This is not a big issue as we're talking about unreachable
        objects, but some consistency is always good.
      
      - There is no way to easily set a sensible mtime for the newly created
        unreachable loose objects.
      
      So, while having a command called "pack-objects" to perform object
      unpacking looks really odd, this is probably the best compromise to be
      able to solve the above issues in an efficient way.
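      
      The core idea, sketched with made-up helper names (in_new_pack(),
      loose_object_exists() and write_loose_object() are illustrative only
      and not git's actual API): while the old packs are still readable,
      pack-objects itself writes out loose copies of exactly those objects
      that did not make it into the new pack, instead of piping whole packs
      through 'git unpack-objects'.
      
        #include <stddef.h>
        
        struct oid { unsigned char hash[20]; };
        struct old_pack { size_t nr_objects; struct oid *objects; };
        
        int in_new_pack(const struct oid *oid);          /* assumed helper */
        int loose_object_exists(const struct oid *oid);  /* assumed helper */
        void write_loose_object(const struct oid *oid);  /* assumed helper */
        
        static void write_unreachable_loose(const struct old_pack *p)
        {
                for (size_t i = 0; i < p->nr_objects; i++) {
                        const struct oid *oid = &p->objects[i];
        
                        /* Objects kept in the new pack, or already present
                         * loose, need no work; only the genuinely unreachable
                         * leftovers are exploded into loose objects. */
                        if (in_new_pack(oid) || loose_object_exists(oid))
                                continue;
                        write_loose_object(oid);
                }
        }
      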
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  16. 04 May 2008, 7 commits
  17. 14 Mar 2008, 1 commit
    • pack-objects: proper pack time stamping with --max-pack-size · f746bae8
      Nicolas Pitre authored
      Runtime pack access is done in the pack file mtime order since recent
      packs are more likely to contain frequently used objects than old packs.
      However the --max-pack-size option can produce multiple packs with mtime
      in the reversed order as newer objects are always written first.
      
      Let's modify the mtime of later pack files (if any) so they appear older
      than preceding ones when a repack creates multiple packs.
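      
      A sketch of the stamping, assuming the POSIX stat()/utime() interface;
      stamp_pack_older() and its bookkeeping are illustrative names, not the
      actual pack-objects code:
      
        #include <sys/stat.h>
        #include <time.h>
        #include <utime.h>
        
        static time_t last_mtime;
        
        static void stamp_pack_older(const char *pack_path, int is_first_pack)
        {
                struct stat st;
                struct utimbuf utb;
        
                if (stat(pack_path, &st))
                        return;
                if (is_first_pack) {
                        /* The first pack keeps its natural (newest) mtime. */
                        last_mtime = st.st_mtime;
                        return;
                }
                /* Stamp each later pack one second older than the previous
                 * one, so packs written later sort as progressively older. */
                utb.actime = st.st_atime;
                utb.modtime = --last_mtime;
                utime(pack_path, &utb);    /* best effort in this sketch */
        }
      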
      Signed-off-by: Nicolas Pitre <nico@cam.org>
  18. 05 Mar 2008, 1 commit
    • git-pack-objects: Automatically pack annotated tags if object was packed · f0a24aa5
      Shawn O. Pearce authored
      The new option "--include-tag" allows the caller to request that
      any annotated tag be included into the packfile if the object the tag
      references was also included as part of the packfile.
      
      This option can be useful on the server side of a native git transport,
      where the server knows what commits it is including into a packfile to
      update the client.  If new annotated tags have been introduced then we
      can also include them in the packfile, saving the client from needing
      to request them through a second connection.
      
      This change only introduces the backend option and provides a test.
      Protocol extensions to make this useful in fetch-pack/upload-pack
      are still necessary to activate the logic during transport.
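      
      A sketch of the selection rule only; object_is_in_pack() and
      add_object_to_pack() are made-up names standing in for git's
      internals:
      
        struct oid { unsigned char hash[20]; };
        
        int object_is_in_pack(const struct oid *oid);    /* assumed helper */
        void add_object_to_pack(const struct oid *oid);  /* assumed helper */
        
        /* Called for every ref under refs/tags/: pull the annotated tag
         * object itself into the pack when the object it peels to was
         * already selected for packing. */
        static void maybe_include_tag(const struct oid *tag_oid,
                                      const struct oid *peeled_oid)
        {
                if (object_is_in_pack(peeled_oid) &&
                    !object_is_in_pack(tag_oid))
                        add_object_to_pack(tag_oid);
        }
      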
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  19. 01 Mar 2008, 1 commit
  20. 27 Feb 2008, 1 commit
  21. 26 Feb 2008, 1 commit
  22. 24 Feb 2008, 2 commits
  23. 23 Feb 2008, 1 commit
    • Avoid unnecessary "if-before-free" tests. · 8e0f7003
      Jim Meyering authored
      This change removes all obvious useless if-before-free tests.
      E.g., it replaces code like this:
      
              if (some_expression)
                      free (some_expression);
      
      with the now-equivalent:
      
              free (some_expression);
      
      It is equivalent not just because POSIX has required free(NULL)
      to work for a long time, but simply because it has worked for
      so long that no reasonable porting target fails the test.
      Here's some evidence from nearly 1.5 years ago:
      
          http://www.winehq.org/pipermail/wine-patches/2006-October/031544.html
      
      FYI, the change below was prepared by running the following:
      
        git ls-files -z | xargs -0 \
        perl -0x3b -pi -e \
          's/\bif\s*\(\s*(\S+?)(?:\s*!=\s*NULL)?\s*\)\s+(free\s*\(\s*\1\s*\))/$2/s'
      
      Note however, that it doesn't handle brace-enclosed blocks like
      "if (x) { free (x); }".  But that's ok, since there were none like
      that in git sources.
      
      Beware: if you do use the above snippet, note that it can
      produce syntactically invalid C code.  That happens when the
      affected "if"-statement has a matching "else".
      E.g., it would transform this
      
        if (x)
          free (x);
        else
          foo ();
      
      into this:
      
        free (x);
        else
          foo ();
      
      There were none of those here, either.
      
      If you're interested in automating detection of the useless
      tests, you might like the useless-if-before-free script in gnulib:
      [it *does* detect brace-enclosed free statements, and has a --name=S
       option to make it detect free-like functions with different names]
      
        http://git.sv.gnu.org/gitweb/?p=gnulib.git;a=blob;f=build-aux/useless-if-before-free
      
      Addendum:
        Remove one more (in imap-send.c), spotted by Jean-Luc Herren <jlh@gmx.ch>.
      Signed-off-by: Jim Meyering <meyering@redhat.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
  24. 18 Feb 2008, 1 commit
  25. 13 Feb 2008, 1 commit
    • Revert "pack-objects: only throw away data during memory pressure" · 75ad235c
      Junio C Hamano authored
      This reverts commit 9c217435.
      
      Nico analyzed and found out that this does not really help, and
      I agree with it.
      
      By the time this gets into action and data is actively thrown
      away, performance simply goes down the drain due to the data
      constantly being reloaded over and over and over and over and
      over and over again, to the point of virtually making no
      relative progress at all.  The previous behavior of enforcing
      the memory limit by dynamically shrinking the window size at
      least had the effect of allowing some kind of progress, even if
      the end result wouldn't be optimal.
      
      And that's the whole point behind this memory limiting feature:
      allowing some progress to be made when resources are too limited
      to let the repack go unbounded.
  26. 12 Feb 2008, 1 commit
  27. 10 Feb 2008, 1 commit
  28. 22 Jan 2008, 1 commit