1. 30 December 2006 (14 commits)
    • Replace mmap with xmmap, better handling MAP_FAILED. · c4712e45
      Committed by Shawn O. Pearce
      In some cases we did not even bother to check the return value of
      mmap() and just assumed it worked.  This is bad, because if we are
      out of virtual address space the kernel returns MAP_FAILED and we
      would attempt to dereference that address, segfaulting without any
      real error output to the user.
      
      We are replacing all calls to mmap() with xmmap() and moving all
      MAP_FAILED checking into that single location.  If a mmap call
      fails we try to release enough least-recently-used pack windows
      to possibly succeed, then retry the mmap() attempt.  If we cannot
      mmap even after releasing pack memory then we die() as none of our
      callers have any reasonable recovery strategy for a failed mmap.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
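
      A minimal sketch of the wrapper pattern described above, with die() and
      release_pack_memory() assumed as helpers (an illustration, not the exact
      implementation):

        #include <sys/mman.h>
        #include <errno.h>
        #include <string.h>

        extern void die(const char *fmt, ...);     /* error helper (assumed) */
        extern void release_pack_memory(size_t);   /* unmaps LRU pack windows (assumed) */

        void *xmmap(void *start, size_t length, int prot, int flags,
                    int fd, off_t offset)
        {
                void *ret = mmap(start, length, prot, flags, fd, offset);
                if (ret == MAP_FAILED) {
                        /* Free address space by dropping LRU pack windows,
                         * then retry once before giving up. */
                        release_pack_memory(length);
                        ret = mmap(start, length, prot, flags, fd, offset);
                        if (ret == MAP_FAILED)
                                die("Out of memory? mmap failed: %s",
                                    strerror(errno));
                }
                return ret;
        }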
    • Release pack windows before reporting out of memory. · 97bfeb34
      Committed by Shawn O. Pearce
      If we are about to fail because this process has run out of memory we
      should first try to automatically control our appetite for address
      space by releasing enough least-recently-used pack windows to gain
      back enough memory such that we might actually be able to meet the
      current allocation request.
      
      This should help users who have fairly large repositories but are
      working on systems with relatively small virtual address space.
      Many times we see reports on the mailing list of these users running
      out of memory during various Git operations.  Dynamically decreasing
      the amount of pack memory used when the demand for heap memory is
      increasing is an intelligent solution to this problem.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
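
      A sketch of how an allocation wrapper might retry after releasing pack
      windows, as described above (helper names assumed for illustration):

        #include <stdlib.h>

        extern void die(const char *fmt, ...);     /* error helper (assumed) */
        extern void release_pack_memory(size_t);   /* unmaps LRU pack windows (assumed) */

        void *xmalloc(size_t size)
        {
                void *ret = malloc(size);
                if (!ret) {
                        /* Give back pack window mappings, then try once more
                         * before declaring the process out of memory. */
                        release_pack_memory(size);
                        ret = malloc(size);
                        if (!ret)
                                die("Out of memory, malloc failed");
                }
                return ret;
        }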
    • Create pack_report() as a debugging aid. · a53128b6
      Committed by Shawn O. Pearce
      Much like alloc_report() can be useful for reporting object
      allocation statistics while debugging, the new pack_report()
      function can be useful for reporting on the behavior of the mmap
      window code used for packfile access.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
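
      A sketch of the kind of statistics such a report might print; the
      counter names below are illustrative, not necessarily the ones the
      real function uses:

        #include <stdio.h>
        #include <unistd.h>

        static unsigned int pack_used_ctr;      /* window reuses without a new mmap */
        static unsigned int pack_mmap_calls;    /* mmap() calls made for pack data */
        static unsigned int pack_open_windows;  /* windows currently mapped */
        static size_t pack_mapped;              /* bytes currently mapped */

        void pack_report(void)
        {
                fprintf(stderr,
                        "pack_report: getpagesize()     = %10lu\n"
                        "pack_report: pack_used_ctr     = %10u\n"
                        "pack_report: pack_mmap_calls   = %10u\n"
                        "pack_report: pack_open_windows = %10u\n"
                        "pack_report: pack_mapped       = %10lu\n",
                        (unsigned long)getpagesize(),
                        pack_used_ctr, pack_mmap_calls, pack_open_windows,
                        (unsigned long)pack_mapped);
        }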
    • Support unmapping windows on 'temporary' packfiles. · 11daf39b
      Committed by Shawn O. Pearce
      If a command opens a packfile for only temporary access and does not
      install the struct packed_git* into the global packed_git list then
      we are unable to unmap any inactive windows within that packed_git,
      causing the overall process to exceed core.packedGitLimit.
      
      We cannot force the callers to install their temporary packfile
      into the packed_git chain as doing so would allow that (possibly
      corrupt but currently being verified) temporary packfile to become
      part of the local ODB, which may allow it to be considered for
      object resolution when it may not actually be a valid packfile.
      
      So to support unmapping the windows of these temporary packfiles we
      also scan the windows of the struct packed_git which was supplied
      to use_pack().  Since commands only work with one temporary packfile
      at a time, scanning the one supplied to use_pack() and all packs
      installed into packed_git should cover everything available in
      memory.
      
      We also have to be careful to not close the file descriptor of
      the packed_git which was handed to use_pack() when all of that
      packfile's windows have been unmapped, as we are already past the
      open call that would open the packfile and need the file descriptor
      to be ready for mmap() after unuse_one_window returns.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
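
      In rough terms the selection pass might include the supplied pack like
      this (a sketch assuming git's struct packed_git / struct pack_window;
      scan_windows() is an illustrative helper):

        /* Track the least-recently-used, unused window seen so far. */
        static void scan_windows(struct packed_git *p,
                                 struct packed_git **lru_p,
                                 struct pack_window **lru_w)
        {
                struct pack_window *w;
                for (w = p->windows; w; w = w->next) {
                        if (w->inuse_cnt)
                                continue;
                        if (!*lru_w || w->last_used < (*lru_w)->last_used) {
                                *lru_p = p;
                                *lru_w = w;
                        }
                }
        }

        /* In unuse_one_window(current):
         *   for (p = packed_git; p; p = p->next)
         *           scan_windows(p, &lru_p, &lru_w);
         *   if (current)
         *           scan_windows(current, &lru_p, &lru_w);
         * and when lru_p == current, unmap the chosen window but leave
         * current->pack_fd open, since use_pack() is about to mmap() from
         * that file descriptor again. */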
    • Improve error message when packfile mmap fails. · 73b4e4be
      Committed by Shawn O. Pearce
      If we are unable to mmap a region of the packfile with the mmap()
      system call there may be a good reason why, such as a closed file
      descriptor or being out of address space.  Reporting the system-level
      error message can help to debug such problems.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
    • Fully activate the sliding window pack access. · 60bb8b14
      Committed by Shawn O. Pearce
      This finally turns on the sliding window behavior for packfile data
      access by mapping limited size windows and chaining them under the
      packed_git->windows list.
      
      We consider a given byte offset to be within the window only if there
      would be at least 20 bytes (one hash worth of data) accessible after
      the requested offset.  This range selection relates to the contract
      that use_pack() makes with its callers, allowing them to access
      one hash or one object header without needing to call use_pack()
      for every byte of data obtained.
      
      In the worst case scenario we will map the same page of data twice
      into memory: once at the end of one window and once again at the
      start of the next window.  This duplicate page mapping will happen
      only when an object header or a delta base reference spans the end
      of a window, and is always limited to just one page of
      duplication, as no sane operating system will ever have a page size
      smaller than a hash.
      
      I am assuming that the possible wasted page of virtual address
      space is going to perform faster than the alternatives, which
      would be to copy the object header or ref delta into a temporary
      buffer prior to parsing, or to check the window range on every byte
      during header parsing.  We may decide to revisit this decision in
      the future since this is just a gut instinct decision and has not
      actually been proven out by experimental testing.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
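
      The range test described above boils down to something like this sketch
      (field names assumed from the surrounding descriptions; 20 stands in for
      the hash size):

        /* A byte offset is served from a window only if at least one full
         * hash (20 bytes) is available past it, per the use_pack() contract. */
        static int in_window(struct pack_window *win, off_t offset)
        {
                off_t win_off = win->offset;    /* pack offset where the window starts */
                return win_off <= offset
                        && offset + 20 <= win_off + (off_t)win->len;
        }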
    • Unmap individual windows rather than entire files. · 54044bf8
      Committed by Shawn O. Pearce
      To support multiple windows per packfile we need to unmap only one
      window at a time from that packfile, leaving any other windows in
      place and available for reference.
      
      We treat all windows from all packfiles equally; the least recently
      used, not-in-use window across all packfiles will always be closed
      first.
      
      If we have unmapped all windows in a packfile then we can also close
      the packfile's file descriptor, as it's possible we won't need to map
      any window from that file in the near future.  This decision about
      when to close the pack file descriptor may need to be revisited in
      the future after additional testing on several different platforms
      can be performed.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
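
      Sketched out, the eviction pass might look like this (assumes git's
      struct packed_git / struct pack_window with the fields named below;
      bookkeeping simplified):

        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/mman.h>

        extern struct packed_git *packed_git;   /* global list of opened packs */

        static int unuse_one_window(void)
        {
                struct packed_git *p, *lru_p = NULL;
                struct pack_window *w, *lru_w = NULL, **pw;

                /* Find the least-recently-used window no cursor is holding. */
                for (p = packed_git; p; p = p->next) {
                        for (w = p->windows; w; w = w->next) {
                                if (w->inuse_cnt)
                                        continue;
                                if (!lru_w || w->last_used < lru_w->last_used) {
                                        lru_p = p;
                                        lru_w = w;
                                }
                        }
                }
                if (!lru_w)
                        return 0;               /* nothing can be released */

                /* Unmap it and drop it from its pack's window list. */
                munmap(lru_w->base, lru_w->len);
                for (pw = &lru_p->windows; *pw != lru_w; pw = &(*pw)->next)
                        ; /* find the link pointing at lru_w */
                *pw = lru_w->next;
                free(lru_w);

                /* Last window gone: the file descriptor can go too. */
                if (!lru_p->windows) {
                        close(lru_p->pack_fd);
                        lru_p->pack_fd = -1;
                }
                return 1;
        }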
    • Document why header parsing won't exceed a window. · 8d8a4ea5
      Committed by Shawn O. Pearce
      When we parse the object header or the delta base reference we
      don't bother to loop over use_pack() calls.  The reason we don't
      need to bother with calling use_pack for each byte accessed is that
      use_pack will always promise us at least 20 bytes (really the hash
      size) after the offset.  This promise from use_pack simplifies a
      lot of code in the header parsing logic, as well as helps out the
      zlib library by ensuring there's always some data for it to consume
      during an inflate call.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
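
      For illustration, the entry header walk can index straight into the
      window because of that promise (a simplified sketch of the type/size
      decoding, not the exact code):

        /* Decode "type + size" from a pack entry header.  Only a handful of
         * bytes are read, and no use_pack() re-call is needed because at
         * least 20 bytes are guaranteed to follow the offset. */
        static unsigned long parse_entry_header(const unsigned char *base,
                                                unsigned *typep,
                                                unsigned long *sizep)
        {
                unsigned long used = 0, size;
                unsigned char c = base[used++];
                unsigned shift = 4;

                *typep = (c >> 4) & 7;          /* 3-bit object type */
                size = c & 15;                  /* low 4 bits of the size */
                while (c & 0x80) {              /* continuation bit set? */
                        c = base[used++];
                        size += (unsigned long)(c & 0x7f) << shift;
                        shift += 7;
                }
                *sizep = size;
                return used;                    /* bytes consumed from the window */
        }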
    • Loop over pack_windows when inflating/accessing data. · 079afb18
      Committed by Shawn O. Pearce
      When multiple mmaps start getting used for all pack file access it
      is not possible to get all data associated with a specific object
      in one contiguous memory region.  This limitation prevents simply
      passing a single address and length to SHA1_Update or to inflate.
      
      Instead we need to loop until we have processed all data of interest.
      
      As we loop over the data we are always interested in reusing the same
      window 'cursor', as the prior window will no longer be of any use
      to us.  This allows the use_pack() call to automatically decrement
      the use count of the prior window before setting up access for us
      to the next window.
      
      Within each loop we need to make use of the available length output
      parameter of use_pack() to tell us how many bytes are available in
      the current memory region, as we cannot tell otherwise.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
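
      Schematically, a checksum loop of this kind becomes something like the
      fragment below (p, offset, len and an initialized OpenSSL SHA_CTX ctx
      are assumed to be in scope; use_pack()/unuse_pack() as described above):

        /* Feed 'len' bytes starting at 'offset' in pack p to SHA1_Update,
         * one window at a time. */
        struct pack_window *w_curs = NULL;
        while (len) {
                unsigned int avail;
                unsigned char *in = use_pack(p, &w_curs, offset, &avail);
                if (avail > len)
                        avail = len;
                SHA1_Update(&ctx, in, avail);
                offset += avail;
                len -= avail;
        }
        unuse_pack(&w_curs);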
    • Replace use_packed_git with window cursors. · 03e79c88
      Committed by Shawn O. Pearce
      Part of the implementation concept of the sliding mmap window for
      pack access is to permit multiple windows per pack to be mapped
      independently.  Since the inuse_cnt is associated with the mmap and
      not with the file, this value is in struct pack_window and needs to
      be incremented/decremented for each pack_window accessed by any code.
      
      To facilitate that implementation we need to replace all uses of
      use_packed_git() and unuse_packed_git() with a different API that
      follows struct pack_window objects rather than struct packed_git.
      
      The way this works is when we need to start accessing a pack for
      the first time we should set up a new window 'cursor' by declaring
      a local and setting it to NULL:
      
        struct pack_window *w_curs = NULL;
      
      To obtain the memory region which contains a specific section of
      the pack file we invoke use_pack(), supplying the address of our
      current window cursor:
      
        unsigned int len;
        unsigned char *addr = use_pack(p, &w_curs, offset, &len);
      
      The returned address `addr` will be the first byte at `offset`
      within the pack file.  The optional variable len will also be
      updated with the number of bytes remaining following the address.
      
      Multiple calls to use_pack() with the same window cursor will
      update the window cursor, moving it from one window to another
      when necessary.  In this way each window cursor variable maintains
      only one struct pack_window in use at a time.
      
      Finally, before exiting the scope which originally declared the window
      cursor we must invoke unuse_pack() to unuse the current window (which
      may be different from the one that was first obtained from use_pack):
      
        unuse_pack(&w_curs);
      
      This implementation is still not complete with regards to multiple
      windows, as only one window per pack file is supported right now.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
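
      Putting the pieces above together, a caller reading at a known offset
      might look like this fragment (p and offset assumed to be in scope):

        struct pack_window *w_curs = NULL;
        unsigned int avail;
        unsigned char *data;

        data = use_pack(p, &w_curs, offset, &avail);
        /* ... read up to 'avail' bytes starting at data ... */
        data = use_pack(p, &w_curs, offset + avail, &avail);
        /* The cursor is updated in place; the previous window's use count
         * has already been dropped for us. */
        unuse_pack(&w_curs);    /* release whichever window the cursor last held */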
    • Refactor how we open pack files to prepare for multiple windows. · 9bc879c1
      Committed by Shawn O. Pearce
      To efficiently support mmaping of multiple regions of the same pack
      file we want to keep the pack's file descriptor open while we are
      actively working with that pack.  So we are now keeping that file
      descriptor in packed_git.pack_fd and closing it only after we unmap
      the last window.
      
      This is going to increase the number of file descriptors that are
      in use at once; however, that will be bounded by the total number of
      pack files present and therefore should not be very high.  It is
      a small tradeoff which we may need to revisit after some testing
      can be done on various repositories and systems.
      
      For code clarity we also want to separate out the implementation
      of how we open a pack file from the implementation which locates
      a suitable window (or makes a new one) from the given pack file.
      Since this is a rather large delta I'm taking advantage of doing
      it now, in a fairly isolated change.
      
      When we open a pack file we need to examine the header and trailer
      without having a mmap in place, as we may only need to mmap
      the middle section of this particular pack.  Consequently the
      verification code has been refactored to make use of the new
      read_or_die function.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
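
      The open-time check might read just the fixed-size header (and, via
      lseek(), the trailing checksum) with read_or_die(), roughly like this
      sketch (assumes git's cache.h/pack.h declarations and the usual system
      headers; error handling abbreviated):

        static int open_packed_git(struct packed_git *p)
        {
                struct pack_header hdr;

                p->pack_fd = open(p->pack_name, O_RDONLY);
                if (p->pack_fd < 0)
                        return error("packfile %s cannot be opened", p->pack_name);

                read_or_die(p->pack_fd, &hdr, sizeof(hdr));
                if (hdr.hdr_signature != htonl(PACK_SIGNATURE))
                        return error("%s is not a packfile", p->pack_name);
                if (!pack_version_ok(hdr.hdr_version))
                        return error("%s has an unsupported pack version",
                                     p->pack_name);
                /* The 20-byte trailing checksum can be read and compared the
                 * same way after seeking to pack_size - 20. */
                return 0;
        }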
    • Refactor packed_git to prepare for sliding mmap windows. · c41ee586
      Committed by Shawn O. Pearce
      The idea behind the sliding mmap window pack reader implementation
      is to have multiple mmap regions active against the same pack file,
      thereby allowing the process to mmap in only the active/hot sections
      of the pack and reduce overall virtual address space usage.
      
      To implement this we need to refactor the mmap related data
      (pack_base, pack_use_cnt) out of struct packed_git and move them
      into a new struct pack_window.
      
      We are refactoring the code to support a single struct pack_window
      per packfile, thereby emulating the prior behavior of mmap'ing the
      entire pack file.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
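
      Conceptually the data moves along these lines (a sketch of the before
      and after shapes, not the literal structures):

        #include <sys/types.h>

        /* Before: mapping state lived directly in the pack structure. */
        struct packed_git_before {
                void *pack_base;                /* whole-file mmap */
                unsigned int pack_use_cnt;
                /* ... index data, pack_size, pack_name, ... */
        };

        /* After: each mapped region is its own window, chained off the pack. */
        struct pack_window {
                struct pack_window *next;
                unsigned char *base;            /* start of this mmap'd region */
                off_t offset;                   /* pack offset of base */
                size_t len;                     /* length of the region */
                unsigned int last_used;         /* for LRU eviction */
                unsigned int inuse_cnt;         /* active cursors on this window */
        };

        struct packed_git_after {
                struct pack_window *windows;    /* one per mapped region */
                int pack_fd;                    /* kept open while windows exist */
                /* ... index data, pack_size, pack_name, ... */
        };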
    • Introduce new config option for mmap limit. · 77ccc5bb
      Committed by Shawn O. Pearce
      Rather than hardcoding the maximum number of bytes which can be
      mmapped from pack files we should make this value configurable,
      allowing the end user to increase or decrease this limit on a
      per-repository basis depending on the size of the repository
      and the capabilities of their operating system.
      
      In general users should not need to manually tune such a low-level
      setting within the core code, but being able to artificially limit
      the number of bytes which we can mmap at once from pack files will
      make it easier to craft test cases for the new mmap sliding window
      implementation.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
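
      On the C side, reading such an option might look roughly like this
      (git_config_int() is git's existing config helper; the variable name
      and default shown here are illustrative):

        #include <string.h>

        extern int git_config_int(const char *var, const char *value);

        static size_t packed_git_limit = 256 * 1024 * 1024;    /* illustrative default */

        static int git_default_config(const char *var, const char *value)
        {
                if (!strcmp(var, "core.packedgitlimit")) {
                        packed_git_limit = git_config_int(var, value);
                        return 0;
                }
                /* ... other core.* settings ... */
                return 0;
        }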
    • Replace unpack_entry_gently with unpack_entry. · 4d703a1a
      Committed by Shawn O. Pearce
      The unpack_entry_gently function currently has only two callers:
      the delta base resolution in sha1_file.c and the main loop of
      pack-check.c.  Both of these must change to use unpack_entry
      directly when we implement sliding window mmap logic, so I'm doing
      it earlier to help break down the change set.
      
      This may cause a slight performance decrease for delta base
      resolution as well as for pack-check.c's verify_packfile(), as
      the pack use counter will be incremented and decremented for every
      object that is unpacked.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
  2. 21 December 2006 (1 commit)
  3. 28 November 2006 (1 commit)
  4. 10 November 2006 (1 commit)
  5. 03 November 2006 (2 commits)
    • Teach receive-pack how to keep pack files based on object count. · fc04c412
      Committed by Shawn Pearce
      Since keeping a pushed pack or exploding it into loose objects
      should be a local repository decision, this teaches receive-pack
      to decide if it should call unpack-objects or index-pack --stdin
      --fix-thin based on the setting of receive.unpackLimit and the
      number of objects contained in the received pack.
      
      If the number of objects (hdr_entries) in the received pack is
      below the value of receive.unpackLimit (which is 5000 by default)
      then we run unpack-objects as we have in the past.
      
      If the hdr_entries >= receive.unpackLimit then we call index-pack and
      ask it to include our pid and hostname in the .keep file to make it
      easier to identify why a given pack has been kept in the repository.
      
      Currently this leaves every received pack as a kept pack.  We really
      don't want that as received packs will tend to be small.  Instead we
      want to delete the .keep file automatically after all refs have
      been updated.  That is being left as room for future improvement.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
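
      The decision being described amounts to something like this sketch
      (unpack_limit comes from receive.unpackLimit; the helper is illustrative):

        /* Choose how to store an incoming pack based on its object count. */
        static const char *choose_unpacker(unsigned long hdr_entries,
                                           unsigned long unpack_limit)
        {
                if (hdr_entries < unpack_limit)
                        return "unpack-objects";                /* explode into loose objects */
                return "index-pack --stdin --fix-thin --keep";  /* keep the pack, record a .keep */
        }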
  6. 01 November 2006 (1 commit)
  7. 31 October 2006 (1 commit)
  8. 30 October 2006 (1 commit)
  9. 29 October 2006 (1 commit)
    • send-pack --keep: do not explode into loose objects on the receiving end. · c7740a94
      Committed by Junio C Hamano
      This adds "keep-pack" extension to send-pack vs receive pack protocol,
      and makes the receiver invoke "index-pack --stdin --fix-thin".
      
      With this, you can ask send-pack not to explode the result into
      loose objects on the receiving end.
      
      I've patched has_sha1_file() to re-check for added packs just
      as is done in read_sha1_file() for now, but I think the static
      "re-prepare" interface for packs was a mistake.  Creation of a
      new pack inside a process that needs to read those objects
      back ought to be a rare event, so we are better off making the
      callers (such as receive-pack that calls "index-pack --stdin
      --fix-thin") explicitly call re-prepare.  That way we do not
      have to penalize ordinary users of read_sha1_file() and
      has_sha1_file().
      
      We would need to fix this someday.
      Signed-off-by: Junio C Hamano <junkio@cox.net>
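
      The re-check being described could look like this sketch
      (reprepare_packed_git() is named after the "re-prepare" interface
      mentioned above; find_pack_entry() and has_loose_object() are stand-ins):

        /* Look again after rescanning objects/pack, so a pack created
         * mid-process (e.g. by index-pack --stdin) is noticed. */
        int has_sha1_file(const unsigned char *sha1)
        {
                struct pack_entry e;

                if (find_pack_entry(sha1, &e))
                        return 1;
                if (has_loose_object(sha1))
                        return 1;
                reprepare_packed_git();
                return find_pack_entry(sha1, &e);
        }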
  10. 19 October 2006 (1 commit)
  11. 16 October 2006 (2 commits)
  12. 15 October 2006 (2 commits)
  13. 27 September 2006 (2 commits)
    • make pack data reuse compatible with both delta types · 780e6e73
      Committed by Nicolas Pitre
      This is the missing part of git-pack-objects, allowing it to reuse
      delta data to/from either of the two delta types.  It can reuse a
      delta of either type, and it outputs base offsets when
      --allow-delta-base-offset is
      provided and the base is also included in the pack.  Otherwise it
      outputs base sha1 references just like it always did.
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
    • introduce delta objects with offset to base · eb32d236
      Committed by Nicolas Pitre
      This adds a new object type, namely OBJ_OFS_DELTA, renames OBJ_DELTA to
      OBJ_REF_DELTA to better make the distinction between those two delta
      objects, and adds support for the handling of those new delta objects
      in sha1_file.c only.
      
      The OBJ_OFS_DELTA contains a relative offset from the delta object's
      position in a pack instead of the 20-byte SHA1 reference to identify
      the base object.  Since the base is likely to be not so far away, the
      relative offset is more likely to have a smaller encoding on average
      than an absolute offset.  And for those delta objects the base must
      always be stored first because there is no way to know the distance of
      later objects when streaming a pack.  Hence this relative offset is
      always meant to be negative.
      
      The offset encoding is slightly denser than the one used for object
      size -- credits to <linux@horizon.com> (whoever this is) for bringing
      it to my attention.
      
      This allows for pack size reduction between 3.2% (Linux-2.6) to over 5%
      (linux-historic).  Runtime pack access should be faster too since delta
      replay does skip a search in the pack index for each delta in a chain.
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
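
      The denser encoding mentioned above decodes roughly like this (a sketch
      of the byte-by-byte loop; buf points at the bytes following the
      OBJ_OFS_DELTA header):

        #include <sys/types.h>

        /* Decode the relative base offset stored after an OBJ_OFS_DELTA header.
         * Each byte carries 7 payload bits; the high bit means "more follows",
         * and each continuation implicitly adds 1 to avoid redundant encodings. */
        static off_t decode_base_offset(const unsigned char *buf,
                                        unsigned long *usedp)
        {
                unsigned long used = 0;
                unsigned char c = buf[used++];
                off_t offset = c & 0x7f;

                while (c & 0x80) {
                        offset += 1;
                        c = buf[used++];
                        offset = (offset << 7) + (c & 0x7f);
                }
                *usedp = used;
                return offset;          /* distance back from the delta's own position */
        }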
  14. 23 September 2006 (1 commit)
    • many cleanups to sha1_file.c · 43057304
      Committed by Nicolas Pitre
      Those cleanups are mainly to set the table for the support of deltas
      with base objects referenced by offsets instead of sha1.  This means
      that many pack lookup functions are converted to take a pack/offset
      tuple instead of a sha1.
      
      This eliminates many struct pack_entry usages since this structure
      carried redundant information in many cases, and it increased the stack
      footprint needlessly for a couple of recursively called functions that used
      to declare a local copy of it for every recursion loop.
      
      In the process, packed_object_info_detail() has been reorganized as well
      so as to look much saner and more amenable to deltas with offset support.
      
      Finally the appropriate adjustments have been made to functions that
      depend on the above changes.  But there are no functionality changes yet,
      simply some code refactoring at this point.
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
  15. 21 September 2006 (1 commit)
  16. 10 September 2006 (1 commit)
  17. 07 September 2006 (1 commit)
    • pack-objects --unpacked=<existing pack> option. · 106d710b
      Committed by Junio C Hamano
      Incremental repack without -a essentially boils down to:
      
        rev-list --objects --unpacked --all |
        pack-objects $new_pack
      
      which picks up all loose objects that are still live and creates
      a new pack.
      
      This implements --unpacked=<existing pack> option to tell the
      revision walking machinery to pretend as if objects in such a
      pack are unpacked for the purpose of object listing.  With this,
      we could say:
      
        rev-list --objects --unpacked=$active_pack --all |
        pack-objects $new_pack
      
      instead, to mean "all live loose objects but pretend as if
      objects that are in this pack are also unpacked".  The newly
      created pack would be perfect for updating $active_pack by
      replacing it.
      
      Since pack-objects now knows how to do the rev-list's work
      itself internally, you can also write the above example by:
      
        pack-objects --unpacked=$active_pack --all $new_pack </dev/null
      Signed-off-by: Junio C Hamano <junkio@cox.net>
  18. 04 September 2006 (1 commit)
    • more lightweight revalidation while reusing deflated stream in packing · 72518e9c
      Committed by Junio C Hamano
      When copying from an existing pack and when copying from a loose
      object with new style header, the code makes sure that the piece
      we are going to copy out inflates well and inflate() consumes
      the data in full while doing so.
      
      The check to see if the xdelta really applies is quite expensive
      as you described, because you would need to have the image of
      the base object which can be represented as a delta against
      something else.
      Signed-off-by: Junio C Hamano <junkio@cox.net>
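
      The "inflates well and consumes the data in full" check can be sketched
      with zlib like this (simplified; the real code streams the input in
      chunks and also verifies the expected size):

        #include <string.h>
        #include <zlib.h>

        /* Return 0 if 'len' bytes at 'data' form one complete deflate stream
         * that is consumed exactly, nonzero otherwise. */
        static int check_inflates_cleanly(unsigned char *data, unsigned long len)
        {
                z_stream stream;
                unsigned char out[4096];
                int st;

                memset(&stream, 0, sizeof(stream));
                stream.next_in = data;
                stream.avail_in = len;
                if (inflateInit(&stream) != Z_OK)
                        return -1;
                do {
                        stream.next_out = out;
                        stream.avail_out = sizeof(out);
                        st = inflate(&stream, Z_NO_FLUSH);
                } while (st == Z_OK);
                inflateEnd(&stream);
                return (st == Z_STREAM_END && !stream.avail_in) ? 0 : -1;
        }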
  19. 03 September 2006 (1 commit)
  20. 02 September 2006 (2 commits)
    • Replace uses of strdup with xstrdup. · 9befac47
      Committed by Shawn Pearce
      Like xmalloc and xrealloc, xstrdup dies with a useful message if
      the native strdup() implementation returns NULL rather than a
      valid pointer.
      
      I just tried to use xstrdup in new code and found it to be missing.
      However I expected it to be present as xmalloc and xrealloc are
      already commonly used throughout the code.
      
      [jc: removed the part that deals with last_XXX, which I am
       finding more and more dubious these days.]
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
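
      For reference, the wrapper is the obvious sketch (mirroring xmalloc and
      xrealloc; die() is the existing error helper, declared here as assumed):

        #include <string.h>

        extern void die(const char *fmt, ...);   /* error helper (assumed) */

        char *xstrdup(const char *str)
        {
                char *ret = strdup(str);
                if (!ret)
                        die("Out of memory, strdup failed");
                return ret;
        }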
    • consolidate two copies of new style object header parsing code. · ad1ed5ee
      Committed by Junio C Hamano
      Also while we are at it, remove redundant typename[] array from
      unpack_sha1_header.  The only reason it is different from the
      type_names[] array in the object.c module is that this code cares
      about the subset of object types that are valid in a loose
      object, so prepare a separate array of boolean flags that tells us
      which types are valid, and share the name translation with the
      others.
      Signed-off-by: Junio C Hamano <junkio@cox.net>
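
      A loose object header is the ASCII "<type> <size>" followed by a NUL
      byte; the consolidated check has roughly this shape (a sketch, with an
      illustrative validity table rather than the shared one described above):

        #include <stdlib.h>
        #include <string.h>

        /* Types that may appear in a loose object (illustrative table). */
        static const char *valid_loose_types[] = {
                "blob", "tree", "commit", "tag", NULL
        };

        /* Parse "<type> <size>\0"; fill *sizep and return 0 on success. */
        static int parse_loose_header(const char *hdr, unsigned long *sizep)
        {
                const char *sp = strchr(hdr, ' ');
                size_t typelen;
                int i;

                if (!sp)
                        return -1;
                typelen = sp - hdr;
                for (i = 0; valid_loose_types[i]; i++)
                        if (strlen(valid_loose_types[i]) == typelen &&
                            !memcmp(hdr, valid_loose_types[i], typelen))
                                break;
                if (!valid_loose_types[i])
                        return -1;
                *sizep = strtoul(sp + 1, NULL, 10);
                return 0;
        }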
  21. 01 September 2006 (2 commits)