1. 29 Jul 2010 (1 commit)
  2. 06 Feb 2010 (1 commit)
    •
      run-command: support custom fd-set in async · ae6a5609
      Committed by Erik Faye-Lund
      This patch adds the possibility to supply a set of non-0 file
      descriptors for async process communication instead of the
      default-created pipe.
      
      Additionally, we now support bi-directional communication with the
      async procedure, by giving the async function both read and write
      file descriptors.

      To retain compatibility and similar "API feel" with start_command,
      we require start_async callers to set .out = -1 to get a readable
      file descriptor.  If either of .in or .out is 0, we supply no file
      descriptor to the async process.
      
      [sp: Note: Erik started this patch, and a huge bulk of it is
           his work.  All bugs were introduced later by Shawn.]
      Signed-off-by: Erik Faye-Lund <kusmabite@gmail.com>
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      ae6a5609
  3. 11 Dec 2009 (1 commit)
  4. 14 Nov 2009 (1 commit)
    •
      give priority to progress messages · 6b59f51b
      Committed by Nicolas Pitre
      In theory it is possible for sideband channel #2 to be delayed if
      pack data is quick to come up for sideband channel #1.  And because
      data for channel #2 is read only 128 bytes at a time while pack data
      is read 8192 bytes at a time, it is possible for many pack blocks to
      be sent to the client before the progress message fifo is emptied,
      making the situation even worse.  This would result in totally garbled
      progress display on the client's console as local progress gets mixed
      with partial remote progress lines.
      
      Let's prevent such situations by giving transmission priority to
      progress messages over pack data at all times.
      Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      6b59f51b
  5. 05 Nov 2009 (1 commit)
    •
      Add stateless RPC options to upload-pack, receive-pack · 42526b47
      Committed by Shawn O. Pearce
      When --stateless-rpc is passed as a command line parameter to
      upload-pack or receive-pack the programs now assume they may
      perform only a single read-write cycle with stdin and stdout.
      This fits with the HTTP POST request processing model where a
      program may read the request, write a response, and must exit.
      
      When --advertise-refs is passed as a command line parameter only
      the initial ref advertisement is output, and the program exits
      immediately.  This fits with the HTTP GET request model, where
      no request content is received but a response must be produced.
      
      HTTP headers and/or environment are not processed here, but
      instead are assumed to be handled by the program invoking
      either service backend.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      42526b47
  6. 31 Oct 2009 (1 commit)
    •
      Add multi_ack_detailed capability to fetch-pack/upload-pack · 78affc49
      Committed by Shawn O. Pearce
      When multi_ack_detailed is enabled the ACK continue messages returned
      by the remote upload-pack are broken out to describe the different
      states within the peer.  This permits the client to better understand
      the server's in-memory state.
      
      The fetch-pack/upload-pack protocol now looks like:
      
      NAK
      ---------------------------------
        Always sent in response to "done" if there was no common base
        selected from the "have" lines (or no have lines were sent).
      
        * no multi_ack or multi_ack_detailed:
      
          Sent when the client has sent a pkt-line flush ("0000") and
          the server has not yet found a common base object.
      
        * either multi_ack or multi_ack_detailed:
      
          Always sent in response to a pkt-line flush.
      
      ACK %s
      -----------------------------------
        * no multi_ack or multi_ack_detailed:
      
          Sent in response to "have" when the object exists on the remote
          side and is therefore an object in common between the peers.
          The argument is the SHA-1 of the common object.
      
        * either multi_ack or multi_ack_detailed:
      
          Sent in response to "done" if there are common objects.
          The argument is the last SHA-1 determined to be common.
      
      ACK %s continue
      -----------------------------------
        * multi_ack only:
      
          Sent in response to "have".
      
          The remote side wants the client to consider this object as
          common, and immediately stop transmitting additional "have"
          lines for objects that are reachable from it.  The reason
          the client should stop is not given, but is one of the two
          cases below available under multi_ack_detailed.
      
      ACK %s common
      -----------------------------------
        * multi_ack_detailed only:
      
          Sent in response to "have".  Both sides have this object.
          Like with "ACK %s continue" above, the client should stop
          sending "have" lines for objects reachable from the argument.
      
      ACK %s ready
      -----------------------------------
        * multi_ack_detailed only:
      
          Sent in response to "have".
      
          The client should stop transmitting objects which are reachable
          from the argument, and send "done" soon to get the objects.
      
          If the remote side has the specified object, it should
          first send an "ACK %s common" message prior to sending
          "ACK %s ready".
      
          Clients may still submit additional "have" lines if there are
          more side branches for the client to explore that might be added
          to the common set and reduce the number of objects to transfer.
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      78affc49
  7. 13 Sep 2009 (2 commits)
    •
      don't dereference NULL upon fdopen failure · 41698375
      Committed by Jim Meyering
      There were several unchecked uses of fdopen(); replace them with
      xfdopen(), which checks and dies.
      Signed-off-by: Jim Meyering <meyering@redhat.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      41698375
    •
      use write_str_in_full helper to avoid literal string lengths · 2b7ca830
      Committed by Jim Meyering
      In 2d14d65c (Use a clearer style to issue commands to remote helpers,
      2009-09-03) I happened to notice two changes like this:
      
      -	write_in_full(helper->in, "list\n", 5);
      +
      +	strbuf_addstr(&buf, "list\n");
      +	write_in_full(helper->in, buf.buf, buf.len);
      +	strbuf_reset(&buf);
      
      IMHO, it would be better to define a new function,
      
          static inline ssize_t write_str_in_full(int fd, const char *str)
          {
                 return write_in_full(fd, str, strlen(str));
          }
      
      and then use it like this:
      
      -       strbuf_addstr(&buf, "list\n");
      -       write_in_full(helper->in, buf.buf, buf.len);
      -       strbuf_reset(&buf);
      +       write_str_in_full(helper->in, "list\n");
      
      Thus not requiring the added allocation, and still avoiding
      the maintenance risk of literal string lengths.
      These days, compilers are good enough that strlen("literal")
      imposes no run-time cost.
      
      Transformed via this:
      
          perl -pi -e \
              's/write_in_full\((.*?), (".*?"), \d+\)/write_str_in_full($1, $2)/'\
            $(git grep -l 'write_in_full.*"')
      Signed-off-by: Jim Meyering <meyering@redhat.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      2b7ca830
  8. 06 Sep 2009 (1 commit)
    •
      make shallow repository deepening more network efficient · 6523078b
      Committed by Nicolas Pitre
      First of all, I can't find any reason why thin pack generation is
      explicitly disabled when dealing with a shallow repository.  The
      possible delta base objects are collected from the edge commits which
      are always obtained through history walking with the same shallow refs
      as the client.  Therefore the client is always going to have those base
      objects available. So let's remove that restriction.
      
      Then we can make shallow repository deepening much more efficient by
      using the remote's unshallowed commits as edge commits to get preferred
      base objects for thin pack generation.  On git.git, this makes the data
      transfer for the deepening of a shallow repository from depth 1 to depth 2
      around 134 KB instead of 3.68 MB.
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      6523078b
  9. 01 Sep 2009 (1 commit)
  10. 29 Aug 2009 (2 commits)
    •
      upload-pack: feed "kind [clone|fetch]" to post-upload-pack hook · 11cae066
      Committed by Junio C Hamano
      A request to clone the repository does not give any "have" but asks for
      all the refs we offer with "want".  When a request does not ask to clone
      the repository fully, but asks to fetch some refs into an empty
      repository, it will not give any "have" but its "want" won't ask for all
      the refs we offer.
      
      If we suppose (and I would say this is a rather big if) that it makes
      sense to distinguish these two cases, a hook cannot reliably do this
      alone.  The hook can detect the lack of "have" and a bunch of "want", but there
      is no direct way to tell if the other end asked for all refs we offered,
      or merely most of them.
      
      Between the time we talked with the other end and the time the hook got
      called, we may have acquired more refs or lost some refs in the repository
      by concurrent operations.  Given that we plan to introduce selective
      advertisement of refs with a protocol extension, it would become even more
      difficult for hooks to guess between these two cases.
      
      This adds "kind [clone|fetch]" to hook's input, as a stable interface to
      allow the hooks to tell these cases apart.
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      11cae066
    •
      upload-pack: add a trigger for post-upload-pack hook · a8563ec8
      Committed by Junio C Hamano
      After upload-pack successfully finishes its operation, post-upload-pack
      hook can be called for logging purposes.
      
      The hook is passed various pieces of information, one per line, from its
      standard input.  Currently the following items can be fed to the hook, but
      more types of information may be added in the future:
      
          want SHA-1::
              40-byte hexadecimal object name the client asked to include in the
              resulting pack.  Can occur one or more times in the input.
      
          have SHA-1::
              40-byte hexadecimal object name the client asked to exclude from
              the resulting pack, claiming to have them already.  Can occur zero
              or more times in the input.
      
          time float::
              Number of seconds spent for creating the packfile.
      
          size decimal::
              Size of the resulting packfile in bytes.
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      a8563ec8
  11. 19 Jun 2009 (1 commit)
    •
      upload-pack: squelch progress indicator if client cannot see it · 9462e3f5
      Committed by Johannes Sixt
      upload-pack runs pack-objects, which generates progress indicator output
      on its stderr. If the client requests a sideband, this indicator is sent
      to the client; but if it did not, then the progress is written to
      upload-pack's own stderr.
      
      If upload-pack is itself run from git-daemon (and if the client did not
      request a sideband) the progress indicator never reaches the client and it
      need not be generated in the first place. With this patch the progress
      indicator is suppressed in this situation.
      Signed-off-by: Johannes Sixt <j6t@kdbg.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      9462e3f5
  12. 10 Jun 2009 (1 commit)
  13. 01 Jun 2009 (1 commit)
  14. 13 Apr 2009 (2 commits)
    •
      show_object(): push path_name() call further down · cf2ab916
      Committed by Linus Torvalds
      In particular, pushing the "path_name()" call _into_ the show() function
      would seem to allow
      
       - more clarity into who "owns" the name (ie now when we free the name in
         the show_object callback, it's because we generated it ourselves by
         calling path_name())
      
       - not calling path_name() at all, either because we don't care about the
         name in the first place, or because we are actually happy walking the
         linked list of "struct name_path *" and the last component.
      
      Now, I didn't do that latter optimization, because it would require some
      more coding, but especially looking at "builtin-pack-objects.c", we really
      don't even want the whole pathname, we really would be better off with the
      list of path components.
      
      Why? We use that name for two things:
       - add_preferred_base_object(), which actually _wants_ to traverse the
         path, and now does it by looking for '/' characters!
       - for 'name_hash()', which only cares about the last 16 characters of a
         name, so again, generating the full name seems to be just unnecessary
         work.
      
      Anyway, so I didn't look any closer at those things, but it did convince
      me that the "show_object()" calling convention was crazy, and we're
      actually better off doing _less_ in list-objects.c, and giving people
      access to the internal data structures so that they can decide whether
      they want to generate a path-name or not.
      
      This patch does that, and then for people who did use the name (even if
      they might do something more clever in the future), it just does the
      straightforward "name = path_name(path, component); .. free(name);" thing.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      cf2ab916
    •
      process_{tree,blob}: show objects without buffering · 8d2dfc49
      Committed by Linus Torvalds
      Here's a less trivial thing, and slightly more dubious one.
      
      I was looking at that "struct object_array objects", and wondering why we
      do that. I have honestly totally forgotten. Why not just call the "show()"
      function as we encounter the objects? Rather than add the objects to the
      object_array, and then at the very end going through the array and doing a
      'show' on all, just do things more incrementally.
      
      Now, there are possible downsides to this:
      
       - the "buffer using object_array" _can_ in theory result in at least
         better I-cache usage (two tight loops rather than one more spread out
         one). I don't think this is a real issue, but in theory..
      
       - this _does_ change the order of the objects printed. Instead of doing a
         "process_tree(revs, commit->tree, &objects, NULL, "");" in the loop
         over the commits (which puts all the root trees _first_ in the object
         list), this patch just adds them to the list of pending objects, and
         then we'll traverse them in that order (and thus show each root tree
         object together with the objects we discover under it).
      
         I _think_ the new ordering actually makes more sense, but the object
         ordering is actually a subtle thing when it comes to packing
         efficiency, so any change in order is going to have implications for
         packing. Good or bad, I dunno.
      
       - There may be some reason why we did it that odd way with the object
         array, that I have simply forgotten.
      
      Anyway, now that we don't buffer up the objects before showing them
      that may actually result in lower memory usage during that whole
      traverse_commit_list() phase.
      
      This is seriously not very deeply tested. It makes sense to me, it seems
      to pass all the tests, it looks ok, but...
      
      Does anybody remember why we did that "object_array" thing? It used to be
      an "object_list" a long long time ago, but got changed into the array due
      to better memory usage patterns (those linked lists of objects are
      horrible from a memory allocation standpoint). But I wonder why we didn't
      do this back then. Maybe there's a reason for it.
      
      Or maybe there _used_ to be a reason, and no longer is.
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      8d2dfc49
  15. 08 Apr 2009 (1 commit)
    •
      list-objects: add "void *data" parameter to show functions · 11c211fa
      Committed by Christian Couder
      The goal of this patch is to get rid of the "static struct rev_info
      revs" static variable in "builtin-rev-list.c".
      
      To do that, we need to pass the revs to the "show_commit" function
      in "builtin-rev-list.c" and this in turn means that the
      "traverse_commit_list" function in "list-objects.c" must be passed
      function pointers to functions with 2 parameters instead of one.
      
      So we have to change all the callers and all the functions passed
      to "traverse_commit_list".
      
      Anyway this makes the code more clean and more generic, so it
      should be a good thing in the long run.
      Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      11c211fa
  16. 08 Mar 2009 (1 commit)
  17. 05 Mar 2009 (1 commit)
  18. 05 Feb 2009 (1 commit)
  19. 26 Jan 2009 (1 commit)
  20. 01 Sep 2008 (1 commit)
  21. 26 Jul 2008 (1 commit)
  22. 26 Jun 2008 (1 commit)
  23. 05 Mar 2008 (1 commit)
  24. 03 Mar 2008 (1 commit)
    •
      Teach upload-pack to log the received need lines to an fd · 49aaddd1
      Committed by Shawn O. Pearce
      To facilitate testing and verification of the requests sent by
      git-fetch to the remote side, we permit logging the received packet
      lines to the file descriptor specified by GIT_DEBUG_SEND_PACK, when
      that environment variable has been set.  Special start and end lines
      are included to indicate the start and end of each connection.
      
        $ GIT_DEBUG_SEND_PACK=3 git fetch 3>UPLOAD_LOG
        $ cat UPLOAD_LOG
        #S
        want 8e10cf4e007ad7e003463c30c34b1050b039db78 multi_ack side-band-64k thin-pack ofs-delta
        want ddfa4a33562179aca1ace2bcc662244a17d0b503
        #E
        #S
        want 3253df4d1cf6fb138b52b1938473bcfec1483223 multi_ack side-band-64k thin-pack ofs-delta
        #E
      
      From the above trace, the first connection opened by git-fetch was to
      download two refs (with values 8e and dd) and the second connection
      was opened to automatically follow an annotated tag (32).
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      49aaddd1
  25. 26 Feb 2008 (1 commit)
  26. 19 Feb 2008 (1 commit)
  27. 18 Feb 2008 (2 commits)
  28. 14 Feb 2008 (1 commit)
  29. 06 Nov 2007 (1 commit)
    •
      upload-pack: Use finish_{command,async}() instead of waitpid(). · 4c324c00
      Committed by Johannes Sixt
      upload-pack spawns two processes, rev-list and pack-objects, and carefully
      monitors their status so that it can report failure to the remote end.
      This change removes the complicated procedures on the grounds of the
      following observations:
      
      - If everything is OK, rev-list closes its output pipe end, upon which
        pack-objects (which reads from the pipe) sees EOF and terminates itself,
        closing its output (and error) pipes. upload-pack reads from both until
        it sees EOF in both. It collects the exit codes of the child processes
        (which indicate success) and terminates successfully.
      
      - If rev-list sees an error, it closes its output and terminates with
        failure. pack-objects sees EOF in its input and terminates successfully.
        Again upload-pack reads its inputs until EOF. When it now collects
        the exit codes of its child processes, it notices the failure of rev-list
        and signals failure to the remote end.
      
      - If pack-objects sees an error, it terminates with failure. Since this
        breaks the pipe to rev-list, rev-list is killed with SIGPIPE.
        upload-pack reads its input until EOF, then collects the exit codes of
        the child processes, notices their failures, and signals failure to the
        remote end.
      
      - If upload-pack itself dies unexpectedly, pack-objects is killed with
        SIGPIPE, and subsequently also rev-list.
      
      The upshot of this is that precise monitoring of child processes is not
      required because both terminate if either one of them dies unexpectedly.
      This allows us to use finish_command() and finish_async() instead of
      an explicit waitpid(2) call.
      
      The change is smaller than it looks because most of it only reduces the
      indentation of a large part of the inner loop.
      Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      4c324c00
  30. 21 Oct 2007 (3 commits)
  31. 08 Jun 2007 (1 commit)
  32. 07 Jun 2007 (1 commit)
    •
      War on whitespace · a6080a0a
      Committed by Junio C Hamano
      This uses "git-apply --whitespace=strip" to fix whitespace errors that have
      crept into our source files over time.  There are a few files that need
      to have trailing whitespace (most notably, test vectors).  The results
      still pass the tests, and the build results in the Documentation/ area are unchanged.
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      a6080a0a
  33. 29 Mar 2007 (1 commit)
    •
      git-upload-pack: make sure we close unused pipe ends · 3ac53e0d
      Committed by H. Peter Anvin
      Right now, we don't close the read end of the pipe when git-upload-pack
      runs git-pack-object, so we hang forever (why don't we get SIGALRM?)
      instead of dying with SIGPIPE if the latter dies, which seems to be the
      norm if the client disconnects.
      
      Thanks to Johannes Schindelin <Johannes.Schindelin@gmx.de> for
      pointing out where this close() needed to go.
      
      This patch has been tested on kernel.org for several weeks and appears
      to resolve the problem of git-upload-pack processes hanging around
      forever.
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      (cherry picked from commit 465b3518)
      3ac53e0d
  34. 28 Mar 2007 (1 commit)
    •
      git-upload-pack: make sure we close unused pipe ends · 465b3518
      Committed by H. Peter Anvin
      Right now, we don't close the read end of the pipe when git-upload-pack
      runs git-pack-object, so we hang forever (why don't we get SIGALRM?)
      instead of dying with SIGPIPE if the latter dies, which seems to be the
      norm if the client disconnects.
      
      Thanks to Johannes Schindelin <Johannes.Schindelin@gmx.de> for
      pointing out where this close() needed to go.
      
      This patch has been tested on kernel.org for several weeks and appears
      to resolve the problem of git-upload-pack processes hanging around
      forever.
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Junio C Hamano <junkio@cox.net>
      465b3518