1. 23 Feb 2008, 7 commits
    • pull: pass --strategy along to rebase · 0d2dd191
      Committed by Jay Soffian
      rebase supports --strategy, so pull should pass the option along to it.
      Signed-off-by: Jay Soffian <jaysoffian@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
    • Use helper function for copying index entry information · eb7a2f1d
      Committed by Linus Torvalds
      We used to just memcpy() the index entry when we copied the stat() and
      SHA1 hash information, which worked well enough back when the index
      entry was just an exact bit-for-bit representation of the information on
      disk.
      
      However, these days we actually have various management information in
      the cache entry too, and we should be careful to not overwrite it when
      we copy the stat information from another index entry.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
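      A minimal C sketch of the idea this commit describes, with hypothetical, heavily simplified field names (the real layout lives in git's cache.h): copy only the stat/SHA-1 data and preserve the destination's in-memory management flags, instead of memcpy()ing the whole entry.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical, simplified cache entry: on-disk stat/SHA-1 data plus
 * in-memory management flags that must survive a copy. */
struct cache_entry {
	unsigned int ce_mtime;   /* stat information (on disk) */
	unsigned int ce_size;
	unsigned char sha1[20];
	unsigned int ce_flags;   /* in-memory management bits  */
};

/* Instead of a blind memcpy() of the whole entry, copy only the
 * stat/SHA-1 fields and leave the destination's flags untouched. */
static void copy_cache_entry(struct cache_entry *dst,
			     const struct cache_entry *src)
{
	unsigned int saved_flags = dst->ce_flags;

	dst->ce_mtime = src->ce_mtime;
	dst->ce_size = src->ce_size;
	memcpy(dst->sha1, src->sha1, sizeof(dst->sha1));
	dst->ce_flags = saved_flags;   /* management info preserved */
}
```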
    • Name hash fixups: export (and rename) remove_hash_entry · d070e3a3
      Committed by Linus Torvalds
      This makes the name hash removal function (which really just sets
      the bit that disables lookups of the entry) available to external
      routines, and makes read_cache_unmerged() use it when it drops an
      unmerged entry from the index.
      
      It's renamed to remove_index_entry(), and we drop the (unused) 'istate'
      argument.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
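      A sketch of the exported function's behavior, assuming a made-up flag value (the real CE_UNHASHED bit is defined in git's cache.h): "removal" does not unlink the entry from its hash chain, it only marks it invisible to lookups.

```c
#include <assert.h>

/* Assumed flag value for illustration only. */
#define CE_UNHASHED 0x0001

struct cache_entry {
	unsigned int ce_flags;
};

/* "Removing" an entry from the name hash does not take it off the
 * hash chain; it only sets the bit that makes lookups skip it. */
static void remove_index_entry(struct cache_entry *ce)
{
	ce->ce_flags |= CE_UNHASHED;
}
```

      In this sketch, read_cache_unmerged() would call remove_index_entry() on each unmerged entry it drops, so later name lookups no longer find it.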
    • Fix name re-hashing semantics · a22c6371
      Committed by Linus Torvalds
      We handled the case of removing and re-inserting cache entries badly,
      which is something that merging commonly needs to do (removing the
      different stages, and then re-inserting one of them as the merged
      state).
      
      We even had a rather ugly special case for this, where
      replace_index_entry() basically turned itself into a no-op if the
      new and the old entries were the same, exactly because the hash
      routines didn't handle it on their own.
      
      So this patch introduces a HASHED bit alongside the UNHASHED bit,
      and inserting an entry into the name hash now involves:
      
       - clear the UNHASHED bit, because now it's valid again for lookup
         (which is really all that UNHASHED meant)
      
       - if we're being lazy, we're done here (but we still want to clear the
         UNHASHED bit regardless of lazy mode, since we can become unlazy
         later, and so we need the UNHASHED bit to always be set correctly,
         even if we never actually insert the entry into the hash list)
      
       - if it was already hashed, we just leave it on the list
      
       - otherwise mark it HASHED and insert it into the list
      
      All this means that unhashing and re-hashing a name just works
      automatically.  Obviously, you cannot change the name of an entry
      (that would be a serious bug), but nothing can validly do that
      anyway (you'd have to allocate a new struct cache_entry, since the
      name length could change), so that's not a new limitation.
      
      The code actually gets simpler in many ways, although the lazy
      hashing does mean that there are a few odd cases (i.e., something
      can be marked unhashed even though it was never on the hash list
      in the first place, and isn't actually marked hashed!).
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
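      The insertion rules above can be sketched in a few lines of C. This is a simplified model, not git's actual code: the flag values, the single chain pointer, and the function name are all assumptions made for illustration.

```c
#include <assert.h>
#include <stddef.h>

#define CE_UNHASHED 0x0001   /* entry invalid for lookup (assumed value) */
#define CE_HASHED   0x0002   /* entry is on a hash chain (assumed value) */

struct cache_entry {
	unsigned int ce_flags;
	struct cache_entry *next;   /* hash chain link */
};

/* Insert an entry into (one chain of) the name hash, following the
 * steps described above.  Lazy mode skips the list insertion but
 * still clears CE_UNHASHED, since we can become unlazy later. */
static void hash_index_entry(struct cache_entry **chain,
			     struct cache_entry *ce, int lazy)
{
	ce->ce_flags &= ~CE_UNHASHED;   /* valid for lookup again */
	if (lazy)
		return;
	if (ce->ce_flags & CE_HASHED)
		return;                 /* already on the list: leave it */
	ce->ce_flags |= CE_HASHED;
	ce->next = *chain;
	*chain = ce;
}
```

      With both bits, removing and re-inserting the same entry is a no-op on the chain itself: the second insertion sees CE_HASHED and leaves the list alone, so no duplicate links appear.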
    • Merge branch 'maint' · 22c430ad
      Committed by Junio C Hamano
      * maint:
        hash: fix lookup_hash semantics
    • hash: fix lookup_hash semantics · 9ea0980a
      Committed by Jeff King
      We were returning the _address of_ the stored item (or NULL)
      instead of the item itself. While this sort of indirection
      is useful for insertion (since you can lookup and then
      modify), it is unnecessary for read-only lookup. Since the
      hash code splits these functions between the internal
      lookup_hash_entry function and the public lookup_hash
      function, it makes sense for the latter to provide what
      users of the library expect.
      
      The result of this was that the index caching returned bogus
      results on lookup. We unfortunately didn't catch this
      because we were returning a "struct cache_entry **" as a
      "void *", and accidentally assigning it to a "struct
      cache_entry *".
      
      As it happens, this actually _worked_ most of the time,
      because the entries were defined as:
      
        struct cache_entry {
      	  struct cache_entry *next;
      	  ...
        };
      
      meaning that interpreting a "struct cache_entry **" as a
      "struct cache_entry *" would yield an entry where all fields
      were totally bogus _except_ for the next pointer, which
      pointed to the actual cache entry. When walking the list, we
      would look at the bogus "name" field, which was unlikely to
      match our lookup, and then proceed to the "real" entry.
      
      The reading of bogus data was silently ignored most of the
      time, but could cause a segfault for some data (which seems
      to be more common on OS X).
      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
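      The pointer confusion can be sketched with two hypothetical, heavily simplified lookup shapes (the real lookup_hash takes a hash table, not a slot pointer): the buggy one returns the address of the slot holding the entry, the fixed one returns the entry itself, and the void * return type hides the difference from the compiler.

```c
#include <assert.h>
#include <stddef.h>

struct cache_entry {
	struct cache_entry *next;   /* first field, as in the commit */
	const char *name;
};

/* Buggy shape: returns the address of the hash slot that stores the
 * entry.  Behind void *, the caller cannot tell it got the wrong
 * level of indirection. */
static void *lookup_hash_buggy(struct cache_entry **slot)
{
	return slot;
}

/* Fixed shape: dereference the slot and return the stored entry. */
static void *lookup_hash_fixed(struct cache_entry **slot)
{
	return *slot;
}
```

      Because "next" is the first field, misreading the slot as a cache entry yields garbage in every field except "next", which happens to point at the real entry; that is why the bug mostly went unnoticed until it segfaulted.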
    • gitweb: Better chopping in commit search results · be8b9063
      Committed by Junio C Hamano
      When searching commit messages (commit search), if the matched
      string was too long, the generated HTML was munged, leading to an
      ill-formed XHTML document.
      
      Now gitweb chops the leading, trailing, and matched parts,
      HTML-escapes those parts, then composes and marks up the match
      info.  HTML output is never chopped.  Limiting the match info to
      80 columns (with slop) is now done by dividing the characters that
      remain after chopping the match equally between the leading and
      trailing parts, not by chopping the composed, HTML-marked-up
      output.
      Noticed-by: Jean-Baptiste Quenot <jbq@caraldi.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
      Signed-off-by: Jakub Narebski <jnareb@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>
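      The column budgeting can be sketched with a hypothetical helper (gitweb itself is Perl; this C version, including the name split_context and the choice to give the leading part any odd column, is an illustration of the arithmetic, not gitweb's code): whatever remains of the budget after the match is divided equally between the leading and trailing context.

```c
#include <assert.h>

/* Split the columns left over after the matched part equally between
 * the leading and trailing context.  The leading part takes any odd
 * column; a match longer than the budget leaves no context at all. */
static void split_context(int budget, int match_len,
			  int *lead, int *trail)
{
	int rest = budget - match_len;

	if (rest < 0)
		rest = 0;
	*trail = rest / 2;
	*lead = rest - *trail;
}
```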
  2. 22 Feb 2008, 4 commits
  3. 21 Feb 2008, 19 commits
  4. 20 Feb 2008, 10 commits