  1. 12 Sep 2011, 1 commit
    • Remove many -Wcast-qual warnings · 1b81c2fe
      Peter Eisentraut authored
      This addresses only those cases that are easy to fix by adding or
      moving a const qualifier or removing an unnecessary cast.  There are
      many more complicated cases remaining.
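
      A minimal illustration of the kind of change involved (hypothetical code, not
      taken from the commit): casting away a const qualifier is what -Wcast-qual
      flags, and the usual fix is to carry the qualifier through instead of casting
      it off.

      ```c
      #include <stdio.h>
      #include <string.h>

      /* Before: the parameter was declared "char *", so callers holding const
       * data had to write (char *) msg, which -Wcast-qual warns about. */
      static size_t
      message_length(const char *msg)     /* after: accept const directly */
      {
          return strlen(msg);
      }

      int
      main(void)
      {
          const char *greeting = "hello";

          /* No qualifier-stripping cast needed once the prototype is const. */
          printf("%zu\n", message_length(greeting));
          return 0;
      }
      ```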
  2. 01 Sep 2011, 1 commit
  3. 21 Aug 2011, 1 commit
    • Fix performance problem when building a lossy tidbitmap. · 08e1eedf
      Tom Lane authored
      As pointed out by Sergey Koposov, repeated invocations of tbm_lossify can
      make building a large tidbitmap into an O(N^2) operation.  To fix, make
      sure we remove more than the minimum amount of information per call, and
      add a fallback path to behave sanely if we're unable to fit the bitmap
      within the requested amount of memory.
      
      This has been wrong since the tidbitmap code was written, so back-patch
      to all supported branches.
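
      A toy model of the complexity argument (hypothetical names, not the actual
      tidbitmap code): if each compaction pass frees only just enough room, the very
      next insertions overflow again, so building N entries costs O(N^2); compacting
      well below the limit, with a fallback that enlarges the budget when compaction
      cannot help, keeps the cost amortized.

      ```c
      #include <stddef.h>
      #include <stdio.h>

      /* Toy stand-in for a bitmap that must stay within max_entries.  compact()
       * models lossification: it can shrink the entry count, but in this model
       * never below half of its current size. */
      typedef struct
      {
          size_t      nentries;       /* current number of entries */
          size_t      max_entries;    /* requested memory budget */
      } ToyBitmap;

      static void
      compact(ToyBitmap *bm, size_t target)
      {
          size_t      lowest = bm->nentries / 2; /* best this model can achieve */

          bm->nentries = (target > lowest) ? target : lowest;
      }

      static void
      add_entry(ToyBitmap *bm)
      {
          bm->nentries++;
          if (bm->nentries <= bm->max_entries)
              return;

          /* Shrink well below the budget (here to 90%) rather than to just
           * under it, so compaction does not run again almost immediately. */
          compact(bm, bm->max_entries * 9 / 10);

          /* Fallback: if compaction still leaves us over budget, enlarge the
           * budget instead of thrashing. */
          if (bm->nentries > bm->max_entries)
              bm->max_entries = bm->nentries;
      }

      int
      main(void)
      {
          ToyBitmap   bm = {0, 1000};

          for (int i = 0; i < 100000; i++)
              add_entry(&bm);
          printf("entries=%zu budget=%zu\n", bm.nentries, bm.max_entries);
          return 0;
      }
      ```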
  4. 02 Jan 2011, 1 commit
  5. 21 Sep 2010, 1 commit
  6. 03 Jan 2010, 1 commit
  7. 11 Jun 2009, 1 commit
  8. 25 Mar 2009, 1 commit
    • Implement "fastupdate" support for GIN indexes, in which we try to accumulate · ff301d6e
      Tom Lane authored
      multiple index entries in a holding area before adding them to the main index
      structure.  This helps because bulk insert is (usually) significantly faster
      than retail insert for GIN.
      
      This patch also removes GIN support for amgettuple-style index scans.  The
      API defined for amgettuple is difficult to support with fastupdate, and
      the previously committed partial-match feature didn't really work with
      it either.  We might eventually figure a way to put back amgettuple
      support, but it won't happen for 8.4.
      
      catversion bumped because of change in GIN's pg_am entry, and because
      the format of GIN indexes changed on-disk (there's a metapage now,
      and possibly a pending list).
      
      Teodor Sigaev
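
      A hedged sketch of the pending-list idea, using made-up types rather than the
      real GIN structures: new entries land in a cheap unordered holding area and
      are merged into the main structure in bulk, so the per-entry cost of the main
      structure is paid once per batch instead of once per insertion.

      ```c
      #include <stdio.h>
      #include <stdlib.h>

      #define PENDING_MAX 64

      typedef struct
      {
          int         pending[PENDING_MAX];   /* unordered holding area */
          int         npending;
          int        *main_entries;           /* stand-in for the main index */
          int         nmain;
      } ToyIndex;

      static void
      flush_pending(ToyIndex *idx)
      {
          /* One bulk operation amortizes the per-entry cost; a real index would
           * sort the pending entries and merge them into its pages here. */
          idx->main_entries = realloc(idx->main_entries,
                                      (idx->nmain + idx->npending) * sizeof(int));
          if (idx->main_entries == NULL)
              exit(1);            /* out of memory; keep the sketch simple */
          for (int i = 0; i < idx->npending; i++)
              idx->main_entries[idx->nmain++] = idx->pending[i];
          idx->npending = 0;
      }

      static void
      index_insert(ToyIndex *idx, int key)
      {
          idx->pending[idx->npending++] = key; /* cheap append, no tree descent */
          if (idx->npending == PENDING_MAX)
              flush_pending(idx);              /* bulk-move into the main index */
      }

      int
      main(void)
      {
          ToyIndex    idx = {{0}, 0, NULL, 0};

          for (int key = 0; key < 1000; key++)
              index_insert(&idx, key);
          flush_pending(&idx);                 /* explicit cleanup of leftovers */
          printf("main entries: %d, pending: %d\n", idx.nmain, idx.npending);
          free(idx.main_entries);
          return 0;
      }
      ```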
  9. 11 Jan 2009, 1 commit
  10. 02 Jan 2009, 1 commit
  11. 11 Apr 2008, 1 commit
    • Replace "amgetmulti" AM functions with "amgetbitmap", in which the whole · 4e82a954
      Tom Lane authored
      indexscan always occurs in one call, and the results are returned in a
      TIDBitmap instead of a limited-size array of TIDs.  This should improve
      speed a little by reducing AM entry/exit overhead, and it is necessary
      infrastructure if we are ever to support bitmap indexes.
      
      In an only slightly related change, add support for TIDBitmaps to preserve
      (somewhat lossily) the knowledge that particular TIDs reported by an index
      need to have their quals rechecked when the heap is visited.  This facility
      is not really used yet; we'll need to extend the forced-recheck feature to
      plain indexscans before it's useful, and that hasn't been coded yet.
      The intent is to use it to clean up 8.3's horrid @@@ kluge for text search
      with weighted queries.  There might be other uses in future, but that one
      alone is sufficient reason.
      
      Heikki Linnakangas, with some adjustments by me.
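
      A hedged sketch of the two ideas with hypothetical types: the access method
      reports every match in a single call by setting bits in a bitmap, and a bit
      can additionally be flagged as needing its quals rechecked when the heap is
      actually visited.

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      #define NPAGES 8

      typedef struct
      {
          bool        matched[NPAGES];    /* page possibly contains matches */
          bool        recheck[NPAGES];    /* quals must be rechecked at the heap */
      } ToyTIDBitmap;

      /* Stand-in for an AM's "report the whole scan in one call" entry point. */
      static void
      toy_amgetbitmap(ToyTIDBitmap *tbm)
      {
          tbm->matched[2] = true;                 /* exact match on page 2 */
          tbm->matched[5] = true;                 /* uncertain match on page 5 */
          tbm->recheck[5] = true;
      }

      int
      main(void)
      {
          ToyTIDBitmap tbm = {{false}, {false}};

          toy_amgetbitmap(&tbm);

          /* The executor then visits only the heap pages the bitmap reports. */
          for (int page = 0; page < NPAGES; page++)
          {
              if (!tbm.matched[page])
                  continue;
              printf("visit page %d%s\n", page,
                     tbm.recheck[page] ? " (recheck quals on each tuple)" : "");
          }
          return 0;
      }
      ```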
  12. 02 Jan 2008, 1 commit
  13. 21 Sep 2007, 1 commit
    • HOT updates. When we update a tuple without changing any of its indexed · 282d2a03
      Tom Lane authored
      columns, and the new version can be stored on the same heap page, we no longer
      generate extra index entries for the new version.  Instead, index searches
      follow the HOT-chain links to ensure they find the correct tuple version.
      
      In addition, this patch introduces the ability to "prune" dead tuples on a
      per-page basis, without having to do a complete VACUUM pass to recover space.
      VACUUM is still needed to clean up dead index entries, however.
      
      Pavan Deolasee, with help from a bunch of other people.
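
      A hedged sketch of the chain-following idea with made-up structures (real HOT
      chains use line pointers and t_ctid links): the index keeps pointing at the
      original slot, and a lookup walks the on-page links to reach the current
      version, so the update needs no new index entry.

      ```c
      #include <stdio.h>

      #define NVERSIONS 3

      typedef struct
      {
          int         indexed_key;    /* indexed column, unchanged by the update */
          int         payload;        /* non-indexed column that keeps changing */
          int         next;           /* next version in the chain, -1 = none */
          int         live;           /* 1 if this is the visible version */
      } ToyTuple;

      int
      main(void)
      {
          ToyTuple    page[NVERSIONS];
          int         chain_head = 0; /* the only slot the index points to */

          /* Original tuple plus two HOT updates, all on the same "page". */
          page[0] = (ToyTuple){42, 100, 1, 0};
          page[1] = (ToyTuple){42, 200, 2, 0};
          page[2] = (ToyTuple){42, 300, -1, 1};

          /* Index lookup: start at the chain head, walk to the live version. */
          int         slot = chain_head;

          while (!page[slot].live && page[slot].next != -1)
              slot = page[slot].next;

          printf("key %d -> payload %d (slot %d)\n",
                 page[slot].indexed_key, page[slot].payload, slot);
          return 0;
      }
      ```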
  14. 27 Apr 2007, 1 commit
    • Fix dynahash.c to suppress hash bucket splits while a hash_seq_search() scan · a2e923a6
      Tom Lane authored
      is in progress on the same hashtable.  This seems the least invasive way to
      fix the recently-recognized problem that a split could cause the scan to
      visit entries twice or (with much lower probability) miss them entirely.
      The only field-reported problem caused by this is the "failed to re-find
      shared lock object" PANIC in COMMIT PREPARED reported by Michel Dorochevsky,
      which was caused by multiply visited entries.  However, it seems certain
      that mdsync() is vulnerable to missing required fsync's due to missed
      entries, and I am fearful that RelationCacheInitializePhase2() might be at
      risk as well.  Because of that and the generalized hazard presented by this
      bug, back-patch all the supported branches.
      
      Along the way, fix pg_prepared_statement() and pg_cursor() to not assume
      that the hashtables they are examining will stay static between calls.
      This is risky regardless of the newly noted dynahash problem, because
      hash_seq_search() has never promised to cope with deletion of table entries
      other than the just-returned one.  There may be no bug here because the only
      supported way to call these functions is via ExecMakeTableFunctionResult()
      which will cycle them to completion before doing anything very interesting,
      but it seems best to get rid of the assumption.  This affects 8.2 and HEAD
      only, since those functions weren't there earlier.
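
      A hedged sketch of the suppression idea, not the dynahash code itself: bucket
      splits move entries between buckets, so a split during a sequential scan can
      make the scan see a moved entry twice or miss it entirely; counting active
      scans and deferring expansion while the count is nonzero avoids both.

      ```c
      #include <stdio.h>

      typedef struct
      {
          int         nentries;
          int         nbuckets;
          int         active_scans;   /* number of in-progress seq scans */
      } ToyHash;

      static void
      toy_insert(ToyHash *ht)
      {
          ht->nentries++;
          /* We would normally split when the fill factor gets too high... */
          if (ht->nentries > ht->nbuckets && ht->active_scans == 0)
              ht->nbuckets *= 2;      /* ...but only when no scan is in flight. */
      }

      int
      main(void)
      {
          ToyHash     ht = {0, 4, 0};

          ht.active_scans++;          /* begin a sequential scan */
          for (int i = 0; i < 100; i++)
              toy_insert(&ht);        /* inserts allowed, splits deferred */
          ht.active_scans--;          /* scan finished */

          toy_insert(&ht);            /* now the table may expand again */
          printf("entries=%d buckets=%d\n", ht.nentries, ht.nbuckets);
          return 0;
      }
      ```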
  15. 06 Jan 2007, 1 commit
  16. 14 Jul 2006, 1 commit
  17. 05 Mar 2006, 1 commit
  18. 15 Oct 2005, 1 commit
  19. 03 Sep 2005, 1 commit
    • Clean up a couple of ad-hoc computations of the maximum number of tuples · 35e9b1cc
      Tom Lane authored
      on a page, as suggested by ITAGAKI Takahiro.  Also, change a few places
      that were using some other estimates of max-items-per-page to consistently
      use MaxOffsetNumber.  This is conservatively large --- we could have used
      the new MaxHeapTuplesPerPage macro, or a similar one for index tuples ---
      but those places are simply declaring a fixed-size buffer and assuming it
      will work, rather than actively testing for overrun.  It seems safer to
      size these buffers in a way that can't overflow even if the page is
      corrupt.
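
      A hedged sketch of the sizing argument with simplified constants (not the
      real header definitions): the physical ceiling on line pointers per page is
      the page size divided by the line-pointer size, so a workspace sized by that
      bound cannot be overrun even if a corrupt page claims more items than any
      valid page could hold.

      ```c
      #include <stdio.h>

      #define TOY_BLCKSZ          8192    /* page size */
      #define TOY_LINE_PTR_SIZE   4       /* size of one line pointer */
      #define TOY_PAGE_HEADER     24      /* page header size */
      #define TOY_MIN_TUPLE       28      /* smallest possible heap tuple */

      /* Physical ceiling: a page cannot hold more line pointers than fit in it,
       * no matter what its header claims. */
      #define TOY_MAX_OFFSETS     (TOY_BLCKSZ / TOY_LINE_PTR_SIZE)

      /* Tighter estimate that assumes every item is a valid tuple; fine for sane
       * pages, unsafe as a buffer bound if the page is corrupt. */
      #define TOY_MAX_TUPLES \
          ((TOY_BLCKSZ - TOY_PAGE_HEADER) / (TOY_LINE_PTR_SIZE + TOY_MIN_TUPLE))

      int
      main(void)
      {
          /* Workspace sized by the physical ceiling: one slot per possible line
           * pointer always fits, with no overrun test needed. */
          unsigned short offsets[TOY_MAX_OFFSETS];
          int         noffsets = 0;

          /* Pretend we collected every possible offset from one page. */
          for (int off = 1; off <= TOY_MAX_OFFSETS; off++)
              offsets[noffsets++] = (unsigned short) off;

          printf("collected %d offsets; tuple-based estimate would be %d\n",
                 noffsets, TOY_MAX_TUPLES);
          return 0;
      }
      ```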
  20. 29 Aug 2005, 1 commit
  21. 24 Jul 2005, 1 commit
  22. 29 May 2005, 1 commit
    • Modify hash_search() API to prevent future occurrences of the error · e92a8827
      Tom Lane authored
      spotted by Qingqing Zhou.  The HASH_ENTER action now automatically
      fails with elog(ERROR) on out-of-memory --- which incidentally lets
      us eliminate duplicate error checks in quite a bunch of places.  If
      you really need the old return-NULL-on-out-of-memory behavior, you
      can ask for HASH_ENTER_NULL.  But there is now an Assert in that path
      checking that you aren't hoping to get that behavior in a palloc-based
      hash table.
      
      Along the way, remove the old HASH_FIND_SAVE/HASH_REMOVE_SAVED actions,
      which were not being used anywhere anymore, and were surely too ugly
      and unsafe to want to see revived again.
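
      A hedged sketch of the API shape with invented names (the real entry point is
      hash_search() in dynahash.c): the default enter action reports out-of-memory
      itself, which is what lets call sites drop their duplicated NULL checks, while
      the return-NULL behavior remains available as an explicit variant.

      ```c
      #include <stdio.h>
      #include <stdlib.h>

      typedef enum { TOY_ENTER, TOY_ENTER_NULL } ToyAction;

      static void *
      toy_hash_search(int key, ToyAction action)
      {
          void       *entry = malloc(sizeof(int));    /* pretend allocation */

          if (entry == NULL)
          {
              if (action == TOY_ENTER_NULL)
                  return NULL;    /* caller opted in to handling the failure */
              fprintf(stderr, "out of memory\n");     /* stands in for elog(ERROR) */
              exit(1);
          }
          *(int *) entry = key;
          return entry;
      }

      int
      main(void)
      {
          /* New-style call site: no duplicated out-of-memory check needed. */
          int        *a = toy_hash_search(1, TOY_ENTER);

          /* Old-style behavior on request: the caller checks for NULL itself. */
          int        *b = toy_hash_search(2, TOY_ENTER_NULL);

          if (b == NULL)
              fprintf(stderr, "insertion failed, recovering\n");

          printf("%d %d\n", *a, b ? *b : -1);
          free(a);
          free(b);
          return 0;
      }
      ```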
  23. 17 May 2005, 1 commit
  24. 20 Apr 2005, 1 commit
    • Create executor and planner-backend support for decoupled heap and index · 4a8c5d03
      Tom Lane authored
      scans, using in-memory tuple ID bitmaps as the intermediary.  The planner
      frontend (path creation and cost estimation) is not there yet, so none
      of this code can be executed.  I have tested it using some hacked planner
      code that is far too ugly to see the light of day, however.  Committing
      now so that the bulk of the infrastructure changes go in before the tree
      drifts under me.
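
      A hedged sketch of the decoupling with made-up structures: one phase marks
      matching tuple IDs in an in-memory bitmap without touching the heap, and a
      separate phase walks the heap in physical order fetching only the marked
      tuples; several index scans could OR their bitmaps together before the single
      heap pass.

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      #define NTUPLES 16

      typedef struct
      {
          bool        bits[NTUPLES];
      } ToyTIDBitmap;

      /* "Index scan" phase: mark matching tuple IDs without touching the heap. */
      static void
      index_phase(ToyTIDBitmap *tbm)
      {
          tbm->bits[3] = true;
          tbm->bits[7] = true;
          tbm->bits[11] = true;
      }

      /* "Heap scan" phase: walk the heap in order, fetching only marked tuples. */
      static void
      heap_phase(const ToyTIDBitmap *tbm, const int *heap)
      {
          for (int tid = 0; tid < NTUPLES; tid++)
              if (tbm->bits[tid])
                  printf("tid %d -> value %d\n", tid, heap[tid]);
      }

      int
      main(void)
      {
          int         heap[NTUPLES];
          ToyTIDBitmap tbm = {{false}};

          for (int i = 0; i < NTUPLES; i++)
              heap[i] = i * 10;

          index_phase(&tbm);
          heap_phase(&tbm, heap);
          return 0;
      }
      ```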
  25. 18 Apr 2005, 1 commit