1. 10 Mar 2008 (8 commits)
  2. 09 Mar 2008 (5 commits)
    • Remove postmaster.c's check that NBuffers is at least twice MaxBackends. · d9384a4b
      Committed by Tom Lane
      With the addition of multiple autovacuum workers, our choices were to delete
      the check, document the interaction with autovacuum_max_workers, or complicate
      the check to try to hide that interaction.  Since this restriction has never
      been adequate to ensure backends can't run out of pinnable buffers, it
      isn't valuable enough to justify the second or third choices.
      Per discussion of a complaint from Andreas Kling (see also bug #3888).
      
      This commit also removes several documentation references to this restriction,
      but I'm not sure I got them all.
    • Change patternsel() so that instead of switching from a pure · f4230d29
      Committed by Tom Lane
      pattern-examination heuristic method to purely histogram-driven selectivity at
      histogram size 100, we compute both estimates and use a weighted average.
      The weight put on the heuristic estimate decreases linearly with histogram
      size, dropping to zero for 100 or more histogram entries.
      Likewise in ltreeparentsel().  After a patch by Greg Stark, though I
      reorganized the logic a bit to give the caller of histogram_selectivity()
      more control.
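      The linear blending described above can be sketched as follows. This is
      an illustrative reconstruction, not PostgreSQL's actual code; the
      function and parameter names are invented:

      ```c
      #include <assert.h>
      #include <math.h>
      #include <stdio.h>

      /*
       * Illustrative sketch of the weighting scheme described above: the
       * pattern-heuristic estimate's weight falls linearly with histogram
       * size and reaches zero at 100 or more histogram entries.
       */
      static double
      blend_selectivity(double heuristic_sel, double hist_sel, int hist_size)
      {
          double heur_weight = (hist_size >= 100) ? 0.0
                                                  : (100.0 - hist_size) / 100.0;

          return heur_weight * heuristic_sel + (1.0 - heur_weight) * hist_sel;
      }

      int
      main(void)
      {
          /* No histogram entries: the heuristic estimate is used alone. */
          assert(blend_selectivity(0.5, 0.1, 0) == 0.5);
          /* 100 or more entries: purely histogram-driven selectivity. */
          assert(blend_selectivity(0.5, 0.1, 100) == 0.1);
          /* 50 entries: an equal-weight average of the two estimates. */
          assert(fabs(blend_selectivity(0.5, 0.1, 50) - 0.3) < 1e-12);
          printf("ok\n");
          return 0;
      }
      ```

      The appeal of the blend is that the estimate degrades gracefully: there
      is no abrupt behavior change when a histogram crosses the size-100
      threshold.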
    • Modify prefix_selectivity() so that it will never estimate the selectivity · 422495d0
      Committed by Tom Lane
      of the generated range condition var >= 'foo' AND var < 'fop' as being less
      than what eqsel() would estimate for var = 'foo'.  This is intuitively
      reasonable and it gets rid of the need for some entirely ad-hoc coding we
      formerly used to reject bogus estimates.  The basic problem here is that
      if the prefix is more than a few characters long, the two boundary values
      are too close together to be distinguishable by comparison to the column
      histogram, resulting in a selectivity estimate of zero, which is often
      not very sane.  Change motivated by an example from Peter Eisentraut.
      
      Arguably this is a bug fix, but I'll refrain from back-patching it
      for the moment.
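      The floor described above amounts to a simple clamp. The sketch below is
      hypothetical (names invented, not prefix_selectivity()'s real code), but
      captures the rule:

      ```c
      #include <assert.h>
      #include <stdio.h>

      /*
       * Hypothetical sketch: never let the estimate for the generated range
       * condition var >= 'foo' AND var < 'fop' fall below the equality
       * estimate for var = 'foo'.  With a long prefix, both boundary strings
       * land between the same histogram bins, so the raw range estimate can
       * collapse to zero.
       */
      static double
      clamped_prefix_selectivity(double range_sel, double eq_sel)
      {
          return (range_sel < eq_sel) ? eq_sel : range_sel;
      }

      int
      main(void)
      {
          /* A collapsed (zero) range estimate is lifted to the equality estimate. */
          assert(clamped_prefix_selectivity(0.0, 0.001) == 0.001);
          /* A sane range estimate passes through unchanged. */
          assert(clamped_prefix_selectivity(0.05, 0.001) == 0.05);
          printf("ok\n");
          return 0;
      }
      ```

      The clamp is intuitively sound because the rows matching var = 'foo' are
      a subset of the rows matching the range condition, so the range's
      selectivity can never truly be smaller.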
    • Refactor heap_page_prune so that instead of changing item states on-the-fly, · 6f10eb21
      Committed by Tom Lane
      it accumulates the set of changes to be made and then applies them.  It had
      to accumulate the set of changes anyway to prepare a WAL record for the
      pruning action, so this isn't an enormous change; the only new complexity is
      to not doubly mark tuples that are visited twice in the scan.  The main
      advantage is that we can substantially reduce the scope of the critical
      section in which the changes are applied, thus avoiding PANIC in foreseeable
      cases like running out of memory in inval.c.  A nice secondary advantage is
      that it is now far clearer that WAL replay will actually do the same thing
      that the original pruning did.
      
      This commit doesn't do anything about the open problem that
      CacheInvalidateHeapTuple doesn't have the right semantics for a CTID change
      caused by collapsing out a redirect pointer.  But whatever we do about that,
      it'll be a good idea to not do it inside a critical section.
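      The accumulate-then-apply pattern described above can be sketched as
      follows. The structures and names here are invented for illustration,
      not heap_page_prune's actual ones:

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stdio.h>

      #define MAX_ITEMS 16

      /*
       * Two-phase sketch: phase one scans and accumulates the intended
       * item-state changes, refusing to doubly mark an item visited twice;
       * phase two applies the whole set at once.  Phase two is the part that
       * would sit inside the now much smaller critical section, after the
       * WAL record has been built from the same accumulated set.
       */
      typedef struct PruneState
      {
          bool marked[MAX_ITEMS];   /* guards against double-marking */
          int  to_dead[MAX_ITEMS];  /* accumulated changes, in visit order */
          int  ndead;
      } PruneState;

      /* Phase 1: record a change; returns false if the item was already seen. */
      static bool
      record_dead(PruneState *prstate, int item)
      {
          if (prstate->marked[item])
              return false;         /* visited twice in the scan: skip */
          prstate->marked[item] = true;
          prstate->to_dead[prstate->ndead++] = item;
          return true;
      }

      /* Phase 2: apply every accumulated change in one short pass. */
      static void
      apply_changes(const PruneState *prstate, int *item_state)
      {
          for (int i = 0; i < prstate->ndead; i++)
              item_state[prstate->to_dead[i]] = 1;   /* 1 = dead, say */
      }

      int
      main(void)
      {
          PruneState prstate = {0};
          int        item_state[MAX_ITEMS] = {0};

          assert(record_dead(&prstate, 3));
          assert(!record_dead(&prstate, 3));  /* second visit is ignored */
          assert(record_dead(&prstate, 7));
          apply_changes(&prstate, item_state);
          assert(item_state[3] == 1 && item_state[7] == 1 && item_state[0] == 0);
          printf("ok\n");
          return 0;
      }
      ```

      Because the apply pass only stores precomputed states, it cannot fail
      partway through for reasons like memory exhaustion, which is what makes
      it safe to run inside a critical section.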
    • Add: · cc05d051
      Committed by Bruce Momjian
      >
      > * Consider a function-based API for '@@' full text searches
      >
      >   http://archives.postgresql.org/pgsql-hackers/2007-11/msg00511.php
      >
  3. 08 Mar 2008 (11 commits)
  4. 07 Mar 2008 (16 commits)