1. 04 Nov 2009, 2 commits
  2. 03 Nov 2009, 4 commits
    • Fix regression tests for psql \d view patch · 16cd34a4
      Committed by Peter Eisentraut
    • Improve PL/Python elog output · 2e3b16c8
      Committed by Peter Eisentraut
      When the elog functions (plpy.info etc.) get a single argument, just print
      that argument instead of printing the single-member tuple like ('foo',).
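      The new formatting rule can be sketched in plain Python (this is an illustrative model of the behavior described above, not the actual PL/Python C code; `format_elog_args` is an invented name):

```python
# Hypothetical sketch of the fixed plpy.info()/plpy.notice() argument
# formatting: a single argument is reported bare, several arguments are
# reported as a tuple.
def format_elog_args(args):
    if len(args) == 1:
        return str(args[0])          # 'foo', not "('foo',)"
    return str(tuple(args))

print(format_elog_args(('foo',)))        # the old code printed ('foo',)
print(format_elog_args(('foo', 'bar')))
```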
    • In psql, show view definition only with \d+, not with \d · 2fe1b4dd
      Committed by Peter Eisentraut
      The rationale is that view definitions tend to be long and obscure the
      main information about the view.
    • Fix obscure segfault condition in PL/Python · 9e411146
      Committed by Peter Eisentraut
      In PLy_output(), when the elog() call in the TRY branch throws an exception
      (as can happen when a statement timeout kicks in, for example), the
      PyErr_SetString() call in the CATCH branch can segfault: the preceding
      Py_XDECREF(so) call releases memory that is still referenced by the sv
      variable passed to PyErr_SetString(), because sv points into memory
      owned by so.
      
      Backpatched back to 8.0, where this code was introduced.
      
      I also threw in a couple of volatile declarations for variables that are used
      before and after the TRY.  I don't think they caused the crash that I
      observed, but they could become issues.
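      The bug is a use-after-free ordering problem. Here is a toy Python model of it (names and the `ToyObj` class are invented for illustration; the real code deals with CPython refcounts and C pointers):

```python
# Toy model of the ordering bug: releasing an owner object while a
# borrowed view into its memory is still needed.
class ToyObj:
    def __init__(self, data):
        self.data = data             # "memory owned by so"
    def decref(self):
        self.data = None             # refcount hits zero: memory freed

def report_error_buggy(so):
    sv = lambda: so.data             # sv points into so's memory
    so.decref()                      # Py_XDECREF(so) first -- sv now dangles
    return f"error: {sv()}"          # use-after-free

def report_error_fixed(so):
    sv = lambda: so.data
    message = f"error: {sv()}"       # use sv while so is still alive...
    so.decref()                      # ...then release so
    return message

print(report_error_buggy(ToyObj("bad value")))   # error: None
print(report_error_fixed(ToyObj("bad value")))   # error: bad value
```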
  3. 02 Nov 2009, 2 commits
    • Dept of second thoughts: after studying index_getnext() a bit more I realize · 7d535ebe
      Committed by Tom Lane
      that it can scribble on scan->xs_ctup.t_self while following HOT chains,
      so we can't rely on that to stay valid between hashgettuple() calls.
      Introduce a private variable in HashScanOpaque, instead.
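      The pattern here is caching a value privately rather than re-reading a shared field that other code may overwrite. A minimal sketch, with invented Python stand-ins for the C struct fields:

```python
# Illustrative model (not the real HashScanOpaque struct): keep a private
# copy of the current item pointer instead of trusting scan->xs_ctup.t_self,
# which index_getnext() may scribble on while following HOT chains.
class ScanState:
    def __init__(self):
        self.xs_ctup_t_self = None   # shared field, may be overwritten
        self.hashso_curpos = None    # private copy, stays valid

def hashgettuple(scan, tid):
    scan.hashso_curpos = tid         # remember our position privately
    scan.xs_ctup_t_self = tid        # report it through the shared field
    return tid

scan = ScanState()
hashgettuple(scan, (1, 7))
scan.xs_ctup_t_self = (9, 9)         # HOT-chain following scribbles here
print(scan.hashso_curpos)            # private position survives: (1, 7)
```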
    • Fix two serious bugs introduced into hash indexes by the 8.4 patch that made · c4afdca4
      Committed by Tom Lane
      hash indexes keep entries sorted by hash value.  First, the original plans for
      concurrency assumed that insertions would happen only at the end of a page,
      which is no longer true; this could cause scans to transiently fail to find
      index entries in the presence of concurrent insertions.  We can compensate
      by teaching scans to re-find their position after re-acquiring read locks.
      Second, neither the bucket split nor the bucket compaction logic had been
      fixed to preserve hashvalue ordering, so application of either of those
      processes could lead to permanent corruption of an index, in the sense
      that searches might fail to find entries that are present.
      
      This patch fixes the split and compaction logic to preserve hashvalue
      ordering, but it cannot do anything about pre-existing corruption.  We will
      need to recommend reindexing all hash indexes in the 8.4.2 release notes.
      
      To buy back the performance loss hereby induced in split and compaction,
      fix them to use PageIndexMultiDelete instead of retail PageIndexDelete
      operations.  We might later want to do something with qsort'ing the
      page contents rather than doing a binary search for each insertion,
      but that seemed more invasive than I cared to risk in a back-patch.
      
      Per bug #5157 from Jeff Janes and subsequent investigation.
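      The "re-find their position" idea relies on page entries being sorted by hash value, so a binary search can relocate the last-returned item after the lock is re-acquired. A hedged sketch (data layout and names are invented; the real code works on page offsets):

```python
# Sketch: after dropping and re-taking the page read lock, re-find the
# last item returned by binary search on (hashvalue, tid) rather than
# trusting a stale offset that concurrent insertions may have shifted.
import bisect

def refind_position(page_items, last_item):
    """page_items: list of (hashvalue, tid) kept sorted by hash value."""
    return bisect.bisect_left(page_items, last_item)

page = [(10, 'a'), (20, 'b'), (30, 'c')]
print(refind_position(page, (20, 'b')))   # offset 1 before the insert
page.insert(1, (15, 'x'))                 # concurrent mid-page insertion
print(refind_position(page, (20, 'b')))   # correctly re-found at offset 2
```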
  4. 01 Nov 2009, 1 commit
  5. 31 Oct 2009, 2 commits
    • Implement parser hooks for processing ColumnRef and ParamRef nodes, as per my · fb5d0580
      Committed by Tom Lane
      recent proposal.  As proof of concept, remove knowledge of Params from the
      core parser, arranging for them to be handled entirely by parser hook
      functions.  It turns out we need an additional hook for that --- I had
      forgotten about the code that handles inferring a parameter's type from
      context.
      
      This is a preliminary step towards letting plpgsql handle its variables
      through parser hooks.  Additional work remains to be done to expose the
      facility through SPI, but I think this is all the changes needed in the core
      parser.
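      The hook architecture can be sketched schematically: the core parser consults caller-supplied callbacks before and after its own resolution, so an embedding environment such as plpgsql can map names to its own variables. All names below are invented for illustration:

```python
# Schematic of parser hooks: a pre-hook may take over resolution of a
# reference entirely; a post-hook may adjust the node the core built.
def parse_column_ref(name, pre_hook=None, post_hook=None):
    if pre_hook is not None:
        node = pre_hook(name)
        if node is not None:
            return node                 # hook resolved it; core parser skipped
    node = ('ColumnRef', name)          # core parser's normal lookup
    if post_hook is not None:
        node = post_hook(node)
    return node

plpgsql_vars = {'n': ('Param', 1)}      # a plpgsql variable named n
print(parse_column_ref('n', pre_hook=plpgsql_vars.get))  # ('Param', 1)
print(parse_column_ref('m', pre_hook=plpgsql_vars.get))  # ('ColumnRef', 'm')
```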
    • Make the overflow guards in ExecChooseHashTableSize be more protective. · 8442317b
      Committed by Tom Lane
      The original coding ensured nbuckets and nbatch didn't exceed INT_MAX,
      which while not insane on its own terms did nothing to protect subsequent
      code like "palloc(nbatch * sizeof(BufFile *))".  Since enormous join size
      estimates might well be planner error rather than reality, it seems best
      to constrain the initial sizes to be not more than work_mem/sizeof(pointer),
      thus ensuring the allocated arrays don't exceed work_mem.  We will allow
      nbatch to get bigger than that during subsequent ExecHashIncreaseNumBatches
      calls, but we should still guard against integer overflow in those palloc
      requests.  Per bug #5145 from Bernt Marius Johnsen.
      
      Although the given test case only seems to fail back to 8.2, previous
      releases have variants of this issue, so patch all supported branches.
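      The clamping rule amounts to simple arithmetic: cap the initial counts so the pointer arrays they imply fit in work_mem. A minimal sketch (function name and constants are illustrative, not the actual ExecChooseHashTableSize code):

```python
# Cap nbuckets/nbatch at work_mem / sizeof(pointer), so the later
# palloc(nbatch * sizeof(BufFile *)) cannot exceed work_mem even when
# the planner's join-size estimate is wildly wrong.
SIZEOF_POINTER = 8   # assumption: 64-bit build

def clamp_hash_sizes(nbuckets_est, nbatch_est, work_mem_bytes):
    max_items = max(1, work_mem_bytes // SIZEOF_POINTER)
    return min(nbuckets_est, max_items), min(nbatch_est, max_items)

# A misestimate of 10**12 batches gets clamped to 128K with 1MB work_mem:
print(clamp_hash_sizes(1024, 10**12, work_mem_bytes=1024 * 1024))
```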
  6. 30 Oct 2009, 1 commit
  7. 29 Oct 2009, 3 commits
  8. 28 Oct 2009, 3 commits
    • When FOR UPDATE/SHARE is used with LIMIT, put the LockRows plan node · 46e3a16b
      Committed by Tom Lane
      underneath the Limit node, not atop it.  This fixes the old problem that such
      a query might unexpectedly return fewer rows than the LIMIT says, due to
      LockRows discarding updated rows.
      
      There is a related problem that LockRows might destroy the sort ordering
      produced by earlier steps; but fixing that by pushing LockRows below Sort
      would create serious performance problems that are unjustified in many
      real-world applications, as well as potential deadlock problems from locking
      many more rows than expected.  Instead, keep the present semantics of applying
      FOR UPDATE after ORDER BY within a single query level; but allow the user to
      specify the other way by writing FOR UPDATE in a sub-select.  To make that
      work, track whether FOR UPDATE appeared explicitly in sub-selects or got
      pushed down from the parent, and don't flatten a sub-select that contained an
      explicit FOR UPDATE.
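      Why the node ordering matters can be shown with a toy executor model (generators standing in for plan nodes; this is a sketch of the semantics, not PostgreSQL code):

```python
# LockRows discards rows whose lock attempt finds the row concurrently
# updated; placing it UNDER Limit keeps the returned row count right.
def lock_rows(rows, updated):
    return (r for r in rows if r not in updated)

def limit(rows, n):
    return (r for i, r in enumerate(rows) if i < n)

rows, updated = [1, 2, 3, 4], {2}
print(list(limit(lock_rows(rows, updated), 2)))   # new plan: [1, 3]
print(list(lock_rows(limit(rows, 2), updated)))   # old plan: [1] -- too few
```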
    • Fix AfterTriggerSaveEvent to use a test and elog, not just Assert, to check · 44956c52
      Committed by Tom Lane
      that it's called within an AfterTriggerBeginQuery/AfterTriggerEndQuery pair.
      The RI cascade triggers suppress that overhead on the assumption that they
      are always run non-deferred, so it's possible to violate the condition if
      someone mistakenly changes pg_trigger to mark such a trigger deferred.
      We don't really care about supporting that, but throwing an error instead
      of crashing seems desirable.  Per report from Marcelo Costa.
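      The pattern in miniature: an invariant that external mistakes (here, a hand-edited pg_trigger row) can violate gets a runtime check that reports an error, instead of an assertion that only fires in debug builds. A hedged Python analogue with invented names:

```python
# An assert vanishes in production builds (python -O, or NDEBUG in C),
# so a reachable invariant violation must be tested and reported instead.
def after_trigger_save_event(query_depth):
    # assert query_depth >= 0          # old: crash or silent corruption
    if query_depth < 0:                # new: test and raise a real error
        raise RuntimeError(
            "AfterTriggerSaveEvent() called outside of query")
    return "event saved"

print(after_trigger_save_event(0))
```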
    • Make FOR UPDATE/SHARE in the primary query not propagate into WITH queries; · 61e53282
      Committed by Tom Lane
      for example in
        WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE
      the FOR UPDATE will now affect bar but not foo.  This is more useful and
      consistent than the original 8.4 behavior, which tried to propagate FOR UPDATE
      into the WITH query but always failed due to assorted implementation
      restrictions.  Even though we are in process of removing those restrictions,
      it seems correct on philosophical grounds to not let the outer query's
      FOR UPDATE affect the WITH query.
      
      In passing, fix isLockedRel which frequently got things wrong in
      nested-subquery cases: "FOR UPDATE OF foo" applies to an alias foo in the
      current query level, not subqueries.  This has been broken for a long time,
      but it doesn't seem worth back-patching further than 8.4 because the actual
      consequences are minimal.  At worst the parser would sometimes get
      RowShareLock on a relation when it should be AccessShareLock or vice versa.
      That would only make a difference if someone were using ExclusiveLock
      concurrently, which no standard operation does, and anyway FOR UPDATE
      doesn't result in visible changes so it's not clear that the someone would
      notice any problem.  Between that and the fact that FOR UPDATE barely works
      with subqueries at all in existing releases, I'm not excited about worrying
      about it.
  9. 27 Oct 2009, 4 commits
  10. 26 Oct 2009, 1 commit
    • Re-implement EvalPlanQual processing to improve its performance and eliminate · 9f2ee8f2
      Committed by Tom Lane
      a lot of strange behaviors that occurred in join cases.  We now identify the
      "current" row for every joined relation in UPDATE, DELETE, and SELECT FOR
      UPDATE/SHARE queries.  If an EvalPlanQual recheck is necessary, we jam the
      appropriate row into each scan node in the rechecking plan, forcing it to emit
      only that one row.  The former behavior could rescan the whole of each joined
      relation for each recheck, which was terrible for performance, and what's much
      worse could result in duplicated output tuples.
      
      Also, the original implementation of EvalPlanQual could not re-use the recheck
      execution tree --- it had to go through a full executor init and shutdown for
      every row to be tested.  To avoid this overhead, I've associated a special
      runtime Param with each LockRows or ModifyTable plan node, and arranged to
      make every scan node below such a node depend on that Param.  Thus, by
      signaling a change in that Param, the EPQ machinery can just rescan the
      already-built test plan.
      
      This patch also adds a prohibition on set-returning functions in the
      targetlist of SELECT FOR UPDATE/SHARE.  This is needed to avoid the
      duplicate-output-tuple problem.  It seems fairly reasonable since the
      other restrictions on SELECT FOR UPDATE are meant to ensure that there
      is a unique correspondence between source tuples and result tuples,
      which an output SRF destroys as much as anything else does.
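      The "jam the row into each scan node" idea can be sketched as follows (an illustrative model, with invented names; the real mechanism uses a runtime Param to signal the rescan):

```python
# In EvalPlanQual recheck mode, each scan node emits only the one
# "current" row jammed into it, instead of rescanning its whole relation
# (which was slow and could produce duplicate output tuples).
class ScanNode:
    def __init__(self, relation):
        self.relation = relation
        self.epq_tuple = None            # row jammed in for the recheck

    def emit(self):
        if self.epq_tuple is not None:
            yield self.epq_tuple         # recheck: exactly one row
        else:
            yield from self.relation     # normal scan

scan = ScanNode([('r1',), ('r2',), ('r3',)])
print(list(scan.emit()))                 # normal scan: all three rows
scan.epq_tuple = ('r2',)                 # EPQ jams the current row
print(list(scan.emit()))                 # [('r2',)] -- one row, no rescan
```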
  11. 23 Oct 2009, 1 commit
  12. 22 Oct 2009, 3 commits
  13. 21 Oct 2009, 3 commits
  14. 17 Oct 2009, 3 commits
  15. 16 Oct 2009, 2 commits
  16. 15 Oct 2009, 5 commits