1. 10 Feb 2016, 3 commits
    • postgres_fdw: Remove unstable regression test. · bb4df42e
      Committed by Robert Haas
      Per Tom Lane and the buildfarm.
    • postgres_fdw: Push down joins to remote servers. · e4106b25
      Committed by Robert Haas
      If we've got a relatively straightforward join between two tables,
      this pushes that join down to the remote server instead of fetching
      the rows for each table and performing the join locally.  Some cases
      are not handled yet, such as SEMI and ANTI joins.  Also, we don't
      yet attempt to create presorted join paths or parameterized join
      paths even though these options do get tried for a base relation
      scan.  Nevertheless, this seems likely to be a very significant win
      in many practical cases.
      
      Shigeru Hanada and Ashutosh Bapat, reviewed by Robert Haas, with
      additional review at various points by Tom Lane, Etsuro Fujita,
      KaiGai Kohei, and Jeevan Chalke.
    • Add more chattiness in server shutdown. · 7351e182
      Committed by Tom Lane
      Early returns from the buildfarm show that there's a bit of a gap in the
      logging I added in 3971f648: the portion of CreateCheckPoint()
      after CheckPointGuts() can take a fair amount of time.  Add a few more
      log messages in that section of code.  This too shall be reverted later.
  2. 09 Feb 2016, 4 commits
    • Temporarily make pg_ctl and server shutdown a whole lot chattier. · 3971f648
      Committed by Tom Lane
      This is a quick hack, due to be reverted when its purpose has been served,
      to try to gather information about why some of the buildfarm critters
      regularly fail with "postmaster does not shut down" complaints.  Maybe they
      are just really overloaded, but maybe something else is going on.  Hence,
      instrument pg_ctl to print the current time when it starts waiting for
      postmaster shutdown and when it gives up, and add a lot of logging of the
      current time in the server's checkpoint and shutdown code paths.
      
      No attempt has been made to make this pretty.  I'm not even totally sure
      if it will build on Windows, but we'll soon find out.
    • Re-pgindent varlena.c. · 0231f838
      Committed by Tom Lane
      Just to make sure previous commit worked ...
    • Rename typedef "string" to "VarString". · 58e79721
      Committed by Tom Lane
      Since pgindent treats typedef names as global, the original coding of
      b47b4dbf would have had rather nasty effects on the formatting
      of other files in which "string" is used as a variable or field name.
      Use a less generic name for this typedef, and rename some other
      identifiers to match.
      
      Peter Geoghegan, per gripe from me
    • Use %u not %d to print OIDs. · 63828969
      Committed by Tom Lane
      Oversight in commit 96198d94.
      
      Etsuro Fujita
  3. 08 Feb 2016, 10 commits
    • Last-minute updates for release notes. · 02292845
      Committed by Tom Lane
      Security: CVE-2016-0773
    • Fix some regex issues with out-of-range characters and large char ranges. · 3bb3f42f
      Committed by Tom Lane
      Previously, our regex code defined CHR_MAX as 0xfffffffe, which is a
      bad choice because it is outside the range of type "celt" (int32).
      Characters approaching that limit could lead to infinite loops in logic
      such as "for (c = a; c <= b; c++)" where c is of type celt but the
      range bounds are chr.  Such loops will work safely only if CHR_MAX+1
      is representable in celt, since c must advance to beyond b before the
      loop will exit.
      
      Fortunately, there seems no reason not to restrict CHR_MAX to 0x7ffffffe.
      It's highly unlikely that Unicode will ever assign codes that high, and
      none of our other backend encodings need characters beyond that either.
      
      In addition to modifying the macro, we have to explicitly enforce character
      range restrictions on the values of \u, \U, and \x escape sequences, else
      the limit is trivially bypassed.
      
      Also, the code for expanding case-independent character ranges in bracket
      expressions had a potential integer overflow in its calculation of the
      number of characters it could generate, which could lead to allocating too
      small a character vector and then overwriting memory.  An attacker with the
      ability to supply arbitrary regex patterns could easily cause transient DOS
      via server crashes, and the possibility for privilege escalation has not
      been ruled out.
      
      Quite aside from the integer-overflow problem, the range expansion code was
      unnecessarily inefficient in that it always produced a result consisting of
      individual characters, abandoning the knowledge that we had a range to
      start with.  If the input range is large, this requires excessive memory.
      Change it so that the original range is reported as-is, and then we add on
      any case-equivalent characters that are outside that range.  With this
      approach, we can bound the number of individual characters allowed without
      sacrificing much.  This patch allows at most 100000 individual characters,
      which I believe to be more than the number of case pairs existing in
      Unicode, so that the restriction will never be hit in practice.
      
      It's still possible for range() to take a while given a large character code
      range, so also add statement-cancel detection to its loop.  The downstream
      function dovec() also lacked cancel detection, and could take a long time
      given a large output from range().
      
      Per fuzz testing by Greg Stark.  Back-patch to all supported branches.
      
      Security: CVE-2016-0773
    • Make GIN regression test stable. · f8a1c1d5
      Committed by Fujii Masao
      Commit 7f46eaf0 added the regression test which checks that
      gin_clean_pending_list() cleans up the GIN pending list and returns >0.
      This usually works fine. But if autovacuum comes along and cleans
      the list before gin_clean_pending_list() starts, the function may
      return 0, and then the regression test may fail.
      
      To fix the problem, this commit disables autovacuum on the target
      index of gin_clean_pending_list() by setting autovacuum_enabled
      reloption to off when creating the table.
      
      This commit also sets the gin_pending_list_limit reloption to 4MB on
      the target index. Otherwise, when running "make installcheck" with a
      small gin_pending_list_limit GUC, insertions of data may trigger the
      cleanup of the pending list before gin_clean_pending_list() starts,
      and the function may return 0. This could cause the regression test
      to fail.
      
      Per buildfarm member spoonbill.
      
      Reported-By: Tom Lane
    • Fix overeager pushdown of HAVING clauses when grouping sets are used. · a6897efa
      Committed by Andres Freund
      In 61444bfb we started to allow HAVING clauses to be fully pushed down
      into WHERE, even when grouping sets are in use. That turns out not to
      work correctly, because grouping sets can "produce" NULLs, meaning that
      filtering in WHERE and HAVING can have different results, even when no
      aggregates or volatile functions are involved.
      
      Instead only allow pushdown of empty grouping sets.
      
      It'd be nice to do better, but the exact mechanics of deciding which
      cases are safe are still being debated. It's important to give correct
      results till we find a good solution, and such a solution might not be
      appropriate for backpatching anyway.
      
      Bug: #13863
      Reported-By: 'wrb'
      Diagnosed-By: Dean Rasheed
      Author: Andrew Gierth
      Reviewed-By: Dean Rasheed and Andres Freund
      Discussion: 20160113183558.12989.56904@wrigleys.postgresql.org
      Backpatch: 9.5, where grouping sets were introduced
    • Improve documentation about PRIMARY KEY constraints. · c477e84f
      Committed by Tom Lane
      Get rid of the false implication that PRIMARY KEY is exactly equivalent to
      UNIQUE + NOT NULL.  That was more-or-less true at one time in our
      implementation, but the standard doesn't say that, and we've grown various
      features (many of them required by spec) that treat a pkey differently from
      less-formal constraints.  Per recent discussion on pgsql-general.
      
      I failed to resist the temptation to do some other wordsmithing in the
      same area.
    • Fix deparsing of ON CONFLICT arbiter WHERE clauses. · cc2ca931
      Committed by Tom Lane
      The parser doesn't allow qualification of column names appearing in
      these clauses, but ruleutils.c would sometimes qualify them, leading
      to dump/reload failures.  Per bug #13891 from Onder Kalaci.
      
      (In passing, make stanzas in ruleutils.c that save/restore varprefix
      more consistent.)
      
      Peter Geoghegan
    • Release notes for 9.5.1, 9.4.6, 9.3.11, 9.2.15, 9.1.20. · 1d76c972
      Committed by Tom Lane
    • ExecHashRemoveNextSkewBucket must physically copy tuples to main hashtable. · f867ce55
      Committed by Tom Lane
      Commit 45f6240a added an assumption in ExecHashIncreaseNumBatches
      and ExecHashIncreaseNumBuckets that they could find all tuples in the main
      hash table by iterating over the "dense storage" introduced by that patch.
      However, ExecHashRemoveNextSkewBucket continued its old practice of simply
      re-linking deleted skew tuples into the main table's hashchains.  Hence,
      such tuples got lost during any subsequent increase in nbatch or nbuckets,
      and would never get joined, as reported in bug #13908 from Seth P.
      
      I (tgl) think that the aforesaid commit has got multiple design issues
      and should be reworked rather completely; but there is no time for that
      right now, so band-aid the problem by making ExecHashRemoveNextSkewBucket
      physically copy deleted skew tuples into the "dense storage" arena.
      
      The added test case is able to exhibit the problem by means of fooling the
      planner with a WHERE condition that it will underestimate the selectivity
      of, causing the initial nbatch estimate to be too small.
      
      Tomas Vondra and Tom Lane.  Thanks to David Johnston for initial
      investigation into the bug report.
    • Fix parallel-safety markings for pg_upgrade functions. · d89f06f0
      Committed by Robert Haas
      These establish backend-local state which will not be copied to
      parallel workers, so they must be marked parallel-restricted, not
      parallel-safe.
    • Introduce a new GUC force_parallel_mode for testing purposes. · 7c944bd9
      Committed by Robert Haas
      When force_parallel_mode = true, we enable the parallel mode restrictions
      for all queries for which this is believed to be safe.  For the subset of
      those queries believed to be safe to run entirely within a worker, we spin
      up a worker and run the query there instead of running it in the
      original process.  When force_parallel_mode = regress, make additional
      changes to allow the regression tests to run cleanly even though parallel
      workers have been injected under the hood.
      
      Taken together, this facilitates both better user testing and better
      regression testing of the parallelism code.
      
      Robert Haas, with help from Amit Kapila and Rushabh Lathia.
  4. 07 Feb 2016, 5 commits
    • Introduce group locking to prevent parallel processes from deadlocking. · a1c1af2a
      Committed by Robert Haas
      For locking purposes, we now regard heavyweight locks as mutually
      non-conflicting between cooperating parallel processes.  There are some
      possible pitfalls to this approach that are not to be taken lightly,
      but it works OK for now and can be changed later if we find a better
      approach.  Without this, it's very easy for parallel queries to
      silently self-deadlock if the user backend holds strong relation locks.
      
      Robert Haas, with help from Amit Kapila.  Thanks to Noah Misch and
      Andres Freund for extensive discussion of possible issues with this
      approach.
    • Improve speed of timestamp/time/date output functions. · aa2387e2
      Committed by Tom Lane
      It seems that sprintf(), at least in glibc's version, is unreasonably slow
      compared to hand-rolled code for printing integers.  Replacing most uses of
      sprintf() in the datetime.c output functions with special-purpose code
      turns out to give more than a 2X speedup in COPY of a table with a single
      timestamp column; which is pretty impressive considering all the other
      logic in that code path.
      
      David Rowley and Andres Freund, reviewed by Peter Geoghegan and myself
    • Fix comment block trashed by pgindent. · b921aeb1
      Committed by Tom Lane
      Looks like I put the protective dashes in the wrong place in f4e4b327.
    • Improve HJDEBUG code a bit. · be11f840
      Committed by Tom Lane
      Commit 30d7ae3c introduced an HJDEBUG
      stanza that probably didn't compile at the time, and definitely doesn't
      compile now, because it refers to a nonexistent variable.  It doesn't seem
      terribly useful anyway, so just get rid of it.
      
      While I'm fooling with it, use %z modifier instead of the obsolete hack of
      casting size_t to unsigned long, and include the HashJoinTable's address in
      each printout so that it's possible to distinguish the activities of
      multiple hashjoins occurring in one query.
      
      Noted while trying to use HJDEBUG to investigate bug #13908.  Back-patch
      to 9.5, because code that doesn't compile is certainly not very helpful.
    • Add missing "static" qualifier. · 392998bc
      Committed by Tom Lane
      Per buildfarm member pademelon.
  5. 06 Feb 2016, 3 commits
  6. 05 Feb 2016, 13 commits
  7. 04 Feb 2016, 2 commits
    • In pg_dump, ensure that view triggers are processed after view rules. · 0ed707e9
      Committed by Tom Lane
      If a view is split into CREATE TABLE + CREATE RULE to break a circular
      dependency, then any triggers on the view must be dumped/reloaded after
      the CREATE RULE; else the backend may reject the CREATE TRIGGER because
      it's the wrong type of trigger for a plain table.  This works all right
      in plain dump/restore because of pg_dump's sorting heuristic that places
      triggers after rules.  However, when using parallel restore, the ordering
      must be enforced by a dependency --- and we didn't have one.
      
      Fixing this is a mere matter of adding an addObjectDependency() call,
      except that we need to be able to find all the triggers belonging to the
      view relation, and there was no easy way to do that.  Add fields to
      pg_dump's TableInfo struct to remember where the associated TriggerInfo
      struct(s) are.
      
      Per bug report from Dennis Kögel.  The failure can be exhibited at least
      as far back as 9.1, so back-patch to all supported branches.
    • Extend sortsupport for text to more opclasses. · b47b4dbf
      Committed by Robert Haas
      Have varlena.c expose an interface that allows the char(n), bytea, and
      bpchar types to piggyback on a now-generalized SortSupport for text.
      This pushes a little more knowledge of the bpchar/char(n) type into
      varlena.c than might be preferred, but that seems like the approach
      that creates least friction.  Also speed things up for index builds
      that use text_pattern_ops or varchar_pattern_ops.
      
      This patch does quite a bit of renaming, but it seems likely to be
      worth it, so as to avoid future confusion about the fact that this code
      is now more generally used than the old names might have suggested.
      
      Peter Geoghegan, reviewed by Álvaro Herrera and Andreas Karlsson,
      with small tweaks by me.