1. 13 Apr 2013, 1 commit
    • Clean up the mess around EXPLAIN and materialized views. · 0b337904
      Committed by Tom Lane
      Revert the matview-related changes in explain.c's API, as per recent
      complaint from Robert Haas.  The reason for these appears to have been
      principally some ill-considered choices around having intorel_startup do
      what ought to be parse-time checking, plus a poor arrangement for passing
      it the view parsetree it needs to store into pg_rewrite when creating a
      materialized view.  Do the latter by having parse analysis stick a copy
      into the IntoClause, instead of doing it at runtime.  (On the whole,
      I seriously question the choice to represent CREATE MATERIALIZED VIEW as a
      variant of SELECT INTO/CREATE TABLE AS, because that means injecting even
      more complexity into what was already a horrid legacy kluge.  However,
      I didn't go so far as to rethink that choice ... yet.)
      
      I also moved several error checks into matview parse analysis, and
      made the check for external Params in a matview more accurate.
      
      In passing, clean things up a bit more around interpretOidsOption(),
      and fix things so that we can use that to force no-oids for views,
      sequences, etc, thereby eliminating the need to cons up "oids = false"
      options when creating them.
      
      catversion bump due to change in IntoClause.  (I wonder though if we
      really need readfuncs/outfuncs support for IntoClause anymore.)
  2. 11 Mar 2013, 1 commit
    • Support writable foreign tables. · 21734d2f
      Committed by Tom Lane
      This patch adds the core-system infrastructure needed to support updates
      on foreign tables, and extends contrib/postgres_fdw to allow updates
      against remote Postgres servers.  There's still a great deal of room for
      improvement in optimization of remote updates, but at least there's basic
      functionality there now.
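
      As a usage sketch only (not part of the original commit message; the
      server, table, and column names below are invented), a postgres_fdw
      foreign table can now be the target of DML:

          CREATE EXTENSION postgres_fdw;
          CREATE SERVER remote_srv FOREIGN DATA WRAPPER postgres_fdw
              OPTIONS (host 'remote.example.com', dbname 'sales');
          CREATE USER MAPPING FOR CURRENT_USER SERVER remote_srv
              OPTIONS (user 'app', password 'secret');
          CREATE FOREIGN TABLE orders (id int, status text)
              SERVER remote_srv OPTIONS (table_name 'orders');

          -- Previously only SELECT worked against such a table.
          UPDATE orders SET status = 'shipped' WHERE id = 42;
          DELETE FROM orders WHERE status = 'cancelled';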
      
      KaiGai Kohei, reviewed by Alexander Korotkov and Laurenz Albe, and rather
      heavily revised by Tom Lane.
  3. 04 Mar 2013, 1 commit
    • Add materialized view relations. · 3bf3ab8c
      Committed by Kevin Grittner
      A materialized view has a rule, just like a view, and a heap and
      other physical properties, like a table.  The rule is only used to
      populate the table; references in queries refer to the
      materialized data.
      
      This is a minimal implementation, but should still be useful in
      many cases.  Currently data is only populated "on demand" by the
      CREATE MATERIALIZED VIEW and REFRESH MATERIALIZED VIEW statements.
      It is expected that future releases will add incremental updates
      with various timings, and that a more refined concept of defining
      what is "fresh" data will be developed.  At some point it may even
      be possible to have queries use a materialized view in place of
      references to underlying tables, but that requires the other
      above-mentioned features to be working first.
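
      A brief usage sketch (illustrative only; the table and column names
      are hypothetical):

          CREATE MATERIALIZED VIEW order_totals AS
              SELECT customer_id, sum(amount) AS total
              FROM orders
              GROUP BY customer_id;

          -- The stored rule is used only to repopulate the data on demand.
          REFRESH MATERIALIZED VIEW order_totals;

          -- Queries read the materialized heap, not the underlying tables.
          SELECT * FROM order_totals WHERE total > 1000;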
      
      Much of the documentation work by Robert Haas.
      Review by Noah Misch, Thom Brown, Robert Haas, Marko Tiikkaja
      Security review by KaiGai Kohei, with a decision on how best to
      implement sepgsql still pending.
  4. 28 Feb 2013, 1 commit
    • Add support for piping COPY to/from an external program. · 3d009e45
      Committed by Heikki Linnakangas
      This includes backend "COPY TO/FROM PROGRAM '...'" syntax, and corresponding
      psql \copy syntax. Like with reading/writing files, the backend version is
      superuser-only, and in the psql version, the program is run in the client.
      
      In passing, the psql \copy STDIN/STDOUT syntax is subtly changed: if the
      stdin/stdout is quoted, it's now interpreted as a filename. For example,
      "\copy foo from 'stdin'" now reads from a file called 'stdin', not from
      standard input. Before this, there was no way to specify a filename called
      stdin, stdout, pstdin or pstdout.
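
      Illustrative examples of the new syntax and of the changed quoting
      behavior (file and command names invented here):

          -- Backend syntax (superuser only); the program runs on the server.
          COPY orders TO PROGRAM 'gzip > /tmp/orders.csv.gz' WITH (FORMAT csv);
          COPY orders FROM PROGRAM 'gunzip -c /tmp/orders.csv.gz' WITH (FORMAT csv);

          -- psql syntax; the program runs on the client.
          \copy orders to program 'gzip > orders.csv.gz' with (format csv)

          -- Quoted 'stdin' now means a file named stdin; bare stdin still
          -- means standard input.
          \copy orders from 'stdin'
          \copy orders from stdin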
      
      This creates a new function in pgport, wait_result_to_str(), which can
      be used to convert the exit status of a process, as returned by wait(3),
      to a human-readable string.
      
      Etsuro Fujita, reviewed by Amit Kapila.
  5. 23 Jan 2013, 1 commit
    • Improve concurrency of foreign key locking · 0ac5ad51
      Committed by Alvaro Herrera
      This patch introduces two additional lock modes for tuples: "SELECT FOR
      KEY SHARE" and "SELECT FOR NO KEY UPDATE".  These don't block each
      other, in contrast with already existing "SELECT FOR SHARE" and "SELECT
      FOR UPDATE".  UPDATE commands that do not modify the values stored in
      the columns that are part of the key of the tuple now grab a SELECT FOR
      NO KEY UPDATE lock on the tuple, allowing them to proceed concurrently
      with tuple locks of the FOR KEY SHARE variety.
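
      For illustration (the table is hypothetical), the two new modes do not
      conflict with each other:

          -- Session 1: the lock a non-key UPDATE now takes internally.
          BEGIN;
          SELECT * FROM accounts WHERE id = 1 FOR NO KEY UPDATE;

          -- Session 2: the lock foreign key triggers now take; this does
          -- not block against session 1.
          BEGIN;
          SELECT * FROM accounts WHERE id = 1 FOR KEY SHARE;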
      
      Foreign key triggers now use FOR KEY SHARE instead of FOR SHARE; this
      means the concurrency improvement applies to them, which is the whole
      point of this patch.
      
      The added tuple lock semantics require some rejiggering of the multixact
      module, so that the locking level that each transaction is holding can
      be stored alongside its Xid.  Also, multixacts now need to persist
      across server restarts and crashes, because they can now represent not
      only tuple locks, but also tuple updates.  This means we need more
      careful tracking of lifetime of pg_multixact SLRU files; since they now
      persist longer, we require more infrastructure to figure out when they
      can be removed.  pg_upgrade also needs to be careful to copy
      pg_multixact files over from the old server to the new, or at least part
      of multixact.c state, depending on the versions of the old and new
      servers.
      
      Tuple time qualification rules (HeapTupleSatisfies routines) need to be
      careful not to consider tuples with the "is multi" infomask bit set as
      being only locked; they might need to look up MultiXact values (i.e.
      possibly do pg_multixact I/O) to find out the Xid that updated a tuple,
      whereas they previously were assured to only use information readily
      available from the tuple header.  This is considered acceptable, because
      the extra I/O would involve cases that would previously cause some
      commands to block waiting for concurrent transactions to finish.
      
      Another important change is the fact that locking tuples that have
      previously been updated causes the future versions to be marked as
      locked, too; this is essential for correctness of foreign key checks.
      This also causes additional WAL-logging (there was previously a single
      WAL record for a locked tuple; now there are as many records as there
      are updated copies of the tuple).
      
      With all this in place, contention related to tuples being checked by
      foreign key rules should be much reduced.
      
      As a bonus, this fixes the old behavior whereby, if a subtransaction
      grabbed a stronger tuple lock on a given tuple than its parent
      (sub)transaction held and later aborted, the weaker lock was lost.
      
      Many new spec files were added for isolation tester framework, to ensure
      overall behavior is sane.  There's probably room for several more tests.
      
      There were several reviewers of this patch; in particular, Noah Misch
      and Andres Freund spent considerable time in it.  Original idea for the
      patch came from Simon Riggs, after a problem report by Joel Jacobson.
      Most code is from me, with contributions from Marti Raudsepp, Alexander
      Shulgin, Noah Misch and Andres Freund.
      
      This patch was discussed in several pgsql-hackers threads; the most
      important start at the following message-ids:
      	AANLkTimo9XVcEzfiBR-ut3KVNDkjm2Vxh+t8kAmWjPuv@mail.gmail.com
      	1290721684-sup-3951@alvh.no-ip.org
      	1294953201-sup-2099@alvh.no-ip.org
      	1320343602-sup-2290@alvh.no-ip.org
      	1339690386-sup-8927@alvh.no-ip.org
      	4FE5FF020200002500048A3D@gw.wicourts.gov
      	4FEAB90A0200002500048B7D@gw.wicourts.gov
  6. 22 Jan 2013, 1 commit
    • Add infrastructure for storing a VARIADIC ANY function's VARIADIC flag. · 75b39e79
      Committed by Tom Lane
      Originally we didn't bother to mark FuncExprs with any indication whether
      VARIADIC had been given in the source text, because there didn't seem to be
      any need for it at runtime.  However, because we cannot fold a VARIADIC ANY
      function's arguments into an array (since they're not necessarily all the
      same type), we do actually need that information at runtime if VARIADIC ANY
      functions are to respond unsurprisingly to use of the VARIADIC keyword.
      Add the missing field, and also fix ruleutils.c so that VARIADIC ANY
      function calls are dumped properly.
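
      A minimal illustration of the distinction the new field records (the
      behavioral fix for concat()/format() themselves is in the larger
      patch; this example is only to show why the flag matters):

          SELECT concat('a', 'b', 'c');                  -- three separate arguments
          SELECT concat(VARIADIC ARRAY['a', 'b', 'c']);  -- one array spread as varargs

          -- Without the stored flag, ruleutils.c could not reproduce the
          -- VARIADIC keyword when dumping a view or rule containing the
          -- second form.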
      
      Extracted from a larger patch that also fixes concat() and format() (the
      only two extant VARIADIC ANY functions) to behave properly when VARIADIC is
      specified.  This portion seems appropriate to review and commit separately.
      
      Pavel Stehule
  7. 12 Jan 2013, 1 commit
    • Redesign the planner's handling of index-descent cost estimation. · 31f38f28
      Committed by Tom Lane
      Historically we've used a couple of very ad-hoc fudge factors to try to
      get the right results when indexes of different sizes would satisfy a
      query with the same number of index leaf tuples being visited.  In
      commit 21a39de5 I tweaked one of these
      fudge factors, with results that proved disastrous for larger indexes.
      Commit bf01e34b fudged it some more,
      but still with not a lot of principle behind it.
      
      What seems like a better way to address these issues is to explicitly model
      index-descent costs, since that's what's really at stake when considering
      different indexes with similar leaf-page-level costs.  We tried that once
      long ago, and found that charging random_page_cost per page descended
      through was way too much, because upper btree levels tend to stay in cache
      in real-world workloads.  However, there's still CPU costs to think about,
      and the previous fudge factors can be seen as a crude attempt to account
      for those costs.  So this patch replaces those fudge factors with explicit
      charges for the number of tuple comparisons needed to descend the index
      tree, plus a small charge per page touched in the descent.  The cost
      multipliers are chosen so that the resulting charges are in the vicinity of
      the historical (pre-9.2) fudge factors for indexes of up to about a million
      tuples, while not ballooning unreasonably beyond that, as the old fudge
      factor did (even more so in 9.2).
      
      To make this work accurately for btree indexes, add some code that allows
      extraction of the known root-page height from a btree.  There's no
      equivalent number readily available for other index types, but we can use
      the log of the number of index pages as an approximate substitute.
      
      This seems like too much of a behavioral change to risk back-patching,
      but it should improve matters going forward.  In 9.2 I'll just revert
      the fudge-factor change.
  8. 02 Jan 2013, 1 commit
  9. 19 Oct 2012, 1 commit
    • Fix planning of non-strict equivalence clauses above outer joins. · 72a4231f
      Committed by Tom Lane
      If a potential equivalence clause references a variable from the nullable
      side of an outer join, the planner needs to take care that derived clauses
      are not pushed to below the outer join; else they may use the wrong value
      for the variable.  (The problem arises only with non-strict clauses, since
      if an upper clause can be proven strict then the outer join will get
      simplified to a plain join.)  The planner attempted to prevent this type
      of error by checking that potential equivalence clauses aren't
      outerjoin-delayed as a whole, but actually we have to check each side
      separately, since the two sides of the clause will get moved around
      separately if it's treated as an equivalence.  Bugs of this type can be
      demonstrated as far back as 7.4, even though releases before 8.3 had only
      a very ad-hoc notion of equivalence clauses.
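
      A sketch of the kind of query involved (schema invented here for
      illustration): the WHERE clause equates a non-strict expression over
      the nullable side with a variable from the other side, so clauses
      derived from it must not be pushed below the join:

          SELECT *
          FROM a
          LEFT JOIN b ON a.id = b.id
          WHERE COALESCE(b.val, 0) = a.val;
          -- COALESCE() is not strict, so the join is not simplified to an
          -- inner join.  A derived clause pushed below the LEFT JOIN would
          -- be evaluated before b's rows are null-extended and could use
          -- the wrong value for b.val.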
      
      In addition, we neglected to account for the possibility that such clauses
      might have nonempty nullable_relids even when not outerjoin-delayed; so the
      equivalence-class machinery lacked logic to compute correct nullable_relids
      values for clauses it constructs.  This oversight was harmless before 9.2
      because we were only using RestrictInfo.nullable_relids for OR clauses;
      but as of 9.2 it could result in pushing constructed equivalence clauses
      to incorrect places.  (This accounts for bug #7604 from Bill MacArthur.)
      
      Fix the first problem by adding a new test check_equivalence_delay() in
      distribute_qual_to_rels, and fix the second one by adding code in
      equivclass.c and called functions to set correct nullable_relids for
      generated clauses.  Although I believe the second part of this is not
      currently necessary before 9.2, I chose to back-patch it anyway, partly to
      keep the logic similar across branches and partly because it seems possible
      we might find other reasons why we need valid values of nullable_relids in
      the older branches.
      
      Add regression tests illustrating these problems.  In 9.0 and up, also
      add test cases checking that we can push constants through outer joins,
      since we've broken that optimization before and I nearly broke it again
      with an overly simplistic patch for this problem.
  10. 13 Oct 2012, 2 commits
    • Get rid of COERCE_DONTCARE. · a29f7ed5
      Committed by Tom Lane
      We don't need this hack any more.
    • Make equal() ignore CoercionForm fields for better planning with casts. · 71e58dcf
      Committed by Tom Lane
      This change ensures that the planner will see implicit and explicit casts
      as equivalent for all purposes, except in the minority of cases where
      there's actually a semantic difference (as reflected by having a 3-argument
      cast function).  In particular, this fixes cases where the EquivalenceClass
      machinery failed to consider two references to a varchar column as
      equivalent if one was implicitly cast to text but the other was explicitly
      cast to text, as seen in bug #7598 from Vaclav Juza.  We have had similar
      bugs before in other parts of the planner, so I think it's time to fix this
      problem at the core instead of continuing to band-aid around it.
      
      Remove set_coercionform_dontcare(), which represents the band-aid
      previously in use for allowing matching of index and constraint expressions
      with inconsistent cast labeling.  (We can probably get rid of
      COERCE_DONTCARE altogether, but I don't think removing that enum value in
      back branches would be wise; it's possible there's third party code
      referring to it.)
      
      Back-patch to 9.2.  We could go back further, and might want to once this
      has been tested more; but for the moment I won't risk destabilizing plan
      choices in long-since-stable branches.
  11. 09 Oct 2012, 1 commit
  12. 04 Oct 2012, 2 commits
    • Support CREATE SCHEMA IF NOT EXISTS. · fb34e94d
      Committed by Tom Lane
      Per discussion, schema-element subcommands are not allowed together with
      this option, since it's not very obvious what should happen to the element
      objects.
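
      Usage sketch (schema name invented):

          CREATE SCHEMA IF NOT EXISTS reporting;   -- no error if it already exists

          -- Not accepted: schema-element subcommands combined with the option.
          CREATE SCHEMA IF NOT EXISTS reporting
              CREATE TABLE t (id int);             -- raises an error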
      
      Fabrízio de Royes Mello
    • refactor ALTER some-obj SET OWNER implementation · 994c36e0
      Committed by Alvaro Herrera
      Remove duplicate implementation of catalog munging and miscellaneous
      privilege and consistency checks.  Instead rely on already existing data
      in objectaddress.c to do the work.
      
      Author: KaiGai Kohei
      Tweaked by me
      Reviewed by Robert Haas
  13. 03 Oct 2012, 1 commit
    • Refactor "ALTER some-obj SET SCHEMA" implementation · 2164f9a1
      Committed by Alvaro Herrera
      Instead of having each object type implement the catalog munging
      independently, centralize knowledge about how to do it and expand the
      existing table in objectaddress.c with enough data about each object
      type to support this operation.
      
      Author: KaiGai Kohei
      Tweaks by me
      Reviewed by Robert Haas
  14. 23 Sep 2012, 1 commit
    • Allow IF NOT EXISTS when adding a new enum label. · 6d12b68c
      Committed by Andrew Dunstan
      If the label is already in the enum the statement becomes a no-op.
      This will reduce the pain that comes from our not allowing this
      operation inside a transaction block.
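
      For example (the enum type and label are invented):

          ALTER TYPE mood ADD VALUE IF NOT EXISTS 'relieved';
          -- Running the same statement again is now a no-op instead of an error.
          ALTER TYPE mood ADD VALUE IF NOT EXISTS 'relieved';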
      
      Andrew Dunstan, reviewed by Tom Lane and Magnus Hagander.
  15. 06 Sep 2012, 1 commit
    • Fix PARAM_EXEC assignment mechanism to be safe in the presence of WITH. · 46c508fb
      Committed by Tom Lane
      The planner previously assumed that parameter Vars having the same absolute
      query level, varno, and varattno could safely be assigned the same runtime
      PARAM_EXEC slot, even though they might be different Vars appearing in
      different subqueries.  This was (probably) safe before the introduction of
      CTEs, but the lazy-evaluation mechanism used for CTEs means that a CTE can
      be executed during execution of some other subquery, causing the lifespan
      of Params at the same syntactic nesting level as the CTE to overlap with
      use of the same slots inside the CTE.  In 9.1 we created additional hazards
      by using the same parameter-assignment technology for nestloop inner scan
      parameters, but it was broken before that, as illustrated by the added
      regression test.
      
      To fix, restructure the planner's management of PlannerParamItems so that
      items having different semantic lifespans are kept rigorously separated.
      This will probably result in complex queries using more runtime PARAM_EXEC
      slots than before, but the slots are cheap enough that this hardly matters.
      Also, stop generating PlannerParamItems containing Params for subquery
      outputs: all we really need to do is reserve the PARAM_EXEC slot number,
      and that now only takes incrementing a counter.  The planning code is
      simpler and probably faster than before, as well as being more correct.
      
      Per report from Vik Reykja.
      
      These changes will mostly also need to be made in the back branches, but
      I'm going to hold off on that until after 9.2.0 wraps.
  16. 02 Sep 2012, 1 commit
    • Drop cheap-startup-cost paths during add_path() if we don't need them. · 6d2c8c0e
      Committed by Tom Lane
      We can detect whether the planner top level is going to care at all about
      cheap startup cost (it will only do so if query_planner's tuple_fraction
      argument is greater than zero).  If it isn't, we might as well discard
      paths immediately whose only advantage over others is cheap startup cost.
      This turns out to get rid of quite a lot of paths in complex queries ---
      I saw planner runtime reduction of more than a third on one large query.
      
      Since add_path isn't currently passed the PlannerInfo "root", the easiest
      way to tell it whether to do this was to add a bool flag to RelOptInfo.
      That's a bit redundant, since all relations in a given query level will
      have the same setting.  But in the future it's possible that we'd refine
      the control decision to work on a per-relation basis, so this seems like
      a good arrangement anyway.
      
      Per my suggestion of a few months ago.
  17. 31 Aug 2012, 1 commit
    • Split tuple struct defs from htup.h to htup_details.h · c219d9b0
      Committed by Alvaro Herrera
      This reduces unnecessary exposure of other headers through htup.h, which
      is very widely included by many files.
      
      I have chosen to move the function prototypes to the new file as well,
      because that means htup.h no longer needs to include tupdesc.h.  In
      itself this doesn't have much effect in indirect inclusion of tupdesc.h
      throughout the tree, because it's also required by execnodes.h; but it's
      something to explore in the future, and it seemed best to do the htup.h
      change now while I'm busy with it.
  18. 29 Aug 2012, 1 commit
    • Split heapam_xlog.h from heapam.h · 21c09e99
      Committed by Alvaro Herrera
      The heapam XLog functions are used by other modules, not all of which
      are interested in the rest of the heapam API.  With this, we let them
      get just the XLog stuff in which they are interested and not pollute
      them with unrelated includes.
      
      Also, since heapam.h no longer requires xlog.h, many files that do
      include heapam.h no longer get xlog.h automatically, including a few
      headers.  This is useful because heapam.h is getting pulled in by
      execnodes.h, which is in turn included by a lot of files.
  19. 27 Aug 2012, 1 commit
    • Fix up planner infrastructure to support LATERAL properly. · 9ff79b9d
      Committed by Tom Lane
      This patch takes care of a number of problems having to do with failure
      to choose valid join orders and incorrect handling of lateral references
      pulled up from subqueries.  Notable changes:
      
      * Add a LateralJoinInfo data structure similar to SpecialJoinInfo, to
      represent join ordering constraints created by lateral references.
      (I first considered extending the SpecialJoinInfo structure, but the
      semantics are different enough that a separate data structure seems
      better.)  Extend join_is_legal() and related functions to prevent trying
      to form unworkable joins, and to ensure that we will consider joins that
      satisfy lateral references even if the joins would be clauseless.
      
      * Fill in the infrastructure needed for the last few types of relation scan
      paths to support parameterization.  We'd have wanted this eventually
      anyway, but it is necessary now because a relation that gets pulled up out
      of a UNION ALL subquery may acquire a reltargetlist containing lateral
      references, meaning that its paths *have* to be parameterized whether or
      not we have any code that can push join quals down into the scan.
      
      * Compute data about lateral references early in query_planner(), and save
      in RelOptInfo nodes, to avoid repetitive calculations later.
      
      * Assorted corner-case bug fixes.
      
      There's probably still some bugs left, but this is a lot closer to being
      real than it was before.
  20. 08 Aug 2012, 1 commit
    • Implement SQL-standard LATERAL subqueries. · 5ebaaa49
      Committed by Tom Lane
      This patch implements the standard syntax of LATERAL attached to a
      sub-SELECT in FROM, and also allows LATERAL attached to a function in FROM,
      since set-returning function calls are expected to be one of the principal
      use-cases.
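
      Two illustrative forms now accepted (table, column, and function
      arguments are hypothetical):

          -- LATERAL sub-SELECT: the subquery may reference columns of
          -- earlier FROM items.
          SELECT c.name, o.total
          FROM customers c,
               LATERAL (SELECT sum(amount) AS total
                        FROM orders
                        WHERE orders.customer_id = c.id) o;

          -- LATERAL attached to a set-returning function in FROM.
          SELECT c.name, t.tag
          FROM customers c,
               LATERAL unnest(c.tags) AS t(tag);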
      
      The main change here is a rewrite of the mechanism for keeping track of
      which relations are visible for column references while the FROM clause is
      being scanned.  The parser "namespace" lists are no longer lists of bare
      RTEs, but are lists of ParseNamespaceItem structs, which carry an RTE
      pointer as well as some visibility-controlling flags.  Aside from
      supporting LATERAL correctly, this lets us get rid of the ancient hacks
      that required rechecking subqueries and JOIN/ON and function-in-FROM
      expressions for invalid references after they were initially parsed.
      Invalid column references are now always correctly detected on sight.
      
      In passing, remove assorted parser error checks that are now dead code by
      virtue of our having gotten rid of add_missing_from, as well as some
      comments that are obsolete for the same reason.  (It was mainly
      add_missing_from that caused so much fudging here in the first place.)
      
      The planner support for this feature is very minimal, and will be improved
      in future patches.  It works well enough for testing purposes, though.
      
      catversion bump forced due to new field in RangeTblEntry.
  21. 01 Aug 2012, 1 commit
    • Fix WITH attached to a nested set operation (UNION/INTERSECT/EXCEPT). · f6ce81f5
      Committed by Tom Lane
      Parse analysis neglected to cover the case of a WITH clause attached to an
      intermediate-level set operation; it only handled WITH at the top level
      or WITH attached to a leaf-level SELECT.  Per report from Adam Mackler.
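
      Roughly the shape that was mishandled (illustrative query; the WITH
      is attached to a parenthesized, non-leaf set operation):

          SELECT 1 AS x
          UNION
          ( WITH q AS (SELECT 2 AS x)
            SELECT x FROM q
            UNION
            SELECT 3 );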
      
      In HEAD, I rearranged the order of SelectStmt's fields to put withClause
      with the other fields that can appear on non-leaf SelectStmts.  In back
      branches, leave it alone to avoid a possible ABI break for third-party
      code.
      
      Back-patch to 8.4 where WITH support was added.
  22. 18 Jul 2012, 1 commit
    • Syntax support and documentation for event triggers. · 3855968f
      Committed by Robert Haas
      They don't actually do anything yet; that will get fixed in a
      follow-on commit.  But this gets the basic infrastructure in place,
      including CREATE/ALTER/DROP EVENT TRIGGER; support for COMMENT,
      SECURITY LABEL, and ALTER EXTENSION .. ADD/DROP EVENT TRIGGER;
      pg_dump and psql support; and documentation for the anticipated
      initial feature set.
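
      A sketch of the syntax this adds (the trigger function is a dummy;
      with only this commit the trigger is not yet fired):

          CREATE FUNCTION log_ddl() RETURNS event_trigger
              LANGUAGE plpgsql AS $$
          BEGIN
              RAISE NOTICE 'DDL command started';
          END $$;

          CREATE EVENT TRIGGER track_ddl
              ON ddl_command_start
              EXECUTE PROCEDURE log_ddl();

          COMMENT ON EVENT TRIGGER track_ddl IS 'logs DDL commands';
          ALTER EVENT TRIGGER track_ddl DISABLE;
          DROP EVENT TRIGGER track_ddl;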
      
      Dimitri Fontaine, with review and a bunch of additional hacking by me.
      Thom Brown extensively reviewed earlier versions of this patch set,
      but there's not a whole lot of that code left in this commit, as it
      turns out.
  23. 17 Jul 2012, 1 commit
    • Avoid pre-determining index names during CREATE TABLE LIKE parsing. · c92be3c0
      Committed by Tom Lane
      Formerly, when trying to copy both indexes and comments, CREATE TABLE LIKE
      had to pre-assign names to indexes that had comments, because it made up an
      explicit CommentStmt command to apply the comment and so it had to know the
      name for the index.  This creates bad interactions with other indexes, as
      shown in bug #6734 from Daniele Varrazzo: the preassignment logic couldn't
      take any other indexes into account so it could choose a conflicting name.
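
      The affected kind of command, for reference (table names invented):

          CREATE TABLE orders (
              id int PRIMARY KEY,
              note text
          );
          COMMENT ON INDEX orders_pkey IS 'primary key index';

          -- Copies the index together with its comment; the new index's
          -- name is now chosen at execution time rather than pre-assigned
          -- during parsing, avoiding the name collisions described above.
          CREATE TABLE orders_copy (LIKE orders INCLUDING INDEXES INCLUDING COMMENTS);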
      
      To fix, add a field to IndexStmt that allows it to carry a comment to be
      assigned to the new index.  (This isn't a user-exposed feature of CREATE
      INDEX, only an internal option.)  Now we don't need preassignment of index
      names in any situation.
      
      I also took the opportunity to refactor DefineIndex to accept the IndexStmt
      as such, rather than passing all its fields individually in a mile-long
      parameter list.
      
      Back-patch to 9.2, but no further, because it seems too dangerous to change
      IndexStmt or DefineIndex's API in released branches.  The bug exists back
      to 9.0 where CREATE TABLE LIKE grew the ability to copy comments, but given
      the lack of prior complaints we'll just let it go unfixed before 9.2.
  24. 01 Jul 2012, 1 commit
    • Suppress compiler warnings in readfuncs.c. · 39bfc94c
      Committed by Tom Lane
      Commit 7357558f introduced "(void) token;"
      into the READ_TEMP_LOCALS() macro, to suppress complaints from gcc 4.6
      when the value of token was not used anywhere in a particular node-read
      function.  However, this just moved the warning around: inspection of
      buildfarm results shows that some compilers are now complaining that token
      is being read before it's set.  Revert the READ_TEMP_LOCALS() macro change
      and instead put "(void) token;" into READ_NODE_FIELD(), which is the
      principal culprit for cases where the warning might occur.  In principle we
      might need the same in READ_BITMAPSET_FIELD() and/or READ_LOCATION_FIELD(),
      but it seems unlikely that a node would consist only of such fields, so
      I'll leave them alone for now.
  25. 11 Jun 2012, 1 commit
  26. 21 Apr 2012, 1 commit
    • Recast "ONLY" column CHECK constraints as NO INHERIT · 09ff76fc
      Committed by Alvaro Herrera
      The original syntax wasn't universally loved, and it didn't allow its
      usage in CREATE TABLE, only ALTER TABLE.  It now works everywhere, and
      it also allows using ALTER TABLE ONLY to add an uninherited CHECK
      constraint, per discussion.
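
      Illustrative usage of the revised syntax (tables invented):

          -- Now allowed directly in CREATE TABLE:
          CREATE TABLE parent (
              x int,
              CONSTRAINT x_positive CHECK (x > 0) NO INHERIT
          );

          -- And when adding a constraint afterward:
          ALTER TABLE parent
              ADD CONSTRAINT x_small CHECK (x < 1000) NO INHERIT;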
      
      The pg_constraint column has accordingly been renamed connoinherit.
      
      This commit partly reverts some of the changes in
      61d81bd2, particularly some pg_dump and
      psql bits, because now pg_get_constraintdef includes the necessary NO
      INHERIT within the constraint definition.
      
      Author: Nikhil Sontakke
      Some tweaks by me
  27. 20 Apr 2012, 1 commit
    • Revise parameterized-path mechanism to fix assorted issues. · 5b7b5518
      Committed by Tom Lane
      This patch adjusts the treatment of parameterized paths so that all paths
      with the same parameterization (same set of required outer rels) for the
      same relation will have the same rowcount estimate.  We cache the rowcount
      estimates to ensure that property, and hopefully save a few cycles too.
      Doing this makes it practical for add_path_precheck to operate without
      a rowcount estimate: it need only assume that paths with different
      parameterizations never dominate each other, which is close enough to
      true anyway for coarse filtering, because normally a more-parameterized
      path should yield fewer rows thanks to having more join clauses to apply.
      
      In add_path, we do the full nine yards of comparing rowcount estimates
      along with everything else, so that we can discard parameterized paths that
      don't actually have an advantage.  This fixes some issues I'd found with
      add_path rejecting parameterized paths on the grounds that they were more
      expensive than not-parameterized ones, even though they yielded many fewer
      rows and hence would be cheaper once subsequent joining was considered.
      
      To make the same-rowcounts assumption valid, we have to require that any
      parameterized path enforce *all* join clauses that could be obtained from
      the particular set of outer rels, even if not all of them are useful for
      indexing.  This is required at both base scans and joins.  It's a good
      thing anyway since the net impact is that join quals are checked at the
      lowest practical level in the join tree.  Hence, discard the original
      rather ad-hoc mechanism for choosing parameterization joinquals, and build
      a better one that has a more principled rule for when clauses can be moved.
      The original rule was actually buggy anyway for lack of knowledge about
      which relations are part of an outer join's outer side; getting this right
      requires adding an outer_relids field to RestrictInfo.
  28. 18 Apr 2012, 2 commits
  29. 09 Apr 2012, 1 commit
    • Don't bother copying empty support arrays in a zero-column MergeJoin. · d515365a
      Committed by Tom Lane
      The case could not arise when this code was originally written, but it can
      now (since we made zero-column MergeJoins work for the benefit of FULL JOIN
      ON TRUE).  I don't think there is any actual bug here, but we might as well
      treat it consistently with other uses of COPY_POINTER_FIELD().  Per comment
      from Ashutosh Bapat.
  30. 06 Apr 2012, 1 commit
  31. 28 Mar 2012, 1 commit
    • Add some infrastructure for contrib/pg_stat_statements. · a40fa613
      Committed by Tom Lane
      Add a queryId field to Query and PlannedStmt.  This is not used by the
      core backend, except for being copied around at appropriate times.
      It's meant to allow plug-ins to track a particular query forward from
      parse analysis to execution.
      
      The queryId is intentionally not dumped into stored rules (and hence this
      commit doesn't bump catversion).  You could argue that choice either way,
      but it seems better that stored rule strings not have any dependency
      on plug-ins that might or might not be present.
      
      Also, add a post_parse_analyze_hook that gets invoked at the end of
      parse analysis (but only for top-level analysis of complete queries,
      not cases such as analyzing a domain's default-value expression).
      This is mainly meant to be used to compute and assign a queryId,
      but it could have other applications.
      
      Peter Geoghegan
  32. 24 Mar 2012, 1 commit
    • Code review for protransform patches. · 0339047b
      Committed by Tom Lane
      Fix loss of previous expression-simplification work when a transform
      function fires: we must not simply revert to untransformed input tree.
      Instead build a dummy FuncExpr node to pass to the transform function.
      This has the additional advantage of providing a simpler, more uniform
      API for transform functions.
      
      Move documentation to a somewhat less buried spot, relocate some
      poorly-placed code, be more wary of null constants and invalid typmod
      values, add an opr_sanity check on protransform function signatures,
      and some other minor cosmetic adjustments.
      
      Note: although this patch touches pg_proc.h, no need for catversion
      bump, because the changes are cosmetic and don't actually change the
      intended catalog contents.
  33. 20 Mar 2012, 1 commit
    • Restructure SELECT INTO's parsetree representation into CreateTableAsStmt. · 9dbf2b7d
      Committed by Tom Lane
      Making this operation look like a utility statement seems generally a good
      idea, and particularly so in light of the desire to provide command
      triggers for utility statements.  The original choice of representing it as
      SELECT with an IntoClause appendage had metastasized into rather a lot of
      places, unfortunately, so that this patch is a great deal more complicated
      than one might at first expect.
      
      In particular, keeping EXPLAIN working for SELECT INTO and CREATE TABLE AS
      subcommands required restructuring some EXPLAIN-related APIs.  Add-on code
      that calls ExplainOnePlan or ExplainOneUtility, or uses
      ExplainOneQuery_hook, will need adjustment.
      
      Also, the cases PREPARE ... SELECT INTO and CREATE RULE ... SELECT INTO,
      which formerly were accepted though undocumented, are no longer accepted.
      The PREPARE case can be replaced with use of CREATE TABLE AS EXECUTE.
      The CREATE RULE case doesn't seem to have much real-world use (since the
      rule would work only once before failing with "table already exists"),
      so we'll not bother with that one.
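
      The suggested replacement for the dropped PREPARE case, sketched with
      an invented query:

          PREPARE big_orders AS
              SELECT * FROM orders WHERE amount > 1000;

          -- Instead of PREPARE ... SELECT INTO, materialize the result with:
          CREATE TABLE big_orders_snapshot AS EXECUTE big_orders;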
      
      Both SELECT INTO and CREATE TABLE AS still return a command tag of
      "SELECT nnnn".  There was some discussion of returning "CREATE TABLE nnnn",
      but for the moment backwards compatibility wins the day.
      
      Andres Freund and Tom Lane
  34. 10 Mar 2012, 1 commit
    • Revise FDW planning API, again. · b1495393
      Committed by Tom Lane
      Further reflection shows that a single callback isn't very workable if we
      desire to let FDWs generate multiple Paths, because that forces the FDW to
      do all work necessary to generate a valid Plan node for each Path.  Instead
      split the former PlanForeignScan API into three steps: GetForeignRelSize,
      GetForeignPaths, GetForeignPlan.  We had already bit the bullet of breaking
      the 9.1 FDW API for 9.2, so this shouldn't cause very much additional pain,
      and it's substantially more flexible for complex FDWs.
      
      Add an fdw_private field to RelOptInfo so that the new functions can save
      state there rather than possibly having to recalculate information two or
      three times.
      
      In addition, we'd not thought through what would be needed to allow an FDW
      to set up subexpressions of its choice for runtime execution.  We could
      treat ForeignScan.fdw_private as an executable expression but that seems
      likely to break existing FDWs unnecessarily (in particular, it would
      restrict the set of node types allowable in fdw_private to those supported
      by expression_tree_walker).  Instead, invent a separate field fdw_exprs
      which will receive the postprocessing appropriate for expression trees.
      (One field is enough since it can be a list of expressions; also, we assume
      the corresponding expression state tree(s) will be held within fdw_state,
      so we don't need to add anything to ForeignScanState.)
      
      Per review of Hanada Shigeru's pgsql_fdw patch.  We may need to tweak this
      further as we continue to work on that patch, but to me it feels a lot
      closer to being right now.
  35. 06 Mar 2012, 1 commit
    • Redesign PlanForeignScan API to allow multiple paths for a foreign table. · 6b289942
      Committed by Tom Lane
      The original API specification only allowed an FDW to create a single
      access path, which doesn't seem like a terribly good idea in hindsight.
      Instead, move the responsibility for building the Path node and calling
      add_path() into the FDW's PlanForeignScan function.  Now, it can do that
      more than once if appropriate.  There is no longer any need for the
      transient FdwPlan struct, so get rid of that.
      
      Etsuro Fujita, Shigeru Hanada, Tom Lane
  36. 28 Feb 2012, 1 commit
    • ALTER TABLE: skip FK validation when it's safe to do so · cb3a7c2b
      Committed by Alvaro Herrera
      We already skip rewriting the table in these cases, but we still force a
      whole table scan to validate the data.  This can be skipped, and thus
      we can make the whole ALTER TABLE operation just do some catalog touches
      instead of scanning the table, when these two conditions hold:
      
      (a) Old and new pg_constraint.conpfeqop match exactly.  This is actually
      stronger than needed; we could loosen things by way of operator
      families, but it'd require a lot more effort.
      
      (b) The functions, if any, implementing a cast from the foreign type to
      the primary opcintype are the same.  For this purpose, we can consider a
      binary coercion equivalent to an exact type match.  When the opcintype
      is polymorphic, require that the old and new foreign types match
      exactly.  (Since ri_triggers.c does use the executor, the stronger check
      for polymorphic types is no mere future-proofing.  However, no core type
      exercises its necessity.)
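
      A hedged example of a change that can now skip the validation scan
      (tables invented; a simple varchar length change satisfies both
      conditions above):

          CREATE TABLE customers (code varchar(10) PRIMARY KEY);
          CREATE TABLE orders (customer_code varchar(10) REFERENCES customers);

          -- Widening the referenced column keeps the same equality operator
          -- and is a binary-coercible change, so the foreign key on "orders"
          -- no longer needs a full revalidation scan.
          ALTER TABLE customers ALTER COLUMN code TYPE varchar(20);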
      
      Author: Noah Misch
      
      Committer's note: catalog version bumped due to change of the Constraint
      node.  I can't actually find any way to have such a node in a stored
      rule, but given that we have "out" support for them, better be safe.
  37. 28 Jan 2012, 1 commit
    • Use parameterized paths to generate inner indexscans more flexibly. · e2fa76d8
      Committed by Tom Lane
      This patch fixes the planner so that it can generate nestloop-with-
      inner-indexscan plans even with one or more levels of joining between
      the indexscan and the nestloop join that is supplying the parameter.
      The executor was fixed to handle such cases some time ago, but the
      planner was not ready.  This should improve our plans in many situations
      where join ordering restrictions formerly forced complete table scans.
      
      There is probably a fair amount of tuning work yet to be done, because
      of various heuristics that have been added to limit the number of
      parameterized paths considered.  However, we are not going to find out
      what needs to be adjusted until the code gets some real-world use, so
      it's time to get it in there where it can be tested easily.
      
      Note API change for index AM amcostestimate functions.  I'm not aware of
      any non-core index AMs, but if there are any, they will need minor
      adjustments.