1. 08 Jan, 2016 (2 commits)
    • Marginal cleanup of GROUPING SETS code in grouping_planner(). · a54676ac
      Committed by Tom Lane
      Improve comments and make it a shade less messy.  I think we might want
      to move all of this somewhere else later, but it needs to be more
      readable first.
      
      In passing, re-pgindent the file, affecting some recently-added comments
      concerning parallel query planning.
    • Delay creation of subplan tlist until after create_plan(). · c44d0138
      Committed by Tom Lane
      Once upon a time it was necessary for grouping_planner() to determine
      the tlist it wanted from the scan/join plan subtree before it called
      query_planner(), because query_planner() would actually make a Plan using
      that.  But we refactored things a long time ago to delay construction of
      the Plan tree till later, so there's no need to build that tlist until
      (and indeed unless) we're ready to plaster it onto the Plan.  The only
      thing query_planner() cares about is what Vars are going to be needed for
      the tlist, and it can perfectly well get that by looking at the real tlist
      rather than some masticated version.
      
      Well, actually, there is one minor glitch in that argument, which is that
      make_subplanTargetList also adds Vars appearing only in HAVING to the
      tlist it produces.  So now we have to account for HAVING explicitly in
      build_base_rel_tlists.  But that just adds a few lines of code, and
      I doubt it moves the needle much on processing time; we might be doing
      pull_var_clause() twice on the havingQual, but before we had it scanning
      dummy tlist entries instead.
      
      This is a very small down payment on rationalizing grouping_planner
      enough so it can be refactored.
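      As a rough illustration of the argument above, here is a toy Python sketch (a hypothetical tuple-based expression representation, not PostgreSQL's Node trees) of collecting the needed Vars directly from the real tlist plus the HAVING qual, instead of from a massaged subplan tlist:

```python
# Toy sketch: what query_planner really needs is the set of Vars that the
# tlist (and, per this commit, the HAVING qual) will require.
# Expressions are nested tuples ("op", arg, ...); bare strings are Vars.

def pull_vars(expr):
    """Collect variable names from a nested tuple expression."""
    if isinstance(expr, str):          # a Var
        return {expr}
    if isinstance(expr, tuple):        # ("op", arg, arg, ...)
        vars_ = set()
        for arg in expr[1:]:
            vars_ |= pull_vars(arg)
        return vars_
    return set()                       # a constant

tlist = [("sum", "x"), "grp"]
having_qual = (">", ("sum", "x"), 10)
needed = set().union(*(pull_vars(e) for e in tlist)) | pull_vars(having_qual)
print(sorted(needed))  # ['grp', 'x']
```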
  2. 03 Jan, 2016 (1 commit)
  3. 15 Dec, 2015 (1 commit)
    • Collect the global OR of hasRowSecurity flags for plancache · e5e11c8c
      Committed by Stephen Frost
      We carry around information about whether a given query has row
      security, to allow the plancache to use that information to invalidate
      a planned query in the event that the environment changes.
      
      Previously, the flag of one of the subqueries was simply being copied
      into place to indicate if the query overall included RLS components.
      That's wrong, as we need the global OR of all subqueries.  Fix by
      changing the code to match how fireRIRrules works, which results
      in OR'ing all of the flags.
      
      Noted by Tom.
      
      Back-patch to 9.5 where RLS was introduced.
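      A minimal sketch (plain Python, not PostgreSQL code) of the flag-combining fix described above: the invalidation metadata must be the OR of every subquery's flag, not a copy of whichever subquery's flag happened to land in place last.

```python
def combine_row_security_flags(subquery_flags):
    """Return True if any subquery involves row-level security."""
    has_row_security = False
    for flag in subquery_flags:
        has_row_security = has_row_security or flag  # global OR of all flags
    return has_row_security

# Copying only one subquery's flag would miss the True in the middle:
print(combine_row_security_flags([False, True, False]))  # True
```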
  4. 12 Dec, 2015 (1 commit)
    • Get rid of the planner's LateralJoinInfo data structure. · 4fcf4845
      Committed by Tom Lane
      I originally modeled this data structure on SpecialJoinInfo, but after
      commit acfcd45c that looks like a pretty poor decision.
      All we really need is relid sets identifying laterally-referenced rels;
      and most of the time, what we want to know about includes indirect lateral
      references, a case the LateralJoinInfo data was unsuited to compute with
      any efficiency.  The previous commit redefined RelOptInfo.lateral_relids
      as the transitive closure of lateral references, so that it easily supports
      checking indirect references.  For the places where we really do want just
      direct references, add a new RelOptInfo field direct_lateral_relids, which
      is easily set up as a copy of lateral_relids before we perform the
      transitive closure calculation.  Then we can just drop lateral_info_list
      and LateralJoinInfo and the supporting code.  This makes the planner's
      handling of lateral references noticeably more efficient, and shorter too.
      
      Such a change can't be back-patched into stable branches for fear of
      breaking extensions that might be looking at the planner's data structures;
      but it seems not too late to push it into 9.5, so I've done so.
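      A hedged sketch (plain Python with hypothetical names, not the planner's relid-set machinery) of the idea behind redefining lateral_relids as a transitive closure: start from each rel's direct lateral references and keep unioning in the references of referenced rels until the sets stop growing, so indirect references become cheap to check.

```python
def transitive_lateral_closure(direct):
    """direct: dict mapping relid -> set of directly referenced relids.

    Returns the transitive closure, i.e. all direct and indirect
    lateral references for each rel."""
    closure = {rel: set(refs) for rel, refs in direct.items()}
    changed = True
    while changed:
        changed = False
        for rel, refs in closure.items():
            extra = set()
            for ref in refs:
                extra |= closure.get(ref, set())
            if not extra <= refs:
                refs |= extra
                changed = True
    return closure

# rel 3 laterally references rel 2, which references rel 1:
direct = {1: set(), 2: {1}, 3: {2}}
print(transitive_lateral_closure(direct)[3])  # {1, 2}
```

      The direct sets kept before running the closure correspond to the new direct_lateral_relids field; the closed sets correspond to lateral_relids.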
  5. 11 Nov, 2015 (2 commits)
    • Generate parallel sequential scan plans in simple cases. · 80558c1f
      Committed by Robert Haas
      Add a new flag, consider_parallel, to each RelOptInfo, indicating
      whether a plan for that relation could conceivably be run inside of
      a parallel worker.  Right now, we're pretty conservative: for example,
      it might be possible to defer applying a parallel-restricted qual
      in a worker, and later do it in the leader, but right now we just
      don't try to parallelize access to that relation.  That's probably
      the right decision in most cases, anyway.
      
      Using the new flag, generate parallel sequential scan plans for plain
      baserels, meaning that we now have parallel sequential scan in
      PostgreSQL.  The logic here is pretty unsophisticated right now: the
      costing model probably isn't right in detail, and we can't push joins
      beneath Gather nodes, so the number of plans that can actually benefit
      from this is pretty limited right now.  Lots more work is needed.
      Nevertheless, it seems time to enable this functionality so that all
      this code can actually be tested easily by users and developers.
      
      Note that, if you wish to test this functionality, it will be
      necessary to set max_parallel_degree to a value greater than the
      default of 0.  Once a few more loose ends have been tidied up here, we
      might want to consider changing the default value of this GUC, but
      I'm leaving it alone for now.
      
      Along the way, fix a bug in cost_gather: the previous coding thought
      that a Gather node's transfer overhead should be costed on the basis of
      the relation size rather than the number of tuples that actually need
      to be passed off to the leader.
      
      Patch by me, reviewed in earlier versions by Amit Kapila.
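      A simplified, hypothetical sketch of the cost_gather fix mentioned above: the leader's tuple-transfer overhead should scale with the number of tuples actually passed up from workers, not with the size of the underlying relation. (The per-tuple constant here is an assumption for illustration.)

```python
PARALLEL_TUPLE_COST = 0.1  # assumed per-tuple worker-to-leader transfer cost

def cost_gather(subpath_total_cost, path_rows):
    """Total cost of a Gather over a subpath that emits path_rows tuples."""
    return subpath_total_cost + PARALLEL_TUPLE_COST * path_rows

# With a selective qual applied in the workers, few tuples cross the
# worker/leader boundary, so Gather overhead stays small even when the
# underlying relation is large:
print(cost_gather(1000.0, 10))
```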
    • Make sequential scans parallel-aware. · f0661c4e
      Committed by Robert Haas
      In addition, this patch fills in a number of missing bits and pieces in
      the parallel infrastructure.  Paths and plans now have a parallel_aware
      flag indicating whether whatever parallel-aware logic they have should
      be engaged.  It is believed that we will need this flag for a number of
      path/plan types, not just sequential scans, which is why the flag is
      generic rather than part of the SeqScan structures specifically.
      Also, execParallel.c now gives parallel nodes a chance to initialize
      their PlanState nodes from the DSM during parallel worker startup.
      
      Amit Kapila, with a fair amount of adjustment by me.  Review of previous
      patch versions by Haribabu Kommi and others.
  6. 16 Oct, 2015 (1 commit)
    • Prohibit parallel query when the isolation level is serializable. · a53c06a1
      Committed by Robert Haas
      In order for this to be safe, the code which handles true serializability
      will need to be taught that the SIRead locks taken by a parallel worker
      pertain to the same transaction as those taken by the parallel leader.
      Some further changes may be needed as well.  Until the necessary
      adaptations are made, don't generate parallel plans in serializable
      mode, and if a previously-generated parallel plan is used after
      serializable mode has been activated, run it serially.
      
      This fixes a bug in commit 7aea8e4f.
  7. 29 Sep, 2015 (1 commit)
    • Parallel executor support. · d1b7c1ff
      Committed by Robert Haas
      This code provides infrastructure for a parallel leader to start up
      parallel workers to execute subtrees of the plan tree being executed
      in the master.  User-supplied parameters from ParamListInfo are passed
      down, but PARAM_EXEC parameters are not.  Various other constructs,
      such as initplans, subplans, and CTEs, are also not currently shared.
      Nevertheless, there's enough here to support a basic implementation of
      parallel query, and we can lift some of the current restrictions as
      needed.
      
      Amit Kapila and Robert Haas
  8. 17 Sep, 2015 (1 commit)
    • Determine whether it's safe to attempt a parallel plan for a query. · 7aea8e4f
      Committed by Robert Haas
      Commit 924bcf4f introduced a framework
      for parallel computation in PostgreSQL that makes most but not all
      built-in functions safe to execute in parallel mode.  In order to have
      parallel query, we'll need to be able to determine whether that query
      contains functions (either built-in or user-defined) that cannot be
      safely executed in parallel mode.  This requires those functions to be
      labeled, so this patch introduces an infrastructure for that.  Some
      functions currently labeled as safe may need to be revised depending on
      how pending issues related to heavyweight locking under parallelism
      are resolved.
      
      Parallel plans can't be used except for the case where the query will
      run to completion.  If portal execution were suspended, the parallel
      mode restrictions would need to remain in effect during that time, but
      that might make other queries fail.  Therefore, this patch introduces
      a framework that enables consideration of parallel plans only when it
      is known that the plan will be run to completion.  This probably needs
      some refinement; for example, at bind time, we do not know whether a
      query run via the extended protocol will be run to completion or
      run with a limited fetch count.  Having the client indicate its
      intentions at bind time would constitute a wire protocol break.  Some
      contexts in which parallel mode would be safe are not adjusted by this
      patch; the default is not to try parallel plans except from call sites
      that have been updated to say that such plans are OK.
      
      This commit doesn't introduce any parallel paths or plans; it just
      provides a way to determine whether they could potentially be used.
      I'm committing it on the theory that the remaining parallel sequential
      scan patches will also get committed to this release, hopefully in the
      not-too-distant future.
      
      Robert Haas and Amit Kapila.  Reviewed (in earlier versions) by Noah
      Misch.
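      A toy sketch (not PostgreSQL internals; the function labels here are hypothetical) of the kind of check this infrastructure enables: walk an expression tree and report whether any function it calls is labeled unsafe for parallel execution.

```python
# Hypothetical safety labels; in PostgreSQL these live in the catalogs.
PARALLEL_SAFETY = {"lower": "safe", "abs": "safe", "nextval": "unsafe"}

def is_parallel_safe(expr):
    """expr: nested tuples ("funcname", arg, ...); other values are leaves.

    Unknown functions are treated as unsafe, the conservative default."""
    if not isinstance(expr, tuple):
        return True  # constants and plain values are parallel-safe
    funcname, *args = expr
    if PARALLEL_SAFETY.get(funcname, "unsafe") != "safe":
        return False
    return all(is_parallel_safe(a) for a in args)

print(is_parallel_safe(("lower", ("abs", -1))))       # True
print(is_parallel_safe(("lower", ("nextval", "s"))))  # False
```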
  9. 12 Aug, 2015 (1 commit)
    • Postpone extParam/allParam calculations until the very end of planning. · 68fa28f7
      Committed by Tom Lane
      Until now we computed these Param ID sets at the end of subquery_planner,
      but that approach depends on subquery_planner returning a concrete Plan
      tree.  We would like to switch over to returning one or more Paths for a
      subquery, and in that representation the necessary details aren't fully
      fleshed out (not to mention that we don't really want to do this work for
      Paths that end up getting discarded).  Hence, refactor so that we can
      compute the param ID sets at the end of planning, just before
      set_plan_references is run.
      
      The main change necessary to make this work is that we need to capture
      the set of outer-level Param IDs available to the current query level
      before exiting subquery_planner, since the outer levels' plan_params lists
      are transient.  (That's not going to pose a problem for returning Paths,
      since all the work involved in producing that data is part of expression
      preprocessing, which will continue to happen before Paths are produced.)
      On the plus side, this change gets rid of several existing kluges.
      
      Eventually I'd like to get rid of SS_finalize_plan altogether in favor of
      doing this work during set_plan_references, but that will require some
      complex rejiggering because SS_finalize_plan needs to visit subplans and
      initplans before the main plan.  So leave that idea for another day.
  10. 31 Jul, 2015 (1 commit)
    • Avoid some zero-divide hazards in the planner. · 8693ebe3
      Committed by Tom Lane
      Although I think on all modern machines floating division by zero
      results in Infinity not SIGFPE, we still don't want infinities
      running around in the planner's costing estimates; too much risk
      of that leading to insane behavior.
      
      grouping_planner() failed to consider the possibility that final_rel
      might be known dummy and hence have zero rowcount.  (I wonder if it
      would be better to set a rows estimate of 1 for dummy relations?
      But at least in the back branches, changing this convention seems
      like a bad idea, so I'll leave that for another day.)
      
      Make certain that get_variable_numdistinct() produces a nonzero result.
      The case that can be shown to be broken is with stadistinct < 0.0 and
      small ntuples; we did not prevent the result from rounding to zero.
      For good luck I applied clamp_row_est() to all the nonconstant return
      values.
      
      In ExecChooseHashTableSize(), Assert that we compute positive nbuckets
      and nbatch.  I know of no reason to think this isn't the case, but it
      seems like a good safety check.
      
      Per reports from Piotr Stefaniak.  Back-patch to all active branches.
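      A minimal sketch of the clamping idea: never let a row-count estimate round down to zero, since zero rowcounts invite divide-by-zero in later costing arithmetic. (This is a hypothetical simplification of PostgreSQL's clamp_row_est, for illustration only.)

```python
def clamp_row_est(nrows):
    """Clamp an estimated row count to a sane, nonzero value."""
    if nrows <= 1.0:
        return 1.0            # never return zero or negative estimates
    return float(round(nrows))  # otherwise round to a whole number of rows

# Small stadistinct-style fractions no longer round to zero:
print(clamp_row_est(0.0004))  # 1.0
print(clamp_row_est(123.6))   # 124.0
```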
  11. 26 Jul, 2015 (3 commits)
    • Allow to push down clauses from HAVING to WHERE when grouping sets are used. · 61444bfb
      Committed by Andres Freund
      Previously we disallowed pushing down quals to WHERE in the presence of
      grouping sets. That's overly restrictive.
      
      We now instead copy quals to WHERE if applicable, leaving the
      one in HAVING in place. That's because, at that stage of the planning
      process, it's nontrivial to determine if it's safe to remove the one in
      HAVING.
      
      Author: Andrew Gierth
      Discussion: 874mkt3l59.fsf@news-spur.riddles.org.uk
      Backpatch: 9.5, where grouping sets were introduced. This isn't exactly
          a bugfix, but it seems better to keep the branches in sync at this point.
    • Build column mapping for grouping sets in all required cases. · 144666f6
      Committed by Andres Freund
      The previous coding frequently failed to fail because for one it's
      unusual to have rollup clauses with one column, and for another
      sometimes the wrong mapping didn't cause obvious problems.
      
      Author: Jeevan Chalke
      Reviewed-By: Andrew Gierth
      Discussion: CAM2+6=W=9=hQOipH0HAPbkun3Z3TFWij_EiHue0_6UX=oR=1kw@mail.gmail.com
      Backpatch: 9.5, where grouping sets were introduced
    • Redesign tablesample method API, and do extensive code review. · dd7a8f66
      Committed by Tom Lane
      The original implementation of TABLESAMPLE modeled the tablesample method
      API on index access methods, which wasn't a good choice because, without
      specialized DDL commands, there's no way to build an extension that can
      implement a TSM.  (Raw inserts into system catalogs are not an acceptable
      thing to do, because we can't undo them during DROP EXTENSION, nor will
      pg_upgrade behave sanely.)  Instead adopt an API more like procedural
      language handlers or foreign data wrappers, wherein the only SQL-level
      support object needed is a single handler function identified by having
      a special return type.  This lets us get rid of the supporting catalog
      altogether, so that no custom DDL support is needed for the feature.
      
      Adjust the API so that it can support non-constant tablesample arguments
      (the original coding assumed we could evaluate the argument expressions at
      ExecInitSampleScan time, which is undesirable even if it weren't outright
      unsafe), and discourage sampling methods from looking at invisible tuples.
      Make sure that the BERNOULLI and SYSTEM methods are genuinely repeatable
      within and across queries, as required by the SQL standard, and deal more
      honestly with methods that can't support that requirement.
      
      Make a full code-review pass over the tablesample additions, and fix
      assorted bugs, omissions, infelicities, and cosmetic issues (such as
      failure to put the added code stanzas in a consistent ordering).
      Improve EXPLAIN's output of tablesample plans, too.
      
      Back-patch to 9.5 so that we don't have to support the original API
      in production.
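      A hedged sketch of the repeatability requirement for BERNOULLI-style sampling: seed a PRNG from the REPEATABLE value so the same seed selects the same rows within and across queries. (Illustration only; the real TSM API is a C handler-function interface, and the function name here is hypothetical.)

```python
import random

def bernoulli_sample(rows, percent, seed):
    """Keep each row independently with probability percent/100,
    deterministically for a given seed (the REPEATABLE value)."""
    rng = random.Random(seed)
    return [row for row in rows if rng.random() * 100.0 < percent]

rows = list(range(1000))
first = bernoulli_sample(rows, 10.0, seed=42)
second = bernoulli_sample(rows, 10.0, seed=42)
print(first == second)  # True: same seed yields the same sample
```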
  12. 23 Jun, 2015 (1 commit)
    • Improve inheritance_planner()'s performance for large inheritance sets. · 2cb9ec1b
      Committed by Tom Lane
      Commit c03ad560 introduced a planner
      performance regression for UPDATE/DELETE on large inheritance sets.
      It required copying the append_rel_list (which is of size proportional to
      the number of inherited tables) once for each inherited table, thus
      resulting in O(N^2) time and memory consumption.  While it's difficult to
      avoid that in general, the extra work only has to be done for
      append_rel_list entries that actually reference subquery RTEs, which
      inheritance-set entries will not.  So we can buy back essentially all of
      the loss in cases without subqueries in FROM; and even for those, the added
      work is mainly proportional to the number of UNION ALL subqueries.
      
      Back-patch to 9.2, like the previous commit.
      
      Tom Lane and Dean Rasheed, per a complaint from Thomas Munro.
  13. 25 May, 2015 (1 commit)
    • Manual cleanup of pgindent results. · 2aa0476d
      Committed by Tom Lane
      Fix some places where pgindent did silly stuff, often because project
      style wasn't followed to begin with.  (I've not touched the atomics
      headers, though.)
  14. 24 May, 2015 (1 commit)
  15. 23 May, 2015 (1 commit)
    • Remove the new UPSERT command tag and use INSERT instead. · 631d7490
      Committed by Andres Freund
      Previously, INSERT with ON CONFLICT DO UPDATE specified used a new
      command tag -- UPSERT.  It was introduced out of concern that INSERT as
      a command tag would be a misrepresentation for ON CONFLICT DO UPDATE, as
      some affected rows may actually have been updated.
      
      Alvaro Herrera noticed that the implementation of that new command tag
      was incomplete; in subsequent discussion we concluded that having it
      doesn't provide benefits that are in line with the compatibility breaks
      it requires.
      
      Catversion bump due to the removal of PlannedStmt->isUpsert.
      
      Author: Peter Geoghegan
      Discussion: 20150520215816.GI5885@postgresql.org
  16. 16 May, 2015 (2 commits)
    • Support GROUPING SETS, CUBE and ROLLUP. · f3d31185
      Committed by Andres Freund
      This SQL standard functionality allows aggregating data by several
      different GROUP BY clauses at once.  Each grouping set returns rows in
      which the columns grouped by other sets are set to NULL.
      
      This could previously be achieved by doing each grouping as a separate
      query, conjoined by UNION ALLs. Besides being considerably more concise,
      grouping sets will in many cases be faster, requiring only one scan over
      the underlying data.
      
      The current implementation of grouping sets only supports using sorting
      for input. Individual sets that share a sort order are computed in one
      pass. If there are sets that don't share a sort order, additional sort &
      aggregation steps are performed. These additional passes are sourced by
      the previous sort step; thus avoiding repeated scans of the source data.
      
      The code is structured in a way that adding support for purely using
      hash aggregation or a mix of hashing and sorting is possible. Sorting
      was chosen to be supported first, as it is the most generic method of
      implementation.
      
      Instead of, as in earlier versions of the patch, representing the
      chain of sort and aggregation steps as full blown planner and executor
      nodes, all but the first sort are performed inside the aggregation node
      itself. This avoids the need to do some unusual gymnastics to handle
      having to return aggregated and non-aggregated tuples from underlying
      nodes, as well as having to shut down underlying nodes early to limit
      memory usage.  The optimizer still builds Sort/Agg node to describe each
      phase, but they're not part of the plan tree, but instead additional
      data for the aggregation node. They're a convenient and preexisting way
      to describe aggregation and sorting.  The first (and possibly only) sort
      step is still performed as a separate execution step. That retains
      similarity with existing group by plans, makes rescans fairly simple,
      avoids very deep plans (leading to slow EXPLAINs) and easily allows
      skipping the sort step if the underlying data is already sorted by
      other means.
      
      A somewhat ugly side of this patch is having to deal with a grammar
      ambiguity between the new CUBE keyword and the cube extension/functions
      named cube (and rollup). To avoid breaking existing deployments of the
      cube extension it has not been renamed, neither has cube been made a
      reserved keyword. Instead precedence hacking is used to make GROUP BY
      cube(..) refer to the CUBE grouping sets feature, and not the function
      cube(). To actually group by a function cube(), unlikely as that might
      be, the function name has to be quoted.
      
      Needs a catversion bump because stored rules may change.
      
      Author: Andrew Gierth and Atri Sharma, with contributions from Andres Freund
      Reviewed-By: Andres Freund, Noah Misch, Tom Lane, Svenne Krap, Tomas
          Vondra, Erik Rijkers, Marti Raudsepp, Pavel Stehule
      Discussion: CAOeZVidmVRe2jU6aMk_5qkxnB7dfmPROzM7Ur8JPW5j8Y5X-Lw@mail.gmail.com
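      A toy illustration (plain Python over lists of dicts, nothing like the executor) of the semantics described above: each grouping set aggregates the input by its own column list, and the columns grouped by other sets come back as None (NULL).

```python
from collections import defaultdict

def grouping_sets(rows, sets, all_cols, agg_col):
    """Aggregate rows once per grouping set; ungrouped columns become None."""
    out = []
    for cols in sets:
        groups = defaultdict(int)
        for row in rows:
            groups[tuple(row[c] for c in cols)] += row[agg_col]
        for key, total in groups.items():
            grouped = dict(zip(cols, key))
            row_out = {c: grouped.get(c) for c in all_cols}
            row_out["sum"] = total
            out.append(row_out)
    return out

# Equivalent to GROUP BY GROUPING SETS ((a), (b)) over one scan of rows:
rows = [{"a": 1, "b": "x", "v": 10}, {"a": 1, "b": "y", "v": 5}]
for r in grouping_sets(rows, [("a",), ("b",)], ("a", "b"), "v"):
    print(r)
```

      The same result could be spelled as two grouped queries conjoined by UNION ALL, which is exactly the pre-9.5 workaround this feature replaces.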
    • TABLESAMPLE, SQL Standard and extensible · f6d208d6
      Committed by Simon Riggs
      Add a TABLESAMPLE clause to SELECT statements that allows the
      user to specify random BERNOULLI sampling or block-level
      SYSTEM sampling.  The implementation allows extensible
      sampling functions to be written, using a standard API.
      The basic version follows the SQL standard exactly.  Concrete
      use cases for the sampling API follow in later commits.
      
      Petr Jelinek
      
      Reviewed by Michael Paquier and Simon Riggs
  17. 13 May, 2015 (1 commit)
    • Add support for doing late row locking in FDWs. · afb9249d
      Committed by Tom Lane
      Previously, FDWs could only do "early row locking", that is lock a row as
      soon as it's fetched, even though local restriction/join conditions might
      discard the row later.  This patch adds callbacks that allow FDWs to do
      late locking in the same way that it's done for regular tables.
      
      To make use of this feature, an FDW must support the "ctid" column as a
      unique row identifier.  Currently, since ctid has to be of type TID,
      the feature is of limited use, though in principle it could be used by
      postgres_fdw.  We may eventually allow FDWs to specify another data type
      for ctid, which would make it possible for more FDWs to use this feature.
      
      This commit does not modify postgres_fdw to use late locking.  We've
      tested some prototype code for that, but it's not in committable shape,
      and besides it's quite unclear whether it actually makes sense to do late
      locking against a remote server.  The extra round trips required are likely
      to outweigh any benefit from improved concurrency.
      
      Etsuro Fujita, reviewed by Ashutosh Bapat, and hacked up a lot by me
  18. 08 May, 2015 (1 commit)
    • Add support for INSERT ... ON CONFLICT DO NOTHING/UPDATE. · 168d5805
      Committed by Andres Freund
      The newly added ON CONFLICT clause allows specifying an alternative to
      raising a unique or exclusion constraint violation error when inserting.
      ON CONFLICT refers to constraints that can either be specified using an
      inference clause (by specifying the columns of a unique constraint) or
      by naming a unique or exclusion constraint.  DO NOTHING avoids the
      constraint violation, without touching the pre-existing row.  DO UPDATE
      SET ... [WHERE ...] updates the pre-existing tuple, and has access to
      both the tuple proposed for insertion and the existing tuple; the
      optional WHERE clause can be used to prevent an update from being
      executed.  The UPDATE SET and WHERE clauses have access to the tuple
      proposed for insertion using the "magic" EXCLUDED alias, and to the
      pre-existing tuple using the table name or its alias.
      
      This feature is often referred to as upsert.
      
      This is implemented using a new infrastructure called "speculative
      insertion". It is an optimistic variant of regular insertion that first
      does a pre-check for existing tuples and then attempts an insert.  If a
      violating tuple was inserted concurrently, the speculatively inserted
      tuple is deleted and a new attempt is made.  If the pre-check finds a
      matching tuple the alternative DO NOTHING or DO UPDATE action is taken.
      If the insertion succeeds without detecting a conflict, the tuple is
      deemed inserted.
      
      To handle the possible ambiguity between the excluded alias and a table
      named excluded, and for convenience with long relation names, INSERT
      INTO now can alias its target table.
      
      Bumps catversion as stored rules change.
      
      Author: Peter Geoghegan, with significant contributions from Heikki
          Linnakangas and Andres Freund. Testing infrastructure by Jeff Janes.
      Reviewed-By: Heikki Linnakangas, Andres Freund, Robert Haas, Simon Riggs,
          Dean Rasheed, Stephen Frost and many others.
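      A simplified single-threaded sketch of the conflict-handling flow described above (a dict-based toy, not PostgreSQL's speculative-insertion machinery): pre-check for a conflicting key, insert if absent, and on conflict either do nothing or apply the DO UPDATE action.

```python
def upsert(table, key, row, action="nothing", update=None):
    """table: dict keyed by a unique key; returns what happened.

    The real mechanism is optimistic: it inserts speculatively and, if a
    conflicting tuple was inserted concurrently, deletes and retries.
    This single-threaded toy shows only the pre-check and the actions."""
    existing = table.get(key)            # pre-check for an existing tuple
    if existing is None:
        table[key] = row                 # attempt the insert
        return "inserted"
    if action == "nothing":
        return "skipped"                 # DO NOTHING: keep pre-existing row
    table[key] = update(existing, row)   # DO UPDATE SET ..., with access to
    return "updated"                     # both old and proposed tuples

t = {}
print(upsert(t, 1, {"v": 1}))                                  # inserted
print(upsert(t, 1, {"v": 2}))                                  # skipped
print(upsert(t, 1, {"v": 2}, "update", lambda old, new: new))  # updated
```

      The `update` callable plays the role of the SET clause, where `old` stands in for the existing tuple and `new` for the EXCLUDED pseudo-relation.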
  19. 23 Apr, 2015 (1 commit)
    • RLS fixes, new hooks, and new test module · 0bf22e0c
      Committed by Stephen Frost
      In prepend_row_security_policies(), defaultDeny was always true, so if
      there were any hook policies, the RLS policies on the table would just
      get discarded.  Fixed to start off with defaultDeny as false and then
      properly set later if we detect that only the default deny policy exists
      for the internal policies.
      
      The infinite recursion detection in fireRIRrules() didn't properly
      manage the activeRIRs list in the case of WCOs, so it would incorrectly
      report infinite recursion if the same relation with RLS appeared more
      than once in the rtable, for example "UPDATE t ... FROM t ...".
      
      Further, the RLS expansion code in fireRIRrules() was handling RLS in
      the main loop through the rtable, which led to RTEs being visited twice
      if they contained sublink subqueries, which
      prepend_row_security_policies() attempted to handle by exiting early if
      the RTE already had securityQuals.  That doesn't work, however, since
      if the query involved a security barrier view on top of a table with
      RLS, the RTE would already have securityQuals (from the view) by the
      time fireRIRrules() was invoked, and so the table's RLS policies would
      be ignored.  This is fixed in fireRIRrules() by handling RLS in a
      separate loop at the end, after dealing with any other sublink
      subqueries, thus ensuring that each RTE is only visited once for RLS
      expansion.
      
      The inheritance planner code didn't correctly handle non-target
      relations with RLS, which would get turned into subqueries during
      planning. Thus an update of the form "UPDATE t1 ... FROM t2 ..." where
      t1 has inheritance and t2 has RLS quals would fail.  Fix by making sure
      to copy in and update the securityQuals when they exist for non-target
      relations.
      
      process_policies() was adding WCOs to non-target relations, which is
      unnecessary, and could lead to a lot of wasted time in the rewriter and
      the planner. Fix by only adding WCO policies when working on the result
      relation.  Also in process_policies, we should be copying the USING
      policies to the WITH CHECK policies on a per-policy basis, fix by moving
      the copying up into the per-policy loop.
      
      Lastly, as noted by Dean, we were simply adding policies returned by the
      hook provided to the list of quals being AND'd, meaning that they would
      actually restrict records returned and there was no option to have
      internal policies and hook-based policies work together permissively (as
      all internal policies currently work).  Instead, explicitly add support
      for both permissive and restrictive policies by having a hook for each
      and combining the results appropriately.  To ensure this is all done
      correctly, add a new test module (test_rls_hooks) to test the various
      combinations of internal, permissive, and restrictive hook policies.
      
      Largely from Dean Rasheed (thanks!):
      
      CAEZATCVmFUfUOwwhnBTcgi6AquyjQ0-1fyKd0T3xBWJvn+xsFA@mail.gmail.com
      
      Author: Dean Rasheed, though I added the new hooks and test module.
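      A hedged sketch of the permissive/restrictive combination described in the last paragraph: permissive policies are OR'd together (any one may admit a row), while restrictive policies are AND'd (every one must admit it). The predicate representation here is hypothetical, not the rewriter's actual quals.

```python
def row_visible(row, permissive, restrictive):
    """Combine RLS-style policies: OR the permissive set, AND the
    restrictive set; a row is visible only if both checks pass."""
    perm_ok = any(p(row) for p in permissive) if permissive else True
    restr_ok = all(r(row) for r in restrictive)
    return perm_ok and restr_ok

# Hypothetical policies for illustration:
permissive = [lambda r: r["owner"] == "alice",  # internal policy
              lambda r: r["public"]]            # hook-provided policy
restrictive = [lambda r: not r["deleted"]]      # restrictive hook policy

row = {"owner": "bob", "public": True, "deleted": False}
print(row_visible(row, permissive, restrictive))  # True
```

      Before this fix, hook policies were simply AND'd into the quals, so there was no way for a hook policy to act permissively alongside the internal ones.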
  20. 23 Mar, 2015 (1 commit)
    • Allow foreign tables to participate in inheritance. · cb1ca4d8
      Committed by Tom Lane
      Foreign tables can now be inheritance children, or parents.  Much of the
      system was already ready for this, but we had to fix a few things of
      course, mostly in the area of planner and executor handling of row locks.
      
      As side effects of this, allow foreign tables to have NOT VALID CHECK
      constraints (and hence to accept ALTER ... VALIDATE CONSTRAINT), and to
      accept ALTER SET STORAGE and ALTER SET WITH/WITHOUT OIDS.  Continuing to
      disallow these things would've required bizarre and inconsistent special
      cases in inheritance behavior.  Since foreign tables don't enforce CHECK
      constraints anyway, a NOT VALID one is a complete no-op, but that doesn't
      mean we shouldn't allow it.  And it's possible that some FDWs might have
      use for SET STORAGE or SET WITH OIDS, though doubtless they will be no-ops
      for most.
      
      An additional change in support of this is that when a ModifyTable node
      has multiple target tables, they will all now be explicitly identified
      in EXPLAIN output, for example:
      
       Update on pt1  (cost=0.00..321.05 rows=3541 width=46)
         Update on pt1
         Foreign Update on ft1
         Foreign Update on ft2
         Update on child3
         ->  Seq Scan on pt1  (cost=0.00..0.00 rows=1 width=46)
         ->  Foreign Scan on ft1  (cost=100.00..148.03 rows=1170 width=46)
         ->  Foreign Scan on ft2  (cost=100.00..148.03 rows=1170 width=46)
         ->  Seq Scan on child3  (cost=0.00..25.00 rows=1200 width=46)
      
      This was done mainly to provide an unambiguous place to attach "Remote SQL"
      fields, but it is useful for inherited updates even when no foreign tables
      are involved.
      
      Shigeru Hanada and Etsuro Fujita, reviewed by Ashutosh Bapat and Kyotaro
      Horiguchi, some additional hacking by me
  21. 16 Mar, 2015 (1 commit)
    • Improve representation of PlanRowMark. · 7b8b8a43
      Committed by Tom Lane
      This patch fixes two inadequacies of the PlanRowMark representation.
      
      First, that the original LockingClauseStrength isn't stored (and cannot be
      inferred for foreign tables, which always get ROW_MARK_COPY).  Since some
      PlanRowMarks are created out of whole cloth and don't actually have an
      ancestral RowMarkClause, this requires adding a dummy LCS_NONE value to
      enum LockingClauseStrength, which is fairly annoying but the alternatives
      seem worse.  This fix allows getting rid of the use of get_parse_rowmark()
      in FDWs (as per the discussion around commits 462bd957 and
      8ec8760f), and it simplifies some things elsewhere.
      
      Second, that the representation assumed that all child tables in an
      inheritance hierarchy would use the same RowMarkType.  That's true today
      but will soon not be true.  We add an "allMarkTypes" field that identifies
      the union of mark types used across all of a parent table's children, and
      use that where appropriate (currently, only in preprocess_targetlist()).
      
      In passing fix a couple of minor infelicities left over from the SKIP
      LOCKED patch, notably that _outPlanRowMark still thought waitPolicy
      is a bool.
      
      Catversion bump is required because the numeric values of enum
      LockingClauseStrength can appear in on-disk rules.
      
      Extracted from a much larger patch to support foreign table inheritance;
      it seemed worth breaking this out, since it's a separable concern.
      
      Shigeru Hanada and Etsuro Fujita, somewhat modified by me
      7b8b8a43
  22. 12 Mar, 2015 1 commit
    • T
      Support flattening of empty-FROM subqueries and one-row VALUES tables. · f4abd024
      Committed by Tom Lane
      We can't handle this in the general case due to limitations of the
      planner's data representations; but we can allow it in many useful cases,
      by being careful to flatten only when we are pulling a single-row subquery
      up into a FROM (or, equivalently, inner JOIN) node that will still have at
      least one remaining relation child.  Per discussion of an example from
      Kyotaro Horiguchi.
      f4abd024
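      For illustration, a sketch of a query shape that can now be flattened
      (table and column names are hypothetical):

      ```sql
      -- The single-row VALUES list is pulled up into the FROM node,
      -- which still has "tab" as a remaining relation child:
      SELECT t.x + v.a
      FROM tab t, (VALUES (1, 2)) AS v(a, b);
      -- behaves as if written:
      -- SELECT t.x + 1 FROM tab t;
      ```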
  23. 22 Feb, 2015 1 commit
  24. 18 Feb, 2015 1 commit
    • T
      Fix EXPLAIN output for cases where parent table is excluded by constraints. · abe45a9b
      Committed by Tom Lane
      The previous coding in EXPLAIN always labeled a ModifyTable node with the
      name of the target table affected by its first child plan.  When originally
      written, this was necessarily the parent table of the inheritance tree,
      so everything was unconfusing.  But when we added NO INHERIT constraints,
      it became possible for the parent table to be deleted from the plan by
      constraint exclusion while still leaving child tables present.  This led to
      the ModifyTable plan node being labeled with the first surviving child,
      which was deemed confusing.  Fix it by retaining the parent table's RT
      index in a new field in ModifyTable.
      
      Etsuro Fujita, reviewed by Ashutosh Bapat and myself
      abe45a9b
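      A sketch of the situation being fixed, under hypothetical names:

      ```sql
      CREATE TABLE parent (f1 int, CHECK (f1 < 0) NO INHERIT);
      CREATE TABLE child () INHERITS (parent);

      -- Constraint exclusion removes the parent scan (f1 > 0 contradicts
      -- its NO INHERIT check constraint), but the ModifyTable node is now
      -- still labeled "Update on parent" rather than "Update on child":
      EXPLAIN UPDATE parent SET f1 = f1 + 1 WHERE f1 > 0;
      ```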
  25. 07 Jan, 2015 1 commit
  26. 27 Nov, 2014 1 commit
    • S
      Rename pg_rowsecurity -> pg_policy and other fixes · 143b39c1
      Committed by Stephen Frost
      As pointed out by Robert, we should really have named pg_rowsecurity
      pg_policy, as the objects stored in that catalog are policies.  This
      patch fixes that and updates the column names to start with 'pol' to
      match the new catalog name.
      
      The security consideration for COPY with row level security, also
      pointed out by Robert, has also been addressed by remembering and
      re-checking the OID of the relation initially referenced during COPY
      processing, to make sure it hasn't changed under us by the time we
      finish planning out the query which has been built.
      
      Robert and Alvaro also commented on missing OCLASS and OBJECT entries
      for POLICY (formerly ROWSECURITY or POLICY, depending) in various
      places.  This patch fixes that too, which also happens to add the
      ability to COMMENT on policies.
      
      In passing, attempt to improve the consistency of messages, comments,
      and documentation as well.  This removes various incarnations of
      'row-security', 'row-level security', 'Row-security', etc, in favor
      of 'policy', 'row level security' or 'row_security' as appropriate.
      
      Happy Thanksgiving!
      143b39c1
  27. 08 Oct, 2014 1 commit
    • A
      Implement SKIP LOCKED for row-level locks · df630b0d
      Committed by Alvaro Herrera
      This clause changes the behavior of SELECT locking clauses in the
      presence of locked rows: instead of causing a process to block waiting
      for the locks held by other processes (or raise an error, with NOWAIT),
      SKIP LOCKED makes the new reader skip over such rows.  While this is not
      appropriate behavior for general purposes, there are some cases in which
      it is useful, such as queue-like tables.
      
      Catalog version bumped because this patch changes the representation of
      stored rules.
      
      Reviewed by Craig Ringer (based on a previous attempt at an
      implementation by Simon Riggs, who also provided input on the syntax
      used in the current patch), David Rowley, and Álvaro Herrera.
      
      Author: Thomas Munro
      df630b0d
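      The queue-like-table case might look like this sketch (table and
      column names hypothetical):

      ```sql
      -- Each worker claims one job; rows locked by other workers are
      -- skipped instead of blocking (or erroring, as NOWAIT would):
      BEGIN;
      SELECT id, payload
        FROM jobs
       ORDER BY id
       LIMIT 1
         FOR UPDATE SKIP LOCKED;
      -- ...process the job, DELETE it, then COMMIT.
      ```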
  28. 19 Sep, 2014 1 commit
    • S
      Row-Level Security Policies (RLS) · 491c029d
      Committed by Stephen Frost
      Building on the updatable security-barrier views work, add the
      ability to define policies on tables to limit the set of rows
      which are returned from a query and which are allowed to be added
      to a table.  Expressions defined by the policy for filtering are
      added to the security barrier quals of the query, while expressions
      defined to check records being added to a table are added to the
      with-check options of the query.
      
      New top-level commands are CREATE/ALTER/DROP POLICY and are
      controlled by the table owner.  Row Security is able to be enabled
      and disabled by the owner on a per-table basis using
      ALTER TABLE .. ENABLE/DISABLE ROW SECURITY.
      
      Per discussion, ROW SECURITY is disabled on tables by default and
      must be enabled for policies on the table to be used.  If no
      policies exist on a table with ROW SECURITY enabled, a default-deny
      policy is used and no records will be visible.
      
      By default, row security is applied at all times except for the
      table owner and the superuser.  A new GUC, row_security, is added
      which can be set to ON, OFF, or FORCE.  When set to FORCE, row
      security will be applied even for the table owner and superusers.
      When set to OFF, row security will be disabled when allowed and an
      error will be thrown if the user does not have rights to bypass row
      security.
      
      Per discussion, pg_dump sets row_security = OFF by default to ensure
      that exports and backups will have all data in the table or will
      error if there are insufficient privileges to bypass row security.
      A new option has been added to pg_dump, --enable-row-security, to
      ask pg_dump to export with row security enabled.
      
      A new role capability, BYPASSRLS, which can only be set by the
      superuser, is added to allow other users to be able to bypass row
      security using row_security = OFF.
      
      Many thanks to the various individuals who have helped with the
      design, particularly Robert Haas for his feedback.
      
      Authors include Craig Ringer, KaiGai Kohei, Adam Brightwell, Dean
      Rasheed, with additional changes and rework by me.
      
      Reviewers have included all of the above, Greg Smith,
      Jeff McCormick, and Robert Haas.
      491c029d
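      A minimal sketch of the new commands (relation, policy, and column
      names hypothetical):

      ```sql
      ALTER TABLE accounts ENABLE ROW SECURITY;

      -- USING filters rows returned; WITH CHECK validates rows being added:
      CREATE POLICY own_rows ON accounts
          USING (owner = current_user)
          WITH CHECK (owner = current_user);

      -- With ROW SECURITY enabled and no policies, default-deny applies.
      ```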
  29. 23 Jul, 2014 1 commit
    • T
      Re-enable error for "SELECT ... OFFSET -1". · 27048980
      Committed by Tom Lane
      The executor has thrown errors for negative OFFSET values since 8.4 (see
      commit bfce56ee), but in a moment of brain
      fade I taught the planner that OFFSET with a constant negative value was a
      no-op (commit 1a1832eb).  Reinstate the
      former behavior by only discarding OFFSET with a value of exactly 0.  In
      passing, adjust a planner comment that referenced the ancient behavior.
      
      Back-patch to 9.3 where the mistake was introduced.
      27048980
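      After this fix, only an OFFSET of exactly 0 is discarded as a no-op
      (table name hypothetical):

      ```sql
      SELECT * FROM tab OFFSET 0;    -- planner discards the no-op OFFSET
      SELECT * FROM tab OFFSET -1;   -- ERROR: OFFSET must not be negative
      ```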
  30. 19 Jun, 2014 1 commit
    • T
      Implement UPDATE tab SET (col1,col2,...) = (SELECT ...), ... · 8f889b10
      Committed by Tom Lane
      This SQL-standard feature allows a sub-SELECT yielding multiple columns
      (but only one row) to be used to compute the new values of several columns
      to be updated.  While the same results can be had with an independent
      sub-SELECT per column, such a workaround can require a great deal of
      duplicated computation.
      
      The standard actually says that the source for a multi-column assignment
      could be any row-valued expression.  The implementation used here is
      tightly tied to our existing sub-SELECT support and can't handle other
      cases; the Bison grammar would have some issues with them too.  However,
      I don't feel too bad about this since other cases can be converted into
      sub-SELECTs.  For instance, "SET (a,b,c) = row_valued_function(x)" could
      be written "SET (a,b,c) = (SELECT * FROM row_valued_function(x))".
      8f889b10
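      A sketch of the feature, with hypothetical table and column names:

      ```sql
      -- One sub-SELECT computes several target columns at once,
      -- avoiding a duplicated sub-SELECT per column:
      UPDATE accounts a
         SET (contact_first, contact_last) =
             (SELECT e.first_name, e.last_name
                FROM employees e
               WHERE e.id = a.sales_person);
      ```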
  31. 07 May, 2014 1 commit
    • B
      pgindent run for 9.4 · 0a783200
      Committed by Bruce Momjian
      This includes removing tabs after periods in C comments, which was
      applied to back branches, so this change should not affect backpatching.
      0a783200
  32. 13 Apr, 2014 1 commit
    • S
      Make security barrier views automatically updatable · 842faa71
      Committed by Stephen Frost
      Views which are marked as security_barrier must have their quals
      applied before any user-defined quals are called, to prevent
      user-defined functions from being able to see rows which the
      security barrier view is intended to prevent them from seeing.
      
      Remove the restriction on security barrier views being automatically
      updatable by adding a new securityQuals list to the RTE structure
      which keeps track of the quals from security barrier views at each
      level, independently of the user-supplied quals.  When RTEs are
      later discovered which have securityQuals populated, they are turned
      into subquery RTEs which are marked as security_barrier to prevent
      any user-supplied quals being pushed down (modulo LEAKPROOF quals).
      
      Dean Rasheed, reviewed by Craig Ringer, Simon Riggs, KaiGai Kohei
      842faa71
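      A sketch of an automatically updatable security barrier view (names
      hypothetical):

      ```sql
      CREATE VIEW my_accounts WITH (security_barrier) AS
          SELECT * FROM accounts WHERE owner = current_user;

      -- Now updatable; the view's qual is applied before any user-supplied
      -- quals (modulo LEAKPROOF ones):
      UPDATE my_accounts SET balance = 0 WHERE id = 42;
      ```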
  33. 08 Jan, 2014 1 commit
  34. 24 Dec, 2013 1 commit
    • T
      Support ordered-set (WITHIN GROUP) aggregates. · 8d65da1f
      Committed by Tom Lane
      This patch introduces generic support for ordered-set and hypothetical-set
      aggregate functions, as well as implementations of the instances defined in
      SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(),
      percent_rank(), cume_dist()).  We also added mode() though it is not in the
      spec, as well as versions of percentile_cont() and percentile_disc() that
      can compute multiple percentile values in one pass over the data.
      
      Unlike the original submission, this patch puts full control of the sorting
      process in the hands of the aggregate's support functions.  To allow the
      support functions to find out how they're supposed to sort, a new API
      function AggGetAggref() is added to nodeAgg.c.  This allows retrieval of
      the aggregate call's Aggref node, which may have other uses beyond the
      immediate need.  There is also support for ordered-set aggregates to
      install cleanup callback functions, so that they can be sure that
      infrastructure such as tuplesort objects gets cleaned up.
      
      In passing, make some fixes in the recently-added support for variadic
      aggregates, and make some editorial adjustments in the recent FILTER
      additions for aggregates.  Also, simplify use of IsBinaryCoercible() by
      allowing it to succeed whenever the target type is ANY or ANYELEMENT.
      It was inconsistent that it dealt with other polymorphic target types
      but not these.
      
      Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing,
      and rather heavily editorialized upon by Tom Lane
      8d65da1f
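      For illustration, the new aggregates in use (table and column names
      hypothetical):

      ```sql
      SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY income),
             mode() WITHIN GROUP (ORDER BY dept)
        FROM staff;

      -- The array variants compute several percentiles in one pass:
      SELECT percentile_disc(ARRAY[0.25, 0.5, 0.75])
             WITHIN GROUP (ORDER BY income)
        FROM staff;
      ```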
  35. 15 Dec, 2013 1 commit
    • T
      Fix inherited UPDATE/DELETE with UNION ALL subqueries. · c03ad560
      Committed by Tom Lane
      Fix an oversight in commit b3aaf908: we do
      indeed need to process the planner's append_rel_list when copying RTE
      subqueries, because if any of them were flattenable UNION ALL subqueries,
      the append_rel_list shows which subquery RTEs were pulled up out of which
      other ones.  Without this, UNION ALL subqueries aren't correctly inserted
      into the update plans for inheritance child tables after the first one,
      typically resulting in no update happening for those child table(s).
      Per report from Victor Yegorov.
      
      Experimentation with this case also exposed a fault in commit
      a7b96538: if an inherited UPDATE/DELETE
      was proven totally dummy by constraint exclusion, we might arrive at
      add_rtes_to_flat_rtable with root->simple_rel_array being NULL.  This
      should be interpreted as not having any RelOptInfos.  I chose to code
      the guard as a check against simple_rel_array_size, so as to also
      provide some protection against indexing off the end of the array.
      
      Back-patch to 9.2 where the faulty code was added.
      c03ad560
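      The failing shape, sketched with hypothetical names:

      ```sql
      -- An inherited UPDATE sourced from a flattenable UNION ALL subquery.
      -- Before this fix, the pulled-up UNION ALL arms were missing from
      -- the plans built for inheritance children after the first, so
      -- those children were silently not updated:
      UPDATE parent p
         SET f1 = s.v
        FROM (SELECT 1 AS v UNION ALL SELECT 2) s
       WHERE p.f2 = s.v;
      ```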