1. 28 Nov 2018, 1 commit
  2. 27 Nov 2018, 1 commit
    • Fix ZSTD compression error "dst buffer too small" · f8afec06
      Authored by Ivan Leskin
      When ZSTD compression is used for AO CO tables, insertion of data may cause an
      error "Destination buffer is too small". This happens when compressed data is
      larger than uncompressed input data.
      
      This commit adds handling of this situation: leave the output buffer unchanged and
      return a used size equal to the source size. The caller (e.g.,
      'cdbappendonlystoragewrite.c') can handle such output; in this case, it
      copies the data from input to output itself.
      f8afec06
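      The fallback described above can be sketched as follows. The function and
      parameter names here are hypothetical placeholders, not the actual Greenplum
      symbols (the real path calls the ZSTD library and is handled in
      'cdbappendonlystoragewrite.c'):

      ```c
      #include <stdint.h>
      #include <stddef.h>

      /*
       * Hypothetical sketch of the fallback described above. If compression
       * fails or would not shrink the input (compressed size >= source size),
       * leave the destination buffer untouched and report a used size equal to
       * the source size; the caller detects this and copies the raw input.
       */
      static size_t
      compress_with_fallback(const uint8_t *src, size_t src_sz,
                             uint8_t *dst, size_t dst_sz,
                             size_t (*try_compress)(const uint8_t *, size_t,
                                                    uint8_t *, size_t))
      {
          size_t used = try_compress(src, src_sz, dst, dst_sz);

          if (used == 0 || used >= src_sz)
              return src_sz;      /* signal "store uncompressed" to the caller */
          return used;            /* genuinely compressed */
      }
      ```

      The key design point is that the compressor never errors out on
      incompressible input; it simply reports no gain and lets the caller store
      the block uncompressed.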
  3. 03 Nov 2018, 1 commit
  4. 30 Oct 2018, 3 commits
  5. 27 Oct 2018, 1 commit
  6. 25 Oct 2018, 2 commits
    • Bump ORCA version to 3.6.0 · a7d941b7
      Authored by Rahul Iyer
      a7d941b7
    • Bump Orca to v3.5.0 and Remove fix-me · 19f36828
      Authored by Bhuvnesh Chaudhary
      Update the tests and bump orca version corresponding to the below fix in
      ORCA.
      
      If the Scalar Agg node has a child that contains a volatile function and
      whose relational child has universal distribution, then instead of deriving
      the spec as that of the child (i.e. the Universal Spec), we should derive
      the distribution spec as Singleton.
      
      Deriving singleton distribution spec will ensure that the tree is
      executed on one node and the results will be distributed to the
      remaining segments.
      
      Consider the below setup:
      
      create table f (a text);
      explain insert into f select sum(random()) from (select 1);
      
      Physical plan:
      +--CPhysicalDML (Insert, "foo"), Source Columns: ["f1" (2)], Action: ("ColRef_0004" (4))   rows:1   width:1  rebinds:1   cost:0.016658   origin: [Grp:12, GrpExpr:2]
         +--CPhysicalComputeScalar   rows:1   width:1  rebinds:1   cost:0.001033   origin: [Grp:23, GrpExpr:3]
            |--CPhysicalMotionHashDistribute HASHED: [ +--CScalarIdent "f1" (2)
       , nulls colocated, duplicate sensitive ]   rows:1   width:1  rebinds:1   cost:0.001029   origin: [Grp:11, GrpExpr:2]
            |  +--CPhysicalComputeScalar   rows:1   width:1  rebinds:1   cost:0.001008   origin: [Grp:11, GrpExpr:1]
            |     |--CPhysicalScalarAgg( Global ) Grp Cols: [], Minimal Grp Cols:[], Generates Duplicates :[ 0 ]    rows:1   width:8  rebinds:1   cost:0.001000   origin: [Grp:6, GrpExpr:3]
            |     |  |--CPhysicalTVF   rows:1000   width:1  rebinds:1   cost:0.001000   origin: [Grp:1, GrpExpr:1]
            |     |  |  |--CScalarConst (1)   origin: [Grp:0, GrpExpr:0]
            |     |  |  +--CScalarConst (1)   origin: [Grp:0, GrpExpr:0]
            |     |  +--CScalarProjectList   origin: [Grp:5, GrpExpr:0]
            |     |     +--CScalarProjectElement "sum" (1)   origin: [Grp:4, GrpExpr:0]
            |     |        +--CScalarAggFunc (sum , Distinct: false , Aggregate Stage: Global)   origin: [Grp:3, GrpExpr:0]
            |     |           +--CScalarFunc (random)   origin: [Grp:2, GrpExpr:0]
            |     +--CScalarProjectList   origin: [Grp:10, GrpExpr:0]
            |        +--CScalarProjectElement "f1" (2)   origin: [Grp:9, GrpExpr:0]
            |           +--CScalarCoerceViaIO   origin: [Grp:8, GrpExpr:0]
            |              +--CScalarIdent "sum" (1)   origin: [Grp:7, GrpExpr:0]
            +--CScalarProjectList   origin: [Grp:22, GrpExpr:0]
               +--CScalarProjectElement "ColRef_0004" (4)   origin: [Grp:21, GrpExpr:0]
                  +--CScalarConst (1)   origin: [Grp:0, GrpExpr:0]
      ",
                                                   QUERY PLAN
      ----------------------------------------------------------------------------------------------------
       Insert  (cost=0.00..0.02 rows=1 width=8)
         ->  Result  (cost=0.00..0.00 rows=1 width=12)
               ->  Result  (cost=0.00..0.00 rows=1 width=8)
                     ->  Result  (cost=0.00..0.00 rows=1 width=8)
                           ->  Aggregate  (cost=0.00..0.00 rows=1 width=8)
                                 ->  Function Scan on generate_series  (cost=0.00..0.00 rows=334 width=1)
      
      Above, CPhysicalMotionHashDistribute has a universal child (i.e.
      CPhysicalTVF), and the translator will convert it to a Result node with a
      hash filter. The hash filter is built on the projected column in this
      query, i.e. sum(random()). Since random() is a volatile function, the
      resulting sum is non-deterministic, i.e. volatile in this case. So the
      hash filter may not filter out segments appropriately, and the insert may
      put a non-deterministic number of rows into the table (with the maximum
      being equal to the number of segments). Therefore, we should derive a
      Singleton distribution if the relational child of the Scalar Agg node
      delivers a Universal spec and the project list has volatile functions.
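      The derivation rule just described can be sketched as below; the enum and
      function names are illustrative placeholders, not actual ORCA classes:

      ```c
      /*
       * Illustrative sketch (hypothetical names, not actual ORCA code): a
       * scalar agg whose relational child delivers a Universal spec must
       * derive Singleton when the project list contains volatile functions,
       * so the aggregate runs on exactly one node and its non-deterministic
       * result is identical everywhere it is consumed.
       */
      typedef enum { SPEC_UNIVERSAL, SPEC_SINGLETON, SPEC_HASHED } DistSpec;

      static DistSpec
      derive_scalar_agg_spec(DistSpec child_spec, int project_list_has_volatile)
      {
          if (child_spec == SPEC_UNIVERSAL && project_list_has_volatile)
              return SPEC_SINGLETON;  /* force execution on one node */
          return child_spec;          /* otherwise inherit the child's spec */
      }
      ```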
      
      Resulting plan will be:
      
      pivotal=# explain insert into foo select sum(random()) from (select 1) f;
                                         QUERY PLAN
      ---------------------------------------------------------------------------------
       Insert  (cost=0.00..0.02 rows=1 width=8)
         ->  Result  (cost=0.00..0.00 rows=1 width=12)
               ->  Redistribute Motion 1:3  (slice1)  (cost=0.00..0.00 rows=1 width=8)
                     Hash Key: (((sum(random())))::text)
                     ->  Result  (cost=0.00..0.00 rows=1 width=8)
                           ->  Aggregate  (cost=0.00..0.00 rows=1 width=8)
                                 ->  Result  (cost=0.00..0.00 rows=1 width=1)
      19f36828
  7. 18 Oct 2018, 1 commit
  8. 16 Oct 2018, 1 commit
  9. 11 Oct 2018, 1 commit
  10. 05 Oct 2018, 1 commit
  11. 27 Sep 2018, 1 commit
    • Remove built-in stub functions for QuickLZ compressor. · 589533be
      Authored by Heikki Linnakangas
      The proprietary build can install them as normal C language functions,
      with CREATE FUNCTION, instead.
      
      In passing, remove some unused QuickLZ debugging GUCs.
      
      This doesn't yet get rid of all references to QuickLZ, unfortunately. The
      GUC and reloption validation code still needs to know about it, so that
      it can validate the options read from postgresql.conf when starting up
      the postmaster. For the same reason, you cannot yet add custom compression
      algorithms, besides quicklz, as an extension. But this is another step in
      the right direction, anyway.
      Co-authored-by: Jimmy Yih <jyih@pivotal.io>
      Co-authored-by: Joao Pereira <jdealmeidapereira@pivotal.io>
      589533be
  12. 25 Sep 2018, 1 commit
  13. 21 Sep 2018, 2 commits
  14. 19 Sep 2018, 1 commit
    • Update optimizer output test files and bump orca to v2.75.0 · 35697528
      Authored by Bhuvnesh Chaudhary
      This commit adds the tests corresponding to the below change introduced in PQO.
      In the case of an Insert on randomly distributed relations, a random /
      redistribute motion must exist to ensure randomness of the inserted data;
      otherwise there will be skew on one segment.
      
      Consider the below scenario:
      Scenario 1:
      ```
      CREATE TABLE t1_random (a int) DISTRIBUTED RANDOMLY;
      INSERT INTO t1_random VALUES (1), (2);
      SET optimizer_print_plan=on;
      
      EXPLAIN INSERT INTO t1_random VALUES (1), (2);
      Physical plan:
      +--CPhysicalDML (Insert, "t1_random"), Source Columns: ["column1" (0)], Action: ("ColRef_0001" (1))   rows:2   width:4  rebinds:1   cost:0.020884   origin: [Grp:1, GrpExpr:2]
         +--CPhysicalComputeScalar
            |--CPhysicalMotionRandom  ==> Random Motion Inserted (2)
            |  +--CPhysicalConstTableGet Columns: ["column1" (0)] Values: [(1); (2)] ==> Delivers Universal Spec (1)
            +--CScalarProjectList
               +--CScalarProjectElement "ColRef_0001" (1)
                  +--CScalarConst (1)
      ",
                                             QUERY PLAN
      ----------------------------------------------------------------------------------------
       Insert  (cost=0.00..0.02 rows=1 width=4)
         ->  Result  (cost=0.00..0.00 rows=1 width=8)
               ->  Result  (cost=0.00..0.00 rows=1 width=4) ==> Random Motion Converted to a Result Node with Hash Filter to avoid duplicates (4)
                     ->  Values Scan on "Values"  (cost=0.00..0.00 rows=1 width=4) ==> Delivers Universal Spec (3)
       Optimizer: PQO version 2.70.0
      ```
      
      When an Insert is requested on t1_random from a Universal source, the
      optimization framework does add a Random Motion / CPhysicalMotionRandom
      (see (2) above) to redistribute the data. However, since
      CPhysicalConstTableGet / Values Scan delivers a Universal spec, it is
      converted to a Result node with a hash filter to avoid duplicates during
      DXL to Planned Statement translation (see (4) above). Now, since there is
      no redistribute motion to spray the data randomly across the segments, the
      result node with hash filters allows data from only one segment to
      propagate further, which is then inserted by the DML node. This results in
      all the data being inserted onto one segment only.
      
      Scenario 2:
      ```
      CREATE TABLE t1_random (a int) DISTRIBUTED RANDOMLY;
      CREATE TABLE t2_random (a int) DISTRIBUTED RANDOMLY;
      EXPLAIN INSERT INTO t1_random SELECT * FROM t1_random;
      Physical plan:
      +--CPhysicalDML (Insert, "t1_random"), Source Columns: ["a" (0)], Action: ("ColRef_0008" (8))   rows:1   width:34  rebinds:1   cost:431.010436   origin: [Grp:1, GrpExpr:2]
         +--CPhysicalComputeScalar   rows:1   width:1  rebinds:1   cost:431.000011   origin: [Grp:5, GrpExpr:1]
            |--CPhysicalTableScan "t2_random" ("t2_random")   rows:1   width:34  rebinds:1   cost:431.000006   origin: [Grp:0, GrpExpr:1]
            +--CScalarProjectList   origin: [Grp:4, GrpExpr:0]
               +--CScalarProjectElement "ColRef_0008" (8)   origin: [Grp:3, GrpExpr:0]
                  +--CScalarConst (1)   origin: [Grp:2, GrpExpr:0]
      ",
                                              QUERY PLAN
      ------------------------------------------------------------------------------------------
       Insert  (cost=0.00..431.01 rows=1 width=4)
         ->  Result  (cost=0.00..431.00 rows=1 width=8)
               ->  Table Scan on t1_random  (cost=0.00..431.00 rows=1 width=4)
       Optimizer: PQO version 2.70.0
      ```
      
      When an Insert is requested on t1_random (a randomly distributed table)
      from another randomly distributed table, the optimization framework does
      not add a redistribute motion, because CPhysicalDML requested a Random
      spec and the source t2_random delivers a random distribution spec
      matching / satisfying the requested spec.
      
      So, in summary, there are two causes of skew in the data inserted into a randomly distributed table.
      Scenario 1: the Random Motion is converted into a Result node with hash filters.
      Scenario 2: the requested and derived specs match.
      
      This patch fixes the issue by ensuring that if an insert is performed on a
      randomly distributed table, there must exist a random motion that has been
      enforced by a true motion. This is achieved in two steps:
      1. CPhysicalDML / Insert on a randomly distributed table must request a Strict Random spec.
      2. The enforcer appended for the Random spec must track whether the enforced
      motion node has a universal child; if it does, set the boolean flag to false, else true.
      
      Characteristics of Strict Random Spec:
      1. Strict Random Spec matches / satisfies Strict Random Spec Only
      2. Random Spec enforced by a true motion matches / satisfies Strict Random Spec.
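      The two characteristics above can be sketched as a matching predicate.
      These struct and function names are hypothetical, for illustration only,
      not actual ORCA code:

      ```c
      /*
       * Hypothetical sketch of the matching rules above. A Strict Random
       * request is satisfied only by (1) another Strict Random spec, or
       * (2) a Random spec enforced by a true motion, i.e. a motion whose
       * child did not deliver a Universal spec.
       */
      typedef struct RandomSpec
      {
          int is_strict;               /* requested by CPhysicalDML on random tables */
          int enforced_by_true_motion; /* motion node did not have a universal child */
      } RandomSpec;

      static int
      satisfies_strict_random(const RandomSpec *derived)
      {
          return derived->is_strict || derived->enforced_by_true_motion;
      }
      ```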
      
      CPhysicalDML always has a CPhysicalComputeScalar node below it, which
      projects the additional column indicating whether it's an Insert. The
      request for Strict Random Motion is not propagated down to the
      CPhysicalComputeScalar node; CPhysicalComputeScalar requests a Random spec
      from its child instead. This mandates that a true random motion either
      already exists among the children of the CPhysicalDML node or is inserted
      between CPhysicalDML and CPhysicalComputeScalar.
      If there is any motion in the same group holding CPhysicalComputeScalar,
      it should also request a Random spec from its children.
      
      Plans after the fix
      Scenario 1:
      ```
      EXPLAIN INSERT INTO t1_random VALUES (1), (2);
      Physical plan:
      +--CPhysicalDML (Insert, "t1_random"), Source Columns: ["column1" (0)], Action: ("ColRef_0001" (1))   rows:2   width:4  rebinds:1   cost:0.020884   origin: [Grp:1, GrpExpr:2]
         +--CPhysicalMotionRandom   rows:2   width:1  rebinds:1   cost:0.000051   origin: [Grp:5, GrpExpr:2] ==> Strict Random Motion Enforcing true randomness
            +--CPhysicalComputeScalar   rows:2   width:1  rebinds:1   cost:0.000034   origin: [Grp:5, GrpExpr:1]
               |--CPhysicalMotionRandom   rows:2   width:4  rebinds:1   cost:0.000029   origin: [Grp:0, GrpExpr:2]
               |  +--CPhysicalConstTableGet Columns: ["column1" (0)] Values: [(1); (2)]   rows:2   width:4  rebinds:1   cost:0.000008   origin: [Grp:0, GrpExpr:1]
               +--CScalarProjectList   origin: [Grp:4, GrpExpr:0]
                  +--CScalarProjectElement "ColRef_0001" (1)   origin: [Grp:3, GrpExpr:0]
                     +--CScalarConst (1)   origin: [Grp:2, GrpExpr:0]
      ",
                                             QUERY PLAN
      ----------------------------------------------------------------------------------------
       Insert  (cost=0.00..0.02 rows=1 width=4)
         ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=0.00..0.00 rows=1 width=8) ==> Strict Random Motion Enforcing true randomness
               ->  Result  (cost=0.00..0.00 rows=1 width=8)
                     ->  Result  (cost=0.00..0.00 rows=1 width=4)
                           ->  Values Scan on "Values"  (cost=0.00..0.00 rows=1 width=4)
       Optimizer: PQO version 2
      
      ```
      
      Scenario 2:
      ```
      EXPLAIN INSERT INTO t1_random SELECT * FROM t1_random;
      Physical plan:
      +--CPhysicalDML (Insert, "t1_random"), Source Columns: ["a" (0)], Action: ("ColRef_0008" (8))   rows:2   width:34  rebinds:1   cost:431.020873   origin: [Grp:1, GrpExpr:2]
         +--CPhysicalMotionRandom   rows:2   width:1  rebinds:1   cost:431.000039   origin: [Grp:5, GrpExpr:2] ==> Strict Random Motion Enforcing True Randomness
            +--CPhysicalComputeScalar   rows:2   width:1  rebinds:1   cost:431.000023   origin: [Grp:5, GrpExpr:1]
               |--CPhysicalTableScan "t1_random" ("t1_random")   rows:2   width:34  rebinds:1   cost:431.000012   origin: [Grp:0, GrpExpr:1]
               +--CScalarProjectList   origin: [Grp:4, GrpExpr:0]
                  +--CScalarProjectElement "ColRef_0008" (8)   origin: [Grp:3, GrpExpr:0]
                     +--CScalarConst (1)   origin: [Grp:2, GrpExpr:0]
      ",
                                              QUERY PLAN
      ------------------------------------------------------------------------------------------
       Insert  (cost=0.00..431.02 rows=1 width=4)
         ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=0.00..431.00 rows=1 width=8) ==> Strict Random Motion Enforcing true randomness
               ->  Result  (cost=0.00..431.00 rows=1 width=8)
                     ->  Table Scan on t1_random  (cost=0.00..431.00 rows=1 width=4)
       Optimizer: PQO version 2.70.0
      (5 rows)
      ```
      
      Note: If an Insert is performed on a randomly distributed table from a
      hash distributed table (children), an additional redistribute motion is
      not enforced between CPhysicalDML and CPhysicalComputeScalar, as there
      already exists a true random motion due to the mismatch of random vs hash
      distribution specs.
      ```
      pivotal=# explain insert into t1_random select * from t1_hash;
      LOG:  statement: explain insert into t1_random select * from t1_hash;
      LOG:  2018-08-20 13:31:01:135438 PDT,THD000,TRACE,"
      Physical plan:
      +--CPhysicalDML (Insert, "t1_random"), Source Columns: ["a" (0)], Action: ("ColRef_0008" (8))   rows:100   width:34  rebinds:1   cost:432.043222   origin: [Grp:1, GrpExpr:2]
         +--CPhysicalComputeScalar   rows:100   width:1  rebinds:1   cost:431.001555   origin: [Grp:5, GrpExpr:1]
            |--CPhysicalMotionRandom   rows:100   width:34  rebinds:1   cost:431.001289   origin: [Grp:0, GrpExpr:2]
            |  +--CPhysicalTableScan "t1_hash" ("t1_hash")   rows:100   width:34  rebinds:1   cost:431.000623   origin: [Grp:0, GrpExpr:1]
            +--CScalarProjectList   origin: [Grp:4, GrpExpr:0]
               +--CScalarProjectElement "ColRef_0008" (8)   origin: [Grp:3, GrpExpr:0]
                  +--CScalarConst (1)   origin: [Grp:2, GrpExpr:0]
      ",
                                                 QUERY PLAN
      -------------------------------------------------------------------------------------------------
       Insert  (cost=0.00..432.04 rows=34 width=4)
         ->  Result  (cost=0.00..431.00 rows=34 width=8)
               ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=0.00..431.00 rows=34 width=4)
                     ->  Table Scan on t1_hash  (cost=0.00..431.00 rows=34 width=4)
       Optimizer: PQO version 2.70.0
      (5 rows)
      
      Time: 69.309 ms
      ```
      35697528
  15. 12 Sep 2018, 3 commits
    • Use -Wno-format-truncation and -Wno-stringop-truncation, if available. · 18f9c0b9
      Authored by Tom Lane
      gcc 8 has started emitting some warnings that are largely useless for
      our purposes, particularly since they complain about code following
      the project-standard coding convention that path names are assumed
      to be shorter than MAXPGPATH.  Even if we make the effort to remove
      that assumption in some future release, the changes wouldn't get
      back-patched.  Hence, just suppress these warnings, on compilers that
      have these switches.
      
      Backpatch to all supported branches.
      
      Discussion: https://postgr.es/m/1524563856.26306.9.camel@gunduz.org
      (cherry picked from commit e7165852)
      18f9c0b9
    • Suppress clang's unhelpful gripes about -pthread switch being unused. · 28d6c289
      Authored by Tom Lane
      Considering the number of cases in which "unused" command line arguments
      are silently ignored by compilers, it's fairly astonishing that anybody
      thought this warning was useful; it's certainly nothing but an annoyance
      when building Postgres.  One such case is that neither gcc nor clang
      complain about unrecognized -Wno-foo switches, making it more difficult
      to figure out whether the switch does anything than one could wish.
      
      Back-patch to 9.3, which is as far back as the patch applies conveniently
      (we'd have to back-patch PGAC_PROG_CC_VAR_OPT to go further, and it doesn't
      seem worth that).
      
      (cherry picked from commit 73b416b2)
      28d6c289
    • Bump ORCA version to 2.74.0 · f247b418
      Authored by Bhuvnesh Chaudhary
      Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
      f247b418
  16. 11 Sep 2018, 1 commit
  17. 08 Sep 2018, 1 commit
    • Introduce optimizer guc to enable generating streaming material · 635c2e0f
      Authored by Dhanashree Kashid
      Previously, while optimizing nestloop joins, ORCA always generated a
      blocking materialize node (cdb_strict=true). While this conservative
      approach ensured that the join node produced by ORCA is always
      deadlock safe, we sometimes produced slow-running plans.

      ORCA is now capable of producing a blocking materialize only when
      needed, by detecting a motion hazard in the nestloop join. A streaming
      materialize is generated when there is no motion hazard.

      This commit adds a guc to control this behavior. When set to off, we
      fall back to the old behavior of always producing a blocking materialize.
      
      Also bump statement_mem for a test in segspace. After this change, the
      test query produces a streaming spool, which changes the number of
      operator groups in the memory quota calculation, and the query fails with:
      `ERROR:  insufficient memory reserved for statement`. Bump
      statement_mem by 1MB to test the fault injection.

      Also bump the orca version to 2.72.0.
      Signed-off-by: Abhijit Subramanya <asubramanya@pivotal.io>
      635c2e0f
  18. 01 Sep 2018, 1 commit
    • Add test to ensure ORCA generates correct equivalence class · 27127b47
      Authored by Sambitesh Dash
      Given a query like below:
      
      SELECT Count(*)
      FROM   (SELECT *
              FROM   (SELECT tab_2.cd AS CD1,
                             tab_2.cd AS CD2
                      FROM   tab_1
                             LEFT JOIN tab_2
                                    ON tab_1.id = tab_2.id) f
              UNION ALL
              SELECT region,
                     code
              FROM   tab_3)a;
      
      Previously, orca produced an incorrect filter, (cd2 = cd), on top of the
      project list generated for producing an alias. This led to incorrect
      results, as column 'cd' is produced by the nullable side of the LOJ
      (tab_2), and such a filter produces NULL output.
      Ensure orca produces the correct equivalence class by considering the
      nullable columns.
      Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
      27127b47
  19. 31 Aug 2018, 1 commit
    • Add, optional, support for 128bit integers. · 9b164486
      Authored by Andres Freund
      We will, for the foreseeable future, not expose 128 bit datatypes to
      SQL. But being able to use 128bit math will allow us, in a later patch,
      to use 128bit accumulators for some aggregates; leading to noticeable
      speedups over using numeric.
      
      So far we only detect a gcc/clang extension that supports 128bit math,
      but no 128bit literals, and no *printf support. We might want to expand
      this in the future to further compilers, if there are any that
      provide similar support.
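      A minimal illustration of the extension being detected (generic C using
      the gcc/clang __int128 builtin, not code from the patch): a 128-bit
      accumulator can sum 64-bit values without overflow, which is what the
      later aggregate patch exploits instead of falling back to numeric.

      ```c
      #include <stdint.h>

      #if defined(__SIZEOF_INT128__)   /* the gcc/clang extension being detected */
      typedef __int128 int128;

      /*
       * Sum 64-bit values into a 128-bit accumulator; the intermediate sum
       * cannot overflow for any realistic number of inputs, avoiding a
       * slower arbitrary-precision fallback.
       */
      static int128
      sum_int64(const int64_t *vals, int n)
      {
          int128 acc = 0;
          for (int i = 0; i < n; i++)
              acc += vals[i];
          return acc;
      }
      #endif
      ```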
      
      Discussion: 544BB5F1.50709@proxel.se
      Author: Andreas Karlsson, with significant editorializing by me
      Reviewed-By: Peter Geoghegan, Oskari Saarenmaa
      9b164486
  20. 18 Aug 2018, 1 commit
    • Fix cost of BTree Index scan in ORCA · dc3186e5
      Authored by Sambitesh Dash
      This is the GPDB side commit for the ORCA commit.
      
      A note on test case changes:
      
      In `subselect_gp`, the plans become similar to Planner-generated plans.
      
      In `qp_gist_indexes4`, we started picking Index Scan over Bitmap Scan.
      
      Bumped the ORCA version to v2.70.0
      dc3186e5
  21. 16 Aug 2018, 1 commit
  22. 15 Aug 2018, 2 commits
  23. 11 Aug 2018, 1 commit
  24. 03 Aug 2018, 2 commits
  25. 02 Aug 2018, 1 commit
    • Merge with PostgreSQL 9.2beta2. · 4750e1b6
      Authored by Richard Guo
      This is the final batch of commits from PostgreSQL 9.2 development,
      up to the point where the REL9_2_STABLE branch was created, and 9.3
      development started on the PostgreSQL master branch.
      
      Notable upstream changes:
      
      * Index-only scan was included in the batch of upstream commits. It
        allows queries to retrieve data only from indexes, avoiding heap access.
      
      * Group commit was added to work effectively under heavy load. Previously,
        batching of commits became ineffective as the write workload increased,
        because of internal lock contention.
      
      * A new fast-path lock mechanism was added to reduce the overhead of
        taking and releasing certain types of locks which are taken and released
        very frequently but rarely conflict.
      
      * The new "parameterized path" mechanism was added. It allows inner index
        scans to use values from relations that are more than one join level up
        from the scan. This can greatly improve performance in situations where
        semantic restrictions (such as outer joins) limit the allowed join orderings.
      
      * SP-GiST (Space-Partitioned GiST) index access method was added to support
        unbalanced partitioned search structures. For suitable problems, SP-GiST can
        be faster than GiST in both index build time and search time.
      
      * Checkpoints now are performed by a dedicated background process. Formerly
        the background writer did both dirty-page writing and checkpointing. Separating
        this into two processes allows each goal to be accomplished more predictably.
      
      * Custom plan was supported for specific parameter values even when using
        prepared statements.
      
      * API for FDW was improved to provide multiple access "paths" for their tables,
        allowing more flexibility in join planning.
      
      * The security_barrier option was added for views to prevent optimizations that
        might allow view-protected data to be exposed to users.
      
      * Range data type was added to store a lower and upper bound belonging to its
        base data type.
      
      * CTAS (CREATE TABLE AS / SELECT INTO) is now treated as a utility statement. The
        SELECT query is planned during the execution of the utility. To conform to
        this change, GPDB executes the utility statement only on the QD and dispatches
        the plan of the SELECT query to the QEs.
      Co-authored-by: Adam Lee <ali@pivotal.io>
      Co-authored-by: Alexandra Wang <lewang@pivotal.io>
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
      Co-authored-by: Asim R P <apraveen@pivotal.io>
      Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io>
      Co-authored-by: Gang Xiong <gxiong@pivotal.io>
      Co-authored-by: Haozhou Wang <hawang@pivotal.io>
      Co-authored-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
      Co-authored-by: Jinbao Chen <jinchen@pivotal.io>
      Co-authored-by: Joao Pereira <jdealmeidapereira@pivotal.io>
      Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
      Co-authored-by: Paul Guo <paulguo@gmail.com>
      Co-authored-by: Richard Guo <guofenglinux@gmail.com>
      Co-authored-by: Shujie Zhang <shzhang@pivotal.io>
      Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
      Co-authored-by: Zhenghua Lyu <zlv@pivotal.io>
      4750e1b6
  26. 01 Aug 2018, 1 commit
  27. 31 Jul 2018, 1 commit
  28. 18 Jul 2018, 1 commit
  29. 10 Jul 2018, 3 commits
    • Add libexecinfo check for OpenBSD · 83147222
      Authored by Daniel Gustafsson
      In order to use backtrace() in error reporting on OpenBSD, we need
      to link with libexecinfo from ports, as backtrace() is a glibc-only
      addition.
      83147222
    • Check for libyaml when building gpmapreduce · d30a975f
      Authored by Daniel Gustafsson
      Greenplum Mapreduce requires libyaml, but was lacking a specific
      test for it in autoconf. This worked since gpfdist has the same
      check, but when building with --disable-gpfdist we need to ensure
      we have libyaml to avoid late compilation failures.
      d30a975f
    • Remove configure check prohibiting threaded libpython on OpenBSD. · b009259a
      Authored by Tom Lane
      According to recent tests, this case now works fine, so there's no reason
      to reject it anymore.  (Even if there are still some OpenBSD platforms
      in the wild where it doesn't work, removing the check won't break any case
      that worked before.)
      
      We can actually remove the entire test that discovers whether libpython
      is threaded, since without the OpenBSD case there's no need to know that
      at all.
      
      Per report from Davin Potts.  Back-patch to all active branches.
      Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io>
      b009259a
  30. 22 Jun 2018, 1 commit