1. 09 Dec, 2016 · 3 commits
  2. 08 Dec, 2016 · 19 commits
  3. 07 Dec, 2016 · 7 commits
    • Reset mppIsWriter field of PGPROC to false in hook function of GUC... · fe09950f
      Committed by Kenan Yao
      Reset mppIsWriter field of PGPROC to false in hook function of GUC gp_session_role if the new value is GP_ROLE_UTILITY(#1421)
      
      When someone connects to a segment in utility mode while a connection from the
      QD to this segment happens to share the same session id, a reader QE on this
      segment may mistake the utility backend for its corresponding writer QE. This
      can lead to confusion on the shared snapshot, and the reader QE of an extended
      query may report an error saying it cannot find the snapshot temp file.
      
      We avoid this by excluding utility backend from writer list.
      
      Signed-off-by: Pengzhou Tang <ptang@pivotal.io>
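      The exclusion can be sketched as a toy lookup. This is an illustrative Python
      sketch, not GPDB's C code: only the mppIsWriter field name comes from the
      commit message; the Proc class and find_writer() are hypothetical stand-ins
      for PGPROC and the writer-list scan.

      ```python
      # Sketch: a reader QE locates its writer QE by session id.  If a
      # utility-mode backend kept mpp_is_writer == True, a coincidentally
      # matching session id could make the reader pick the wrong backend;
      # resetting the flag excludes it from the lookup.
      from dataclasses import dataclass

      @dataclass
      class Proc:
          session_id: int
          mpp_is_writer: bool  # reset to False for utility-mode backends

      def find_writer(procs, session_id):
          for proc in procs:
              if proc.session_id == session_id and proc.mpp_is_writer:
                  return proc
          return None

      procs = [
          Proc(session_id=42, mpp_is_writer=False),  # utility backend, same session id
          Proc(session_id=42, mpp_is_writer=True),   # genuine writer QE
      ]
      print(find_writer(procs, 42) is procs[1])  # prints True: utility backend skipped
      ```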
    • Remove a flaky regression test of dispatcher. · 1d4d067c
      Committed by Kenan Yao
    • Update optimizer-specific answer file that was left out · ef8cee6a
      Committed by Asim R P
      by commit 9830cc2c
    • Fix partition installcheck test. · 9830cc2c
      Committed by Asim R P
      The fix removes the assumption that no other partition table exists before the
      test starts.
    • Move subtransaction limit removal tests from TINC to ICG. · 1e612781
      Committed by Asim R P
      Some of the TINC tests under the "limit_removal" heading were testing
      subtransaction functionality, but not limit removal in particular.  Move them to
      distributed_transactions.sql.  The limit in "limit removal" refers to the
      in-progress subtransactions array SharedSnapshotSlot->subxids.  If the limit of
      MaxGpSavePoints subtransactions is reached, the array is spilled over to disk.
      More elaborate tests for this functionality exist in TINC
      (mpp/gpdb/tests/storage/sub_transaction_limit_removal).
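      The spill behaviour can be sketched as follows. This is a simplified Python
      toy, assuming hypothetical names: only SharedSnapshotSlot->subxids and
      MaxGpSavePoints come from the commit message; the limit value and the
      per-element "disk" list are illustrative, not GPDB's actual mechanism.

      ```python
      # Toy sketch of an in-progress subtransaction array that overflows to
      # "disk" (simulated with a plain list) once a fixed limit is reached.
      MAX_GP_SAVE_POINTS = 4  # illustrative value, not GPDB's actual constant

      class SharedSnapshotSlot:
          def __init__(self):
              self.subxids = []  # in-memory array of subtransaction ids
              self.spilled = []  # simulated on-disk overflow

          def add_subxid(self, xid):
              if len(self.subxids) < MAX_GP_SAVE_POINTS:
                  self.subxids.append(xid)
              else:
                  self.spilled.append(xid)  # limit reached: spill to "disk"

      slot = SharedSnapshotSlot()
      for xid in range(100, 106):
          slot.add_subxid(xid)
      print(slot.subxids, slot.spilled)
      ```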
    • Make UAO catalog tests deterministic. · 506032ac
      Committed by Asim R P
      The tests were counting tuples from an AO segfile in AWAITING_DROP state.  This
      file should be excluded when counting visible tuples/eof, because an
      AWAITING_DROP segfile is only used by select transactions that run concurrently
      with a VACUUM transaction on the same AO table.
    • Logging memory usage without allocating further memory. (#1417) · 4643f93b
      Committed by foyzur
      Previously we were converting the accounts array to a tree before logging
      those accounts via a tree walk. This is unnecessary for simple logging
      into pg_log. Moreover, such a tree conversion is not feasible when we
      have OOM in a peer process. Allocating memory merely for logging would
      force the current process to hit OOM as well. Therefore, we log the
      memory usage without allocating any further memory.
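      The idea of walking the flat accounts array directly, instead of first
      materializing a tree, can be sketched like this. All names, fields, and the
      log format here are hypothetical; the point is only that the walk allocates
      nothing beyond the log output itself.

      ```python
      # Sketch: log memory accounts straight from the flat array, one linear
      # pass, reporting parent/child structure via the stored parent id
      # rather than by building tree nodes.
      accounts = [
          # (account_id, parent_id, allocated_bytes); parent_id -1 means root
          (0, -1, 4096),
          (1, 0, 1024),
          (2, 0, 2048),
          (3, 1, 512),
      ]

      def log_accounts(accounts, emit=print):
          for acct_id, parent_id, nbytes in accounts:
              emit(f"account={acct_id} parent={parent_id} bytes={nbytes}")

      log_accounts(accounts)
      ```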
  4. 06 Dec, 2016 · 9 commits
    • Initialize loop control to handle Oid collision · 25342741
      Committed by Daniel Gustafsson
      In 88f0623e a cache of recently used relfilenode Oids was added, and
      the cache is interrogated in the GetNewRelFileNode() logic. If the
      first Oid in the loop is encountered in the cache, however, the loop
      will only retry if the compiler happens to have initialized the loop
      control variable to true. Explicitly initialize it to true to handle
      this case for compilers that initialize to false, such as clang.
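      The corrected control flow can be sketched in Python (the real code is C in
      GetNewRelFileNode(); the cache contents and function shape here are
      hypothetical): the control variable must start out true so the body runs at
      least once, even when the very first candidate Oid collides with the cache.

      ```python
      # Sketch of the retry loop.  In the original C code the loop control
      # was left uninitialized, which only worked by accident when the
      # compiler happened to start it as true; the fix initializes it
      # explicitly so a collision on the first candidate still retries.
      recent_oids = {5001, 5002}  # hypothetical cache of recently used Oids

      def get_new_relfilenode(candidates):
          collides = True          # the fix: explicitly start as True
          it = iter(candidates)
          oid = None
          while collides:
              oid = next(it)       # try the next candidate Oid
              collides = oid in recent_oids
          return oid

      print(get_new_relfilenode([5001, 5002, 5003]))  # prints 5003
      ```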
    • Stabilize regression tests of dispatcher. · ea2ad511
      Committed by Kenan Yao
      Signed-off-by: Pengzhou Tang <ptang@pivotal.io>
    • gpcloud: add a case for Ohio region · db398280
      Committed by Adam Lee
    • Run installcheck from src/test · 72208928
      Committed by Dhanashree Kashid and Jesse Zhang
      `installcheck-good` stopped being a top-level make target after
      greenplum-db/gpdb@644b2a9
      
      [ci skip]
    • P
    • Add checkpoint step to solidify UAO faultinjector test · 6411a040
      Committed by Ashwin Agrawal
      The test panics and needs to perform crash recovery on segments; if many DDL
      operations were performed by previous tests within the checkpoint interval, a
      lot of xlog recovery has to happen for this test. Hence, add a checkpoint
      upfront to eliminate any extra load for this test.
    • Don't report drop errors during postdata restore. (#1395) · 751d4b06
      Committed by Chris Hajas
      When performing a dump, the postdata file contains DROP statements that will report an error if the relation already exists. This is misleading, since the restore will say that errors have occurred during restore even though the DB is still in the expected state. For most cases we could simply use the `DROP IF EXISTS` syntax; however, the `DROP CONSTRAINT IF EXISTS` syntax isn't available until Postgres 9.0, so we create a temporary procedure to perform this action.
      
      - Replaces DROP statements with DROP IF EXISTS for supported postdata reltypes
      - Temporarily adds and uses a function that mimics the behavior of DROP CONSTRAINT IF EXISTS, since this feature is not available in Postgres 8.2
      
      Authors: Chris Hajas, Larry Hamel, Stephen Wu
    • Fix for issue with EXISTS subquery with CTE · 5838bc1b
      Committed by Xin Zhang
      When we have a correlated subquery with EXISTS and a CTE, the planner
      produces a wrong plan:
      
      ```
      pivotal=# explain with t as (select 1) select * from foo where exists (select * from bar where foo.a = 'a' and foo.b = 'b');
                                                   QUERY PLAN
      ----------------------------------------------------------------------------------------------------
       Gather Motion 3:1  (slice3; segments: 3)  (cost=0.01..1522.03 rows=5055210000 width=16)
         ->  Nested Loop  (cost=0.01..1522.03 rows=1685070000 width=16)
               ->  Broadcast Motion 1:3  (slice2; segments: 1)  (cost=0.01..0.03 rows=1 width=0)
                     ->  Limit  (cost=0.01..0.01 rows=1 width=0)
                           ->  Gather Motion 3:1  (slice1; segments: 3)  (cost=0.01..0.03 rows=1 width=0)
                                 ->  Limit  (cost=0.01..0.01 rows=1 width=0)
                                       ->  Result  (cost=0.01..811.00 rows=23700 width=0)
                                             One-Time Filter: $0 = 'a'::bpchar AND $1 = 'b'::bpchar
                                             ->  Seq Scan on bar  (cost=0.01..811.00 rows=23700 width=0)
               ->  Seq Scan on foo  (cost=0.00..811.00 rows=23700 width=16)
       Optimizer status: legacy query optimizer
      (11 rows)
      ```
      
      This failed during execution because $0 is referenced across the slices.
      
      Root Cause:
      The planner produces a plan with `$0`, aka a `param`, but without a
      `subplan`.  The `param` is created by `replace_outer_var()` when the planner
      detects a query referring to relations from its outer/parent query.  Such a
      `var` is created when removing the `sublink` in the `convert_EXISTS_to_join()`
      function.  In that function, when handling the `EXISTS` query, we convert the
      `EXISTS sublink` to a `subquery RTE` (and expect it to get pulled up later by
      `pull_up_subquery()`).  However, the subquery cannot be pulled up by
      `pull_up_subquery()`, since it is not a simple subquery (`is_simple_subquery()`
      returns false because of the CTE in this case).  The `sublink` is nevertheless
      removed, so the planner cannot produce the `subplan` (which would have been a
      valid option).  The `var` is left behind as an outer reference and then
      converted to a `param`, which blows up during query execution.  There is a
      mismatch between the conditions in `convert_EXISTS_to_join()` and
      `pull_up_subquery()` about whether this subquery can be pulled up.
      
      The fix is to reuse the `is_simple_subquery()` check in
      `convert_EXISTS_to_join()`, so that it is consistent with
      `pull_up_subquery()` on whether the subquery can be pulled up or not.
      
      The correct plan after fix is:
      
      ```
      pivotal=# explain with t as (select 1) select * from foo where exists (select * from bar where foo.a = 'a' and foo.b = 'b');
                                                            QUERY PLAN
      ----------------------------------------------------------------------------------------------------------------------
       Gather Motion 3:1  (slice2; segments: 3)  (cost=0.00..1977.50 rows=35550 width=16)
         ->  Seq Scan on foo  (cost=0.00..1977.50 rows=11850 width=16)
               Filter: (subplan)
               SubPlan 1
                 ->  Result  (cost=0.01..811.00 rows=23700 width=16)
                       One-Time Filter: $0 = 'a'::bpchar AND $1 = 'b'::bpchar
                       ->  Result  (cost=882.11..1593.11 rows=23700 width=16)
                             ->  Materialize  (cost=882.11..1593.11 rows=23700 width=16)
                                   ->  Broadcast Motion 3:3  (slice1; segments: 3)  (cost=0.01..811.00 rows=23700 width=16)
                                         ->  Seq Scan on bar  (cost=0.01..811.00 rows=23700 width=16)
       Optimizer status: legacy query optimizer
      (11 rows)
      ```
      Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
    • Add "installcheck-world" target (and sync plperl with upstream) (#1343) · 644b2a9d
      Committed by Heikki Linnakangas
      * Sync plperl with PostgreSQL REL9_1_STABLE.
      
      We had mostly backported plperl from 9.1, but not up to the tip of
      the stable branch, and there were some cosmetic and other differences
      that were not really necessary. It is simpler if the GPDB version is
      as identical as possible to a particular upstream version, rather than a
      mishmash of different versions.
      
      Copy over everything from upstream REL9_1_STABLE, and tweak things to make
      it compile and pass the regression tests.
      
      I started hacking this because the regression tests were not passing. Now
      they pass.
      
      * Add "installcheck-world" makefile target, remove other top-level targets.
      
      The master plan is that we will have straightforward "world" makefile
      targets, like newer versions of PostgreSQL have, that build, install,
      and test everything in the source tree. For example, "make world" builds
      everything and the kitchen sink, and "make installcheck-world" runs
      every test suite against an existing installation.
      
      This patch is the first step in that direction. It creates a top-level
      "installcheck-world" target, that runs some of the test suites we have.
      It does not run everything, because the tests for some components are
      either broken, or require additional setup to run. The point of this
      first step is to establish the precedent, and clean up and add
      everything else that we care about to this new target. For example, this
      does not run the gphdfs tests, gpmapreduce tests, or orafce tests yet.
      
      Also, this only adds the "installcheck-world" target, not "world",
      "install-world" nor "check-world". Those are TODOs.
      
      Remove other, defunct, top-level targets, to reduce confusion.
      
      Move unittest-check top-level target from gpAux/Makefile to top-level
      Makefile, to get some more visibility to it.
      
      Adjust the concourse pipeline scripts to run the new "installcheck-world"
      target, replacing the separate "installcheck-good" and
      "installcheck-bugbuster" tasks. In the future, whenever a new test suite
      is added, it can be added to the installcheck-world target, and the
      pipeline will pick it up immediately, without having to modify the yaml
      files.
      
      An immediate benefit of this is that we now run the isolationtester test
      suite, and the PL tests, as part of the pipeline.
      
      * Fixed test cases
  5. 05 Dec, 2016 · 2 commits