1. 17 January 2018, 1 commit
    • Enable GIN indexes. · 89d739b0
      Committed by Heikki Linnakangas
      A long time ago, they were disabled, because the GIN code had not been
      modified to work with file replication. Now with WAL replication, there's
      no reason to not support them.
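      
      As a hedged illustration (table, column, and query are hypothetical),
      a GIN index can now be created and used, for example for full-text
      search:
      
      ```
      CREATE TABLE docs (id int, body text) DISTRIBUTED BY (id);
      CREATE INDEX docs_body_gin ON docs USING gin (to_tsvector('english', body));
      SELECT id FROM docs WHERE to_tsvector('english', body) @@ to_tsquery('replication');
      ```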
  2. 15 January 2018, 3 commits
    • Reduce overhead in partition testing · 90a957eb
      Committed by Daniel Gustafsson
      Since ICW is a very long-running process, attempt to reduce time
      consumption by reducing overhead during testing while keeping the
      tests constant (coverage is not reduced).
      
      Avoid dropping/recreating underlying test tables when not required,
      reduce the number of partitions in some cases and skip pointless
      drops for objects we know don't exist. In total this shaves about
      30-40 seconds off an ICW run on my local machine, mileage may vary.
    • Add test for partition exchange with external table · 629f8161
      Committed by Daniel Gustafsson
      Make sure that we are able to exchange in an external partition, and
      also ensure that truncation doesn't recurse into the external partition.
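      
      A minimal sketch of the tested scenario (table names, location, and
      partition layout are hypothetical):
      
      ```
      CREATE TABLE sales (id int, yr int) DISTRIBUTED BY (id)
        PARTITION BY RANGE (yr) (START (2016) END (2018) EVERY (1));
      CREATE EXTERNAL TABLE sales_ext (id int, yr int)
        LOCATION ('gpfdist://host:8080/sales.csv') FORMAT 'CSV';
      -- exchange the external table in as a partition
      ALTER TABLE sales EXCHANGE PARTITION FOR (2016)
        WITH TABLE sales_ext WITHOUT VALIDATION;
      TRUNCATE sales;  -- must not recurse into the external partition
      ```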
    • Update timezone data and management code to match PostgreSQL · ba5cfa93
      Committed by Daniel Gustafsson
      The timezone data in Greenplum is from the base version of
      PostgreSQL that the current version of Greenplum is based on.
      This causes issues since it means we are years behind on tz
      changes that have happened. This pulls in the timezone data
      and code from PostgreSQL 10.1 with as few changes to Greenplum
      as possible to minimize merge conflicts. The goal is to gain
      data rather than features, and to let Greenplum stay current
      with the iana tz database as it is imported into upstream
      PostgreSQL for each release.
      
      This removes a Greenplum specific test for the Yakutsk timezone
      as it was made obsolete by upstream tz commit 1ac038c2c3f25f72.
  3. 13 January 2018, 6 commits
    • Remove gpcrondump, gpdbrestore, and related files · 8922315e
      Committed by Jamie McAtamney
      We will not be supporting these utilities in GPDB 6.
      
      References to gpcrondump and gpdbrestore in the gpdb-doc directory have been left
      intact, as the documentation will be updated to refer to gpbackup and gprestore
      in a separate commit.
      
      Author: Jamie McAtamney <jmcatamney@pivotal.io>
    • Make newly-added gp_bloat_diag test less sensitive. · c91e320c
      Committed by Heikki Linnakangas
      The test was sensitive to the number of pages in the pg_rewrite system
      table's index, for no good reason. Also, don't create a new database for
      it, to speed it up.
    • Try to avoid race condition in test, when querying pg_partitions. · 3b74382b
      Committed by Heikki Linnakangas
      pg_partitions contains calls to the pg_get_expr() function. That function
      suffers from a race condition: If the relation is dropped between the
      get_rel_name() call, and another syscache lookup in pg_get_expr_worker(),
      you get a "relation not found" error. The error message is reasonable,
      and I don't see any easy fix for the pg_partitions view itself, so just
      try to avoid hitting that in the tests.
      
      For some reason we are hitting that frequently in this particular query.
      Change it to query pg_class instead; it doesn't use any of the more
      complicated fields from pg_partitions, anyway.
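      
      A sketch of the workaround (relation names are illustrative): instead of
      selecting from pg_partitions, count the child partitions directly from
      pg_class, which avoids the pg_get_expr() call entirely:
      
      ```
      -- GPDB names partition children <parent>_1_prt_<n>
      SELECT count(*) FROM pg_class WHERE relname LIKE 'mytable_1_prt_%';
      ```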
      
      I'm pushing this to the 'walreplication' branch first, because for some
      reason, we're seeing the failure there more often than on 'master'. If
      this fixes the problem, I'll push this to 'master', too.
    • Remove GUCs and fault injection points related to PT and filerep. · e8dc97d4
      Committed by Heikki Linnakangas
      These were left over when Persistent Tables and Filerep were removed.
    • Remove a lot of persistent table and mirroring stuff. · 5c158ff3
      Committed by Heikki Linnakangas
      * Revert almost all the changes in smgr.c / md.c, to not go through
        the Mirrored* APIs.
      
      * Remove mmxlog stuff. Use upstream "pending relation deletion" code
        instead.
      
      * Get rid of multiple startup passes. Now it's just a single pass like
        in the upstream.
      
      * Revert the way database drop/create are handled to the way it is in
        upstream. Doesn't use PT anymore, but accesses file system directly,
        and WAL-logs a single CREATE/DROP DATABASE WAL record.
      
      * Get rid of MirroredLock
      
      * Remove a few tests that were specific to persistent tables.
      
      * Plus a lot of little removals and reverts to upstream code.
  4. 12 January 2018, 1 commit
    • Fix Filter required properties for correlated subqueries in ORCA · 59abec44
      Committed by Shreedhar Hardikar
      This commit brings in ORCA changes that ensure that a Materialize node is not
      added under a Filter when its child contains outer references.  Otherwise, the
      subplan is not rescanned (because it is under a Material), producing wrong
      results. A rescan is necessary because it evaluates the subplan for each of the
      outer referenced values.
      
      For example:
      
      ```
      SELECT * FROM A,B WHERE EXISTS (
        SELECT * FROM E WHERE E.j = A.j and B.i NOT IN (
          SELECT E.i FROM E WHERE E.i != 10));
      ```
      
      For the above query ORCA produces a plan with two nested subplans:
      
      ```
      Result
        Filter: (SubPlan 2)
        ->  Gather Motion 3:1
              ->  Nested Loop
                    Join Filter: true
                    ->  Broadcast Motion 3:3
                          ->  Table Scan on a
                    ->  Table Scan on b
        SubPlan 2
          ->  Result
                Filter: public.c.j = $0
                ->  Materialize
                      ->  Result
                            Filter: (SubPlan 1)
                            ->  Materialize
                                  ->  Gather Motion 3:1
                                        ->  Table Scan on c
                            SubPlan 1
                              ->  Materialize
                                    ->  Gather Motion 3:1
                                          ->  Table Scan on c
                                                Filter: i <> 10
      ```
      
      The Materialize node (on top of Filter with Subplan 1) has cdb_strict = true.
      The cdb_strict semantics dictate that when the Materialize is rescanned,
      instead of destroying its tuplestore, it resets the accessor pointer to the
      beginning and the subtree is NOT rescanned.
      So the entries from the first scan are returned for all future calls; i.e. the
      results depend on the first row output by the cross join. This causes wrong and
      non-deterministic results.
      
      Also, this commit reinstates this test in qp_correlated_query.sql. It also
      fixes another wrong result caused by the same issue. Note that the changes in
      rangefuncs_optimizer.out are because ORCA no longer falls back for those
      queries. Instead it produces a plan which is executed on the master (instead
      of on the segments, as was done by the planner), which changes the error messages.
      
      Also bump ORCA version to 2.53.8.
      Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
  5. 10 January 2018, 1 commit
  6. 06 January 2018, 1 commit
  7. 05 January 2018, 2 commits
  8. 03 January 2018, 3 commits
    • Enable optimizer for tests with qp_olap_windowerr · 177a52fd
      Committed by Shreedhar Hardikar
      * Fix 4 (out of 64) windowerr tests that use row_number() and are hence non-deterministic
      
      * Fix remaining 58 (out of 64) tests.
      
      There was a difference in the results between planner and optimizer due
      to different values of row_number being assigned.
      
      row_number() is inherently non-deterministic in GPDB. For example, for
      the following query:
      
        select row_number() over (partition by b) from foo;
      
      Let's say that foo was not distributed by b. In this case, to compute
      the WindowAgg, we would first have to redistribute the table on b (or
      gather all the tuples on master). Thus, for rows having the same b
      value, the row_number assigned depends on the order in which they are
      received by the WindowAgg - which is non-deterministic.
      
      In qp_olap_windowerr.sql tests, we mitigate this by forcing an order on
      the ord column, which is unique in this context, making it easier to compare
      test results.
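      
      A minimal sketch of that mitigation, assuming foo has a unique ord
      column:
      
      ```
      select row_number() over (partition by b order by ord) from foo;
      ```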
      
      * Remove FIXME comment and enable optimizer_trace_fallback
      Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
    • Bump Orca version to 2.53.4 · 18ffb4c1
      Committed by Haisheng Yuan
    • Updates for non-deterministic last_value tests · 961bba21
      Committed by Ekta Khanna
      The new tests added as part of the postgres merge contain tests of
      window functions like last_value and nth_value without specifying an
      ORDER BY clause in the window function. Due to a Redistribute Motion
      getting added, the order is not deterministic without an explicit ORDER
      BY clause within the window function.
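      
      A hedged illustration (table and columns are hypothetical):
      
      ```
      -- non-deterministic: rows arrive in motion order
      SELECT last_value(x) OVER (PARTITION BY g) FROM t;
      -- deterministic: the window frame is ordered
      SELECT last_value(x) OVER (PARTITION BY g ORDER BY x) FROM t;
      ```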
      
      This commit updates such tests for the relevant changes in ORCA (https://github.com/greenplum-db/gporca/commit/855ba856fdc59e88923523f1f8b2ead32ae32364).
      Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
  9. 02 January 2018, 5 commits
    • Remove broken test queries. · 456f57e6
      Committed by Heikki Linnakangas
      The ao_ptotal() test function was broken a long time ago, in the 8.3 merge,
      by the removal of the implicit cast from text to integer. You just got an
      "operator does not exist: text > integer" error. However, the queries
      using the function were inside start/end_ignore blocks, so the failure
      went unnoticed.
      We have tests on tupcount elsewhere, in the uao_* tests, for example.
      Whether the table is partitioned or not doesn't seem very interesting. So
      just remove the test queries, rather than try to fix them. (I don't
      understand what the endianness issue mentioned in the comment might've
      been.)
      
      I kept the test on COPY with REJECT LIMIT on a partitioned table. I'm not
      sure how interesting that is either, but it wasn't broken. While at it,
      I reduced the number of partitions used, though, to shave off a few
      milliseconds from the test.
    • Fix test broken by commit e314acb1. · 01536f8d
      Committed by Heikki Linnakangas
      Commit e314acb1 changed the 'count_operator' helper function to include
      the EXPLAIN, so that it doesn't need to be given in the argument query
      anymore. But many of the calls of count_operator were not changed, and
      still contained EXPLAIN in the query, and as a result, they failed with
      'syntax error at or near "explain"'. These syntax errors were accidentally
      memorized in the expected output. Revert the expected output to what it
      was before, and remove the EXPLAIN from the queries instead.
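      
      A hypothetical before/after (the helper's exact signature is assumed):
      
      ```
      -- before: the caller supplied EXPLAIN itself
      -- SELECT count_operator('explain select * from t where a = 1;', 'Seq Scan');
      -- after commit e314acb1: pass the bare query
      SELECT count_operator('select * from t where a = 1;', 'Seq Scan');
      ```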
    • Remove uninteresting and duplicated tests on ALTER TABLE TEMPLATE syntax. · 32dd73d7
      Committed by Heikki Linnakangas
      We have these exact same tests twice, with and without schema-qualifying
      the table name. That's hardly a meaningful difference, when testing the
      grammar of the SUBPARTITION TEMPLATE part. Remove the duplicated tests.
      (I'm not convinced it's useful to have even a single copy of these tests,
      but keep for now.)
    • Remove duplicate test. · 9b838fab
      Committed by Heikki Linnakangas
      Both bfv_partition and partition_ddl had essentially the same test.
      Keep the copy in partition_ddl, and move the "alter table" commands that
      were only present in the bfv_partition copy there.
    • Remove a few uninteresting test queries. · c900f980
      Committed by Heikki Linnakangas
      These negative tests throw an error in the parse analysis phase already.
      Whether the target table is an AO or AOCO table is not interesting.
  10. 28 December 2017, 3 commits
    • Make btdexppages parameter to function gp_bloat_diag() numeric. · 152783e1
      Committed by Nadeem Ghani
      The gp_bloat_expected_pages.btdexppages column is numeric, but it was passed to
      the function gp_bloat_diag() as an integer in the definition of the view of the
      same name, gp_bloat_diag. This caused integer overflow errors when the number of
      expected pages exceeded the maximum integer limit, for columns with very large widths.
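      
      A minimal illustration of the failure mode:
      
      ```
      SELECT (2147483648::numeric)::int;  -- ERROR: integer out of range
      ```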
      
      This changes the function signature and call to use numeric for the
      btdexppages parameter.
      
      Add a simple test to mimic the customer issue.
      
      Author: Nadeem Ghani <nghani@pivotal.io>
      Author: Shoaib Lari <slari@pivotal.io>
    • Update ORCA version 2.53.2 · a808030f
      Committed by Bhuvnesh Chaudhary
      Test files are also updated in this commit, since we no longer generate
      a cross join alternative if an input join was present.
      
      cross join contains CScalarConst(1) as the join condition. if the
      input expression is as below with cross join at top level between
      CLogicalInnerJoin and CLogicalGet "t3"
      
      +--CLogicalInnerJoin
         |--CLogicalInnerJoin
         |  |--CLogicalGet "t1"
         |  |--CLogicalGet "t2"
         |  +--CScalarCmp (=)
         |     |--CScalarIdent "a" (0)
         |     +--CScalarIdent "b" (9)
         |--CLogicalGet "t3"
         +--CScalarConst (1)
      For the above expression, the (lower) predicate generated for the cross
      join between t1 and t3 will be CScalarConst (1). Only in such cases do we
      not generate the alternative with the lower join as a cross join, for
      example:
      
      +--CLogicalInnerJoin
         |--CLogicalInnerJoin
         |  |--CLogicalGet "t1"
         |  |--CLogicalGet "t3"
         |  +--CScalarConst (1)
         |--CLogicalGet "t2"
         +--CScalarCmp (=)
            |--CScalarIdent "a" (0)
            +--CScalarIdent "b" (9)
      Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
    • Fix bug in ALTER TABLE ADD COLUMN for AOCS · 7e7ee7e2
      Committed by Xin Zhang
      If the first insert into an AOCS table aborted, the first visible block in the
      block directory has a first row number greater than 1. By default, we initialize
      the `DatumStreamWriter` with `blockFirstRowNumber=1` for newly added columns.
      Hence, the first row numbers are not consistent between the visible blocks. This
      caused inconsistency between the base table scan and the scan using indexes
      through the block directory.
      
      This wrong-result issue only happened with the first invisible blocks. The
      current code (`aocs_addcol_endblock()` called in `ATAocsWriteNewColumns()`) already
      handles other gaps after the first visible blocks.
      
      The fix updates `blockFirstRowNumber` with `expectedFRN`, which fixes the
      misalignment of visible blocks.
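      
      A hedged reproduction sketch (table, values, and planner settings are
      hypothetical):
      
      ```
      CREATE TABLE t (a int)
        WITH (appendonly=true, orientation=column) DISTRIBUTED BY (a);
      BEGIN;
      INSERT INTO t VALUES (1);
      ABORT;                                     -- first blocks become invisible
      INSERT INTO t VALUES (2);
      ALTER TABLE t ADD COLUMN b int DEFAULT 0;  -- writer used blockFirstRowNumber=1
      CREATE INDEX t_a_idx ON t (a);
      SET enable_seqscan = off;
      SELECT * FROM t WHERE a = 2;  -- block-directory scan could disagree with a base table scan before the fix
      ```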
      
      Author: Xin Zhang <xzhang@pivotal.io>
      Author: Ashwin Agrawal <aagrawal@pivotal.io>
  11. 22 December 2017, 2 commits
    • Fix flaky percentile test · c3c5d87b
      Committed by sambitesh
      This query was added to "percentile" in commit b877cd43 as outer references
      were allowed after we backported ordered-set aggregates. But the query was
      using an ARRAY sublink, which semantically does not guarantee ordering. This
      caused sporadic failures in tests.
      
      This commit tweaks the test query so that it has a deterministic ordering in
      the output array.
      
      We considered just adding an ORDER BY to the subquery, but ultimately we chose
      to use `array_agg` with an `ORDER BY` because subquery order is not preserved
      per SQL standard.
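      
      A minimal sketch of the difference (table and column are hypothetical):
      
      ```
      SELECT ARRAY(SELECT x FROM t);          -- element order not guaranteed
      SELECT array_agg(x ORDER BY x) FROM t;  -- deterministic element order
      ```
      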
      Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
    • partition_pruning: don't drop public tables other tests need · 5d57a445
      Committed by Jacob Champion
      The portals_updatable test makes use of the public.bar table, which the
      partition_pruning test occasionally dropped. To fix, don't fall back to
      the public schema in the partition_pruning search path. Put the
      temporary functions in the partition_pruning schema as well for good
      measure.
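      
      A sketch of the isolation approach used (statements are illustrative):
      
      ```
      CREATE SCHEMA partition_pruning;
      SET search_path = partition_pruning;  -- no fallback to public
      ```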
      
      Author: Asim R P <apraveen@pivotal.io>
      Author: Jacob Champion <pchampion@pivotal.io>
  12. 21 December 2017, 2 commits
  13. 14 December 2017, 2 commits
    • Ignore undetermined output within cursor.sql · fc8a30e9
      Committed by Pengzhou Tang
      When declaring a cursor, the QD will not block until the QEs have gotten a
      snapshot and set up the interconnect, so we cannot guarantee that the QD
      always sees that 'qe_got_snapshot_and_interconnect' was triggered. For the
      case itself, if the writer QE can end up slower than the reader QE without
      the help of the fault injector, that's even better.
    • Fix internal_bpchar_pattern_compare compare logic to keep it the same as upstream · 35b53cbf
      Committed by Max Yang
      
      In upstream, internal_bpchar_pattern_compare compares its inputs ignoring
      trailing spaces, but GPDB just compared the whole string. The bug didn't
      appear because, before the PG_MERGE_84 merge, GPDB used a TableScan when
      executing the following query; after PG_MERGE_84, an IndexScan is used, and
      internal_bpchar_pattern_compare is used for the index:
      ```
      create table tbl(id int4, v char(10));
      create index tbl_v_idx_bpchar on tbl using btree(v bpchar_pattern_ops);
      insert into tbl values (1, 'abc');
      explain select * from tbl where v = 'abc '::char(20);
      select * from tbl where v = 'abc '::char(20);
      ```
      
      Author: Xiaoran Wang <xiwang@pivotal.io>
  14. 13 December 2017, 5 commits
  15. 09 December 2017, 3 commits
    • Print window name in 'window "%s" does not exist' correctly. · adad0a77
      Committed by Heikki Linnakangas
      This code has been modified in GPDB, to turn "OVER (w)" into the same as
      "OVER w". (Whether that's a good idea is another debate, but that's what we
      have now, for compatibility with GPDB 5 and below). But the error printing
      code was wrong, and printed 'window "(null)" does not exist', for the
      OVER (w) syntax, if 'w' was not a valid window name.
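      
      An illustration of the message before and after the fix (table is
      hypothetical):
      
      ```
      SELECT count(*) OVER (w) FROM t;
      -- before: ERROR: window "(null)" does not exist
      -- after:  ERROR: window "w" does not exist
      ```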
    • Clean up temp toast schema on backend exit. · 2619b329
      Committed by Heikki Linnakangas
      When a backend exits normally, the "pg_temp_<sessionid>" schema is dropped.
      In GPDB 5, with the 8.3 merge, there is now a "pg_temp_toast_<sessionid>"
      schema in addition to the temp schema, but it was not dropped. As a result,
      you would end up with a lot of unused pg_temp_toast_* schemas. To fix,
      also drop the temp toast schema at backend exit.
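      
      A query to spot such leftover schemas (based on the naming in this
      commit message):
      
      ```
      SELECT nspname FROM pg_namespace WHERE nspname LIKE 'pg_temp_toast_%';
      ```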
      
      We will still leak temp schemas, and temp toast schemas, if a backend exits
      abnormally, or if the server crashes. That's not a new issue, but we should
      probably do something about that in the future, too.
      
      Fixes github issue #4061. Backport to 5x_STABLE, where the toast temp
      namespaces were introduced.
    • Merge TIDBitmap API changes from upstream, and refactor · e8c3b293
      Committed by Jacob Champion
      Upstream commit 43a57cf3, which significantly changes the API for the
      HashBitmap (TIDBitmap in Postgres), is about to hit in an upcoming
      merge. This patch is a joint effort by myself, Max Yang, Xiaoran Wang,
      Heikki Linnakangas, and Daniel Gustafsson to reduce our diff against
      upstream and support the incoming API changes with our GPDB-specific
      customizations.
      
      The primary goal of this patch is to support concurrent iterations over
      a single StreamBitmap or TIDBitmap. GPDB has made significant changes to
      allow either one of those bitmap types to be iterated over without the
      caller necessarily needing to know which is which, and we've kept that
      property here.
      
      Here is the general list of changes:
      
      - Cherry-pick the following commit from upstream:
      
        commit 43a57cf3
        Author: Tom Lane <tgl@sss.pgh.pa.us>
        Date:   Sat Jan 10 21:08:36 2009 +0000
      
          Revise the TIDBitmap API to support multiple concurrent iterations over a
          bitmap.  This is extracted from Greg Stark's posix_fadvise patch; it seems
          worth committing separately, since it's potentially useful independently of
          posix_fadvise.
      
      - Revert as much as possible of the TIDBitmap API to the upstream
        version, to avoid unnecessary merge hazards in the future.
      
      - Add a tbm_generic_ version of the API to differentiate upstream's
        TIDBitmap-only API from ours. Both StreamBitmap and TIDBitmap can be
        passed to this version of the API.
      
      - Update each section of code to use the new generic API.
      
      - Fix up some memory management issues in bitmap.c that are now
        exacerbated by our changes.