1. 17 January 2018 (1 commit)
    • Enable GIN indexes. · 89d739b0
      Committed by Heikki Linnakangas
      A long time ago, they were disabled because the GIN code had not been
      modified to work with file replication. Now with WAL replication, there's
      no reason not to support them.
  2. 15 January 2018 (3 commits)
    • Reduce overhead in partition testing · 90a957eb
      Committed by Daniel Gustafsson
      Since ICW is a very long-running process, attempt to reduce time
      consumption by reducing overhead during testing while keeping the
      tests constant (coverage is not reduced).
      
      Avoid dropping/recreating underlying test tables when not required,
      reduce the number of partitions in some cases, and skip pointless
      drops of objects we know don't exist. In total this shaves about
      30-40 seconds off an ICW run on my local machine; mileage may vary.
    • Add test for partition exchange with external table · 629f8161
      Committed by Daniel Gustafsson
      Make sure that we are able to exchange in an external partition, and
      also ensure that truncation doesn't recurse into the external partition.
    • Update timezone data and management code to match PostgreSQL · ba5cfa93
      Committed by Daniel Gustafsson
      The timezone data in Greenplum comes from the base version of
      PostgreSQL that the current version of Greenplum is based on.
      This causes issues, since it means we are years behind on tz
      changes that have happened. This pulls in the timezone data
      and code from PostgreSQL 10.1 with as few changes to Greenplum
      as possible, to minimize merge conflicts. The goal is to gain
      data rather than features, and to let each Greenplum release
      stay current with the iana tz database as it is imported into
      upstream PostgreSQL.
      
      This removes a Greenplum specific test for the Yakutsk timezone
      as it was made obsolete by upstream tz commit 1ac038c2c3f25f72.
  3. 13 January 2018 (14 commits)
    • Remove gpcrondump, gpdbrestore, and related files · 8922315e
      Committed by Jamie McAtamney
      We will not be supporting these utilities in GPDB 6.
      
      References to gpcrondump and gpdbrestore in the gpdb-doc directory have been left
      intact, as the documentation will be updated to refer to gpbackup and gprestore
      in a separate commit.
      
      Author: Jamie McAtamney <jmcatamney@pivotal.io>
    • f70f49fe
    • Remove a few stray references to filespaces. · f6401bc5
      Committed by Heikki Linnakangas
    • a51df947
    • Fix 'tablespace' test to work with ORCA. · 32fbf1e0
      Committed by Heikki Linnakangas
      With ORCA, the CREATE TABLE AS used in the test created a randomly
      distributed table, while with the Postgres planner, it was distributed by
      the only column. A randomly distributed table cannot have an index, so you
      got an error with that. Fix by forcing the distribution.
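      A hedged sketch of the kind of fix described (table and column names here
      are illustrative, not taken from the actual test):

      ```
      -- Without an explicit clause, ORCA may pick DISTRIBUTED RANDOMLY for a
      -- CTAS, and the index creation below would then fail. Forcing the
      -- distribution key keeps both planners on the same plan shape.
      CREATE TABLE ts_test AS
          SELECT i AS id FROM generate_series(1, 100) i
          DISTRIBUTED BY (id);

      CREATE UNIQUE INDEX ts_test_idx ON ts_test (id);
      ```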
    • Embed dbid in tablespace path, to avoid clash between servers on same host. · 72e20655
      Committed by Heikki Linnakangas
      This is a backport of upstream commit 22817041 from PostgreSQL 9.0, which
      added the server version number to the path. In GPDB, we also include the
      gp_dbid in the path. This makes it possible to use the same tablespace path
      on multiple servers running on the same host, without clashing.
      
      Also includes cherry-picks of the small upstream cleanup commits 5c82ccb1,
      a6f56efc, and c282b36d.
      
      Re-enable upstream 'tablespace' regression test. It works now, even when
      all the nodes are running on same host.
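      An illustrative layout under a shared tablespace location (the exact
      directory naming scheme here is an assumption, not quoted from the commit):

      ```
      -- Two segments on the same host can point at the same location, because
      -- the per-server gp_dbid keeps their files apart, e.g. (hypothetically):
      --   /data/tblspc/<version_dir>/<gp_dbid=2>/...
      --   /data/tblspc/<version_dir>/<gp_dbid=3>/...
      CREATE TABLESPACE shared_spc LOCATION '/data/tblspc';
      ```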
    • Remove filespaces. · 5a3a39bc
      Committed by Heikki Linnakangas
      Remove the concept of filespaces, revert tablespaces to work the same as
      in upstream.
      
      There are some leftovers in the management tools. I don't know how to test
      all of that, and I was afraid of touching things I can't run. Also, we may
      need to create replacements for some of those things on top of tablespaces,
      to make the management of tablespaces easier, and it might be easier to
      modify the existing tools than to write them from scratch. (Yeah, you could
      always look at the git history, but still.)
      
      Per the discussion on gpdb-dev mailing list, the plan is to cherry-pick
      commit 16d8e594 from PostgreSQL 9.2, to make it possible to have a
      different path for a tablespace in the primary and its mirror. But that's
      not included in this commit yet.
      
      TODO: Make temp_tablespaces work.
      TODO: Make pg_dump do something sensible, when dumping from a GPDB 5 cluster
      that uses filespaces. Same with pg_upgrade.
      
      Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/sON4lraPEqg/v3lkM587BAAJ
    • Make newly-added gp_bloat_diag test less sensitive. · c91e320c
      Committed by Heikki Linnakangas
      The test was sensitive to the number of pages in the pg_rewrite system
      table's index, for no good reason. Also, don't create a new database for
      it, to speed it up.
    • Start running segspace test for wal replication. · acec2c16
      Committed by Ashwin Agrawal
      Now that gpstop/gpstart work for wal replication, remove segspace from
      --exclude-tests. filespace is the only test remaining in the --exclude-tests
      list, and it should go away soon as well.
    • Try to avoid race condition in test, when querying pg_partitions. · 3b74382b
      Committed by Heikki Linnakangas
      pg_partitions contains calls to the pg_get_expr() function. That function
      suffers from a race condition: If the relation is dropped between the
      get_rel_name() call, and another syscache lookup in pg_get_expr_worker(),
      you get a "relation not found" error. The error message is reasonable,
      and I don't see any easy fix for the pg_partitions view itself, so just
      try to avoid hitting that in the tests.
      
      For some reason we are hitting that frequently in this particular query.
      Change it to query pg_class instead; it doesn't use any of the more
      complicated fields from pg_partitions, anyway.
      
      I'm pushing this to the 'walreplication' branch first, because for some
      reason, we're seeing the failure there more often than on 'master'. If
      this fixes the problem, I'll push this to 'master', too.
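      A hedged sketch of the kind of query change described (the actual test
      query differs; table names are illustrative):

      ```
      -- Racy: pg_partitions calls pg_get_expr(), which can error out if a
      -- relation is dropped concurrently.
      --   SELECT partitiontablename FROM pg_partitions
      --   WHERE tablename = 'my_part_table';

      -- Safer: fetch the child relations straight from pg_class/pg_inherits.
      SELECT c.relname
      FROM pg_class c
      JOIN pg_inherits i ON i.inhrelid = c.oid
      JOIN pg_class p ON p.oid = i.inhparent
      WHERE p.relname = 'my_part_table';
      ```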
    • Remove GUCs and fault injection points related to PT and filerep. · e8dc97d4
      Committed by Heikki Linnakangas
      These were left over when Persistent Tables and Filerep were removed.
    • Also remove references to enable_segwalrep in Makefiles. · 1697a640
      Committed by Heikki Linnakangas
      I removed the autoconf flag and #ifdefs earlier, but missed these.
    • Remove a lot of persistent table and mirroring stuff. · 5c158ff3
      Committed by Heikki Linnakangas
      * Revert almost all the changes in smgr.c / md.c, to not go through
        the Mirrored* APIs.
      
      * Remove mmxlog stuff. Use upstream "pending relation deletion" code
        instead.
      
      * Get rid of multiple startup passes. Now it's just a single pass like
        in the upstream.
      
      * Revert the way database drop/create are handled to the way it is in
        upstream. Doesn't use PT anymore, but accesses file system directly,
        and WAL-logs a single CREATE/DROP DATABASE WAL record.
      
      * Get rid of MirroredLock
      
      * Remove a few tests that were specific to persistent tables.
      
      * Plus a lot of little removals and reverts to upstream code.
  4. 12 January 2018 (1 commit)
    • Fix Filter required properties for correlated subqueries in ORCA · 59abec44
      Committed by Shreedhar Hardikar
      This commit brings in ORCA changes that ensure that a Materialize node is not
      added under a Filter when its child contains outer references. Otherwise, the
      subplan is not rescanned (because it is under a Material), producing wrong
      results. A rescan is necessary because it evaluates the subplan for each of the
      outer-referenced values.
      
      For example:
      
      ```
      SELECT * FROM A,B WHERE EXISTS (
        SELECT * FROM E WHERE E.j = A.j and B.i NOT IN (
          SELECT E.i FROM E WHERE E.i != 10));
      ```
      
      For the above query ORCA produces a plan with two nested subplans:
      
      ```
      Result
        Filter: (SubPlan 2)
        ->  Gather Motion 3:1
              ->  Nested Loop
                    Join Filter: true
                    ->  Broadcast Motion 3:3
                          ->  Table Scan on a
                    ->  Table Scan on b
        SubPlan 2
          ->  Result
                Filter: public.c.j = $0
                ->  Materialize
                      ->  Result
                            Filter: (SubPlan 1)
                            ->  Materialize
                                  ->  Gather Motion 3:1
                                        ->  Table Scan on c
                            SubPlan 1
                              ->  Materialize
                                    ->  Gather Motion 3:1
                                          ->  Table Scan on c
                                                Filter: i <> 10
      ```
      
      The Materialize node (on top of Filter with Subplan 1) has cdb_strict = true.
      The cdb_strict semantics dictate that when the Materialize is rescanned,
      instead of destroying its tuplestore, it resets the accessor pointer to the
      beginning and the subtree is NOT rescanned.
      So the entries from the first scan are returned for all future calls; i.e. the
      results depend on the first row output by the cross join. This causes wrong and
      non-deterministic results.
      
      Also, this commit reinstates this test in qp_correlated_query.sql and fixes
      another wrong result caused by the same issue. Note that the changes in
      rangefuncs_optimizer.out are because ORCA no longer falls back for those
      queries. Instead it produces a plan which is executed on the master (instead
      of on the segments, as the planner did), which changes the error messages.
      
      Also bump ORCA version to 2.53.8.
      Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
  5. 10 January 2018 (2 commits)
  6. 06 January 2018 (1 commit)
  7. 05 January 2018 (3 commits)
    • e8dc3ea4
    • Reinstate dropping schema for gporca test suite · db1ecd3c
      Committed by Jesse Zhang
      Partition tables hard-code the operator '=' lookup to namespace
      'pg_catalog', which means that in this test we had to put our
      user-defined operator into that special system namespace. This works
      fine, until we try to pg_dump the resulting database: pg_catalog is not
      dumped by default. That led to an incomplete dump that will fail to
      restore.
      
      This commit reinstates the dropping of the schema at the end of `gporca`
      test to get the pipelines back to green (broken after c7ab6924).
      Backpatch to 5X_STABLE.
    • set search_path and stop dropping schema in gporca test · c7ab6924
      Committed by Jesse Zhang
      The `gporca` regression test suite uses a schema but doesn't really
      switch `search_path` to the schema that's meant to encapsulate most of
      the objects it uses. This has led to multiple instances where we:
        1. Either used a table from another namespace by accident;
        2. Or leaked objects into the public namespace that other tests in
        turn accidentally depended on.
      
      As we were about to add a few user-defined types and casts to the test
      suite, we want to (at last) ensure that all future additions are scoped
      to the namespace.
      Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
      
      Closes #4238
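      A minimal sketch of the scoping pattern described (the schema name comes
      from the test suite; the table is illustrative):

      ```
      CREATE SCHEMA gporca;
      SET search_path TO gporca;

      -- Objects created from here on are scoped to the gporca schema, so they
      -- can neither shadow nor leak into the public namespace by accident.
      CREATE TABLE foo (a int, b int) DISTRIBUTED BY (a);
      ```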
  8. 04 January 2018 (1 commit)
    • atmsort: try to find the end of \d tables correctly · d89fef52
      Committed by Jacob Champion
      Several \d variants don't put a row count at the end of their tables,
      which means that atmsort doesn't stop sorting output until it finds a
      row count somewhere later. Some tests are having their diff output
      horribly mangled because of this, which makes debugging difficult.
      
      When we see a \d command, try to apply more heuristics for finding the
      end of a table. In addition to ending at a row count, end table sorting
      whenever we find a line that doesn't have the same number of column
      separators as the table header. If we don't have a table header, still
      attempt to end table sorting at a blank line.
      
      extprotocol's test output order must be fixed as a result. Put the
      "External options" line where psql actually prints it, after "Format
      options".
  9. 03 January 2018 (3 commits)
    • Enable optimizer for tests with qp_olap_windowerr · 177a52fd
      Committed by Shreedhar Hardikar
      * Fix 4 (out of 64) windowerr tests whose use of row_number() made them
        non-deterministic.

      * Fix the remaining 58 (out of 64) tests.
      
      There was a difference in the results between the planner and the optimizer,
      due to different row_number values being assigned.
      
      row_number() is inherently non-deterministic in GPDB. For example, for
      the following query:
      
        select row_number() over (partition by b) from foo;
      
      Let's say that foo was not distributed by b. In this case, to compute
      the WindowAgg, we would first have to redistribute the table on b (or
      gather all the tuples on master). Thus, for rows having the same b
      value, the row_number assigned depends on the order in which they are
      received by the WindowAgg - which is non-deterministic.
      
      In the qp_olap_windowerr.sql tests, we mitigate this by forcing an order on
      the ord column, which is unique in this context, making it easier to compare
      test results.
      
      * Remove FIXME comment and enable optimizer_trace_fallback
      Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
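      A hedged illustration of the mitigation (foo and ord are names taken from
      the message above; the real test queries differ):

      ```
      -- Non-deterministic: for equal values of b, the assigned row_number
      -- depends on the order tuples reach the WindowAgg after redistribution.
      SELECT row_number() OVER (PARTITION BY b) FROM foo;

      -- Deterministic: ordering on a unique column pins each row's number.
      SELECT row_number() OVER (PARTITION BY b ORDER BY ord) FROM foo;
      ```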
    • Bump Orca version to 2.53.4 · 18ffb4c1
      Committed by Haisheng Yuan
    • Updates for non-deterministic last_value tests · 961bba21
      Committed by Ekta Khanna
      The new tests added as part of the postgres merge contain tests of
      window functions like last_value and nth_value without specifying an
      ORDER BY clause in the window function. Due to a Redistribute Motion
      getting added, the order is not deterministic without an explicit ORDER
      BY clause within the window function.

      This commit updates such tests for the relevant changes in ORCA (https://github.com/greenplum-db/gporca/commit/855ba856fdc59e88923523f1f8b2ead32ae32364).
      Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
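      A hedged example of the pattern being fixed (table and column names are
      illustrative):

      ```
      -- Without ORDER BY, last_value sees rows in whatever order the
      -- Redistribute Motion delivers them, so results can vary between runs.
      SELECT last_value(v) OVER (PARTITION BY k) FROM t;

      -- Stable variant: an explicit ORDER BY plus a full frame makes the
      -- answer well-defined.
      SELECT last_value(v) OVER (PARTITION BY k ORDER BY v
                                 ROWS BETWEEN UNBOUNDED PRECEDING
                                          AND UNBOUNDED FOLLOWING)
      FROM t;
      ```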
  10. 02 January 2018 (5 commits)
    • Remove broken test queries. · 456f57e6
      Committed by Heikki Linnakangas
      The ao_ptotal() test function was broken a long time ago, in the 8.3 merge,
      by the removal of the implicit cast from text to integer. You just got an
      "operator does not exist: text > integer" error. However, the queries
      using the function were inside start/end_ignore blocks, so the failure
      went unnoticed.

      We have tests on tupcount elsewhere, in the uao_* tests, for example.
      Whether the table is partitioned or not doesn't seem very interesting. So
      just remove the test queries, rather than try to fix them. (I don't
      understand what the endianness issue mentioned in the comment might've
      been.)
      
      I kept the test on COPY with REJECT LIMIT on a partitioned table. I'm not
      sure how interesting that is either, but it wasn't broken. While at it,
      I reduced the number of partitions used, though, to shave off a few
      milliseconds from the test.
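      For reference, the class of failure described can be reproduced directly
      (a hedged sketch; the real ao_ptotal() body differs):

      ```
      -- Since the 8.3 merge there is no implicit text-to-integer cast, so a
      -- comparison like the one inside the helper fails outright:
      SELECT 'some_text'::text > 1;
      -- ERROR:  operator does not exist: text > integer
      ```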
    • Fix test broken by commit e314acb1. · 01536f8d
      Committed by Heikki Linnakangas
      Commit e314acb1 changed the 'count_operator' helper function to include
      the EXPLAIN, so that it doesn't need to be given in the argument query
      anymore. But many of the calls of count_operator were not changed, and
      still contained EXPLAIN in the query, and as a result, they failed with
      'syntax error at or near "explain"'. These syntax errors were accidentally
      memorized in the expected output. Revert the expected output to what it
      was before, and remove the EXPLAIN from the queries instead.
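      A hedged sketch of the calling-convention change (the helper's exact
      signature is an assumption made for illustration):

      ```
      -- After e314acb1 the helper runs EXPLAIN itself, so callers pass a bare
      -- query:
      SELECT count_operator('select * from t1 where a = 1;', 'Seq Scan');

      -- Call sites that still wrapped the query in EXPLAIN effectively ran
      -- "EXPLAIN EXPLAIN select ...", hence the
      -- 'syntax error at or near "explain"' failures.
      ```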
    • Remove uninteresting and duplicated tests on ALTER TABLE TEMPLATE syntax. · 32dd73d7
      Committed by Heikki Linnakangas
      We have these exact same tests twice, with and without schema-qualifying
      the table name. That's hardly a meaningful difference when testing the
      grammar of the SUBPARTITION TEMPLATE part. Remove the duplicated tests.
      (I'm not convinced it's useful to have even a single copy of these tests,
      but keep them for now.)
    • Remove duplicate test. · 9b838fab
      Committed by Heikki Linnakangas
      Both bfv_partition and partition_ddl had essentially the same test.
      Keep the copy in partition_ddl, and move the "alter table" commands that
      were only present in the bfv_partition copy there.
    • Remove a few uninteresting test queries. · c900f980
      Committed by Heikki Linnakangas
      These negative tests throw an error in the parse analysis phase already.
      Whether the target table is an AO or AOCO table is not interesting.
  11. 28 December 2017 (3 commits)
    • Make btdexppages parameter to function gp_bloat_diag() numeric. · 152783e1
      Committed by Nadeem Ghani
      The gp_bloat_expected_pages.btdexppages column is numeric, but it was passed to
      the function gp_bloat_diag() as an integer in the definition of the view of the
      same name, gp_bloat_diag. This caused integer overflow errors when the number of
      expected pages exceeded the max integer limit for columns with very large widths.
      
      This changes the function signature and call to use numeric for the
      btdexppages parameter.

      Also adds a simple test to mimic the customer issue.
      
      Author: Nadeem Ghani <nghani@pivotal.io>
      Author: Shoaib Lari <slari@pivotal.io>
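      A quick illustration of the overflow being avoided (values are examples):

      ```
      -- Casting a very large expected-pages value down to integer overflows:
      SELECT 3000000000::numeric::integer;
      -- ERROR:  integer out of range

      -- Keeping the value numeric, as the view now does, is safe:
      SELECT 3000000000::numeric;
      ```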
    • Update ORCA version 2.53.2 · a808030f
      Committed by Bhuvnesh Chaudhary
      Test files are also updated in this commit, as we no longer generate a
      cross join alternative if an input join was present.

      A cross join contains CScalarConst(1) as the join condition. If the
      input expression is as below, with a cross join at the top level between
      the CLogicalInnerJoin and CLogicalGet "t3":
      
      +--CLogicalInnerJoin
         |--CLogicalInnerJoin
         |  |--CLogicalGet "t1"
         |  |--CLogicalGet "t2"
         |  +--CScalarCmp (=)
         |     |--CScalarIdent "a" (0)
         |     +--CScalarIdent "b" (9)
         |--CLogicalGet "t3"
         +--CScalarConst (1)
      For the above expression, the (lower) predicate generated for the cross join
      between t1 and t3 would be CScalarConst (1). Only in such cases do we not
      generate the alternative with the lower join as a cross join. For example:
      
      +--CLogicalInnerJoin
         |--CLogicalInnerJoin
         |  |--CLogicalGet "t1"
         |  |--CLogicalGet "t3"
         |  +--CScalarConst (1)
         |--CLogicalGet "t2"
         +--CScalarCmp (=)
            |--CScalarIdent "a" (0)
            +--CScalarIdent "b" (9)
      Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
    • Fix bug in ALTER TABLE ADD COLUMN for AOCS · 7e7ee7e2
      Committed by Xin Zhang
      If the first insert into an AOCS table aborted, the visible blocks in the block
      directory should start from a row number greater than 1. By default, we
      initialize the `DatumStreamWriter` with `blockFirstRowNumber=1` for newly added
      columns. Hence, the first row numbers are not consistent between the visible
      blocks. This caused inconsistency between the base table scan and the scan
      using indexes through the block directory.

      This wrong-result issue only happened with the first invisible blocks. The
      current code (`aocs_addcol_endblock()` called in `ATAocsWriteNewColumns()`) already
      handles other gaps after the first visible blocks.

      The fix updates `blockFirstRowNumber` to `expectedFRN`, and hence fixes the
      misalignment of the visible blocks.
      
      Author: Xin Zhang <xzhang@pivotal.io>
      Author: Ashwin Agrawal <aagrawal@pivotal.io>
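      A hedged repro sketch of the scenario described (names are illustrative
      and the actual regression test may differ):

      ```
      CREATE TABLE aocs_t (a int, b int)
          WITH (appendonly = true, orientation = column)
          DISTRIBUTED BY (a);

      BEGIN;
      INSERT INTO aocs_t SELECT i, i FROM generate_series(1, 1000) i;
      ABORT;                              -- first insert leaves invisible blocks

      INSERT INTO aocs_t SELECT i, i FROM generate_series(1, 1000) i;
      ALTER TABLE aocs_t ADD COLUMN c int DEFAULT 0;  -- writes blocks for c

      -- Before the fix, a block-directory (index) scan could disagree with a
      -- sequential scan of the base table:
      CREATE INDEX aocs_t_idx ON aocs_t (b);
      SELECT count(*) FROM aocs_t WHERE b = 42;
      ```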
  12. 22 December 2017 (2 commits)
    • Fix flakey percentile test · c3c5d87b
      Committed by sambitesh
      This query was added to "percentile" in commit b877cd43 as outer references
      were allowed after we backported ordered-set aggregates. But the query was
      using an ARRAY sublink, which semantically does not guarantee ordering. This
      caused sporadic test failures.
      
      This commit tweaks the test query so that it has a deterministic ordering in
      the output array.
      
      We considered just adding an ORDER BY to the subquery, but ultimately we chose
      to use `array_agg` with an `ORDER BY` because subquery order is not preserved
      per SQL standard.
      Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
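      A hedged before/after sketch of the change (names are illustrative):

      ```
      -- Flaky: the element order of an ARRAY sublink is not guaranteed, so the
      -- output array can differ between runs.
      SELECT ARRAY(SELECT x FROM t);

      -- Deterministic: array_agg with an explicit ORDER BY pins the order.
      SELECT array_agg(x ORDER BY x) FROM t;
      ```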
    • partition_pruning: don't drop public tables other tests need · 5d57a445
      Committed by Jacob Champion
      The portals_updatable test makes use of the public.bar table, which the
      partition_pruning test occasionally dropped. To fix, don't fall back to
      the public schema in the partition_pruning search path. Put the
      temporary functions in the partition_pruning schema as well for good
      measure.
      
      Author: Asim R P <apraveen@pivotal.io>
      Author: Jacob Champion <pchampion@pivotal.io>
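      A minimal sketch of the isolation pattern (the schema name comes from the
      commit; the table is illustrative):

      ```
      CREATE SCHEMA partition_pruning;
      -- No fallback to public in the search path, so test-local drops cannot
      -- touch tables like public.bar that other suites rely on.
      SET search_path TO partition_pruning;

      CREATE TABLE bar (a int) DISTRIBUTED BY (a);  -- partition_pruning.bar
      DROP TABLE bar;                               -- public.bar is untouched
      ```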
  13. 21 December 2017 (1 commit)