1. 05 Apr 2017 (6 commits)
  2. 04 Apr 2017 (3 commits)
    • D
      Fix various typos in comments and docs · 1878fa73
      Committed by Daniel Gustafsson
      [ci skip]
      1878fa73
    • D
      Revert "Remap transient typmods on receivers instead of on senders." · b1140e54
      Committed by Dhanashree Kashid and Jesse Zhang
      This reverts commit ab4398dd.
      [#142986717]
      b1140e54
    • D
      Explicitly hash-distribute in CTAS [#142986717] · 828b99c4
      Committed by Dhanashree Kashid and Jesse Zhang
      Similar to ea818f0e, we remove the
      sensitivity to segment count in test `dml_oids_delete`. Without this
      change, the test was passing for the wrong reason:
      
      0. The table `dml_heap_r` was set up with 3 tuples, whose values in the
      distribution column `a` are 1, 2, NULL respectively. On a 2-segment
      system, the 1-tuple and 2-tuple are on distinct segments, and because of
      a quirk of our local OID counter synchronization, they will get the same
      oids.
      
      0. The table `tempoid` will be distributed randomly under ORCA, with
      tuples copied from `dml_heap_r`
      
      0. The intent of the final assertion is checking that the OIDs are not
      changed by the DELETE. Hidden in it is the assumption that the tuples
      stay on the same segments as in the source table.
      
      0. However, the compounding effect of that "same oid" with a randomly
      distributed `tempoid` will lead to a passing test when we have two
      segments!
      
      This commit fixes that. So this test will pass for the right reason, and
      also on any segment count.
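      A minimal sketch of the kind of change described, assuming the names
      from the message (`dml_heap_r` distributed by column `a`); the actual
      test edit may differ:

      ```sql
      -- Hypothetical sketch: pin the intermediate table's distribution
      -- instead of letting ORCA distribute it randomly, so the OID/segment
      -- pairing no longer depends on the segment count.
      CREATE TABLE tempoid AS
          SELECT oid, a FROM dml_heap_r
          DISTRIBUTED BY (a);  -- same distribution key as dml_heap_r
      ```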
      828b99c4
  3. 03 Apr 2017 (4 commits)
    • D
      Remove unused BugBuster leftovers · 7dbaace6
      Committed by Daniel Gustafsson
      With the last remaining testsuites moved over to ICW, there is no
      longer anything left running in BugBuster. Remove the remaining
      files and BugBuster makefile integration in one big swing of the
      git rm axe. The only thing left in use was a data file which was
      referenced from ICW, move this to regress/data instead.
      7dbaace6
    • D
      Fix typo and spelling in memory_quota util · c76b1c4b
      Committed by Daniel Gustafsson
      c76b1c4b
    • D
      Move BugBuster memory_quota test to ICW · 6cc722e0
      Committed by Daniel Gustafsson
      This moves the memory_quota tests more or less unchanged to ICW.
      Changes include removing ignore sections and minor formatting as
      well as a rename to bb_memory_quota.
      6cc722e0
    • D
      Migrate BugBuster mpph tests to ICW · 42b33d42
      Committed by Daniel Gustafsson
      This combines the various mpph tests in BugBuster into a single
      new ICW suite, bb_mpph. Most of the existing queries were moved
      over with a few pruned that were too uninteresting, or covered
      elsewhere.
      
      The BugBuster tests combined are: load_mpph, mpph_query,
      mpph_aopart, hashagg and opperf.
      42b33d42
  4. 01 Apr 2017 (13 commits)
    • P
      Remap transient typmods on receivers instead of on senders. · ab4398dd
      Committed by Pengzhou Tang
      The QD used to send a transient types table to the QEs, and the QEs
      would remap the tuples with this table before sending them to the QD.
      However, in complex queries the QD can't discover all the transient
      types, so tuples can't be correctly remapped on the QEs. For example:
      
          SELECT q FROM (SELECT MAX(f1) FROM int4_tbl
                         GROUP BY f1 ORDER BY f1) q;
          ERROR:  record type has not been registered
      
      To fix this issue we changed the underlying logic: instead of sending
      the possibly incomplete transient types table from QD to QEs, we now
      send the tables from motion senders to motion receivers and do the remap
      on receivers. Receivers maintain a remap table for each motion so tuples
      from different senders can be remapped accordingly. This way, queries
      containing multiple slices can also handle transient record types
      correctly between two QEs.
      
      The remap logic is derived from executor/tqueue.c in upstream
      postgres. There is support for composite/record types and arrays as
      well as range types; however, as range types are not yet supported in
      GPDB, that logic is placed under a conditional compilation macro. In
      theory it will be enabled automatically once range types are
      supported in GPDB.
      
      One side effect of this approach is a performance penalty on
      receivers, as the remap requires recursive checks on each tuple of
      record types. However, an optimization keeps this side effect
      minimal for non-record types.
      
      The old logic of building the transient types table on the QD and
      sending it to the QEs is retired.
      Signed-off-by: Gang Xiong <gxiong@pivotal.io>
      Signed-off-by: Ning Yu <nyu@pivotal.io>
      ab4398dd
    • G
      Remove the record comparison functions in d9148a54. · 45e7669e
      Committed by Gang Xiong
      Commit d9148a54 enabled record arrays as well as comparison of record
      types. However, the OIDs of the comparison functions/operators used
      in upstream postgres are already taken by others in GPDB, and many
      test cases assume that comparing record types fails. As we don't
      actually need this comparison feature in GPDB at the moment, we
      simply remove these functions for now.
      Signed-off-by: Ning Yu <nyu@pivotal.io>
      45e7669e
    • T
      Implement comparison of generic records (composite types), and invent a... · 02335757
      Committed by Tom Lane
      Implement comparison of generic records (composite types), and invent a pseudo-type record[] to represent arrays of possibly-anonymous composite types. Since composite datums carry their own type identification, no extra knowledge is needed at the array level.
      
      The main reason for doing this right now is that it is necessary to support
      the general case of detection of cycles in recursive queries: if you need to
      compare more than one column to detect a cycle, you need to compare a ROW()
      to an array built from ROW()s, at least if you want to do it as the spec
      suggests.  Add some documentation and regression tests concerning the cycle
      detection issue.
      02335757
    • H
      Use PartitionSelectors for partition elimination, even without ORCA. · e378d84b
      Committed by Heikki Linnakangas
      The old mechanism was to scan the complete plan, searching for a pattern
      with a Join, where the outer side included an Append node. The inner
      side was duplicated into an InitPlan, with the pg_partition_oid aggregate
      to collect the Oids of all the partitions that can match. That was
      inefficient and broken: if the duplicated plan was volatile, you might
      choose wrong partitions. And scanning the inner side twice can obviously
      be slow, if there are a lot of tuples.
      
      Rewrite the way such plans are generated. Instead of using an InitPlan,
      inject a PartitionSelector node into the inner side of the join.
      
      Fixes github issues #2100 and #2116.
      e378d84b
    • H
      Rewrite kerberos tests (#2087) · 2415aff4
      Committed by Heikki Linnakangas
      * Rewrite Kerberos test suite
      
      * Remove obsolete Kerberos test stuff from pipeline and TINC
      
      We now have a rewritten Kerberos test script in installcheck-world.
      
      * Update ICW kerberos test to run on concourse container
      
      This adds a whole new test script in src/test/regress, implemented in plain bash. It sets up a temporary KDC as part of the script, and therefore doesn't rely on a pre-existing Kerberos server, like the old MU_kerberos-smoke test job did. It does require MIT Kerberos server-side utilities to be installed, but no server needs to be running, and no superuser privileges are required.
      
      This supersedes the MU_kerberos-smoke behave tests. The new rewritten bash script tests the same things:
        1. You cannot connect to the server before running 'kinit' (to verify that the server doesn't just let anyone in, which could happen if the pg_hba.conf is misconfigured for the test, for example)
        2. You can connect, after running 'kinit'
        3. You can no longer connect, if the user account is expired
      
      The new test script is hooked up to the top-level installcheck-world target.
      
      There were also some Kerberos-related tests in TINC. Remove all that, too. They didn't seem interesting in the first place; they look like they were just copies of a few random other tests, intended to be run as a smoke test after a connection had been authenticated with Kerberos, but there was nothing in there to actually set up the Kerberos environment in TINC.
      
      Other minor patches added:
      
      * Remove absolute path when calling kerberos utilities
      -- assume they are on path, so that they can be accessed from various installs
      -- add clarification message if sample kerberos utility is not found with 'which'
      
      * Specify empty load library for kerberos tools
      
      * Move kerberos test to its own script file
      -- this allows a failure to be recorded without exiting Make, and
      therefore the server can be turned off always
      
      * Add trap for stopping kerberos server in all cases
      * Use localhost for kerberos connection
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
      Signed-off-by: Chumki Roy <croy@pivotal.io>
      Signed-off-by: Larry Hamel <lhamel@pivotal.io>
      2415aff4
    • H
      Fix error message, if EXCHANGE PARTITION with multiple constraints fails. · 30400ddc
      Committed by Heikki Linnakangas
      The loop to print each constraint's name was broken: it printed the name of
      the first constraint multiple times. Add a test case, as a matter of principle.
      
      In passing, change the set of tests around this error to all use the
      same partitioned table, rather than drop and recreate it for each command.
      And reduce the number of partitions from 10 to 5. Shaves some milliseconds
      from the time to run the test.
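      A hypothetical reproduction of the scenario (table and column names
      are illustrative, not taken from the commit):

      ```sql
      -- Exchange in a table whose rows violate the target part's
      -- constraints; with the fix, each violated constraint is reported
      -- by its own name instead of the first name being repeated.
      CREATE TABLE pt (a int, b int)
          DISTRIBUTED BY (a)
          PARTITION BY RANGE (b) (START (0) END (50) EVERY (10));
      CREATE TABLE candidate (a int, b int) DISTRIBUTED BY (a);
      INSERT INTO candidate VALUES (1, 999);  -- outside the part's bounds
      ALTER TABLE pt EXCHANGE PARTITION FOR (5) WITH TABLE candidate;
      ```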
      30400ddc
    • J
      Set max_stack_depth explicitly in subtransaction_limit ICG test · a5e26310
      Committed by Jingyi Mei
      This comes from the 4.3_STABLE repo.
      Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
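      Presumably the test now pins the GUC up front, along the lines of
      this sketch (the exact value is an assumption):

      ```sql
      -- Make the reachable subtransaction depth deterministic rather than
      -- inheriting whatever max_stack_depth the environment defaults to.
      SET max_stack_depth = '2MB';
      ```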
      a5e26310
    • F
      Rule based partition selection for list (sub)partitions (#2076) · 5cecfcd1
      Committed by foyzur
      GPDB supports range and list partitions. Range partitions are represented as a set of rules. Each rule defines the boundaries of a part. E.g., a rule might say that a part contains all values in (0, 5], where the left bound 0 is exclusive and the right bound 5 is inclusive. List partitions are defined by a list of values that the part will contain.
      
      ORCA uses the above rule definition to generate expressions that determine which partitions need to be scanned. These expressions are of the following types:
      
      1. Equality predicate as in PartitionSelectorState->levelEqExpressions: If we have a simple equality on partitioning key (e.g., part_key = 1).
      
      2. General predicate as in PartitionSelectorState->levelExpressions: If we need more complex composition, including non-equality such as part_key > 1.
      
      Note:  We also have residual predicate, which the optimizer currently doesn't use. We are planning to remove this dead code soon.
      
      Prior to this PR, ORCA was treating both range and list partitions as range partitions. This meant that each list part would be converted to a set of list values and each of these values would become a single-point range partition.
      
      E.g., consider the DDL:
      
      ```sql
      CREATE TABLE DATE_PARTS (id int, year int, month int, day int, region text)
      DISTRIBUTED BY (id)
      PARTITION BY RANGE (year)
          SUBPARTITION BY LIST (month)
             SUBPARTITION TEMPLATE (
              SUBPARTITION Q1 VALUES (1, 2, 3), 
              SUBPARTITION Q2 VALUES (4 ,5 ,6),
              SUBPARTITION Q3 VALUES (7, 8, 9),
              SUBPARTITION Q4 VALUES (10, 11, 12),
              DEFAULT SUBPARTITION other_months )
      ( START (2002) END (2012) EVERY (1), 
        DEFAULT PARTITION outlying_years );
      ```
      
      Here we partition the months as a list partition using quarters, so each list part contains three months. Now consider a query on this table:
      
      ```sql
      select * from DATE_PARTS where month between 1 and 3;
      ```
      
      Prior to this PR, the ORCA-generated plan would consider each value of Q1 as a separate range part with just one point in its range. I.e., we would have 3 virtual parts to evaluate for just one Q1: [1], [2], [3]. This approach is inefficient. The problem is further exacerbated when we have multi-level partitioning. Consider the list part of the above example. We have only 4 rules for 4 different quarters, but we would have 12 different virtual rules (aka constraints). For each such constraint, we would then evaluate the entire subtree of partitions.
      
      After this PR, we no longer decompose rules into constraints for list parts and then derive single-point virtual range partitions based on those constraints. Rather, the new ORCA changes use ScalarArrayOp to express selectivity on a list of values. So, the expression for the above SQL will look like 1 <= ANY {month_part} AND 3 >= ANY {month_part}, where month_part will be substituted at runtime with a different list of values for each of the quarterly partitions. We will end up evaluating that expression 4 times with the following lists of values:
      
      Q1: 1 <= ANY {1,2,3} AND 3 >= ANY {1,2,3}
      Q2: 1 <= ANY {4,5,6} AND 3 >= ANY {4,5,6}
      ...
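      In executable form, the two quarterly checks above reduce to ordinary
      ScalarArrayOp expressions:

      ```sql
      SELECT 1 <= ANY (ARRAY[1,2,3]) AND 3 >= ANY (ARRAY[1,2,3]);  -- true: scan Q1
      SELECT 1 <= ANY (ARRAY[4,5,6]) AND 3 >= ANY (ARRAY[4,5,6]);  -- false: skip Q2
      ```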
      
      Compare this to the previous approach, where we would end up evaluating 12 different expressions, each against a single point value:
      
      First constraint of Q1: 1 <= 1 AND 3 >= 1
      Second constraint of Q1: 1 <= 2 AND 3 >= 2
      Third constraint of Q1: 1 <= 3 AND 3 >= 3
      First constraint of Q2: 1 <= 4 AND 3 >= 4
      ...
      
      The ScalarArrayOp depends on a new type of expression PartListRuleExpr that can convert a list rule to an array of values. ORCA specific changes can be found here: https://github.com/greenplum-db/gporca/pull/149
      5cecfcd1
    • A
      Fix XidlimitsTests, avoid going back after bumping the xid. · 5b2ea684
      Committed by Ashwin Agrawal
      The auto-vacuum limit is reached first, then the warn limit, followed
      by the other limits. So there is no reason to roll back after bumping
      the xid to the auto-vacuum limit; doing so can lead to all kinds of
      weird issues. Practically, these tests need to be fully rewritten,
      perhaps by modifying the GUCs and then actually generating XIDs to
      reach the limits instead of simulating that by bumping the counter,
      but that will be attempted in another commit.
      5b2ea684
    • A
      Make storage test robust by checking if DB up. · 2cd7fd17
      Committed by Ashwin Agrawal
      2cd7fd17
  5. 31 Mar 2017 (4 commits)
    • H
      Fix SQL escaping bug in internal query. · 1843ec62
      Committed by Heikki Linnakangas
      The old comment was unsure if toast tables and indexes have their Oids
      in sync across the cluster. They do, so let's rely on that, which is
      simpler.
      1843ec62
    • D
      Fix bug in duplicate distribution columns for CTAS · 02a38f5e
      Committed by Daniel Gustafsson
      When creating a table using the CTAS construction with an explicit
      distribution clause, the duplicate check for distribution columns
      wasn't applied:
      
        CREATE TABLE tt AS SELECT * FROM t DISTRIBUTED BY (c,c);
      
      Move the duplicate distribution check to the parsing stage since
      it is a syntax error to supply duplicate column references. This
      alters the behavior such that supplying a duplicate non-existent
      column errors out on the duplication rather than on the column
      being non-existent. Since syntax errors should take precedence
      over semantic errors, I find this to be the correct order.
      
      Remove the existing duplicate checks which only applied to CREATE
      TABLE and ALTER TABLE, and add test cases to cover CTAS. The ticket
      reference was removed since it's not public and also clearly didn't
      cover all the cases. The DROP TABLE is also moved to not using IF
      EXISTS since it would be an error if the table didn't exist at that
      point.
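      With the check in the parser, all of the following now fail on the
      duplicate column reference (a sketch; the exact error wording is not
      taken from the commit):

      ```sql
      CREATE TABLE t1 (c int) DISTRIBUTED BY (c, c);             -- rejected
      CREATE TABLE tt AS SELECT * FROM t DISTRIBUTED BY (c, c);  -- rejected
      -- duplicate of a non-existent column: the duplication is reported first
      CREATE TABLE tt2 AS SELECT * FROM t DISTRIBUTED BY (x, x);
      ```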
      02a38f5e
    • C
      Revert "Print more output when behave backup and transfer tests fail. (#2082)" · 47c47b3c
      Committed by Christopher Hajas
      This reverts commits b20a3f92 and 37cdd250,
      since the gptransfer suite is failing.
      47c47b3c
    • C
      Print more output when behave backup and transfer tests fail. (#2082) · b20a3f92
      Committed by Chris Hajas
      Instead of simply printing which files are different, print the contents
      of the diff files.
      b20a3f92
  6. 30 Mar 2017 (3 commits)
    • D
      Remove slow CTAS from gp_create_table to speed up test · ade37378
      Committed by Daniel Gustafsson
      The gp_create_table suite tests that the maximum number of columns
      can be specified as a distribution clause (well, it really tests
      that CREATE TABLE allows the maximum number of columns since that
      check applies first), and then tests to CTAS from the resulting
      table. There is no reason to believe that there is a difference
      between CTAS with 1600 columns and CTAS with fewer columns. Remove
      this case to speed up the test significantly and also adjust the
      DROP TABLE clauses to match reality.
      ade37378
    • A
      Enable GPPC tests in ICW · 3ab8ef38
      Committed by Adam Lee
      Enable existing GPPC tests and move more from TINC to ICW to get higher
      coverage.
      
      For now we keep the gppc_test, gppc_demo and tabfunc_gppc_demo test
      suites separate; we may investigate further and merge them later.
      Signed-off-by: Adam Lee <ali@pivotal.io>
      Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
      3ab8ef38
    • J
      Remove bugbuster leftover schema_topology · 9c854010
      Committed by Jesse Zhang
      Not to be confused with a TINC test with the same name.
      This test was brought into the main suite in
      b82c1e60 as an effort to increase
      visibility into all the tests that we cared about. We never had the
      bandwidth to look at the intent of this test though.
      
      There is a plethora of problems with the BugBuster version of
      `schema_topology`:
      
      0. It has very unclear and mixed intent. For example, it depends on
      gphdfs (which nobody outside Pivotal can build), but it just tests that
      we are able to revoke privileges on that protocol.
      0. It creates and switches databases
      0. The vast majority of cases in this test duplicate coverage from
      tests elsewhere in `installcheck-good` and TINC.
      
      Burning this with fire.
      9c854010
  7. 29 Mar 2017 (1 commit)
  8. 28 Mar 2017 (2 commits)
    • M
      Revert "Rewrite kerberos tests (#2087)" · a0f20dfc
      Committed by Management and Monitoring Team
      -- it is red in the pipeline with a report of OoM, but that could
      actually be a symptom of a missing library.
      
      This reverts commit e4976920.
      a0f20dfc
    • M
      Rewrite kerberos tests (#2087) · e4976920
      Committed by Marbin Tan
      * Rewrite Kerberos test suite
      
      * Remove obsolete Kerberos test stuff from pipeline and TINC
      
      We now have a rewritten Kerberos test script in installcheck-world.
      
      * Update ICW kerberos test to run on concourse container
      
      This adds a whole new test script in src/test/regress, implemented in plain bash. It sets up a temporary KDC as part of the script, and therefore doesn't rely on a pre-existing Kerberos server, like the old MU_kerberos-smoke test job did. It does require MIT Kerberos server-side utilities to be installed, but no server needs to be running, and no superuser privileges are required.
      
      This supersedes the MU_kerberos-smoke behave tests. The new rewritten bash script tests the same things:
        1. You cannot connect to the server before running 'kinit' (to verify that the server doesn't just let anyone in, which could happen if the pg_hba.conf is misconfigured for the test, for example)
        2. You can connect, after running 'kinit'
        3. You can no longer connect, if the user account is expired
      
      The new test script is hooked up to the top-level installcheck-world target.
      
      There were also some Kerberos-related tests in TINC. Remove all that, too. They didn't seem interesting in the first place; they look like they were just copies of a few random other tests, intended to be run as a smoke test after a connection had been authenticated with Kerberos, but there was nothing in there to actually set up the Kerberos environment in TINC.
      
      Other minor patches added:
      
      * Remove absolute path when calling kerberos utilities
      -- assume they are on path, so that they can be accessed from various installs
      -- add clarification message if sample kerberos utility is not found with 'which'
      
      * Specify empty load library for kerberos tools
      
      * Move kerberos test to its own script file
      -- this allows a failure to be recorded without exiting Make, and
      therefore the server can be turned off always
      
      * Add trap for stopping kerberos server in all cases
      * Use localhost for kerberos connection
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
      Signed-off-by: Chumki Roy <croy@pivotal.io>
      Signed-off-by: Larry Hamel <lhamel@pivotal.io>
      e4976920
  9. 27 Mar 2017 (2 commits)
  10. 24 Mar 2017 (2 commits)
    • M
      Removing gpmigrator · 68178574
      Committed by Marbin Tan
      gpmigrator and gpmigrator_mirror are older utilities
      that helped with the upgrade from 4.2 to 4.3.
      They are no longer necessary; we are removing all
      related gpmigrator calls and mentions.
      
      * Remove gpmigrator from travis
      * Remove upgrade test
      Signed-off-by: Chumki Roy <croy@pivotal.io>
      68178574
    • M
      Remove -m from gpstart on gpstop test. (#2092) · d55a374c
      Committed by Marbin Tan
      Previously, `-a -m` from gpstart still resulted in a prompt due to a
      bug. This meant that the changed line did not run
      properly -- it probably hung on the prompt, since we don't validate
      whether gpstart actually runs properly.
      6edeefec fixed the gpstart prompt issue and
      caused this issue (actually running with master only) to finally show
      up, which made the later tests fail because queries to the
      database could not run. Ensure that we start fresh by doing
      a `gpstart -a` instead of just starting the master.
      d55a374c