1. 05 Apr 2017, 5 commits
    • Fix ON COMMIT DELETE ROWS action for append optimized tables · 5ccdd6a2
      Committed by Asim R P
      The fix is to perform the same steps as a TRUNCATE command - set new relfiles
      and drop existing ones for parent AO table as well as all its auxiliary tables.
      
      This fixes issue #913.  Thank you, Tao-Ma for reporting the issue and proposing
      a fix as PR #960.  This commit implements Tao-Ma's idea but implementation
      differs from the original proposal.
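
      As a quick illustration of the scenario being fixed (a sketch; table and
      column names are made up):

      ```sql
      -- Temp AO table whose rows must vanish at commit, as if TRUNCATEd.
      BEGIN;
      CREATE TEMP TABLE ao_tmp (id int, payload text)
          WITH (appendonly=true) ON COMMIT DELETE ROWS;
      INSERT INTO ao_tmp SELECT g, 'row ' || g FROM generate_series(1, 10) g;
      COMMIT;
      -- With the fix, the AO table and its auxiliary tables get fresh
      -- relfilenodes at commit, just like a TRUNCATE, so the table is empty.
      SELECT count(*) FROM ao_tmp;  -- expected: 0
      ```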
    • Enhance UAO templating to recognize "aoseg" and "aocsseg" keywords. · 9fc5eee5
      Committed by Asim R P
      This is useful if a test wants to use gp_toolkit.__gp_{aoseg|aocsseg}*
      functions.
    • Ignore very wide columns in analyze sample · 8492807f
      Committed by Bhuvnesh Chaudhary and Ekta Khanna
      ANALYZE collects a sample from the table; if the sample contains columns
      with very long values, memory usage can grow high enough that the query
      gets cancelled.

      This commit masks wide values (i.e. pg_column_size(col) > WIDTH_THRESHOLD
      (1024)) in variable-length columns to avoid high memory usage while
      collecting the sample. Column values exceeding WIDTH_THRESHOLD are marked
      as NULL and are ignored in the collected sample tuples when computing
      stats on the relation.

      In the case of expression/predicate indexes on the relation, the wide
      columns will be treated as NULL and will not be filtered out. It is rare
      to have such indexes on very wide columns, so the effect on stats
      (nullfrac etc.) will be minimal.
      Signed-off-by: Omer Arap <oarap@pivotal.io>
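
      For illustration, a sketch of the kind of value that gets masked
      (WIDTH_THRESHOLD is 1024 bytes; table and column names are made up):

      ```sql
      CREATE TABLE wide_sample_test (id int, doc text);
      -- Each doc value is about 2000 bytes raw, exceeding WIDTH_THRESHOLD.
      INSERT INTO wide_sample_test
      SELECT g, repeat('x', 2000) FROM generate_series(1, 1000) g;
      ANALYZE wide_sample_test;
      -- Over-wide doc values are treated as NULL in the sample, so statistics
      -- such as null_frac and avg_width for "doc" reflect the masked sample.
      SELECT attname, null_frac, avg_width
      FROM pg_stats WHERE tablename = 'wide_sample_test';
      ```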
    • Use STRICT for functions using textin/textout · 038457a5
      Committed by Bhuvnesh Chaudhary
      The breakin/out functions should be marked as STRICT,
      because the underlying C functions, textin/textout,
      don't expect a NULL to be passed to them.
      Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
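
      A sketch of why STRICT matters for such wrappers (the function below is
      made up; it only illustrates the calling convention):

      ```sql
      -- A STRICT function is never invoked with a NULL argument: the executor
      -- short-circuits and returns NULL, so the underlying C code (textin or
      -- textout here) never has to cope with a NULL pointer.
      CREATE FUNCTION echo_text(text) RETURNS text
          AS 'SELECT $1' LANGUAGE sql STRICT;

      SELECT echo_text(NULL);  -- returns NULL without evaluating the body
      ```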
    • Fix ICW tests after introducing relfilenode counter change. · 974e78dc
      Committed by Jimmy Yih
      These tests assumed OID == relfilenode. We updated the tests to not assume it
      anymore.
  2. 04 Apr 2017, 1 commit
  3. 03 Apr 2017, 4 commits
    • Remove unused BugBuster leftovers · 7dbaace6
      Committed by Daniel Gustafsson
      With the last remaining testsuites moved over to ICW, there is no
      longer anything left running in BugBuster. Remove the remaining
      files and BugBuster makefile integration in one big swing of the
      git rm axe. The only thing left in use was a data file which was
      referenced from ICW; move this to regress/data instead.
    • Fix typo and spelling in memory_quota util · c76b1c4b
      Committed by Daniel Gustafsson
    • Move BugBuster memory_quota test to ICW · 6cc722e0
      Committed by Daniel Gustafsson
      This moves the memory_quota tests more or less unchanged to ICW.
      Changes include removing ignore sections and minor formatting as
      well as a rename to bb_memory_quota.
    • Migrate BugBuster mpph tests to ICW · 42b33d42
      Committed by Daniel Gustafsson
      This combines the various mpph tests in BugBuster into a single
      new ICW suite, bb_mpph. Most of the existing queries were moved
      over with a few pruned that were too uninteresting, or covered
      elsewhere.
      
      The BugBuster tests combined are: load_mpph, mpph_query,
      mpph_aopart, hashagg and opperf.
  4. 01 Apr 2017, 8 commits
    • Remap transient typmods on receivers instead of on senders. · ab4398dd
      Committed by Pengzhou Tang
      QD used to send a transient types table to QEs, then QE would remap the
      tuples with this table before sending them to QD. However in complex
      queries the QD can't discover all the transient types, so tuples can't be
      correctly remapped on QEs. One example is shown below:
      
          SELECT q FROM (SELECT MAX(f1) FROM int4_tbl
                         GROUP BY f1 ORDER BY f1) q;
          ERROR:  record type has not been registered
      
      To fix this issue we changed the underlying logic: instead of sending
      the possibly incomplete transient types table from QD to QEs, we now
      send the tables from motion senders to motion receivers and do the remap
      on receivers. Receivers maintain a remap table for each motion so tuples
      from different senders can be remapped accordingly. In this way, queries
      containing multiple slices can also handle transient record types correctly
      between two QEs.
      
      The remap logic is derived from executor/tqueue.c in upstream Postgres.
      It supports composite/record types and arrays as well as range types;
      however, since range types are not yet supported in GPDB, that logic is
      placed under a conditional compilation macro and should, in theory, be
      enabled automatically once range types are supported in GPDB.
      
      One side effect of this approach is a performance penalty on receivers,
      as the remap requires recursive checks on each tuple containing record
      types. However, an optimization keeps this overhead minimal for
      non-record types.

      The old logic of building the transient types table on the QD and sending
      it to the QEs is retired.
      Signed-off-by: Gang Xiong <gxiong@pivotal.io>
      Signed-off-by: Ning Yu <nyu@pivotal.io>
    • Remove the record comparison functions in d9148a54. · 45e7669e
      Committed by Gang Xiong
      Commit d9148a54 enabled record arrays as well as comparison of record
      types. However, the OIDs that the comparison functions/operators use in
      upstream Postgres are already taken by others in GPDB, and many test cases
      assume that comparison of record types should fail. As we don't actually
      need this comparison feature in GPDB at the moment, we simply remove
      these functions for now.
      Signed-off-by: Ning Yu <nyu@pivotal.io>
    • Implement comparison of generic records (composite types), and invent a... · 02335757
      Committed by Tom Lane
      Implement comparison of generic records (composite types), and invent a pseudo-type record[] to represent arrays of possibly-anonymous composite types. Since composite datums carry their own type identification, no extra knowledge is needed at the array level.
      
      The main reason for doing this right now is that it is necessary to support
      the general case of detection of cycles in recursive queries: if you need to
      compare more than one column to detect a cycle, you need to compare a ROW()
      to an array built from ROW()s, at least if you want to do it as the spec
      suggests.  Add some documentation and regression tests concerning the cycle
      detection issue.
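
      A minimal sketch of what the upstream feature enables (note that the
      entry above, 45e7669e, removes the comparison operators again in GPDB,
      so this illustrates the upstream behavior only):

      ```sql
      -- record[]: an array built from possibly-anonymous ROW()s, as used for
      -- multi-column cycle detection in recursive queries.
      SELECT ARRAY[ROW(1, 'a'::text), ROW(2, 'b'::text)];

      -- Generic record comparison: composite datums carry their own type info.
      SELECT ROW(1, 2) = ANY (ARRAY[ROW(0, 0), ROW(1, 2)]);  -- true
      ```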
    • Use PartitionSelectors for partition elimination, even without ORCA. · e378d84b
      Committed by Heikki Linnakangas
      The old mechanism was to scan the complete plan, searching for a pattern
      with a Join, where the outer side included an Append node. The inner
      side was duplicated into an InitPlan, with the pg_partition_oid aggregate
      to collect the Oids of all the partitions that can match. That was
      inefficient and broken: if the duplicated plan was volatile, you might
      choose wrong partitions. And scanning the inner side twice can obviously
      be slow, if there are a lot of tuples.
      
      Rewrite the way such plans are generated. Instead of using an InitPlan,
      inject a PartitionSelector node into the inner side of the join.
      
      Fixes github issues #2100 and #2116.
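
      A sketch of the kind of query this affects (table names are made up).
      With the change, the inner side of such a join carries a PartitionSelector
      node instead of a duplicated InitPlan:

      ```sql
      -- fact is partitioned; joining it to a small dimension table lets the
      -- PartitionSelector prune fact partitions at runtime based on dim rows.
      EXPLAIN
      SELECT count(*)
      FROM fact f
      JOIN dim d ON f.part_key = d.part_key
      WHERE d.region = 'EU';
      ```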
    • Rewrite kerberos tests (#2087) · 2415aff4
      Committed by Heikki Linnakangas
      * Rewrite Kerberos test suite
      
      * Remove obsolete Kerberos test stuff from pipeline and TINC
      
      We now have a rewritten Kerberos test script in installcheck-world.
      
      * Update ICW kerberos test to run on concourse container
      
      This adds a whole new test script in src/test/regress, implemented in plain bash. It sets up a temporary KDC as part of the script, and therefore doesn't rely on a pre-existing Kerberos server, like the old MU_kerberos-smoke test job did. It does require the MIT Kerberos server-side utilities to be installed instead, but no server needs to be running, and no superuser privileges are required.
      
      This supersedes the MU_kerberos-smoke behave tests. The new rewritten bash script tests the same things:
        1. You cannot connect to the server before running 'kinit' (to verify that the server doesn't just let anyone in, which could happen if the pg_hba.conf is misconfigured for the test, for example)
        2. You can connect, after running 'kinit'
        3. You can no longer connect, if the user account is expired
      
      The new test script is hooked up to the top-level installcheck-world target.
      
      There were also some Kerberos-related tests in TINC. Remove all that, too. They didn't seem interesting in the first place; it looks like they were just copies of a few random other tests, intended to be run as a smoke test after a connection had been authenticated with Kerberos, but there was nothing in there to actually set up the Kerberos environment in TINC.
      
      Other minor patches added:
      
      * Remove absolute path when calling kerberos utilities
      -- assume they are on path, so that they can be accessed from various installs
      -- add clarification message if sample kerberos utility is not found with 'which'
      
      * Specify empty load library for kerberos tools
      
      * Move kerberos test to its own script file
      -- this allows a failure to be recorded without exiting Make, and
      therefore the server can be turned off always
      
      * Add trap for stopping kerberos server in all cases
      * Use localhost for kerberos connection
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
      Signed-off-by: Chumki Roy <croy@pivotal.io>
      Signed-off-by: Larry Hamel <lhamel@pivotal.io>
    • Fix error message, if EXCHANGE PARTITION with multiple constraints fails. · 30400ddc
      Committed by Heikki Linnakangas
      The loop to print each constraint's name was broken: it printed the name of
      the first constraint multiple times. Add a test case, as a matter of
      principle.

      In passing, change the set of tests around this error to all use the same
      partitioned table, rather than dropping and recreating it for each command.
      Also reduce the number of partitions from 10 to 5. This shaves some
      milliseconds off the time to run the test.
    • Set max_stack_depth explicitly in subtransaction_limit ICG test · a5e26310
      Committed by Jingyi Mei
      This comes from 4.3_STABLE repo
      Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
    • Rule based partition selection for list (sub)partitions (#2076) · 5cecfcd1
      Committed by foyzur
      GPDB supports range and list partitions. Range partitions are represented as a set of rules. Each rule defines the boundaries of a part. E.g., a rule might say that a part contains all values in (0, 5], where the left bound 0 is exclusive and the right bound 5 is inclusive. List partitions are defined by a list of values that the part will contain.
      
      ORCA uses the above rule definition to generate expressions that determine which partitions need to be scanned. These expressions are of the following types:
      
      1. Equality predicate as in PartitionSelectorState->levelEqExpressions: If we have a simple equality on partitioning key (e.g., part_key = 1).
      
      2. General predicate as in PartitionSelectorState->levelExpressions: If we need more complex composition, including non-equality such as part_key > 1.
      
      Note:  We also have residual predicate, which the optimizer currently doesn't use. We are planning to remove this dead code soon.
      
      Prior to this PR, ORCA was treating both range and list partitions as range partitions. This meant that each list part would be converted to a set of list values, and each of these values would become a single-point range partition.
      
      E.g., consider the DDL:
      
      ```sql
      CREATE TABLE DATE_PARTS (id int, year int, month int, day int, region text)
      DISTRIBUTED BY (id)
      PARTITION BY RANGE (year)
          SUBPARTITION BY LIST (month)
             SUBPARTITION TEMPLATE (
              SUBPARTITION Q1 VALUES (1, 2, 3), 
              SUBPARTITION Q2 VALUES (4 ,5 ,6),
              SUBPARTITION Q3 VALUES (7, 8, 9),
              SUBPARTITION Q4 VALUES (10, 11, 12),
              DEFAULT SUBPARTITION other_months )
      ( START (2002) END (2012) EVERY (1), 
        DEFAULT PARTITION outlying_years );
      ```
      
      Here we partition the months as a list partition using quarters, so each list part will contain three months. Now consider a query on this table:
      
      ```sql
      select * from DATE_PARTS where month between 1 and 3;
      ```
      
      Prior to this change, the ORCA-generated plan would consider each value of Q1 as a separate range part with just one point range. I.e., we would have 3 virtual parts to evaluate for just Q1: [1], [2], [3]. This approach is inefficient. The problem is further exacerbated when we have multi-level partitioning. Consider the list part of the above example. We have only 4 rules for 4 different quarters, but we would have 12 different virtual rules (aka constraints). For each such constraint, we would then evaluate the entire subtree of partitions.
      
      After this PR, we no longer decompose rules into constraints for list parts and then derive single-point virtual range partitions based on those constraints. Rather, the new ORCA changes use ScalarArrayOp to express selectivity on a list of values. So, the expression for the above SQL will look like 1 <= ANY {month_part} AND 3 >= ANY {month_part}, where month_part will be substituted at runtime with a different list of values for each quarterly partition. We will end up evaluating that expression 4 times with the following lists of values:
      
      Q1: 1 <= ANY {1,2,3} AND 3 >= ANY {1,2,3}
      Q2: 1 <= ANY {4,5,6} AND 3 >= ANY {4,5,6}
      ...
      
      Compare this to the previous approach, where we would end up evaluating 12 different expressions, each time for a single point value:
      
      First constraint of Q1: 1 <= 1 AND 3 >= 1
      Second constraint of Q1: 1 <= 2 AND 3 >= 2
      Third constraint of Q1: 1 <= 3 AND 3 >= 3
      First constraint of Q2: 1 <= 4 AND 3 >= 4
      ...
      
      The ScalarArrayOp depends on a new type of expression PartListRuleExpr that can convert a list rule to an array of values. ORCA specific changes can be found here: https://github.com/greenplum-db/gporca/pull/149
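
      The ScalarArrayOp form above can be sanity-checked directly in SQL; e.g.,
      for the predicate month BETWEEN 1 AND 3 (a sketch):

      ```sql
      -- Q1 (months 1-3) overlaps the predicate, Q2 (months 4-6) does not.
      SELECT 1 <= ANY (ARRAY[1,2,3]) AND 3 >= ANY (ARRAY[1,2,3]);  -- true:  scan Q1
      SELECT 1 <= ANY (ARRAY[4,5,6]) AND 3 >= ANY (ARRAY[4,5,6]);  -- false: skip Q2
      ```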
  5. 31 Mar 2017, 2 commits
    • Fix SQL escaping bug in internal query. · 1843ec62
      Committed by Heikki Linnakangas
      The old comment was unsure if toast tables and indexes have their Oids
      in sync across the cluster. They do, so let's rely on that, which is
      simpler.
    • Fix bug in duplicate distribution columns for CTAS · 02a38f5e
      Committed by Daniel Gustafsson
      When creating a table using the CTAS construction with an explicit
      distribution clause, the duplicate check for distribution columns
      wasn't applied:
      
        CREATE TABLE tt AS SELECT * FROM t DISTRIBUTED BY (c,c);
      
      Move the duplicate distribution check to the parsing stage since
      it is a syntax error to supply duplicate column references. This
      alters the behavior such that supplying a duplicate non-existent column
      will error out on the duplication rather than on the column not existing.
      Since syntax errors should take precedence over semantic errors, I find
      this to be the correct order.
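
      A sketch of the resulting behavior (error wording is indicative only):

      ```sql
      -- Duplicate, existing column: now rejected at parse time.
      CREATE TABLE tt AS SELECT * FROM t DISTRIBUTED BY (c, c);

      -- Duplicate, non-existent column: the duplication is reported first,
      -- since the syntax error takes precedence over the unknown column.
      CREATE TABLE tt AS SELECT * FROM t DISTRIBUTED BY (nosuchcol, nosuchcol);
      ```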
      
      Remove the existing duplicate checks which only applied to CREATE
      TABLE and ALTER TABLE, and add test cases to cover CTAS. The ticket
      reference was removed since it's not public and also clearly didn't
      cover all the cases. The DROP TABLE is also moved to not using IF
      EXISTS since it would be an error if the table didn't exist at that
      point.
  6. 30 Mar 2017, 2 commits
    • Remove slow CTAS from gp_create_table to speed up test · ade37378
      Committed by Daniel Gustafsson
      The gp_create_table suite tests that the maximum number of columns
      can be specified as a distribution clause (well, it really tests
      that CREATE TABLE allows the maximum number of columns since that
      check applies first), and then tests CTAS from the resulting
      table. There is no reason to believe that there is a difference
      between CTAS with 1600 columns and CTAS with fewer columns. Remove
      this case to speed up the test significantly and also adjust the
      DROP TABLE clauses to match reality.
    • Remove bugbuster leftover schema_topology · 9c854010
      Committed by Jesse Zhang
      Not to be confused with a TINC test with the same name.
      This test was brought into the main suite in
      b82c1e60 as an effort to increase
      visibility into all the tests that we cared about. We never had the
      bandwidth to look at the intent of this test though.
      
      There is a plethora of problems with the bugbuster version of
      `schema_topology`:

      1. It has very unclear and mixed intent. For example, it depends on
      gphdfs (which nobody outside Pivotal can build) but it only tests that
      we are able to revoke privileges on that protocol.
      2. It creates and switches databases.
      3. The vast majority of cases in this test duplicate coverage from
      tests elsewhere in `installcheck-good` and TINC.
      
      Burning this with fire.
  7. 29 Mar 2017, 1 commit
  8. 28 Mar 2017, 2 commits
    • Revert "Rewrite kerberos tests (#2087)" · a0f20dfc
      Committed by Management and Monitoring Team
      -- it is red in pipeline with a report of OoM, but that could actually
      be a symptom of a missing library.
      
      This reverts commit e4976920.
    • Rewrite kerberos tests (#2087) · e4976920
      Committed by Marbin Tan
      * Rewrite Kerberos test suite
      
      * Remove obsolete Kerberos test stuff from pipeline and TINC
      
      We now have a rewritten Kerberos test script in installcheck-world.
      
      * Update ICW kerberos test to run on concourse container
      
      This adds a whole new test script in src/test/regress, implemented in plain bash. It sets up a temporary KDC as part of the script, and therefore doesn't rely on a pre-existing Kerberos server, like the old MU_kerberos-smoke test job did. It does require the MIT Kerberos server-side utilities to be installed instead, but no server needs to be running, and no superuser privileges are required.
      
      This supersedes the MU_kerberos-smoke behave tests. The new rewritten bash script tests the same things:
        1. You cannot connect to the server before running 'kinit' (to verify that the server doesn't just let anyone in, which could happen if the pg_hba.conf is misconfigured for the test, for example)
        2. You can connect, after running 'kinit'
        3. You can no longer connect, if the user account is expired
      
      The new test script is hooked up to the top-level installcheck-world target.
      
      There were also some Kerberos-related tests in TINC. Remove all that, too. They didn't seem interesting in the first place; it looks like they were just copies of a few random other tests, intended to be run as a smoke test after a connection had been authenticated with Kerberos, but there was nothing in there to actually set up the Kerberos environment in TINC.
      
      Other minor patches added:
      
      * Remove absolute path when calling kerberos utilities
      -- assume they are on path, so that they can be accessed from various installs
      -- add clarification message if sample kerberos utility is not found with 'which'
      
      * Specify empty load library for kerberos tools
      
      * Move kerberos test to its own script file
      -- this allows a failure to be recorded without exiting Make, and
      therefore the server can be turned off always
      
      * Add trap for stopping kerberos server in all cases
      * Use localhost for kerberos connection
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
      Signed-off-by: Chumki Roy <croy@pivotal.io>
      Signed-off-by: Larry Hamel <lhamel@pivotal.io>
  9. 27 Mar 2017, 1 commit
    • Remove bugbuster tiny2 and stransaction suites · 935a2650
      Committed by Daniel Gustafsson
      The tiny2 suite was a cut-down version of qp_derived_table with
      no new tests added. The stransaction suite was already covered
      by the ICW suites gpdtm_plpgsql and transaction. Neither of these
      two suites was connected to any test schedule any longer.
  10. 22 Mar 2017, 1 commit
  11. 21 Mar 2017, 1 commit
  12. 20 Mar 2017, 1 commit
    • Remove deprecated analytical functions · fae97ae7
      Committed by Daniel Gustafsson
      Most of the builtin analytical functions added in Greenplum have
      been deprecated in favour of the corresponding functionality in
      MADLib. The deprecation notice was committed in a61bf8b7, code as
      well as tests are removed here. The sum(array[]) function require
      the matrix_add() backend code and thus it remains.
      
      This removes matrix_add(), matrix_multiply(), matrix_transpose(),
      pinv(), mregr_coef(), mregr_r2(), mregr_pvalues(), mregr_tstats(),
      nb_classify() and nb_probabilities().
  13. 17 Mar 2017, 3 commits
    • Implement JSON operators and parser from 9.3 · c79f51a1
      Committed by Daniel Gustafsson
      This backports commit 79d39420d in its entirety and the required parts
      of 3cba8240 (original commit messages in full below). The difference from
      upstream is that GPDB uses SFRM_Materialize instead of _Randomize, plus
      the required pg_proc entry changes. Includes the upstream tests and bumps
      the catalog.
      
        commit 79d39420d6cd60cab141b1f13185a2415edfa4a3
        Author: Andrew Dunstan <andrew@dunslane.net>
        Date:   Fri Mar 29 14:12:13 2013 -0400
      
          Add new JSON processing functions and parser API.
      
          The JSON parser is converted into a recursive descent parser, and
          exposed for use by other modules such as extensions. The API provides
          hooks for all the significant parser event such as the beginning and end
          of objects and arrays, and providing functions to handle these hooks
          allows for fairly simple construction of a wide variety of JSON
          processing functions. A set of new basic processing functions and
          operators is also added, which use this API, including operations to
          extract array elements, object fields, get the length of arrays and the
          set of keys of a field, deconstruct an object into a set of key/value
          pairs, and create records from JSON objects and arrays of objects.
      
          Catalog version bumped.
      
          Andrew Dunstan, with some documentation assistance from Merlin Moncure.
      
        commit 3cba8240
        Author: Itagaki Takahiro <itagaki.takahiro@gmail.com>
        Date:   Mon Feb 21 14:08:04 2011 +0900
      
          Add ENCODING option to COPY TO/FROM and file_fdw.
          File encodings can be specified separately from client encoding.
          If not specified, client encoding is used for backward compatibility.
      
          Cases when the encoding doesn't match client encoding are slower
          than matched cases because we don't have conversion procs for other
          encodings. Performance improvement would be a future work.
      
          Original patch by Hitoshi Harada, and modified by me.
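
      A few of the operators and functions this backport makes available (as in
      PostgreSQL 9.3):

      ```sql
      SELECT '{"a": [1, 2, 3]}'::json -> 'a';            -- [1, 2, 3]
      SELECT '{"a": [1, 2, 3]}'::json #>> '{a,1}';       -- 2
      SELECT json_array_length('[1, 2, 3]'::json);       -- 3
      SELECT * FROM json_each('{"x": 1, "y": "two"}');   -- key/value pairs
      ```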
    • Drop table for split multi-level default partition. · 0f5d8109
      Committed by Ashwin Agrawal
      Currently, SPLIT PARTITION of a multi-level partitioned table's default
      partition causes catalog inconsistency, as the constraint is not created
      for the newly created tables. Hence, to keep gpcheckcat runnable after
      ICW, drop the table as a temporary fix.
    • Start running catalog check after ICW. · 5a846b63
      Committed by Ashwin Agrawal
      gpcheckcat is handy for detecting any catalog issues introduced, hence we
      start running it at the end of ICW. Also, make sure the different targets
      do not use the regression database but a database of their own.
  14. 16 Mar 2017, 1 commit
    • Implement DROP RESOURCE GROUP syntax. · 01565b0a
      Committed by Ning Yu
      Any resgroups created by CREATE RESOURCE GROUP syntax can be dropped
      with DROP RESOURCE GROUP syntax; the default resgroups, default_group
      and admin_group, can't be dropped; only a superuser can drop resgroups;
      resgroups with roles bound to them can't be dropped.
      
          -- drop a resource group
          DROP RESOURCE GROUP rg1;
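
      The restrictions above imply behavior like the following (a sketch; the
      group name rg_in_use is made up and error wording is indicative):

      ```sql
      DROP RESOURCE GROUP default_group;  -- fails: built-in group can't be dropped
      DROP RESOURCE GROUP admin_group;    -- fails: built-in group can't be dropped
      DROP RESOURCE GROUP rg_in_use;      -- fails while a role is still assigned to it
      ```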
      
      *NOTE*: this commit only implements the DROP RESOURCE GROUP syntax; the
      actual resource management is not yet supported and will be provided
      later, based on these syntax commits.
      
      *NOTE*: test cases are provided for both CREATE and DROP syntax.
      Signed-off-by: Pengzhou Tang <ptang@pivotal.io>
  15. 15 Mar 2017, 2 commits
    • Use the new optimizer_trace_fallback GUC to detect ORCA fallbacks. · bb3e4b4f
      Committed by Heikki Linnakangas
      qp_olap_group2 did an elaborate dance with optimizer_log=on,
      client_min_messages=on, and gpdiff rules to detect whether the queries
      fell back to the traditional planner. Replace all that with the new simple
      optimizer_trace_fallback GUC.
      
      Also enable optimizer_trace_fallback in the 'gp_optimizer' test. Since this
      test is all about testing ORCA, it seems appropriate to memorize which queries
      currently fall back and which do not, so that we detect regressions where
      we start to fall back on queries that ORCA used to be able to plan. There
      was one existing test that explicitly set client_min_messages=on, like the
      tests in qp_olap_group2, to detect fallback. I kept those extra logging
      GUCs for that one case, so that we have some coverage for that too, although
      I'm not sure how worthwhile it is anymore.
      
      In passing, in the one remaining test in gp_optimizer that sets
      client_min_messages='log', don't assume that log_statement is set to 'all'.
      
      Setting optimizer_trace_fallback=on for 'gp_optimizer' caught the issue
      fixed in the previous commit: one of the ANALYZE queries still used ORCA.
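
      A minimal sketch of how the GUC is used in a test:

      ```sql
      -- Enable the GUC at the top of a test file; any query that falls back to
      -- the traditional planner then emits a notice that shows up in the
      -- expected output, without the extra logging GUCs and gpdiff rules.
      SET optimizer = on;
      SET optimizer_trace_fallback = on;
      ```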
    • Fix planning of a table function with an aggregate. · be4d9ca4
      Committed by Heikki Linnakangas
      add_second_stage_agg() builds a SubQueryScan, and moves the "current" plan
      underneath it. The SS_finalize_plan() call was misplaced; it needs to be
      called before constructing the new upper-level range table, because any
      references to the range table in the subplan refer to the original range
      table, not the new dummy one that only contains an entry for the
      SubQueryScan. It looks like the only thing in SS_finalize_plan() processing
      that accesses the range table is processing a TableFunctionScan. That led
      to an assertion failure or crash, if a table function was used in a query
      with an aggregate.
      
      Fixes github issue #2033.
  16. 14 Mar 2017, 4 commits
    • added tests from upstream for oracle_compat (#2012) · 90caa205
      Committed by Dave Cramer
      * added tests from upstream for oracle_compat
      
      backport of upstream commit 607b2be7, from PostgreSQL 8.4? 
    • Remove uninteresting tests from qp_misc_rio · ff18f0c1
      Committed by Daniel Gustafsson
      This removes uninteresting tests, as well as tests covering functionality
      which is tested elsewhere. The superfluous search_path setting is
      removed, the number of tables created is reduced, and the test data size
      is reduced. The resource_queue test was more or less duplicated already
      but had enough interesting bits to warrant a move to the appropriate
      suite.
      
      This shaves roughly 20% off the runtime of the test suite.
    • Move test case for trying to exchange a partition that has subpartitions. · 4387f5ce
      Committed by Heikki Linnakangas
      And remove the rest of the old partition_exchange TINC test. We already had
      tests in the 'partition' test file for validating that the values in the
      exchanged table are valid for the partition (search for "-- validation").
      
      There were also tests for exchanging a partition in a hash partitioned
      table, but the validation hasn't apparently been implemented for hash
      partitions, and they're not supported anyway, so that doesn't seem very
      interesting to test.
    • Move more CTE tests from TINC. · ecc379fc
      Committed by Heikki Linnakangas
      I'm not sure if it's really worthwhile to test all of these with and
      without CTE inlining, but kept it for now.
  17. 13 Mar 2017, 1 commit
    • Rewrite TINC tests for AO checksum failure, and move to main test suite. · c54ffe6e
      Committed by Heikki Linnakangas
      Some notable differences:
      
      * For the last "appendonly_verify_block_checksums_co" test, don't use a
        compressed table. That seems fragile.
      
      * In the "appendonly_verify_block_checksums_co" test, be more explicit in
        what is replaced. It used to scan backwards from end of file for the byte
        'a', but we didn't explicitly include that byte in the test data. What
        actually gets replaced depends heavily on how integers are encoded. (And
        the table was compressed too, which made it even more nondeterministic.)
        In the rewritten test, we replace the string 'xyz', and we use a text
        field that contains that string as the table data.
      
      * Don't restore the original table file after corrupting. That seemed
        uninteresting to test. Presumably the table was OK before we corrupted
        it, so surely it's still OK after restoring it back. In theory, there
        could be a problem if the file's corrupt contents were cached somewhere,
        but we don't cache AO tables, and I'm not sure what we'd try to prove by
        testing that, anyway, because swapping the file while the system is active
        is surely not supported.
      
      * The old script checked that the output when a corrupt table was SELECTed
        from contained the string "ERROR:  Header checksum does not match".
        However, half of the tests actually printed a different error, "Block
        checksum does not match". It turns out that the way the old select_table
        function asserted the result of the grep command was wrong. It should've
        done "assert(int(result) > 0), ..." rather than just "assert(result > 0),
        ...". As written, it always passed, even if there was no ERROR in the
        output. The rewritten test does not have that bug.