1. 07 Mar 2017, 20 commits
    • H
      Remove unnecessary checks that error log functions are in pg_proc. · 69db441b
      Heikki Linnakangas committed
      A long time ago, they apparently weren't in the catalogs, but they're not
      going to suddenly disappear from there now, any more than any other
      function. The functions themselves are called elsewhere in the test file,
      so if they are missing or broken, the test will fail that way.
      69db441b
    • H
      Add function for throwing ereport-like errors from gpopt code. · f425e97c
      Heikki Linnakangas committed
      This allows us to have exactly the same error message and hint for errors
      as the traditional planner produces. That makes testing easier, as
      you don't need to have a different expected output file for ORCA and
      non-ORCA. And it allows for more structured errors anyway.
      
      Use the new function for the case of trying to read from a WRITABLE
      external table. There was no test for that in the main test suite
      previously. There was one in the gpfdist suite, but that's not really the
      right place, as that error is caught the same way regardless of the
      protocol. While we're at it, re-word the error message and change the error
      code to follow the Postgres error message style guide.
      f425e97c
    • H
      Remove more redundant tests with different datatypes. · 994f5418
      Heikki Linnakangas committed
      Like in commit 08f1d8e8. Float8, time, timestamp, and timestamptz have
      the same typlen, typbyval, typtype, typalign and typstorage attributes as
      int8. It is therefore sufficient to run these tests with int8.
      
      Likewise, date has the same type attributes as int4, so the tests on int4
      cover the same ground.
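      The equivalence is easy to check against the catalog; for example, in any
      GPDB/PostgreSQL session (an illustrative query, not part of the commit):

      ```sql
      SELECT typname, typlen, typbyval, typtype, typalign, typstorage
      FROM pg_type
      WHERE typname IN ('int8', 'float8', 'time', 'timestamp', 'timestamptz');
      ```

      All five rows should report identical values in every column after typname.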
      994f5418
    • H
      Prefer index scans in partition pruning tests. · 77e1206e
      Heikki Linnakangas committed
      The original TINC tests that these were copied from tested specifically
      that ORCA can use Dynamic Index Scans for these queries. To be closer to
      that spirit, even when running without ORCA, try to use index scans when
      possible. It exercises more codepaths in the planner anyway, and allows
      us to notice more easily if the planner loses the ability to use an index
      for some reason.
      77e1206e
    • O
      Bump Orca version to 2.8.0 · fbe0b0a7
      Omer Arap committed
      fbe0b0a7
    • Y
      Make gpinitsystem similar to gpstart and gpstop prompting (#1942) · 0701a77b
      yanchaozhong committed
      `gpinitsystem`, `gpstart` and `gpstop` used different prompts before:
      
      $ gpinitsystem -c gpinitsystem_config 
      ...
      Continue with Greenplum creation Yy/Nn>
      n
      ...
      
      $ gpstart
      ...
      
      Continue with Greenplum instance startup Yy|Nn (default=N):
      > n
      ...
      
      $ gpstop
      ...
      
      Continue with Greenplum instance shutdown Yy|Nn (default=N):
      > n
      ...
      
      This commit makes them consistent:
      
      $ gpinitsystem -c gpinitsystem_config 
      ...
      
      Continue with Greenplum creation Yy|Nn (default=N):
      > n
      0701a77b
    • T
      PR pipeline: use 'version: every' for downstream jobs · 8b84d9de
      Tom Meyer committed
      Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
      8b84d9de
    • H
      Bring 'constraints' test closer to upstream. · fe5ed784
      Heikki Linnakangas committed
      * Re-enable UPDATE tests
      * After statements that fail on GPDB but not in upstream, or vice versa,
        add fixup statements to restore the test table's state to what it is in
        upstream, so that the expected output for subsequent tests matches up
        with upstream.
      * Change row order in expected output to match upstream.
      fe5ed784
    • A
      Bring Checkpointer and BgWriter code closer to PG 9.2. · a453e7a3
      Ashwin Agrawal committed
      Rename checkpoint.c to checkpointer.c, move the code from bgwriter.c to
      checkpointer.c, and rename most of the corresponding data structures to
      reflect the clear ownership and association. This commit brings it as
      close as possible to PostgreSQL 9.2.
      
      Reference to PostgreSQL related commits:
      commit 806a2aee
          Split work of bgwriter between 2 processes: bgwriter and checkpointer.
      commit bf405ba8
          Add new file for checkpointer.c
      commit 8f28789b
          Rename BgWriterShmem/Request to CheckpointerShmem/Request
      commit d843589e5ab361dd4738dab5c9016e704faf4153
          Fix management of pendingOpsTable in auxiliary processes.
      a453e7a3
    • A
      Prevent deadlock between a committing backend and checkpointer process. · 3112d13d
      Ashwin Agrawal and Xin Zhang committed
      The deadlock occurs when a backend is about to unlink a file but finds the
      shared fsync requests queue full. If the backend waits for checkpointer to
      consume fsync requests from the queue, there is a deadlock. Because
      checkpointer is already waiting for this backend on `MyProc->inCommit`.
      
      A backend that is about to unlink a file is going through commit processing and
      must already have set `MyProc->inCommit`.  This commit prevents the deadlock
      by having the checkpointer process absorb requests from the shared queue
      while waiting for the transactions to commit.
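      The shape of the fix can be sketched with a tiny self-contained model (this
      is illustrative C, not GPDB code; the real counterparts are the
      ForwardFsyncRequest()/AbsorbFsyncRequests() machinery):

      ```c
      /* Toy single-process model: a fixed-size "shared" fsync request queue,
       * a backend that must enqueue all its requests before it can finish its
       * commit, and a checkpointer that absorbs requests while it waits. */

      #define QUEUE_MAX 4

      static int queue_len = 0;   /* requests sitting in the shared queue */
      static int absorbed  = 0;   /* requests the checkpointer has consumed */

      /* Backend side (ForwardFsyncRequest analogue): fails when full. */
      static int forward_fsync_request(void)
      {
          if (queue_len >= QUEUE_MAX)
              return 0;           /* queue full: the backend has to wait */
          queue_len++;
          return 1;
      }

      /* Checkpointer side (AbsorbFsyncRequests analogue). */
      static void absorb_fsync_requests(void)
      {
          absorbed += queue_len;
          queue_len = 0;
      }

      /* Checkpointer waiting for one in-commit backend that still has
       * 'pending' requests to enqueue.  Absorbing inside the wait loop is
       * what breaks the cycle.  Returns the number of wait iterations,
       * or -1 if we "deadlocked". */
      static int wait_for_commit(int pending)
      {
          for (int iter = 1; iter <= 100; iter++)
          {
              absorb_fsync_requests();        /* the fix */
              while (pending > 0 && forward_fsync_request())
                  pending--;                  /* backend makes progress */
              if (pending == 0)
                  return iter;                /* backend can now commit */
          }
          return -1;
      }
      ```

      Dropping the absorb call from the loop leaves the model spinning forever
      once the queue is full, which is exactly the deadlock described above.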
      
      Fixes issue discussed on mailing list at:
      https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/PHKuQPNwWs0
      3112d13d
    • A
      Move ShutdownXLOG() to checkpoint process · aab17eca
      Ashwin Agrawal and Xin Zhang committed
      BgWriter no longer has the pendingOpsTable, hence it should not call
      CreateCheckPoint() directly.
      
      Error message was introduced in upstream
      ef072219, and the refactoring of ShutdownXLOG()
      was introduced in upstream 806a2aee.
      aab17eca
    • A
      Correctly maintain pendingOpsTable in checkpoint process. · 0291ff60
      Ashwin Agrawal, Asim R P and Xin Zhang committed
      We had partially pulled the fix to separate checkpoint and bgwriter
      processes and introduced a bug where pendingOpsTable was maintained in
      both the processes.  The pendingOpsTable records pending fsync
      requests.  Only checkpoint process should keep it.  Bgwriter should
      only write out dirty pages to OS cache.  Apparently, upstream also had
      this same bug and it was fixed in
      d843589e5ab361dd4738dab5c9016e704faf4153
      
      Also ensure that background writer sweeps buffers even in the first run after
      checkpoint.  There is no reason to hold off until next run and this is how it
      works in upstream.
      
      Fixes issue discussed on mailing list:
      https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/PHKuQPNwWs0
      0291ff60
    • A
      pg_regress test to validate fsync requests are not lost. · 85b7754d
      Ashwin Agrawal and Xin Zhang committed
      The commit includes a UDF to walk dirty shared buffers and a new fault
      `fault_counter` to count the number of files fsync'ed by the checkpointer
      process.
      
      It also adds another new fault, `bg_buffer_sync_default_logic`, to flush
      all buffers in BgBufferSync() for the background writer process.
      85b7754d
    • H
      Silence deadlock from test case. · c1097447
      Heikki Linnakangas committed
      This test case caused a deadlock, with gp_cte_sharing=on. Disable the
      offending test query, until the issue is fixed.
      c1097447
    • A
      Fix checkpoint wait for CommitTransaction. · 787992e4
      Ashwin Agrawal committed
      `MyProc->inCommit` protects against a checkpoint running while transactions
      are in commit.
      
      However, `MyProc->lxid` also has to be valid, because
      `GetVirtualXIDsDelayingChkpt()` and `HaveVirtualXIDsDelayingChkpt()` require
      `VirtualTransactionIdIsValid()` in addition to `inCommit` to block the
      checkpoint process.
      
      In this fix, we defer clearing `inCommit` and `lxid` to `CommitTransaction()`.
      787992e4
    • A
      Use VXIDs instead of xid for checkpoint delay. · a02b9e99
      Ashwin Agrawal committed
      Originally, checkpoint checked for the xid. However, the xid controls
      transaction visibility, and it is crucial to clear it as soon as the
      process is done with the commit, before releasing locks.
      
      However, checkpoint needs to wait for `AtEOXact_smgr()` to clean up
      persistent table information, which happens after releasing locks, when
      the `xid` is already cleared.
      
      Hence, we use the VXID, which has no impact on visibility.
      
      NOTE: Upstream PostgreSQL commit f21bb9cf contains a similar fix.
      a02b9e99
    • A
      Extend isolation2 test framework. · a5a473c0
      Ashwin Agrawal and Xin Zhang committed
      First, it supports shell scripts and commands:
      
      The new syntax is:
      ```
      ! <some script>
      ```
      This is required to run something like gpfaultinjector, gpstart, gpstop, etc.
      
      Second, it supports utility mode with blocking commands and joins:
      
      The new syntax is:
      ```
      2U&: <sql>
      2U<:
      ```
      The above example means:
      - blocking in utility mode on dbid 2
      - join back to the previous session in utility mode on dbid 2
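      Put together, a hypothetical test file using both extensions might look
      like this (the specific commands are made up for illustration):
      ```
      ! echo "some setup step run through the shell"
      
      2U&: SELECT pg_sleep(5);
      2U<:
      ```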
      
      Fix the exception handling to allow the test to complete, and log the failed
      commands instead of aborting the test. This makes sure all the cleanup steps
      are executed and do not block subsequent tests.
      
      This also includes an init_file for diff comparison, to ignore timestamp
      output from gpfaultinjector.
      a5a473c0
  2. 06 Mar 2017, 14 commits
    • D
      Fix typos in code documentation · 6062bfa8
      Daniel Gustafsson committed
      6062bfa8
    • D
      Remove PL/Perl and PL/R gppkg tests · da580b04
      Daniel Gustafsson committed
      Installing PL languages via gppkg without doing anything to check
      the validity and functionality of the actual installation isn't
      terribly exciting, and on top of that it's also already tested in
      package/language.
      da580b04
    • D
      Remove duplicate PL/Perl tests from TINC · b4068cb5
      Daniel Gustafsson committed
      The tests in package/plperl were, all but two, just copies of the
      tests in pl/plperl/sql, except that each query was in a separate file.
      The two original tests tested for memory errors which were fixed a
      very long time ago and for which the plperl code has since been
      rewritten. Neither test causes issues when tested with large
      inputs, so remove them as well.
      
      Also remove tests in language/plperl which overlap in functionality
      with what we already have.
      b4068cb5
    • H
      Move "CTE functional" tests from TINC to main test suite. · 629f7e76
      Heikki Linnakangas committed
      I had to jump through some hoops to run the same test queries with
      optimizer_cte_inlining both on and off, like the original tests did. The
      actual test queries are in qp_with_functional.sql, which is included by
      two launcher scripts that set the options. The launcher tests are put in
      the test schedule, instead of qp_with_functional.sql.
      
      When ORCA is not enabled, the tests are run with gp_cte_sharing on and off.
      That's not quite the same as inlining, but it's similar in nature in that
      it makes sense to run test queries with it enabled and disabled. There were
      some tests for that in with_clause and qp_with_clause tests already, but I
      don't think some extra coverage will hurt us.
      
      This is just a straightforward conversion, there might be overlapping tests
      between these new tests and existing 'with_clause', 'qp_with_clause', and
      upstream 'with' tests. We can clean them up later; these new tests run in
      a few seconds on my laptop, so that's not urgent.
      
      A few tests were tagged with "@skip" in the original tests. Test queries
      58, 59, and 60 produced different results on different invocations,
      apparently depending on the order that a subquery returned rows (OPT-2497).
      I left them in TINC, as skipped tests, pending a decision on what to do
      about them. Queries 28 and 29 worked fine, so I'm not sure why they were
      tagged as "@skip OPT-3035" in the first place. I converted them over and
      enabled them like the other tests.
      629f7e76
    • H
      Remove redundant CTE negative test cases. · 898130e7
      Heikki Linnakangas committed
      These error cases are covered by the 'with_clause' test in the main suite.
      898130e7
    • D
      Fix ON clause dumping for external tables · 4f4e5a5c
      Daniel Gustafsson committed
      Commit ac2fd680 introduced ON MASTER for all external tables, a
      clause until then only available on EXECUTE. The support in pg_dump
      did however assume there was an ON clause, which isn't necessarily
      the case for external tables in 4.3 (and below), causing "illegal
      ON clause" logging without setting an appropriate ON clause. Fix
      by adding support for non-5.0 servers and rearrange logic to make
      it clearer.
      
      Also; error out on incorrect ON clause, as that could indicate
      catalog corruption and include the relation name in the error
      logging to aid debugging; and use static string/char PQExpBuffer
      functions where possible to avoid overhead.
      
      Apply to pg_dump and cdb_dump_agent.
      4f4e5a5c
    • D
      Set correct subdir for cdb_dump test · d77a1a3c
      Daniel Gustafsson committed
      d77a1a3c
    • D
      Compile pg_dump objects locally for cdb_dump · 13b4235e
      Daniel Gustafsson committed
      The cdb_dump/restore tools are linking some of the pg_dump objects
      for common code. When building, and linking, the shared objects into
      pg_dump folder however there is a race condition in parallel builds
      where an object compiled in pg_dump can be overwritten by the same
      object compiled from pg_dump/cdb while being read by the linker in
      pg_dump. While rare, an error in the CI pipeline indicates that this
      might have caused a compilation failure. Fix by symlinking the shared
      code into pg_dump/cdb to create a local copy.
      
      While there, did some general tidying up: break a few long lines into
      more readable chunks; make use of the $^ make variable to reference
      objects from the target definition; set the $(X) extension properly;
      ensure that all generated files are removed on make clean; use the
      variables rather than hardcoding paths; remove references to objects
      that no longer exist.
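      The symlinking pattern looks roughly like this (an illustrative Makefile
      sketch; the file names are examples, not the actual rules):

      ```
      # compile a private copy of shared pg_dump sources to avoid the race
      DUMP_COMMON = dumputils.c

      $(DUMP_COMMON): %.c: $(top_builddir)/src/bin/pg_dump/%.c
      	rm -f $@ && $(LN_S) $< $@

      cdb_dump$(X): cdb_dump.o $(DUMP_COMMON:.c=.o)
      	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LIBS) -o $@
      ```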
      13b4235e
    • H
      Remove redundant UPDATE and DELETE tests. · bc2208ad
      Heikki Linnakangas committed
      The UPDATE cases are sufficiently covered by the 'update' test in the main
      test suite. The DELETE cases are covered by the 'delete' test, and many
      others.
      bc2208ad
    • H
      Remove dead-simple, and therefore redundant, tests. · 9c3e069f
      Heikki Linnakangas committed
      We have plenty of INSERTs with constants all over the normal test suite.
      For example, in the 'insert' test. For the AO and AOCS tables, in
      'appendonly' and 'aocs' tests, and the 'uao_dml' tests.
      
      For the partitioned cases, we have inserts in the 'partition' and
      'partition1' tests, for example.
      9c3e069f
    • H
      Move 'colalias_dml_decimal' test from TINC, remove other redundant tests. · 3b3ac609
      Heikki Linnakangas committed
      These tests use column aliases that "shadow" real column names. Not very
      interesting, IMHO, but keep one copy, moved to the 'as_alias' test in
      main test suite. Remove the other copies, with different datatypes, they
      seem totally redundant.
      
      There's really no need to test this with all different datatypes.
      
      (These tests also had nothing to do with MPP-21090, like the description
      tags claimed.)
      3b3ac609
    • H
      Remove uninteresting tests on using + operator in DML queries. · 5423ebdc
      Heikki Linnakangas committed
      These expressions are as simple as it gets. If any of these queries would
      fail, the system would be so broken that you would notice it very quickly.
      
      (These tests had nothing to do with MPP-21090, like the description tags
      claimed. That bug happened when you dropped a column and then added a
      partition, and there are no DROP COLUMNS or ADD PARTITIONS in these tests.)
      5423ebdc
    • D
      Initialize global var to avoid macOS linker error · 4e0739eb
      Daniel Gustafsson committed
      The macOS ld64 linker has an assertion on empty DATA segments
      within linker Atoms. This assertion trips on the resource_manager
      since it only contains uninitialized variables placed for the BSS
      segment. This fails linking the backend on the resource_manager
      SUBSYS object. Without anything initialized, and no exported
      function symbols, the sections are:
      
      $ nm -mgU src/backend/utils/resource_manager/SUBSYS.o
      0000000000000004 (common) (alignment 2^2) external _Gp_resource_manager_policy
      0000000000000001 (common) external _ResourceScheduler
      
      With the initialization of the ResourceScheduler GUC variable:
      
      $ nm -mgU src/backend/utils/resource_manager/SUBSYS.o
      0000000000000004 (common) (alignment 2^2) external _Gp_resource_manager_policy
      0000000000000004 (__DATA,__common) external _ResourceScheduler
      
      Since the resource_manager in its current state is off anyway, it
      seems harmless to initialize it to the correct value.
      4e0739eb
    • H
      Move tests on using Dynamic Index Scans with partitioned tables. · 6ad7a5e3
      Heikki Linnakangas committed
      The old test script had some special logic in it. It ran the test scripts
      as usual, but it also did EXPLAIN on each query, and checked that the
      EXPLAIN output contained "Dynamic Index Scan" (except for a few queries
      that were marked with the "@negtest True" tag). I didn't copy over that
      approach; instead, I added explicit EXPLAINs to each query, and let
      gpdiff.pl check that the plan matches the one in the expected output.
      That's slightly more fragile: the tests will now fail if there are some
      other changes to the plans, even if they still use a Dynamic Index Scan.
      I think that's quite reasonable, and shouldn't cause too many false
      positives. These queries are fairly simple, so there aren't very many
      possible plans for them anyway. Worst case is that we'll have to update
      the expected output every now and then, when there are changes to the
      planners.
      
      The original tests were run with:
        SELECT disable_xform('CXformDynamicGet2DynamicTableScan')
      
      but I left that out of the new tests. With Orca and
      CXformDynamicGet2DynamicTableScan disabled, some test queries fell back to
      the Postgres planner. That didn't seem very interesting. I'm guessing that
      the point of disabling CXformDynamicGet2DynamicTableScan was to force the
      queries to use a (Dynamic) Index Scan, rather than a Table Scan, but Orca
      seems to choose Index Scans for all the queries even without it.
      
      Some test queries were tagged with "@negtest True", but nevertheless used
      Dynamic Index Scans, so they would've passed the original Dynamic Index
      Scan checks too, if they hadn't been tagged. Perhaps Orca has improved
      since the tests were written?
      6ad7a5e3
  3. 05 Mar 2017, 3 commits
    • H
      Remove redundant tests with different datatypes. · 08f1d8e8
      Heikki Linnakangas committed
      Float8, time, timestamp, and timestamptz have the same typlen, typbyval,
      typtype, typalign and typstorage attributes as int8. It is therefore
      sufficient to run these tests with int8.
      
      Likewise, date has the same type attributes as int4, so the tests on int4
      cover the same ground.
      08f1d8e8
    • H
      Remove more redundant TINC tests on int and int4 types. · 9a432ad0
      Heikki Linnakangas committed
      In commit 989c502f, I already removed a bunch of these, but didn't catch
      them all then. This removes the remaining tests on "int", where an
      identical test on "int4" also exists. "int" is just an alias for "int4", so
      there is no difference.
      
      I just realized the reason these tests existed in the first place: the
      "int" tests were supposed to actually test the "int2" datatype, not "int4".
      The test values in the "int" test have been chosen so that they would fit
      in a 16-bit integer. So if we wanted to restore the original purpose of
      these tests, we should search & replace "int" with "int2". But testing all
      the different datatypes isn't too interesting anyway, and there's no reason
      to believe that int2 would be special while at the same time not testing
      many datatypes with different lengths, alignments and pass-by-value flags,
      so
      let's just remove these.
      9a432ad0
    • H
      Fix broken free_readfile() · edf5753f
      Heikki Linnakangas committed
      Commit 5a2d9a5a cherry-picked some code to plug a memory leak from
      test_postmaster_connection(), but the surrounding code looks quite
      different in the upstream version where it was cherry-picked from. In our
      8.3-based version of the function, 'optlines' is modified in the loop, and
      doesn't point to the original allocated array anymore, so we passed a bogus
      pointer to free().
      
      This made e.g. "pg_ctl restart -w" throw GLIBC warnings.
      
      Remove the offending call to free_readfile(), reverting that back to the
      way it was before commit 5a2d9a5a. Yes, that re-introduces the memory
      leak, but that's not a big deal. pg_ctl is not a server program, it will
      exit as soon as it gets its job done, so the leak is very short-lived.
      It was plugged only to silence Coverity warnings in the first place.
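      The bug class is easy to demonstrate in isolation (a made-up stand-in,
      not the pg_ctl code):

      ```c
      #include <stdlib.h>
      #include <string.h>

      /* Build a NULL-terminated array of allocated strings, as readfile()
       * would, so the caller has something to walk and free. */
      static char **read_fake_lines(void)
      {
          char **lines = calloc(4, sizeof(char *));
          for (int i = 0; i < 3; i++)
              lines[i] = strdup("line");
          return lines;
      }

      static int count_and_free(char **lines)
      {
          char **orig = lines;    /* keep the pointer the allocator returned */
          int   n = 0;

          while (*lines != NULL)
          {
              n++;
              lines++;            /* 'lines' no longer equals 'orig' */
          }

          /* free(lines) here would be the bogus free() described above,
           * since 'lines' now points into the middle of the allocation */
          for (char **p = orig; *p != NULL; p++)
              free(*p);
          free(orig);             /* free what was actually allocated */
          return n;
      }
      ```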
      edf5753f
  4. 04 Mar 2017, 3 commits
    • D
      Avoid string duplication in cdb_bsa agents · d286a675
      Daniel Gustafsson committed
      The cdb_bsa agents were all following a similar pattern, strduping
      the optarg and then strncpy()'ing the already allocated string into
      a second allocated string which was used for processing. Remove the
      extra malloc() + strncpy() since we already have perfectly good
      strings to operate on from the optarg processing. Also remove calls
      to free() just before exit, since that's unnecessary.
      d286a675
    • H
      Remove broken "execute_all_plans" mechanism from TINC tests. · c73c77a8
      Heikki Linnakangas committed
      There was a mechanism in the DMLSQLTestCase class, in __init__.py, to
      enumerate all the different plans that ORCA could generate for them, and
      execute them all. The test cases marked with the "@execute_all_plans True"
      tag were supposed to be executed that way. However, it was broken: the
      DMLSQLTestCase class was not used, the test_dml class defined in
      test_dml.py inherited directly from SQLTestCase, not DMLSQLTestCase, so all
      the tests were executed the usual way.
      
      I assume that was an oversight, although it's also possible that these
      tests were copy-pasted from somewhere else, and the @execute_all_plans were
      just left over and not meant to do anything. In any case, I don't think
      it's worthwhile to enumerate and execute all the plans. These test queries
      are fairly simple, so there probably aren't very many plans for each query,
      and they are not that special, anyway.
      
      To avoid confusion, remove the broken mechanism, and all the
      @execute_all_plans tags.
      c73c77a8
    • J
      Changed backup file naming conventions to incorporate content id. (#1861) · ca124632
      Jamie McAtamney committed
      Previously, backup files were keyed only to the dbid of the segment from
      which they were backed up, which caused problems when trying to restore
      files backed up on a mirror segment, restore files to a new cluster with
      a different dbid mapping, and so forth.
      
      This commit fixes the above issues by changing the backup file naming
      convention to incorporate the content id of the backed-up segment and
      enables restoring files based on the content id.  Also, gpdbrestore
      seamlessly handles mixing the old and new filename formats, so users can
      still restore a backup made with a GPDB 4.3 version of gpcrondump that
      used the old format.
      ca124632