1. 08 Nov 2017 (5 commits)
  2. 07 Nov 2017 (16 commits)
    • Remove storage/basic leftovers from Tinc · 6569f9b8
      Daniel Gustafsson committed
      With the partitioning tests moved over to ICW, remove the leftovers
      in the basic directory, as the scripts are (a) not connected to any
      test and (b) haven't been used in a long time (they reference files
      on the local drives of hackers long since gone from the project, etc).
    • Refactor basic partitioning tests from Tinc into ICW · 3d25eba8
      Daniel Gustafsson committed
      The storage/basic/partition tests were a collection of bugfixes from
      primarily older versions of Greenplum. This moves the valuable tests
      over to ICW and removes the ones which are already covered in existing
      ICW suites. The decision for each individual test is elaborated on in
      the list below:
      
      * MPP-3379 was testing an ancient bug, and MPP-3553 a hypothetical
        bug from before the current partition code was written. Combined
        the two, since the added clause still covers the ancient 3379
        issue, and removed the other from Tinc.
      
      * MPP-3625 was mostly already covered by existing tests in partition
        and bfv_partition. Added a test for splitting a non-existing default
        partition, as that was the only case not already covered.
      
      * MPP-6297 is testing the pg_get_partition_def() function after the
        partition hierarchy has been amended via ALTER TABLE, which is
        already covered by the partition suite. Additionally it tested
        running pg_dump, which uses pg_get_partition_def() heavily, and
        this is covered by our pg_upgrade tests.
      
      * MPP-6379 tested that partitions inherited unique indexes; the test
        was moved to the partition_indexing suite.
      
      * MPP-6489 tested ALTER TABLE .. SET DISTRIBUTED BY for subpartitions,
        which didn't seem to be covered in ICW, so it was moved to the
        alter_distribution_policy suite.
      
      * MPP-7661 tested for an old bug where pg_dump incorrectly handled
        partition hierarchies created with the EVERY syntax. The pg_upgrade
        tests in ICW will run this test on hierarchies from the partition
        suites, so it was removed.
      
      * MPP-7734 tested for excessive memory consumption in the case of a
        very deep partitioning hierarchy. The issue in question was fixed
        in 4.0 and the test clocks in at ~1.5 minutes, so remove the test
        to save time in the test pipeline. The test was more of a stress
        test than a regression test at this point, and while that's not
        unimportant, we should run stress tests deliberately and not
        accidentally.
      
      * MPP-6537/6538 tested that new partitions introduced in the hierarchy
        correctly inherited the original owner. Test moved to the partition
        suite.
      
      * MPP-7002 tested that splitting partitions worked after a column
        had been dropped. Since we had a similar test already in the
        partition suite, extend that test to also cover splitting.
      
      * MPP-7164 is testing for partition rank after DDL operations on the
        partition hierarchy. I can't see that we are testing anything along
        these lines in ICW, so pulled it across. In the process, fixed
        the test to actually run the SQL properly and not error out on a
        syntax error. Also removed a duplicated file for setting up the views.
      
      * MPP-9548 tested for the case of inserting data into a column which
        was dropped and then re-added. Since we already had that covered
        in the partition suite, simply extended the comment on that case
        and added another projection to really cover it completely.
      
      * MPP-7232 tests that pg_get_partition_def() supports renamed partitions
        correctly. Added to ICW, but with the pg_dump part of the test removed.
      
      * MPP-17740 tested the .. WITH (tablename = <t> ..) syntax, which we
        already covered to some extent, but a separate case was added to test
        permutations of it.
      
      * MPP-18673 covers an old intermittent bug where concurrent sessions
        splitting partitions would corrupt the relcache. The test in Tinc,
        however, didn't run concurrently, so the underlying bug wasn't being
        tested; remove the test. If we want a similar test at some point
        it should be an isolationtester suite.
      
      * MPP-17761 tests an old bug where splitting an added CO partition
        failed. Since we didn't have much in terms of testing CO splitting,
        extended the block doing split testing to cover it.
      
      * MPP-17707-* were essentially the same test but with varying storage
        options. While a lot of this is covered elsewhere, these tests were
        really easy to read, so rolled them all into a new suite called
        partition_storage.
      
      * MPP-12775 covered yet another split/exchange scenario. Added a
        short variant to the partition suite.
      
      * MPP-17110 tests for an old regression in attribute encoding for
        added columns in partitioning hierarchies. Removed the part of
        the test that checked compression ratio as AO compression should
        be tested elsewhere.
      
      * The partition_ddl2 test was moved over, more or less unchanged, as
        partition_ddl.
      
      This also removes unused answer files mpp8031.ans.orca and query99
      for which there were no corresponding tests, as well as the data
      file used for copying data into the tests (a copy of which already
      exists in src/test/regress/data).
    • Support hash subpartitions in PartitionSelector node · f685a09e
      Daniel Gustafsson committed
      When inserting tuples into a partitioned table with Orca, where the
      tuple belongs in a hashed subpartition, the PartitionSelector node
      errored out on an unknown partition type. Extend partition selection
      for equality comparison to handle hash partitions. Uncovered when
      adding the partition_ddl test from Tinc into ICW.
    • 640fd9d5
    • resgroup: adjust cgroup mount list for centos6 pipeline. · 345abf78
      Ning Yu committed
      The cgroup controllers `hugetlb` and `pids` are not supported on the
      centos6 pipeline, so do not mount them.
    • resgroup: concourse: use here doc instead of ssh args. · c0019271
      Ning Yu committed
      Pass complex commands via bash here documents instead of ssh string
      arguments, to simplify the handling of quotes and multiple lines.
    • resgroup: remove the experimental warning. · 031779ae
      Ning Yu committed
      Currently we show the following experimental warning message. We can
      remove it since we will GA on RedHat/CentOS, and SuSE official support
      is also coming soon. Customers can refer to the documentation for the
      experimental message for SuSE 11.
      
      ```
       gpconfig -c 'gp_resource_manager' -v 'group'
      20171101:21:48:19:000469
      gpconfig:0de5ce56403a:gpadmin-[WARNING]:-Managing queries with resource
      groups is an experimental feature. A work-in-progress version is
      enabled.
      ```
    • merge passwordcheck plugin from pg (#3689) · f9e6c5e1
      Weinan WANG committed
      * merge passwordcheck plugin from pg (c742b795)
      ```
      original commit message:
      Add a hook to CREATE/ALTER ROLE to allow an external module to check the
      strength of database passwords, and create a sample implementation of
      such a hook as a new contrib module "passwordcheck".
      
      Laurenz Albe, reviewed by Takahiro Itagaki
      
      ```
      * merge reason: some customers are waiting for it
    • Print diffs generated in CCP TINC tests · c8350a80
      Jimmy Yih committed
      If the test fails, we will search for diff files in the TINC directory
      and print them all out. This will allow us to see the diff file
      contents from the Concourse job UI. The trap function is copied from
      run_tinc_test.sh, the native Concourse TINC test script.
    • Update README for pgindent and remove outdated files (#3703) · cbe23d0d
      Amil Khanzada committed
      Done as a result of the below discussion on gpdb-dev:
      "Formatting code with CLion in PostgreSQL style"
      Signed-off-by: Ben Christel <bchristel@pivotal.io>
    • Enhance gpperfmon test to include row metrics. · 9a4ceffc
      Shoaib Lari committed
      The gpperfmon test is enhanced to include the rows_out metric from the
      queries_history table.
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
    • Remove mention of no-longer existing target · c9420f96
      Daniel Gustafsson committed
      The make install-check target is no longer used, so update the
      documentation to not refer to it.
    • Fix help output for PXF in autoconf · 6f4ae000
      Daniel Gustafsson committed
      Building with PXF was made the default in df1d6ccb,
      but the autoconf --help message was missed. Update it to match the
      default config.
    • Merge with PostgreSQL 8.4 up to 49f001d8 · fa287d01
      Daniel Gustafsson committed
      This merge with upstream contained a lot of fixes to the tree, with the
      major pieces of user-visible functionality having been previously
      backported during the 5-dev cycle. Some noteworthy changes introduced
      in this merge are:
      
      * Allow pass by value on platforms where a Datum is 8 bytes wide.
      
      * plpgsql now supports RETURN QUERY EXECUTE, as well as attaching DETAIL
        and HINT fields to user-thrown errors. User errors can also specify
        which SQLSTATE to use. plpgsql functions also support CASE statements.
      
      * GIN indexes now support partial matches with tsqueries extended to use
        this, as well as multi-column indexes.
      
      * name_pattern_ops and the pattern equality ops were removed in 7b8a63c3,
        which changes text equality to match bitwise equality. The corresponding
        change has been made to the GPDB bitmap index code.
      
      * DROP <object> .. ; handling was refactored so that when multiple objects
        are specified they are all dropped in a single call. All GPDB-specific
        objects have been updated to match this. As an effect, the error message
        output will be more readable for errors involving many partitions.
        Another change here is that DROP FILESPACE is separated into its own
        command.
      
      * XLogOpenRelation() and XLogReadBuffer() have been refactored for the
        upcoming relation forks.
      
      * Index operator lossiness is now determined at run time instead of at
        plan time.
      
      * EXPLAIN VERBOSE now prints the targetlist of each plan node instead of the
        internal representation of the plan tree.
      
      * A child relation is no longer allowed to lack CHECK constraints
        inherited from its parent. This adds conislocal and coninhcount to
        pg_constraint to track this.
      
      * The always-on junk filter for INSERT and SELECT INTO which returned raw
        disk tuples was superfluous, since ExecInsert() performs a copy anyway.
        In Greenplum, the tuple copying was removed from ExecInsert() to avoid
        double copying as an optimization. This reverts the optimization and
        aligns Greenplum with the upstream code.
      
      * Type category and preferred category type are no longer hardcoded and
        instead use system catalog lookups. All base types, and user defined
        types, must now set typcategory and typispreferred.
      
      * Keeping track of snapshots has been refactored into a list of registered
        snapshots as well as a stack of active snapshots. Memory management for
        snapshot handling is also done automatically now. Dispatching of distributed
        snapshots with queries was also reworked and optimized as fallout from this.
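      Of the changes above, the 8-byte pass-by-value Datum one can be
      illustrated with a minimal stand-in. The typedef and helper names below
      are simplified illustrations, not the real PostgreSQL definitions:

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Simplified stand-in for PostgreSQL's Datum: on platforms where it is
       * 8 bytes wide, an int64 fits directly in it and can be passed by
       * value, with no palloc'd copy needed. */
      typedef uintptr_t Datum;

      static Datum Int64GetDatumByVal(int64_t v) { return (Datum) v; }
      static int64_t DatumGetInt64ByVal(Datum d) { return (int64_t) d; }

      int main(void)
      {
          int64_t big = -1234567890123456789LL;

          /* Round-trips without any heap allocation when sizeof(Datum) == 8. */
          if (sizeof(Datum) == 8)
              assert(DatumGetInt64ByVal(Int64GetDatumByVal(big)) == big);
          return 0;
      }
      ```

      On 32-bit platforms a Datum is only 4 bytes, which is why the real code
      keeps a pass-by-reference fallback behind a configure check.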
      
      This has been a joint effort between Heikki Linnakangas, Daniel Gustafsson,
      Jacob Champion, Tom Meyer, Xiaoran Wang, Max Yang, Jesse Zhang and Omer Arap.
    • Docs - updating RG performance note · d052d07b
      David Yozie committed
    • Pipeline Restructuring (#3781) · ae086f9f
      Michael Roth committed
      * fixed merge conflicts
      
      * updated gate job list
  3. 06 Nov 2017 (19 commits)
    • pipeline: extract pre-run test setup to its own separate task. · bc405b15
      Nadeem Ghani committed
      Ideally, pre-test setup should be explicit for each job, so we
      extracted the setup into its own task to make it more modular.
      
      Any test should be able to use this pattern instead of having to add it
      in run_tinc.yml or run_behave.yml.
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
    • Remove pointless mock test. · 09416670
      Heikki Linnakangas committed
      There's a direct call to MemoryAccounting_Reset() in MemoryContextInit().
      This test was merely testing that the call is indeed there. That seems
      completely uninteresting to me.
      
      I tested what happens if the call is removed. Postmaster segfaults at
      startup. So if that call somehow accidentally gets removed, we will find
      out quickly even without this test.
    • Move copy interrupt test from Tinc to ICW · 52aa1811
      Daniel Gustafsson committed
      Since GPDB has a lot of downstream code in the COPY codepath, testing
      that we are still able to interrupt a COPY in progress is of interest.
      Move the Tinc test to ICW and, in the process, make it actually test the
      interruption by adding enough data that the interrupt has time to hit
      before the copy is done. The original coding only added 100 rows to be
      copied, which always finished well before the interrupt fault injection
      managed to run.
    • 88aa3411
    • Remove Assert on hashList length related to nodeResult. · 44352db9
      Max Yang committed
      We don't need to check
      Assert(list_length(resultNode->hashList) <= resultSlot->tts_tupleDescriptor->natts),
      because the optimizer can be smart enough to reuse columns for queries
      like the following:
      create table tbl(a int, b int, p text, c int) distributed by(a, b);
      create function immutable_generate_series(integer, integer) returns setof integer as
      'generate_series_int4' language internal immutable;
      set optimizer=on;
      insert into tbl select i, i, i || 'SOME NUMBER SOME NUMBER', i % 10 from immutable_generate_series(1, 1000) i;
      The hashList specified by the planner is (1, 1), which references
      immutable_generate_series for (a, b), and resultSlot->tts_tupleDescriptor
      only contains immutable_generate_series. That's fine, so we don't need to
      check again; slot_getattr(resultSlot, attnum, &isnull) will check
      attnum <= resultSlot->tts_tupleDescriptor->natts for us.
      Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
    • Silence Bison deprecation warnings · 9fce2f7a
      Adam Lee committed
          commit 55fb759a
          Author: Peter Eisentraut <peter_e@gmx.net>
          Date:   Tue Jun 3 22:36:35 2014 -0400
      
              Silence Bison deprecation warnings
      
              Bison >=3.0 issues warnings about
      
                  %name-prefix="base_yy"
      
              instead of the now preferred
      
                  %name-prefix "base_yy"
      
              but the latter doesn't work with Bison 2.3 or less.  So for now we
              silence the deprecation warnings.
    • Fix compiler warnings in pg_dump · e020e03d
      Adam Lee committed
      Heikki, at 6a76c5d0, backported two commits from
      upstream to fix this, but that hung gp_dump_agent and got reverted.
      
      This time I just backport one of them, which is enough, as I tested.
      
          commit d923125b
          Author: Peter Eisentraut <peter_e@gmx.net>
          Date:   Fri Mar 2 22:30:01 2012 +0200
      
              Fix incorrect uses of gzFile
      
              gzFile is already a pointer, so code like
      
              gzFile *handle = gzopen(...)
      
              is wrong.
      
              This used to pass silently because gzFile used to be defined as void*,
              and you can assign a void* to a void**.  But somewhere between zlib
              versions 1.2.3.4 and 1.2.6, the definition of gzFile was changed to
              struct gzFile_s *, and with that new definition this usage causes
              compiler warnings.
      
              So remove all those extra pointer decorations.
      
              There is a related issue in pg_backup_archiver.h, where
      
              FILE       *FH;             /* General purpose file handle */
      
              is used throughout pg_dump as sometimes a real FILE* and sometimes a
              gzFile handle, which also causes warnings now.  This is not yet fixed
              here, because it might need more code restructuring.
      
      GitHub issue #447
    • Fix libpq C type warnings · 7e46c187
      Adam Lee committed
      C files need to include postgres.h in the backend, or postgres_fe.h in
      frontend code, to pick up our C types.
      
      fe-protocol3.c: In function ‘pqParseInput3’:
      fe-protocol3.c:256:22: warning: passing argument 1 of ‘pqGetInt64’ from incompatible pointer type [-Wincompatible-pointer-types]
             if (pqGetInt64(&(ao->tupcount), conn))
                            ^
      In file included from fe-protocol3.c:21:0:
      libpq-int.h:629:14: note: expected ‘int64 * {aka long int *}’ but argument is of type ‘long long int *’
       extern int64 pqGetInt64(int64 *result, PGconn *conn);  /* GPDB only */
                    ^~~~~~~~~~
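      The warning arises because, once postgres_fe.h is included, int64 is
      typedef'd as long int on LP64 platforms, while the caller was using
      long long int: the two types have the same width but are distinct to
      the compiler. A minimal stand-in (the stub name below is illustrative,
      not the real libpq function):

      ```c
      #include <assert.h>

      /* Stand-in for the int64 typedef the frontend headers provide; on
       * LP64 platforms it is 'long int', not 'long long int'. */
      typedef long int int64;

      /* Stub with the same shape as pqGetInt64(): it expects an int64 *. */
      static int pqGetInt64Stub(int64 *result)
      {
          *result = 42;
          return 0;
      }

      int main(void)
      {
          int64 tupcount = 0;       /* matching type: no warning */
          /* Declaring tupcount as 'long long int' instead would draw
           * -Wincompatible-pointer-types when its address is passed to
           * pqGetInt64Stub(), even though the sizes are equal. */
          pqGetInt64Stub(&tupcount);
          assert(tupcount == 42);
          return 0;
      }
      ```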
    • Fix cdb_ddboost_util compiler warnings · 92f83f54
      Adam Lee committed
      cdb_ddboost_util.c: In function ‘createFakeRestoreFile’:
      cdb_ddboost_util.c:52:82: warning: pointer type mismatch in conditional expression
       #define DDDOPEN(path, mode, compress) (((compress) == 1) ? (GZDOPEN(path, mode)) : (fdopen(path, mode)))
                                                                                        ^
      cdb_ddboost_util.c:1856:10: note: in expansion of macro ‘DDDOPEN’
         ddfp = DDDOPEN(fd[0], "r", isCompress);
                ^~~~~~~
      cdb_ddboost_util.c:51:80: warning: pointer type mismatch in conditional expression
       #define DDOPEN(path, mode, compress) (((compress) == 1) ? (GZOPEN(path, mode)) : (fopen(path, mode)))
                                                                                      ^
      cdb_ddboost_util.c:1864:14: note: in expansion of macro ‘DDOPEN’
         ddfpTemp = DDOPEN(dd_options->to_file, "w", isCompress);
                    ^~~~~~
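      The underlying issue is that the two arms of the ?: operator have
      incompatible pointer types (gzFile vs. FILE *), so the compiler cannot
      pick a common type for the result. A minimal reproduction with stand-in
      types, showing one common way to silence it by giving both arms a
      single type (names here are illustrative, not the actual fix applied):

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Stand-in for zlib's gzFile, which is a struct pointer type. */
      struct gz_stub { int dummy; };
      typedef struct gz_stub *gzFileStub;

      int main(void)
      {
          struct gz_stub gz = { 1 };
          gzFileStub zfp = &gz;
          FILE *fp = stdout;
          int compress = 0;

          /* 'compress ? zfp : fp' mixes gzFileStub and FILE *, which draws
           * "pointer type mismatch in conditional expression". Casting both
           * arms to void * gives the expression one common type: */
          void *handle = compress ? (void *) zfp : (void *) fp;

          assert(handle == (void *) fp);
          return 0;
      }
      ```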
    • Allow pg_config --configure. · 193c128d
      Heikki Linnakangas committed
      Despite the comment, I see no reason to hide it.
      
      Fixes github issue #2685.
    • resgroup: collect debug info more frequently in cpu test. · 522f5797
      Ning Yu committed
      The resgroup cpu test is still flaky on concourse, but the failure is
      hard to trigger manually, so we have to put more debug info in the test
      logs.
    • Improve resource group code. · 54ef8c21
      Richard Guo committed
      * Set resource group id to be InvalidOid in pg_stat_activity
         when transaction ends.
      * Change the mode of ResGroupLock to LW_SHARED in decideResGroup().
      * Add necessary comments for resource group functions.
      * Change some function names and variable names.
    • Retire resource group test case resgroup_verify_guc. · fae1feb3
      Richard Guo committed
      Test case resgroup_verify_guc verifies the setting of the GUC
      statement_mem. Since this GUC does not take effect under resource
      groups, remove the test case.
    • Symlink libpq files for backend and optimize the makefiles · 1e9cd7d9
      Adam Lee committed
      src/backend's makefiles have their own rules; this commit symlinks the
      libpq files for the backend to leverage them, which is canonical and
      much simpler.
      
      What are the rules?
      
      1. src/backend compiles each SUBDIR, lists the OBJS in the
      sub-directories' objfiles.txt, then links them all into postgres.
      
      2. mock.mk links all OBJS, but filters out the objects which are mocked
      by test cases.
    • Revert "Update mock makefiles to work with libpq objfiles" · e1d673dc
      Adam Lee committed
      This reverts commit 0ed93335.
    • Fix memory tuple binding leak when calling tuplestore_putvalues for tuplestore. · b9058a4d
      Max Yang committed
      For instance, take the simple query:
      create table spilltest2 (a integer);
      insert into spilltest2 select a from generate_series(1,40000000) a;
      The current optimizer generates a FunctionTableScan for generate_series,
      which stores the result of generate_series in a tuplestore. A memory
      leak would happen because the memory tuple binding was constructed on
      every call to tuplestore_putvalues. It was not only a memory leak but
      also a performance problem, because we don't need to construct the
      memory tuple binding for every row, just once. This fix changes the
      interface of tuplestore_putvalues to receive the memory tuple binding
      as input, constructed once outside.
      Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
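      The shape of the fix, building a per-row helper structure once outside
      the loop and passing it in, can be sketched generically. The types and
      function names below are illustrative, not the actual tuplestore API:

      ```c
      #include <assert.h>

      /* Illustrative stand-in for the memory tuple binding. */
      typedef struct { int natts; } Binding;

      static int bindings_built = 0;

      static Binding make_binding(int natts)
      {
          bindings_built++;                /* count constructions */
          return (Binding) { natts };
      }

      /* After the fix, the put function takes the prebuilt binding as input. */
      static void put_row(const Binding *binding, int row)
      {
          (void) binding;
          (void) row;
      }

      int main(void)
      {
          Binding binding = make_binding(3);   /* constructed once, outside */

          for (int row = 0; row < 1000; row++)
              put_row(&binding, row);          /* old code rebuilt it here */

          assert(bindings_built == 1);         /* one construction, not 1000 */
          return 0;
      }
      ```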
    • Fix some indentation and remove extra trailing whitespace in the following files · 645b6473
      Max Yang committed
      before we make some changes to them. Just split the indentation and
      code changes into separate commits to make review easier:
      pg_exttable.c
      prepare.c
      execQual.c
      plpgsql.h
      Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
    • Make qp_dml_oids test case more robust. · 8f5e111a
      Heikki Linnakangas committed
      I frequently saw failures when running the test on my laptop with ORCA.
      The reason is that the OIDs on a user table are not guaranteed to be
      unique across segments in GPDB. Depending on concurrent activity and
      timing, the OID counters on different segments are not always in sync,
      and can produce duplicates when looked at across all nodes. In fact, I
      think the only reason this currently passes so reliably in the pipeline
      is that the test is run in parallel with other tests that also create
      objects, which creates enough noise in the OID allocations.
      
      To fix, modify the test data in the test so that all the initial test rows
      reside on the same segment. Within a segment, the OIDs are unique.
    • Refactoring around ExecInsert. · d0fbecf6
      Heikki Linnakangas committed
      This is mostly in preparation for changes soon to be merged from PostgreSQL
      8.4, commit a77eaa6a to be more precise. Currently GPDB's ExecInsert
      uses ExecSlotFetch*() functions to get the tuple from the slot, while in
      the upstream, it makes a modifiable copy with ExecMaterializeSlot(). That's
      OK as the code stands, because there's always a "junk filter" that ensures
      that the slot doesn't point directly to an on-disk tuple. But commit
      a77eaa6a will change that, so we have to start being more careful.
      
      This does fix an existing bug, namely that if you UPDATE an AO table with
      OIDs, the OIDs currently change (github issue #3732). Add a test case for
      that.
      
      More detailed breakdown of the changes:
      
      * In ExecInsert, create a writeable copy of the tuple when we're about
        to modify it, so that we don't accidentally modify an existing on-disk
        tuple. By calling ExecMaterializeSlot().
      
      * In ExecInsert, track the OID of the tuple we're about to insert in a
        local variable, when we call the BEFORE ROW triggers, because we don't
        have a "tuple" yet.
      
      * Add ExecMaterializeSlot() function, like in the upstream, because we now
        need it in ExecInsert. Refactor ExecFetchSlotHeapTuple to use
        ExecMaterializeSlot(), like in upstream.
      
      * Cherry-pick bug fix commit 3d02cae3 from upstream. We would get that
        soon anyway as part of the merge, but we'll soon have test failures if
        we don't fix it immediately.
      
      * Change the API of appendonly_insert(), so that it takes the new OID as
        argument, instead of extracting it from the passed-in MemTuple. With this
        change, appendonly_insert() is guaranteed to not modify the passed-in
        MemTuple, so we don't need the equivalent of ExecMaterializeSlot() for
        MemTuples.
      
      * Also change the API of appendonly_insert() so that it returns the new OID
        of the inserted tuple, like heap_insert() does. Most callers ignore the
        return value, so this way they don't need to pass a dummy pointer
        argument.
      
      * Add test case for the case that a BEFORE ROW trigger sets the OID of
        a tuple we're about to insert.
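      The materialize-before-modify idea behind ExecMaterializeSlot() can be
      sketched with a toy slot. This illustrates the pattern only, not the
      executor's actual data structures:

      ```c
      #include <assert.h>
      #include <stdlib.h>
      #include <string.h>

      /* Toy slot: may point directly at "on-disk" storage it does not own. */
      typedef struct
      {
          char *data;
          int   owns_data;
      } Slot;

      /* Give the slot a private, modifiable copy of its tuple, like
       * ExecMaterializeSlot(): a no-op if it already owns the data. */
      static void slot_materialize(Slot *slot)
      {
          if (!slot->owns_data)
          {
              slot->data = strdup(slot->data);
              slot->owns_data = 1;
          }
      }

      int main(void)
      {
          char disk_tuple[] = "abc";        /* pretend this is a buffer page */
          Slot slot = { disk_tuple, 0 };

          slot_materialize(&slot);
          slot.data[0] = 'X';               /* safe: writes the private copy */

          assert(strcmp(disk_tuple, "abc") == 0);  /* on-disk tuple untouched */
          assert(strcmp(slot.data, "Xbc") == 0);
          free(slot.data);
          return 0;
      }
      ```

      Before this refactoring the junk filter guaranteed the slot never
      pointed at an on-disk tuple; once that guarantee goes away, a write
      without the materialize step would scribble on shared buffer data.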
      
      This is based on earlier patches against the 8.4 merge iteration3 branch by
      Jacob and Max.