1. 03 Aug 2018, 2 commits
    • J
      Correct test of pg_get_expr · b3c02f3a
      Committed by Joao Pereira
      The error expected in this test was wrong: the expected error should
      have been that the type could not be found, rather than the one
      present. This correction was made in the previous commit.
    • J
      Mismerge of commit from upstream · 54a2bda7
      Committed by Joao Pereira
      The commit 8ab6a6b4 from upstream removed the function
      check_pg_get_expr_args and all the calls to it.
      Nevertheless, the merge brought it back, and with it the
      issue corrected in commit f223bb7a.
      Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
      (cherry picked from commit 39560a95615771768b7381842fffe4af9b4284b6)
  2. 02 Aug 2018, 3 commits
  3. 25 Jul 2018, 1 commit
    • E
      Save git root, submodule and changelog information. · a075db42
      Committed by Ed Espino
      The following information will be saved in
      ${GREENPLUM_INSTALL_DIR}/etc/git-info.json:
      
      * Root repo (uri, sha1)
      * Submodules (submodule source path, sha1, tag)
      
      Save git commits since last release tag into
      ${GREENPLUM_INSTALL_DIR}/etc/git-current-changelog.txt
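      As a sketch, the saved JSON could plausibly take a shape like the
      following (the field names and values are illustrative assumptions,
      not taken from the commit):

```json
{
  "root": {
    "uri": "https://github.com/greenplum-db/gpdb.git",
    "sha1": "a075db42"
  },
  "submodules": [
    {
      "path": "gpMgmt/bin/pythonSrc/ext",
      "sha1": "0123abcd",
      "tag": "v1.0.0"
    }
  ]
}
```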
  4. 24 Jul 2018, 2 commits
    • D
      Fix race condition in autovacuum test · 6a71ff04
      Committed by David Kimura
      If autovacuum was triggered before ShmemVariableCache->latestCompletedXid was
      updated by manually consuming xids, then autovacuum might not vacuum template0
      with a proper transaction id to compare against. We made the test more reliable
      by suspending a new fault injector (auto_vac_worker_before_do_autovacuum) right
      before the autovacuum worker sets recentXid and starts doing the autovacuum.
      This allows us to guarantee that autovacuum is comparing against a proper xid.
      
      We also removed the loop in the test because the vacuum_update_dat_frozen_xid
      fault injector ensures the pg_database table has been updated.
      Co-authored-by: Jimmy Yih <jyih@pivotal.io>
    • L
      docs - remove one copy of duplicate gphdfs hdfs kerberos content (#5311) · 85c3fd97
      Committed by Lisa Owen
      * docs - remove duplicate gphdfs/kerberos topic in best practices
      
      * remove unused file
  5. 23 Jul 2018, 2 commits
    • H
      support fast_match option in gpload config file (#5317) · 9f83fee5
      Committed by Huiliang.liu
      - Add a fast_match option to the gpload config file. If both reuse_tables
      and fast_match are true, gpload will try to fast-match an existing external
      table (without checking columns). If reuse_tables is false and
      fast_match is true, gpload will print a warning message.
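      As a sketch, the new option would sit alongside REUSE_TABLES in the
      control file's PRELOAD section (the surrounding values below are
      placeholders for illustration, not from the commit):

```yaml
VERSION: 1.0.0.1
DATABASE: testdb
USER: gpadmin
GPLOAD:
  INPUT:
    - SOURCE:
        FILE:
          - /data/input.dat
    - FORMAT: text
  PRELOAD:
    - REUSE_TABLES: true
    - FAST_MATCH: true    # reuse external tables without checking columns
  OUTPUT:
    - TABLE: public.target
    - MODE: insert
```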
    • H
      gpperfmon: fix long query text failing to load into queries_history · c8a03d20
      Committed by Hao Wang
      1. When harvesting, raise gp_max_csv_line_length to the
      maximum legal value at session level.
      2. For queries longer than gp_max_csv_line_length, this workaround
      replaces line breaks in the query text with spaces to prevent load
      failures. Long query statements may be altered when loaded into the
      history table, but that is still better than failing to load or
      truncating the query text.
      
      Co-authored-by: Teng Zhang <tezhang@pivotal.io>
      Co-authored-by: Hao Wang <haowang@pivotal.io>
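      A minimal Python sketch of the line-break workaround described above
      (the function name is hypothetical; the real change lives in the
      gpperfmon harvester, not in Python):

```python
def sanitize_query_text(query_text):
    """Collapse line breaks so the query occupies one physical CSV line.

    The statement text is altered (newlines become spaces), but the row
    can then be loaded into queries_history instead of failing the load.
    """
    return " ".join(query_text.splitlines())

print(sanitize_query_text("SELECT a,\n       b\nFROM t"))
```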
  6. 21 Jul 2018, 1 commit
  7. 20 Jul 2018, 3 commits
    • D
      Remove MyProc inDropTransaction flag (#5301) · 4c3be783
      Committed by David Kimura
      The MyProc inDropTransaction flag was used to make sure concurrent AO vacuums
      would not conflict with each other during the drop phase. Two concurrent AO
      vacuums on the same relation were possible back in 4.3, where the different AO
      vacuum phases (prepare, compaction, drop, cleanup) could interleave with each
      other, and having two AO vacuum drop phases running concurrently on the same
      AO relation was dangerous. We now hold the ShareUpdateExclusiveLock through
      the entire AO vacuum, which renders the inDropTransaction flag useless and
      disallows the interleaving mechanism.
      Co-authored-by: Jimmy Yih <jyih@pivotal.io>
    • L
      Ping utility can optionally survive DNS failure · 75da6523
      Committed by Larry Hamel
      - Previously, DNS was queried within the `Ping` utility constructor, so a DNS failure would always raise an exception.
      - Now the DNS query happens in the standard `run()` method, so a DNS failure raises an exception only when the `validateAfter` parameter asks for validation.
      - `Command` is declared as a new-style class so that `super(Ping, self).run()` can be called.
      Co-authored-by: Larry Hamel <lhamel@pivotal.io>
      Co-authored-by: Jemish Patel <jpatel@pivotal.io>
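      The change above can be sketched as follows (a simplified, hypothetical
      stand-in for gpMgmt's command framework, not the real classes):

```python
import socket

class Command(object):  # new-style class, so super() works
    def __init__(self, name):
        self.name = name

    def run(self, validateAfter=False):
        pass

class Ping(Command):
    def __init__(self, host):
        # No DNS lookup here: constructing a Ping can no longer fail on DNS.
        super(Ping, self).__init__("ping")
        self.host = host

    def run(self, validateAfter=False):
        super(Ping, self).run(validateAfter)
        try:
            socket.gethostbyname(self.host)  # DNS query deferred to run()
        except socket.gaierror:
            if validateAfter:
                raise  # caller asked for validation, so surface the failure
            return False
        return True
```

      With this shape, `Ping("bad-host.invalid")` constructs cleanly; the DNS
      failure surfaces only from `run(validateAfter=True)`.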
    • H
      Increase sleep between launching gpfdist and running test queries. · 504ad08b
      Committed by Heikki Linnakangas
      We've seen a lot of failures in the 'sreh' test in the pipeline, like this:
      
      --- 263,269 ----
        FORMAT 'text' (delimiter '|')
        SEGMENT REJECT LIMIT 10000;
        SELECT * FROM sreh_ext;
      ! ERROR:  connection failed dummy_protocol://DUMMY_LOCATION
        INSERT INTO sreh_target SELECT * FROM sreh_ext;
        NOTICE:  Found 10 data formatting errors (10 or more input rows). Rejected related input data.
        SELECT count(*) FROM sreh_target;
      
      I don't really know, but I'm guessing it could be because it sometimes
      takes more than one second for gpfdist to fully start up, if there's a lot
      of disk or other activity. Increase the sleep time from 1 to 3 seconds;
      we'll see if that helps.
      
      (cherry picked from commit bb8575a9)
  8. 19 Jul 2018, 4 commits
    • Y
      Narrow race condition between pg_partitions view and altering partitions. (#5050) · eacb080e
      Committed by Yandong Yao
      We have lately seen a lot of failures in test cases related to
      partitioning, with errors like this:
      
      select tablename, partitionlevel, partitiontablename, partitionname, partitionrank, partitionboundary from pg_partitions where tablename = 'mpp3079a';
      ERROR:  cache lookup failed for relation 148532 (ruleutils.c:7172)
      
      The culprit is that the view passes a relation OID to the
      pg_get_partition_rule_def() function, and the function tries to perform a
      syscache lookup on the relation (in flatten_reloptions()), but the lookup
      fails because the relation was dropped concurrently by another transaction.
      This race is possible because the query runs with an MVCC snapshot, while
      the syscache lookups use SnapshotNow.
      
      This commit doesn't eliminate the race completely, but at least it makes it
      narrower. A more reliable solution would've been to acquire a lock on the
      table, but then that might block, which isn't nice either.
      
      Another solution would've been to modify flatten_reloptions() to return
      NULL instead of erroring out, if the lookup fails. That approach is taken
      on the other lookups, but I'm reluctant to modify flatten_reloptions()
      because it's inherited from upstream. Let's see how well this works in
      practice first, before we do more drastic measures.
    • M
      docs - update gpbackup API - add segment instance and update backup d… (#5285) · b31d53a3
      Committed by Mel Kiyama
      * docs - update gpbackup API - add segment instance and update backup directory information.
      
      Also update API version to 0.3.0.
      
      This will be ported to 5X_STABLE
      
      * docs - gpbackup API - review updates and fixes for scope information
      
      Also, cleanup edits.
      
      * docs - gpbackup API - more review updates and fixes to scope information.
    • L
      docs - add kafka connector xrefs (#5292) · 6f3a8fb3
      Committed by Lisa Owen
  9. 18 Jul 2018, 4 commits
  10. 17 Jul 2018, 2 commits
  11. 14 Jul 2018, 4 commits
  12. 13 Jul 2018, 6 commits
    • J
      Fix resource group bypass test case · 2637bb3e
      Committed by Jialun Du
      - Remove async session before CREATE FUNCTION
      - Change comment format from -- to /* */
      
      (cherry picked from commit 1cc742247cc4de314941624cfac9e8676bc71100)
    • J
      Fix resgroup bypass quota (#5262) · 584acc63
      Committed by Jialun
      * Add resource group bypass memory limit.
      
      - A bypassed query allocates all its memory from the group / global shared
        memory, and is also enforced by the group's memory limit;
      - A bypassed query also has a memory limit of 10 chunks per process;
      
      * Add test cases for the resgroup bypass memory limit.
      
      * Provide ORCA answer file.
      
      * Adjust memory limit on QD.
      
      (cherry picked from commit dc82ceea)
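      A toy sketch of the two checks described above (function and variable
      names are illustrative; the real accounting lives in the resource group
      C code):

```python
BYPASS_CHUNKS_PER_PROCESS = 10  # per-process cap for bypassed queries

def bypass_can_allocate(proc_used, group_free, request):
    """Return True if a bypassed query's process may allocate `request` chunks."""
    if proc_used + request > BYPASS_CHUNKS_PER_PROCESS:
        return False  # exceeds the 10-chunk per-process bypass limit
    if request > group_free:
        return False  # group / global shared memory is exhausted
    return True
```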
    • M
      docs - gpbackup S3 plugin support for S3 compatible data stores. (#5233) · 7d1eb3c0
      Committed by Mel Kiyama
      * docs - gpbackup S3 plugin - support for S3 compatible data stores.
      
      Link to HTML format on GPDB docs review site.
      http://docs-gpdb-review-staging.cfapps.io/review/admin_guide/managing/backup-s3-plugin.html
      
      * docs - gpbackup S3 plugin - review comment updates
      
      * docs - gpbackup S3 plugin - Add OSS/Pivotal support information.
      
      * docs - gpbackup S3 plugin - fix typos.
      
      * docs - gpbackup S3 plugin - updated information about S3 compatible data stores.
    • M
      docs - gpbackup API - update for scope argument. (#5267) · ebbba229
      Committed by Mel Kiyama
      * docs - gpbackup API - update for scope argument.
      
      This will be ported to 5X_STABLE
      
      * docs - gpbackup API - correct API scope description based on review comments.
      
      * docs - gpbackup API - edit scope description based on review comments.
      
      --Updated API version
    • D
      Docs: Edit pxf filter pushdown docs (#5219) · 052251a7
      Committed by David Yozie
      * fix problematic xrefs
      
      * Consistency edits for list items
      
      * edit, relocate filter pushdown section
      
      * minor edits to guc description
      
      * remove note about non-support in Hive doc
      
      * Edits from Lisa's review
      
      * Adding note about experimental status of HBase connector pushdown
      
      * Adding note about experimental status of Hive connector pushdown
      
      * Revert "Adding note about experimental status of Hive connector pushdown"
      
      This reverts commit 43dfe51526e19983835f7cbd25d540d3c0dec4ba.
      
      * Revert "Adding note about experimental status of HBase connector pushdown"
      
      This reverts commit 3b143de058c7403c2bc141c11c61bf227c2abf3a.
      
      * restoring HBase, Hive pushdown support
      
      * slight wording change
      
      * adding xref
    • A
      Shared buffer should not be accessed after it is unpinned · 673bcf22
      Committed by Asim R P
      We seemed to be doing exactly that in this case. This was caught by
      enabling the memory_protect_buffer_pool GUC.
  13. 12 Jul 2018, 4 commits
  14. 11 Jul 2018, 2 commits
    • P
      Fix duplicate distributed keys for CTAS · c40a6690
      Committed by Pengzhou Tang
      To keep it consistent with the CREATE TABLE syntax, CTAS should also
      disallow duplicate distribution keys; otherwise backup and restore
      will break.
    • J
      Invalidate AppendOnlyHash cache entry at end of TRUNCATE · e5e6f65d
      Committed by Jimmy Yih
      TRUNCATE rewrites the relation by creating a temporary table and
      swapping it with the real relation. For AO, this includes the
      auxiliary tables, which matters for the AO relation's pg_aoseg
      table, since it records whether an AO segment file is available for
      write or is waiting to be compacted/dropped. Since we do not currently
      invalidate the AppendOnlyHash cache entry, the entry could have
      invisible leaks in its AOSegfileStatus array that get stuck in
      state AOSEG_STATE_AWAITING_DROP. These leaks persist until the
      user evicts the cache entry, either by not using the table (so another
      AO table caches itself in that slot) or by restarting the database. We
      fix this by invalidating the cache entry at the end of TRUNCATE
      on AO relations.
      
      Conflicts:
          src/backend/commands/tablecmds.c
           - TRUNCATE is a bit different between Greenplum 5 and 6. Needed
             to move the heap_close() to the end to invalidate the
             AppendOnlyHash entries after dispatch.
           - Macros RelationIsAppendOptimized() and IS_QUERY_DISPATCHER() do
             not exist in 5X_STABLE.
          src/test/isolation2/sql/truncate_after_ao_vacuum_skip_drop.sql
           - Isolation2 utility mode connections use dbid instead of content
             id like in Greenplum 6.
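      In spirit, the fix amounts to the following (a Python stand-in for the
      shared C cache; all names here are illustrative, not the real
      structures):

```python
AOSEG_STATE_AWAITING_DROP = "AWAITING_DROP"

# Stand-in for the shared AppendOnlyHash: relid -> per-segfile states.
append_only_hash = {}

def truncate_ao_relation(relid):
    # ... the relation is rewritten by swapping in an empty relfilenode ...
    # At the end of TRUNCATE, invalidate the cache entry so stale
    # AOSEG_STATE_AWAITING_DROP statuses cannot leak; the next access
    # rebuilds the entry from the (now empty) pg_aoseg table.
    append_only_hash.pop(relid, None)
```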