1. 19 July 2018 (4 commits)
    • Narrow race condition between pg_partitions view and altering partitions. (#5050) · eacb080e
      Committed by Yandong Yao
      We have lately seen a lot of failures in test cases related to
      partitioning, with errors like this:
      
      select tablename, partitionlevel, partitiontablename, partitionname, partitionrank, partitionboundary from pg_partitions where tablename = 'mpp3079a';
      ERROR:  cache lookup failed for relation 148532 (ruleutils.c:7172)
      
      The culprit is that the view passes a relation OID to the
      pg_get_partition_rule_def() function, and the function tries to perform a
      syscache lookup on the relation (in flatten_reloptions()), but the lookup
      fails because the relation was dropped concurrently by another transaction.
      This race is possible, because the query runs with an MVCC snapshot, but
      the syscache lookups use SnapshotNow.
      
      This commit doesn't eliminate the race completely, but at least it makes it
      narrower. A more reliable solution would've been to acquire a lock on the
      table, but then that might block, which isn't nice either.
      
      Another solution would've been to modify flatten_reloptions() to return
      NULL instead of erroring out, if the lookup fails. That approach is taken
      on the other lookups, but I'm reluctant to modify flatten_reloptions()
      because it's inherited from upstream. Let's see how well this works in
      practice first, before we take more drastic measures.
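      
      For illustration, a minimal two-session sketch of the window (it is
      timing-dependent, so it may not reproduce reliably):
      
          -- session 1: query the view under an MVCC snapshot
          SELECT tablename, partitiontablename, partitionboundary
          FROM pg_partitions WHERE tablename = 'mpp3079a';
      
          -- session 2: concurrently drop the partitioned table
          DROP TABLE mpp3079a;
      
          -- before this commit, session 1 could fail with:
          -- ERROR:  cache lookup failed for relation ... (ruleutils.c:...)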
    • docs - update gpbackup API - add segment instance and update backup d… (#5285) · b31d53a3
      Committed by Mel Kiyama
      * docs - update gpbackup API - add segment instance and update backup directory information.
      
      Also update API version to 0.3.0.
      
      This will be ported to 5X_STABLE
      
      * docs - gpbackup API - review updates and fixes for scope information
      
      Also, cleanup edits.
      
      * docs - gpbackup API - more review updates and fixes to scope information.
    • docs - add kafka connector xrefs (#5292) · 6f3a8fb3
      Committed by Lisa Owen
2. 18 July 2018 (4 commits)
3. 17 July 2018 (2 commits)
4. 14 July 2018 (4 commits)
5. 13 July 2018 (6 commits)
    • Fix resource group bypass test case · 2637bb3e
      Committed by Jialun Du
      - Remove async session before CREATE FUNCTION
      - Change comment format from -- to /* */
      
      (cherry picked from commit 1cc742247cc4de314941624cfac9e8676bc71100)
    • Fix resgroup bypass quota (#5262) · 584acc63
      Committed by Jialun
      * Add resource group bypass memory limit.
      
      - A bypassed query allocates all its memory in the group / global
        shared memory, and is still enforced by the group's memory limit;
      - A bypassed query also has a memory limit of 10 chunks per process.
      
      * Add test cases for the resgroup bypass memory limit.
      
      * Provide ORCA answer file.
      
      * Adjust memory limit on QD.
      
       (cherry picked from commit dc82ceea)
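      
      As an illustrative sketch of the new behavior (the query is
      hypothetical; "chunks" are the resource group memory accounting
      unit):
      
          SET gp_resource_group_bypass = on;
          -- a bypassed query still draws from the group / global shared
          -- memory and is capped at roughly 10 chunks per process, so a
          -- memory-hungry query is expected to hit an out-of-memory error
          -- rather than escape accounting entirely:
          SELECT array_agg(g) FROM generate_series(1, 10000000) g;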
    • docs - gpbackup S3 plugin support for S3 compatible data stores. (#5233) · 7d1eb3c0
      Committed by Mel Kiyama
      * docs - gpbackup S3 plugin - support for S3 compatible data stores.
      
      Link to HTML format on GPDB docs review site.
      http://docs-gpdb-review-staging.cfapps.io/review/admin_guide/managing/backup-s3-plugin.html
      
      * docs - gpbackup S3 plugin - review comment updates
      
      * docs - gpbackup S3 plugin - Add OSS/Pivotal support information.
      
      * docs - gpbackup S3 plugin - fix typos.
      
      * docs - gpbackup S3 plugin - updated information about S3 compatible data stores.
    • docs - gpbackup API - update for scope argument. (#5267) · ebbba229
      Committed by Mel Kiyama
      * docs - gpbackup API - update for scope argument.
      
      This will be ported to 5X_STABLE
      
      * docs - gpbackup API - correct API scope description based on review comments.
      
      * docs - gpbackup API - edit scope description based on review comments.
      
      --Updated API version
    • Docs: Edit pxf filter pushdown docs (#5219) · 052251a7
      Committed by David Yozie
      * fix problematic xrefs
      
      * Consistency edits for list items
      
      * edit, relocate filter pushdown section
      
      * minor edits to guc description
      
      * remove note about non-support in Hive doc
      
      * Edits from Lisa's review
      
      * Adding note about experimental status of HBase connector pushdown
      
      * Adding note about experimental status of Hive connector pushdown
      
      * Revert "Adding note about experimental status of Hive connector pushdown"
      
      This reverts commit 43dfe51526e19983835f7cbd25d540d3c0dec4ba.
      
      * Revert "Adding note about experimental status of HBase connector pushdown"
      
      This reverts commit 3b143de058c7403c2bc141c11c61bf227c2abf3a.
      
      * restoring HBase, Hive pushdown support
      
      * slight wording change
      
      * adding xref
    • Shared buffer should not be accessed after it is unpinned · 673bcf22
      Committed by Asim R P
      We seemed to be doing that in this case.  This was caught by enabling
      the memory_protect_buffer_pool GUC.
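      
      For reference, a sketch of enabling the guard that caught this (a
      debug setting; its exact scope and settability may vary):
      
          # postgresql.conf, debug/test environments only
          memory_protect_buffer_pool = on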
6. 12 July 2018 (4 commits)
7. 11 July 2018 (5 commits)
    • Fix duplicate distributed keys for CTAS · c40a6690
      Committed by Pengzhou Tang
      To keep it consistent with the "CREATE TABLE" syntax, CTAS should also
      disallow duplicate distributed keys; otherwise backup and restore will
      break.
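      
      For illustration (table names hypothetical), both forms should now be
      rejected:
      
          -- already rejected by CREATE TABLE:
          CREATE TABLE t1 (a int, b int) DISTRIBUTED BY (a, a);
      
          -- with this fix, CTAS is rejected the same way instead of
          -- accepting the duplicate key:
          CREATE TABLE t2 AS SELECT * FROM t1 DISTRIBUTED BY (a, a);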
    • Invalidate AppendOnlyHash cache entry at end of TRUNCATE · e5e6f65d
      Committed by Jimmy Yih
      TRUNCATE will rewrite the relation by creating a temporary table and
      swapping it with the real relation. For AO, this includes the
      auxiliary tables, which is concerning for the AO relation's pg_aoseg
      table, which records whether an AO segment file is available for
      write or waiting to be compacted/dropped. Since we do not currently
      invalidate the AppendOnlyHash cache entry, the entry could have
      invisible leaks in its AOSegfileStatus array that get stuck in state
      AOSEG_STATE_AWAITING_DROP. These leaks persist until the user evicts
      the cache entry, either by not using the table (so another AO table
      can cache itself in that slot) or by restarting the database. We fix
      this issue by invalidating the cache entry at the end of TRUNCATE on
      AO relations.
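      
      A rough sketch of the sequence involved (table name hypothetical; the
      real reproduction is the isolation2 test referenced below, which uses
      concurrent sessions to leave a segfile in AOSEG_STATE_AWAITING_DROP):
      
          CREATE TABLE ao_t (a int) WITH (appendonly=true);
          INSERT INTO ao_t SELECT generate_series(1, 1000);
          -- a VACUUM whose segfile drop is skipped can leave that segfile
          -- in AOSEG_STATE_AWAITING_DROP in the cached entry
          VACUUM ao_t;
          -- TRUNCATE swaps in fresh relation and aoseg files; this fix
          -- also invalidates the cached entry so the stale state is gone
          TRUNCATE ao_t;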
      
      Conflicts:
          src/backend/commands/tablecmds.c
           - TRUNCATE is a bit different between Greenplum 5 and 6. Needed
             to move the heap_close() to the end to invalidate the
             AppendOnlyHash entries after dispatch.
           - Macros RelationIsAppendOptimized() and IS_QUERY_DISPATCHER() do
             not exist in 5X_STABLE.
          src/test/isolation2/sql/truncate_after_ao_vacuum_skip_drop.sql
           - Isolation2 utility mode connections use dbid instead of content
             id like in Greenplum 6.
    • Invalidate AppendOnlyHash cache entry for AT_SetDistributedBy cases · 057f5dbd
      Committed by Jimmy Yih
      ALTER TABLE commands that are tagged as AT_SetDistributedBy require a
      gather motion and do their own variation of creating a temporary
      table for CTAS, basically bypassing the usual ATRewriteTable, which
      does perform AppendOnlyHash cache entry invalidation. Without that
      invalidation, the entry could have invisible leaks in its
      AOSegfileStatus array that get stuck in state
      AOSEG_STATE_AWAITING_DROP. These leaks persist until the user evicts
      the cache entry, either by not using the table (so another AO table
      can cache itself in that slot) or by restarting the database. We fix
      this issue by invalidating the cache entry at the end of
      AT_SetDistributedBy ALTER TABLE cases.
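      
      For illustration (table name hypothetical), the affected command has
      this form:
      
          CREATE TABLE ao_t2 (a int, b int) WITH (appendonly=true)
          DISTRIBUTED BY (a);
          -- rewrites the table through its own CTAS-like path, bypassing
          -- ATRewriteTable; this fix adds the missing invalidation there:
          ALTER TABLE ao_t2 SET DISTRIBUTED BY (b);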
      
      Conflicts:
          src/backend/commands/tablecmds.c
           - Macro IS_QUERY_DISPATCHER() does not exist in 5X_STABLE.
          src/test/isolation2/sql/reorganize_after_ao_vacuum_skip_drop.sql
           - Isolation2 utility mode connections use dbid instead of content
             id like in Greenplum 6.
    • docs - PL/Container - added note - domain object not supported. (#5257) · d9cd8d8f
      Committed by Mel Kiyama
      * docs - PL/Container - added note - domain object not supported.
      
      * docs - PL/Container - updated note for non-support of domain object
8. 10 July 2018 (2 commits)
9. 07 July 2018 (2 commits)
10. 06 July 2018 (3 commits)
    • Fix create gang failure on dns lookup error on down mirrors. · 4eff0222
      Committed by Jialun
      If a segment exists in gp_segment_configuration but its IP address
      cannot be resolved, we run into a runtime error on gang creation:
      
          ERROR:  could not translate host name "segment-0a", port "40000" to
          address: Name or service not known (cdbutil.c:675)
      
      This happens even if segment-0a is a mirror and is marked as down.
      With this error, queries cannot be executed, and gpstart and gpstop
      will also fail.
      
      One way to trigger the issue:
      
      - create a multi-segment cluster;
      - remove sdw1's dns entry from /etc/hosts on mdw;
      - kill postgres primary process on sdw1;
      
      FTS can detect this error and automatically switch to the mirror, but
      queries still cannot be executed.
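      
      For reference, a sketch of inspecting the down segment that gang
      creation should now skip:
      
          SELECT dbid, content, role, status, hostname, port
          FROM gp_segment_configuration
          WHERE status = 'd';
          -- with this fix, an unresolvable hostname on a segment already
          -- marked down ('d') no longer aborts gang creation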
      
      (cherry picked from commit dd861e72)
    • docs - update system catalog maintenance information. (#5179) · c4241e17
      Committed by Mel Kiyama
      * docs - update system catalog maintenance information.
      
      --Updated Admin. Guide and Best Practices for running REINDEX, VACUUM, and ANALYZE
      --Added note to REINDEX reference about running ANALYZE after REINDEX.
      
      * docs - edits for system catalog maintenance updates
      
      * docs - update recommendation for running vacuum and analyze.
      
      Update based on dev input.
    • c4f63c1e
11. 03 July 2018 (2 commits)
    • Implement resource group bypass mode (#5223) · d893d8ef
      Committed by Jialun
      - Introduce a new GUC, gp_resource_group_bypass: when it is on,
        queries in this session are not limited by resource groups.
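      
      For illustration, a minimal usage sketch (the bypassed query is still
      subject to the memory quota added in commit 584acc63):
      
          SET gp_resource_group_bypass = on;
          SELECT 1;   -- runs without waiting for a resource group slot
          SET gp_resource_group_bypass = off;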
    • ci: gen pipeline will try and guess git details · 6dfd6291
      Committed by Jim Doty
      The gen pipeline script outputs a suggested command when setting up a
      dev pipeline. Currently the git remote and git branch have to be
      edited before executing the command. Since oftentimes the branch has
      already been created and is tracking a remote, it is possible to
      guess those details. The case statements attempt to avoid suggesting
      the production branches, and fall back to the same string as before.
      Authored-by: Jim Doty <jdoty@pivotal.io>
      (cherry picked from commit ea12d3a1)
12. 30 June 2018 (2 commits)
    • fix problematic xrefs · e2c4961b
      Committed by David Yozie
    • Extra docs for the pushdown feature (#5193) · 92c993f2
      Committed by Ivan Leskin
      * Extra docs for gp_external_enable_filter_pushdown
      
      Add extra documentation for 'gp_external_enable_filter_pushdown' and the pushdown feature in PXF extension.
      
      * Minor doc text fixes
      
      Minor documentation text fixes, proposed by @dyozie.
      
      * Clarify the pushdown support by PXF
      
      Add the following information:
      * List the PXF connectors that support pushdown;
      * State that the GPDB PXF extension supports pushdown;
      * Add a list of conditions that must be fulfilled for pushdown to
        work when the PXF protocol is used.
      
      * Correct the list of PXF connectors with pushdown
      
      * State that Hive and HBase PXF connectors support filter predicate pushdown;
      * Remove references to JDBC and Apache Ignite PXF connectors, as proposed by @dyozie (these are not officially supported by Greenplum).
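      
      For reference, a sketch of the setting these docs describe (the
      external table name is hypothetical; whether a filter is actually
      pushed down depends on the connector and the conditions listed in
      the docs):
      
          SET gp_external_enable_filter_pushdown = on;
          -- the WHERE clause may now be pushed to the PXF connector
          SELECT * FROM pxf_hive_sales WHERE amount > 100;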