1. 11 July 2018 (4 commits)
    • Invalidate AppendOnlyHash cache entry at end of TRUNCATE · e5e6f65d
      Jimmy Yih authored
      TRUNCATE rewrites the relation by creating a temporary table and
      swapping it with the real relation. For AO tables this includes the
      auxiliary tables, which matters for the AO relation's pg_aoseg table
      because it records whether an AO segment file is available for write
      or waiting to be compacted/dropped. Since we do not currently
      invalidate the AppendOnlyHash cache entry, the entry could have
      invisible leaks in its AOSegfileStatus array that will be stuck in
      state AOSEG_STATE_AWAITING_DROP. These leaks persist until the user
      evicts the cache entry, either by leaving the table unused so another
      AO table can take its slot or by restarting the database. We fix this
      issue by invalidating the cache entry at the end of TRUNCATE on AO
      relations.
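
      A minimal SQL sketch of the sequence this protects against (the table
      name is hypothetical; the leak itself lives in the shared
      AppendOnlyHash cache and is not directly visible from SQL):

          -- After a VACUUM that skips the segfile drop, a segfile can be
          -- stuck in AOSEG_STATE_AWAITING_DROP in the cache entry.
          CREATE TABLE ao_t (a int, b int)
              WITH (appendonly=true) DISTRIBUTED BY (a);
          INSERT INTO ao_t SELECT i, i FROM generate_series(1, 1000) i;
          DELETE FROM ao_t;
          VACUUM ao_t;    -- may leave a segfile awaiting drop
          TRUNCATE ao_t;  -- with this fix, the cache entry is invalidated here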
      
      Conflicts:
          src/backend/commands/tablecmds.c
           - TRUNCATE is a bit different between Greenplum 5 and 6. Needed
             to move the heap_close() to the end to invalidate the
             AppendOnlyHash entries after dispatch.
           - Macros RelationIsAppendOptimized() and IS_QUERY_DISPATCHER() do
             not exist in 5X_STABLE.
          src/test/isolation2/sql/truncate_after_ao_vacuum_skip_drop.sql
            - Isolation2 utility mode connections use dbid instead of the
              content id used in Greenplum 6.
    • Invalidate AppendOnlyHash cache entry for AT_SetDistributedBy cases · 057f5dbd
      Jimmy Yih authored
      ALTER TABLE commands tagged as AT_SetDistributedBy require a gather
      motion and do their own variation of creating a temporary table for
      CTAS, essentially bypassing the usual ATRewriteTable path, which does
      perform AppendOnlyHash cache entry invalidation. Without that
      invalidation, the entry could have invisible leaks in its
      AOSegfileStatus array that will be stuck in state
      AOSEG_STATE_AWAITING_DROP. These leaks persist until the user evicts
      the cache entry, either by leaving the table unused so another AO
      table can take its slot or by restarting the database. We fix this
      issue by invalidating the cache entry at the end of
      AT_SetDistributedBy ALTER TABLE cases.
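
      A short sketch of the statement shape that takes this code path (the
      table is hypothetical; the syntax is standard Greenplum DDL):

          -- SET DISTRIBUTED BY rewrites the table outside the usual
          -- ATRewriteTable path, so the fix invalidates the AppendOnlyHash
          -- entry at the end of the command.
          CREATE TABLE ao_dist (a int, b int)
              WITH (appendonly=true) DISTRIBUTED BY (a);
          ALTER TABLE ao_dist SET DISTRIBUTED BY (b);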
      
      Conflicts:
          src/backend/commands/tablecmds.c
           - Macro IS_QUERY_DISPATCHER() does not exist in 5X_STABLE.
          src/test/isolation2/sql/reorganize_after_ao_vacuum_skip_drop.sql
            - Isolation2 utility mode connections use dbid instead of the
              content id used in Greenplum 6.
    • docs - PL/Container - added note - domain object not supported. (#5257) · d9cd8d8f
      Mel Kiyama authored
      * docs - PL/Container - added note - domain object not supported.
      
      * docs - PL/Container - updated note for non-support of domain object
  2. 10 July 2018 (2 commits)
  3. 07 July 2018 (2 commits)
  4. 06 July 2018 (3 commits)
    • Fix create gang failure on dns lookup error on down mirrors. · 4eff0222
      Jialun authored
      If a segment exists in gp_segment_configuration but its IP address
      cannot be resolved, we run into a runtime error on gang creation:
      
          ERROR:  could not translate host name "segment-0a", port "40000" to
          address: Name or service not known (cdbutil.c:675)
      
      This happens even if segment-0a is a mirror and is marked as down.
      With this error, queries cannot be executed, and gpstart and gpstop
      will also fail.
      
      One way to trigger the issue:
      
       - create a multi-segment cluster;
       - remove sdw1's dns entry from /etc/hosts on mdw;
       - kill the postgres primary process on sdw1.
      
      FTS can detect this failure and automatically switch to the mirror,
      but queries still cannot be executed.
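
      To see which segments are marked down (and whose hostnames this fix
      skips resolving during gang creation), the catalog can be inspected
      directly; a sketch against the real gp_segment_configuration catalog:

          -- Segments with status 'd' are marked down.
          SELECT dbid, content, role, status, hostname, port
          FROM gp_segment_configuration
          WHERE status = 'd';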
      
      (cherry picked from commit dd861e72)
    • docs - update system catalog maintenance information. (#5179) · c4241e17
      Mel Kiyama authored
      * docs - update system catalog maintenance information.
      
      --Updated Admin. Guide and Best Practices for running REINDEX, VACUUM, and ANALYZE
      --Added note to REINDEX reference about running ANALYZE after REINDEX.
      
      * docs - edits for system catalog maintenance updates
      
      * docs - update recommendation for running vacuum and analyze.
      
      Update based on dev input.
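
      As a worked example of the documented recommendation, a hedged sketch
      of one catalog maintenance pass (pg_catalog.pg_class stands in for
      any system table):

          -- Rebuild the system table's indexes, reclaim space, then run
          -- ANALYZE, per the note about running ANALYZE after REINDEX.
          REINDEX TABLE pg_catalog.pg_class;
          VACUUM pg_catalog.pg_class;
          ANALYZE pg_catalog.pg_class;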
    • c4f63c1e
  5. 03 July 2018 (2 commits)
    • Implement resource group bypass mode (#5223) · d893d8ef
      Jialun authored
      - Introduce a new GUC, gp_resource_group_bypass: when it is on,
        queries in the session are not limited by resource groups
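
        A minimal usage sketch of the new GUC (the queried table is
        hypothetical):

            -- While on, queries in this session are not constrained by
            -- resource group concurrency, CPU, or memory limits.
            SET gp_resource_group_bypass = on;
            SELECT count(*) FROM some_table;
            SET gp_resource_group_bypass = off;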
    • ci: gen pipeline will try and guess git details · 6dfd6291
      Jim Doty authored
      The gen pipeline script outputs a suggested command when setting up a
      dev pipeline. Currently the git remote and git branch have to be
      edited before executing the command. Since the branch has often been
      created and is tracking a remote, it is possible to guess those
      details. The case statements attempt to avoid suggesting the
      production branches, and fall back to the same strings as before.
      Authored-by: Jim Doty <jdoty@pivotal.io>
      (cherry picked from commit ea12d3a1)
  6. 30 June 2018 (6 commits)
    • fix problematic xrefs · e2c4961b
      David Yozie authored
    • Extra docs for the pushdown feature (#5193) · 92c993f2
      Ivan Leskin authored
      * Extra docs for gp_external_enable_filter_pushdown
      
      Add extra documentation for 'gp_external_enable_filter_pushdown' and
      the pushdown feature in the PXF extension.
      
      * Minor doc text fixes
      
      Minor documentation text fixes, proposed by @dyozie.
      
      * Clarify the pushdown support by PXF
      
      Add the following information:
      * List the PXF connectors that support pushdown;
      * State that GPDB PXF extension supports pushdown;
      * Add a list of conditions that need to be fulfilled for the pushdown feature to work when PXF protocol is used.
      
      * Correct the list of PXF connectors with pushdown
      
      * State that the Hive and HBase PXF connectors support filter predicate pushdown;
      * Remove references to the JDBC and Apache Ignite PXF connectors, as proposed by @dyozie (these are not officially supported by Greenplum).
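
      A short sketch of the documented setting in use (the external table
      is hypothetical; the GUC is the one documented here):

          -- With pushdown enabled, supported PXF connectors (e.g. Hive,
          -- HBase) receive the WHERE clause instead of filtering in GPDB.
          SET gp_external_enable_filter_pushdown = on;
          SELECT * FROM pxf_hive_sales WHERE id = 60003;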
    • Update README and remove deprecated options · 95d626b3
      Jim Doty authored
      (cherry picked from commit 12888bf0)
    • Remove irrelevant comments from sql test file · 9a6b5c3a
      Shreedhar Hardikar authored
    • Fix 'no parameter found for initplan subquery' · 50960370
      Shreedhar Hardikar authored
      The issue happens because of constant folding in the testexpr of the
      SUBPLAN expression node. The testexpr may be reduced to a const, and
      any PARAMs previously used in the testexpr disappear; however, the
      subplan still remains.
      
      This behavior is similar in upstream Postgres 10 and may be a
      performance consideration. Leaving that aside for now, the constant
      folding leads to an elog(ERROR) when the plan has subplans and no
      PARAMs are used. The check in `addRemoteExecParamsToParamList()` uses
      `context.params`, which computes the PARAMs used in the plan, and
      `nIntPrm = list_length(root->glob->paramlist)`, which is the number
      of PARAMs declared/created.
      Given the ERROR messages generated, the above check makes no sense,
      especially since it won't even trip for the InitPlan bug (mentioned
      in the comments) as long as there is at least one PARAM in the query.
      
      This commit removes the check since it doesn't correctly capture the
      intent.
      
      In theory, it could be replaced by one specifically aimed at
      InitPlans, that is, find all the param ids used by InitPlans and then
      make sure they are used in the plan. But we already do this and
      remove any unused initplans in `remove_unused_initplans()`. So I
      don't see the point of adding that.
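
      A hedged sketch of the query shape involved, not the exact reproducer
      from #2839 (tables are hypothetical; the point is a subplan whose
      testexpr folds to a constant so that no PARAMs remain in the plan):

          CREATE TABLE t1 (a int) DISTRIBUTED BY (a);
          CREATE TABLE t2 (b int) DISTRIBUTED BY (b);
          -- NULL::int = b can be folded away, but the subplan for the
          -- IN subquery survives with no PARAMs referenced.
          SELECT * FROM t1 WHERE NULL::int IN (SELECT b FROM t2);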
      
      Fixes #2839
    • ci: Modify pxf compile job for sync_tools restructure · 49c634e5
      Trevor Yacovone authored
      This is related to the work we have done to fix the sles11 and windows
      compilation failures.
      Co-authored-by: Lisa Oakley <loakley@pivotal.io>
      Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
  7. 29 June 2018 (2 commits)
  8. 27 June 2018 (1 commit)
  9. 26 June 2018 (2 commits)
  10. 23 June 2018 (3 commits)
    • c9a6e67f
    • Implement filter pushdown for PXF data sources (#4968) · 1beae342
      Ivan Leskin authored
      * Change src/backend/access/external functions to extract and pass query constraints;
      * Add a field with constraints to 'ExtProtocolData';
      * Add 'pxffilters' to gpAux/extensions/pxf and modify the extension to use pushdown.
      
      * Remove duplicate '=' check in PXF
      
      Remove the check for a duplicate '=' in the parameters of an external table. Some databases (MS SQL, for example) may use '=' in the database name or other parameters. Now the PXF extension finds the first '=' in a parameter and treats the whole remaining string as the parameter value.
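
      A hedged illustration of why the duplicate-'=' check had to go (the
      LOCATION string is hypothetical):

          -- The DB_URL value itself contains '='; only the first '=' now
          -- splits the parameter name from its value.
          CREATE EXTERNAL TABLE ext_mssql (id int, name text)
              LOCATION ('pxf://127.0.0.1:51200/sales?PROFILE=Jdbc&DB_URL=jdbc:sqlserver://host;databaseName=db')
              FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');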
      
      * Disable pushdown by default
      * Disallow passing of constraints of type boolean (the decoding fails
        on the PXF side)
      
      * Fix implicit AND expressions addition
      
      Fix the implicit addition of an extra 'BoolExpr' to a list of expression items. Before, there was a check that the expression items list did not contain logical operators (and if it did, no extra implicit AND operators were added). This behaviour is incorrect. Consider the following query:
      
      SELECT * FROM table_ex WHERE bool1=false AND id1=60003;
      
      Such a query is translated into a list of three items: 'BoolExpr', 'Var' and 'OpExpr'.
      Due to the presence of a 'BoolExpr', the extra implicit 'BoolExpr' is not added, and
      we get a "stack is not empty ..." error.
      
      This commit changes the signatures of some internal pxffilters functions to fix this error.
      We pass the number of required extra 'BoolExpr's to 'add_extra_and_expression_items'.
      
      As 'BoolExpr's of different origin may be present in the list of expression items,
      the mechanism of freeing the BoolExpr node changes.
      
      The current mechanism of implicit AND expression addition is suitable only until
      OR operators are introduced (then we will have to add those expressions to different
      parts of the list, not just the end, as done now).
    • docs - create ... external ... temp table (#5180) · f3861bad
      Lisa Owen authored
      * docs - create ... external ... temp table
      
      * update CREATE EXTERNAL TABLE sgml docs
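
      A hedged sketch of the syntax being documented (host and file are
      placeholders):

          -- The external table definition lasts only for the session.
          CREATE EXTERNAL TEMP TABLE ext_tmp (id int, name text)
              LOCATION ('gpfdist://filehost:8081/data.txt')
              FORMAT 'TEXT' (DELIMITER '|');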
  11. 22 June 2018 (2 commits)
    • Bump ORCA version to 2.64.0 · df6f8df6
      Abhijit Subramanya authored
    • Feature/kerberos setup edit (#5159) · 209ce6c3
      Chuck Litzell authored
      * Edits to apply organizational improvements made in the HAWQ version, using consistent realm and domain names, and testing that procedures work.
      
      * Convert tasks to topics to fix formatting. Clean up pg_ident.conf topic.
      
      * Convert another task to topic
      
      * Remove extraneous tag
      
      * Formatting and minor edits
      
      * - added $ or # prompts for all code blocks
      - Reworked the section "Mapping Kerberos Principals to Greenplum Database Roles" to describe, generally, a user's authentication process and to more clearly describe how a principal name is mapped to a gpdb role name.
      
      * - add krb_realm auth param
      
      - add a description of include_realm=1 for completeness
  12. 21 June 2018 (2 commits)
  13. 20 June 2018 (2 commits)
  14. 19 June 2018 (2 commits)
  15. 18 June 2018 (1 commit)
    • docs - gpbackup/gprestore new functionality. (#5157) · c30e4637
      Mel Kiyama authored
      * docs - gpbackup/gprestore new functionality.
      
      --gpbackup has a new option, --jobs, to back up tables in parallel.
      --gprestore --include-table* options support restoring views and sequences.
      
      * docs - gpbackup/gprestore. fixed typos. Updated backup/restore of sequences and views
      
      * docs - gpbackup/gprestore - clarified information on dependent objects.
      
      * docs - gpbackup/gprestore - updated information on locking/quiescent state.
      
      * docs - gpbackup/gprestore - clarify connection in --jobs option.
  16. 16 June 2018 (1 commit)
    • Fix incorrect modification of storageAttributes.compress. · 3ef44a22
      Ashwin Agrawal authored
      For a CO table, storageAttributes.compress only conveys whether block
      compression should be applied or not. RLE is performed as stream
      compression within the block, and hence storageAttributes.compress
      being true or false doesn't relate to RLE at all. So, with rle_type
      compression, storageAttributes.compress is true for compression
      levels > 1, where block compression is performed along with stream
      compression. For compress level = 1, storageAttributes.compress is
      always false, as no block compression is applied. Since RLE doesn't
      relate to storageAttributes.compress, there is no reason to touch it
      based on rle_type compression.
      
      Also, the problem manifests more due to the fact that the datumstream
      layer decides the block type using the AppendOnlyStorageAttributes in
      DatumStreamWrite (`acc->ao_attr.compress`), whereas the cdb storage
      layer functions use the AppendOnlyStorageAttributes from
      AppendOnlyStorageWrite
      (`idesc->ds[i]->ao_write->storageAttributes.compress`). Given this
      difference, changing just one of them, and unnecessarily at that, is
      bound to cause issues during insert.
      
      So, removing the unnecessary and incorrect update to
      AppendOnlyStorageAttributes.
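
      A sketch of the two storage configurations the message distinguishes
      (real WITH-clause options; the table names are made up):

          -- compresslevel=1: RLE stream compression only, so
          -- storageAttributes.compress stays false.
          CREATE TABLE co_rle1 (a int, b int)
              WITH (appendonly=true, orientation=column,
                    compresstype=rle_type, compresslevel=1);
          -- compresslevel>1: block compression on top of RLE, so
          -- storageAttributes.compress is true.
          CREATE TABLE co_rle2 (a int, b int)
              WITH (appendonly=true, orientation=column,
                    compresstype=rle_type, compresslevel=2);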
      
      The test case showcases the failing scenario without the patch.
  17. 14 June 2018 (3 commits)