1. 27 Jun 2018, 7 commits
    • Remove beginLoc from rm_desc routines signature. · 14b79227
      Committed by Ashwin Agrawal
      Upstream doesn't have it and it is no longer used in Greenplum, so lose it.
    • UnpackCheckPointRecord() should happen on QE similar to QD. · d9122f25
      Committed by Ashwin Agrawal
      Now that WAL replication is enabled for both QD and QE, this code must be enabled as well.
    • ProcDie: Reply only after syncing to mirror for commit-prepared. · 29b78ef4
      Committed by Ashwin Agrawal
      In upstream, and for the Greenplum master, if a procdie is received while waiting for
      replication, only a WARNING is issued and the transaction moves forward without
      waiting for the mirror. But that would cause inconsistency for a QE if a failover
      happens to such a mirror that is missing the commit-prepared record.
      
      If only the prepare has been performed and the primary has not yet processed the
      commit-prepared, the gxact is present in memory. Once commit-prepared processing is
      complete on the primary, the gxact is removed from memory. If the gxact is found, we
      flow through the regular commit-prepared path, emit the xlog record and sync it to the
      mirror. But if the gxact is not found on the primary, we used to blindly return success
      to the QD. Hence, the code is modified to always call `SyncRepWaitForLSN()` before
      replying to the QD in case the gxact is not found on the primary.
      
      It calls `SyncRepWaitForLSN()` with the `flush` LSN value from
      `xlogctl->LogwrtResult`, as there is no way to find out the actual LSN of the
      commit-prepared record on the primary. Use of that LSN is based on the following
      assumptions:
      - WAL is always written serially forward
      - If a synchronous mirror has xlog record xyz, it must have all xlog records before xyz
      - Not finding a gxact entry in memory on the primary for a commit-prepared retry
        from the QD means it was definitely committed (completed) on the primary
      
      Since the commit-prepared retry can be received when everything is done on one
      segment but failed on some other segment, under concurrency we may call
      `SyncRepWaitForLSN()` with the same LSN value multiple times, given that we use the
      latest flush point. Hence, in GPDB the check in `SyncRepQueueIsOrderedByLSN()` does not
      validate that entries are unique; it only validates that the queue is sorted, which is
      what is required for correctness. Without this, ICW tests can hit the assertion
      "!(SyncRepQueueIsOrderedByLSN(mode))". A minimal sketch of the wait logic follows
      below.
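
      A minimal C sketch of that wait, not the actual Greenplum patch: the wrapper and the
      gxact lookup helper below are hypothetical names, `GetFlushRecPtr()` stands in for
      reading the `flush` value out of `xlogctl->LogwrtResult`, and the 9.1-era
      single-argument `SyncRepWaitForLSN()` is assumed.

      ```c
      #include "postgres.h"
      #include "access/xlog.h"          /* GetFlushRecPtr() */
      #include "replication/syncrep.h"  /* SyncRepWaitForLSN() */

      /* Hypothetical helper: true if the gxact for this gid is still in memory. */
      extern bool CommitPreparedGxactExists(const char *gid);

      /*
       * Sketch: on a commit-prepared retry, if the gxact is gone the record was
       * already written locally, so wait for the mirror to confirm the latest
       * flushed WAL position before acknowledging the QD.
       */
      static void
      WaitForMirrorBeforeAckToQD(const char *gid)
      {
          if (!CommitPreparedGxactExists(gid))
          {
              /*
               * The commit-prepared record's LSN is unknown here; the latest flush
               * point is a safe upper bound because WAL is written serially forward.
               */
              SyncRepWaitForLSN(GetFlushRecPtr());
          }

          /* ...only now reply success to the QD... */
      }
      ```
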
    • Add gphdfs-mapr certification job (#5184) · 982832af
      Committed by Shivram Mani
      Added a new test job to the pipeline to certify GPHDFS with the MAPR Hadoop distribution, and renamed the existing GPHDFS certification job to state that it tests with generic Hadoop. The MAPR cluster consists of 1 node deployed by CCP scripts into GCE.
      
      - MAPR 5.2
      - Parquet 1.8.1
      Co-authored-by: Alexander Denissov <adenissov@pivotal.io>
      Co-authored-by: Shivram Mani <smani@pivotal.io>
      Co-authored-by: Francisco Guerrero <aguerrero@pivotal.io>
    • Fix validation of SEQUENCE filepath · a9e7ecbf
      Committed by Jimmy Yih
      A check was added during the 9.1 merge to verify that the sequence filepath to be
      created would not collide with an existing file. However, the filepath that is
      constructed does not use the sequence OID value that was just generated; it uses
      whatever value happens to be in that piece of memory at the time. This usually let
      the check pass, especially in our CI testing, but occasionally a sequence would fail
      to be created because the random filepath already existed. Fix the issue by storing
      the generated OID in the RelFileNode variable that is passed into the filepath
      construction, as sketched below.
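
      A C sketch of the idea only, not the actual Greenplum code: the function and the
      collision check are hypothetical, and the header locations and `relpath()` signature
      are assumed to match the 9.1-era tree.

      ```c
      #include "postgres.h"
      #include "catalog/catalog.h"      /* relpath() */
      #include "storage/relfilenode.h"  /* RelFileNode, MAIN_FORKNUM */
      #include <sys/stat.h>

      /*
       * Sketch: build the path from the OID that was just generated for the
       * sequence, instead of leaving relNode unset and checking a path derived
       * from whatever garbage was in that memory.
       */
      static void
      check_sequence_path_is_free(Oid tablespaceid, Oid dbid, Oid newseqoid)
      {
          RelFileNode rnode;
          struct stat st;
          char       *path;

          rnode.spcNode = tablespaceid;
          rnode.dbNode  = dbid;
          rnode.relNode = newseqoid;    /* the fix: store the generated OID here */

          path = relpath(rnode, MAIN_FORKNUM);
          if (stat(path, &st) == 0)
              elog(ERROR, "file \"%s\" for new sequence already exists", path);

          pfree(path);
      }
      ```
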
    • 7261814c
  2. 26 Jun 2018, 6 commits
  3. 25 Jun 2018, 2 commits
  4. 23 Jun 2018, 4 commits
    • Remove QP_memory_accounting job from pipeline · 69c3727e
      Committed by Sambitesh Dash
      QP_memory_accounting tests have been moved to the isolation2 test suite, so we no
      longer need this job in the pipeline.
    • Implement filter pushdown for PXF data sources (#4968) · 6d36a1c0
      Committed by Ivan Leskin
      * Change src/backend/access/external functions to extract and pass query constraints;
      * Add a field with constraints to 'ExtProtocolData';
      * Add 'pxffilters' to gpAux/extensions/pxf and modify the extension to use pushdown.
      
      * Remove duplicate '=' check in PXF
      
      Remove the check for a duplicate '=' in the parameters of an external table. Some databases (MS SQL, for example) may use '=' in a database name or other parameters. The PXF extension now finds the first '=' in a parameter and treats the whole remaining string as the parameter value (sketched at the end of this entry).
      
      * Disable pushdown by default
      * Disallow passing of constraints of type boolean (the decoding fails on PXF side);
      
      * Fix implicit AND expressions addition
      
      Fix implicit addition of extra 'BoolExpr' to a list of expression items. Before, there was a check that the expression items list did not contain logical operators (and if it did, no extra implicit AND operators were added). This behaviour is incorrect. Consider the following query:
      
      SELECT * FROM table_ex WHERE bool1=false AND id1=60003;
      
      Such a query is translated into a list of three items: 'BoolExpr', 'Var' and 'OpExpr'.
      Due to the presence of a 'BoolExpr', the extra implicit 'BoolExpr' was not added, and
      we got the error "stack is not empty ...".
      
      This commit changes the signatures of some internal pxffilters functions to fix this
      error. We now pass the number of required extra 'BoolExpr's to
      'add_extra_and_expression_items'.
      
      As 'BoolExpr's of different origin may be present in the list of expression items,
      the mechanism for freeing the BoolExpr node changes.
      
      The current mechanism of implicit AND expression addition is suitable only until OR
      operators are introduced (we would then have to add those expressions to different
      parts of the list, not just at the end, as is done now). Sketches of the first-'='
      split and of the implicit-AND counting follow below.
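
      A small C sketch of the first-'=' split described above; illustrative only, the real
      parsing lives in the external-table/PXF code and this helper name is made up.

      ```c
      #include <string.h>

      /*
       * Sketch: split "name=value" at the FIRST '=' only, so a value such as
       * "server=mssql;db=sales" keeps any further '=' characters intact.
       */
      static int
      split_param(char *param, char **name, char **value)
      {
          char *eq = strchr(param, '=');   /* first '=' only */

          if (eq == NULL)
              return -1;                   /* not a name=value parameter */

          *eq = '\0';
          *name = param;
          *value = eq + 1;                 /* whole remaining string is the value */
          return 0;
      }
      ```

      And a sketch of the implicit-AND counting idea (hypothetical function, not the real
      'add_extra_and_expression_items' signature), assuming, as the commit's example
      implies, that the top-level WHERE clause is handed down as an implicitly-ANDed list
      of quals: joining N quals always needs N - 1 AND items, regardless of whether an
      individual qual internally contains a 'BoolExpr' such as a NOT.

      ```c
      /*
       * Sketch: number of implicit AND items that must be appended to the
       * postfix-style expression item list so N implicitly-ANDed quals become
       * one conjunction.  Whether a qual itself contains a BoolExpr is irrelevant.
       */
      static int
      implicit_and_items_needed(int num_top_level_quals)
      {
          /* e.g. "bool1=false AND id1=60003" -> two quals -> one implicit AND item */
          return (num_top_level_quals > 1) ? num_top_level_quals - 1 : 0;
      }
      ```
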
    • Added details of how aosegments tables are named and retrieved (#5178) · 55d8635c
      Committed by Soumyadeep Chakraborty
      * Added details of how aosegments tables are named
      
      1) How aosegments tables are initially named and how they are named following a DDL operation.
      2) How to get the current aosegments table for a particular AO table.
      
      * Detail: Creation of new aosegments table post DDL
      
      Incorporated PR feedback by including details about how a new aosegments table is created after a DDL operation that rewrites the table on disk is applied.
    • docs - create ... external ... temp table (#5180) · 738ddc21
      Committed by Lisa Owen
      * docs - create ... external ... temp table
      
      * update CREATE EXTERNAL TABLE sgml docs
  5. 22 Jun 2018, 6 commits
    • Bump ORCA version to 2.64.0 · b7ce9c43
      Committed by Abhijit Subramanya
    • Revert "ProcDie: Reply only after syncing to mirror for commit-prepared." · 23241a2b
      Committed by Ashwin Agrawal
      This reverts commit a7842ea9. The issue has yet to be fully investigated, but it
      sometimes hits the assertion
      ("!(SyncRepQueueIsOrderedByLSN(mode))", File: "syncrep.c", Line: 214).
    • ProcDie: Reply only after syncing to mirror for commit-prepared. · a7842ea9
      Committed by Ashwin Agrawal
      In upstream, and for the Greenplum master, if a procdie is received while waiting for
      replication, only a WARNING is issued and the transaction moves forward without
      waiting for the mirror. But that would cause inconsistency for a QE if a failover
      happens to such a mirror that is missing the commit-prepared record.
      
      If only the prepare has been performed and the primary has not yet processed the
      commit-prepared, the gxact is present in memory. Once commit-prepared processing is
      complete on the primary, the gxact is removed from memory. If the gxact is found, we
      flow through the regular commit-prepared path, emit the xlog record and sync it to the
      mirror. But if the gxact is not found on the primary, we used to blindly return success
      to the QD. Hence, the code is modified to always call `SyncRepWaitForLSN()` before
      replying to the QD in case the gxact is not found on the primary.
      
      It calls `SyncRepWaitForLSN()` with the `flush` LSN value from
      `xlogctl->LogwrtResult`, as there is no way to find out the actual LSN of the
      commit-prepared record on the primary. Use of that LSN is based on the following
      assumptions:
      - WAL is always written serially forward
      - If a synchronous mirror has xlog record xyz, it must have all xlog records before xyz
      - Not finding a gxact entry in memory on the primary for a commit-prepared retry
        from the QD means it was definitely committed (completed) on the primary
    • Make gprecoverseg use new pg_basebackup flag --force-overwrite · e029720d
      Committed by Jimmy Yih
      This is needed during gprecoverseg full recovery to preserve important files such as
      pg_log files. We pass this flag down the call stack to prevent other utilities such
      as gpinitstandby or gpaddmirrors from using the new flag. The new flag can be
      dangerous if not used properly and should only be used when data directory file
      preservation is necessary.
    • Add pg_basebackup flag to force overwrite of destination data directory · 4333acd9
      Committed by Jimmy Yih
      Currently, pg_basebackup has a hard restriction that the destination data directory
      must be empty or nonexistent. It is expected that anything of interest should be
      moved somewhere temporarily and then copied back in. To reduce this complexity, we
      introduce a new flag --force-overwrite, which deletes the directories or files that
      are about to be copied from the source data directory before doing the actual copy.
      Combined with the Greenplum-specific exclusion flag (-E), we are now able to preserve
      files of interest.
      
      Our main example is gprecoverseg full recovery and pg_log files. There have been
      times when a mirror fails and a full recovery runs, which drops the entire mirror
      directory before running pg_basebackup, so the mirror's log files from before the
      crash are erased. This is substantially worse in the gprecoverseg rebalancing
      scenario, where we currently do not have pg_rewind and must run a full recovery to
      bring the old primary back up... which erases vast amounts of old primary log files.
      Then during rebalance, the acting primary, which returns to being a mirror, also goes
      through a full recovery, so its logs as a primary are removed as well. The obvious
      solution would be to tar these logs out and untar them back in afterwards, but what
      if there are other files that must be preserved? Creating a copy may be costly in
      environments where disk space is valued highly.
    • Feature/kerberos setup edit (#5159) · b133cfe1
      Committed by Chuck Litzell
      * Edits to apply organizational improvements made in the HAWQ version, using consistent realm and domain names, and testing that procedures work.
      
      * Convert tasks to topics to fix formatting. Clean up pg_ident.conf topic.
      
      * Convert another task to topic
      
      * Remove extraneous tag
      
      * Formatting and minor edits
      
      * - Added $ or # prompts for all code blocks
      - Reworked the section "Mapping Kerberos Principals to Greenplum Database Roles" to describe, generally, a user's authentication process and to more clearly describe how a principal name is mapped to a Greenplum Database role name.
      
      * - Added the krb_realm auth param
      - Added a description of include_realm=1 for completeness
  6. 21 Jun 2018, 8 commits
  7. 20 Jun 2018, 6 commits
  8. 19 Jun 2018, 1 commit
    • docs - docs and updates for pgbouncer 1.8.1 (#5151) · a99194e0
      Committed by Lisa Owen
      * docs - docs and updates for pgbouncer 1.8.1
      
      * some edits requested by david
      
      * add pgbouncer config page to see also, include directive
      
      * add auth_hba_type config param
      
      * ldap - add info to migrating section, remove ldap passwds
      
      * remove ldap note