1. 27 June 2018, 16 commits
    • Add back nightly triggers · 10491249
      Authored by Lisa Oakley
      Co-authored-by: Lisa Oakley <loakley@pivotal.io>
      Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
      10491249
    • sles11 compile fix; sync_tools refactor · 560a6695
      Authored by Trevor Yacovone
      This is a common issue with sles11: it doesn't currently include support
      for anything newer than TLS v1.1, and many upstream endpoints over the
      last six months have started to require > TLS v1.1.
      
      We resolved this by splitting the sync_tools call out into a separate task,
      so that we can run that task from a centos docker image prior to
      compiling on the correct OS.
      
      These changes were backported to other compile jobs. We are pushing this
      change to resolve the sles11 blocker, but we are still experiencing
      difficulty with windows.
      Co-authored-by: Lisa Oakley <loakley@pivotal.io>
      Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
      Co-authored-by: Ed Espino <edespino@pivotal.io>
      560a6695
    • Use uniform strategy to accommodate GPDB changes to pg_proc.h · 476ceb9c
      Authored by Ashwin Agrawal
      Some functions that had a different return type or arguments compared to upstream
      were modified with a comment in pg_proc.h, while a few were moved completely to
      pg_proc.sql. This inconsistency causes confusion while merging, and having a single
      consistent method for all of them would be better. So, with this commit, upstream
      functions are now defined in pg_proc.h irrespective of whether their definitions
      differ from upstream or not.
      
      Note:
      pg_proc.h is used for all upstream definitions, and pg_proc.sql is used to
      auto-generate the GPDB-added functions in Greenplum.
      476ceb9c
    • Do not skip call to MarkBufferDirtyHint for temp tables. · 0492a052
      Authored by Ashwin Agrawal
      Skipping the call to MarkBufferDirtyHint() for temp tables in markDirty() seems
      to have been an oversight in commit 8c8b5c39. The previous patch checked
      relation->rd_istemp before calling XLogSaveBufferForHint() in
      MarkBufferDirtyHint(), which was unnecessary given that it already checks
      BM_PERMANENT. So, now call MarkBufferDirtyHint() unconditionally.
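
      A minimal sketch of the resulting markDirty() shape (illustrative only, not the
      exact GPDB diff; the two-argument MarkBufferDirtyHint(Buffer, bool) signature is
      assumed):

      #include "postgres.h"
      #include "storage/bufmgr.h"
      #include "utils/rel.h"

      static void
      markDirty(Buffer buffer, Relation relation)
      {
          /*
           * Before: the hint-bit WAL logging was skipped for temp tables:
           *     if (!relation->rd_istemp)
           *         MarkBufferDirtyHint(buffer, true);
           *
           * After: call unconditionally; MarkBufferDirtyHint() already skips
           * XLogSaveBufferForHint() for buffers without BM_PERMANENT.
           */
          MarkBufferDirtyHint(buffer, true);
      }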
      0492a052
    • f9b651e2
    • Resolve GPDB_91_MERGE_FIXME about transaction_deferrable. · ff950ca9
      Authored by Ashwin Agrawal
      gp_dispatch = false on the utility command is correct, as we cannot dispatch the
      SET command yet because the transaction has not yet been established on the QEs.
      transaction_deferrable is only useful with the serializable isolation level as per
      the upstream docs, so a note was added to start dispatching it once we support the
      serializable isolation level.
      ff950ca9
    • Remove the GPDB_91_MERGE_FIXME in sigusr1_handler(). · 1a1f650f
      Authored by Ashwin Agrawal
      Currently the gpdb WAL replication code is a mix of multiple versions; once we
      reach 9.3 we get the opportunity to again get in sync with the upstream version.
      This will be taken care of then; until that time we live with the gpdb-modified
      version of CheckPromoteSignal().
      1a1f650f
    • Remove code checking for am_walsender in SyncRepWaitForLSN() · 33a64335
      Authored by Ashwin Agrawal
      There is no reason to call `SyncRepWaitForLSN()` from the walsender process
      itself. It seems some code existed in the past that did so, but even if the
      walsender needs to perform a transaction for whatever reason, it shouldn't result
      in writing anything. Replaced the if check with an assertion instead to catch any
      violations of this assumption.
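
      A hedged sketch of the change (the surrounding wait logic is elided):

      #include "postgres.h"
      #include "replication/syncrep.h"
      #include "replication/walsender.h"

      void
      SyncRepWaitForLSN(XLogRecPtr XactCommitLSN)
      {
          /* Before: if (am_walsender) return;  -- the wait was silently skipped. */
          Assert(!am_walsender);    /* a walsender should never wait on itself */

          /* ... the usual wait-for-synchronous-replication logic follows ... */
      }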
      33a64335
    • Get rm_desc signature and code closer to upstream. · 34971f37
      Authored by Ashwin Agrawal
      Remove the Greenplum-specific GUC `Debug_xlog_insert_print` and instead use the
      upstream GUC `wal_debug` for the same purpose. Also, remove some unnecessary
      modifications vs upstream.
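
      Roughly, the debug output now follows the upstream pattern (the elog text and
      variables below are placeholders, not the actual lines):

      /* Before (GPDB-only GUC):
       *     if (Debug_xlog_insert_print)
       *         elog(LOG, "xlog insert ...");
       *
       * After: upstream machinery, active only in builds with WAL_DEBUG defined.
       */
      #ifdef WAL_DEBUG
          if (XLOG_DEBUG)    /* set by the upstream wal_debug GUC */
              elog(LOG, "INSERT xlog record: rmid %d, info %u", (int) rmid, (unsigned) info);
      #endif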
      34971f37
    • Remove beginLoc from rm_desc routines signature. · 14b79227
      Authored by Ashwin Agrawal
      Upstream doesn't have it and it is not used anymore in Greenplum, so lose it.
      14b79227
    • UnpackCheckPointRecord() should happen on QE similar to QD. · d9122f25
      Authored by Ashwin Agrawal
      Now that WAL replication is enabled for both QD and QE, this code must be enabled.
      d9122f25
    • ProcDie: Reply only after syncing to mirror for commit-prepared. · 29b78ef4
      Authored by Ashwin Agrawal
      In upstream, and for the greenplum master, if a procdie is received while waiting
      for replication, just a WARNING is issued and the transaction moves forward
      without waiting for the mirror. But that would cause inconsistency for a QE if
      failover happens to such a mirror that is missing the commit-prepared record.

      If only the prepare has been performed and the primary is yet to process the
      commit-prepared, the gxact is present in memory. If commit-prepared processing is
      complete on the primary, the gxact is removed from memory. If the gxact is found,
      we flow through the regular commit-prepared path, emit the xlog record and sync it
      to the mirror. But if the gxact is not found on the primary, we used to blindly
      return success to the QD. Hence, the code is modified to always call
      `SyncRepWaitForLSN()` before replying to the QD in case the gxact is not found on
      the primary.
      
      It calls `SyncRepWaitForLSN()` with the `flush` lsn value from
      `xlogctl->LogwrtResult`, as there is no way to find out the actual lsn value of
      the commit-prepared record on the primary. Usage of that lsn is based on the
      following assumptions:
      	- WAL is always written serially forward
      	- If a synchronous mirror has xlog record xyz, it must have all xlog records before xyz
      	- Not finding a gxact entry in memory on the primary for a commit-prepared retry
        	  from the QD means it was for sure committed (completed) on the primary
      
      Since the commit-prepared retry can be received if everything is done on one
      segment but failed on some other segment, under concurrency we may call
      `SyncRepWaitForLSN()` with the same lsn value multiple times, given that we are
      using the latest flush point. Hence, in GPDB the check in
      `SyncRepQueueIsOrderedByLSN()` doesn't validate for unique entries but just
      validates that the queue is sorted, which is what correctness requires. Without
      this, ICW tests can hit the assertion "!(SyncRepQueueIsOrderedByLSN(mode))".
      29b78ef4
    • Add gphdfs-mapr certification job (#5184) · 982832af
      Authored by Shivram Mani
      Added a new test job to the pipeline to certify GPHDFS with the MAPR Hadoop distribution, and renamed the existing GPHDFS certification job to state that it tests with generic Hadoop. The MAPR cluster consists of one node deployed by CCP scripts into GCE.
      
      - MAPR 5.2
      - Parquet 1.8.1
      Co-authored-by: Alexander Denissov <adenissov@pivotal.io>
      Co-authored-by: Shivram Mani <smani@pivotal.io>
      Co-authored-by: Francisco Guerrero <aguerrero@pivotal.io>
      982832af
    • Fix validation of SEQUENCE filepath · a9e7ecbf
      Authored by Jimmy Yih
      A check was added during the 9.1 merge to verify that the sequence
      filepath to be created would not collide with an existing file. However,
      the filepath that is constructed does not use the sequence OID value that
      was just generated; it uses whatever value happens to be in that piece of
      memory at the time. This usually lets the check go through, especially
      in our CI testing, but occasionally a sequence would fail to be
      created because the random filepath already existed. Fix the issue by
      storing the generated OID in the RelFileNode variable that will be passed
      into the filepath construction.
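
      A simplified sketch of the fix, from inside the sequence-creation path (variable
      names are illustrative and the relpath() call is shown schematically):

      Oid         newOid = GetNewObjectId();
      RelFileNode rnode;

      rnode.spcNode = reltablespace;
      rnode.dbNode  = MyDatabaseId;
      rnode.relNode = newOid;           /* previously left unset -- the bug */

      /* The collision check now tests the path that will actually be created. */
      if (access(relpath(rnode, MAIN_FORKNUM), F_OK) == 0)
          elog(ERROR, "file already exists for new sequence relfilenode %u", newOid);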
      a9e7ecbf
    • 7261814c
  2. 26 June 2018, 6 commits
  3. 25 June 2018, 2 commits
  4. 23 June 2018, 4 commits
    • Remove QP_memory_accounting job from pipeline · 69c3727e
      Authored by Sambitesh Dash
      QP_memory_accounting tests have been moved to the isolation2 test suite, so we no
      longer need this job in the pipeline.
      69c3727e
    • Implement filter pushdown for PXF data sources (#4968) · 6d36a1c0
      Authored by Ivan Leskin
      * Change src/backend/access/external functions to extract and pass query constraints;
      * Add a field with constraints to 'ExtProtocolData';
      * Add 'pxffilters' to gpAux/extensions/pxf and modify the extension to use pushdown.
      
      * Remove duplicate '=' check in PXF
      
      Remove the check for a duplicate '=' in the parameters of an external table. Some databases (MS SQL, for example) may use '=' in a database name or other parameters. Now the PXF extension finds the first '=' in a parameter and treats the whole remaining string as the parameter value.
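
      A hedged sketch of the new parameter parsing (function name is illustrative):

      #include <string.h>

      static void
      parse_location_parameter(char *param, char **name, char **value)
      {
          char *eq = strchr(param, '=');    /* split at the first '=' only */

          if (eq == NULL)
          {
              *name = param;
              *value = NULL;
              return;
          }

          *eq = '\0';
          *name = param;
          *value = eq + 1;    /* the whole remaining string, '=' and all */
      }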
      
      * disable pushdown by default
      * Disallow passing of constraints of type boolean (the decoding fails on PXF side);
      
      * Fix implicit AND expressions addition
      
      Fix implicit addition of extra 'BoolExpr' to a list of expression items. Before, there was a check that the expression items list did not contain logical operators (and if it did, no extra implicit AND operators were added). This behaviour is incorrect. Consider the following query:
      
      SELECT * FROM table_ex WHERE bool1=false AND id1=60003;
      
      Such a query is translated into a list of three items: 'BoolExpr', 'Var' and 'OpExpr'.
      Due to the presence of a 'BoolExpr', the extra implicit 'BoolExpr' is not added, and
      we get the error "stack is not empty ...".
      
      This commit changes the signatures of some internal pxffilters functions to fix this error.
      We pass a number of required extra 'BoolExpr's to 'add_extra_and_expression_items'.
      
      As 'BoolExpr's of different origin may be present in the list of expression items,
      the mechanism of freeing the BoolExpr node changes.
      
      The current mechanism of implicit AND expression addition is suitable only until
      OR operators are introduced (we would then have to add those expressions to
      different parts of the list, not just the end, as is done now).
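
      A hedged sketch of the counting idea passed to 'add_extra_and_expression_items'
      (illustrative only, not the actual pxffilters code):

      /* N top-level quals from the implicit-AND list always need N - 1 AND items
       * appended to the expression-item stack, regardless of whether some of those
       * quals are themselves BoolExprs. */
      static int
      extra_and_items_needed(int num_top_level_quals)
      {
          return (num_top_level_quals > 1) ? num_top_level_quals - 1 : 0;
      }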
      6d36a1c0
    • Added details of how aosegments tables are named and retrieved (#5178) · 55d8635c
      Authored by Soumyadeep Chakraborty
      * Added details of how aosegments tables are named
      
      1) How aosegments tables are initially named and how they are named following a DDL operation.
      2) How to get the current aosegments table for a particular AO table.
      
      * Detail: Creation of a new aosegments table post DDL
      
      Incorporated PR feedback by including details about how a new aosegments table is created when a DDL operation that rewrites the table on disk is applied.
      55d8635c
    • docs - create ... external ... temp table (#5180) · 738ddc21
      Authored by Lisa Owen
      * docs - create ... external ... temp table
      
      * update CREATE EXTERNAL TABLE sgml docs
      738ddc21
  5. 22 June 2018, 6 commits
    • Bump ORCA version to 2.64.0 · b7ce9c43
      Authored by Abhijit Subramanya
      b7ce9c43
    • Revert "ProcDie: Reply only after syncing to mirror for commit-prepared." · 23241a2b
      Authored by Ashwin Agrawal
      This reverts commit a7842ea9. The issue has yet to be fully investigated, but it
      sometimes hits the assertion ("!(SyncRepQueueIsOrderedByLSN(mode))",
      File: "syncrep.c", Line: 214).
      23241a2b
    • ProcDie: Reply only after syncing to mirror for commit-prepared. · a7842ea9
      Authored by Ashwin Agrawal
      In upstream, and for the greenplum master, if a procdie is received while waiting
      for replication, just a WARNING is issued and the transaction moves forward
      without waiting for the mirror. But that would cause inconsistency for a QE if
      failover happens to such a mirror that is missing the commit-prepared record.

      If only the prepare has been performed and the primary is yet to process the
      commit-prepared, the gxact is present in memory. If commit-prepared processing is
      complete on the primary, the gxact is removed from memory. If the gxact is found,
      we flow through the regular commit-prepared path, emit the xlog record and sync it
      to the mirror. But if the gxact is not found on the primary, we used to blindly
      return success to the QD. Hence, the code is modified to always call
      `SyncRepWaitForLSN()` before replying to the QD in case the gxact is not found on
      the primary.
      
      It calls `SyncRepWaitForLSN()` with the `flush` lsn value from
      `xlogctl->LogwrtResult`, as there is no way to find out the actual lsn value of
      the commit-prepared record on the primary. Usage of that lsn is based on the
      following assumptions:
      	- WAL is always written serially forward
      	- If a synchronous mirror has xlog record xyz, it must have all xlog records before xyz
      	- Not finding a gxact entry in memory on the primary for a commit-prepared retry
        	  from the QD means it was for sure committed (completed) on the primary
      a7842ea9
    • Make gprecoverseg use new pg_basebackup flag --force-overwrite · e029720d
      Authored by Jimmy Yih
      This is needed during gprecoverseg full recovery to preserve important files
      such as pg_log files. We pass this flag down the call stack to prevent
      other utilities such as gpinitstandby or gpaddmirrors from using the
      new flag. The new flag can be dangerous if not used properly and
      should only be used when data directory file preservation is
      necessary.
      e029720d
    • Add pg_basebackup flag to force overwrite of destination data directory · 4333acd9
      Authored by Jimmy Yih
      Currently, pg_basebackup has a hard restriction where the destination
      data directory must be empty or nonexistent. It is expected that
      anything of interest should be moved somewhere temporarily and then
      copied back in. To reduce the complexity, we introduce a new flag
      --force-overwrite which deletes, at the destination, the directories or
      files that are about to be copied from the source data directory before
      doing the actual copy. Combined with the Greenplum-specific exclusion
      flag (-E), we are now able to preserve files of interest.
      
      Our main example is gprecoverseg full recovery and pg_log
      files. There have been times when a mirror fails and a full recovery
      runs, dropping the entire mirror directory before running
      pg_basebackup, which results in the mirror log files from before the
      crash being erased. This is substantially worse in the
      gprecoverseg rebalancing scenario, where we currently do not have
      pg_rewind and must run full recovery to bring the old primary back
      up... which results in vast amounts of old primary log files
      being erased. Then during rebalance, the acting primary, which
      returns to being a mirror, also goes through a full recovery, so its logs
      as a primary are also removed. The obvious solution would be to tar
      these logs out and untar them back in afterwards, but what if there
      are other files that must be preserved? Creating a copy may be costly
      in environments where disk space is valued highly.
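
      A hedged sketch of the --force-overwrite behaviour (not the actual patch;
      rmtree() from pgport is an assumption):

      #include <stdbool.h>
      #include <sys/stat.h>
      #include <unistd.h>

      extern bool rmtree(const char *path, bool rmtopdir);    /* assumed pgport helper */

      static bool force_overwrite = false;    /* set when --force-overwrite is given */

      /* Remove whatever already exists at the destination before copying into it. */
      static void
      clear_destination(const char *destpath)
      {
          struct stat st;

          if (!force_overwrite || stat(destpath, &st) != 0)
              return;

          if (S_ISDIR(st.st_mode))
              rmtree(destpath, true);
          else
              unlink(destpath);
      }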
      4333acd9
    • Feature/kerberos setup edit (#5159) · b133cfe1
      Authored by Chuck Litzell
      * Edits to apply organizational improvements made in the HAWQ version, using consistent realm and domain names, and testing that procedures work.
      
      * Convert tasks to topics to fix formatting. Clean up pg_ident.conf topic.
      
      * Convert another task to topic
      
      * Remove extraneous tag
      
      * Formatting and minor edits
      
      * - added $ or # prompts for all code blocks
      - Reworked section "Mapping Kerberos Principals to Greenplum Database Roles" to describe, generally, a user's authentication process and to more clearly describe how a principal name is mapped to a gpdb role name.
      
      * - add krb_realm auth param
      
      - add description of include_realm=1 for completeness
      b133cfe1
  6. 21 June 2018, 6 commits