1. June 29, 2018 (5 commits)
    • Fix incremental analyze for non-matching attnums · ef39e0d0
      Committed by Omer Arap
      To merge stats for the root partition during incremental analyze, we use
      the leaf tables' statistics. Commit b28d0297 fixed an issue where a
      child's attnum did not match the root table's attnum for the same
      column. The test added with that fix also exposed a bug in the analyze
      code.

      This commit fixes the issue in analyze using a fix similar to the one
      in b28d0297.
    • ci: pr_pipeline: Separate sync_tools from compilation (#5214) · f9dd6ba0
      Committed by Lisa Oakley
      This is related to the work we have done to fix the sles11 and windows
      compilation failures.
      Co-authored-by: Lisa Oakley <loakley@pivotal.io>
      Co-authored-by: Alexandra Wang <lewang@pivotal.io>
    • Fix querying stats for largest child · b28d0297
      Committed by Omer Arap
      Previously, we used the root table's information to acquire stats from
      the syscache, which returned no results. It returned nothing because we
      queried the syscache using the `inh` field, which is set to true for
      the root table and false for the leaf tables.

      Another, less evident issue is the possibility of mismatching `attnum`s
      between the root and leaf tables after certain scenarios. When we drop
      a column and then split a partition, unchanged and old partitions
      preserve the old attnums, while newly created partitions get
      consecutive attnums with no gaps. If we query the syscache using the
      root's attnum for such a column, we get stats for the wrong column.
      Passing the root's `inh` hid the issue of having wrong stats.

      This commit fixes the issue by getting the attribute name using the
      root's attnum and using it to acquire the correct attnum for the
      largest leaf partition.
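The attnum mismatch described in this commit can be sketched with a small, hypothetical simulation (plain Python, not the actual Greenplum code); the table layouts and function name are invented for illustration:

```python
# Illustrative simulation of the attnum mismatch (not actual GPDB code).
# A root table created as (a, b, c) then had column b dropped: the root
# and old partitions keep the gap, while a newly created (split) partition
# is renumbered with consecutive attnums and no gaps.
root_atts = {1: "a", 3: "c"}        # attnum -> attname; attnum 2 (b) dropped
new_leaf_atts = {1: "a", 2: "c"}    # freshly created partition: no gaps

def leaf_attnum_for(root_attnum, root_atts, leaf_atts):
    """Map a root attnum to the leaf's attnum via the attribute name,
    the way the fix resolves the column before querying the syscache."""
    attname = root_atts[root_attnum]
    for attnum, name in leaf_atts.items():
        if name == attname:
            return attnum
    return None

# Using the root's attnum directly against the leaf misses the column:
assert new_leaf_atts.get(3) is None  # root attnum 3 doesn't exist in the leaf
# The name-based lookup finds the right leaf attnum for column "c":
assert leaf_attnum_for(3, root_atts, new_leaf_atts) == 2
```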
    • Perform analyze on specific table in spilltodisk test. · 37c75753
      Committed by Ashwin Agrawal
      There is no need for a database-wide analyze; only the specific table
      needs to be analyzed for the test.
    • Restrict max_wal_senders GUC to 1 in GPDB. · db53d8cf
      Committed by Ashwin Agrawal
      GPDB currently supports only one replica. FTS and other components need
      to be adapted before 1:n replication can be supported; until then,
      restrict the max_wal_senders GUC to 1. Later, when the code can handle
      it, the GUC's maximum value can be raised.

      Also, remove the setting of max_wal_senders in postmaster, which was
      added earlier to deal with filerep/walrep co-existence.
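A minimal sketch of the restriction, assuming a simple bounds check of the kind the GUC machinery performs (the function and constant names here are hypothetical, not GPDB's actual guc.c code):

```python
# Illustrative GUC-style bounds check (hypothetical; not GPDB's guc.c).
MAX_WAL_SENDERS_CEILING = 1  # GPDB currently supports a single replica

def set_max_wal_senders(value):
    """Reject any setting above the ceiling, like a GUC max bound."""
    if not (0 <= value <= MAX_WAL_SENDERS_CEILING):
        raise ValueError("max_wal_senders must be between 0 and %d"
                         % MAX_WAL_SENDERS_CEILING)
    return value

assert set_max_wal_senders(1) == 1
rejected = False
try:
    set_max_wal_senders(2)   # more than one replica is not yet supported
except ValueError:
    rejected = True
assert rejected
```

Raising the ceiling later, once FTS and friends can handle 1:n, would be a one-line change in this model.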
  2. June 27, 2018 (19 commits)
    • Fix COPY help page · 464db986
      Committed by Adam Lee
      These three options were listed among the common options:

      ```
      FILL MISSING FIELDS
      LOG ERRORS [SEGMENT REJECT LIMIT <replaceable class="parameter">count</replaceable> [ROWS | PERCENT] ]
      IGNORE EXTERNAL PARTITIONS
      ```

      But:

      1. They do not work with both FROM and TO.
      2. FILL MISSING FIELDS would have to be [FILL_MISSING_FIELDS true | false]
         in generic form, which is silly; the old syntax is better.
      3. SREH and IGNORE EXTERNAL PARTITIONS cannot be specified as generic
         options.

      Also document the missing NEWLINE option.
    • COPY: don't dispatch AO segno map for unloading · b320948d
      Committed by Adam Lee
      Unloading doesn't need it, and neither does checking the distribution
      policy.
    • Don't ssh to wix when running make sync_tools · c2a26ce3
      Committed by Trevor Yacovone
      Also, remove the dev generated pipeline and add the prod generated
      pipeline.
      Co-authored-by: Lisa Oakley <loakley@pivotal.io>
      Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
    • Add back nightly triggers · 10491249
      Committed by Lisa Oakley
      Co-authored-by: Lisa Oakley <loakley@pivotal.io>
      Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
    • sles11 compile fix; sync_tools refactor · 560a6695
      Committed by Trevor Yacovone
      This is a common issue with sles11: it does not currently include
      support for anything newer than TLS v1.1, while over the last six
      months many upstream endpoints have started to require TLS versions
      above v1.1.

      We resolved this by separating the sync_tools call into a separate
      task, so that we can run it from a centos docker image prior to
      compiling on the correct OS.

      These changes were backported to other compile jobs. We are pushing
      this change to resolve the sles11 blocker, but we are still
      experiencing difficulty with windows.
      Co-authored-by: Lisa Oakley <loakley@pivotal.io>
      Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
      Co-authored-by: Ed Espino <edespino@pivotal.io>
    • Use uniform strategy to accommodate GPDB changes to pg_proc.h · 476ceb9c
      Committed by Ashwin Agrawal
      Some functions whose return type or arguments differed from upstream
      were modified with a comment in pg_proc.h, while a few were moved
      entirely to pg_proc.sql. This inconsistency causes confusion while
      merging; a single consistent method for all would be better. So, with
      this commit, upstream functions are defined in pg_proc.h regardless of
      whether their definitions differ from upstream.

      Note:
      pg_proc.h is used for all upstream definitions, and pg_proc.sql is
      used to auto-generate the GPDB-added functions in Greenplum.
    • Do not skip call to MarkBufferDirtyHint for temp tables. · 0492a052
      Committed by Ashwin Agrawal
      Skipping the call to MarkBufferDirtyHint() for temp tables in
      markDirty() appears to be an oversight in commit 8c8b5c39.
      The previous patch checked relation->rd_istemp before calling
      XLogSaveBufferForHint() in MarkBufferDirtyHint(), which was unnecessary
      given that it already checks for BM_PERMANENT. So, now call
      MarkBufferDirtyHint() unconditionally.
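The shape of the fix can be illustrated with a hedged sketch (a hypothetical Python model, not the actual bufmgr code): the caller stops filtering out temp tables because the callee already distinguishes permanent from non-permanent buffers:

```python
# Illustrative sketch (not the actual bufmgr code): the caller marks the
# buffer dirty unconditionally; only the WAL-hint step inside checks
# whether the buffer is permanent (temp-table buffers are not).
wal_records = []

def mark_buffer_dirty_hint(buffer):
    if buffer["permanent"]:            # the BM_PERMANENT check lives here
        wal_records.append(buffer["id"])
    buffer["dirty"] = True

def mark_dirty(buffer):
    # Before the fix this call was skipped for temp tables; now it is made
    # unconditionally, since the callee handles the distinction itself.
    mark_buffer_dirty_hint(buffer)

temp = {"id": "t1", "permanent": False, "dirty": False}
perm = {"id": "p1", "permanent": True, "dirty": False}
mark_dirty(temp)
mark_dirty(perm)
assert temp["dirty"] and perm["dirty"]  # both buffers get marked dirty
assert wal_records == ["p1"]            # only the permanent one is WAL-logged
```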
    • f9b651e2
    • Resolve GPDB_91_MERGE_FIXME about transaction_deferrable. · ff950ca9
      Committed by Ashwin Agrawal
      gp_dispatch = false on this utility command is correct: we cannot
      dispatch the SET command yet, as the transaction has not been
      established on the QEs. Per the upstream docs, transaction_deferrable
      is only useful with the serializable isolation level, so a note was
      added to start dispatching it once we support serializable isolation.
    • Remove the GPDB_91_MERGE_FIXME in sigusr1_handler(). · 1a1f650f
      Committed by Ashwin Agrawal
      Currently the GPDB WAL replication code is a mix of multiple versions;
      once we reach the 9.3 merge, we will get the opportunity to bring it
      back in sync with the upstream version. This will be taken care of
      then; until that time, live with the GPDB-modified version of
      CheckPromoteSignal().
    • Remove code checking for am_walsender in SyncRepWaitForLSN() · 33a64335
      Committed by Ashwin Agrawal
      There is no reason to call `SyncRepWaitForLSN()` from the walsender
      process itself. Some code that did so apparently existed in the past,
      but even if a walsender needs to perform a transaction for whatever
      reason, it should not result in writing anything. Replaced the if with
      an assertion to catch any violations of this assumption.
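The if-to-assertion change can be sketched as follows (a hypothetical model; the real function takes different arguments and actually blocks waiting for the mirror):

```python
# Illustrative sketch: instead of silently special-casing a walsender
# caller, assert that it never happens, so violations are caught loudly.
def sync_rep_wait_for_lsn(lsn, am_walsender=False):
    # Old code effectively did: "if am_walsender: return", hiding the
    # violation. The assertion makes the assumption explicit.
    assert not am_walsender, "SyncRepWaitForLSN() called from a walsender"
    return lsn  # stand-in for the actual wait-for-replication logic

assert sync_rep_wait_for_lsn(100) == 100  # normal backend: proceeds
hit = False
try:
    sync_rep_wait_for_lsn(100, am_walsender=True)
except AssertionError:
    hit = True
assert hit  # a walsender caller now trips the assertion
```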
    • Get rm_desc signature and code closer to upstream. · 34971f37
      Committed by Ashwin Agrawal
      Remove the Greenplum-specific GUC `Debug_xlog_insert_print` and use the
      upstream GUC `wal_debug` instead. Also, remove some unnecessary
      modifications relative to upstream.
    • Remove beginLoc from rm_desc routines signature. · 14b79227
      Committed by Ashwin Agrawal
      Upstream doesn't have it and it is no longer used in Greenplum, so lose
      it.
    • UnpackCheckPointRecord() should happen on QE similar to QD. · d9122f25
      Committed by Ashwin Agrawal
      Now that WAL replication is enabled for both the QD and QEs, this code
      must be enabled as well.
    • ProcDie: Reply only after syncing to mirror for commit-prepared. · 29b78ef4
      Committed by Ashwin Agrawal
      In upstream, and for the Greenplum master, if procdie is received while
      waiting for replication, just a WARNING is issued and the transaction
      moves forward without waiting for the mirror. But that would cause
      inconsistency for a QE if failover happens to a mirror missing the
      commit-prepared record.

      If only the prepare has been performed and the primary is yet to
      process the commit-prepared, the gxact is present in memory. Once
      commit-prepared processing is complete on the primary, the gxact is
      removed from memory. If the gxact is found, we flow through the
      regular commit-prepared path, emit the xlog record, and sync it to the
      mirror. But if the gxact is not found on the primary, we used to
      blindly return success to the QD. Hence, the code is modified to
      always call `SyncRepWaitForLSN()` before replying to the QD in case
      the gxact is not found on the primary.

      It calls `SyncRepWaitForLSN()` with the `flush` lsn value from
      `xlogctl->LogwrtResult`, as there is no way to find out the actual lsn
      of the commit-prepared record on the primary. Using that lsn rests on
      the following assumptions:
        - WAL is always written serially forward
        - A synchronous mirror that has xlog record xyz must have all xlog
          records before xyz
        - Not finding the gxact entry in memory on the primary for a
          commit-prepared retry from the QD means it was definitely
          committed (completed) on the primary

      Since the commit-prepared retry can be received when everything is
      done on one segment but failed on some other segment, under
      concurrency we may call `SyncRepWaitForLSN()` with the same lsn value
      multiple times, given that we use the latest flush point. Hence, in
      GPDB the check in `SyncRepQueueIsOrderedByLSN()` does not insist on
      unique entries but only validates that the queue is sorted, which is
      what is required for correctness. Without this, ICW tests can hit the
      assertion "!(SyncRepQueueIsOrderedByLSN(mode))".
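The relaxed queue invariant can be sketched as a simple check (illustrative Python, not the actual syncrep.c code): the queue must be non-decreasing by LSN, but duplicate LSNs are allowed:

```python
# Illustrative sketch of the relaxed invariant: the sync-rep wait queue
# must be sorted by LSN, but duplicate LSNs are permitted, since
# concurrent commit-prepared retries may all wait on the same latest
# flush point.
def queue_is_ordered_by_lsn(queue):
    return all(a <= b for a, b in zip(queue, queue[1:]))

assert queue_is_ordered_by_lsn([100, 200, 200, 300])  # duplicates are fine
assert not queue_is_ordered_by_lsn([100, 300, 200])   # out of order is not
```

A strict (`<`) comparison here would reject duplicate flush LSNs and reproduce the `!(SyncRepQueueIsOrderedByLSN(mode))` assertion failure the message describes.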
    • Add gphdfs-mapr certification job (#5184) · 982832af
      Committed by Shivram Mani
      Added a new test job to the pipeline to certify GPHDFS with the MAPR
      Hadoop distribution, and renamed the existing GPHDFS certification job
      to state that it tests with generic Hadoop. The MAPR cluster consists
      of 1 node deployed into GCE by CCP scripts.

      - MAPR 5.2
      - Parquet 1.8.1
      Co-authored-by: Alexander Denissov <adenissov@pivotal.io>
      Co-authored-by: Shivram Mani <smani@pivotal.io>
      Co-authored-by: Francisco Guerrero <aguerrero@pivotal.io>
    • Fix validation of SEQUENCE filepath · a9e7ecbf
      Committed by Jimmy Yih
      A check was added during the 9.1 merge to verify that the sequence
      filepath to be created would not collide with an existing file. The
      filepath that was constructed did not use the sequence OID value that
      had just been generated, but whatever value happened to be in that
      piece of memory at the time. This usually let the check pass,
      especially in our CI testing, but occasionally a sequence would fail
      to be created because the random filepath existed. Fix the issue by
      storing the generated OID in the RelFileNode variable that is passed
      into the filepath construction.
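The bug pattern, stale memory feeding the collision check, can be sketched as follows (hypothetical names and paths, not the actual sequence-creation code):

```python
# Illustrative sketch (not the actual code): the collision check must use
# the OID that was just generated, not whatever value happened to be in
# the RelFileNode beforehand.
def sequence_filepath(relfilenode):
    # Hypothetical path layout for illustration only.
    return "base/12345/%d" % relfilenode["relNode"]

def create_sequence(new_oid, existing_paths):
    rnode = {"relNode": 0}      # stand-in for the uninitialized field
    rnode["relNode"] = new_oid  # the fix: store the generated OID first
    path = sequence_filepath(rnode)
    if path in existing_paths:
        raise RuntimeError("file already exists: %s" % path)
    return path

assert create_sequence(16384, set()) == "base/12345/16384"
```

Without the assignment, the check would test an arbitrary leftover value, usually passing but occasionally colliding, exactly the flaky behavior the message describes.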
    • 7261814c
  3. June 26, 2018 (6 commits)
  4. June 25, 2018 (2 commits)
  5. June 23, 2018 (4 commits)
    • Remove QP_memory_accounting job from pipeline · 69c3727e
      Committed by Sambitesh Dash
      The QP_memory_accounting tests have been moved to the isolation2 test
      suite, so we no longer need this job in the pipeline.
    • Implement filter pushdown for PXF data sources (#4968) · 6d36a1c0
      Committed by Ivan Leskin
      * Change src/backend/access/external functions to extract and pass
        query constraints;
      * Add a field with constraints to 'ExtProtocolData';
      * Add 'pxffilters' to gpAux/extensions/pxf and modify the extension to
        use pushdown.

      * Remove duplicate '=' check in PXF

      Remove the check for a duplicate '=' in the parameters of an external
      table. Some databases (MS SQL, for example) may use '=' in a database
      name or other parameters. Now the PXF extension finds the first '=' in
      a parameter and treats the whole remaining string as the parameter
      value.

      * Disable pushdown by default
      * Disallow passing constraints of type boolean (the decoding fails on
        the PXF side);

      * Fix implicit AND expressions addition

      Fix the implicit addition of extra 'BoolExpr's to a list of expression
      items. Before, there was a check that the expression items list did
      not contain logical operators (and if it did, no extra implicit AND
      operators were added). This behaviour is incorrect. Consider the
      following query:

      SELECT * FROM table_ex WHERE bool1=false AND id1=60003;

      Such a query is translated as a list of three items: 'BoolExpr',
      'Var' and 'OpExpr'. Due to the presence of a 'BoolExpr', the extra
      implicit 'BoolExpr' was not added, and we got the error "stack is not
      empty ...".

      This commit changes the signatures of some internal pxffilters
      functions to fix this error. We pass the number of required extra
      'BoolExpr's to 'add_extra_and_expression_items'.

      As 'BoolExpr's of different origin may be present in the list of
      expression items, the mechanism for freeing a BoolExpr node changes.

      The current mechanism of implicit AND expression addition is suitable
      only until OR operators are introduced (we will then have to add those
      expressions to different parts of the list, not just the end, as is
      done now).
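The counting of required implicit ANDs can be modeled as a stack-depth calculation (a simplified, hypothetical model of the pxffilters item list, not the extension's actual code): predicates push one entry onto the evaluation stack, an AND pops two and pushes one, and the number of extra implicit ANDs needed is whatever it takes to reduce the stack to a single expression.

```python
# Simplified, hypothetical model of the pxffilters item list. Each item is
# (kind, argcount): a predicate ('Var', 'OpExpr') pushes one stack entry;
# an explicit 'BoolExpr' AND pops `argcount` entries and pushes one.
def extra_ands_needed(items):
    depth = 0
    for kind, argcount in items:
        if kind == "BoolExpr":
            depth -= argcount - 1   # pops argcount entries, pushes 1
        else:
            depth += 1              # predicate pushes 1
    return depth - 1                # ANDs needed to reach a single root

# Two predicates left unjoined (the buggy case the commit describes:
# seeing any 'BoolExpr' used to suppress ALL implicit ANDs, leaving the
# stack non-empty). Here one implicit AND is still required:
assert extra_ands_needed([("Var", 1), ("OpExpr", 1)]) == 1
# Two predicates already joined by an explicit AND need nothing extra:
assert extra_ands_needed([("OpExpr", 1), ("OpExpr", 1), ("BoolExpr", 2)]) == 0
```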
    • Added details of how aosegments tables are named and retrieved (#5178) · 55d8635c
      Committed by Soumyadeep Chakraborty
      * Added details of how aosegments tables are named

      1) How aosegments tables are initially named, and how they are named
         following a DDL operation.
      2) How to find the current aosegments table for a particular AO table.

      * Detail: creation of a new aosegments table post DDL

      Incorporated PR feedback by including details about how a new
      aosegments table is created after a DDL operation that implies a
      rewrite of the table on disk.
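The retrieval method can be modeled as a catalog lookup rather than a name-based guess (a hypothetical simulation: the mapping mimics GPDB's pg_appendonly.segrelid column, and all OIDs and names here are invented):

```python
# Illustrative model (not real catalog access): after a rewriting DDL, a
# new aosegments table may be created, so the reliable lookup goes
# through the catalog mapping (AO table -> current segment relation),
# not through a remembered name.
pg_appendonly = {16384: 16390}   # AO table oid -> current aoseg table oid

def current_aoseg_name(ao_table_oid):
    segrelid = pg_appendonly[ao_table_oid]
    return "pg_aoseg.pg_aoseg_%d" % segrelid

assert current_aoseg_name(16384) == "pg_aoseg.pg_aoseg_16390"

# A DDL operation that rewrites the table on disk creates a fresh
# aosegments table; the catalog mapping is updated and the old name
# becomes stale.
pg_appendonly[16384] = 16420
assert current_aoseg_name(16384) == "pg_aoseg.pg_aoseg_16420"
```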
    • docs - create ... external ... temp table (#5180) · 738ddc21
      Committed by Lisa Owen
      * docs - create ... external ... temp table

      * update CREATE EXTERNAL TABLE sgml docs
  6. June 22, 2018 (4 commits)
    • Bump ORCA version to 2.64.0 · b7ce9c43
      Committed by Abhijit Subramanya
    • Revert "ProcDie: Reply only after syncing to mirror for commit-prepared." · 23241a2b
      Committed by Ashwin Agrawal
      This reverts commit a7842ea9. The issue is yet to be fully
      investigated, but it sometimes hits the assertion
      ("!(SyncRepQueueIsOrderedByLSN(mode))", File: "syncrep.c", Line: 214).
    • ProcDie: Reply only after syncing to mirror for commit-prepared. · a7842ea9
      Committed by Ashwin Agrawal
      In upstream, and for the Greenplum master, if procdie is received while
      waiting for replication, just a WARNING is issued and the transaction
      moves forward without waiting for the mirror. But that would cause
      inconsistency for a QE if failover happens to a mirror missing the
      commit-prepared record.

      If only the prepare has been performed and the primary is yet to
      process the commit-prepared, the gxact is present in memory. Once
      commit-prepared processing is complete on the primary, the gxact is
      removed from memory. If the gxact is found, we flow through the
      regular commit-prepared path, emit the xlog record, and sync it to the
      mirror. But if the gxact is not found on the primary, we used to
      blindly return success to the QD. Hence, the code is modified to
      always call `SyncRepWaitForLSN()` before replying to the QD in case
      the gxact is not found on the primary.

      It calls `SyncRepWaitForLSN()` with the `flush` lsn value from
      `xlogctl->LogwrtResult`, as there is no way to find out the actual lsn
      of the commit-prepared record on the primary. Using that lsn rests on
      the following assumptions:
        - WAL is always written serially forward
        - A synchronous mirror that has xlog record xyz must have all xlog
          records before xyz
        - Not finding the gxact entry in memory on the primary for a
          commit-prepared retry from the QD means it was definitely
          committed (completed) on the primary
    • Make gprecoverseg use new pg_backup flag --force-overwrite · e029720d
      Committed by Jimmy Yih
      This is needed during gprecoverseg full to preserve important files
      such as pg_log files. We pass this flag down the call stack to prevent
      other utilities such as gpinitstandby or gpaddmirror from using the
      new flag. The new flag can be dangerous if not used properly and
      should only be used when data directory file preservation is
      necessary.