- 27 Jun 2018, 18 commits
-
-
Committed by Adam Lee
Unloading doesn't need it, and neither does checking the distribution policy.
-
Committed by Trevor Yacovone
Also, remove the dev-generated pipeline and add the prod-generated pipeline. Co-authored-by: Lisa Oakley <loakley@pivotal.io> Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
-
Committed by Lisa Oakley
Co-authored-by: Lisa Oakley <loakley@pivotal.io> Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
-
Committed by Trevor Yacovone
This is a common issue with sles11: it doesn't currently include support for TLS versions above v1.1, and many upstream endpoints over the last 6 months have started to require TLS above v1.1. We resolved this by separating the sync_tools call into a separate task, so that we could run it from a centos docker image prior to compiling on the correct OS. These changes were backported to the other compile jobs. We are pushing this change to resolve the sles11 blocker, but we are still experiencing difficulty with windows. Co-authored-by: Lisa Oakley <loakley@pivotal.io> Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io> Co-authored-by: Ed Espino <edespino@pivotal.io>
-
Committed by Ashwin Agrawal
Some functions that had a different return type or arguments compared to upstream were modified with a comment in pg_proc.h, while a few were moved completely to pg_proc.sql. This inconsistency causes confusion while merging, and having a single consistent method for all of them would be better. So, with this commit, upstream functions are now defined in pg_proc.h regardless of whether their definitions differ from upstream. Note: pg_proc.h is used for all upstream definitions, and pg_proc.sql is used to auto-generate the GPDB-added functions in Greenplum.
-
Committed by Ashwin Agrawal
In markDirty(), commit 8c8b5c39 seems to have an oversight in avoiding the call to MarkBufferDirtyHint() for temp tables. A previous patch checked relation->rd_istemp before calling XLogSaveBufferForHint() in MarkBufferDirtyHint(), which was unnecessary given that it already checks for BM_PERMANENT. So, now call MarkBufferDirtyHint() unconditionally.
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
gp_dispatch = false on a utility command is correct, as we cannot dispatch the SET command yet because the transaction has not yet been established on the QEs. transaction_deferrable is only useful with the serializable isolation level as per the upstream docs, so added a note to start dispatching it once we support the serializable isolation level.
-
Committed by Ashwin Agrawal
Currently the GPDB WAL replication code is a mix of multiple versions; once we reach 9.3 we get the opportunity to get back in sync with the upstream version. This will be taken care of then; until that time, live with the GPDB-modified version of CheckPromoteSignal().
-
Committed by Ashwin Agrawal
There is no reason to call `SyncRepWaitForLSN()` from the walsender process itself. Some code existed in the past which seems to have done so, but even if the walsender for whatever reason needs to perform a transaction, it shouldn't result in writing anything. Replaced the if with an assertion instead, to catch any violations of this assumption.
-
Committed by Ashwin Agrawal
Remove the Greenplum-specific GUC `Debug_xlog_insert_print` and instead use the upstream GUC `wal_debug` for the same purpose. Also, remove some unnecessary modifications vs upstream.
-
Committed by Ashwin Agrawal
Upstream doesn't have it and it is not used anymore in Greenplum, so lose it.
-
Committed by Ashwin Agrawal
Now that WAL replication is enabled for the QD and QEs, this code must be enabled.
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
Upstream, and on Greenplum master, if procdie is received while waiting for replication, just a WARNING is issued and the transaction moves forward without waiting for the mirror. But that would cause inconsistency for a QE if failover happens to such a mirror that is missing the commit-prepared record. If only the prepare has been performed and the primary is yet to process the commit-prepared, the gxact is present in memory; once commit-prepared processing is complete on the primary, the gxact is removed from memory. If the gxact is found, we flow through the regular commit-prepared path, emit the xlog record, and sync it to the mirror. But if the gxact is not found on the primary, we used to blindly return success to the QD. Hence, the code is modified to always call `SyncRepWaitForLSN()` before replying to the QD in case the gxact is not found on the primary. It calls `SyncRepWaitForLSN()` with the `flush` lsn value from `xlogctl->LogwrtResult`, as there is no way to find out the actual lsn of the commit-prepared record on the primary. Using that lsn is based on the following assumptions: - WAL is always written serially forward - a synchronous mirror that has xlog record xyz must have all xlog records before xyz - not finding a gxact entry in memory on the primary for a commit-prepared retry from the QD means it was definitely committed (completed) on the primary. Since a commit-prepared retry can be received when everything is done on this segment but failed on some other segment, under concurrency we may call `SyncRepWaitForLSN()` with the same lsn value multiple times, given we are using the latest flush point. Hence, in GPDB the check in `SyncRepQueueIsOrderedByLSN()` doesn't validate uniqueness of entries but just validates that the queue is sorted, which is what is required for correctness. Without this, ICW tests can hit the assertion "!(SyncRepQueueIsOrderedByLSN(mode))".
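The relaxed queue check described above (sorted by LSN, duplicates allowed) can be sketched as follows. This is a minimal Python illustration of the invariant, not the actual syncrep.c code; the function name is hypothetical:

```python
def queue_is_ordered_by_lsn(queue):
    """Check that wait-queue entries are non-decreasing by LSN.

    Mirrors the relaxed GPDB check described above: duplicate LSNs are
    allowed (<=), since concurrent commit-prepared retries may enqueue
    the same flush LSN more than once. A strict upstream-style check
    would require strictly increasing LSNs (<) instead.
    """
    return all(a <= b for a, b in zip(queue, queue[1:]))
```

For example, `[10, 20, 20, 30]` passes the relaxed check but would fail a strict uniqueness check, which is exactly the situation the retry path can create.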
-
Committed by Shivram Mani
Added a new test job to the pipeline to certify GPHDFS with the MAPR Hadoop distribution, and renamed the existing GPHDFS certification job to state that it tests with generic Hadoop. The MAPR cluster consists of 1 node deployed by CCP scripts into GCE: - MAPR 5.2 - Parquet 1.8.1 Co-authored-by: Alexander Denissov <adenissov@pivotal.io> Co-authored-by: Shivram Mani <smani@pivotal.io> Co-authored-by: Francisco Guerrero <aguerrero@pivotal.io>
-
Committed by Jimmy Yih
A check was added during the 9.1 merge to verify that the sequence filepath to be created would not collide with an existing file. The filepath being constructed did not use the sequence OID value that was just generated, but whatever value happened to be in that piece of memory at the time. This usually let the check pass, especially in our CI testing, but occasionally a sequence would fail to be created because the random filepath existed. Fix the issue by storing the generated OID in the RelFileNode variable that is passed into the filepath construction.
-
Committed by Lisa Owen
-
- 26 Jun 2018, 6 commits
-
-
Committed by Ning Yu
Resource group capabilities could be missing in an ALTER command, e.g.: - create resgroup rg1 with v5.0, which does not support cpuset (cap=7); - binary switch to v5.10 (suppose it supports cpuset); - alter rg1's cpu_rate_limit, which will also `update` the cpuset cap. As rg1 was created with v5.0, there is no cap=7 row in the catalog table pg_resgroupcapability, so the `update` operation raises an error because the expected tuple cannot be found. The proper behavior is to fall back to `insert` in such a case. Test cases are not included as this is already covered by the existing binary swap test resgroup_current_3_queue. (cherry picked from commit 95f215d9)
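The update-or-fall-back-to-insert behavior described above can be sketched as follows. This is a hypothetical Python model of the catalog (a dict keyed by group and capability type), not the actual C code that operates on pg_resgroupcapability:

```python
def update_capability(catalog, group, cap_type, value):
    """Model of the plain UPDATE path: errors if the tuple is missing."""
    key = (group, cap_type)
    if key not in catalog:
        raise KeyError("capability tuple not found for %r" % (key,))
    catalog[key] = value

def upsert_capability(catalog, group, cap_type, value):
    """Fall back to INSERT when the tuple is missing, as the fix does.

    A group created under an older version may have no row for a
    capability introduced later (e.g. cpuset), so the UPDATE finds no
    tuple; instead of raising an error, insert the missing row.
    """
    try:
        update_capability(catalog, group, cap_type, value)
    except KeyError:
        catalog[(group, cap_type)] = value
```

With `catalog = {("rg1", "cpu_rate_limit"): 20}`, altering cpuset on rg1 inserts the missing row instead of failing.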
-
Committed by Jesse Zhang
When the server is built with `--disable-orca`, we shouldn't (and used not to) allow setting the option `optimizer=on`. Upstream Postgres 9.1 commit 2594cf0e introduced the check hook in a GUC code refactoring, and it seems that we regressed in the 9.1 merge where we forgot to signal the calling code in `call_bool_check_hook` to error out. This commit fixes that by reintroducing the error.
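The check-hook contract at issue can be sketched as follows. This is a minimal Python illustration of the pattern (the real code is C in guc.c; the function and parameter names here are illustrative): the hook returns False to signal the caller that the new value is invalid, and the regression was that this failure was not propagated, so the SET silently succeeded.

```python
def check_optimizer(new_value, orca_enabled):
    """Check hook for the 'optimizer' setting (illustrative sketch).

    Returns False to tell the calling code (call_bool_check_hook in
    the real C source) to reject the value and raise an error, which
    is what setting optimizer=on must do on a --disable-orca build.
    """
    if new_value and not orca_enabled:
        return False  # reject: ORCA is not compiled in
    return True

def set_bool_option(new_value, check_hook, **hook_args):
    """Model of the caller: error out when the hook rejects the value."""
    if not check_hook(new_value, **hook_args):
        raise ValueError("invalid value for parameter")
    return new_value
```

The bug was equivalent to `set_bool_option` ignoring a False return from the hook; reintroducing the error restores the rejection.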
-
Committed by mkiyama
-
Committed by mkiyama
-
Committed by Daniel Gustafsson
-
Committed by Sambitesh Dash
This reverts commit 1ec65820.
-
- 25 Jun 2018, 2 commits
-
-
Committed by Sambitesh Dash
-
Committed by Sambitesh Dash
-
- 23 Jun 2018, 4 commits
-
-
Committed by Sambitesh Dash
The QP_memory_accounting tests have been moved to the isolation2 test suite, so we no longer need this job in the pipeline.
-
Committed by Ivan Leskin
* Change src/backend/access/external functions to extract and pass query constraints
* Add a field with constraints to 'ExtProtocolData'
* Add 'pxffilters' to gpAux/extensions/pxf and modify the extension to use pushdown
* Remove the duplicate '=' check in PXF. The check for a duplicate '=' in the parameters of an external table is removed, because some databases (MS SQL, for example) may use '=' in a database name or other parameters. The PXF extension now finds the first '=' in a parameter and treats the whole remaining string as the parameter value.
* Disable pushdown by default
* Disallow passing constraints of type boolean (the decoding fails on the PXF side)
* Fix implicit addition of AND expressions. Previously, an extra implicit 'BoolExpr' was only added when the expression items list contained no logical operators, which is incorrect. Consider the query: SELECT * FROM table_ex WHERE bool1=false AND id1=60003; It is translated as a list of three items: 'BoolExpr', 'Var' and 'OpExpr'. Due to the presence of a 'BoolExpr', the extra implicit 'BoolExpr' was not added, and we got the error "stack is not empty ...". This commit changes the signatures of some internal pxffilters functions to fix this: the number of required extra 'BoolExpr's is passed to 'add_extra_and_expression_items'. As 'BoolExpr's of different origins may be present in the list of expression items, the mechanism for freeing the BoolExpr node changes as well. Note that the current mechanism for adding implicit AND expressions is only suitable until OR operators are introduced (we will then have to add those expressions to different parts of the list, not just the end, as done now).
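The first-'=' parameter parsing described above can be sketched as follows. This is a hypothetical Python illustration of the parsing rule, not the actual PXF C code:

```python
def split_option(param):
    """Split an external-table parameter at the FIRST '='.

    Sketch of the change described above: everything after the first
    '=' is treated as the value, so values that themselves contain
    '=' (e.g. an MS SQL database name) are no longer rejected as a
    duplicate '='.
    """
    name, sep, value = param.partition("=")
    if not sep:
        raise ValueError("missing '=' in parameter: %r" % param)
    return name, value
```

For example, `split_option("DATABASE=my=db")` yields `("DATABASE", "my=db")` rather than an error.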
-
Committed by Soumyadeep Chakraborty
* Added details of how aosegments tables are named: 1) how aosegments tables are initially named and how they are named following a DDL operation; 2) a method to get the current aosegments table for a particular AO table.
* Detailed the creation of new aosegments tables post-DDL: incorporated PR feedback on including details about how new aosegments tables are created after a DDL operation that implies a rewrite of the table on disk.
-
Committed by Lisa Owen
* docs - create ... external ... temp table * update CREATE EXTERNAL TABLE sgml docs
-
- 22 Jun 2018, 6 commits
-
-
Committed by Abhijit Subramanya
-
Committed by Ashwin Agrawal
This reverts commit a7842ea9. The issue is yet to be fully investigated, but it sometimes hits the assertion ("!(SyncRepQueueIsOrderedByLSN(mode))", File: "syncrep.c", Line: 214).
-
Committed by Ashwin Agrawal
Upstream, and on Greenplum master, if procdie is received while waiting for replication, just a WARNING is issued and the transaction moves forward without waiting for the mirror. But that would cause inconsistency for a QE if failover happens to such a mirror that is missing the commit-prepared record. If only the prepare has been performed and the primary is yet to process the commit-prepared, the gxact is present in memory; once commit-prepared processing is complete on the primary, the gxact is removed from memory. If the gxact is found, we flow through the regular commit-prepared path, emit the xlog record, and sync it to the mirror. But if the gxact is not found on the primary, we used to blindly return success to the QD. Hence, the code is modified to always call `SyncRepWaitForLSN()` before replying to the QD in case the gxact is not found on the primary. It calls `SyncRepWaitForLSN()` with the `flush` lsn value from `xlogctl->LogwrtResult`, as there is no way to find out the actual lsn of the commit-prepared record on the primary. Using that lsn is based on the following assumptions: - WAL is always written serially forward - a synchronous mirror that has xlog record xyz must have all xlog records before xyz - not finding a gxact entry in memory on the primary for a commit-prepared retry from the QD means it was definitely committed (completed) on the primary.
-
Committed by Jimmy Yih
This is needed during gprecoverseg full to preserve important files such as pg_log files. We pass this flag down the call stack to prevent other utilities such as gpinitstandby or gpaddmirror from using the new flag. The new flag can be dangerous if not used properly and should only be used when data directory file preservation is necessary.
-
Committed by Jimmy Yih
Currently, pg_basebackup has a hard restriction that the destination data directory must be empty or nonexistent. Anything of interest is expected to be moved somewhere temporarily and then copied back in. To reduce this complexity, we introduce a new flag, --force-overwrite, which deletes the directories or files that are being copied from the source data directory before doing the actual copy. Combined with the Greenplum-specific exclusion flag (-E), we are now able to preserve files of interest. Our main example is gprecoverseg full recovery and pg_log files. There have been times when a mirror fails and a full recovery runs, which drops the entire mirror directory before running pg_basebackup, erasing the mirror's log files from before the crash. This is substantially worse in the gprecoverseg rebalancing scenario, where we currently do not have pg_rewind and must run a full recovery to bring the old primary back up... which erases vast amounts of the old primary's log files. Then during the rebalance, the acting primary, which returns to being a mirror, also goes through a full recovery, so its logs as a primary are removed as well. The obvious solution would be to tar these logs out and untar them back in afterwards, but there may be other files that must be preserved, and creating a copy may be costly in environments where disk space is valued highly.
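The delete-before-copy behavior described above can be sketched as follows. This is a hypothetical Python model of the idea, not the actual pg_basebackup code: only entries being copied from the source are removed from the destination first, so anything else already present there (e.g. excluded pg_log files) survives.

```python
import shutil
from pathlib import Path

def copy_with_force_overwrite(src_dir, dst_dir):
    """Copy a data directory without requiring the target to be empty.

    For each entry coming from the source, any stale copy in the
    destination is deleted first (the force-overwrite step); entries
    that exist only in the destination are left untouched, which is
    what allows preserved files to survive a full recovery.
    """
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for entry in Path(src_dir).iterdir():
        target = dst / entry.name
        if target.is_dir():
            shutil.rmtree(target)      # force-overwrite: drop stale dir
        elif target.exists():
            target.unlink()            # force-overwrite: drop stale file
        if entry.is_dir():
            shutil.copytree(entry, target)
        else:
            shutil.copy2(entry, target)
```

In this model, a `pg_log` directory present only in the destination is never touched, while files that also exist in the source are overwritten with the fresh copies.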
-
Committed by Chuck Litzell
* Edits to apply organizational improvements made in the HAWQ version, using consistent realm and domain names, and testing that the procedures work.
* Convert tasks to topics to fix formatting. Clean up the pg_ident.conf topic.
* Convert another task to a topic
* Remove an extraneous tag
* Formatting and minor edits
* Added $ or # prompts for all code blocks. Reworked the section "Mapping Kerberos Principals to Greenplum Database Roles" to describe, generally, a user's authentication process and to more clearly describe how a principal name is mapped to a Greenplum Database role name.
* Added the krb_realm auth param, and a description of include_realm=1 for completeness.
-
- 21 Jun 2018, 4 commits
-
-
Committed by Jamie McAtamney
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io> Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by Nadeem Ghani
We have a step to run gpinitstandby in mgmt_utils.py. Removing this code makes it more likely that we standardize on using the step in mgmt_utils.py. Co-authored-by: Nadeem Ghani <nghani@pivotal.io> Co-authored-by: Kevin Yeap <kyeap@pivotal.io>
-
Committed by Nadeem Ghani
- Add mirrors with and without a standby, and ensure that the host assignment is identical between the two. - Add mirrors, then kill one, and ensure that gprecoverseg operates correctly on the newly added mirror. Co-authored-by: Nadeem Ghani <nghani@pivotal.io> Co-authored-by: Jacob Champion <pchampion@pivotal.io>
-
Committed by Kevin Yeap
Fix a bug where gpexpand would fail to run on a cluster that had a standby master but no mirrors. Co-authored-by: Nadeem Ghani <nghani@pivotal.io> Co-authored-by: Kevin Yeap <kyeap@pivotal.io>
-