- 27 Jun 2018, 7 commits
-
-
Committed by Ashwin Agrawal
Upstream doesn't have it and it is not used anymore in Greenplum, so lose it.
-
Committed by Ashwin Agrawal
Now that WAL replication is enabled for the QD and QEs, the code must be enabled.
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
Upstream, and on the Greenplum master branch, if procdie is received while waiting for replication, only a WARNING is issued and the transaction moves forward without waiting for the mirror. But that would cause inconsistency for a QE if failover happens to a mirror missing the commit-prepared record. If only the prepare has been performed and the primary is yet to process the commit-prepared, the gxact is present in memory; once commit-prepared processing is complete on the primary, the gxact is removed from memory. If the gxact is found, we flow through the regular commit-prepared path, emit the xlog record, and sync it to the mirror. But if the gxact is not found on the primary, we used to blindly return success to the QD. Hence, the code is modified to always call `SyncRepWaitForLSN()` before replying to the QD in case the gxact is not found on the primary. It calls `SyncRepWaitForLSN()` with the `flush` lsn value from `xlogctl->LogwrtResult`, as there is no way to find out the actual lsn of the commit-prepared record on the primary. Using that lsn rests on the following assumptions:
- WAL is always written serially forward
- a synchronous mirror that has xlog record xyz must have all xlog records before xyz
- not finding a gxact entry in memory on the primary for a commit-prepared retry from the QD means it was for sure committed (completed) on the primary

Since a commit-prepared retry can be received when everything is done on one segment but failed on some other segment, under concurrency we may call `SyncRepWaitForLSN()` with the same lsn value multiple times, given we are using the latest flush point. Hence, in GPDB the check in `SyncRepQueueIsOrderedByLSN()` does not validate for unique entries but only validates that the queue is sorted, which is what is required for correctness. Without this, ICW tests can hit the assertion "!(SyncRepQueueIsOrderedByLSN(mode))".
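The retry decision above can be modeled as a small sketch. This is an illustrative Python model, not the actual C implementation; the names `handle_commit_prepared_retry` and `LOGWRT_FLUSH_LSN` are hypothetical stand-ins for the gxact lookup, `SyncRepWaitForLSN()`, and `xlogctl->LogwrtResult`:

```python
# Hypothetical model of the commit-prepared retry flow described above.
LOGWRT_FLUSH_LSN = 5000  # stand-in for the latest flush lsn in xlogctl->LogwrtResult


def sync_rep_wait_for_lsn(lsn):
    """Stand-in for SyncRepWaitForLSN(): block until the synchronous
    mirror has flushed WAL up to `lsn`."""
    return f"waited for mirror flush up to {lsn}"


def handle_commit_prepared_retry(gxact_table, gid):
    if gid in gxact_table:
        # Normal path: gxact still in memory, so emit the
        # commit-prepared xlog record and sync it to the mirror.
        lsn = gxact_table.pop(gid)
        return ("emitted-and-synced", sync_rep_wait_for_lsn(lsn))
    # gxact gone: already committed locally. Before replying success
    # to the QD, wait on the latest flush lsn; because WAL is serial,
    # a mirror holding this lsn also holds the commit-prepared record.
    return ("already-committed", sync_rep_wait_for_lsn(LOGWRT_FLUSH_LSN))
```

In this model the fallback branch is why the same flush lsn can be waited on multiple times under concurrent retries, matching the relaxed ordering check described above.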
-
Committed by Shivram Mani
Added a new test job to the pipeline to certify GPHDFS with the MAPR Hadoop distribution, and renamed the existing GPHDFS certification job to state that it tests with generic Hadoop. The MAPR cluster consists of 1 node deployed by CCP scripts into GCE.
- MAPR 5.2
- Parquet 1.8.1
Co-authored-by: Alexander Denissov <adenissov@pivotal.io>
Co-authored-by: Shivram Mani <smani@pivotal.io>
Co-authored-by: Francisco Guerrero <aguerrero@pivotal.io>
-
Committed by Jimmy Yih
A check was added during the 9.1 merge to verify that the filepath to be created for a new sequence would not collide with an existing file. However, the filepath that is constructed does not use the sequence OID value that was just generated; it uses whatever value happens to be in that piece of memory at the time. The check usually passes, especially in our CI testing, but occasionally a sequence would fail to be created because the effectively random filepath already existed. Fix the issue by storing the generated OID in the RelFileNode variable that is passed into the filepath construction.
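The bug class above can be sketched in a few lines. This is an illustrative Python model, not the actual Greenplum C code; the struct layout and helper names (`rel_filepath`, `check_collision_*`) are hypothetical:

```python
# Hypothetical model of the uninitialized-OID collision check described above.
from dataclasses import dataclass


@dataclass
class RelFileNode:
    spcNode: int
    dbNode: int
    relNode: int = 0  # in the buggy version this was never set to the new OID


def rel_filepath(rnode):
    # Simplified path construction: base/<dbNode>/<relNode>
    return "/".join(["base", str(rnode.dbNode), str(rnode.relNode)])


def check_collision_buggy(new_oid, existing_files):
    # Bug: relNode is left at whatever was in memory, so the wrong path
    # is checked and a real collision can be missed.
    rnode = RelFileNode(spcNode=1663, dbNode=16384)
    return rel_filepath(rnode) in existing_files


def check_collision_fixed(new_oid, existing_files):
    # Fix: store the freshly generated OID before building the path.
    rnode = RelFileNode(spcNode=1663, dbNode=16384, relNode=new_oid)
    return rel_filepath(rnode) in existing_files
```

With an existing file at the new OID's path, the buggy check reports no collision while the fixed check catches it.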
-
Committed by Lisa Owen
-
- 26 Jun 2018, 6 commits
-
-
Committed by Ning Yu
Resource group capabilities could be missing in an ALTER command, e.g.:
- create resgroup rg1 with v5.0, which does not support cpuset (cap=7);
- binary switch to v5.10 (suppose it supports cpuset);
- alter rg1's cpu_rate_limit; it will also `update` the cpuset cap.

Because rg1 was created with v5.0, there is no cap=7 row in the catalog table pg_resgroupcapability, so the `update` operation will raise an error as the expected tuple cannot be found. The proper behavior is to fall back to `insert` in such a case. Test cases are not included as this is already covered by the existing binary swap test resgroup_current_3_queue. (cherry picked from commit 95f215d9)
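The update-vs-insert fallback can be sketched as follows. This is a minimal Python model with hypothetical names; the real code manipulates pg_resgroupcapability tuples in C:

```python
# Hypothetical model of the ALTER RESOURCE GROUP capability fallback.

def alter_capability_buggy(catalog, key, value):
    # Old behavior: a plain update errors out when the capability row
    # is missing (e.g. created on an older binary without cpuset).
    if key not in catalog:
        raise LookupError("expected capability tuple not found")
    catalog[key] = value


def alter_capability_fixed(catalog, key, value):
    # Fixed behavior: fall back to insert when the row does not exist.
    catalog[key] = value
```

A group created before the capability existed simply gets the row inserted instead of hitting an error.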
-
Committed by Jesse Zhang
When the server is built with `--disable-orca`, we shouldn't (and used not to) allow setting the option `optimizer=on`. Upstream Postgres 9.1 commit 2594cf0e introduced the check hook in a GUC code refactoring, and it seems we regressed in the 9.1 merge by forgetting to signal the calling code in `call_bool_check_hook` to error out. This commit fixes that by reintroducing the error.
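The intended behavior can be modeled briefly. This is a hedged Python sketch, not the real C GUC machinery; `ORCA_ENABLED`, `check_optimizer`, and `set_guc` are hypothetical stand-ins for the build flag, the check hook, and `call_bool_check_hook`:

```python
# Hypothetical model of a GUC check hook rejecting optimizer=on
# on a build without ORCA.
ORCA_ENABLED = False  # stand-in for a --disable-orca build


def check_optimizer(new_value):
    """Model of the check hook: return False to reject the setting."""
    if new_value and not ORCA_ENABLED:
        return False
    return True


def set_guc(name, value):
    """Model of the calling code: the regression was that a False
    result from the hook no longer raised an error."""
    if name == "optimizer" and not check_optimizer(value):
        raise ValueError("ORCA is not supported by this build")
    return value
```

With the error reintroduced, `optimizer=off` is still accepted while `optimizer=on` fails loudly instead of being silently set.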
-
Committed by mkiyama
-
Committed by mkiyama
-
Committed by Daniel Gustafsson
-
Committed by Sambitesh Dash
This reverts commit 1ec65820.
-
- 25 Jun 2018, 2 commits
-
-
Committed by Sambitesh Dash
-
Committed by Sambitesh Dash
-
- 23 Jun 2018, 4 commits
-
-
Committed by Sambitesh Dash
QP_memory_accounting tests have been moved to the isolation2 test suite, so we no longer need this job in the pipeline.
-
Committed by Ivan Leskin
* Change src/backend/access/external functions to extract and pass query constraints
* Add a field with constraints to 'ExtProtocolData'
* Add 'pxffilters' to gpAux/extensions/pxf and modify the extension to use pushdown
* Remove the duplicate '=' check in PXF: remove the check for a duplicate '=' in the parameters of an external table. Some databases (MS SQL, for example) may use '=' in a database name or other parameters. PXF now finds the first '=' in a parameter and treats the whole remaining string as the parameter value
* Disable pushdown by default
* Disallow passing constraints of type boolean (the decoding fails on the PXF side)
* Fix implicit AND expression addition: fix the implicit addition of extra 'BoolExpr' nodes to a list of expression items. Before, there was a check that the expression items list did not contain logical operators (and if it did, no extra implicit AND operators were added). This behaviour is incorrect. Consider the following query: SELECT * FROM table_ex WHERE bool1=false AND id1=60003; Such a query is translated as a list of three items: 'BoolExpr', 'Var', and 'OpExpr'. Due to the presence of a 'BoolExpr', the extra implicit 'BoolExpr' was not added, and we got an error "stack is not empty ...". This commit changes the signatures of some internal pxffilters functions to fix this error: we pass the number of required extra 'BoolExpr's to 'add_extra_and_expression_items'. As 'BoolExpr's of different origin may be present in the list of expression items, the mechanism of freeing the BoolExpr node changes. The current mechanism of implicit AND expression addition is suitable only until OR operators are introduced (we will then have to add those expressions to different parts of the list, not just the end, as done now).
-
Committed by Soumyadeep Chakraborty
* Added details of how aosegments tables are named: 1) how aosegments tables are initially named and how they are named following a DDL operation; 2) a method to get the current aosegments table for a particular AO table.
* Detail: creation of new aosegments tables post DDL. Incorporated PR feedback by including details about the creation process of new aosegments tables after a DDL operation that implies a rewrite of the table on disk.
-
Committed by Lisa Owen
* docs - create ... external ... temp table
* update CREATE EXTERNAL TABLE sgml docs
-
- 22 Jun 2018, 6 commits
-
-
Committed by Abhijit Subramanya
-
Committed by Ashwin Agrawal
This reverts commit a7842ea9. Yet to fully investigate the issue, but it sometimes hits the assertion ("!(SyncRepQueueIsOrderedByLSN(mode))", File: "syncrep.c", Line: 214).
-
Committed by Ashwin Agrawal
Upstream, and on the Greenplum master branch, if procdie is received while waiting for replication, only a WARNING is issued and the transaction moves forward without waiting for the mirror. But that would cause inconsistency for a QE if failover happens to a mirror missing the commit-prepared record. If only the prepare has been performed and the primary is yet to process the commit-prepared, the gxact is present in memory; once commit-prepared processing is complete on the primary, the gxact is removed from memory. If the gxact is found, we flow through the regular commit-prepared path, emit the xlog record, and sync it to the mirror. But if the gxact is not found on the primary, we used to blindly return success to the QD. Hence, the code is modified to always call `SyncRepWaitForLSN()` before replying to the QD in case the gxact is not found on the primary. It calls `SyncRepWaitForLSN()` with the `flush` lsn value from `xlogctl->LogwrtResult`, as there is no way to find out the actual lsn of the commit-prepared record on the primary. Using that lsn rests on the following assumptions:
- WAL is always written serially forward
- a synchronous mirror that has xlog record xyz must have all xlog records before xyz
- not finding a gxact entry in memory on the primary for a commit-prepared retry from the QD means it was for sure committed (completed) on the primary
-
Committed by Jimmy Yih
This is needed during gprecoverseg full recovery to preserve important files such as pg_log files. We pass this flag down the call stack to prevent other utilities such as gpinitstandby or gpaddmirrors from using the new flag. The new flag can be dangerous if not used properly and should only be used when data directory file preservation is necessary.
-
Committed by Jimmy Yih
Currently, pg_basebackup has a hard restriction that the destination data directory must be empty or nonexistent. Anything of interest is expected to be moved somewhere temporarily and then copied back in. To reduce this complexity, we introduce a new flag, --force-overwrite, which deletes the directories or files that are about to be copied from the source data directory before doing the actual copy. Combined with the Greenplum-specific exclusion flag (-E), we are now able to preserve files of interest.

Our main example is gprecoverseg full recovery and pg_log files. There have been times when a mirror failed and a full recovery ran, dropping the entire mirror directory before running pg_basebackup and thereby erasing the mirror's log files from before the crash. This is substantially worse in the gprecoverseg rebalancing scenario, where we currently do not have pg_rewind and must run a full recovery to bring the old primary back up, erasing vast amounts of old primary log files. Then during rebalance, the acting primary, which returns to being a mirror, also goes through a full recovery, so its logs as a primary are removed as well. The obvious workaround would be to tar these logs out and untar them back in afterwards, but other files may also need to be preserved, and creating a copy may be costly in environments where disk space is at a premium.
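The remove-before-copy idea can be sketched as follows. This is an illustrative Python model with hypothetical names; the real change lives in pg_basebackup's C code and operates on real directories:

```python
# Hypothetical model of --force-overwrite: delete only the entries that
# will be re-copied from the source, so excluded entries (e.g. pg_log)
# survive in place in the destination data directory.

def basebackup_copy(src, dest, excluded, force_overwrite=False):
    # Entries excluded (as with the Greenplum-specific -E flag) are
    # neither copied nor deleted.
    to_copy = {name: data for name, data in src.items() if name not in excluded}
    if not force_overwrite and dest:
        # Old behavior: destination must be empty or nonexistent.
        raise RuntimeError("destination data directory is not empty")
    if force_overwrite:
        for name in to_copy:
            dest.pop(name, None)  # remove only what is about to be re-copied
    dest.update(to_copy)
    return dest
```

In this model, a pre-existing pg_log entry in the destination is untouched while everything else is refreshed from the source.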
-
Committed by Chuck Litzell
* Edits to apply organizational improvements made in the HAWQ version, using consistent realm and domain names, and testing that procedures work
* Convert tasks to topics to fix formatting; clean up the pg_ident.conf topic
* Convert another task to a topic
* Remove extraneous tag
* Formatting and minor edits
* Added $ or # prompts for all code blocks
* Reworked the section "Mapping Kerberos Principals to Greenplum Database Roles" to describe, generally, a user's authentication process and to more clearly describe how a principal name is mapped to a Greenplum Database name
* Add the krb_realm auth param
* Add a description of include_realm=1 for completeness
-
- 21 Jun 2018, 8 commits
-
-
Committed by Jamie McAtamney
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by Nadeem Ghani
We have a step to run gpinitstandby in mgmt_utils.py. Removing this code makes it more likely that we standardize on using the step in mgmt_utils.py.
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
Co-authored-by: Kevin Yeap <kyeap@pivotal.io>
-
Committed by Nadeem Ghani
- Add mirrors with and without a standby, and ensure that the host assignment is identical between the two.
- Add mirrors, then kill one, and ensure that gprecoverseg operates correctly on the newly added mirror.
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
Co-authored-by: Jacob Champion <pchampion@pivotal.io>
-
Committed by Kevin Yeap
Fix a bug where gpexpand would fail to run on a cluster that had a standby master but no mirrors.
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
Co-authored-by: Kevin Yeap <kyeap@pivotal.io>
-
Committed by Nadeem Ghani
The gparray object was taking the existence of a standby as evidence that the cluster had mirrors.
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
Co-authored-by: Kevin Yeap <kyeap@pivotal.io>
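The standby-counted-as-mirror bug can be sketched briefly. This is a hedged Python model with hypothetical names, not the actual gparray code; in Greenplum, the master and standby have content id -1 while data segments have content ids >= 0:

```python
# Hypothetical model of the gparray standby/mirror confusion.
from dataclasses import dataclass


@dataclass
class Segment:
    content: int  # -1 for master/standby, >= 0 for data segments
    role: str     # 'p' (primary) or 'm' (mirror)


def has_mirrors_buggy(segments):
    # Wrong: a standby master (content == -1, role 'm') makes this True
    # even when no data segment has a mirror.
    return any(seg.role == "m" for seg in segments)


def has_mirrors_fixed(segments):
    # Only mirrors of actual data segments count as evidence of mirrors.
    return any(seg.role == "m" and seg.content >= 0 for seg in segments)
```

A cluster with a standby but no segment mirrors is then classified correctly.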
-
Committed by Daniel Gustafsson
-
Committed by Jimmy Yih
After -Werror=implicit-function-declaration was introduced in our configure file, Cmockery unit tests do not seem to compile on OSX. I am not sure how these compile on Linux, but this patch should fix the issue for any OS hitting the same problem. Reference to the -Werror=implicit-function-declaration addition: https://github.com/greenplum-db/gpdb/commit/a3104caa3b0619361f77f3d36ec6563e6c397545
-
Committed by Lisa Owen
-
- 20 Jun 2018, 6 commits
-
-
Committed by skahler-pivotal
-
Committed by Mel Kiyama
- Change the command that tests email notification to a psql command.
- Remove the old example that uses the Gmail public SMTP server.
-
Committed by Jim Doty
-
Committed by Dhanashree Kashid
Add tests to ensure sane behavior when a subquery appears nested inside a scalar expression. The intent is to check for correct results. Bump ORCA version to 2.63.0.
Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
-
Committed by Jimmy Yih
The pg_log directory has always been excluded using the pg_basebackup exclude option (-E ./pg_log). With this change, we add it to the static exclusion list inside basebackup. This lets us remove all instances of mkdir pg_log from our management utilities; previously, the utilities always had to create the pg_log directory after running pg_basebackup because the postmaster validates that the pg_log path exists. This also aligns us better with upstream Postgres, since the pg_basebackup exclude option is Greenplum-specific and not really needed at all: our dynamic exclusion list hasn't changed for a very long time (so it's pretty much static anyway) and is not well maintained in the utilities. We may actually remove the pg_basebackup exclude option in the near future.
-
Committed by mkiyama
-
- 19 Jun 2018, 1 commit
-
-
Committed by Lisa Owen
* docs - docs and updates for pgbouncer 1.8.1
* some edits requested by david
* add pgbouncer config page to see also, include directive
* add auth_hba_type config param
* ldap - add info to migrating section, remove ldap passwds
* remove ldap note
-