- 09 Jul 2019, 1 commit
-
-
Committed by Lisa Owen
* docs: [pg->gp]_dist_wait_status UDF name change
* add sample output from Simon
-
- 08 Jul 2019, 3 commits
-
-
Committed by Jesse Zhang
In fact, if you manage to get an extra directory created, you can exercise the code being removed. It won't work anyway, with an error:

> make[1]: Entering directory '/build/gpdb/gpcontrib'
> make[1]: *** installcheck: No such file or directory. Stop.
> make[1]: Leaving directory '/build/gpdb/gpcontrib'
> make: *** [Makefile:96: installcheck] Error 2

This should have been deleted in commit ad6b920f as part of the GPHDFS removal.
-
Committed by Daniel Gustafsson
-
Committed by Pengzhou Tang
A background worker is not scheduled until distributed transactions are recovered if it needs to start at BgWorkerStart_RecoveryFinished or BgWorkerStart_ConsistentState, because it is not safe to read or write before DTX recovery completes. GPDB is designed to do this check on the master only; however, Gp_role == GP_ROLE_DISPATCH is not a sufficient check for the master. Spotted by Wang Hao <haowang@pivotal.io>
-
- 06 Jul 2019, 2 commits
-
-
Committed by Soumyadeep Chakraborty
Co-authored-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
-
Committed by Soumyadeep Chakraborty
High-level details:
1) Tear out the existing 2PC infrastructure for databases.
2) Introduce new structures (in-mem, on-disk) for making databases 2PC.
3) Make sure ALTER DATABASE SET TABLESPACE (ADST) works correctly such that no orphaned files remain in the source/target tablespace directories in cases of commit/abort.
4) Add the dboid dir to pendingDbDeletes for abort after copying the dboid dir in movedb. This ensures that the dboid dir is cleaned up from the target tablespace when we hit a failure during the course of execution of the ALTER.
5) Re-engineer tests to assert filesystem changes and accommodate multiple failure scenarios during the course of a 2PC commit/abort.
6) Introduce a new generic mechanism for making mirrors catch up. This is done by inserting an XLOG_XACT_NOOP record and then waiting for the mirror to replay the record with gp_inject_fault2. The premise is that if the mirror has replayed this latest record, it must have replayed everything before the NOOP record. We have introduced a UDF and a fault injection point to make this possible.
7) Update pg_xlogdump to dump out pending DB deletes on commit/abort.

Refactoring notes:
1) The xlog insert for database drops in `DropDatabaseDirectory` is no longer required since we are already WALing database drops elsewhere.
2) `xact_get_distributed_info_from_commit` populated variables that were useful only in the context of its sole caller, `xact_redo_distributed_commit`. Keeping it around would mean unnecessary code duplication, thus deleting it.
3) Use DropRelationFiles() instead of duplicating logic in xact_redo_distributed_commit.
4) Refactor session-level lock release for the QD during movedb().
5) smgr is a layer that pertains only to relations, so we drop the smgr prefix for pending db delete functions.
6) DropDatabaseDirectories() pertains to dropping dboid dirs at the filesystem level, which means it belongs inside storage.h/c. DropDatabaseDirectories forms the visible interface as contrasted to DropDatabaseDirectory, so the latter was moved and made static.
7) Rename DatabaseDropStorage to ScheduleDbDirDelete to call out its intent.
8) Extract pending db deletes and dboid dir removal into a dedicated file.
9) Tests for scenarios with and without faults now live in one consolidated file.

Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
Co-authored-by: Adam Berlin <aberlin@pivotal.io>
Co-authored-by: Jesse Zhang <jzhang@pivotal.io>
-
- 05 Jul 2019, 3 commits
-
-
Committed by Georgios Kokolatos
It is possible, and does happen, that a CreateExternalStmt will be written to during transformations. The transformer functions are not explicit regarding ownership, and some of them might (and do) invoke walkers that can (and do) write to the transformed statement. However, some callers of these functions pass arguments that should be considered read-only. One such example is passing a CreateExternalStmt which belongs to, and was taken from, a cached plan that will be correctly retained for subsequent executions. Create a short-lived memory context in transformCreateExternalStmt, which additionally protects against leaks in long expanding statements, and operate on a copy of the passed statement. Reviewed-by: Asim R P <apraveen@pivotal.io> Reviewed-by: BaiShaoqi <sbai@pivotal.io>
-
Committed by Georgios Kokolatos
The macro IS_FILL_WORD was generating a false-positive array-bounds warning. Converting the macro into an inline function removes the warning without altering the essence of the code. The macro was always used within a conditional, so it is unlikely to suffer performance penalties.
-
Committed by Hao Wu
* Revert "CI: skip CLI behave tests that currently fail on ubuntu18.04". This reverts commit 6a0cb2c6, re-enabling the behave tests for centos/ubuntu.
* Output the same error message format in gppkg for rpm and deb. gppkg outputs error messages when consistency is broken, but the messages were not the same, which made a pipeline fail.
* Update gppkg for the behave test and fix some minor errors in gppkg. The binary sample.gppkg is removed; instead we add sample-*.rpm and sample-*.deb, because sample.gppkg is platform specific and GP major version sensitive. The uploaded rpm/deb files are usable unless the rpm/deb file is incompatible with the specific platform. GP_MAJORVERSION is dynamically retrieved from a makefile installed in the gpdb folder, so the GP major version in the gppkg will always be correct. sample.gppkg will only be generated when the gppkg tag is provided.
-
- 04 Jul 2019, 5 commits
-
-
Committed by Peifeng Qiu
The Kerberos service names of gpdb and psql must match to allow proper Kerberos authentication. On Linux this is controlled by --with-krb-srvnam. On Windows it is hardcoded in the template pg_config.h.win32. We currently build the gpdb server without specifying --with-krb-srvnam, so the default value is "postgres". Change the corresponding hardcoded value in the Windows config template to "postgres" as well.
- Package the necessary Kerberos utilities with the Windows installer
- Add a Kerberos auth test for the Windows client
-
Committed by xiong-gang
As discussed in #6932, the size of the interconnect connection hash table is 16, which is too small for a large cluster and could cause too many hash table collisions. Set it to (2 * number of segments).
-
Committed by Paul Guo
Sanity-check update/delete to prevent potential data corruption when there are tuples with bad distribution. (#7304) Data stored with bad distribution is not unusual in a real production environment. There could be issues if the ctid for an update/delete is distributed to another segment. This patch double-checks this and errors out if needed. Reviewed-by: Jinbao Chen
-
Committed by Tang Pengzhou
* Extend the background worker framework for the old auxiliary processes

GPDB used to have its own framework to manage its auxiliary processes like FTS, GDD, DtxRecovery, Perfmon, Backoff, etc. This old framework has a few defects:
1. GPDB-only: different concepts make the code differ from upstream in many places.
2. It is not easy enough to add a new type of auxiliary process; although it tried to reduce the complexity, lots of additional code is still needed outside the core feature.
3. Backend initialization is not clear and much code is duplicated.
4. The components/resources needed are not clear.
5. It uses SIGUSR2 for shutdown, which differs from upstream.
6. Auxiliary processes need special-cased code in the workflow.

Actually, the old framework is very similar to the background worker framework in upstream. Now that we have the bgworker feature merged, it's time to convert those GPDB processes to background workers and take advantage of the background worker feature. To do this, we need to extend the background worker framework a little:

1. Extend bgw_start_time:

```c
typedef enum
{
	/* can start once the startup process is launched */
	BgWorkerStart_PostmasterStart,
	/* can start basically when it's safe to read (in GPDB, after DTX recovery is done) */
	BgWorkerStart_ConsistentState,
	/* can start basically when it's safe to read/write (in GPDB, after DTX recovery is done) */
	BgWorkerStart_RecoveryFinished,
+++	BgWorkerStart_DtxRecovering,
} BgWorkerStartTime;
```

The new type, BgWorkerStart_DtxRecovering, means a worker can be started after the startup process has finished and before DTX recovery is done. This is important because the DTX recovery process is also a bgworker now, and it also needs the FTS prober once it hits an error.

2. Add a bgw_start_rule

Some auxiliary processes can only be launched on the QD, or only when certain GUCs are set. Users can now provide a custom function that specifies the rule for whether to launch a bgworker or not.

* move fts to bgworker
* move gdd to background worker
* move dtx recovery
* move perfmon_segmentinfo to bg worker
* move backoff to bgworker
* move perfmon to bgworker

Reviewed-by: Asim R P <apraveen@pivotal.io>
Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
-
Committed by David Kimura
The issue is that COPY WITH OIDS requires an OID from the QD for each tuple insert. However, in the case of multi-insert we don't yet have the multiple OIDs required from the QD. The simplest approach is to disable multi-insert for COPY WITH OIDS. A better solution may be to send a list of OIDs from the QD which the QE would use in multi-insert.
-
- 03 Jul 2019, 5 commits
-
-
Committed by Xiaoran Wang
Sometimes gpfdist with SSL cannot start within 30 seconds.
-
Committed by Daniel Gustafsson
The mailmap file is used by Git to properly coalesce contributions which have been made by the same contributor with different email addresses or user names. Discussion: https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/zxD4x6fis0E
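For reference, a .mailmap entry maps alternate names and addresses onto one canonical identity (the identities below are made up for illustration):

```
# Canonical Name <canonical@addr> <alternate@addr>
Jane Doe <jane@greenplum.example> <jdoe@old-employer.example>
Jane Doe <jane@greenplum.example> Jane D <jane@personal.example>
```

With this file in place, `git shortlog` and `git log --use-mailmap` attribute all three identities to "Jane Doe <jane@greenplum.example>".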
-
Committed by Asim R P
Greenplum diverges from upstream PostgreSQL in abort transaction workflow. Greenplum must dispatch ABORT command to QEs. Consequently, transaction ID in top transaction state is reset before broadcasting the ABORT. Exported snapshots are closely tied to the transaction that exported them. They must be cleared before the transaction that exported them ceases to exist. In order to abide by this sequence of operations, Greenplum must also clear exported snapshots before marking the transaction as invalid. This patch moves the call to clear exported snapshots at a location that is right for Greenplum. Fixes Github issue #8020 Reviewed-by: Georgios Kokolatos and Jimmy Yih
-
Committed by Adam Lee
An FDW or external table's INSERT actions are actually just defined callbacks, not real heap table INSERT actions. Don't mark transactions as doing writes just because they insert into foreign or external tables; some FDW extensions report an error because they don't see XACT_EVENT_PRE_PREPARE coming.
-
Committed by Adam Lee
```
SET "request.header.user-agent" = 'curl/7.29.0';
```
The double quote characters are needed in the above command, but Greenplum lost them while dispatching, which resulted in a syntax error.
-
- 02 Jul 2019, 10 commits
-
-
Committed by Adam Berlin
We need to bump the catalog because of the change to the XLOG introduced by the create tablespace PR (122c79f2).
-
Committed by Daniel Gustafsson
The checkinc.py script was performing a check for the presence of header files we're not shipping. This was essentially guaranteed to always pass since the code was able to compile, and it has generally provided little value. Recent changes in macOS installations have also made it fail with false positives, so finally remove it to minimize developer time spent fighting quite useless tests. Any checks for headers should be performed in autoconf before the code is compiled, not during testing. Discussion: https://github.com/greenplum-db/gpdb/issues/7961 https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/cCXsHwPr7GM/XbExPStmAgAJ Reviewed-by: Ashwin Agrawal, Jimmy Yih
-
Committed by Mark Sliva
To match how the bin_gpdb tar.gz is created.
Co-authored-by: Mark Sliva <msliva@pivotal.io>
Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>
-
Committed by dyozie
-
Committed by Mark Sliva
For commit e995ae86
Co-authored-by: Mark Sliva <msliva@pivotal.io>
Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>
-
Committed by Ashuka Xue
ICG changes for Create LAS Correlated Apply on in filter context
-
Committed by Mark Sliva
Prior to this change it would only run on centos7, and pipelines that did not target centos7 would fail to run.
Co-authored-by: Mark Sliva <msliva@pivotal.io>
Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>
-
Committed by Mark Sliva
This reverts commit c46bf52f.
-
Committed by David Yozie
* Adding a reference topic and additional information around temp_tablespaces
* Clarify comma-separated list
-
Committed by Mark Sliva
Fix gen_pipeline.py OS issue
-
- 01 Jul 2019, 3 commits
-
-
Committed by Adam Berlin
Modify TwoPhaseFileHeader to include a tablespace OID that needs to be removed if an abort occurs. We add a 'pending tablespace for deletion' that is scheduled during the 'create tablespace' command. If a node encounters an error, it can abort using the in-memory value. The master can then send an ABORT PREPARED to all segments that have prepared the create tablespace. A successful 'create tablespace' needs to unschedule the deletion, or subsequently aborted transactions will accidentally delete the newly created tablespace. XLOG records are written for the ABORT and ABORT_PREPARED of a transaction, and these records now also contain a tablespace oid for deletion.

Notes: During CREATE TABLESPACE, if an error occurs after the XLOG_XACT_PREPARE record has been written, there should be no filesystem changes left on disk for any of the segments (master, standby, primary, or mirror). If an error occurs after the XLOG_XACT_DISTRIBUTED_COMMIT, then the create tablespace should succeed, and all tablespace filesystem changes should exist.

Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
Co-authored-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
-
Committed by Daniel Gustafsson
-
Committed by Xiaoran Wang
* Add a remote_regress folder in src/bin/gpfdist/ to test a gpfdist that runs on a remote host, such as a Windows server. make installcheck_win can run these tests.
* The start_gpfdist_remote_win scripts are used to start and stop gpfdist running on Windows. If you want to test gpfdist on another OS, such as AIX, you can create a folder named start_gpfdist_remote_aix and add scripts to start and stop gpfdist.
-
- 30 Jun 2019, 1 commit
-
-
Committed by Zhenghua Lyu
With GDD enabled, and in some simple cases (refer to commit 6ebce733 and the function checkCanOptSelectLockingClause for details), we can also optimize select statements with a locking clause and a limit clause. Greenplum generates a two-stage merge sort or limit plan to implement sort or limit, and we can only lock tuples on segments. We prefer locking more tuples on segments over locking the whole table; without the whole-table lock, performance for OLTP should be improved. Also, after lockrows the data cannot be assumed to be in order, but we do a merge gather after lockrows. This is reasonable because even in postgres, `select * from t order by c for update` cannot guarantee the result's order.
-
- 29 Jun 2019, 5 commits
-
-
Committed by Mark Sliva
The task that had been using it was removed in commit b9636096.
-
Committed by David Yozie
-
Committed by Chuck Litzell
* docs - GPCC no longer depends on gpperfmon_install
* Review comments from David
-
Committed by David Yozie
-
Committed by David Kimura
The issue is that in Postgres 9.5 there is a fixed number of xl_info flags that can be used. GPDB commit b5871009 added another flag, which in the GPDB 9.5 merge branch exceeds the number available. Ashwin noticed that the difference between the XLOG_XACT_ONE_PHASE_COMMIT and XLOG_XACT_COMMIT records is whether distributed info is set. Instead we can use the same record type in both cases if we set up the distributed info in RecordTransactionCommit() and then pass it through xact_redo().
-
- 28 Jun 2019, 2 commits
-
-
Committed by David Krieger
Now that the root cause of a single failing test on Ubuntu 18.04 has been merged (dc0a3cec), enable that test in the pipeline.
-
Committed by Daniel Gustafsson
The link to the upstream PostgreSQL documentation had a typo and was pointing to postgreql.org.
-