- 10 Jul 2019, 8 commits
-
-
Committed by Ashwin Agrawal
Taking motivation from the upstream thread https://postgr.es/m/CAD21AoDuAYsRb3Q9aobkFZ6DZMWxsyg4HOmgkwgeWNfSkTwGxw@mail.gmail.com, which has a script to detect declared but undefined functions. Running that script over the GPDB code found the following functions in that category. The script is bare-minimum: it flags these functions, but its output contains other noise as well. It could be enhanced into a proper tool, but that's for another day.
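The idea behind such a script can be sketched in a few lines of Python; this is a simplified illustration (the actual script in the upstream thread works differently, and the regexes here are crude heuristics):

```python
import re

def declared_not_defined(header_text, source_text):
    """Return function names declared in headers but never defined.

    Crude heuristic: a declaration ends in ');' while a definition is
    followed by a '{' body. Real C parsing needs a proper tool.
    """
    decls = set(re.findall(r'\b(\w+)\s*\([^)]*\)\s*;', header_text))
    defs = set(re.findall(r'\b(\w+)\s*\([^)]*\)\s*\{', source_text))
    return sorted(decls - defs)

# Hypothetical inputs for illustration:
header = "extern int used_func(int x);\nextern int dead_func(void);\n"
source = "int used_func(int x) { return x; }\n"
print(declared_not_defined(header, source))  # ['dead_func']
```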
-
Committed by Tingfang Bao
Authored-by: Tingfang Bao <bbao@pivotal.io>
-
Committed by Tingfang Bao
Authored-by: Tingfang Bao <bbao@pivotal.io>
-
Committed by Hao Wu
* Add certificates & keys and test cases for contrib/sslinfo. Use `echo` + `sed` to add/remove options in the postgresql.conf of the master and standby nodes; this method can fully restore the SSL-related options in postgresql.conf afterwards. The imperfection is that it may overwrite existing certificates and keys under the data directory. We use newly created certificates and keys instead of the certificates in src/test/ssl/ssl, because some fields needed by the test are missing from those certificates.
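The add-then-strip pattern for conf editing can be sketched in Python; this is a simplified stand-in for the commit's `echo`/`sed` approach (the marker comment is an illustrative convention, not what the test actually uses):

```python
def add_options(conf_text, options, tag="# sslinfo-test"):
    """Append GUC settings to a postgresql.conf body, tagged so they
    can be stripped later to restore the original file."""
    lines = [f"{k} = {v}  {tag}" for k, v in options.items()]
    return conf_text.rstrip("\n") + "\n" + "\n".join(lines) + "\n"

def remove_options(conf_text, tag="# sslinfo-test"):
    """Drop every line carrying the tag, restoring the original conf."""
    kept = [l for l in conf_text.splitlines() if tag not in l]
    return "\n".join(kept) + "\n"

original = "max_connections = 100\n"
patched = add_options(original, {"ssl": "on"})
print(remove_options(patched) == original)  # True
```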
-
Committed by Pengzhou Tang
Previously, if role and database were both zero, we could not reconstruct the statement in AlterSetting() to dispatch it to the QEs, and reported an error instead. In fact, when neither role nor database is specified, the statement is ALTER USER ALL; this commit reconstructs it correctly.
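The reconstruction logic can be sketched as follows; this is a hypothetical simplification of AlterSetting()'s dispatch path (function name and shape are illustrative, not the real C code):

```python
def reconstruct_alter_setting(role, database, name, value):
    """Rebuild an ALTER ... SET statement for dispatch to the QEs.

    role/database are OID-like values; 0 means "not specified".
    When both are 0 the original statement was ALTER ROLE ALL,
    which is the case this commit fixes.
    """
    setting = f"SET {name} = {value}"
    if role and database:
        return f"ALTER ROLE {role} IN DATABASE {database} {setting}"
    if role:
        return f"ALTER ROLE {role} {setting}"
    if database:
        return f"ALTER DATABASE {database} {setting}"
    return f"ALTER ROLE ALL {setting}"

print(reconstruct_alter_setting(0, 0, "work_mem", "'64MB'"))
# ALTER ROLE ALL SET work_mem = '64MB'
```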
-
Committed by David Krieger
-
Committed by David Krieger
Use pg_ctl to allow background worker connections during smart shutdown, and remove the application_name whitelist that was checked to allow smart shutdown to proceed. When pg_ctl times out or the user presses Control-C, prompt the user to either continue the smart shutdown while waiting indefinitely, or shut down in fast or immediate mode. We allow the user to send a SIGINT to interrupt a long-running smart shutdown and re-issue either a fast or an immediate shutdown. Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io> Co-authored-by: Kalen Krempely <kkrempely@pivotal.io> Co-authored-by: Shoai Lari <slari@pivotal.io> Co-authored-by: Jacob Champion <pchampion@pivotal.io> Co-authored-by: Nikolaos Kalampalikis <nkalampalikis@pivotal.io>
-
Committed by Adam Berlin
We add information to the TwoPhaseHeaderFile about which tablespace we're attempting to drop. We also schedule the tablespace for deletion (similar to CREATE TABLESPACE) on commit, waiting until the transaction has committed to perform the filesystem deletion. This follows the recommendation for relation files: delete them after the commit so no time is spent deleting files during the commit itself. We also modified the distributed_commit record to pass the to-be-dropped tablespace to the standby, and the abort record to help back out of a drop on the mirrors. - Bump the catalog for the DROP TABLESPACE xlog changes. - Extract hook functions to be implemented by separate components. The tests skip FTS probes during the test run to avoid FTS taking down a primary that has a fault point injected into it. Co-authored-by: Taylor Vesely <tvesely@pivotal.io> Co-authored-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io> Co-authored-by: Alexandra Wang <lewang@pivotal.io>
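The schedule-on-commit deletion pattern described above can be sketched as follows; class and path names are illustrative, not the real Greenplum structures:

```python
class PendingDeletes:
    """Sketch of commit-time deferred filesystem deletion: paths are
    scheduled during the transaction and only removed at commit."""
    def __init__(self):
        self.pending = []   # scheduled, not yet removed
        self.deleted = []   # stands in for actual filesystem removal
    def schedule(self, path):
        self.pending.append(path)
    def commit(self):
        # Deletions happen after the commit record is durable,
        # so the commit itself spends no time deleting files.
        self.deleted.extend(self.pending)
        self.pending.clear()
    def abort(self):
        # Nothing was removed yet, so backing out is trivial.
        self.pending.clear()

tx = PendingDeletes()
tx.schedule("/data/tblspc/16385")
tx.abort()
print(tx.deleted)  # []
tx.schedule("/data/tblspc/16385")
tx.commit()
print(tx.deleted)  # ['/data/tblspc/16385']
```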
-
- 09 Jul 2019, 8 commits
-
-
Committed by Daniel Gustafsson
Setting a variable to itself is a no-op which can be removed. This may have been introduced in error and could be masking a real bug, but if so, we have lived with it for two years, so I'm opting to remove it. Reviewed-by: Asim R P and Bhuvnesh Chaudhary
-
Committed by Daniel Gustafsson
Reviewed-by: David Yozie
-
Committed by Peifeng Qiu
-
Committed by Abhijit Subramanya
-
Committed by Sambitesh Dash
-
Committed by Jesse Zhang
This test runs in a parallel group and uses table names that collide with other tests that run before or concurrently with it, leading to difficult-to-debug CI failures. Commit 95a33fb4 already addressed a similar issue in two other tests in the same group, but somehow missed this one. This commit schema-qualifies all objects in that test. Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Lisa Owen
-
Committed by Lisa Owen
* docs - [pg->gp]_dist_wait_status UDF name change * add sample output from Simon
-
- 08 Jul 2019, 3 commits
-
-
Committed by Jesse Zhang
In fact, if you manage to get an extra directory created, you can exercise the code being removed. It won't work anyway, failing with: > make[1]: Entering directory '/build/gpdb/gpcontrib' > make[1]: *** installcheck: No such file or directory. Stop. > make[1]: Leaving directory '/build/gpdb/gpcontrib' > make: *** [Makefile:96: installcheck] Error 2 This should have been deleted in commit ad6b920f as part of the GPHDFS removal.
-
Committed by Daniel Gustafsson
-
Committed by Pengzhou Tang
A background worker that needs to start at BgWorkerStart_RecoveryFinished or BgWorkerStart_ConsistentState is not scheduled until distributed transactions have been recovered, because it is not safe to read or write before DTX recovery completes. GPDB is designed to perform this check on the master only; however, Gp_role == GP_ROLE_DISPATCH is not a sufficient test for being the master. Spotted by Wang Hao <haowang@pivotal.io>
-
- 06 Jul 2019, 2 commits
-
-
Committed by Soumyadeep Chakraborty
Co-authored-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
-
Committed by Soumyadeep Chakraborty
High-level details:
1) Tear out the existing 2PC infrastructure for databases.
2) Introduce new structures (in-memory, on-disk) for making database operations 2PC.
3) Make sure ALTER DATABASE SET TABLESPACE (ADST) works correctly, such that no orphaned files remain in the source/target tablespace directories on commit or abort.
4) Add the dboid dir to pendingDbDeletes for abort after copying the dboid dir in movedb. This ensures that the dboid dir is cleaned up from the target tablespace when a failure occurs during execution of the ALTER.
5) Re-engineer the tests to assert filesystem changes and accommodate multiple failure scenarios during the course of a 2PC commit/abort.
6) Introduce a new generic mechanism for making mirrors catch up: insert an XLOG_XACT_NOOP record and then wait for the mirror to replay it with gp_inject_fault2. The premise is that once the mirror has replayed this latest record, it must have replayed everything before the NOOP record. We introduced a UDF and a fault injection point to make this possible.
7) Update pg_xlogdump to dump pending DB deletes on commit/abort.
Refactoring notes:
1) The xlog insert for database drops in `DropDatabaseDirectory` is no longer required, since we already WAL-log database drops elsewhere.
2) `xact_get_distributed_info_from_commit` populated variables that were only useful to its sole caller, `xact_redo_distributed_commit`. Keeping it around would duplicate code unnecessarily, so it is deleted.
3) Use DropRelationFiles() instead of duplicating its logic in xact_redo_distributed_commit.
4) Refactor session-level lock release for the QD during movedb().
5) smgr is a layer that pertains only to relations, so we drop the smgr prefix from the pending-db-delete functions.
6) DropDatabaseDirectories() drops dboid dirs at the filesystem level, so it belongs in storage.h/c. DropDatabaseDirectories forms the visible interface, as contrasted with DropDatabaseDirectory, so the latter was moved and made static.
7) Rename DatabaseDropStorage to ScheduleDbDirDelete to call out its intent.
8) Extract pending db deletes and dboid dir removal into a dedicated file.
9) Tests for scenarios with and without faults now live in one consolidated file.
Co-authored-by: Taylor Vesely <tvesely@pivotal.io> Co-authored-by: Adam Berlin <aberlin@pivotal.io> Co-authored-by: Jesse Zhang <jzhang@pivotal.io>
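The mirror catch-up mechanism (point 6 above) can be sketched as a simulation; this is illustrative only, since the real code writes an XLOG_XACT_NOOP record and waits via a fault-injection point rather than polling in a loop:

```python
class Mirror:
    """Toy mirror that replays one WAL record at a time."""
    def __init__(self):
        self.replayed = 0
    def replay_one(self, wal):
        self.replayed += 1

def wait_for_mirror_catchup(primary_wal, mirror):
    """Write a no-op marker record, then wait until the mirror has
    replayed it. Since replay is ordered, replaying the marker implies
    everything before it has been replayed too."""
    primary_wal.append(("XLOG_XACT_NOOP", len(primary_wal)))
    while mirror.replayed < len(primary_wal):
        mirror.replay_one(primary_wal)

wal = [("XLOG_HEAP_INSERT", 0), ("XLOG_COMMIT", 1)]
m = Mirror()
wait_for_mirror_catchup(wal, m)
print(m.replayed)  # 3: both original records plus the marker
```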
-
- 05 Jul 2019, 3 commits
-
-
Committed by Georgios Kokolatos
It is possible, and does happen, that a CreateExternalStmt will be written to during transformations. The transformer functions are not explicit about ownership, and some of them might (and do) invoke walkers that can (and do) write to the transformed statement. However, some callers of these functions pass arguments that should be considered read-only. One such example is a CreateExternalStmt which belongs to, and is taken from, a cached plan that will rightly be retained for subsequent executions. Create a short-lived memory context in transformCreateExternalStmt, which additionally protects against leaks in long, expanding statements, and operate on a copy of the passed statement. Reviewed-by: Asim R P <apraveen@pivotal.io> Reviewed-by: BaiShaoqi <sbai@pivotal.io>
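The copy-before-transform pattern can be sketched in Python; `deepcopy` stands in for the backend's node copying, and the memory-context detail has no direct Python analogue:

```python
import copy

def transform_stmt(stmt):
    """Destructive transformation: walkers may write into the node."""
    stmt["options"].append("resolved_default")
    return stmt

def transform_on_copy(cached_stmt):
    """Operate on a copy so the cached plan's statement stays pristine,
    mirroring the commit's fix (names here are illustrative)."""
    return transform_stmt(copy.deepcopy(cached_stmt))

cached = {"kind": "CreateExternalStmt", "options": []}
result = transform_on_copy(cached)
print(cached["options"])  # [] - cached plan untouched
print(result["options"])  # ['resolved_default']
```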
-
Committed by Georgios Kokolatos
The macro IS_FILL_WORD was generating a false-positive array-bounds warning. Converting the macro into an inline function removes the warning without altering the essence of the code. The macro was always used within a conditional, so the change is unlikely to incur a performance penalty.
-
Committed by Hao Wu
* Revert "CI: skip CLI behave tests that currently fail on ubuntu18.04". This reverts commit 6a0cb2c6, re-enabling the behave tests for centos/ubuntu. * Output the same error message format from gppkg for rpm and deb. gppkg for rpm and deb emits error messages when consistency is broken, but the messages were not the same, which made a pipeline fail. * Update gppkg for the behave tests and fix some minor errors in gppkg. The binary sample.gppkg is removed; instead we add sample-*.rpm and sample-*.deb, because sample.gppkg is platform-specific and sensitive to the GP major version. The uploaded rpm/deb files are usable unless the rpm/deb file is incompatible with the specific platform. GP_MAJORVERSION is retrieved dynamically from a makefile installed in the gpdb folder, so the GP major version in the gppkg is always correct. sample.gppkg is only generated when the gppkg tag is provided.
-
- 04 Jul 2019, 5 commits
-
-
Committed by Peifeng Qiu
The kerberos service names of the gpdb server and psql must match for kerberos authentication to succeed. On Linux this is controlled by --with-krb-srvnam; on Windows it is hardcoded in the template pg_config.h.win32. We currently build the gpdb server without specifying --with-krb-srvnam, so the default value "postgres" is used. Change the corresponding hardcoded value in the Windows config template to "postgres" as well. - Package the necessary kerberos utilities with the Windows installer - Add a Kerberos auth test for the Windows client
-
Committed by xiong-gang
As discussed in #6932, the interconnect connection hash table has a fixed size of 16, which is too small for a large cluster and can cause many hash table collisions. Set it to (2 * number of segments).
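The sizing rule can be sketched as follows; the floor at the old fixed size of 16 is an assumption for illustration, not stated in the commit:

```python
def conn_htab_size(num_segments, minimum=16):
    """Size the interconnect connection hash table as the commit
    describes: 2 * number of segments (floored at the old fixed
    size of 16, which is an assumption here)."""
    return max(minimum, 2 * num_segments)

# With a fixed size of 16, a 1000-segment cluster averages dozens of
# entries per bucket; sizing by segment count bounds the load factor.
for segs in (4, 64, 1000):
    size = conn_htab_size(segs)
    print(segs, size, round(segs / size, 2))
```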
-
Committed by Paul Guo
Sanity-check update/delete to prevent potential data corruption when there are tuples with bad distribution. (#7304) Data stored with bad distribution is not unusual in real production environments. There could be problems if the ctid targeted by an update/delete actually belongs on another segment. This patch double-checks this and errors out if needed. Reviewed-by: Jinbao Chen
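The check can be sketched as follows; `target_segment` is a hypothetical stand-in for Greenplum's cdbhash distribution, used only to illustrate the error-out-on-mismatch logic:

```python
def target_segment(dist_key, num_segments):
    """Hypothetical stand-in for cdbhash: map a distribution key
    to its expected segment."""
    return hash(dist_key) % num_segments

def check_tuple_placement(dist_key, current_segment, num_segments):
    """If an UPDATE/DELETE target tuple hashes to a different segment,
    the data is mis-distributed; error out instead of corrupting it."""
    expected = target_segment(dist_key, num_segments)
    if expected != current_segment:
        raise ValueError(
            f"tuple with key {dist_key!r} belongs on segment "
            f"{expected}, found on segment {current_segment}")

# A correctly placed tuple passes silently:
check_tuple_placement("k1", target_segment("k1", 4), 4)
```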
-
Committed by Tang Pengzhou
* Extend the background worker framework for the old auxiliary processes. GPDB used to have its own framework to manage its auxiliary processes such as FTS, GDD, DtxRecovery, Perfmon, and Backoff. This old framework has a few defects:
1. It is GPDB-only: a different concept that makes the code differ from upstream in many places.
2. It is not easy to add a new type of auxiliary process; despite attempts to reduce the complexity, a lot of additional code is still needed outside the core feature.
3. Backend initialization is unclear and much code is duplicated.
4. The components/resources needed are unclear.
5. It uses SIGUSR2 for shutdown, which differs from upstream.
6. Auxiliary processes need special-case code in the workflow.
The old framework is in fact very similar to the upstream background worker framework. Now that we have the bgworker feature merged, it is time to convert those GPDB processes to background workers and take advantage of the feature. To do this, we extend the background worker framework a little:
1. Extend bgw_start_time:
typedef enum
{
	/* can start once the startup process is launched */
	BgWorkerStart_PostmasterStart,
	/* can start basically when it's safe to read (in GPDB, after DTX recovery is done) */
	BgWorkerStart_ConsistentState,
	/* can start basically when it's safe to read/write (in GPDB, after DTX recovery is done) */
	BgWorkerStart_RecoveryFinished,
	BgWorkerStart_DtxRecovering,	/* new */
} BgWorkerStartTime;
The new BgWorkerStart_DtxRecovering means a worker can be started after the startup process has finished and before DTX recovery is done. This is important because the DTX recovery process is itself a bgworker now, and it needs the FTS prober once it hits an error.
2. Add a bgw_start_rule. Some auxiliary processes may only be launched on the QD, or only when certain GUCs are set, so users can provide a custom function specifying whether to launch a bgworker.
* move fts to bgworker * move gdd to background worker * move dtx recovery * move perfmon_segmentinfo to bgworker * move backoff to bgworker * move perfmon to bgworker Reviewed-by: Asim R P <apraveen@pivotal.io> Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
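The combined phase-plus-rule gating described above can be sketched in Python; the phase ordering and the coordinator-only rule are illustrative simplifications of the C framework:

```python
# Start phases in the order the system reaches them; names follow the
# commit's enum (DtxRecovering sits between startup and consistency).
PHASES = ["PostmasterStart", "DtxRecovering", "ConsistentState",
          "RecoveryFinished"]

def can_start(worker_phase, system_phase, start_rule=None):
    """A worker may start once the system has reached its declared
    phase, and only if its optional bgw_start_rule also allows it."""
    phase_ok = PHASES.index(system_phase) >= PHASES.index(worker_phase)
    rule_ok = start_rule() if start_rule else True
    return phase_ok and rule_ok

# The DTX-recovery worker uses the new pre-recovery phase:
print(can_start("DtxRecovering", "DtxRecovering"))       # True
# A QD-only worker gates on an extra rule (illustrative):
is_coordinator = True
print(can_start("ConsistentState", "RecoveryFinished",
                start_rule=lambda: is_coordinator))      # True
```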
-
Committed by David Kimura
The issue is that COPY WITH OIDS requires an OID from the QD for each tuple insert; in the multi-insert case, the QE does not yet have the multiple OIDs required. The simplest approach is to disable multi-insert for COPY WITH OIDS. A better solution might be to send a list of OIDs from the QD for the QE to use in multi-insert.
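The decision reduces to a small predicate; this is a sketch of the logic, with illustrative names:

```python
def choose_insert_mode(with_oids, batch_supported=True):
    """Multi-insert needs one OID per tuple from the QD, which the QE
    does not have, so COPY WITH OIDS falls back to single-row inserts
    (the fix this commit makes)."""
    if with_oids:
        return "single"
    return "multi" if batch_supported else "single"

print(choose_insert_mode(with_oids=True))   # single
print(choose_insert_mode(with_oids=False))  # multi
```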
-
- 03 Jul 2019, 5 commits
-
-
Committed by Xiaoran Wang
Sometimes gpfdist with SSL cannot start within 30 seconds.
-
Committed by Daniel Gustafsson
The mailmap file is used by Git to properly coalesce contributions which have been made by the same contributor with different email addresses or user names. Discussion: https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/zxD4x6fis0E
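For reference, git's documented .mailmap format maps a canonical identity (left) to the name/email as it appears in commits (right); the names below are hypothetical examples, not entries from this repository:

```
# Canonical name/email on the left; the commit's identity on the right.
Jane Doe <jane@example.com> <jdoe@old-corp.example>
Jane Doe <jane@example.com> janedoe <jane@example.com>
```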
-
Committed by Asim R P
Greenplum diverges from upstream PostgreSQL in the abort-transaction workflow: Greenplum must dispatch the ABORT command to the QEs, and consequently the transaction ID in the top transaction state is reset before broadcasting the ABORT. Exported snapshots are closely tied to the transaction that exported them and must be cleared before that transaction ceases to exist. To abide by this sequence of operations, Greenplum must also clear exported snapshots before marking the transaction as invalid. This patch moves the call that clears exported snapshots to a location that is right for Greenplum. Fixes GitHub issue #8020 Reviewed-by: Georgios Kokolatos and Jimmy Yih
-
Committed by Adam Lee
INSERT into an FDW or external table actually runs user-defined callbacks rather than real heap-table INSERT actions. Don't mark a transaction as writing just because it inserts into foreign or external tables; some FDW extensions report an error because they never see XACT_EVENT_PRE_PREPARE coming.
-
Committed by Adam Lee
``` SET "request.header.user-agent" = 'curl/7.29.0'; ``` The double-quote characters are required in the command above, but Greenplum lost them while dispatching it, which caused a syntax error.
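The quote-preservation requirement can be illustrated with a quote-when-needed helper; this is a sketch, and the server's real quote_identifier has more rules (keywords, case folding) than shown here:

```python
import re

def quote_guc_name(name):
    """Quote a GUC name for re-dispatch if it is not a plain
    identifier, preserving the double quotes the user supplied."""
    if re.fullmatch(r'[a-z_][a-z0-9_]*', name):
        return name
    # Double any embedded quotes, then wrap the whole name.
    return '"%s"' % name.replace('"', '""')

print(quote_guc_name("work_mem"))
# work_mem
print(quote_guc_name("request.header.user-agent"))
# "request.header.user-agent"
```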
-
- 02 Jul 2019, 6 commits
-
-
Committed by Adam Berlin
We need to bump the catalog because of the change to the XLOG introduced by the CREATE TABLESPACE PR. (122c79f2)
-
Committed by Daniel Gustafsson
The checkinc.py script checked for the presence of header files we're not shipping. Since the code was able to compile, this was essentially guaranteed to always pass, and it has generally provided little value. Recent changes in macOS installations have also made it fail with false positives, so finally remove it to stop developers spending time fighting a fairly useless test. Any checks for headers should be performed in autoconf before the code is compiled, not during testing. Discussion: https://github.com/greenplum-db/gpdb/issues/7961 https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/cCXsHwPr7GM/XbExPStmAgAJ Reviewed-by: Ashwin Agrawal, Jimmy Yih
-
Committed by Mark Sliva
To match how the bin_gpdb tar.gz is created. Co-authored-by: Mark Sliva <msliva@pivotal.io> Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>
-
Committed by dyozie
-
Committed by Mark Sliva
For commit e995ae86. Co-authored-by: Mark Sliva <msliva@pivotal.io> Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>
-
Committed by Ashuka Xue
ICG changes for Create LAS Correlated Apply on in filter context
-