- 28 May 2018 (2 commits)
-
-
Committed by Ning Yu

* rpt: reorganize data when ALTER from/to replicated.

  There was a bug that altering a table from/to replicated had no effect; the root cause is that we neither changed gp_distribution_policy nor reorganized the data. Now we perform the data reorganization by creating a temp table with the new distribution policy and transferring all the data to it.

* rpt: support RETURNING for replicated tables.

  This is to support the syntax below (suppose foo is a replicated table):

      INSERT INTO foo VALUES(1) RETURNING *;
      UPDATE foo SET c2=c2+1 RETURNING *;
      DELETE FROM foo RETURNING *;

  A new motion type, EXPLICIT GATHER MOTION, is introduced in EXPLAIN output; in this motion type data is received from one explicit sender.

* rpt: fix motion type under explicit gather motion.

  Consider the query below:

      INSERT INTO foo SELECT f1+10, f2, f3+99 FROM foo
        RETURNING *, f1+112 IN (SELECT q1 FROM int8_tbl) AS subplan;

  We used to generate a plan like this:

      Explicit Gather Motion 3:1 (slice2; segments: 3)
        -> Insert
           -> Seq Scan on foo
           SubPlan 1 (slice2; segments: 3)
             -> Gather Motion 3:1 (slice1; segments: 1)
                -> Seq Scan on int8_tbl

  A gather motion is used for the subplan, which is wrong and will cause a runtime error. A correct plan looks like this:

      Explicit Gather Motion 3:1 (slice2; segments: 3)
        -> Insert
           -> Seq Scan on foo
           SubPlan 1 (slice2; segments: 3)
             -> Materialize
                -> Broadcast Motion 3:3 (slice1; segments: 3)
                   -> Seq Scan on int8_tbl

* rpt: add test case with both PRIMARY and UNIQUE.

  On a replicated table we can set both PRIMARY KEY and UNIQUE constraints; test cases are added to keep this feature working during future development.
- 26 May 2018 (1 commit)
-
-
Committed by Ashwin Agrawal
-
- 25 May 2018 (3 commits)
-
-
Committed by Daniel Gustafsson

Running the full ICW under Travis was problematic, but the unit tests don't require a running cluster, so let's run them to boost our test coverage a little. This changes the invocation of mocker.py to accommodate how Travis needs it to be, without changing its effect.
-
Committed by Ning Yu

* gdd: alloc MyProcPort on stack.

  We used to allocate MyProcPort with malloc() and did not check the result; it was reported to trigger a crash in the gprecoverseg test:

      (gdb) bt
      0x00007f56b506794f in __strlen_sse42 () from /lib64/libc.so.6
      0x00000000009f7894 in write_message_to_server_log ()
      0x00000000009fc200 in send_message_to_server_log ()
      0x00000000009ffb85 in EmitErrorReport ()
      0x00000000009fccf5 in errfinish ()
      0x0000000000acd99f in FtsTestSegmentDBIsDown ()

  Now we allocate it directly on the stack. We could not find a way to construct a test case for this issue, but according to the report it should be covered by existing tests.

* gdd: store name of super user on stack.

  We used to store the name of the super user in a buffer allocated by strdup() and did not check the result; it was reported to trigger a crash in the gprecoverseg test. Now we store it in a buffer on the stack. A fatal error is raised if no super user can be found. We could not figure out a way to construct a test case for this change, but according to the report it should be covered by existing tests.
-
Committed by Jimmy Yih

The Postgres 9.1 merge introduced a problem where issuing a COPY FROM to a partition table could result in an unexpected error, "ERROR: extra data after last expected column", even though the input file was correct. This would happen if the partition table had partitions whose relnatts were not all the same (e.g. ALTER TABLE DROP COLUMN, ALTER TABLE ADD COLUMN, and then ALTER TABLE EXCHANGE PARTITION). The internal COPY logic would always use the COPY state's relation, the partition root, instead of the actual partition's relation to obtain the relnatts value. In fact, the only reason this is seen intermittently is that the COPY logic, when working on a leaf partition's relation with a different relnatts value, was looking beyond a boolean array's allocated memory and got a phony value that happened to evaluate to TRUE.

Co-authored-by: Jimmy Yih <jyih@pivotal.io>
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
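A minimal sketch of the bug described above, in Python rather than the actual C COPY code: a per-column flag array is sized from the partition root's column count but indexed with a leaf partition's larger relnatts. The names and sizes here are illustrative only.

```python
# Sketch (not the real C code): sizing a per-column flag array from the
# partition root's column count, then indexing it with a leaf's larger count.

ROOT_NATTS = 3   # columns on the partition root
LEAF_NATTS = 4   # a leaf that went through DROP/ADD/EXCHANGE PARTITION

def extra_data_check(flags, natts):
    """Report 'extra data' if any column slot up to natts is flagged."""
    return any(flags[i] for i in range(natts))

flags_sized_for_root = [False] * ROOT_NATTS   # BUG: sized from the root

try:
    extra_data_check(flags_sized_for_root, LEAF_NATTS)
except IndexError:
    # In C this out-of-bounds read returned garbage that looked TRUE,
    # producing the spurious "extra data after last expected column" error.
    pass

flags_sized_for_leaf = [False] * LEAF_NATTS   # fix: size from the leaf itself
assert extra_data_check(flags_sized_for_leaf, LEAF_NATTS) is False
```

Python raises an IndexError where C silently reads past the allocation, which is why the C bug showed up only intermittently.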
-
- 24 May 2018 (5 commits)
-
-
Committed by Daniel Gustafsson

gpfdist was mistakenly misspelled as gpfidst; this fixes all found occurrences in code and documentation.
-
Committed by Adam Lee
It's broken due to a gppylib issue which is not going to be fixed.
-
Committed by Jimmy Yih

To verify that autovacuum actually freezes template0, we used to just busy-wait for about two minutes, expecting to observe the change of pg_database.datfrozenxid. While this "usually works", it is too sensitive to the amount of time it takes to vacuum freeze template0. Specifically, in some of our very I/O-deprived environments, this process sometimes takes slightly longer than two minutes. This patch introduces a fault injector to help us observe the expected vacuuming. The wait-in-a-loop is still there, but the bulk of the uncertain timing is now before the loop, not during the loop.

Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
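The wait-in-a-loop pattern described above can be sketched as a deadline-bounded poll. This is a generic illustration, not the test's actual code; the two-minute timeout and the condition callback are stand-ins.

```python
import time

def wait_until(condition, timeout_s=120.0, poll_s=1.0):
    """Busy-wait until condition() is true or timeout_s elapses.

    Returns True if the condition was observed, False on timeout.
    The flakiness described above comes from the fixed timeout_s:
    on slow I/O, the vacuum can take longer than the budget.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_s)
    return condition()  # one last probe after the deadline

# e.g. wait_until(lambda: datfrozenxid_changed(), timeout_s=120)
# where datfrozenxid_changed() is a hypothetical check against pg_database.
```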
-
Committed by Jim Doty

The gen pipeline script outputs a suggested command when setting up a dev pipeline. Currently the git remote and git branch have to be edited before executing the command. Since oftentimes the branch has already been created and is tracking a remote, it is possible to guess those details. The case statements attempt to prevent suggesting the production branches, and fall back to the same string as before.

Authored-by: Jim Doty <jdoty@pivotal.io>
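The guard described above can be sketched as follows. This is a hypothetical Python rendering, not the script's actual case statements; the set of production branch names and the placeholder string are illustrative assumptions.

```python
import subprocess

# Illustrative guess at the protected branches; the real script's list
# may differ.
PRODUCTION_BRANCHES = {"master", "5X_STABLE"}

def suggest_branch(current_branch, fallback="<branch>"):
    """Suggest the tracked branch, unless it looks like production."""
    if not current_branch or current_branch in PRODUCTION_BRANCHES:
        return fallback          # same placeholder string as before
    return current_branch

def current_git_branch():
    """Ask git for the checked-out branch name (empty string on failure)."""
    try:
        out = subprocess.check_output(
            ["git", "rev-parse", "--abbrev-ref", "HEAD"])
        return out.decode().strip()
    except (OSError, subprocess.CalledProcessError):
        return ""
```

A dev branch is echoed back verbatim, while `master` falls through to the old editable placeholder.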
-
Committed by Mel Kiyama

- Move cast examples to the install guide.
- Add a pointer from the install guide to the admin guide for the cast description.
-
- 23 May 2018 (5 commits)
-
-
Committed by Todd Sedano

Fixed typo [ci skip]

Authored-by: Todd Sedano <tsedano@pivotal.io>
-
Committed by Ashwin Agrawal
This was left behind to get converted to seconds by commit 7d59d215ec065983c666b80bc2c982d13e476c48. Most likely reason for gpexpand jobs.
-
Committed by Ashwin Agrawal

This is a cherry-pick of upstream commit c61559ec.

---------------------
Reduce pg_ctl's reaction time when waiting for postmaster start/stop.

pg_ctl has traditionally waited one second between probes for whether the start or stop request has completed. That behavior was embodied in the original shell script written in 1999 (commit 5b912b08) and I doubt anyone's questioned it since. Nowadays, machines are a lot faster, and the shell script is long since replaced by C code, so it's fair to reconsider how long we ought to wait.

This patch adjusts the coding so that the wait time can be any even divisor of 1 second, and sets the actual probe rate to 10 per second. That's based on experimentation with the src/test/recovery TAP tests, which include a lot of postmaster starts and stops. This patch alone reduces the (non-parallelized) runtime of those tests from ~4m30s to ~3m5s on my machine. Increasing the probe rate further doesn't help much, so this seems like a good number.

In the real world this probably won't have much impact, since people don't start/stop production postmasters often, and the shutdown checkpoint usually takes nontrivial time too. But it makes development work and testing noticeably snappier, and that's good enough reason for me.

Also, by reducing the dead time in postmaster restart sequences, this change has made it easier to reproduce some bugs that have been lurking for awhile. Patches for those will follow.

Discussion: https://postgr.es/m/18444.1498428798@sss.pgh.pa.us
---------------------
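The "any even divisor of 1 second" constraint above can be made concrete with a little arithmetic. This is an illustrative Python sketch, not pg_ctl's C code; only the 1-probe-per-second and 10-probes-per-second figures come from the commit message.

```python
# Per-probe wait derived from a probe rate that must divide one second evenly.

USEC_PER_SEC = 1_000_000

def probe_interval_usec(waits_per_sec):
    """Microseconds between probes for a given probe rate."""
    assert USEC_PER_SEC % waits_per_sec == 0, "must be an even divisor of 1s"
    return USEC_PER_SEC // waits_per_sec

# Historical behavior: 1 probe/sec -> 1,000,000 us between probes.
assert probe_interval_usec(1) == 1_000_000
# Patched behavior: 10 probes/sec -> 100,000 us between probes.
assert probe_interval_usec(10) == 100_000
```

With postmaster start/stop typically completing well under a second, cutting the probe interval from 1 s to 100 ms removes most of the dead time per start/stop cycle, which is where the ~4m30s to ~3m5s test-suite improvement comes from.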
-
Committed by Jim Doty

Authored-by: Jim Doty <jdoty@pivotal.io>
-
Committed by Jim Doty

Authored-by: Jim Doty <jdoty@pivotal.io>
-
- 22 May 2018 (7 commits)
-
-
Committed by Jamie McAtamney

This test explicitly creates some tables and verifies that they can be read from after gpaddmirrors has been run on the cluster. The existing TINC test that does this will be deleted after all the new gpaddmirrors tests have been added. We've also added some code to allow cluster creation on more than one segment host.

Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
Committed by Jinbao Chen
-
Committed by Richard Guo

By design, all array types should be hashable by GPDB. Issue 3741 shows that GPDB raised an error because the text array type was not hashable. This commit fixes that.
-
Committed by Jacob Champion

AIX builds are currently failing with the following error message:

    ../../config/install-sh: utils/errcodes.h does not exist.

There is currently a hardcoded list of headers that must be generated before installing src/include on AIX. errcodes.h came in as part of the 9.1 merge, and it was missed in this list. For good measure, add this to the Windows build as well (it's not currently failing there) so that some future unrelated change doesn't introduce this same build failure.

Co-authored-by: Asim Praveen <apraveen@pivotal.io>
-
Committed by Jacob Champion
There are no distributed transactions during binary upgrade, so we can ignore them.
-
Committed by Jacob Champion
VACUUM FREEZE must function correctly during binary upgrade so that the new cluster's catalogs don't contain bogus transaction IDs. Do a simple check on the QD in our test script, by querying the age of all the rows in gp_segment_configuration.
-
Committed by Jacob Champion

As part of the merge with PostgreSQL 9.1, two gpdb/master commits were incorrectly reverted. This change replays the following commits:

    commit dd42f3d4
    Author: Todd Sedano <tsedano@pivotal.io>
    Date:   Thu May 17 10:47:30 2018 -0700

        Syncing python-dependencies with pythonsrc-ext

        In comparing https://github.com/greenplum-db/pythonsrc-ext to
        python-dependencies.txt, there are several differences.
        pythonsrc-ext is the vendored python repo, so it should be in
        sync with pythonsrc-ext.

        argparse is in pythonsrc-ext, adding it to python-dependencies.txt.
        enum34, parse-type, ptyprocess, six are not in pythonsrc-ext,
        so moving them to python-developer-dependencies.txt.

        Authored-by: Todd Sedano <tsedano@pivotal.io>

    commit 7f9f11d3
    Author: Daniel Gustafsson <dgustafsson@pivotal.io>
    Date:   Fri May 18 09:43:48 2018 +0200

        Fix typo in comment
-
- 21 May 2018 (1 commit)
-
-
Committed by Hubert Zhang

Add a hook framework in the signal handlers (e.g. StatementCancelHandler or die) to cancel slow functions in third-party libraries. Add a PLy_handle_cancel_interrupt hook to cancel plpython, which uses Py_AddPendingCall to cancel embedded Python asynchronously.
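Py_AddPendingCall is a CPython C API that schedules a callback to run at the interpreter's next safe point. The closest pure-Python analogue of the cancellation flow described above is raising an exception from a signal handler, sketched below; this is illustrative only and is not the hook code (the exception name and the choice of SIGALRM as a stand-in for the cancel signal are assumptions).

```python
import signal

class QueryCancelled(Exception):
    """Stand-in for the error raised when a statement is cancelled."""

def cancel_handler(signum, frame):
    # Like the pending call the hook registers via Py_AddPendingCall:
    # interrupt the embedded interpreter at its next safe point.
    raise QueryCancelled

# Stand-in for hooking StatementCancelHandler / die.
signal.signal(signal.SIGALRM, cancel_handler)
```

Delivering the signal then aborts whatever slow Python loop is running, which is the behavior the hook framework wants for slow third-party functions.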
-
- 19 May 2018 (2 commits)
-
-
Committed by Ashwin Agrawal

Many places in the code need to check whether a table uses row- or column-oriented storage, which basically means checking whether it is an append-optimized table or not. Currently this is done by combining two macros, RelationIsAoRows() and RelationIsAoCols(). Simplify this with a new macro, RelationIsAppendOptimized().
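The new macro is C; the simplification it performs is just an OR of the two existing checks, sketched here in Python with illustrative field names (the real macros inspect the relation's storage metadata).

```python
from dataclasses import dataclass

@dataclass
class Relation:
    is_ao_rows: bool = False   # row-oriented append-optimized storage
    is_ao_cols: bool = False   # column-oriented append-optimized storage

def relation_is_append_optimized(rel):
    """One combined check instead of two macro calls at every call site."""
    return rel.is_ao_rows or rel.is_ao_cols

assert relation_is_append_optimized(Relation(is_ao_rows=True))
assert relation_is_append_optimized(Relation(is_ao_cols=True))
assert not relation_is_append_optimized(Relation())  # plain heap table
```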
-
Committed by Asim R P

This is the final batch of commits from PostgreSQL 9.1 development, up to the point where the REL9_1_STABLE branch was created and 9.2 development started on the PostgreSQL master branch.

Notable upstream changes:

* Changes to "temporary" vs. "permanent" relation handling. Previously, relations were classified as either temporary or not-temporary. Now there is a concept of "relation persistence", which includes un-WAL-logged (but permanent) tables as a third option. Additionally, 9.1 adds the backend ID to the relpath of temporary tables. Because GPDB does not have a 1:1 relationship between backends and temp relations, this has been reverted for now and the concept of a special `TempRelBackendId` has been added. We may look into backporting pieces of the parallel session support from future versions of PostgreSQL to minimize this difference. There is currently a fatal flaw in our handling of relfilenodes for temporary relations: it is possible (though unlikely) for a temporary sequence and a permanent relation (or vice versa) to point to the same shared buffer in the buffer cache. This may lead to corruption of that buffer. Our current attempt at a fix for this issue isn't good enough yet, so the offending code is marked with FIXMEs. We plan to solve this problem with a total overhaul of the way temporary relations are stored in the shared buffer cache.

* Collation is now supported per column. Many plan nodes now have their own concept of input and output collation IDs. Notably, since this change involves heavy changes to generated plans, ORCA does not yet support non-default collation.

* The SERIALIZABLE isolation level was added. This is not currently supported in GPDB, because we don't have a way to analyze serialization conflicts in a distributed manner; we "fall back" to REPEATABLE READ.

* GROUP BY now allows selection of columns in the grouping query without mentioning those columns in the GROUP BY clause, as long as those columns are from a table that's being grouped by its primary key. This requires some modification to the parallel grouping planner in GPDB.

* Support for INSTEAD OF triggers on views. Notably, this would allow "modification" of views, but GPDB currently has no support for this concept in the planner. This functionality is temporarily disabled for now.

* CTEs (WITH ... AS) can now contain data modification statements (INSERT/UPDATE/DELETE). This is not yet supported in GPDB.

* `make -k` is now supported for installcheck[-world] and several other Makefile targets.

* Foreign tables. Currently, GPDB does not support planning for statements made against foreign tables, so CREATE FOREIGN TABLE has been disabled.

* SELinux support and integration has been added with the SECURITY LABEL command and the associated sepgsql utility. SECURITY LABEL support is currently tested in GPDB via the upstream regression tests, but sepgsql is not.

* GUC API changes: assignment hooks are now split into two phases (check/assign).

Notable GPDB changes:

* Several pieces of the COPY dispatching code have been heavily refactored/rewritten to match upstream more closely. For now, compatibility with 5.x external table catalog entries has been maintained, though this is likely to change.

Co-authored-by: Asim Praveen <apraveen@pivotal.io>
Co-authored-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
Co-authored-by: Dhanashree Kashid <dkashid@pivotal.io>
Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
Co-authored-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Co-authored-by: Jacob Champion <pchampion@pivotal.io>
Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
Co-authored-by: Jinbao Chen <jinchen@pivotal.io>
Co-authored-by: Larry Hamel <lhamel@pivotal.io>
Co-authored-by: Max Yang <myang@pivotal.io>
Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
Co-authored-by: Omer Arap <oarap@pivotal.io>
Co-authored-by: Paul Guo <paulguo@gmail.com>
Co-authored-by: Richard Guo <guofenglinux@gmail.com>
Co-authored-by: Sambitesh Dash <sdash@pivotal.io>
Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io>
Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
Co-authored-by: Venkatesh Raghavan <vraghavan@pivotal.io>
-
- 18 May 2018 (6 commits)
-
-
Committed by Daniel Gustafsson
-
Committed by Todd Sedano

In comparing https://github.com/greenplum-db/pythonsrc-ext to python-dependencies.txt, there are several differences. pythonsrc-ext is the vendored python repo, so python-dependencies.txt should be in sync with it. argparse is in pythonsrc-ext, so add it to python-dependencies.txt. enum34, parse-type, ptyprocess, and six are not in pythonsrc-ext, so move them to python-developer-dependencies.txt.

Authored-by: Todd Sedano <tsedano@pivotal.io>
-
Committed by Todd Sedano

Authored-by: Todd Sedano <tsedano@pivotal.io>
-
Committed by Bhuvnesh Chaudhary
-
Committed by Nadeem Ghani

The TINC tests in the walrep_1 job have been migrated to Behave. This commit removes the walrep_1 job and updates the generated master pipeline.

Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
Committed by David Kimura

The issue is that, while recovery is in progress, a mirror can take longer than the default timeout to finish promotion. Because the timeout is sometimes too short, gprecoverseg would intermittently throw the error "can't start transaction" in "BEGIN". This commit leverages the GUCs gp_gang_creation_retry_count and gp_gang_creation_retry_timer to increase the timeout allotted for retriable gang errors to approximately 30 seconds.

Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
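The retry policy the two GUCs control can be sketched as a bounded retry loop: the total window is roughly retry_count times the per-retry timer. This is an illustrative Python sketch under assumed semantics, not the gang-creation code; the function and error names are hypothetical.

```python
import time

def run_with_gang_retries(attempt, retry_count, retry_timer_s):
    """Retry a gang-creation-style operation until it succeeds or the
    retry budget (~retry_count x retry_timer_s seconds) is exhausted.

    Raising the two knobs widens the window in which a promoting mirror
    can finish before the error becomes fatal.
    """
    for i in range(retry_count):
        try:
            return attempt()
        except RuntimeError:          # stand-in for a retriable gang error
            if i == retry_count - 1:
                raise                 # budget exhausted: surface the error
            time.sleep(retry_timer_s)
```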
-
- 17 May 2018 (8 commits)
-
-
Committed by Nadeem Ghani

The walrep_1 TINC tests have been migrated to the Behave framework under the gpMgmt top-level directory. We remove the Makefile target but keep the TINC tests because the walrep_2 target has some test dependencies.

Co-authored-by: Jimmy Yih <jyih@pivotal.io>
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by Jimmy Yih

We added a job for gpactivatestandby and moved the gpinitstandby job to the multi-node CLI suite.

Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Jimmy Yih

These scenarios come from the walrep_1 TINC Makefile target. They have been designed to run on both singlenode and multinode environments. The tests have not been removed from TINC yet because some walrep_2 TINC tests have dependencies.

Co-authored-by: Jimmy Yih <jyih@pivotal.io>
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by Nadeem Ghani

These scenarios come from the walrep_1 TINC Makefile target. They have been designed to run on both singlenode and multinode environments. The tests have not been removed from TINC yet because some walrep_2 TINC tests have dependencies.

Co-authored-by: Jimmy Yih <jyih@pivotal.io>
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by Nadeem Ghani

These are ported from the master/standby TINC tests, checking the following:

1. gpstart -a -y does nothing when there is no standby master configured
2. gpstop -a -y skips the standby master when there is one configured

Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Kevin Yeap

The gpactivatestandby utility always used the $MASTER_DATA_DIRECTORY environment variable, even when the -d flag was supplied by the user. Fix the issue by actually using the directory path supplied by the user.

Co-authored-by: Kevin Yeap <kyeap@pivotal.io>
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
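The fix described above boils down to a precedence rule: the explicit -d argument must win over the environment variable. A minimal Python sketch, with illustrative names rather than gpactivatestandby's actual option handling:

```python
import os

def resolve_data_directory(d_option, environ=os.environ):
    """Prefer the user-supplied -d path; fall back to the environment."""
    if d_option:                 # the bug: this branch was effectively ignored
        return d_option
    return environ.get("MASTER_DATA_DIRECTORY")

assert resolve_data_directory("/data/standby") == "/data/standby"
assert resolve_data_directory(
    None, environ={"MASTER_DATA_DIRECTORY": "/data/master"}) == "/data/master"
```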
-
Committed by Tingfang Bao

Quick indent fix; the tables string was appended wrongly.

Signed-off-by: Ming LI <mli@apache.org>
-
Committed by Tingfang Bao

We find all the sequences in the table and verify whether each sequence should be created. Later, we get the complete dump SQL, which includes both the sequence DDL and the table DDL, so the sequence can be created before the table. This implementation avoids locking the table while a transaction drops a table with a sequence and `psql -f ...` is executed concurrently to create the sequence.

Signed-off-by: Ming LI <mli@apache.org>
-