- 08 Sep, 2018 (2 commits)

Committed by Xin Zhang
Add behave test for FQDN_HBA flag support. Co-authored-by: Xin Zhang <xzhang@pivotal.io> Co-authored-by: Goutam Tadi <gtadi@pivotal.io>

Committed by Goutam Tadi
Behave tests for gpinitsystem with FQDN. Co-authored-by: Goutam Tadi <gtadi@pivotal.io> Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
- 28 Aug, 2018 (2 commits)

Committed by Chris Hajas
Authored-by: Chris Hajas <chajas@pivotal.io>

Committed by Chris Hajas
The gpstop, gpstart, and gpdeletesystem behave tests haven't been touched in years and aren't being run. Authored-by: Chris Hajas <chajas@pivotal.io>
- 27 Aug, 2018 (1 commit)

Committed by Wenlin Zhang
Fix a gpperfmon bug where an ATExecSetDistributedBy statement produced a wrong tsubmit. Bypass the gpmon start packet for ExecutorStart calls coming from ATExecSetDistributedBy, so such queries are no longer recorded in gpperfmon. Co-authored-by: Wenlin Zhang <wzhang@pivotal.io> Co-authored-by: Ru Yang <ruyang@pivotal.io> Co-authored-by: Renyuan Wang <rewang@pivotal.io>
- 17 Aug, 2018 (1 commit)

Committed by Nadeem Ghani
- 15 Aug, 2018 (1 commit)

Committed by David Kimura
The purpose of this refactor is to align the GUC more closely with Postgres. It started as a suggestion in https://github.com/greenplum-db/gpdb/pull/4790. There are still differences, particularly around when this GUC can be set: in GPDB it can be set by anyone at any time (PGC_USERSET), whereas in Postgres it can only be changed at postmaster restart (PGC_POSTMASTER). This difference was kept on purpose until we have more buy-in, as it is a bigger change for the end user. Co-authored-by: Joao Pereira <jdealmeidapereira@pivotal.io>
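The PGC_USERSET vs. PGC_POSTMASTER difference described above can be sketched as follows. The commit message does not name the GUC, so the name below is purely hypothetical:

```sql
-- Hypothetical GUC name, for illustration only.
-- With PGC_USERSET (the GPDB behavior described above), any session may change it:
SET gp_example_guc = on;
SHOW gp_example_guc;
-- With PGC_POSTMASTER (the Postgres behavior), SET would be rejected; the value
-- could only be set in postgresql.conf and takes effect after a server restart.
```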
- 26 Jul, 2018 (1 commit)

Committed by Chris Hajas
This was added when we added the openbar gpcrondump tests in c9770456. Since those tests no longer exist, we can remove this too. Authored-by: Chris Hajas <chajas@pivotal.io>
- 12 Jul, 2018 (4 commits)

Committed by Chris Hajas
fuzzystrmatch is now an extension and can be loaded through CREATE EXTENSION. This suite was probably more relevant when GPDB didn't support extensions. Authored-by: Chris Hajas <chajas@pivotal.io>

Committed by Chris Hajas
misc.feature contained "miscellaneous tests which do not belong to mgmt utilities" that are not being run or maintained. gplog.feature was an attempt to test the logging feature in gppylib, but likewise hasn't been run or maintained. Authored-by: Chris Hajas <chajas@pivotal.io>

Committed by Chris Hajas
The maintained tests for gpfdist are located in src/bin/gpfdist/regress. Authored-by: Chris Hajas <chajas@pivotal.io>

Committed by Chris Hajas
The maintained tests for gpload are located in gpMgmt/bin/gpload_test. Authored-by: Chris Hajas <chajas@pivotal.io>
- 10 Jul, 2018 (1 commit)

Committed by Daniel Gustafsson
Rather than hardcoding /bin/bash, move to a lookup via "/usr/bin/env bash" to allow greater portability of the code. This also changes the Bash test to check whether the current shell actually is Bash, rather than checking whether a bash binary is available on the file system (which, given the change above, we no longer need).
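The pattern described above can be sketched as follows. This is a minimal illustration, not the actual GPDB script:

```shell
#!/usr/bin/env bash
# Portable shebang: resolves bash via PATH instead of hardcoding /bin/bash.

# Check that the *current* shell is actually Bash, rather than merely
# checking that a bash binary exists somewhere on the file system.
if [ -z "${BASH_VERSION:-}" ]; then
    echo "This script requires Bash" >&2
    exit 1
fi
echo "running under bash ${BASH_VERSION}"
```

BASH_VERSION is only set when the interpreting shell is Bash, which makes it a more reliable guard than `which bash`.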
- 21 Jun, 2018 (2 commits)

Committed by Jamie McAtamney
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io> Co-authored-by: Nadeem Ghani <nghani@pivotal.io>

Committed by Nadeem Ghani
- Add mirrors with and without a standby, and ensure that the host assignment is identical between the two.
- Add mirrors, then kill one, and ensure that gprecoverseg operates correctly on the newly added mirror.
Co-authored-by: Nadeem Ghani <nghani@pivotal.io> Co-authored-by: Jacob Champion <pchampion@pivotal.io>
- 14 Jun, 2018 (1 commit)

Committed by Nadeem Ghani
- Add mirrors with and without a standby, and ensure that the host assignment is identical between the two.
- Add mirrors, then kill one, and ensure that gprecoverseg operates correctly on the newly added mirror.
Co-authored-by: Nadeem Ghani <nghani@pivotal.io> Co-authored-by: Jacob Champion <pchampion@pivotal.io>
- 12 Jun, 2018 (1 commit)

Committed by Jamie McAtamney
We added a test case to verify that the mirror configuration generated by gpaddmirrors with the `-s` option is indeed spread over different hosts for each of the primaries. Co-authored-by: Jim Doty <jdoty@pivotal.io> Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io> Co-authored-by: Nadeem Ghani <nghani@pivotal.io> Co-authored-by: Kevin Yeap <kyeap@pivotal.io> Co-authored-by: Shoaib Lari <slari@pivotal.io>
- 23 May, 2018 (2 commits)

Committed by Jim Doty
Authored-by: Jim Doty <jdoty@pivotal.io>

Committed by Jim Doty
Authored-by: Jim Doty <jdoty@pivotal.io>
- 22 May, 2018 (1 commit)

Committed by Jamie McAtamney
This test explicitly creates some tables and verifies that they can be read after gpaddmirrors has been run on the cluster. The existing TINC test that does this will be deleted after all the new gpaddmirrors tests have been added. We've also added some code to allow cluster creation on more than one segment host. Co-authored-by: Nadeem Ghani <nghani@pivotal.io> Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
- 17 May, 2018 (2 commits)

Committed by Jimmy Yih
These scenarios come from the walrep_1 TINC Makefile target. They have been designed to run in both singlenode and multinode environments. The tests have not been removed from TINC yet because some walrep_2 TINC tests have dependencies on them. Co-authored-by: Jimmy Yih <jyih@pivotal.io> Co-authored-by: Nadeem Ghani <nghani@pivotal.io>

Committed by Nadeem Ghani
These scenarios come from the walrep_1 TINC Makefile target. They have been designed to run in both singlenode and multinode environments. The tests have not been removed from TINC yet because some walrep_2 TINC tests have dependencies on them. Co-authored-by: Jimmy Yih <jyih@pivotal.io> Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
- 15 May, 2018 (2 commits)

Committed by Tingfang Bao
Because pg_dump handles implicit sequences (serial column types) and explicit sequences differently in different GPDB versions, we need to detect the sequences that are not included in the table dump SQL and create them first. Also, for every dependent sequence, call setval() with the source value after the data is transferred. Signed-off-by: Ming Li <mli@pivotal.io>
Committed by Nadeem Ghani
Prior to this commit, gpaddmirrors was missing two bits of work previously done by gpinitsystem/gpcreateseg. When adding mirrors to a cluster, the pg_hba.conf on the primary segments has to be modified to allow replication connections, e.g. for pg_basebackup. And after the mirrors are built, the catalog has to be updated to reflect the new cluster config. Co-authored-by: Nadeem Ghani <nghani@pivotal.io> Co-authored-by: Jim Doty <jdoty@pivotal.io>
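The pg_hba.conf change described above might look like the following (illustrative entries only; the actual user, addresses, and auth method depend on the cluster's configuration):

```
# Allow the new mirror to make replication connections to this primary,
# e.g. for pg_basebackup. Addresses below are example values.
host  replication  gpadmin  samehost       trust
host  replication  gpadmin  192.0.2.10/32  trust
```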
- 08 May, 2018 (3 commits)

Committed by Tingfang Bao
Because the behavior of pg_dump differs between GPDB 4 and GPDB 5, the check_sequence function can only work on GPDB 5.

Committed by Tingfang Bao
Signed-off-by: Adam Lee <ali@pivotal.io>

Committed by Tingfang Bao
When gptransfer is run with the --full option, it should ensure that the specified database does not exist in the destination cluster. This isn't guaranteed, so we decided to remove this case. Signed-off-by: Adam Lee <ali@pivotal.io>
- 07 May, 2018 (1 commit)

Committed by Tingfang Bao
* Create the sequences that belong to a table.
* Correct each sequence's next value.
We check the dependent sequences before transferring the table data. If the table depends on sequences, gptransfer will pg_dump all the metadata of those sequences along with their data and import it into the destination cluster. Signed-off-by: Ming LI <mli@apache.org>
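A minimal sketch of the sequence handling described above. The table and sequence names and the restored value are hypothetical; gptransfer derives the real ones from the source cluster:

```sql
-- 1. Recreate the dependent sequence in the destination cluster
--    (gptransfer obtains the definition via pg_dump).
CREATE SEQUENCE orders_id_seq;

-- 2. After the table data has been transferred, advance the sequence
--    to the value it had in the source cluster.
SELECT setval('orders_id_seq', 1042);
```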
- 04 May, 2018 (1 commit)

Committed by Shoaib Lari
In addition, we clarified the steps that call gpexpand a second time. It is now clearer that the intention of that step is to cause the database to redistribute the tables. Co-authored-by: Shoaib Lari <slari@pivotal.io> Co-authored-by: Jim Doty <jdoty@pivotal.io>
- 02 May, 2018 (1 commit)

Committed by Marbin Tan
In addition to migrating the TINC tests, notable changes were made:
+ Made the gpexpand tests for a cluster with mirrors work
+ Migrated the tests for duration and end time as Behave tests
+ Added the gpexpand behave tests to the pipeline template
+ Cleaned up some dead code in environment.py
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io> Co-authored-by: Jim Doty <jdoty@pivotal.io> Co-authored-by: Kevin Yeap <kyeap@pivotal.io> Co-authored-by: Nadeem Ghani <nghani@pivotal.io> Co-authored-by: Shoaib Lari <slari@pivotal.io>
- 03 Apr, 2018 (1 commit)

Committed by Adam Lee
The pg_exttable.fmterrtbl column stored the OID of the error table; without an error table, it was just set to the OID of the external table itself. That is not necessary: other columns indicate whether error logging is enabled, so this column can be removed.
- 29 Mar, 2018 (1 commit)

Committed by Pengzhou Tang
* Support replicated tables in GPDB. Currently, tables are distributed across all segments by hash or randomly. There is a requirement for a new table type, the replicated table, in which every segment holds a full copy of the table data. To implement it, we added a new distribution policy named POLICYTYPE_REPLICATED to mark a replicated table, and a new locus type named CdbLocusType_SegmentGeneral to describe the distribution of a replicated table's tuples. CdbLocusType_SegmentGeneral implies that the data is generally available on all segments but not on the QD, so a plan node with this locus type can be flexibly planned to execute on either a single QE or all QEs. It is similar to CdbLocusType_General; the only difference is that a CdbLocusType_SegmentGeneral node can't be executed on the QD. To guarantee this, we try our best to add a gather motion on top of a CdbLocusType_SegmentGeneral node when planning motions for a join, even when the other rel has a bottleneck locus type. Such a motion may be redundant if the single QE is ultimately not promoted to execute on the QD, so we detect that case and omit the redundant motion at the end of apply_motion(). We don't reuse CdbLocusType_Replicated, since it always implies a broadcast motion below it, and it is not easy to plan such a node as a direct dispatch to avoid fetching duplicate data. We don't support replicated tables with inherit/partition-by clauses yet; the main problem is that update/delete on multiple result relations doesn't work correctly, which we can fix later.
* Allow spi_* to access replicated tables on QEs. Previously, GPDB didn't allow a QE to access non-catalog tables because their data is incomplete; we can lift this limitation when only replicated tables are accessed. One problem is that the QE needs to know whether a table is replicated; previously, QEs didn't maintain the gp_distribution_policy catalog, so we now pass the policy info to the QE for replicated tables.
* Change the schema of gp_distribution_policy to identify replicated tables. Previously, we used a magic number, -128, in gp_distribution_policy to identify a replicated table, which was quite a hack, so we add a new column to gp_distribution_policy to identify replicated and partitioned tables. This commit also abandons the old scheme of using a 1-length NULL list and a 2-length NULL list to identify the DISTRIBUTED RANDOMLY and DISTRIBUTED FULLY clauses. Besides that, it refactors the code to make the decision-making around distribution policy clearer.
* Support COPY for replicated tables.
* Disable the row-ctid unique path for replicated tables. Previously, GPDB used a special Unique path on row id to address queries like "x IN (subquery)". For example, for "select * from t1 where t1.c2 in (select c2 from t3)" the plan looks like a HashAggregate (Group By: t1.ctid, t1.gp_segment_id) over a Hash Join (Hash Cond: t2.c2 = t1.c2) over Seq Scans. Obviously, that plan is wrong if t1 is a replicated table, because ctid + gp_segment_id can't identify a tuple: in a replicated table, one logical row may have different ctids and gp_segment_ids. So we disable such plans for replicated tables temporarily. This is not ideal, because the row-id unique path may be cheaper than a normal hash semi-join, so we left a FIXME for later optimization.
* ORCA-related fix (reported and added by Bhuvnesh Chaudhary <bchaudhary@pivotal.io>): fall back to the legacy query optimizer for queries over replicated tables.
* Adapt pg_dump/gpcheckcat to replicated tables: gp_distribution_policy is no longer a master-only catalog, so apply the same checks as for other catalogs.
* Support gpexpand on replicated tables and altering the distribution policy of a replicated table.
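A replicated table as introduced by this commit can be declared with the DISTRIBUTED REPLICATED clause, so that every segment holds a full copy of the data. The table name and columns below are illustrative:

```sql
-- Every segment stores the complete table; useful for small reference
-- tables that are frequently joined against distributed tables.
CREATE TABLE ref_currency (
    code char(3),
    rate numeric
) DISTRIBUTED REPLICATED;
```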
- 15 Mar, 2018 (1 commit)

Committed by Kevin Yeap
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io> Co-authored-by: Kevin Yeap <kyeap@pivotal.io> Co-authored-by: Shoaib Lari <slari@pivotal.io> (cherry picked from commit ed48879627686b36336529d0b445073687ec68cd)
- 16 Feb, 2018 (1 commit)

Committed by Jimmy Yih
After filerep, filespaces, and persistent tables were removed, we disabled the gprecoverseg Behave tests, which needed gprecoverseg to be refactored before they could run. gprecoverseg is now in a usable state, so it's time to fix these Behave tests and have them running again. Some refactoring, workarounds, and test removal were needed, detailed here.

Refactoring:
1. The tests used gpfaultinjector to cause a mirror-down or primary-down situation. This is not really needed; simply killing the segments is good enough. Also, injecting a fault on a mirror no longer does anything, because the fault would not trigger while the mirror is in recovery mode.
2. The checksum test did not look correct, so we refactored it to what we think it was supposed to look like.

Workaround: Running incremental recovery after killing a primary to trigger failover will not work without pg_rewind. These calls have been changed to use full recovery, similar to how gprecoverseg rebalancing works right now. Once pg_rewind is introduced, they should be changed back to incremental recovery.

Test removal:
1. One scenario checked for failure when there were corrupted changetracking logs (if corrupted, a full recovery must be run). We delete this test since changetracking logs no longer exist. However, the scenario is very similar to our src/test/walrep missing_xlogs test; that might be good enough as a low-level replacement, but we may want to add a full end-to-end scenario back to these Behave tests.
2. One scenario checked that gprecoverseg would not recover segments with persistent-rebuild inconsistencies. Persistent tables no longer exist, so the scenario is okay to remove.

Co-authored-by: Jimmy Yih <jyih@pivotal.io> Co-authored-by: David Kimura <dkimura@pivotal.io>
- 12 Feb, 2018 (2 commits)

- 30 Jan, 2018 (1 commit)

Committed by Shoaib Lari
- Change gpdeletesystem to delete tablespaces before the data directory
- Refactor SegmentStart.noWait to pg_ctrl_wait
- Create PgBaseBackup class
- Revert the change to the default Mirror status
- Assorted typos and bugfixes
Author: Shoaib Lari <slari@pivotal.io> Author: Jim Doty <jdoty@pivotal.io> Author: Nadeem Ghani <nghani@pivotal.io>
- 19 Jan, 2018 (1 commit)

Committed by Chris Hajas
I used the command `behave test/behave/mgmt_utils --format=steps.usage --dry-run` to identify unused steps. Some of these were false positives or extra given/when/then steps that I did not remove. Author: Chris Hajas <chajas@pivotal.io>
- 18 Jan, 2018 (1 commit)

Committed by Daniel Gustafsson
Commit 5cc0ec50 changed gpperfmon_log_alert_level into a proper enum GUC, but failed to update the gpperfmon behave test.
- 17 Jan, 2018 (1 commit)

Committed by Heikki Linnakangas